Having spent a considerable amount of time studying how standard bibliometric measures like the h-index work, there is no doubt in my mind that they don’t. Such indicators rest on the assumption that quantitative measures can be used to assess universal quality. They can’t. Consider this example: IKEA has a machine that tests the quality of drawers. The machine slams the drawer again and again until it breaks. The more slams it can take, the higher the quality, IKEA claims. But it is not that simple. The quality of a drawer cannot be assessed simply by how many slams it can take. Its looks, how well its size fits its contents, its ease of handling, and so on are also indicators of its quality. What may be a perfect drawer for me and my stockings may be a lousy drawer for you and your cutlery.
How can anybody with this simple example in mind think that, given the magnitude and complexity of research, the ground-breaking thoughts produced by researchers can be reduced to a simple number that does justice to their actual quality? One size certainly does not fit all when assessing the quality of drawers or research.
Nevertheless, we are constantly faced with a demand for such simple quantitative assessments of the quality of researchers and their work. In my presentation I will address and discuss the contradiction embedded in performing quantitative assessments of quality, and exemplify how a much fairer assessment can be achieved through a negotiated research strategy.