B v. H - The H-index debate
The misapplication of the H-index and its hindrance to academic nutritionists
In common with many mid-career scientists, my progression is now inextricably linked with my H-index, an increasingly used metric that reduces both the quality and quantity of a scientist's entire career output to a single integer. Panels or individuals responsible for the appointment of new staff, the promotion of existing staff or transfer to tenured posts will now require one to submit an H-score for consideration. Available information suggests that rejection can be on the basis of H-score alone, so it must be a robust measure, right?
I have spent approximately 20 years since graduation pursuing science as a career, and have noted over time the fluid switching of goalposts from ‘publish more’ to ‘publish higher impact’ to ‘get more grants’ and back again, the stick ever-changing and the carrot ever-distant. The H-index supposedly offers a leveller, a barometer of both quantity and quality, as it reflects peers’ citations of one’s entire body of work. It is a relatively simple metric: your H-index is the largest number h such that h of your publications have each been cited at least h times. To get an H-index of 40 one needs forty papers each with at least forty citations (although in practice this will mean many more than forty papers). The index has been subject to a number of criticisms. For example, if one were consistently to target only the highest-impact journals and, say, get five papers in Cell, each with >200 citations, one’s H-index would remain at five, despite a body of work of earth-shattering consequence. Likewise, one could flagrantly write flimsy reviews on a monthly basis for indexed journals, each self-citing the previous reviews, and get that H well into double figures within a year, despite a body of work of mind-numbing mediocrity. In a recent editorial in ACS Chemical Biology, tongue firmly in cheek, James Williamson suggested a series of strategies to bump up one’s H-index. First, change fields: ensure that you are working in a busy field, where lots of citations are made in the first place. Second, write more reviews: it is clear that review journals have higher impact than original research-led journals, and as I sit on a couple of journal editorial boards it is evident that Editors-in-Chief regard reviews as a cash cow (or citation cow) for chasing higher impact factors (IF). Williamson also suggests ‘discreet self-citation’, promoting papers just below one’s own impact threshold, and adding as much data as possible into papers as ‘impact caching’.
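For readers who prefer the definition made concrete, the calculation is trivial to state in code. A minimal sketch in Python, using hypothetical citation counts (the five-Cell-papers scenario above), illustrates why a small but stellar body of work caps out at a low H:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break  # citations fall below rank; h cannot grow further
    return h

# Hypothetical scenario from the text: five Cell papers, each cited >200 times,
# still yield an H-index of only 5.
print(h_index([250, 230, 210, 205, 201]))  # prints 5
```

The citation counts are invented for illustration; the point is that the metric saturates at the number of papers, however heavily each is cited.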
This makes for very amusing reading and is certainly ironic, but whilst the H-index remains the tool of choice for evaluating a body of work, it is a sad reflection that these and other strategies will likely become practised, if they are not already. To these strategies, I am minded to add one of my own: stop doing research altogether. The process of writing grants, writing papers and administering students does nothing for your H-index; you would be better off finding very prolific scientists to hang out with, giving a little support and getting your name on their papers. H does not care whether you are middle, first or last author, so why should you? This strategy will very rapidly get you a high H with none of the emotional torment associated with actually practising science.
To my mind there are two key issues here: the reduction of a career’s body of work to a single integer, and the lack of indicative thresholding. The latter issue is of key importance for researchers in some areas of nutrition and similarly small disciplines. An H-index reflects the number of times a body of work is cited, but does not account for the size of the field (i.e. the total number of citations made within it). A proxy measure of field size comes from the IF of the leading research journals: areas like medicine, gastroenterology, cell biology and oncology all have lead journals with IF well into double figures, whereas nutrition, likely owing to the size of the field, lags with a best achievable IF of approximately 6. Are nutritionists therefore being judged on a level playing field when this metric is applied? When hearing about the introduction of the H-index as a promotional gauge in my own department, I asked a minuted question about how the metric would be applied: whether it would be differential, how the evaluation of ‘young’ papers would be accounted for, if flat thresholding was to be applied across the disciplines what the thresholds would be, and if variable thresholds were to be set, how they in turn would be subject to indicative thresholds. I am still waiting for an answer on the indicative thresholds, but have nonetheless been told my H-index does not reach them.
I am very sceptical of the quantification of quality. Is it really true that an entire body of work can be evaluated through a single metric? I doubt this, and I am not alone in doubting it. There is too much variety in the scheme of science (consider the hypothetical examples above). It is my belief that once any strategy for the quantification of quality is adopted, the metric will immediately cease to measure that quality and will instead measure only the ability of the measuree to play the game. This applies equally to the H-index, the RAE/REF, environmental impact assessments and NICE guidelines; however, only the first of these is electively used by scientists to judge each other. Such judgement is lazy and ill-informed, and it provokes the suspicion that the H-index is used not so much as a leveller but as a way to create obstacles to career progression. By all means make a judgement, but make it on sound and justified grounds. After all, we are humans, not numbers….
Finally, the Editor of the Gazette in which this article first appeared had originally suggested that this piece be one half of a debate, with a ‘for’ and ‘against’ piece and some right to reply. Unfortunately, no-one approached was willing to argue for the H-index. Whilst not a systematic piece of research, it is still a damning reflection on the questionability of the metric that no-one believes in it enough to support it publicly. If you do believe the index is a leveller, that it has benefits I have missed or that this article is unfair, please use the comments section below to reply, as this debate needs to be conducted, not imposed.