A Normalized Index to Quantify Publication Productivity across Disciplinary Boundaries
Abstract
Paper counts and citation counts have been widely used as indices of research capability in many settings, such as the comparison of research institutions and the performance evaluation of individual researchers. It is well known, however, that these naïve indices vary greatly across disciplines, which makes them difficult to apply across disciplinary boundaries. Although research assessment tools provide various normalized citation indices, no widely accepted normalization has been established for the number of papers, despite its significance in research performance metrics. In the present paper, we propose a normalized index for the number of papers that enables a fair comparison of research capability. The key idea is to introduce the concept of work efficiency into the quantification of publication productivity. We further investigate the effects of the normalization on the publication data of our institute. The normalization changed each discipline's share of the total number of papers in inverse relation to its publication intensity. Two major bibliometric databases yielded similar results for publication productivity in well-populated disciplines with similar field coverage. A bootstrap analysis revealed that a sample size of 200–300 is required to obtain a statistically reliable estimate of publication productivity.
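The abstract summarizes two quantitative steps: a field normalization of paper counts and a bootstrap estimate of how large a sample is needed before productivity figures stabilize. The Python sketch below illustrates both under stated assumptions; the paper's actual work-efficiency normalization is not reproduced in the abstract, so the field baselines and the helpers normalized_count and bootstrap_ci are hypothetical stand-ins used only to make the reasoning concrete.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative field baselines: assumed average papers per researcher per
# year. These values are invented for this sketch; the paper derives its
# normalization from a work-efficiency concept not detailed in the abstract.
FIELD_BASELINE = {"physics": 4.0, "mathematics": 1.5, "clinical_medicine": 6.0}

def normalized_count(papers, field):
    # Divide the raw count by the field baseline, so that 1.0 means
    # "average output for the field" and disciplines become comparable.
    return papers / FIELD_BASELINE[field]

def bootstrap_ci(values, n_boot=2000, alpha=0.05):
    # Percentile bootstrap confidence interval for the mean.
    values = np.asarray(values, dtype=float)
    means = np.array([
        rng.choice(values, size=len(values), replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return values.mean(), (lo, hi)

# Simulated raw annual counts for one field, at several sample sizes,
# to see how the interval narrows as the sample grows.
for n in (50, 100, 200, 300):
    raw = rng.poisson(lam=4.0, size=n)
    norm = [normalized_count(p, "physics") for p in raw]
    mean, (lo, hi) = bootstrap_ci(norm)
    print(f"n={n:3d}  mean={mean:.2f}  95% CI=({lo:.2f}, {hi:.2f})  width={hi - lo:.2f}")

Running such a sketch shows the confidence interval shrinking roughly with the square root of the sample size, which is the kind of behavior behind the reported requirement of roughly 200–300 samples for a stable productivity estimate.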