
Publication Metrics

What are publication metrics?

Publication metrics, frequently referred to as 'bibliometrics', provide a quantitative method for analysing published research. They can help you understand the reach and potential impact of your research, as well as inform your decisions about where to publish.

This guide introduces a number of tools that provide publication metrics for both individual researchers and institution-level analysis. For individual researchers, it explains how to track your citations and which sources are available to you at Sussex for measuring how often your work has been cited. For institutional analysis, it provides information about School-level metrics.

Responsible metrics

Publication metrics can be used to 'measure' the impact of a researcher, their outputs, or the outputs of an organisational unit (research group, department, Institute or University). Some institutions have used them in recruitment, probation, promotion or other processes. They also form part of the calculations used in university rankings. However, as this guide will show, metrics cannot provide a complete picture of the impact, or potential impact, of a researcher or their outputs. Quantitative methods alone cannot do justice to the richness of research culture. There is growing consensus that researchers and institutions should use metrics responsibly.  

The San Francisco Declaration on Research Assessment (DORA)

The San Francisco Declaration on Research Assessment sets out principles for assessing and evaluating research quality. The declaration was initially formulated in 2012 at the Annual Meeting of the American Society for Cell Biology in San Francisco. It has become a worldwide initiative covering all scholarly disciplines and all key stakeholders, including funders, publishers, professional societies, institutions, and researchers. The University of Sussex is a signatory of DORA.

Key principles:

  • Research should be assessed on its own merits rather than on the basis of the journal in which it is published.
  • The use of journal-based metrics, such as the Journal Impact Factor, should be eliminated from any funding, recruitment or promotion considerations.
  • For the purposes of assessment, the value of all research outputs (e.g. research data) and other contributions (e.g. influencing policy, training early career researchers) should be considered, in addition to publications.
  • In any recruitment or promotion decision-making, a broad range of impact measures should be used, including qualitative indicators of significance and impact, and a range of article-level metrics.
  • The Declaration recommends responsible authorship practices, encouraging clarity around author contributions.

Find out more about the University of Sussex’s commitment to the Declaration on Research Assessment (DORA) and its principles.

The Leiden Manifesto

The Leiden Manifesto for research metrics is a list of ten principles to guide research evaluation. It was developed collaboratively by researchers at the 19th International Conference on Science and Technology Indicators, held in Leiden in 2014, and later published as a comment in the journal Nature. It is a proposed guide to combat the misuse of publication metrics when evaluating research.

Video: The Leiden Manifesto for Research Metrics, by Diana Hicks (Vimeo).

The Metric Tide

The Independent Review of the Role of Metrics in Research Assessment and Management was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research. Its report, The Metric Tide, was published in July 2015.

The review identified 20 recommendations for further work and action by stakeholders across the UK research system. These recommendations are underpinned by the notion of ‘responsible metrics’ as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research. Responsible metrics can be understood in terms of the following dimensions: 

  • Robustness: basing metrics on the best possible data in terms of accuracy and scope
  • Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment
  • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results
  • Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
  • Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.