Research evaluation: Moving beyond an emphasis on metrics

Picture credits: DORA

Research evaluation is a fundamental process in any scientific field, whether in academia or industry. However, the growing reliance on metrics such as the journal impact factor has led to a situation where the quality and value of research are judged by the prestige of the journals in which it appears. This is problematic, as it can produce inaccurate and unbalanced assessments of individual research articles.

For this reason, it is important to move beyond the emphasis on metrics and focus on the quality and value of the scientific content itself. That is why we at Training Data Lab have signed the “San Francisco Declaration on Research Assessment” (DORA). This commits us to evaluating research by considering not only publications but also other outputs, such as data sets and software, that can significantly advance our understanding of the world.

The importance of assessing scientific content

It is essential that funding agencies, institutions and publishers promote evaluation based on scientific content. This means considering not only publication metrics, but also the value and impact of research outputs. For example, an article may appear in a journal with a high impact factor, but if its content is weak or flawed, it contributes little to the scientific community.

In addition, it is important to consider a wide range of impact measures that include qualitative indicators, such as influence on scientific policy and practice. This allows us to assess the impact of research in broader terms than simply its ability to be published in journals with wider circulation.

The importance of transparency

Transparency is another fundamental aspect of research evaluation. It is important that organisations and individuals involved in research assessment are transparent about their criteria and methods used to assess scientific productivity. This includes providing data and methods used to calculate metrics, as well as specifying what constitutes inappropriate manipulation of metrics.

Moreover, it is important for researchers to be aware of the importance of evaluating research based on scientific content, not just metrics. This means that they should be aware of the variation in article types (e.g. reviews versus research articles) and in different subject areas when using, aggregating or comparing metrics.

The importance of encouraging change

Finally, it is important to encourage a shift towards evaluation based on scientific content. This implies greatly reducing the emphasis on the impact factor as a promotional tool, ideally by ceasing to promote its use or by presenting it in the context of a variety of journal-based metrics.

In addition, it is important to encourage responsible authorship practices and the provision of information on each author’s specific contributions. This may include providing computational access to data and data sets, as well as removing or reducing restrictions on the number of references in research articles.

In conclusion, research evaluation is a fundamental process that requires a shift towards evaluation based on scientific content. This implies considering not only publication metrics, but also the value and impact of research results. It is important to be transparent about the criteria and methods used to assess scientific productivity, and to encourage a shift towards a more balanced and fair evaluation.

Recommendations

  • Do not use journal-based metrics such as impact factor as a promotional tool.
  • Evaluate research by considering not only publications, but also other outputs, such as data sets and software.
  • Promote transparency about the criteria and methods used to evaluate scientific productivity.
  • Encourage a shift towards evaluation based on scientific content.
  • Provide computational access to data and data sets.
  • Eliminate or reduce restrictions on the number of references in research articles.

* AI-generated text

ChatBot Ollama
Large Language Model

ChatBot Ollama, deployed locally by Training Data Lab, based on different versions of LLaMA 3, LLaMA 2 and Mistral.
