
Valuing a broad range of research contributions through Team Infrastructure Roles: Why CRediT is not enough

Published on Dec 12, 2023


Scientific research is increasingly reliant on larger teams with diverse skills and expertise. As a result, Team Infrastructure Roles (TIRs; Bennett et al., 2023) have gained prominence in research. These roles include Lab Technicians, Project Managers, Data Stewards, Research Community Managers, and Research Software Engineers. These specialised roles are key to the success of large research projects, and their expertise can enhance the transparency of the research process. Yet the current focus of research assessment on publications inadvertently sidelines TIR contributions that do not translate directly into authorship. As long as assessment remains centred on publications, these contributions stay invisible, even when contributor roles are described using the CRediT taxonomy. CRediT is a step forward in that it allows more transparent recording of contributions, but only for a limited range of research outputs. Contributorship on tangible research outputs, while more transparent than authorship, is therefore still insufficient to evaluate all research contributions. The research assessment system needs to be reimagined so that the entire spectrum of research activities and contributions to the research process is valued.


Scientific questions and challenges are increasingly complex, requiring research teams instead of individuals to address them (Baumgartner et al., 2023). This has led to the rise of new specialised roles structurally involved in research processes: Team Infrastructure Roles (TIRs; Bennett et al., 2023). These roles include Laboratory Technicians, Project Managers, Research Community Managers, Librarians, Data Stewards/Managers and Research Software Engineers (Heffner, 1979; UKRI, 2023). TIRs have previously been described as “professional staff” occupying a “third space” in academia (Whitchurch, 2008). These roles contribute specialised expertise to the research process, with the potential to significantly improve its efficiency and transparency. Their unique skills enable them to more effectively oversee, disseminate, and maintain various research outputs, as well as support complex collaborations and partnerships.

This ongoing diversification of roles within research has prompted important questions about how to accurately document each individual's contribution to the research process. Currently, TIRs are not well integrated into traditional systems of academic recognition, which focus on authorship of research articles. One proposed solution is to adopt a more transparent approach to documenting contributions to research articles by specifying contributor roles. Here, it is argued that this shift in focus from authorship to contributorship on articles is not sufficient to address the unequal recognition of contributions to research.

CRediT taxonomy

The proposal to replace authorship with contributorship is not new (Rennie et al., 1997). Contributorship would allow any individual who contributed to an article to be listed as a co-author, or co-contributor, regardless of whether their work included writing. This provides a more equitable distribution of credit, facilitating recognition of the entire research team. The Contributor Roles Taxonomy (CRediT; Allen et al., 2019; Brand et al., 2015; McNutt et al., 2018) describes contributions across 14 categories (conceptualisation, methodology, software, validation, formal analysis, investigation, resources, data curation, writing—original draft, writing—review and editing, visualisation, supervision, project administration, and funding acquisition). These contributions are explicitly listed and attributed to each individual.
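To make concrete what a closed taxonomy means in practice, the sketch below records contributions against the 14 CRediT categories as machine-readable data. This is a hypothetical illustration, not a prescribed standard or any journal's actual implementation: the function name, record structure, and lowercase role labels are assumptions made for the example. Its point is that a closed vocabulary can only express the roles it enumerates; a TIR contribution such as community management simply cannot be recorded.

```python
import json

# The 14 CRediT contributor roles (illustrative lowercase labels).
CREDIT_ROLES = {
    "conceptualization", "methodology", "software", "validation",
    "formal analysis", "investigation", "resources", "data curation",
    "writing - original draft", "writing - review & editing",
    "visualization", "supervision", "project administration",
    "funding acquisition",
}

def contribution_record(contributor, roles):
    """Build a machine-readable contribution record, rejecting any role
    outside the taxonomy (CRediT is a closed vocabulary)."""
    unknown = set(roles) - CREDIT_ROLES
    if unknown:
        raise ValueError(f"not in the CRediT taxonomy: {sorted(unknown)}")
    return {"contributor": contributor, "roles": sorted(roles)}

# Roles the taxonomy covers can be recorded and serialised:
record = contribution_record("A. Contributor", ["software", "data curation"])
print(json.dumps(record))

# A TIR contribution the taxonomy cannot express, e.g. community management:
try:
    contribution_record("B. Contributor", ["community management"])
except ValueError as err:
    print(err)
```

The validation step is the crux: extending recognition to TIRs within such a scheme requires changing the vocabulary itself, not merely recording contributions more diligently.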

CRediT has been increasingly implemented by publishing venues (Allen et al., 2019). Nevertheless, critiques of its use and implementation remain. Some studies argue that CRediT may not be generally applicable, for example to clinical trials (Zhang et al., 2019), software development (Alliez et al., 2019), and library science (Fitzgerald et al., 2020). Not all journals have implemented CRediT in a machine-readable manner, and some have failed to make these data publicly available (Shotton, 2017). Contributions may also be declared inaccurately (Sauermann & Haeussler, 2017).

In addition, important questions have been raised about whether CRediT reinforces incentives for competitive behaviour that does not collectively benefit researchers, as it still facilitates output-based evaluation of individuals (Gadd, 2020; Larivière et al., 2021) and may result in some roles being valued more than others (Gadd, 2020; Hosseini et al., 2023).

Limitations of CRediT for Team Infrastructure Roles

In addition to these concerns, CRediT is limited in that it focuses only on research articles. Contributions to research that do not directly result in articles therefore do not become more visible or transparent. For example, Matarese & Shashok (2019) highlight the lack of recognition for technical and editorial support (including translation) in the current CRediT roles. This lack of recognition extends further still: a broad range of contributions is excluded by any focus on contributions to publications (see Figure 1).


Figure 1: A representation of the multiple contribution layers that can underlie a research article. The Authors occupy the top layer, most closely linked to the research article, followed by the Contributors layer. The bottom layer, consisting of the Team Infrastructure Roles, supports the two layers above it. The individuals in Team Infrastructure Roles are less visible and furthest removed from the research article at the top. Illustration adapted from illustrations created by Scriberia with The Turing Way community (CC-BY 4.0, DOI: 10.5281/zenodo.3332807).

CRediT could play a role in making some of these contributions more visible if it were applied to other tangible research outputs, such as datasets, software and protocols. Nevertheless, there are currently no formalised processes to acknowledge and credit these non-article research objects (Himanen et al., 2023; Vasilevsky et al., 2021). A new focus on these research outputs also risks increasing the number of dimensions on which individuals contributing to research are assessed (Himanen et al., 2023; Hostler, 2023). Furthermore, as long as the focus of research evaluation remains on outputs, research assessment will continue to fail to capture essential contributions to the research process that are less tangible, such as interpersonal skills involved in mentoring and community/project management. These skills are also essential for managing research groups, yet are currently undervalued in tenure and promotion processes focused primarily on authorship of traditional research publications (Schimanski & Alperin 2018).

Examples of other contributions that go beyond the contribution roles outlined by CRediT are:

  • Conceptual contributions that are not substantial enough to be recognised in the author or contributor list (discussions, problem and hypothesis formulation, generation of ideas)

  • Negative results or replications that do not result in tangible research outputs

  • Facilitation of work, such as providing connections between contributors or including stakeholders outside of academia (a key component of the work by Research Community Managers)

  • Maintenance of work, such as the running of research projects or labs (performed by roles such as Project Managers, Technicians)

  • Development of open source software, code and tools, which may also require maintenance so that others can continue to use it in their analyses (Research Software Engineers or Data Scientists)

  • Labour related to societal impact, such as contributions to public discussions and conversations, policy work

  • Leadership in committees, organisation of conferences, workshops and scholarly societies

  • Peer review and quality control of research

  • Teaching and mentoring activities

Where to go from here?

There is a role for transparent recording of research contributions and taxonomies such as CRediT. Nevertheless, by recording and measuring the productivity of a narrow range of research outputs we are not going far enough in addressing the roots of academic inequity, such as the perceptions of what it means to be a researcher and participate in research (Bennett et al., 2023).

As a first step, the recording of research contributions should be extended to acknowledge the full breadth of contributions and the individuals who make them. This should include contributions that do not directly result in authorship on articles, and look beyond articles to other tangible research outputs such as datasets, software packages, and protocols. Longer term, focusing solely on these tangible research outputs does not recognise the labour involved in enhancing the quality of the work produced (Agate et al., 2020). Rather than placing emphasis solely on quantifiable or describable aspects within contributor roles, new approaches should explore methods for recording and assessing a broader range of valued contributions (Agate et al., 2020). Acknowledging the full breadth of contributions extends beyond the advantages it offers to TIRs and the recognition of these specialised roles in research. It holds the potential to positively impact researchers focusing on intangible research contributions, such as those who prioritise teaching, mentoring, leadership, service, and societal impact.

Examples of research assessment formats that are more inclusive of a broad range of contributions are already available, such as the ‘evidence-based CV’ piloted by the Dutch Research Council (NWO) in 2022 and the SCOPE framework shared in 2021 (INORMS, 2022). The evidence-based CV asks for around ten outputs, not limited to research objects, that the individual considers most defining of their career. The SCOPE framework for research evaluation provides five stages for evaluating research responsibly (Himanen et al., 2023; INORMS, 2022; 2023). The first stage starts with what is valued about the entity being evaluated. The second stage aims to ensure that the evaluation is context-specific. In the third stage, all options for evaluation should be considered. The fourth stage involves probing deeply to find out how the evaluation may be biased or gamed, as well as what its unintended consequences and cost-benefits are. In the fifth stage, the evaluation itself should be evaluated: did it achieve its aims (INORMS, 2023)? These five stages are designed to help plan, design, and conduct research evaluations, as well as to check their effectiveness (Himanen et al., 2023). INORMS shares case studies that implemented the SCOPE framework on its website and in the INORMS (2023) report. Both the evidence-based CV and the SCOPE framework focus on what is valued most, instead of what is easily recorded and measured, and are widely applicable, including to TIR contributions. Nevertheless, as these are recent developments, evaluations of these frameworks have not yet taken place.

The evidence-based CV and SCOPE also allow for more tailored assessment of individuals. Contributions to research are made by a wide variety of roles and individuals with different ambitions, who are difficult to lump together even when they perform the same role. That some roles are not expected to contribute to research outputs does not mean that they make no contributions to articles, or that they would not like to be recognised in this manner. On the other hand, some roles may not want this formal recognition because they lack control over the article after their work is complete (see Matarese & Shashok, 2019, who describe the concerns editors and translators may have about the quality of their contributions when changes occur during the writing process without their explicit approval). This more tailored assessment of individuals contributing to research may also dispel the idea that each individual has to excel in all research contributions (Himanen et al., 2023; Hostler, 2023), and would allow for the specialisation and development of expertise that would benefit research teams in tackling more complex research challenges.

Nevertheless, as long as researchers are hired and promoted primarily on the basis of their publications and individual contributions, more accurate and inclusive identification of contributions will have limited impact on scientific evaluation and promotion processes. Evaluations based on processes and values provide the opportunity to capture research contributions in a more nuanced way, including the team efforts that make up so much of the research process. Focusing on recognising contributions from the entire research team may prevent TIRs and other contributors from leaving academia for other occupations, a departure that would ultimately lead to a loss of institutional skill, expertise, and memory (Bennett et al., 2023). Capturing research contributions in a more tailored manner would also allow for more flexibility and equity in the tenure and promotion processes of faculty members, allowing faculty to focus on research activities beyond the traditional publication.


Contributor taxonomies like CRediT are an important step forward in recognising the variety of research contributions. Nevertheless, a sole emphasis on contributorship or CRediT is insufficient to encompass all the activities and efforts that enhance the quality of scientific research. If tenure and promotion processes continue to prioritise contributions to written research outputs that are easily recorded and measured, rather than embracing the entire spectrum of research activities and contributions needed for the creation of knowledge, research assessment will continue to reward only a narrow range of outputs and individuals.


Many thanks to Joshua Peterson, Arielle Bennett and Sam Teplitzky for feedback on the draft text of this essay. Thanks also to the other co-authors of ‘A Manifesto for Rewarding and Recognizing Team Infrastructure Roles’ who have undoubtedly co-shaped my visions and opinions on this topic. Thanks to Sarah Gulliford-Kearns, Abbey K. Elder and Juan Pablo Alperin for their edits and kind comments that improved the readability and quality of this work.


Agate, N., Kennison, R., Konkiel, S., Long, C. P., Rhody, J., Sacchi, S., & Weber, P. (2020). The transformative power of values-enacted scholarship. Humanities and Social Sciences Communications, 7(1), 165.

Allen, L., O’Connell, A., & Kiermer, V. (2019). How can we ensure visibility and diversity in research contributions? How the Contributor Role Taxonomy (CRediT) is helping the shift from authorship to contributorship. Learned Publishing, 32(1), 71–74.

Alliez, P., Di Cosmo, R., Guedj, B., Girault, A., Hacid, M.-S., Legrand, A., & Rougier, N. P. (2019). Attributing and Referencing (Research) Software: Best Practices and Outlook from Inria.

Baumgartner, H. A., Alessandroni, N., Byers-Heinlein, K., Frank, M. C., Hamlin, J. K., Soderstrom, M., Voelkel, J. G., Willer, R., Yuen, F., & Coles, N. A. (2023). How to build up big team science: A practical guide for large-scale collaborations. Royal Society Open Science, 10(6), 230235.

Bennett, A., Garside, D., Gould Van Praag, C., Hostler, T. J., Kherroubi Garcia, I., Plomp, E., Schettino, A., Teplitzky, S., & Ye, H. (2023). A Manifesto for Rewarding and Recognizing Team Infrastructure Roles. Journal of Trial and Error.

Brand, A., Allen, L., Altman, M., Hlava, M., & Scott, J. (2015). Beyond authorship: Attribution, contribution, collaboration, and credit. Learned Publishing, 28(2), 151–155.

Fitzgerald, S., Budd, J., Beile, P., & Kaspar, W. (2020). Modeling Transparency in Roles: Moving from Authorship to Contributorship. College & Research Libraries, 81(7).

Gadd, E. (2020, January 20). CRediT Check – Should we welcome tools to differentiate the contributions made to academic papers? LSE Impact Blog.

Heffner, A. G. (1979). Authorship Recognition of Subordinates in Collaborative Research. Social Studies of Science, 9(3), 377–384.

Himanen, L., Conte, E., Gauffriau, M., Strøm, T., Wolf, B., & Gadd, E. (2023). The SCOPE framework – implementing the ideals of responsible research assessment. F1000Research, 12, 1241.

Hosseini, M., Gordijn, B., Wafford, Q. E., & Holmes, K. L. (2023). A systematic scoping review of the ethics of Contributor Role Ontologies and Taxonomies. Accountability in Research, 1–28.

Hostler, T. J. (2023). The Invisible Workload of Open Research. Journal of Trial and Error.

INORMS. (2022). The SCOPE Framework, A five-stage process for evaluating research responsibly (10).

INORMS. (2023). The SCOPE Framework. Figshare.

Larivière, V., Pontille, D., & Sugimoto, C. R. (2021). Investigating the division of scientific labor using the Contributor Roles Taxonomy (CRediT). Quantitative Science Studies, 2(1), 111–128.

Matarese, V., & Shashok, K. (2019). Transparent Attribution of Contributions to Research: Aligning Guidelines to Real-Life Practices. Publications, 7(2), 24.

McNutt, M. K., Bradford, M., Drazen, J. M., Hanson, B., Howard, B., Jamieson, K. H., Kiermer, V., Marcus, E., Pope, B. K., Schekman, R., Swaminathan, S., Stang, P. J., & Verma, I. M. (2018). Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proceedings of the National Academy of Sciences, 115(11), 2557–2560.

Rennie, D., Yank, V., & Emanuel, L. (1997). When authorship fails. A proposal to make contributors accountable. JAMA: The Journal of the American Medical Association, 278(7), 579–585.

Sauermann, H., & Haeussler, C. (2017). Authorship and contribution disclosures. Science Advances, 3(11), e1700404.

Schimanski, L. A., & Alperin, J. P. (2018). The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future. F1000Research, 7, 1605.

Shotton, D. (2017, November 24). Elsevier references dominate those that are not open at Crossref. OpenCitations Blog.

UKRI. (2023). 101 jobs that change the world.

Vasilevsky, N. A., Hosseini, M., Teplitzky, S., Ilik, V., Mohammadi, E., Schneider, J., Kern, B., Colomb, J., Edmunds, S. C., Gutzman, K., Himmelstein, D. S., White, M., Smith, B., O’Keefe, L., Haendel, M., & Holmes, K. L. (2021). Is authorship sufficient for today’s collaborative research? A call for contributor roles. Accountability in Research, 28(1), 23–43.

Whitchurch, C. (2008). Shifting Identities and Blurring Boundaries: The Emergence of Third Space Professionals in UK Higher Education. Higher Education Quarterly, 62(4), 377–396.

Zhang, Z., Wang, S. D., Li, G. S., Kong, G., Gu, H., & Alfonso, F. (2019). The contributor roles for randomized controlled trials and the proposal for a novel CRediT-RCT. Annals of Translational Medicine, 7(24), 812–812.
