
Disrupting Hierarchies of Evaluation: The Case of Reviews in Digital Humanities

Published on Nov 15, 2022

Abstract

This essay discusses how the editors of the journal Reviews in Digital Humanities have developed a people-first approach to peer review: community-centered peer review policies, workflows, and practices intended to address the gap in evaluation of digital scholarship. This work offers a model for disrupting hierarchies of evaluation that position senior, tenured professors as the appropriate gatekeepers of “quality” for digital scholarship and instead reframes the notion of “scholarly community” to recognize that expertise lies beyond the professoriate — particularly when evaluating public-facing scholarship. The essay further offers an example of how to create a community-driven peer review culture that brings in graduate students, librarians, archivists, public humanities workers, curators, and more to assess scholarship. In doing so, it articulates a vision for disrupting conventional notions of “expertise” and, in turn, hierarchies of evaluation for scholarship within the academy.

What does it mean to develop and implement a people-first peer review system? This question lies at the heart of our work founding and running Reviews in Digital Humanities, an open-access journal published on PubPub that is dedicated to peer reviewing digital scholarly outputs (e.g., digital archives, exhibits, data sets, games) based on humanities research.

Reviews responds to a gap in evaluation at the intersection of technology and the humanities, offering researchers who produce scholarship in genres other than traditional monographs, journal articles, and book chapters the opportunity to seek the imprimatur of peer review and external vetting of their work. From our commitment to creating a humane system of peer review that supports scholars as people, to the design of our peer review workflow, to the selection of the reviewers who participate, Reviews disrupts hierarchies of evaluation in the academy and aims to consistently remind our scholarly community that we are all people first.

The journal emerged from conversations between us, based on our experiences running peer review mechanisms for digital humanities conferences together. Through this work, we recognized a lack of consensus over how to peer review digital scholarly outputs. While colleagues in digital humanities create digital scholarship themselves, there appeared to be no shared sense of how to evaluate the digital scholarship created by others. Although professional organizations like the Modern Language Association (MLA) and the American Historical Association (AHA) have invested time in developing guidelines, these have yet to be operationalized in evaluation. Beyond the challenges of conference abstract reviewing, there has also been a lack of outlets for peer review of digital scholarly projects themselves.

We further observed that those most negatively affected by this lack of consensus were scholars in areas such as African diaspora studies, Latinx studies, Native and Indigenous studies, Asian American studies, and other areas that have been systematically marginalized in the academy. As many in these fields are scholars of color and/or Indigenous scholars, the peer review problems for digital scholarship compound harm in multiple ways: scholars in these areas already bear the burden of demonstrating the legitimacy of their research, a burden that is further compounded by the lack of an evaluation structure for the digital scholarship they create. This, in turn, affects how their work is (or isn’t) valued in hiring, reappointment, tenure, and promotion. Recognizing that the many facets of these scholars’ identities as people have a direct impact on their professional lives, we identified the lack of peer review as a clear deterrent to building up digital scholarship in these underrepresented fields in digital humanities.

Re-imagining Evaluation

Together, the lack of standards and representation signaled to us an absence of a community around this form of peer review within the field of digital humanities, which we then aimed to address with Reviews. Each month, we release issues comprising 500-word overviews of projects paired with 500-word reviews by practitioners in the field. Digital humanities projects considered by our journal include, but are not limited to, digital archives, multimedia or multimodal scholarship, digital exhibits, visualizations, digital games, and digital tools. Examples can be found in our A-Z project registry. Our hybrid model is not quite post-publication peer review — as digital projects can be revised — but requires a substantially developed project for evaluation. Projects are reviewed at the request of project directors, who provide the 500-word overview of their project. The overviews ask the project director and/or team to articulate the stakes of their project, the status of its development, the significance of both its humanistic and technical content, and how the humanistic and technological intersect to produce new knowledge. The inclusion of this overview is essential to the people-first nature of Reviews because it asks project directors and teams to articulate their goals for the purposes of assessment. This way, their work is evaluated on the basis of their own understanding of how it fits into scholarly fields and of their goals and objectives for the project. (See this example.)

As the overviews indicate, we reject the notion that evaluation of scholarship — digital or analog — is somehow more “valid” or “rigorous” when undertaken in a vacuum. Instead, our overviews give reviewers insight into the goals and purpose of each project as its directors articulate them. Reviewers are then able to assess projects through the vision and mission articulated by project directors, not through some arbitrary sense of what digital scholarship is or should do. At the same time, to ensure that projects are also in line with the recommendations of scholarly communities, we ask that reviewers assess project directors’ articulation of their work in relation to relevant professional organization guidelines (e.g., MLA and AHA). This process strikes a balance: it disrupts the notion that there is a single way to evaluate scholarship that varies widely in method, tool, and genre, while taking advantage of extant guidelines and advocating for their inclusion in assessment.

We further recognize that digital scholarship may never be finished and, instead, may develop in phases. While some projects may be presented after completion, many may not evolve past the stage at which we review them, whether due to lack of funding, lack of labor, or changes in institutional affiliation or job role for project directors. As such, digital scholarship itself disrupts hierarchies of evaluation because it does not hew to a timeline consistent with traditional academic outputs. This is certainly a feature of the genre-bending nature of digital scholarship but is also, in no small part, due to the precarity (and often forced mobility) of individuals and funding for those who work on projects. Project directors occupy a variety of job roles — students, tenure-stream faculty, non-tenure-track faculty, alt-ac practitioners (e.g., digital scholarship librarians, humanities developers, academic technologists), librarians, archivists, and more. These roles may grant or foreclose access to institutional resources and to the opportunity to apply for grants, while a change of jobs may put continued work on a project in question. Aiming to be responsive to the intersection of our project directors’ jobs and their projects — which, more often than not, do not align with each other — we offer project directors the opportunity to seek review at any point in a project’s development at which they determine they want the feedback that peer review offers. Moreover, we offer them the possibility of seeking a re-review when further phases of a project have been completed, if applicable. In this way, we are able to be responsive to the professional and scholarly needs of the people who participate in the journal. Importantly, we also try to be kind to our contributors by setting deadlines that align with their availability, sending them reminders near deadlines, and providing generous extensions when requested. We also encourage team authorship of materials to spread the labor among an entire team rather than placing the entirety of the work on the project director.

Making Peer Review People-first

Our approach to the reviewer side of the work also demonstrates our keen awareness of the human beings who subtend academic evaluation processes. First, our peer review process is an open one, intended to emphasize not a hierarchy of project director and reviewer but a mutually beneficial relationship with the shared goal of improving digital scholarship. For example, the project director and/or team is known to the reviewer, while the reviewer’s identity is disclosed upon publication of the review through their byline, which is crucial to ensuring that reviewers receive some professional benefit and recognition from participating in the process (e.g., a publication on a CV or resume, akin to a book review). Project directors and teams that wish to respond to a review (e.g., to indicate that areas for improvement suggested in a review have been addressed) are offered the opportunity to write up their thoughts, which are posted beneath the review.

Our position is that the so-called “rigor” of blind peer review is, at best, a fiction — and an impossibility when working with digital scholarship, which is often public-facing with robust credits and acknowledgments. The open process also promotes an ethics of accountability and care. Reviewers are asked to provide constructive feedback, to avoid ad hominem attacks or other forms of abuse, and to resist categorically negative reviews that fail to recognize the value within a project. “Reviewer 2” is not welcome at Reviews.

This accountability goes both ways: as much as our process requires care and accountability from the reviewer, it also pushes project directors and their teams to be open and receptive to constructive feedback. We do not publish reviews that are pure praise; rather, we ask reviewers to make suggestions, whether changes to extant projects that would improve usability, accessibility, and the scholarly dimensions of the work; considerations for subsequent phases of development; or future areas for growth, development, or application of a project. This is accomplished through the mentoring work that we invest in both our project directors and our reviewers. When we receive overviews and reviews, we frequently provide feedback to authors, identifying areas to clarify, expand, and revise. For special issue editors, mentorship includes assisting them with articulating their vision for the issue, providing constructive feedback to project directors and reviewers, and addressing conflicts when they arise. Most commonly, our concerns are getting content and edits back in a timely manner and framing criticism constructively and kindly. This labor is a core part of our commitment to using the journal to collectively build the capacity of those who engage with digital scholarship to participate in its evaluation.

Finally, our selection of reviewers plays a role in our people-first approach to peer review and, in turn, disrupts hierarchical power dynamics of evaluation. Our reviewer pool is vast and varied because the expertise needed to review scholarship that blends an area of scholarly content with technical knowledge is unique. If we were simply to embrace a standard model of peer review that positions professors (preferably senior, tenured professors) as the appropriate gatekeepers of “quality” for digital scholarship, we would have no reviewer pool. The work of digital scholarship reframes the notion of “scholarly community” to encompass a wide range of practitioners and project creators: graduate students, librarians, archivists, public humanities workers, curators, and more. Outreach begins with developing reviewer pools based on participation in publications, digital humanities conferences, major projects, degree programs, and the lively digital humanities social media networks on Twitter and Slack. This approach, coupled with suggestions from our advisory board, allows us to reframe the notion of who is qualified to evaluate this work. Our work thus challenges conventional notions of “expertise” to recognize that expertise lies outside of the professoriate, particularly when evaluating public-facing scholarship.

Coda

While our design choices for Reviews in Digital Humanities have been made in response to the specificities of digital scholarship and its manifold genres, the lessons from our work for designing people-first peer review have broader application. More equitable review practices that address the needs of minoritized members of our scholarly community become possible when we develop practices that emphasize their voice, agency, and legitimacy in situating the stakes of their work. Providing mentorship for both reviewers and those seeking review, to ensure that we are all collectively participating in a peer review community grounded in an ethics of care and accountability, is crucial to a more humane experience. Offering a mechanism for credit to reviewers provides a measure of reward for the oft-hidden labor of peer review. And recognizing that a broader community of practitioners may have the requisite expertise to assess scholarship challenges the university as a site of power and thus disrupts the hierarchies of scholarly evaluation.
