RRI as peer review practice

By Davies

In research we review each other’s work. This idea is fundamental to scholarly practice; it is, perhaps, one of the few things that connects research across disciplines, paradigms, and modes of enquiry. From theology to computational social science to astrophysics, we share drafts, present at conferences, write articles for peer review, and solicit international comments in recruitment or funding processes. Our work is discussed in more or less public spaces and, through that discussion, assessed by colleagues, institutions, and funders.

I start with this point because it seems to me that one important way that RRI is institutionalised and implemented is through its integration into peer review processes. Recent years have seen RRI and related ideas folded into reviewing, as a criterion for assessment, an activity to be noticed, or a signal of particular priorities or approaches. While my own experience of reviewing is primarily through written reviews, comments, or letters – writing peer review reports for journals or tenure review assessments, for instance – in the case of RRI I have been involved in more extended forms of evaluation, involving meetings over several days. This (I believe) is not unusual: requesting that large-scale research projects take up RRI methods and ideas, as has been the case at the European Commission and beyond, means that ‘RRI people’ – often social scientists with a background in studying science, technology, and society – become part of the panels that assess these projects. It is on my (admittedly limited) experience of such panels that I want to reflect.

Reviewing is a social process, involving judgements as to quality and value that are drawn in the light of how one imagines a wider discipline or field. This is even more the case within evaluation meetings, particularly those that take place in person (remember those?). Solitary reviewing – writing reports that only a couple of people will read, where only an editor knows your identity – means contributing to discussion in a rather passive manner: we send our words out to authors and editors, but have little sense of how they are received. They become part of only a very slow conversation, and the ways in which our judgements may be challenged remain largely invisible (bad-tempered ‘response to review’ letters notwithstanding). In panel meetings, however, we are part of a community of reviewers, one that can demonstrate, in real time, the non-universal nature of reviewers’ assessments. While those being assessed are (usually) not present, even the most monodisciplinary panels contain people with different views, experiences, and backgrounds. In such contexts it is not only the applicants whom we are evaluating, but – at least to some extent – each other.

Picture the scene: the first day of a multi-day evaluation panel, in which you are meeting people you have only previously corresponded with. Perhaps you are nervous – these are, after all, new colleagues, likely from different disciplines. If you are the ‘RRI person’ you may well be the sole non-natural scientist, and, well, the future is a foreign country, and they do things differently there. Gradually, as the discussion unfolds, you start to place and understand the perspectives and backgrounds that different people are bringing. This person knows about this thing. This person sees that as the priority. That person makes (what you think are) particularly good points. Over time the atmosphere in the room loosens (even as the air becomes staler and stuffier). You go out for dinner, and it loosens even further. In the days that follow the familiarity grows: you might not agree with each other, allegiances and coalitions of varying degrees of visibility may even form, but you at least develop some kind of shared understanding of how different people operate, and the criteria for quality they are applying. If you are the RRI person, others on the panel start to see how you assess projects with regard to their commitments to this notion.

This takes time. For many of us, such peer discussion is a vulnerable activity: we must unpack, lay bare, our judgements and how we have come to them. This is especially challenging in an interdisciplinary setting, exactly because such judgements are not universal or natural, but trained into us by the contexts in which we work. It is not obvious what makes for ‘excellence’, ‘innovation’, or ‘responsibility’. Different disciplines and subfields, different individuals, will characterise these in different ways. An evaluation panel is, among other things, a space of slow mutual investigation whereby we start to understand what others mean when they talk about excellence or responsible research. In the best cases it becomes a space of mutual appreciation, of a collegial spirit where we are able to respect the positions and judgements of others. In the context of RRI, this is often about reaching a place where dominant notions of scientific excellence are disrupted to incorporate other kinds of values.

What are the implications of this? Thinking of review panels as spaces of mutual education and shared understanding highlights, I think, that such review is relational. It involves building trust and understanding and, as I have already noted, takes time (and energy). We should take this seriously and treat reviewing for particular programmes as a learned, collective skill, one that shouldn’t be dissipated after a single cycle. Concerns about groupthink are valid, but in my experience involvement in just one evaluation cycle (e.g. for one year of a multi-year call) wastes hard-won understanding – in the case of RRI in particular – of ideas about the nature of good research.

Perhaps a further implication is the need to foreground the fact that evaluation is a necessarily diverse space in which evaluators will disagree. In practice it is often hardest for those who have historically been marginalised in the academy – whether that is scholars of colour or lay representatives – to forcefully argue for their notions of excellence or valuable research. Peer review as mutual education only works when all involved are open to hearing, and respecting, different perspectives and assessments. While I have been fortunate in those who have chaired the panels in which I have participated, it is clear that such skilled moderation is not always in place. Perhaps we should take moderation, and participation in reviewing generally, (more) seriously as a craft to be learned, discussed, and explicitly refined. 

We all of us review each other’s work: why not, then, open up how this is done in different disciplines and traditions, and explore how we might learn from each other’s practices?