Musings of a responsible research and innovation hall monitor during recess
By Michael J. Bernstein
Last year, my experience as an expert reviewer for Horizon 2020 afforded me a fresh perspective on the challenges of institutionalizing (counter-cultural) normative ambitions like Responsible Research and Innovation (RRI) into research and innovation governance mechanisms. In an otherwise uncertain world, and at a particularly unsettled time amidst a global pandemic, I appreciated the clarity of my charge as reviewer: critically review a proposal on its merits and seek strict alignment with delineated call criteria.
Two years ago, my colleagues and I reported findings on the integration of RRI into European Commission Horizon 2020 funding lines. We found some elements of RRI, like concern for gender balance, robustly ensconced across all levels of programming—from work programme text to proposal review criteria. Other RRI elements, like accounting for broad ethical concerns (e.g., not just ethics as data privacy) or governance, were not so prominent. Drawing on more than two hundred interviews, we concluded: “The lack of clarity in conceptualizing RRI for research policy and governance, the limited understanding among key stakeholders, and the concept’s conflation with other—often conflicting—policy goals (e.g., scientific excellence, economic value, technological readiness) hinder the emergence of a specific RRI-oriented policy frame” (Novitzky et al. 2020). RRI advanced far and wide in H2020, but not as evenly as one might expect for an issue with a cross-cutting mandate.
Of course, clarity of task as a reviewer did not equate to simplicity. As reviewers we were expected to wade diligently, as if automata, through 500–700 pages of proposal material whilst keeping in mind a veritable family tree of three criteria, twelve sub-criteria, and 30 indicators. Imagine the reviewer as arborist, if you will, tending a tree of three core branches (i.e., the criteria of excellence, impact, and quality of implementation). From each core branch, observe two successive bifurcations (generating, in all, 12 sub-criteria, like, from excellence, “Clarity and pertinence of the objectives”). Follow each bifurcation to several sprightly twigs (i.e., indicators; continuing the example, “Clear, measurable, realistic and achievable within the duration of the project.”). Returning to the forest, the guidance also urges, with the constrained politesse of an announcement to keep one’s arms and legs inside a fast-moving vehicle, “Please take into account applicants have invested a lot of time and effort in putting a consortium together and preparing a proposal.” The appeal to recognizable human effort did give me pause. How might I feel as a proposal author, perhaps one unaware of a cross-cutting term of art, to be docked points for not addressing a concept I’d never been socialized to care about?
In my case, I care very much. In 2017 I moved to the far reaches of the Arctic Circle to support efforts to advance the cross-cutting issue of RRI, “in H2020 and beyond.” Based in charming Tromsø, Norway, I merrily identified and interviewed key R&I stakeholders in two programmes of H2020. While interviewees shared their perspectives, I noticed a rather persistent inner monologue: *Oh, so you haven’t heard of RRI…that’s interesting.* I eagerly noted ways in which RRI implementation fell flat. Alternatively: *Oh, that’s fascinating, could you tell me more about how you put that aspect of RRI into practice? Surely this insight will help the Commission see there are other ways to advance its proclaimed interest in implementing cross-cutting issues.* I analysed documents voraciously, from legislation to call texts to project summaries, seeking evidence (or its absence) of RRI uptake and integration into Commission programming. The proverbial holy grail of my quest: find RRI at the level of evaluation criterion. *Yes, when researchers face consequences for not addressing RRI, they’ll wise up.*
Back in the Southwestern U.S., awash in evaluation criteria, sub-criteria, and indicators, I tried, as urged, to review with the fairness, accuracy, and conscientiousness with which I would wish a proposal of my own to be examined. Stakeholder and public participation featured prominently in the call text, along with a connection to the mission of advancing the European Green Deal. This will be good! With the thrill of witnessing a favourite player score a hat-trick, I tallied up RRI in all three criteria: in the Excellence criterion, stakeholder involvement and gender; in the Impact criterion, open science; in the Quality of Implementation criterion, partner suitability (including sectoral diversity) for the topic challenge and scope. Re-reading the criteria a second time, my glee turned to curiosity: I wonder how reading for each of these criteria will play out. Opening up my first proposal, facing down the scroll bar of text, then looking again at not just the RRI sub-criteria but all the others, I noticed my curiosity gingerly tiptoe away. Wait, how am I supposed to keep all these in mind!? My disquiet grew the more I tried to think about what each indicator asked. What does it mean for objectives to be realistic and achievable? For the concept to be sound? For an effort to be beyond the state of the art or to have innovation potential? For barriers and obstacles to be well considered? Or for exploitation and dissemination measures to be of high quality? And these questions say nothing of the appropriateness of organizational structure, decision-making, complementarity of participants, and the logic of the work plan and its components. I felt unsettled not only by the task but also by the incredible breadth of what we ask of researchers and reviewers in these funding award processes.
With time, I found the proposal review process a formidable crucible for socialization into artful review skills. I learned to address criteria and proposal material with the precision demanded of us in our training by Research Executive Agency (REA) staff, and by our panel vice-chairs. First, we submitted individual evaluation reports (IERs), spending about four to six hours engaging with proposal material and, in my case, another two to four hours generating and distilling comments on the criteria. After this, we received our first lessons in giving comments with precision. Each IER was checked by a panel vice-chair. Like a benevolent, visible hand, my vice-chairs guided me to address features of a proposal I meant to critique but didn’t know how. Paraphrasing: “You say this thing about a deficiency in stakeholder composition, but the comment is unclear. Do you mean to say this, in relation to the dissemination and communication objectives? What do you mean by that word? Can you explain in terms of the sub-criteria? For example, the extent is too narrow because the measures are unlikely to support objectives X, Y, Z stated in the proposal?” Ah, that’s how to do it! I quickly modified my other responses in this spirit, soon a deft hand at drawing out and evidencing deficiencies in the Excellence, Impact, and Quality of Implementation criteria.
Clarity of objectives was often simple enough to evaluate, along with expected project impacts. Most proposals included tables where one might easily track the 1:1 correspondence between call-text objectives and proposal objectives and outputs. While such formats don’t make for scintillating reads, I found myself grateful to authors for the formatting. I also found myself resentful of those who made me *work* to dig key bits of information out of the proposals. Don’t they know how many pages I have to review! I was surprised, in turn, to discover the difficulty of addressing indicators like soundness of the concept and credibility of the methodology—a sub-criterion of the Excellence criterion. Even for a Research and Innovation Action topic, I could never quite find a place to comment on, for example, the presence and appropriateness of a research question, or the fit between proposal design and said question.
With my teeth cut on five IERs, I advanced to the next round, wary but prepared, to produce two consensus reviews (CRs), each drawing on my IER and those of two other experts for a given proposal. Here, the IER tutelage proved essential; as the CR rapporteur, I had to find overlap and divergence among reviewer comments and craft a consensus response. For a cross-cutting issue like RRI, a rapporteur cognizant of such a topic may make or break the significance of a sub-criterion. He or she can draw attention to fellow reviewers’ perceptual lacunae about concepts and illuminate positive or negative examples in the text to justify merits or demerits for indicators. As rapporteur, I often found myself challenged to stand for and evidence what I meant with comments about the comprehensiveness or relevance of a proposed measure – or be prepared to stand down. In truth, I came to appreciate the privilege of the process quite a lot.
I found fulfilment in the social, somewhat convivial, deeply human pursuit of a shared reality through respectful and precise application of our minds. But my thrill at the dialectic of review thrummed, too, with an acute sadness. The call I reviewed was loaded with 18 parameter-setting “should” statements and six expected impacts (including a reference to the UN Sustainable Development Goals, adding a potential 17 further issues). The breathlessness of the demands imposed made me question the soundness of the enterprise—why not have projects pursue two impacts in greater depth, rather than six shallowly and in a way that, let’s be honest, most often ends once a project is completed?
Reflecting as I write this, I can’t help but feel we’re all missing the mark. If we are genuinely interested in orienting our R&I policies toward public-serving missions, changing work programme documents, eligibility considerations, call texts, and proposal review criteria is without a doubt vital to the task. However, such instruments will only be as efficacious as the people socialized in a culture ready for their uptake and implementation. If there are no reviewers who can speak to RRI, evaluating for RRI is absurd. If there are but a handful of reviewers dubbed “responsibility hall monitors” amidst a Babel of researchers, it seems little better. Researchers are laden with pressures to publish, track hours, generate economic impact, and secure their next contract; many live precariously from contract to contract. We stack prescription upon prescription upon researcher desks sagging under the weight of societal expectations. Without a balancing perspective on the lived experience of the people actually doing the work, is it any wonder people hem and haw and protest additional criteria, and initiatives like RRI flounder?
In Horizon Europe programming, the Commission has removed the singular push for RRI in favour of a more distributed strategy. To even be eligible for review, consortium member organizations must have gender equality plans in place. Proposal templates now make explicit an impact-pathways approach, pushing researchers to identify not only key indicators and examples of potential research activities à la dissemination and exploitation, but also specific research outputs and plausible outcomes (you know, for those years after the project funding ends, when all the work is most certainly taken up as foreseen and researchers ride off into the sunset). Open science and consideration of the gender dimension and of interdisciplinary research “where appropriate” feature prominently in proposal review criteria, along with consideration of the Sustainable Development Goals. Researchers may look back on H2020 and find its requests for inclusion of RRI modest indeed.
I hope to have the chance to review again. Although, don’t yet ask me how I’ll assert whether a project will “do no significant harm,” a principle now emerging as a nascent consideration for research and innovation activities and proposal review. The very devices on which we process our reviews, and the energy infrastructure that transmits our data, rely on exploitative labour, non-renewable minerals extracted in toxic settings, and fossil energy. The commercial pipeline researchers are so avidly encouraged to feed perpetuates corrosive inequalities and unsustainable consumption patterns. I feel stripped of the blanket of certainty about the right question to ask now, roused from my dream of a solution as concrete and clear as including RRI in programme documents, call texts, and proposal review criteria.
Maybe instead it’s time to ask how we might reclaim and engage our humanity in the R&I process (keep those arms and legs inside the train of modernity!). If we take seriously the proposition that the subjective experience of the people involved in a subjective process matters for the integrity of that process, how might we approach R&I as a humane endeavour? Maybe these all-too-human feelings of loss and concern could just do with more space: more space to attend with care to our research and innovation environments as we ride time’s arrow, inexorably, into the future.