Ruth Interview
By Ros and Rob
Ros
Ruth
Rob
So even if you go back to Ludwik Fleck’s work on the genesis of scientific fact, what he’s trying to say is that ‘neutral’ scientific inquiry doesn’t exist; that each scientific inquiry is always shaped by its social and historical context. That immediately opens up the question ‘what kind of values are shaping scientific inquiry?’
And with this question immediately comes the question of responsibility; it brings up questions of reflexivity, about that interaction between scientific knowledge production and its social and historical context. That’s what I mean when I say that the question of responsibility is, in a sense, built into the field of STS, its core premises and its core endeavors.
So, there is no ‘outside of normative’ questions. If scientific work is shaped by social and historical context, there’s always a question of responsibility immediately attached. And then the question is: how do we frame this responsibility? Who do we hold responsible? Do we see it as something that individual scientists have to contend with – I think we should, to a certain degree – or do we also think about it as a structural question? If, for example, we would like researchers in various fields of the natural and technical sciences – but of course also the social sciences and humanities – to be reflexive about their knowledge production and the ways in which it is shaped by and shaping society, then we have to provide them with the tools and the types of knowledge they need to engage reflexively with their own practices.
Then it becomes a structural question in terms of education and science funding – what kind of work can be conducted in academia. But it also becomes a question of career incentives. This is what STS teaches us: We need to understand the whole academic system as a social system whose structure intrinsically shapes what kind of knowledge can be produced or what technologies can be developed.
So, I wouldn’t only see it as something that comes from the outside. What happens if we don’t change any of the systems around that? If we, for example, evaluate an RRI project the same way as a mono-disciplinary project – if we expect all the same outcomes from it – that doesn’t fit. That creates tensions and a sense that ‘I have to do this on top of all the things I’m already doing.’ This is an area where university governance, research funding, and other bodies have a lot of power to influence the life-worlds of researchers. They could say, ‘Okay, if we want this kind of knowledge production, what would be a good instrument to evaluate whether this project has been successful? Whether it has actually been working along the mission we set out to accomplish?’
So, we shouldn’t think about responsibility as an offensive term – like, in a sense of ‘legal responsibility’ – that’s a different matter altogether. The goal of these types of projects is to allow for more perspectives to shape the knowledge production and technological development process and to create a space for interdisciplinary and transparent reflection about goals and values that shape this process. In this sense, I think the most important element is to create structural conditions that allow for the different partners to actually engage in this process.
And with regard to the policy discourse: I think what’s been most useful about the adoption of the RRI framework is that it creates these pockets for people who are interested in interdisciplinary work. It allows the building of networks that can move forward and create more structural change. But that will only be possible if the important things, like funding incentives or career incentives, change with it or add new categories of evaluation to their current repertoire.
So often we think primarily along the lines of ‘whose research is not being valued.’ But I think it’s as much a problem of ‘whose work is immediately seen as valuable’ because it’s in the right outlets or comes in the right form. But I think there is more discussion now. It’s becoming harder to just say ‘Oh, this person has published in a high-impact journal, they must be good.’ So I think it’s a gradual change.
You know, one of my research foci is research evaluation. One of the major problems is that we have too much of it. There’s so much evaluation going on because of the time-limited character of positions and third-party funding, and because of the huge amount of third-party funding that researchers are expected to acquire. There’s just too much evaluation, and it’s hard to do this in a way that doesn’t use proxies as shortcuts to facilitate an understanding of the worth or value of what people are doing. So, in a sense, that’s one of the key issues: we’ve created a system where we have to put so much labor into evaluating work that it has become hard to do it without shortcuts.
If we stick with bio-medicine – this is one of the fields that I’m most familiar with – for example, if every project that looks at some basic biological mechanism needs to explain how it will in the end contribute to human health, this often leads to research being translated from model organisms to humans way too quickly, and to epistemic leaps that shouldn’t be there. This is why I’m saying we need more diversification, with different ideas of what research can be. We need to have types of evaluation that fit the specific mission that the research is following. So, too much relevance is actually not good either, in a sense.
For example, some of the work we’ve been doing is on epigenetics and how certain extremely gendered stereotypes or social class stereotypes are built into epigenetic research on early life adversity. Those are mouse studies – very basic mechanisms are being looked at. But there’s a lot of very irresponsible conduct happening in some of these studies. They take social categories and shift them around between mice and humans, and very selectively quote from psychological literature that supports their claims, to make easy ‘just-so’ stories that gain momentum because of the social stereotypes they’re building on.
So, here I think the question of responsibility lies in, for example, performing interdisciplinarity responsibly – for example, not just reading three papers from psychology and then building your arguments on them. That’s more about research practices, right? But let’s say I want to redesign a quarter (?) with a smart-energy system. Responsibility means that I have to actually engage with the stakeholders that are there. I think researchers should always act responsibly. And funders, for example, should ask the right questions about their projects. Reviewers should ask the right questions about their papers, to draw it out when researchers are not acting responsibly. But this might significantly change what ‘acting responsibly’ means.
About the question of social relevance: whether something needs to be translated into some kind of societal application at all – that’s another question. Here, the question is how we can ask scientists to articulate themselves towards possible applications without using it as a criterion that ‘we’re not funding any work that doesn’t already promise it will create some benefits.’ So, I think that relevance and responsibility are often thrown into one pot and used interchangeably. But they’re quite different: you can act very responsibly in research without necessarily doing work that is societally relevant in the classical sense that it will create some societal impact.
But in terms of gradual change or radical change, I think it will only be gradual. And I think in a way this is also fair. Because imagine you’ve been trained for decades to do research one way and then suddenly it won’t be valuable anymore. That’s not necessarily how we can, you know, change things. So, from my perspective, if I’m sitting in an evaluative committee like this … if I know there are, let’s say, four projects that get funded, and there are one or two that are clearly also there because they’re better at addressing the social dimension of their work, then that’s already a win. And the question is, how can you really institutionalize these processes and make them not experimental, but something that remains? And how can you also make sure that they travel from one type of review panel to the next and become something that is more normal? Then the gradual change makes a difference over time.
I have to think here of Donna Haraway’s notion of the ‘narrative field.’ In Primate Visions, she talks about how, with the influx of female and feminist researchers, new stories were created about male and female primates that moved away from very traditional patriarchal stories about very passive females and reactive males, towards more varied stories that also centered the females or told different stories about their bodies, their sexuality, and their behavior. She has this nice metaphor: through these new stories, the ‘narrative field’ gradually shifted, which meant that some stories became less plausible and moved outside of this narrative field, while other stories gradually became more plausible and became part of it. I think this is, in a sense, what all this work is for: to shift that narrative field and make certain evaluations more plausible and others less plausible.