Ruth Interview

By Ros and Rob

Ros

Ruth

Rob

We’re trying to capture some of the behind-the-scenes, human aspects of these projects. Is there anything that you wanted to just chat about up front?
I mean, yes, we need these kinds of languages for certain strategic reasons. But, to be honest, for me, STS has always been about the questions of responsibility. Responsibility is, for me, at the center of all STS inquiry.
Can you elaborate on that please? When you say you think it is at the center of all STS inquiry, what do you mean by that? I’m not sure – I mean – I think one of the reasons that we have been uncomfortable with doing this work is the idea of responsibility – like scientific responsibility – and it having this normative tone.
Well, I’m coming from a feminist STS tradition, okay, and if you think about the core premises – the core endeavors – of STS and what they were all about: They are about creating an awareness that scientific research is a social process; that it is shaped by and, of course, is also shaping its social, historical, and cultural context.

So even if you go back to Ludwik Fleck’s work on the genesis of scientific fact, what he’s trying to say is that ‘neutral’ scientific inquiry doesn’t exist; that each scientific inquiry is always shaped by its social and historical context. That immediately opens up the question ‘what kind of values are shaping scientific inquiry?’

And with this question immediately comes the question of responsibility; it brings up questions of reflexivity about that interaction between scientific knowledge production and its social and historical context. That’s what I mean when I say that the question of responsibility is, in a sense, built into the field of STS, its core premises and its core endeavors.

So, there is no ‘outside’ of normative questions. If scientific work is shaped by social and historical context, there’s always a question of responsibility immediately attached. And then the question is how do we frame this responsibility? Who do we hold responsible? Do we see it as something that individual scientists have to contend with – I think we should do that to a certain degree – but we should also think about it as a structural question. For example, if we would like researchers in various fields of the natural and technical sciences – but of course also the social sciences and humanities – to be reflexive about their knowledge production and the ways in which it is shaped by and shaping society, then we need to provide them with the tools and the types of knowledge they need to engage reflexively with their own practices.

Then it becomes a structural question in terms of education and science funding – what kind of work can be conducted in academia. But it also becomes a question of career incentives. This is what STS teaches us: We need to understand the whole academic system as a social system whose structure intrinsically shapes what kind of knowledge can be produced or what technologies can be developed.

We’re quite interested in these questions of holding individuals accountable in a scientific context and how this is done more systemically. You mentioned education and scientific funding – I’m interested in how you relate these ideas to policy?
One issue that I really see within RRI discourse – and partially in STS research – is that we often operate with a distinction between research content and the social structures around it. There are these questions: ‘How can we make this specific field of innovation more socially responsible?’ or ‘How could we integrate some kind of engagement with the public or interdisciplinary aspects into this specific type of knowledge production or technology development?’ But, if we want to move toward more reflexive and socially responsible research, we really need to think in terms of larger structures. This is where things begin rubbing against each other: We still think in three-year projects. We think in terms of high-impact papers. We think in all these kinds of currencies that do not sit well with actually changing processes of knowledge production or with introducing different types of values in a sustainable way within academic knowledge production. So, there’s a new demand upon natural science researchers and engineers to engage differently with their colleagues in the social sciences, to amend their procedures for developing products. There is also agreement about the need for doing this among many natural scientists and engineers; many actually want to engage in different processes where they feel like ‘okay, this could actually mean that the thing I’m doing could serve more people in society in better ways.’

So, I wouldn’t only see it as something that comes from the outside. What happens if we don’t change any of the systems around that? If we, for example, evaluate an RRI project the same way as a mono-disciplinary project – if we expect all the same outcomes from it – that doesn’t fit. That creates tensions and there’s a sense that ‘I have to do this on top of all the things I’m already doing.’ This is where university governance, research funders, and other bodies have a lot of power to influence the life-worlds of researchers. They could ask: ‘Okay, if we want this kind of knowledge production, what would be a good instrument to evaluate whether a project has been successful? Whether it has actually been working along the mission it set out to accomplish?’

So, we shouldn’t think about responsibility as an offensive term – like, in the sense of ‘legal responsibility’ – that’s a different matter altogether. The goal of these types of projects is to allow for more perspectives to shape the knowledge production and technological development process, and to create a space for interdisciplinary and transparent reflection about the goals and values that shape this process. In this sense, I think the most important element is to create structural conditions that allow the different partners to actually engage in this process.

And with regard to the policy discourse: I think what’s been the most useful about the adoption of the RRI framework is that it creates these pockets for people who are interested in interdisciplinary work. It allows the building of networks that can move forward and create more structural change. But that will only be possible if the important things like funding incentives or career incentives change with it, or add new categories of evaluation to their current repertoire.

It’s interesting that those two discourses (policy and knowledge production) were understood to develop separately, but now people are starting to put them together – that one shapes the other. Calls for participation and social responsibility are very long-standing, but the period that everyone talks about is the post-90s or 2000s. At the same time, you see this rise of indicators and metrics filtering through into science administration. You mention three-year contracts – that’s something that has become particularly acute at the moment. Is that something that’s always been there? Or do you think the context has changed?
It’s important to recognize the tremendous gap in STS. If you look at the history, between the 70s and early 80s, researchers were engaging more with the conditions of knowledge production within academia. And then it kind of stopped. There’s been little work – you of course have the classics like Latour’s ‘cycle of credit’ – but it wasn’t trendy in STS at all. People were doing ANT work or other things; it was work that engaged with the epistemological dimension of science and technology, but it kind of ‘zoned out’ the organizational and institutional context. It’s only been in the last decade that we really started to develop a critical knowledge base in STS that brings together epistemic practices and the organizational and institutional conditions of scientific work. There’s just this enormous gap. Which is a bit sad, because we’re always talking about change, but it’s really hard to map change if no one’s done the work in a long time. So…
I think this is a very interesting area, this intersection between epistemological questions and the conditions of work. For example, how irresponsible systems are perpetuated, and how we can create systems of responsibility when it feels like the incentives in academic research are pushing systems toward irresponsibility. I wonder if you wanted to say anything more specific about where those contradictions are, and whether those systems need to be deconstructed rather than adding new forms of incentives.
That’s a very difficult question: What is the lever for change? Ultimately, a lot of the possibility for change rests with funders, but also with universities and their practices of evaluation. I’m just going to share an anecdote: We were having an event and a very, very famous biomedical researcher – who is also a leader in initiatives aiming at changing academic value systems and the way people are evaluated – was talking very freely about the contradictions of his own practices. It’s well known that biomedicine is an extremely competitive field and very indicator-based. He was talking about advising his postdocs and PhD students on how to build their careers. He has to give them advice that is built on questions of indicators and metrics, because, as a supervisor, he cannot say ‘oh, just don’t care about these things.’ He might be jeopardizing their careers, right? All this while, at the same time, he’s trying on an institutional level to create a culture that also values other achievements. I think a lot of academics, particularly in their roles as supervisors or group leaders, are caught in this dilemma where they preach one thing but, in their practices, have to comply with the current evaluative systems. This is true not only for yourself – you also don’t know how to advise the younger researchers you’re working with any differently. I think one of the things one can do is to a) make this conflict transparent and b) work against one-sided systems of evaluation in every appointment committee that you’re in and in every one of those instances.

So often we think primarily along the lines of ‘whose research is not being valued.’ But I think it’s as much a problem of ‘whose work is immediately seen as valuable’ because it’s in the right outlets or comes in the right form. But I think there is more discussion now. It’s becoming harder to just say ‘Oh, this person has published in a high-impact journal, they must be good.’ So I think it’s a gradual change.

You know, one of my research foci is research evaluation. One of the major problems is that we have too much of it. There’s so much evaluation going on because of the time-limited character of positions and because of the huge amount of third-party funding that researchers are expected to acquire. There’s just too much evaluation, and it’s hard to do this in a way that doesn’t use proxies as shortcuts to facilitate an understanding of the worth or value of what people are doing. So, in a sense, that’s one of the key issues: We’ve created a system where we have to put so much labor into evaluating work that it has become hard to do it without shortcuts.

That’s an interesting point: There’s too much evaluation. Is it a political question, in terms of how you begin to challenge these things? Because I think sometimes it’s de-politicized. It’s framed as ‘if we find the right evaluation system for research, then we can improve the systems or we can change the values that get upheld.’ Is it actually more of a political fight? Are there powerful bodies that uphold traditional ways of assessment; that are incumbents in the system that you can’t change unless you challenge them?
I don’t think we need a complete change in how we’re assessing work. We need more diversity in how we’re assessing work. So, the problem would be to now ask all researchers to be super interdisciplinary and to do super socially responsible work, right? That’s actually not the answer. There should be a place for mono-disciplinary work, for work that focuses on very discipline-specific problems. That work should be valued too. So, I’m actually very critical of the move that we are also seeing at the moment, that each and every project needs to make a claim of its social relevance. That’s problematic too and, in my opinion, actually decreases socially responsible conduct in research.

If we stick with biomedicine – this is one of the fields that I’m most familiar with – for example, if every project that looks at some basic biological mechanism needs to explain how it will in the end contribute to human health, this often leads to research being translated from model organisms to humans way too quickly, and to epistemic leaps that shouldn’t actually be there. This is why I’m saying we need more diversification, with different ideas of what research can be. We need to have types of evaluation that fit the specific mission that the research is following. So, too much relevance is actually not good either, in a sense.

I agree with you basically: There’s too much evaluation, things are too short-term, too competitive. But isn’t the counter to that ‘academic freedom’?
No, that’s not really what I’m saying. I think it’s very important to distinguish between questions of responsibility and questions of social relevance. I think we really want to keep these two concepts apart. Let’s say we always want researchers to act responsibly, but the question is, to me: ‘towards whom do researchers need to act responsibly in a specific project context?’ This varies greatly with the goals of that specific project context.

For example, some of the work we’ve been doing is on epigenetics and how certain extremely gendered stereotypes or social class stereotypes are built into research on early life adversity in epigenetics. Those are mouse studies – very basic mechanisms are being looked at. But there’s a lot of very irresponsible conduct happening in some of these studies. They take social categories and shift them around between mice and humans, and very selectively quote from the psychological literature that supports their claims, to make easy ‘just-so’ stories that gain momentum because of the social stereotypes they’re working on.

So, here I think the question of responsibility lies in, for example, performing interdisciplinarity responsibly – for example, not just reading three papers from psychology and then building your arguments on them. That’s more about research practices, right? But let’s say I want to redesign a city quarter with a smart-energy system. Then responsibility means that I have to actually engage with the stakeholders that are there. I think researchers should always act responsibly. And funders, for example, should ask the right questions about their projects. Reviewers should ask the right questions about papers, to draw it out when researchers are not acting responsibly. But this might significantly change what ‘acting responsibly’ means.

As for the question of social relevance – whether something needs to be translated into some kind of societal application at all – that’s another question. Here, the question is how we can ask scientists to articulate themselves towards possible applications without using it as a criterion that ‘we’re not funding any work that doesn’t already promise it will create some benefits.’ So, I think that relevance and responsibility are often thrown into one pot and used interchangeably. But they’re quite different: You can act very responsibly in research without necessarily doing work that is societally relevant in the classical sense that it will create some societal impact.

That’s a good distinction. To keep going on that: If we think about this in terms of funders or people with power, are there examples you can think of where funding programs have done a good job creating space for people to be reflexive and think about what they’re doing?
I’m a gender equality expert for the Swiss National Science Foundation and a new scheme called ‘Spirit.’ It funds internationally collaborative projects between Swiss researchers and researchers from low- and middle-income countries. What they’ve done is define a criterion called ‘gender awareness’ as one of the criteria for evaluating a proposal, and I think they’ve done it in an interesting way. The idea is that the researchers need to think through what kind of gendered aspects their work could have. They need to actually show that they’ve done that – they might arrive at the conclusion that their work doesn’t have a gendered impact, but they need to show that they’ve thought it through. There might be projects that get points because they have a topic that is gender-relevant, but they can also get points for having a gender-balanced team or for organizing workshops on gender balance. It doesn’t require that your project has something to do with gender on the level of the research itself, but it specifies a level of reflexivity. This, I think, is a good way of approaching these things.
Has the funding call happened?
Yea, we’ve done it … two or three times at this point.
Do you think it changed the projects?
Yes, definitely.
Did it change which projects were funded?
Yes. It did. But the cool thing is that it is changing the review culture. When you discuss all the proposals, you have two people who are in charge of reviewing all the reviews and then writing their own reports and so on, and they talk about the project and comment on gender awareness. When they’re done, I do my report. And from meeting to meeting, I have to say less and less, because the reviewers have already become sensitive to these questions, and they’re suddenly noticing if a project in their field of competence should actually have thought about these questions more properly, or if it has missed a chance to address a content-specific aspect of gender. So, it has a strong impact on the evaluative culture in specific settings. This is the model for how to think about questions of gender, but one could also do this with the social and ethical implications of research, for example, to say: ‘Okay, really think through what or who might be affected by your work, and in which ways social categories can be built into your research design.’ So you could ask STS-type questions and build awareness of the societal dimension of the work.
We’ve seen that sometimes proposals get highly scored because they’re technically excellent, but basically write ‘we don’t need to do this because we don’t have to,’ or something like that, and score effectively zero on that criterion. But sometimes they still get funded. Do you see those kinds of things playing out? And, if you do, to what extent do you think it’s a concern? Or is it just part of a broader culture change?
One of the essential things that I see – and I’m going to go back to this example of gender here – is to really understand that if questions of sex and gender are part of your research object and you’re not addressing them, then you’re doing worse science. You need to make them an essential epistemic category. So, if I have a research design on, say, kidney disease, and I already know that there’s a gender disparity in kidney disease, but I ignore that, then I’m not doing good research. Same with the urban design example … people put into these positions need to stress again and again and again that these are epistemic categories. They are at the heart of knowledge production. They are not external to it or on top of it.

But in terms of gradual change or radical change, I think it will only be gradual. And I think in a way this is also fair. Because imagine you’ve been trained for decades to do research one way and then suddenly it isn’t valuable anymore. That’s not necessarily how we can, you know, change things. So, from my perspective, if I’m sitting in an evaluative committee like this … if I know there are, let’s say, four projects that get funded, and there are one or two that are clearly also there because they’re better at addressing the social dimension of their work, then that’s already a win. And the question is, how can you really institutionalize these processes and make them not experimental, but something that remains? And how can you also make sure that they travel from one type of review panel to the next and become something that is more normal? Then the gradual change makes a difference over time.

How is the gender dimension received by the evaluators and the scientists?
Overall, it’s quite a positive experience. I’m also doing a workshop on gender awareness before every evaluation panel, for people who are new to the panel, and also to pick up questions that might have arisen in the community and discuss them. When I did the first workshop I was armed with all this data – you know, stuff to convince them that it’s really important to address gender. I was prepared for the devil’s advocates and so on. But no! They were like ‘yea, that is really important.’ Sure, there’s been a pre-selection process in some way, but these were researchers. It’s a very international scheme, with people from around the globe, including the African continent, for example, and so it’s a more varied group of reviewers than you would have in a more standard panel. But I was very surprised. I think that, overall, the process is good: You see how people are asking different questions, and that it becomes more obvious to them if the gender dimension hasn’t been addressed when it should have been.

I have to think here of Donna Haraway’s notion of the ‘narrative field.’ In Primate Visions, she talks about how, with the influx of female and feminist researchers, new stories were created about male and female primates that moved away from very traditional patriarchal stories about passive females and reactive males, towards more varied stories that also centered the females or told different stories about their bodies and their sexuality and their behavior. She has this nice metaphor: through these new stories the ‘narrative field’ gradually shifted, which meant that some stories became less plausible and moved outside of this narrative field, while other stories gradually became more plausible and became part of it. I think this is in a sense what all this work is for: to shift that narrative field and make some evaluations more plausible and others less plausible.