Researchers' experiences of patient & public involvement

Measuring impact of involvement

We asked researchers what they thought about trying to measure the impact of involvement. This covered three main areas: what people knew about the current evidence base for involving people; whether they felt more measurement of impact was necessary; and, if so, how this might be done.

The current evidence base
Generally, researchers agreed that the evidence base for involvement was not very strong and needed improving. Some said they did not really know enough about the evidence base to comment.

Narinder is not aware of any ‘hard evidence’ to support involvement but thinks it’s essential.

Age at interview: 64
Sex: Male
And do you think that there's sufficient evidence about the impact that involvement has on research?

I suspect there isn’t now. The problem is that I'm not aware of any and I think that that’s an important point that, if one's going to make rational and make significant judgements or decisions about involvement, it should be evidence-based and it should be based on what people think about involvement. Now, as I said, that will vary depending on the patients, depending on the type of study and, etc., etc. – so, a number of those factors. So I think that, I think if there was some research on patient involvement and some hard evidence one could fall back on then, it would be easier for people who want to make a decision about involvement to actually, to make a more evidence-based decision. So I think from that point of view research could be important.

It’s understandable people are sceptical about PPI when the evidence is so weak. But Rebecca argues it’s only by doing it and reflecting on it we can build the evidence.

Age at interview: 31
Sex: Female
I think – I can understand why people are sceptical because we live in an evidence-based society and we don’t have much – well not society, evidence-based world – and we don’t have a lot of evidence about it. But that doesn’t mean that you shouldn’t do it because if you don’t do it you can't feed into the evidence. And if you're sceptical and you find you do it and you don’t like it for whatever reasons, then that’s part of the evidence. Because you have to figure out the reasons. You can't just say I don’t like it because if we did that with everything we just wouldn’t ever do it again. But we've got to figure out well why? What works? What isn’t working? And that, it does take time and I know some people will say, "It's not worth the time." But, if you want your work, you know I want my research to be applied. I want it to be of use. And I think if that’s what you want as a researcher then involving stakeholders makes sense; if you involve a clinician why shouldn’t you involve a member of the public or members of the public? I think it's important it shouldn’t just be one person because the dynamics and the power issues that go on in those sorts of meetings seem very straightforward if you're not on the other side.

But when you're the other side and you're the member, you know the one person and you don’t know what everyone's talking about, it's quite hard to say, "I don’t understand that." But if there's a group of you, you know you can either ask and if you all don’t understand then it's easier. But having people involved in your research and, like I say, if you have a clinician because you want their clinical input to shape the research to make input in clinical decision-making, well the end users of that clinical decision making are the patients that are going to be doing it, so why wouldn’t you have them there? And then feed into it, feed into change; like to change the evidence base so we know what's working and what isn’t, so we don’t just say, "There is no evidence so we shouldn’t do it," because if you said that in medicine a hundred, two hundred, well history of medicine's a long time – but you know if no-one said, "Well I would just keep doing it," we don’t challenge that then we wouldn’t, we wouldn’t have a job [laughs]. But we also, nothing would ever change so things are going to change because nothing's ever perfect. But be part of the discussion, don’t just sit there and be a sceptic for a sceptic's sake, scepticism sake. You know I think get involved.

Our sample included some researchers with a particular research interest in patient and public involvement, who had more detailed knowledge of the available evidence or lack of it. Some mentioned recent studies which are attempting to improve the reporting and measurement of impact, though it is still too early to see how this will work out in practice.

There isn’t enough evidence for involvement, and mostly it’s ‘rose-tinted case studies’. We need better reporting of involvement, but also more clarity on what ‘impact’ means.

Age at interview: 32
Sex: Female
I don’t think there is sufficient evidence. I think we're still grappling with what evidence we want to look at. You know, so is it that we think that if there's better PPI it should improve your recruitment rates? I think there was an MHRN [mental health research network] paper recently which said that it did. You know, is it that you have a more insightful conclusion? You know, what does ‘insightful’ mean? So I think we're kind of having to grapple with that. I think, I think as long as that line gets trotted out of, you know, ‘PPI makes research better’ and I think until we nail down what better means for both researchers and for patients and members of the public I think it’s, you know, it's a tough situation. I think there's also there's a, there's kind of a catch twenty-two I guess in that there's a thing that no-one really measures or reports PPI as it goes along, it just kind of happens. So if you're not reporting it how can you ever demonstrate the impact it has? So that’s kind of a methodological issue actually I think. I think that’s something that we need to start and I know there's things like the GRIPP checklist and things like that and to try and encourage people to start reporting PPI, and I think once we do that, you know, at least then you'd be able to say, "Well so, you know, this number of studies did this much PPI, did this type of PPI" and then you can choose an outcome and can compare them on. But at the moment we can't even do that because I don’t think PPI gets reported. 

I think most of the stuff about impact that I see is kind of case studies talking about, "Oh we did this," and personally I think they're often very kind of rose-tinted case studies and they're kind of like, "Well it was great because we did PPI, isn’t that nice of us?" kind of thing. And I think, I think we do need to, you know, assess its impact, assess what happened with PPI, how did it impact the research if it did? I think there should be some way for the PPI members involved to say what they think the impact was, and I think it's tough because I think actually then we’d just start getting examples of where it didn’t have an impact or where it had a negative impact either on the researchers or on the people involved. Personally I don’t think that would be a reason not then to do it. What sold me I think, as I said before, is this thing that I think we should do it. For me it's, I think it's about transparency and accountability in some ways and for me the impact is kind of an unanswered question. Yeah I think it's, I think a lot more needs to be done, I think sort of methodologically and in terms of encouraging or enforcing proper reporting about it, and I think until that happens I'm not sure what we'll start to see. But I know someone I'd spoken to the other day who used the term, you know, "Oh well if we start really evaluating this it'll be de-bunked within a few years." The idea that we’ll show that it's not, it's not making better research. But as I say that kind of leads onto this question of it depends what you mean by better yeah, so it's complicated.

Evaluation is often done poorly and delegated to a junior person. Sabi describes the Public Involvement Impact Assessment Framework (PiiAF) for improving evaluation.

Age at interview: 50
Sex: Female
I think it was an MRC funded study which developed the PiiAF framework – that has been very helpful in establishing some of the intellectual work that I was thinking about. You know, being clear about the values that underpin, that underpin your involvement activities, why you want to do it, what your, what your theory of it, your intervention theory is, you know – “I'm doing this because I want to achieve this and in order to do so I need to choose the following tools, the following methods.” I think, you know, that sort of conceptual clarity is really important and then developing an evaluation plan, but again not from a checklist type of, through a checklist type of approach, but through a very pragmatic and individually focused, project focused standpoint. And it takes time and I think we ought to be unapologetic about the time of thinking, planning, development etc. it takes – you don’t expect to put together a research plan in half a day, so don’t expect to put together a PPI evaluation plan, you know, in an hour; it takes a lot of thinking, talking through and developing to do so. But that has resource implications.

But if it's always devolved to the most junior member of the team it's not helpful because if you don’t provide support for that member of staff, you know, research proposals that are not supervised by senior investigators tend not to get funded because they're not of a terribly high quality, and the same applies to PPI evaluation plans.

Vanessa describes a study which showed data collected by a mental health user researcher was no different to data collected by other researchers.

Age at interview: 42
Sex: Female
So the key in this study is that from the moment we got it we wanted to know whether it made any difference being interviewed by a peer researcher or not. So we did a randomised three-arm bit of the study to start with. We had people who were sent a consent form which said you're going to be interviewed by a researcher, people who were sent a consent form which said you're going to be interviewed by a researcher who's got a mental health problem, and people who were sent a consent form that said they would be interviewed by a researcher – it wasn’t said they were a person with mental health problems, but when they were interviewed the person disclosed they had mental health problems.

And we looked at response rates and we looked at impact on the interview etc. and data and we found nothing. We found it didn’t make a difference and one of the reasons it might not have made a difference is because it was a structured interview and over the phone, the rapport that you build. It could have been the methodology. It could also have been that the people that were – the non-peer researchers – had kind of been trained to a degree that there wasn’t enough difference between them so we couldn’t tell whether there was any difference between this or not. So basically we decided we were just going to employ peer researchers.

Do we need to measure impact?
Views on the need to find ways to measure impact were mixed. One view was that this is vital to convince funders and colleagues it is worth doing, and to make sure we understand how to do it better. Bernadette felt she herself needed better evidence. Sarah, Andy, Jo and Pam all suggested that PPI partners themselves might also want to know they are making a difference. Hayley also pointed out that people will understand not everything they say can be used.

Many researchers will not take involvement seriously unless they can see some convincing evidence for it.

Age at interview: 52
Sex: Male
What is it you think about the people who know about it and aren't doing it – I mean, what do you think their concerns are; why do you think they're not involving people?

A range of things I guess. I mean I find it hard to understand as well but, I think it's probably a combination of there's some pressure to only do things where there's evidence for it and then the evidence, the hard evidence for public, you know for the benefits of public involvement, is slim on the ground, thin on the ground. So that’s one thing. So, “show me the evidence it works well.” It's difficult to do that. “Well, unless you show me the evidence I won't do it.” So, there's some quite sort of dyed-in-the-wool type responses like that.

Hayley describes how young people and researchers assess the impact of involvement. Young people understand not every idea can be used but appreciate it if researchers are honest about this.

Age at interview: 30
Sex: Female
Well currently we, we run the young people's group on a Saturday and the researchers come in and they work with them. And we ask them to – we've put a system in place really where both the researchers and the young people get an opportunity to assess how it went if you like; how did the involvement go? And, one of the things we did with the researchers is after the session and the researchers are at the sessions, we write a summary and we try to do it that the young people have a bit of time to discuss and debate some of the issues and then they try and raise three or four main things, main important things, they want the researcher to take away. And myself and the youth worker will always write that up and send that over to them and say, "This is kind of a summary of what has come from your session." And then about a month after that we follow that up with a reflections questionnaire, just asking the researcher to reflect on what the young people have told them. Have they been able to use their views? How have they been able to use their views? And also we get the young people to do a quality assessment where we ask them what they felt of the involvement activity they ran with the researcher and how that could be changed. So we've got those two kinds of processes of assessing in place. 

And then it's easier for us to track what the young people have said and what the researchers have responded to it. So we've had some instances where it's not been possible for the researchers to take on what the young people say. But I feel like the young people are happy as long as we go back and we say, "This is the feedback and these are the reasons why we can't take up this idea of yours." And I think it's kind of sometimes researchers feel, "Oh I can't do it so I should just like not tell them that I can't do it." But actually I think the young people respect the researchers more when they do come back and say, "We can't do it and these are the reasons." 

Bernadette is unconvinced that involvement makes a difference and wants more data – which she is happy to help generate.

Age at interview: 39
Sex: Female
The only way I’m going to get funding is if my research proposal makes sense to other researchers, to other successful researchers, people who are in, you know, positions of power because they’ve done very successful research. So I feel that that governance, if you want to call it that, is already there, and adding a layer of saying actually patients have to look at that and have an opinion about whether my research makes sense or whether that’s how we should be spending the money – I don’t think that's right. So yes that’s my difficulty with it… I need to see the evidence for it. I’ll become convinced of it if I can see a project that went through the normal peer review process and had a layer of patient involvement and that made the project better.

I think we need to have more information about how useful or otherwise it is before we think about doing it and again, I think this is still a relatively new area and it’s quite popular at the moment. And I think a lot of stuff that’s happening now will turn into data that we can then look back on, but now it’s not very clear. 

In a way right now I’m happy to get involved, I really want to see a lot more data about it. But the only way to get more data is to generate it and I’m happy to be part of that process but I think it’s too early to feel adamant about it one way or another.

But it could also be argued that measuring impact is inappropriate or unnecessary. Clinical and quantitative researchers such as Carl, Sergio and John, who recognised people might assume they wanted randomised controlled trial (RCT) evidence, actually felt involvement was just a matter of common sense and something you would do anyway, regardless of evidence. Sergio commented: “Sometimes we do not really need data to figure out whether something is worth its while or not… I think that there are enough logical arguments there to allow us to sustain that it's a positive thing.” Jim and Ann wondered why we single out patients or members of the public when we don’t evaluate the impact of any other member of the research team.

Carl argues some things don’t need trial evidence. Involving patients in research is just good sense.

Age at interview: 46
Sex: Male
Yeah no that’s interesting like, I mean I'm sure there are people out there saying, "You need clinical trials for, to compare patient involvement versus not," and as an evidence-based medicine person you know, they expect me to say, "Where's the clinical trial evidence?" But that’s not everything – you can't just say, "We're not going to do this until we have evidence of effectiveness." You just can't run a world like that. Some things are pragmatic, make sense and actually are a good idea. Now, if we're going to spend hundreds of millions on it then maybe you do need evidence, but the idea that, actually in designing questions, designing research, I might speak to people about what it means to them as opposed to not – that makes it a clinical, pragmatic, sensible thing to do and, my experience is, it'll improve your research. Now, if you want to wait till clinical trial evidence appears, you're probably going to end up with research that doesn’t make much of a difference in the meantime. But if you want to wait that’s OK. I would suggest some things are a sensible thing to do and at the moment it makes sense.

It would not be difficult to show involvement makes a difference, but it seems unfair to measure the impact of patients and not other members of the research team.

Age at interview: 52
Sex: Male
Yeah there's, there is some evidence emerging and there was a paper published last year which showed that mental health studies that did involve patients recruited better; so recruited more quickly and to target better than those that didn’t – small difference but a difference nonetheless. And there was another study published a few years before in cancer which showed that a participant information sheet that had been improved by patients actually led to the patients who consented to the study understanding what they were consenting to better than one that had been written by the researchers. So, they did a trial within a trial. So, there are two areas where, at a very practical level, public involvement can make a difference and there isn't much evidence because people haven’t made the effort to collect it, but actually there's enough information there because of the systems around what the research ethics committees do and what the Health Research Authority's responsible for.

We could very easily identify projects where there is and isn't involvement and then identify, for those that are recruiting patients, how long it took to recruit the first patient, whether they recruited to target and how long that took, and how long it took to publish the results – some very simple metrics which might show a difference. It wouldn’t be rocket science, it wouldn’t be that hard to do, but it would require a concerted effort, and it needs a driver from somewhere to actually do that. And it's one of these typical double-edged swords – there's a little bit of an issue about whether you do and whether you don’t need evidence for something which is a matter of principle. And I sometimes feel that actually requiring there to be hard evidence that public involvement works seems a bit unfair because nobody would question involving a clinician in a piece of clinical research.

It's blindingly obvious that they’ve got to be there. And actually in the same – if you take the analogy with market research, it's blindingly obvious that you need the consumers of whatever it is you're researching there. So, sometimes I think it's just, you know public involvement's having to do more than it would have to do if it was something else. 

While people were conscious that there was a lot of pressure to produce evidence of impact, they also felt a contrasting pressure not to question the value of PPI, and a sense that any critique or evidence that it did not ‘work’, or even caused harm, would be unwelcome. Felix insisted it was important to be honest about negative impacts as well as positive ones. Ann commented that criticism is rare and that “There are people desperately trying to prove the impact because they want to prove that it has a positive impact.” Alison added: “I think there is a touch of Emperor's new clothes going on, that this is so valuable, important and useful… People don’t really question it now.”

Alison says she still has to constantly remind herself about involvement. It’s high on funders’ agenda but she is not always sure it’s as valuable as everyone says.

Age at interview: 47
Sex: Female
And is there something that – does this compete with other things that you could be doing with your time or compete with other things that maybe benefit your career more?

Yes, but I don’t really think of it in those terms. Because you, it's become essential now so you can't really think of it as getting in the way of doing other stuff because it has got to be integral yeah.

And is it easy for it be integral?

Not really no, no. It's still a, I still have to sort of constantly remind myself, 'Oh yeah I need to do that; I need to think about that.' And yet for all the reasons we're talking about before about the mechanics of identifying people and having appropriate structure and all the rest of it, there's a lot of thought needs to go into that and that sort of flexibility and responsiveness and all the rest of it yeah. And I think sometimes we sort of try to take what appears to be the easy routes so, you know, sometimes you're seeing examples where people sort of say, "Oh yeah we're doing user involvement." Actually it's not user involvement it's involvement of a professional who works with users, you know someone speaking on their behalf which is quite a different thing. Or it's a one way communication. You know you tell them what's going on but actually there's no scope for anything to come back. So all those things take effort to make sure you're not caught into those traps.

…And so all the PPI that you have done so far, is it something you feel confident about embedding in your research and thinking about those sort of soft skills and the people skills?

Yeah I guess so. I feel fairly confident in being able to carry on doing the kind of things we've done up to now. But not necessarily confident that we're doing the most effective or productive thing that we could be doing I guess. And I think some of that questioning is not, it's simply because there is no ideal model, there is no perfect answer and actually the whole thing is a compromise and fundamentally limited. And that I think at the moment because there's such, it's so high on the agenda, funders and all sorts of people I think there is a touch of Emperor's new clothes going on, that this is so valuable, important and useful, [inaudible]. Well that’s one good thing to do but it's not, in reality is not actually that valuable, important and useful – that is quite heavy as well. But I think it's definitely a sort of tacit consensus that it's all absolutely great and people don’t really question it now.

We need better evidence, but researchers don’t feel able to voice any negative views or bad experiences of involvement.

Age at interview: 32
Sex: Female
I suppose I would say I understand why there is scepticism because I don’t believe there is an evidence base for it. However, I don’t think there has been genuine effort to develop an evidence base for it and I would also point out that I think, you know, we do a lot of things as part of academic culture that there isn’t an evidence base for – you know there's not an evidence base for going to journal clubs but we do it because it's part of the research, so in that sense I think well why can't PPI just be part of research as well. But I suppose I would also say I think sometimes people who've had negative experiences where it was too demanding and they felt really put upon or if they’ve had a negative experience with a PPI partner who’s, you know, shouted them down or made them feel bad or something like that and I think they feel like I'm not allowed to say that because the only stuff that gets out there is these rose tinted case studies about how lovely PPI is and how happy it makes everyone. 

And I suppose I'd say to them that I think the reason for us doing work in PPI is that we recognise that it has pros and cons and that it is very complicated and that is something that we want to start to capture and we want to start to evaluate and think about and so I wouldn’t use those experiences as kind of, 'Oh I'm done with PPI because I don’t buy into this isn’t-it-all-lovely framework.' I'd say, "No you should stick with it but think about well how can that get evaluated and captured; how does this feed into lessons about the kind of support that actually we need to deliver genuine PPI?" Because I think it's kind of stuck at the moment between people who just think it's rubbish and people who outwardly at least think it's the best thing since sliced bread and actually I think the reality is in the middle; I think actually that’s where most PPI research is, is somewhere in the middle and I think that’s where the progress is going to be made actually. Kind of the how you capture that complexity and start pulling it apart and I think there's potential there to do really interesting stuff yeah [laughs].

There was also a fear that some people would remain sceptical no matter what evidence was produced, unless they personally experienced the impact of involvement. As one person commented, “I work a lot with surgeons and surgeons are very firm in their opinion as to what is right... I'm not convinced that any evidence presented to them would make a huge difference. It would take a research project that they'd tried to do and failed to do to be then inputted by PPI to succeed to make a difference to them, I think, and that is a big challenge… I can be as enthusiastic as anything, but I'm not going to convince a true sceptic.” Others echoed the idea that a positive and enriching experience of involvement at first hand was an effective route to help people see its value.

How to measure impact
Regardless of whether researchers felt there was a need for better evidence of impact, there was recognition that actually getting such evidence is not easy. The lack of agreement about what we mean by either ‘involvement’ or ‘impact’ remains a problem for trying to come up with suitable measures. Felix and Andy suggested it was important to clarify what impacts you were expecting at the start of the project, and Jo suggested involving PPI advisers themselves in defining what impacts might be reasonable. Chris recommended keeping track of possible impacts during the study rather than trying to do it retrospectively. As Sarah A commented, ‘no one really measures or reports PPI as it goes along, it just kind of happens… So if you’re not reporting it how can you ever demonstrate the impact it has?’

Some PPI researchers are working on clearer standards for reporting involvement. But Sabi suggests randomised trials are impractical and we will never get clear quantitative evidence of impact.

Age at interview: 50
Sex: Female
Because there isn’t, there isn’t a framework for thinking about PPI, there isn’t a framework for reporting on it, and because it's not been taken seriously enough, there is a lack of reporting guidelines for PPI which means that there is stuff out there reported where you don’t know whether that’s all they did or whether that was the amount of lines or words they were given to report on PPI, which makes it really difficult. So with colleagues I've done a review of the involvement of minority ethnic groups in applied health research and that was in the UK, North America – Canada and the US – and Australia. And we can only go on stuff that’s been reported and, you know, some of the, some of the studies have got very detailed and very clear descriptions of how people were involved, how local communities were involved, whereas for others there might just have been a few lines and you don’t know whether behind those few lines there's a huge sort of array of practices or whether it was just tokenistic.

So that’s the problem of a wider assessment of the impact of PPI in health research. So colleagues are looking at reporting and standardising reporting so that it becomes more, more manageable because what we ended up doing is, is contacting authors and asking authors to supply more information. But you know, that’s a very laborious, unwieldy way; also not terribly robust because, you know, you can't always contact the authors. So reporting is one issue. The lack of clear, some intellectual clarity about what we mean by impact is also an issue. You know, is impact that your recruitment was, was good – but compared to what? So you can't set up an RCT to test the PPI intervention – it doesn’t make any sense. But therefore, because you can't sort of play the game of the hierarchy of evidence, you can't ever produce evidence in PPI that is robust and strong with regard to its impact – even on something relatively simple like recruitment rates. So the lack of measurement is a real issue. So you're always relying on what, in the hierarchy of evidence, is relatively low down the line: qualitative evaluations of impact. So that makes it difficult.

Some new evidence is coming out but Hayley feels the focus is still too much on whether people are involved rather than how they are involved and what difference it makes.

Age at interview: 30
Sex: Female
We're having someone come in to do a little bit of research on the young people's group - hopefully in a couple of months' time that’s going to start. I've done quite a lot of reading of the evidence base. I think I've noticed more recently that there are more projects being funded to look at the evidence base and realist evaluations. I know there's been two realist evaluations which have currently collected data and are preparing academic outputs – journal articles and stuff – and I'm really interested to see those.

Because when I first came into post in 2011 the idea was there's a lot of anecdotal evidence, there's a lot of anecdotal sort of stories, but there's not a lot of hard-cutting evidence that this makes any difference. We've always here had the idea that we'd like to go into research in the area a bit more and we'll be looking into putting research bids together with other partners to look at how involvement is happening.

…And I think we should be looking at saying, "OK, we should be going to steering groups and we should be focusing on things like maybe conversational analysis of, when the decisions are made, who's saying what" – and it's the point at which the decision is made where I think we're not focusing. We're focusing on all the other stuff, which is practically getting people there, and it's great to have people on a steering group and to have lay members stay on a steering group, or have young people on a research advisory group. But what if that happens but then the decision is made without considering that – and we've already talked a little bit about how sometimes you cannot take on what the public say, sometimes you have limitations because of what funders will fund or because you have to do things a certain way. Nobody's really looking at: OK, this is where the decision is being made though, and there's three different types of knowledge here, and how is this knowledge being combined?

If those researchers are just going, "Oh yeah that was lovely; we did a lovely event and we had the young people there or, we had two people on the steering group, but actually what they said is brilliant but we're just going to carry on." Then are we, is that good public involvement? And I think that’s where the evidence base is missing. And there's lots on shared decision making between consultants and practitioners and the public in individual decision making. And that for me would be a better area to explore. So OK well some people have gone in and they’ve researched consultants or people within medicine talking to patients and them coming to a decision together. OK well how do we research involvement thinking about decision making and the conversations which are being made around research? So I think it's got better, but I think we've still got some way to go to focus down on making sure that we're evidencing the right things. It'll be interesting.

Chris wants more evaluation of which methods of involvement are best. Keeping a record as you go will help.

Age at interview: 48
Sex: Male
I think we need more evidence of what has, what leads to a greater impact. What methods of going about involvement create more impact for the people, for the research and for the researchers? And I think it's understanding the relationships between the methods and the impact that’s the important thing. Certainly if you go about it in the wrong way then you could at worst have, you know, harmful impacts on people. So I think those relationships are very important to understand. Though I mean there are, you know, emerging ways to evaluate the impacts from the way people have been involved and we're very interested in that and using those, and building that sort of evaluation into any sort of major grants that we do from the beginning.

Because otherwise we get to the end and we think, 'God, that was… I wonder how everyone felt about that,' and we send out this retrospective questionnaire and by then we can't remember where we've come from, even ourselves, let alone asking other people to reflect in that way. But we had a really – at the end of a project that was looking at sort of patient-reported outcomes for children with neuro-disability, four parents had been heavily involved throughout that project. So we sent the questionnaire to the four parents and also to our co-investigating professional researchers, and it was really interesting because the family, the parents, four parents really sort of had enjoyed the experience but felt like they hadn’t had much impact on the research itself, which wasn’t our perception; and the perception of our co-researchers was they were amazed at how the parents had come along and given that time and had that input and been really sort of impressive and active in the meetings with, you know, with us all there together. So, you know, I think it's easy, I think it's really important to investigate those things and to highlight them and to give people feedback, you know, about making people understand where they’ve had an impact on stuff. And in order to do that you’ve really got to be spending the additional resource, monitoring it while you're doing it so that you don’t miss out on seeing where you've come from.

Andy argues that unless you are clear what you expect from PPI, you won’t do it well or be able to identify impacts, so it will appear to have failed.

Age at interview: 49
Sex: Male
So I should make it clear that I was a member of a research team that got a grant from the MRC to look at precisely this particular question. And we developed this Public Involvement Impact Assessment Framework, or PiiAF, so I’m going to draw on that. But I think in a nutshell the key thing, as with any research, is what is your question – you’ve got to clearly define research questions. Because if the question isn’t very defined then you’re not likely to be able to find any evidence to answer your question. It’s a bit like the answer to the meaning of life and everything being 42. Well what does that mean? Was the question really clear in the first place? So I think one of the problems with it is, if you’re not really clear about what your PPI is meant to do, then how are you going to know whether it did it or not?

So I think one of the things that you need to do is to sit down as a team and say what do we want our PPI to achieve? Is it to improve recruitment to trials? Is it to empower people? Is it what…? And it may be more than one and that’s fine. Then you have to think, if that’s our aim, what is the mechanism that we’re going to build into the project to deliver that. And clearly if you’re saying that one of your outcomes of PPI is that patients and members of the public can be involved and feel that they’re on an equal footing with the researchers and contributing and so on, and then your Patient and Public Involvement mechanism is an annual, once a year or once every six months, meeting where you invite patients and members of the public to comment on what you are doing, that mechanism isn’t going to deliver that outcome.

So once you’ve decided that, you’ve got to make sure that your mechanism is at least reasonably likely to deliver this thing. And once you’ve come up with a clear mechanism, then you can say well what evidence would I need to collect – not to prove it necessarily, because I think it’s very difficult to prove some of these, but at least evidence that would, you know, give us some idea to support or refute whether we were able to achieve this or not. And then that again depends – whether you use quantitative or qualitative research depends on exactly what you want to do. So if you want to know if it improved recruitment to your trial then you’re probably going to use some quantitative methods; if it’s about did people feel really involved and able to participate on an equal footing with researchers then you’re probably going to do some observational stuff and some interviews and qualitative research. So then you build in your methods to suit the evidence, and the evidence should be driven by the question – your question tells you what kind of evidence you need to collect.

And I think one of the problems with it is a kind of self-fulfilling prophecy with Patient and Public Involvement: you don’t put much money into it, you aren’t clear about what you want to achieve, you don’t put sufficient mechanisms in, and then the impact is very small. And then when you can’t see much impact you feel that the next time you do your research, as a result of that experience, you’re not going to put much resource into it, you’re not going to spend a lot of time planning it, you’re not going to be clear, and it becomes a vicious cycle. So actually what we’ve got is badly thought out and badly planned PPI that’s not delivering much impact, and the fact that it doesn’t deliver much impact reinforces the fact that we don’t spend much time and resources on it and so on. And I think that’s one of the things we need to break out of.

Felix suggests that the most important impacts are on people and relationships. Making changes to a specific piece of research is secondary.

Age at interview: 36
Sex: Male
Another great insight from the project, probably for me an eye-opener, was that most of the impacts are on people rather than on the actual research. So I think that’s something that’s probably mis-, not misunderstood, but not known about or taken account of, you know; this public involvement is about the people, so that’s all the people that, you know, all the stakeholders within a research team and the people that you enrol to collaborate with.

So, you know, and that’s the main thing and that’s also where most of the positive and the negative impacts happen. And then the, almost a secondary part is about the actual impact on the actual research, and this is based on our review of the literature. So I think, you know, I would tell anyone who engages in it: it's going to challenge you as a researcher and it's going to challenge the members of the public because everyone has different values, expectations and impacts that they're interested in. But it's primarily about that interaction and this is where you're creating impacts, not the actual research. So if you take that down into numbers, you know, there were sixty different impacts on the various phases of the research and a hundred and twenty impacts reported on the actual people involved. So that’s twice, you know, it's twice as much – not more important, but you'll create more impact on the people than on the research.

There was disagreement about whether it was better to stick to measuring processes of involvement (such as the number and diversity of people involved) rather than trying to show a difference to the outcome or progress of the research itself. Pam suggested formative evaluation to help improve current PPI practice might be more important than trying to prove whether it ‘works’ or not. Further difficulties were deciding the time point at which to measure, and tracing impact through what might be many years of an evolving research idea. Sometimes involvement may contribute to a change in culture that assists and enables other changes which are never directly attributed – or attributable – to that initial involvement.

Involvement might not show an immediate difference but its effect could resurface later. Deciding when to measure is a problem.

Age at interview: 31
Sex: Female
I've been in around meetings where people have talked about measuring impact of things like the PPI bursaries. So one kind of very quantitative way of doing it would be how many get funded, you know – if you’ve had a bursary, how many projects then get funded. And I sat there and I thought, 'Well I've been involved with projects where you had a PPI bursary that led into a project; that one didn’t get funded but then it got funded a different way.' So the main funding I maybe applied for didn’t work, but the development work didn’t stop because the funding didn’t happen.

And it either fed in in terms of thinking about analysis in projects, or it fed in in another project grant and that got funded. So it's not always quick, which, you know most probably good things aren't, but for an impact statement for a two-year project or a twelve-month project that can be quite challenging because you might say, "Well it's had this, these very specific impacts, some of them I'm not going to be able to tease out." But then it might have a longer term impact that we don’t want to miss in our measurement, because it might be in five – it might be that this involvement started you doing something else that has led to something else and it becomes even more rewarding because you're kind of building on each thing.

…And the thing is actually for being realistic about timeframes for measurement, that’s another challenge actually… The timeframes of things is a really big problem, I think, a challenge. But if you're measuring impact and you say, "Well when do you measure impact?" because once a project's finished maybe the researchers have, the PIs are probably still involved, maybe taking that research on, but maybe, if you're lucky you know, and you're a junior researcher you might still be involved or you might yourself been able to develop other projects. But some of that, sometimes some projects, they go so far and the PI might take it on, and who's the one being asked about the impact, you know? The PI should know. But if it's a bursary that someone put in they might not know all the impacts, but all the impacts, like I say, might just be very difficult to measure. 

John says it’s important to have a good PPI process but ‘you could waste a huge amount of money on proving any effect’. Like research, PPI is a long term and complex process.

Age at interview: 59
Sex: Male
So I think that it’s a process, I think you just need to demonstrate that the process is in place. I think you could waste a huge amount of money on proving any effect. And I think if you want to spend money in this area, I think it’s much more important to spend it on the public perception of research and the importance of research. 

I would not spend valuable research dollars on Mickey Mouse ways of trying to demonstrate that a particular process ended up with a particular product. Because I think that the problem about research is – well another analogy here is something I hate which is ‘hole in one research’, like golf. You never expect to get a hole in one and yet so much of research is almost predicated on ‘hole in one research’. Most research, if successful, is a drive and a pitch and a putt in three. And if you get in in three then I think you’re doing great. But of course your drive, you know, it’s a long shot and you may say that’s really resolved nothing. I guess what I’m trying to say is it’s a body of research. And therefore I think that you should look at the process, make sure the process is right and I think the results will take care of themselves.

Tina supports the need for better evidence of impact, but suggests replacing the word ‘measure’ with ‘demonstrate’.

Age at interview: 56
Sex: Female
So what is it that it adds – and I think the methodological side of it is really key to it – it's not well articulated in the journal articles if you look through, it's not. And even the ones that demonstrate some impact or articulate some impact – it's usually the impact on the participatory researchers, not the methodological impact. I think we have to get much better at doing that. And the impact on the organisations that we work in, and the issue with that is that it's longitudinal often; it doesn’t happen – the research ends and it hasn’t happened there, it kind of trickles on. And we haven’t got good mechanisms for capturing it.

So I think that impact section on the grant funding form – we've got to get better at articulating rationally expected impacts as well. What we might rationally expect from this. And we are getting better as we all become more experienced at doing it. We're getting better in knowing what we might rationally expect, but of course the unexpected is always the most important bit – well usually, isn’t it? The unexpected thing that happens or you find out, that’s the thing that’s like the bolt of lightning. And you're not going to be able to say that’s going to be the impact of it. When we did the Inclusion research we wouldn’t say, "Well the impact is we want to change communication procedures." We would never have thought of that in a million years but actually that has to be the impact because that was the outcome. It's hard, it's not like a randomised controlled trial where it works or it doesn’t.

Measuring is a way of demonstrating an impact. Measuring gets used as that universal way, doesn’t it? You’ve got to measure your impact. Actually it's one way of demonstrating impact and we have lots of others. So I'd like to eradicate the term ‘measure’ as the universal and have it under a word like ‘demonstrate’ which I think would be much more useful.

There was little support for the idea that conducting RCTs to establish the effectiveness of PPI was realistic or desirable, although a few people felt it was possible, and there are a growing number of examples. Pam argued that ‘PPI is not a thing… it’s a set of complex relationships’ and that developing a defined and measurable intervention was always going to be challenging. Both Jim and Suzanne suggested it has to be thought of as a ‘complex intervention’ with lots of interconnecting components.

You could compare two studies, one with patient involvement and one without, and measure differences in outcome.

Age at interview: 64
Sex: Male
Well I think that if one was trying to get at the impact of involvement then presumably there would be interview and questionnaire-based methodologies that would be, would be used. I think that, I mean I think that obviously if patients were involved in one study but they weren't involved in another study, then you could look at the effect, the effectiveness of the outcome. So, if for example, one was designing a rehabilitation programme and in one study the patients were involved and in another study patients weren't, and then you looked at the outcomes or you looked at the effectiveness, then you could look and see whether, you know, the outcomes were better in one study rather than the other. But I suppose evidence in relation to patient involvement may largely be based on what patients, how they felt the benefit from that. But, as I said, you could in theory have two programmes of intervention – one that had patient involvement, one that didn’t – and then look at various outcome measures in the two.

Alice’s instinct is always to want to measure but feels ‘you can’t really do a parallel project with no PPI’. She is unsure we should even try.

Age at interview: 26
Sex: Female
From what I understand there are two sides to the argument of whether or not we should be measuring impact of PPI. One person, well not one person, but one side is yes we should, it's research, don’t be stupid, of course we'll be measuring it. And the other side is, well we should be doing it ethically and, you know, it makes sense why it's better – does it matter if we can measure it; can it even be measured? So I can kind of see both sides of the argument – I don’t feel strongly on either point other than I agree that we should be doing it. Having seen real life examples in my work I can see where we should be doing it, but I don’t think we should be forcing it down into everything, every aspect of every bit of health research, because I think some things aren’t suited to it necessarily - I may be wrong. I think that it's just a very hard thing to measure the impact of, not only because you can't really do a parallel project with no PPI and see how many conferences you get invited to, you know.

…Well I do know of some ways that people have tried to measure impacts in PPI which I think are good. But I think if somebody had developed a really super-duper, really great way of measuring PPI we'd be using it already.

But maybe they're just getting known, I don’t know, but you know some ways of measuring PPI that are good but limited. If we can't, should we even be trying to measure it? Well you see I find it really hard to say no we shouldn’t try and measure it because my instinct is to measure, measure. But I think we need to think back to the reasons why we're doing it which is to try and make more effective research that’s more applicable for better health outcomes and if asking people what they think is going to be achieving that and we know the theory of how it should be working. I say the theory but I don’t mean, you know, we know how it should work – maybe that in itself is enough to know that you're doing that. I suppose there's no – even if you did try to measure it, measure the impact, there's still no way of knowing whether or not what people are saying is genuine, so you could say, you know, “did people have a real impact on this project?” “Oh yes of course they do, they decided this, this and this.” 

Pam is sceptical about impact measures and how to disentangle cause and effect. Where PPI advisers agree with researchers it may look as if they made no difference.

Age at interview: 54
Sex: Female
So that on the one hand you’ve got the idea of science, science and rationality and building this evidence base and on the other hand you’ve got a very messy complex world of policy and practice and how some of these decisions get made. 

So I think you won’t stop researchers calling for ‘more research is needed’ – that’s what we do, we want to keep ourselves in a job. Sorry if that sounds very sceptical. But I think, I think that partly relates to how I feel about wearing different hats. We’re all patients and we’re all citizens as well. So I think people will have a variety of modes of knowledge and evidence for different purposes and uses. It can be handy when you’re in a biomedical environment to be able to point people and say this is published evidence and, you know, for some that’s, that’s persuasive, yeah, so I think it’s interesting.

And what do you think about impact or accessing capture and measuring values and impact?

I think people would probably like to know the impact of their involvement. I think the difficulty of disentangling cause and effect – do you know that it's PPI that’s made a difference – is methodologically challenging. And if you go back to a democratic or an emancipatory rationale for why you’re doing PPI, then you’re doing it because you should, because people have rights and entitlements to influence over what’s done in their name or with public money and so on.

So I think because – I’m primarily a qualitative researcher, so I just have some scepticism about impact measures, but I don’t think there’s anything wrong with trying to see the difference that it made. But if you were to involve people and they didn’t make a difference, that might be because actually they’re in agreement with the researchers, and I wouldn’t see that as wasteful or ineffective or inefficient.

Like Pam, most people thought qualitative approaches to describing impact were likely to be more practical, but these could take a lot of time, reflection and resources. Retrospective analysis of funding and ethics applications or published studies was suggested, to compare levels of PPI with various outcomes (such as funding success, speed of approval or recruitment rates). But again this relies on clear agreement about what ‘PPI’ means and good reporting of PPI activity.

Pam also drew attention to the fact that the impact debate tends to assume that, to have value, involvement must change things, whereas sometimes patients may agree with researchers or validate the research design, and this is also useful. John made a similar point: ‘I think that endorsement is a good thing to have… You say “I got this idea for some research” and the patient says, “That’s a great idea”.’

Copyright © 2024 University of Oxford. All rights reserved.