Researchers' experiences of patient & public involvement
Measuring impact of involvement
We asked researchers what they thought about trying to measure the impact of involvement. This covered three main areas: what people knew about the current evidence base for involving people; whether they felt more measurement of impact was necessary; and, if so, how this might be done.
The current evidence base
Generally researchers agreed the evidence base for involvement was not very strong and needed improving. Some said they did not really know enough about the evidence base to comment.
Narinder is not aware of any ‘hard evidence’ to support involvement but thinks it’s essential.
I suspect there isn’t now. The problem is that I'm not aware of any and I think that that’s an important point that, if one's going to make rational and make significant judgements or decisions about involvement, it should be evidence-based and it should be based on what people think about involvement. Now, as I said, that will vary depending on the patients, depending on the type of study and, etc., etc. – so, a number of those factors. So I think that, I think if there was some research on patient involvement and some hard evidence one could fall back on then, it would be easier for people who want to make a decision about involvement to actually, to make a more evidence-based decision. So I think from that point of view research could be important.
There isn’t enough evidence for involvement, and mostly its ‘rose-tinted case studies’. We need better reporting of involvement, but also more clarity on what ‘impact’ means.
I think most of the stuff about impact that I see is kind of case studies talking about, "Oh we did this," and personally I think they're often very kind of rose-tinted case studies and they're kind of like, "Well it was great because we did PPI, isn’t that nice of us?" kind of thing. And I think, I think we do need to, you know, assess its impact, assess what happened with PPI, how did it impact the research if it did? I think there should be some way for the PPI members involved to say what they think the impact was, and I think it's tough because I think actually then we’d just start getting examples of where it didn’t have an impact or where it had a negative impact either on the researchers or on the people involved. Personally I don’t think that would be a reason not then to do it. What sold me I think, as I said before, is this thing that I think we should do it. For me it's, I think it's about transparency and accountability in some ways and for me the impact is kind of an unanswered question. Yeah I think it's, I think a lot more needs to be done, I think sort of methodologically and in terms of encouraging or enforcing proper reporting about it, and I think until that happens I'm not sure what we'll start to see. But I know someone I'd spoken to the other day who used the term, you know, "Oh well if we start really evaluating this it'll be de-bunked within a few years." The idea that we’ll show that it's not, it's not making better research. But as I say that kind of leads onto this question of it depends what you mean by better yeah, so it's complicated.
Evaluation is often done poorly and delegated to a junior person. Sabi describes the Public Involvement Impact Assessment Framework (PiiAF) for improving evaluation.
But if it's always devolved to the most junior member of the team it's not helpful because if you don’t provide support for that member of staff, you know, research proposals that are not supervised by senior investigators tend not to get funded because they're not of a terribly high quality, and the same applies to PPI evaluation plans.
Vanessa describes a study which showed data collected by a mental health user researcher was no different to data collected by other researchers.
And we looked at response rates and we looked at impact on the interview etc. and data and we found nothing. We found it didn’t make a difference and one of the reasons it might not have made a difference is because it was a structured interview and, over the phone, the rapport that you build. It could have been the methodology. It could also have been that the people that were – the non-peer researchers – had kind of been trained to a degree that there wasn’t enough difference between them so we couldn’t tell whether there was any difference between this or not. So basically we decided we were just going to employ peer researchers.
Views on the need to find ways to measure impact were mixed. One view was that this is vital to convince funders and colleagues it is worth doing, and to make sure we understand how to do it better. Bernadette felt she herself needed better evidence. Sarah, Andy, Jo and Pam all suggested that PPI partners themselves might also want to know they are making a difference. Hayley also pointed out that people will understand not everything they say can be used.
Many researchers will not take involvement seriously unless they can see some convincing evidence for it.
A range of things I guess. I mean I find it hard to understand as well but, I think it's probably a combination of there's some pressure to only do things where there's evidence for it and then the evidence, the hard evidence for public, you know for the benefits of public involvement, is slim on the ground, thin on the ground. So that’s one thing. So, “show me the evidence it works well.” It's difficult to do that. “Well, unless you show me the evidence I won't do it.” So, there's some quite sort of dyed-in-the-wool type responses like that.
Hayley describes how young people and researchers assess the impact of involvement. Young people understand not every idea can be used but appreciate it if researchers are honest about this.
And then it's easier for us to track what the young people have said and what the researchers have responded to it. So we've had some instances where it's not been possible for the researchers to take on what the young people say. But I feel like the young people are happy as long as we go back and we say, "This is the feedback and these are the reasons why we can't take up this idea of yours." And I think it's kind of sometimes researchers feel, "Oh I can't do it so I should just like not tell them that I can't do it." But actually I think the young people respect the researchers more when they do come back and say, "We can't do it and these are the reasons."
Carl argues some things don’t need trial evidence. Involving patients in research is just good sense.
It would not be difficult to show involvement makes a difference, but it seems unfair to measure the impact of patients and not other members of the research team.
We could very easily identify projects where there is and isn't involvement and then identify how long it took for those that are recruiting patients, that is to recruit the first patient, to recruit to target, if they recruited to target, and how long it took to publish the results which is some very simple metrics which might show a difference. So there's. It wouldn’t be. It's not, it wouldn’t be rocket science, it wouldn’t be that hard to do but would require a concerted effort, but it needs a driver from somewhere to actually do that. And there's a. It's a sort of, you know it's one of these typical double edged swords – there's a little bit of an issue about whether you do and whether you don’t need evidence for something which is a matter of principle. And I sometimes feel that actually requiring there to be hard evidence that public involvement works seems a bit unfair because nobody would question involving a clinician in a piece of clinical research.
It's blindingly obvious that they’ve got to be there. And actually in the same – if you take the analogy with market research, it's blindingly obvious that you need the consumers of whatever it is you're researching there. So, sometimes I think it's just, you know public involvement's having to do more than it would have to do if it was something else.
Alison says she still has to constantly remind herself about involvement. It’s high on funders’ agenda but she is not always sure it’s as valuable as everyone says.
Yes, but I don’t really think of it in those terms. Because you, it's become essential now so you can't really think of it as getting in the way of doing other stuff because it has got to be integral yeah.
And is it easy for it to be integral?
Not really no, no. It's still a, I still have to sort of constantly remind myself, 'Oh yeah I need to do that; I need to think about that.' And yet for all the reasons we're talking about before about the mechanics of identifying people and having appropriate structure and all the rest of it, there's a lot of thought needs to go into that and that sort of flexibility and responsiveness and all the rest of it yeah. And I think sometimes we sort of try to take what appears to be the easy routes so, you know, sometimes you're seeing examples where people sort of say, "Oh yeah we're doing user involvement." Actually it's not user involvement it's involvement of a professional who works with users, you know someone speaking on their behalf which is quite a different thing. Or it's a one way communication. You know you tell them what's going on but actually there's no scope for anything to come back. So all those things take effort to make sure you're not caught into those traps.
…And so all the PPI that you have done so far, is it something you feel confident about embedding in your research and thinking about those sort of soft skills and the people skills?
Yeah I guess so. I feel fairly confident in being able to carry on doing the kind of things we've done up to now. But not necessarily confident that we're doing the most effective or productive thing that we could be doing I guess. And I think some of that questioning is not, it's simply because there is no ideal model, there is no perfect answer and actually the whole thing is a compromise and fundamentally limited. And that I think at the moment because there's such, it's so high on the agenda, funders and all sorts of people I think there is a touch of Emperor's new clothes going on, that this is so valuable, important and useful, [inaudible]. Well that’s one good thing to do but it's not, in reality is not actually that valuable, important and useful – that is quite heavy as well. But I think it's definitely a sort of tacit consensus that it's all absolutely great and people don’t really question it now.
We need better evidence, but researchers don’t feel able to voice any negative views or bad experiences of involvement.
And I suppose I'd say to them that I think the reason for us doing work in PPI is that we recognise that it has pros and cons and that it is very complicated and that is something that we want to start to capture and we want to start to evaluate and think about, and so I wouldn’t use those experiences as kind of, 'Oh I'm done with PPI because I don’t buy into this isn’t-it-all-lovely framework.' I'd say, "No you should stick with it but think about well how can that get evaluated and captured; how does this feed into lessons about the kind of support that actually we need to deliver genuine PPI?" Because I think it's kind of stuck at the moment between people who just think it's rubbish and people who outwardly at least think it's the best thing since sliced bread and actually I think the reality is in the middle; I think actually that’s where most PPI research is, is somewhere in the middle and I think that’s where the progress is going to be made actually. Kind of the how you capture that complexity and start pulling it apart and I think there's potential there to do really interesting stuff yeah [laughs].
How to measure impact
Regardless of whether researchers felt there was a need for better evidence of impact, there was recognition that actually getting such evidence is not easy. The lack of agreement about what we mean by either ‘involvement’ or ‘impact’ remains a problem for trying to come up with suitable measures. Felix and Andy suggested it was important to clarify what impacts you were expecting at the start of the project, and Jo suggested involving PPI advisers themselves in defining what impacts might be reasonable. Chris recommended keeping track of possible impacts during the study rather than trying to do it retrospectively. As Sarah A commented, ‘no one really measures or reports PPI as it goes along, it just kind of happens… So if you’re not reporting it how can you ever demonstrate the impact it has?’
Some PPI researchers are working on clearer standards for reporting involvement. But Sabi suggests randomised trials are impractical and we will never get clear quantitative evidence of impact.
So that’s the problem of a wider assessment of the impact of PPI in health research. So colleagues are looking at reporting and standardising reporting so that it becomes more, more manageable because what we ended up doing is, is contacting authors and asking authors to supply more information. But you know, that’s a very laborious, unwieldy way; also not terribly robust because, you know, you can't always contact the authors. So reporting is one issue. The lack of clear, some intellectual clarity about what we mean by impact is also an issue. You know, is impact that your recruitment was, was good, but compared to what? So you can't set up an RCT to test the PPI intervention – it doesn’t make any sense. But you know, but therefore because you can't sort of play the game of the hierarchy of evidence you can't ever produce evidence in PPI that is robust and strong with regard to its impact – even on something relatively simple like recruitment rates. So that’s a real measurement, the lack of measurement is a real issue. So you're always relying on what, in the hierarchy of evidence, is relatively low down the line: qualitative evaluations of impact. So that makes it difficult.
Some new evidence is coming out but Hayley feels the focus is still too much on whether people are involved rather than how they are involved and what difference it makes.
Because when I first came into post in 2011 it was; the idea was there's a lot of anecdotal evidence, there's a lot of anecdotal sort of stories, but there's not a lot of hard cutting evidence that this makes any difference. We've always here had the idea that we'd like to go into research in the area a bit more and what we'll be looking into, putting research bids together with other partners to look at how involvement is happening.
…And I think we should be looking at saying, "OK we should be going to steering groups and we should be focusing on things like maybe conversational analysis of when the decisions are made who's saying what and it's the point at where the decision is made where I think we're not focusing. We're focusing on all the other stuff which is practically getting people there but it's great to have people on a steering group and to have lay members stay on a steering group. Or have young people on a research advisory group. But if that happens but then the decision is made without considering that and we've already talked a little bit about sometimes you cannot take on what the public, sometimes you have limitations because what funders will fund or because you have to do things a certain way. Nobody's really looking at, OK this is where the decision is being made though and there's three different types of knowledge here and how is this knowledge being combined.
If those researchers are just going, "Oh yeah that was lovely; we did a lovely event and we had the young people there or, we had two people on the steering group, but actually what they said is brilliant but we're just going to carry on." Then are we, is that good public involvement? And I think that’s where the evidence base is missing. And there's lots on shared decision making between consultants and practitioners and the public in individual decision making. And that for me would be a better area to explore. So OK well some people have gone in and they’ve researched consultants or people within medicine talking to patients and them coming to a decision together. OK well how do we research involvement thinking about decision making and the conversations which are being made around research? So I think it's got better, but I think we've still got some way to go to focus down on making sure that we're evidencing the right things. It'll be interesting.
Chris wants more evaluation of which methods of involvement are best. Keeping a record as you go will help.
Because otherwise we get to the end and we think, 'God that was. I wonder how everyone felt about that,' and we send out this retrospective questionnaire and by then we can't remember where we've come from, even ourselves, let alone asking other people to reflect in that way. But we had a really – at the end of a project that was looking at sort of patient reported outcomes for children with neuro-disability, and four parents had been heavily involved throughout that project and. So we sent the questionnaire to the four parents and also to our co-investigating professional researchers, and it was really interesting because the family, the parents, four parents really sort of had enjoyed the experience but felt like they hadn’t had much impact on the research itself which wasn’t our perception and the perception of our co-researchers was they were amazed at how the parents had come along and given that time and had that input and been really sort of impressive and active in the meetings with, you know, with us all there together. So, you know, I think it's easy, I think it's really important to investigate those things and to highlight them and to give people feedback, you know, about making people understand where they’ve had an impact on stuff. And in order to do that you’ve really got to be spending the additional resource, monitoring it while you're doing it so that you don’t miss out on seeing where you've come from.
Andy argues that unless you are clear what you expect from PPI, you won’t do it well or be able to identify impacts, so it will appear to have failed.
So I think one of the things that you need to do is to sit down as a team and say what do we want our PPI to achieve? Is it to improve recruitment to trials? Is it to empower people? Is it what…? And it may be more than one and that’s fine. Then you have to think if that’s our aim what is the mechanism that we’re going to build into the project to deliver that out, to deliver that. And clearly if you’re saying that you one of your outcomes of PPI is that people will be, patients and members of the public can be involved and feel that they’re on an equal footing with the researchers and contributing and so on and then your Patient and Public Involvement mechanism is an annual once a year or once every six months meeting where you invite patients and members of the public to comment on what you are doing, that mechanism isn’t going to deliver that outcome.
So once you’ve decided that you’ve got to make sure that your mechanism is at least reasonably likely to deliver this thing and then once you’ve done that you’ve got to clear out, come on a clear mechanism then you can say well what evidence would I need to collect, not to prove it necessarily because I think it’s very difficult to prove some of these but at least evidence that would, you know, give us some idea to support or refute whether we were able to achieve this or not. and then that again depends, so whether you use quantitative or qualitative research depends on exactly what you want to do, so if you want to use, if you want to know if it improved recruitment to your trial then you’re probably going to use some quantitative methods, if it’s about did people feel really involved and able to participate on an equal footing with researchers then you’re probably going to do some observational stuff and some interviews and qualitative research so then you build in your methods to, to suit that, the evidence and the evidence that should be driven by the question, your question tells you what kind of evidence that you need to need to collect.
And I think one of the problems with it is a kind of a self-fulfilling prophecy with Patient and Public Involvement is that you don’t put much money into it you aren’t clear about what you want to achieve, you don’t put sufficient mechanisms in and then the impact is very small and then when you can’t see much impact you feel that the next time that you do your research you again as a result of that experience you’re not going to put much resources into it, you’re not going to spend a lot of time planning it, you’re not going to be clear and it becomes a vicious cycle. So actually what we’ve got is badly thought out and badly planned PPI that’s not delivering much impact and the fact that it doesn’t deliver much impact reinforces the fact that we don’t spend much time on resources and so on. And I think that’s one of the things we need to break out of.
Felix suggests that the most important impacts are on people and relationships. Making changes to a specific piece of research is secondary.
So, you know, and that’s the main thing and that’s also where most of the positive and the negative impacts happen. And then the, almost a secondary part is about the actual impact on the actual research and this is based on our review of the literatures. So I think, you know, when, you know, I would tell to anyone who engages in it it's more about, you know, it's going to challenge you as a researcher and it's going to challenge the members of the public because everyone has different values, expectations and impacts that they're interested in. But it's primarily about that interaction and this is where you're creating impacts and not the actual research. So and that’s, you know, so if, if you take that down into numbers, you know, so sixty, impact on research, sixty different impacts on the various phases of the research and it's a hundred and twenty impacts reported on the actual people involved. So that’s twice, you know, it's twice as much, twice, oh you know, more important – not more important but you'll create more impact on the people and on the research.
You could compare two studies, one with patient involvement and one without, and measure differences in outcome.
Alice’s instinct is always to want to measure but feels ‘you can’t really do a parallel project with no PPI’. She is unsure we should even try.
…Well I do know of some ways that people have tried to measure impacts in PPI which I think are good. But I think if somebody had developed a really super-duper, really great way of measuring PPI we'd be using it already.
But maybe they're just getting known, I don’t know, but you know some ways of measuring PPI that are good but limited. If we can't, should we even be trying to measure it? Well you see I find it really hard to say no we shouldn’t try and measure it because my instinct is to measure, measure. But I think we need to think back to the reasons why we're doing it which is to try and make more effective research that’s more applicable for better health outcomes and if asking people what they think is going to be achieving that and we know the theory of how it should be working. I say the theory but I don’t mean, you know, we know how it should work – maybe that in itself is enough to know that you're doing that. I suppose there's no – even if you did try to measure it, measure the impact, there's still no way of knowing whether or not what people are saying is genuine, so you could say, you know, “did people have a real impact on this project?” “Oh yes of course they do, they decided this, this and this.”
Pam is sceptical about impact measures and how to disentangle cause and effect. Where PPI advisers agree with researchers it may look as if they made no difference.
So I think you won’t stop researchers calling for more research is needed – that’s what we do we want to keep ourselves in a job. Sorry if that sounds very sceptical. But I think, I think that partly relates to how I feel about wearing different hats. We’re all patients and we’re all citizens as well. So I think people will have a variety of modes of knowledge and evidence for different purposes and uses. It can be handy when you’re in a biomedical environment to be able to point people and say this is published evidence and, you know, for some that’s, that’s persuasive yeah so I think it’s interesting.
And what do you think about impact, or assessing, capturing and measuring value and impact?
I think people would probably like to know the impact of their involvement. I think the difficulty of disentangling cause and effect – do you know that it’s PPI that’s made a difference – is methodologically challenging. And if you go back to a democratic or an emancipatory rationale for why you’re doing PPI, then you’re doing it because you should, because people have rights and entitlements to influence over what’s done in their name or with public money and so on.
So I think because – I’m primarily a qualitative researcher, so I just have some scepticism about impact measures, but I don’t think there’s anything wrong with trying to see the difference that it made. But if you were to involve people and they didn’t make a difference, that might be because actually they’re in agreement with the researchers, and I wouldn’t see that as wasteful or ineffective or inefficient.
Pam also drew attention to the fact that the impact debate tends to assume that, to have value, involvement must change things, whereas sometimes patients may agree with researchers or validate the research design, and this is also useful. John made a similar point: ‘I think that endorsement is a good thing to have… You say “I got this idea for some research” and the patient says, “That’s a great idea”.’
Copyright © 2024 University of Oxford. All rights reserved.