Patient and public involvement in research
The value and impact of patient and public involvement in research
As we explain in ‘What is patient and public involvement’, the aims of involving people in research can be thought of as making sure the right research is done and making sure research is done right. But understanding the impact of involvement (the difference it makes) has become a contentious issue for researchers and people who get involved. We asked people what they thought about the value and impact of involvement, including examples where it had made a difference, and how to improve the way we measure and record such impact.
Generally people agreed that we need good evidence of what difference it makes, but had no clear answers as to how best to measure this. Some researchers have designed tools for recording involvement, including the GRIPP checklist (Guidance for Reporting Involvement of Patients and the Public) and the PiiAF (Public Involvement Impact Assessment Framework), which may help in future with recording involvement at different stages of the process. Derek suggested this needed to be a collaborative process.
Kath thinks it’s important to assess the impact of PPI because it’s a ‘big investment’.
No actually it wouldn’t, it wouldn’t. I don’t, I don’t see it in that way at all. I think there is a logical ‘should’ in that if researchers want public funding then they need to actually make a case for what they are doing with the public. And involving people is a good way of actually uncovering the value that research gives, and there is that ‘should’. But… this is a big investment that we are making, and so we ought to be contributing something. It’s not just about having quite a nice time.
Patients who are involved and researchers need to keep a record of the impact throughout the trial.
Derek explains how patients can change the design of studies by bringing their experience to the table.
Involvement can put researchers in touch with the ‘real world’. It would be satisfying to know research made a difference to health care.
Well to take an example – suppose we stick in my little field of stroke research. One meeting I went to, room full of clinicians, about a dozen of them, statisticians, clinicians and the theoretical researcher as well, and two lay people were there. And pretty near the start of the meeting we were looking at getting information from users on priorities for stroke research. And I quite innocently asked the question, 'What does the Stroke Association think of your priorities?' And the meeting went silent, and it was clear straight away that they hadn't even heard of the Stroke Association. This is a stroke research group who’d not heard of the Stroke Association. I wasn't terribly impressed with that [laughs] lack of knowledge on their part. However, they very quickly remedied that and they have now got a Stroke Association representative on their steering group. So some researchers, I think, need to get out and live in the real world. You know, what are the voluntary groups, what are the carers' associations which are relevant to this research? It's no good just going down some wonderfully enthusiastic path as a researcher which may or may not have an impact on the real world. Far better to say, well I would find it more satisfying to be able to say at the end of it, ‘This research had an impact on hospital practice or what GPs do.’ To me that is such a valuable output from research that it's well worth taking a little time at the start to get lay input.
Margaret thinks involvement can have a positive impact on every part of the research process.
Rosie describes how patients can improve recruitment to research
Now that was, you know, that had to go through ethics, but in that process they discussed the ins and outs of it, and that was really helpful. So, you know, I think, clearly and obviously there are lots of things that, as service users and patients, you know, there are scientific and technical issues…
…but, you know, we don't know how to do that. But if you're going to be asking people to do things, or asking people to lie in scanners for forty-five minutes or whatever it is, actually you need to ask somebody: can somebody do it for that long, can they stay still for long enough for you to do your… you know, so there are all sorts of ways, I think.
Research can be made more relevant to patients by involving them in deciding which outcomes to measure.
David Z gives an example where patient involvement led to a decision not to conduct a study that they didn’t feel was worth pursuing.
And I should think that this is really important, that public money is not wasted on research projects which may not have the desired benefit for the person who has experienced a stroke.
Sometimes patients can make a positive difference just by being there.
Maggie thinks that lay people can make a difference by asking ‘the elephant in the room question’.
If involvement is done well and people are treated with respect, then it will lead to improvements.
Involving people at the last minute is bound to be ineffective and unsatisfying for everyone.
It's certainly been a steep learning curve. It has thrown up a number of points around issues of what constitutes good practice in this area and how some people involve patients very well and other people have not really taken the time to think through what participating could mean for this patient. It needs to be fleshed out a bit more fully than, ‘Well they said they wanted to so let's get on with it’. So certainly everybody tends to find it more satisfactory if people are involved in the conversation from the beginning. I've been involved in one or two things where the patient has been brought in quite well down the organisation or route and then where they raise issues, which are acknowledged as valid, but which run contrary to the sort of direction in which things are moving at present. It's just too difficult in terms of time spent and finances spent and where everybody's head space is, to turn things around to do it better. So involving people at the beginning is really important and making the effort to foster the communication, the team work, and the attention to detail so that you move forward as a close knit team. Everybody then finds it very exhilarating and my experience is not only do the individuals find this rewarding but I've seen researchers get really very excited about how real the whole thing seems as opposed to sort of theoretical and academic. So they can start to see how the research they're doing is really going to benefit people so it gives a sort of extra sort of brilliance to it. It makes it more exciting and engaging.
Evidence for involvement needs to be more than anecdotal. We need to know what doesn’t work and a recognised scale for measuring impact.
Yes I think so, yes, and maybe it is a scale, but it's a recognised scale, so that, you know, if you've had PPI input at some area of your research cycle, they refer it back to the PPI lead and say, you know, "Score it." I mean, there are many different toolkits about for how to work as a PPI, but we need a universal way of scoring the impact that gets beyond the anecdotal.
And I think that's the next big step in PPI that has to be found: how you show that that was a good… an impact for good or not good, and then you look and say, "Well, why was it not so good in that area? What was the perception that made it like that?" But you can't keep going round saying, "We know this is good."
Not in research, without any evidence, because we don't live in that sort of world.
But I'm not sure how you actually track that down.
Some felt that isolating the impact made by individual people or comments in a discussion was challenging. Peter said, ‘It’s more difficult to see a direct result of what you’ve done the more sophisticated you get with your PPI activity’, and Carolyn pointed out that ‘if you’re working really collaboratively and you’ve got ten people sitting round a meeting table it’s often hard to pin down where particular ideas come from…It's sometimes hard to pin down exactly what the difference is that you've made’.
Simplistic measures won’t capture the full complexity of involvement and could be damaging.
I think there are issues around it. I think there is a problem about pulling out an element of a project. If public involvement is embedded within a research project it’s quite difficult then to pull it out and say what difference has this bit made. And, for instance, trying to do that for statistics or for, it doesn’t, that doesn’t quite, sort of, gel very well. And if, for instance, you’re holding workshops where people are talking with each other, including researchers and, and service users, it’s quite difficult to then pull apart whose contribution made which difference. So those things make it quite complicated. So I think simplistic tools for measuring impact can be quite damaging, because they’re not likely to notice it. But that said, I don’t think I’d want to be involved in research if I wasn’t making some kind of difference by doing it. So, yes, I want there to be an impact and it would be good to be able to start to uncover some of the impacts that user involvement makes, but, as I say, not in a very simplistic way. Some of the proposals for tools that I’ve seen have been just far too simplistic and haven’t actually understood what complex relationships there are in research processes, and that’s something that needs to be taken into account in it.
So what kind of tools do you think are better for trying to capture impact? Are we talking about narrative here, case studies?
I think they’ve got to be part of it and I think they can start to uncover what impact is intended and what impact is not intended as well, which is, I think, something that’s quite important in these very complex relationships. So then people can start to think about actually what impact is it they want to have and how perhaps the best ways of moving that forward? Because people are looking for different things in involvement and researchers are looking for different things, and the people who are involved are looking for different things and the idea is to try and start matching some of that up so that it works better and that’s I think, the purpose of trying to measure impact.
Lay people are now accepted without question – but the more closely involved they are the harder it is to identify what difference they’ve made.
The more subtle and more difficult thing to tease out of that involvement is trying to work out exactly what the impact of patients and carers has been on that world. And it’s a question we’re always asked. I spend a lot of time now visiting research conferences and research meetings and talking to professionals, and they immediately start to ask, you know, what difference have you made, what is your impact there, and that can be quite difficult to see really. Because I guess we are just one now of a group of multi-disciplinary people working on projects and studies, and to actually say which change you’ve been responsible for is not an easy thing to do.
Alan doesn’t think we can measure the impact of PPI because involvement is just a little cog in the research machine.
No not really. I think, I think we're just a little cog in, with, it's just that we're moving in the right direction. If there's enough of us moving in the right direction, just putting a word in. You don't, we never know you might meet somebody who might have been going down the road where they were going to top themselves, you know, commit suicide because they can't see a way out. And just one little word and saying, "Oh have you thought of joining this group?" You might have saved someone's life, but you'll never know whether you saved someone's life by saying that to them. But it's just helping people and it's not helping individuals.
Being able to demonstrate impact is important. Current evidence about impact is often dismissed as anecdotal but Mary thinks practical examples are useful.
Mm, well actually I'm involved in, there's currently a bid out to measure the impact of PPI, as [name] at [university name] is putting this bid together. And it's out for consultation, or whatever, at the moment. And that is actually to find tools that measure the impact of PPI, because the only way to get it more embedded is to actually be able to point to something that shows it. Because at the moment it's dismissed as anecdotal evidence, and I don't know why anecdotal evidence doesn't count, you know. It's because a lot of the tools that are used in research, clinical trials in particular, are validated measures that other people have done, you know, validated measures, and a lot of them don't make a lot of sense. But because they're validated they're used, and you get ten different measures or something, when, in fact, a bit of anecdotal evidence would say that either you need to draw up a new tool or, you know, why not ask people what they think and then take it from there?
Helen doesn’t know if her involvement has made a difference. If people don’t get feedback on how they’ve contributed they may not feel motivated to continue.
I wouldn't say that it annoys me massively but, I do recognise it from my own research that that is a very, very important part of the process that isn't happening. And I think, actually, that may well be one of the major parts of the process that keeps people participating, essentially. Participating because you understand what's going on is great, and then being encouraged to participate is great, and then being supported in your participation is great. Enjoying the participation; fantastic. But, you need to understand what it is that you've done with your participation, you need to know. It's not patting somebody on the head but, it's saying "Thank you for what you've done, that was really useful, more of the same would be great, and this is what we have changed in light of all the comments we received." And even if it's just generalised, even if it's only something that's been sent round as 'We have received lots of comments, no names mentioned, these were the comments we received and this is what has changed.' I think that is hugely important, yeah.
Marney receives feedback on how she has made a difference to research.
Well only in the sense that I get the document, I look at it, I flag up, typically I would say, three to four specific points to address. I usually try and give an overview as to whether I think the project is, you know, valuable or interesting or, all those sorts of things. So there's a broad sense of how this project has been received, so I tend to give that first and then I say these things, I think, need fleshing out further. You know, I may say things like, "It's not clear to me how the psychologist on your team is to be used, are they going to be used to ensure people are able to give consent or to choose appropriate screening tools for their reaction to something?" So that sort of thing, or issues particularly, as I said, issues around if you're going to require the patients to go through all these tests, I think you're going to have trouble recruiting them. Or, you know, if it's taking medication and coming weekly for blood tests and having spinal fluid taken, these sorts of things, I say, "I think you're going to have a significant fall-out rate because this is very difficult to live with if you're also dealing with recovery, say, from an illness." But the level of feedback I get usually says, you know, this is really useful, we're going to incorporate it all, or, this is useful and we'll explain further here, or yes I see your concerns but actually I'm restricted to a hundred and fifty words and I've already bottled it into as small a space as possible so it's not possible for me to develop it, or… yes, so people tend to give me an idea of how they're going to use my information, which of course is important because that informs how I react to the next set of reviewing that I do.
It would be interesting to test the impact of involvement by trying to recruit for a study using information sheets designed with patient involvement and without patient involvement.
Involvement improves recruitment because it leads to better research, but there is still a lot to be done to understand impact.
I think what they showed is that there needs to be lots more work done on actually assessing the value of PPI. But what it has shown is that the study design is more relevant to the research, and also, I believe, that the information sheets are better written and better understood from a lay perspective if you involve patients in the writing of the lay reviews.
Roger thinks that the impact of PPI can only be measured by asking researchers.
Ha ha, yes, the ultimate question. We've struggled with that for twelve years and we still struggle with it [laughs]. I think the only way you're ever going to be able to do this is subjectively. You're going to have to ask investigators, particularly chief investigators, principal investigators, the ones who actually put studies together in detail, what their perception of the value is and hopefully they will do more than give the nominally appropriate answer.
Charles thinks that measuring impact is important but very difficult, and we need to be careful what we measure.
Oh absolutely. It has to be, because you need to measure things, because of the old adage that what gets measured gets done. And if you don't measure things in some way or other then you have no idea whether you're doing well or doing badly. But you need to be very, very careful what you measure, otherwise you start setting metrics which distort the provision of the service, and in an ideal world you want an element of coupling of the career success of the service provider at a personal level to the quality of the service which they provide to the service user. You know, suppose you paid five pounds for every patient who had a flu jab; you could be pretty sure that lots of people would get flu jabs, because that's a nice mechanical thing which you can set up and which you can manage. But if you take that to more complicated things, you might want to provide a more expensive treatment on the grounds that it would be a better outcome for the patient, and then you wouldn't see that patient again. So it's not just a question of throughput, how many patients you can get through your door. It's a very, very difficult question, but I think there should be some kind of measurement. If you ask me how, I couldn't answer that question at the moment.
It’s more important for Richard that research results in better treatment for patients than if it’s published in a very important academic journal.
Last reviewed July 2017.
Copyright © 2024 University of Oxford. All rights reserved.