Evidence Based Practice modules
Patient Reported Outcome Measures 101 - John Froelich - 2017
Video Transcription
Thanks, everyone. I'm John Froelich; thanks for being here. Here are my conflicts. So we're talking about PROM, and this is the first thing that came to mind, but then I realized we're talking about these rather than the much cooler prom. With patient reported outcome measures, there are a couple of things we need to go over: the basic principles of patient reported outcome measures and why we look at them, how they're structured, what counts as a meaningful result, whether we need to use them for everything, and what to do with them. The end goals are for you to stay awake for at least 20 percent of this and to make sure you contact your friends about dinner before you get out of here.

With patient reported outcomes, there are three basic ideas behind them. The first part of the basic structure of a patient reported outcome measure is responsiveness: the degree to which the instrument can actually detect something that's going on. When you read through these, you want to understand what they're actually getting at and whether they can actually measure it. Responsiveness can be influenced by the effect size of what they're trying to measure as well as by the response range: if an instrument tries to measure too many things, or too broad a spectrum of something, it becomes unresponsive. When you're evaluating the responsiveness of a patient reported outcome measure, pre- and post-operative evaluations are where responsiveness matters most, because you want to know whether the instrument can detect a difference. So when you're looking at the responsiveness of a measure, the question is: can it actually find a difference that's there?
Then you're also going to look at reliability: how reliable is this measure? There are a couple of different ways of looking at that. Internal consistency asks whether the items that should correlate within the test actually do. If you have somebody who you know is not doing well and is not healthy, does the test actually reflect that? Checking against a known measure like that is one way to assess internal consistency. Then there's test-retest reliability, where the idea is that you test somebody and then retest them and see whether you get the same answer. Those are the ways an outcome measure's reliability gets validated or demonstrated. In general these are reported as a coefficient, and the target is a coefficient greater than 0.7; if you're reading these critically, that's what they should report.

Beyond reproducibility and reliability, there's also validity, and content validity is one form. Content validity is interesting: when you look at these instruments critically, ask how the authors came up with the items, because there's no true statistical measure of content validity. It comes from sitting down with experts, patients, and the literature, deciding what you hear and think is important, and then basing the outcome measure on that information.
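The internal-consistency coefficient the speaker mentions (the "greater than 0.7" rule of thumb) is often computed as Cronbach's alpha. A minimal sketch, not from the talk, using made-up item scores:

```python
# Editor's illustration (hypothetical data, not from the lecture):
# Cronbach's alpha, a common internal-consistency coefficient.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)                                   # number of items
    item_vars = sum(pvariance(it) for it in items)   # sum of item variances
    totals = [sum(vals) for vals in zip(*items)]     # per-respondent total score
    total_var = pvariance(totals)                    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three items scored by five respondents (made-up numbers).
scores = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 1, 4],
    [2, 4, 5, 2, 3],
]
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}, above 0.7: {alpha > 0.7}")  # → alpha = 0.95, above 0.7: True
```

High alpha here means the items move together across respondents, which is exactly the "correlating within the test" the speaker describes.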
And then there's construct validity, where you show that your outcome measure correlates with another validated measure that may not be patient reported, such as range of motion. If a patient has great range of motion and also gets a high score on your instrument, suggesting they're happy with their outcome, that relationship gives you some construct validity. The thing to understand is that this is a gradient, a moving target: there's no absolute you can anchor to, and you're building these instruments off already-known values that carry some bias of their own.

You also need to understand the ceiling and floor effects, which means there are limits to what the data can do at each end of the spectrum. As an example of a ceiling effect, suppose one of the things your outcome measure asks about is how much the patient smokes, in relation to the other outcomes they report. You ask for a rigid answer: half a pack of cigarettes, three quarters of a pack, one pack, and so on up to two packs, and then "two plus." Everybody who smokes more than two packs gets put into the same category, and that creates a ceiling effect: everyone clusters at the top, and you can get bias. It's just as if you gave a small child an IQ test designed for adults: everyone would score at the bottom, and the data would all get clumped together so you couldn't pull out real differences.
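The smoking example above can be sketched in a few lines. This is an editor's illustration with made-up values; the point is that everyone above the top category becomes indistinguishable:

```python
# Editor's illustration (hypothetical data, not from the lecture):
# a capped response scale producing a ceiling effect.
def bin_packs_per_day(packs, cap=2.0):
    """Round down to the nearest half pack, capping at `cap` ('two plus')."""
    return min(cap, int(packs / 0.5) * 0.5)

true_values = [0.5, 1.0, 1.5, 2.0, 2.5, 3.5, 5.0]   # hypothetical respondents
reported = [bin_packs_per_day(p) for p in true_values]
print(reported)   # the four heaviest smokers all collapse into the 2.0 bucket
```

A five-pack-a-day smoker and a two-pack-a-day smoker report the same value, so any real difference between them is lost to the analysis.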
So that gives you effects above and below the range of what's going on, which can decrease the impact of the information you're collecting and actually bias your results. The other thing to think about is whether an outcome measure was validated against a healthy population. If it was validated on fairly healthy young individuals and you then use it on older patients with significant rheumatoid disease, the range those patients fall into can clump toward one end, creating a floor effect: you may not pick up a real change because all the data gets clumped together. So be aware that the patient's overall condition can affect the measure.

The MCID is the minimal clinically important difference. It is the smallest change that is beneficial to the patient: not the smallest change you can measure, and not the smallest change the numbers tell you exists. The issue is that identical changes can mean different things for different groups. A change of two or three points on your scale might be a clinically significant difference for patient groups with multiple comorbidities, while young, healthy groups may need a spread closer to eight to ten points on the same scale. So you need to interpret an MCID in relation to the population it was established in. When you go to establish that baseline, there are three different approaches. The Delphi, or consensus, method is where you get an expert panel together and vote on what the number should be, the range at which change starts to matter.
So you set up your scale and say that everyone who goes up two points is showing a clinically important change, then you compare with all the other experts in the room. If you didn't all agree, you see what everyone else voted, and you vote again until you converge on the same level. At a certain point you're grounding the outcome measure on opinion. It's expert opinion, but it's still opinion, and you build a foundation off of it. It's an interesting way of validating these: essentially you look at which way the wind is blowing and play roshambo from there.

Anchoring is another approach, and it's patient focused: you anchor against some other, more objective or independent measure. You find that patients whose score changed by a given delta started to improve, and that improvement is validated by another measure that may not be a patient reported outcome measure, so you anchor the threshold against something you already know. And then the distribution-based approach is really a minimal detectable change, based purely on statistics. If a patient reported outcome measure's threshold rests only on a distribution-based value, it's not really well validated.

Why do we care about these? What's coming down the line? Is there a risk between using these outcomes for research versus having them used by the government? One of the risks of this big push toward patient reported outcomes is that we start to tie reimbursement to these outcomes, and with that we can start to introduce bias, in all the ways we just went through, into these studies.
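The distribution-based "minimal detectable change" the speaker contrasts with the MCID is commonly computed from the baseline standard deviation and the test-retest reliability. A minimal sketch, with made-up numbers chosen by the editor:

```python
# Editor's illustration (hypothetical data, not from the lecture):
# the distribution-based minimal detectable change (MDC).
import math

def minimal_detectable_change(sd_baseline, test_retest_r, z=1.96):
    """MDC at ~95% confidence: the change needed to exceed measurement noise."""
    sem = sd_baseline * math.sqrt(1 - test_retest_r)  # standard error of measurement
    return z * math.sqrt(2) * sem                     # two measurements, each with error

# Hypothetical instrument: baseline SD of 12 points, test-retest r = 0.85.
mdc = minimal_detectable_change(sd_baseline=12.0, test_retest_r=0.85)
print(f"MDC95 = {mdc:.1f} points")  # → MDC95 = 12.9 points
```

This number says only that a smaller change could be measurement noise; it says nothing about whether the change matters to the patient, which is the speaker's point about why it cannot stand in for an MCID.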
You can also start to bias who you let into studies, which can limit access to care, because you want to control the outcome numbers. And you can start to teach to the test. Even though it's a patient reported outcome measure, think about the last time you got your car worked on and they said, "Hey, do you mind filling out this survey? By the way, if we don't get all tens, I get dinged, so I'd really appreciate a ten, and so would my family, so I still have a job." If we put too much financial emphasis on patient reported outcome measures, they will start to be gamed in the same way, and we'll start to lose the validity of this powerful tool. It's a dangerous game, because people will try to game these, and we have to ask whether it's really the best idea to tie government reimbursement rates to patient reported outcomes if we also want to use those same outcomes to understand which studies and which procedures are best for our patients.

This is just a slightly different look at objective versus reported outcomes. Reported outcomes are patient focused, while objective outcomes are more quantitative. The one thing I want to note here is that we're really excited about patient reported outcome measures, but there are questions about their validity and how they're set up. When you scan a study and it says the instrument is valid and has been validated, you need to understand that there's some bias in how that validation was done, through the measures we talked about before.
So, the takeaways from this 101, the real basics of patient reported outcome measures: they are important, and I think they're a good new tool. We have to learn how to use them effectively, and we have to protect that data and the gathering of that data so that we get it in a pure form that isn't corrupted by finances or other influences. Also understand that they're not perfect. A lot of people out there say that patient reported outcomes are the purest outcomes, that we should depend on them 100 percent because it's what the patient reports, so it must be best. The demand for them is growing, both from payers, including federal government payers, and from the research we're publishing, which wants more and more patient reported outcomes, so we need to engage with them. But ultimately, at the end of the day, it's a gray area. The answer is probably somewhere in between: not purely patient reported outcome measures, and not purely measures taken by us as providers, but some mix of the two, to get the best information and truly understand whether the surgeries we're doing, the procedures we're doing, and the way we're treating our patients are doing the best things for them. Thank you for this opportunity. See you.
Video Summary
The video discusses patient reported outcome measures (PROM) and their importance in healthcare. The speaker explains that PROMs are used to measure a patient's perception of their own health and treatment outcomes. The video covers the basic principles of PROMs, including responsiveness, reliability, and validity. The speaker emphasizes the need to understand the biases and limitations of PROMs, such as ceiling and floor effects, and the minimally clinically important difference (MCID). The potential risks of using PROMs, such as influencing reimbursement and gaming the system, are also highlighted. The speaker concludes by suggesting a balanced approach that combines PROMs with objective measures for the best patient care.
Keywords
patient reported outcome measures
PROM
healthcare
biases
objective measures