Evidence-Based Medicine in Ten Minutes - 2016 Precourse
Video Transcription
...of the session, I think the most important one, by far, that I give. I'm the Vice Chair of the Evidence-Based Medicine Committee. I have no disclosures. So who am I? Basically you in a few years. I'm in private practice, but I teach residents, PAs, and students. I'm not an MPH, but I'm actively working on research. I'm involved with the Hand Society, but I'm not an academician. But mostly, on a day-to-day basis, I'm concerned with doing what is best for my patients.

Here are a couple of reasons why residency and fellowship failed to prepare me for real life. First of all, I spent more time learning the idiosyncrasies of my attendings than actually learning about patient management. I spent more time focused on surgical skills, many of which are now already obsolete, than on patient counseling, indications, and evidence-based treatment. I spent more time thinking about, oh, how am I going to finesse my presentation for conference so I don't look like an idiot, than about what I'm actually gonna do when I'm in practice. Now's your chance to learn what the evidence is and to use it to guide treatment, not just justify what you or somebody else already did; to maybe learn what study design is; and really, God forbid, contribute something to the literature. And you guys are young enough to be unspoiled by the wisdom of your elders, for better or worse, and research allows you to pose a question and challenge the assumptions. You are in the perfect position to challenge the assumptions now, before you get too far into it.

So something you should realize is that your academies are funded by industry, your thought leaders and textbook authors are funded by industry, you and orthopedic implant manufacturers are doomed and destined to be strange bedfellows, and we struggle and often fail to avoid the taint of industry relationships; just disclosing it is not absolution. Bias is everywhere.

And then getting back to evidence: there are problems with every study. There are biases in every study. You need to understand how to look at the evidence and understand the limitations of every study you look at. For randomized clinical trials, the issue is generalizability. For systematic reviews, garbage in, garbage out. It's fairly simple. For level two, three, and four studies, there are multiple biases. There is a difference between association and causation, and that's sometimes glossed over. And if you fail to assess the literature critically, this could be you. You don't want to look back on your practice and realize that what you've done for the last 10 years was a mistake.

So, two basic themes of this talk: how and why do you learn to critique the evidence and apply it to your practice, and how do you think intelligently about study design during your training? Each deserves a few hours. I usually give this as an hour-long lecture, and I've got about six minutes left.

So, applying EBM to your practice. If you don't do it, somebody's gonna do it for you: your patients scouring the internet, looking for Dr. Google's advice; your board examiners; your insurance adjusters, who love to practice medicine without a license by denying procedures you want to do. Your fellowship mentors are going to be doing something different in three years; that's why they are considered thought leaders. Are you gonna do just what you learned in fellowship? And the reps are nice, and they take you out to dinner, and you're gonna feel guilty if you disappoint them, but you are not as impervious to suggestion as you think.
So just as a little example here: a subacute scapholunate tear, okay, a fairly common injury that we see. 10 years ago, well, you could wait for degeneration and do a four-corner fusion. You could do a Blatt capsulodesis, you could do a Brunelli, you could do a bone-ligament-bone, you could do a RASL procedure. About 10 years ago, Marc Garcia-Elias came up with a great paper on his modified Brunelli; that's an option now. And then a dorsal intercarpal ligament tenodesis. And then more recently, well, you have the SLAM procedure, or multiple tenodesis screws, or a SLIC screw, which is like a Herbert screw that rotates a little bit. Do you leave the hardware in permanently? Do you take it out? Do you combine techniques? Honestly, this is not a testable question. It's not on your OITE. It's not gonna be on your boards. This is the scary truth about medical practice: medical decision-making is difficult, it's confusing, and the burden is on you. Bias is everywhere, and it's easy to collapse under the weight of that responsibility.

So, evidence-based medicine in practice. At some point you have to move along from OITE and board questions to what you're actually gonna do in practice. You need to start thinking about that now. Statistics are really about philosophy, not math. You don't need to be an MPH. You don't need to be able to use SPSS. You need to know what power is, what p-values are. You need to understand what a minimal clinically important difference is. You need to understand how outcome measures are selected and why you use them. There's a fundamental question: what is evidence-based medicine, and what are statistics?

EBM in a nutshell: learn from other people's mistakes, not just your own. The levels of evidence are a heuristic. They are not a grade. You're not judging the accuracy of the information. What does that mean? It means that level four studies are just as important as level one studies in a lot of situations. We add to our knowledge in multiple ways, and you need to understand and acknowledge the limitations of studies. That's what the levels of evidence are for, not just to knock studies down in journal club because they're not level one evidence. You need to be always thinking, every time you pick up the Journal of Hand Surgery, how can I apply this to my practice?

I'm gonna gloss through this pretty quickly, but to interpret level one evidence and read a study critically, you need to understand what confidence intervals and type one error are. You need to understand sample size calculations and their relationship to power, and you need to understand outcome measures a little bit. The basic idea of inferential statistics is simple but profound: you are taking samples of a population, looking at them, and trying to make inferences about the entire population based on those samples. The mathematical techniques used to do that are fairly complicated, but the idea is powerful. Type one error is when you claim that there's a difference between two populations when in reality the difference was only random chance between the two samples of the population that you took. We arbitrarily select a p-value cutoff of .05; we arbitrarily say that a one-in-20 chance of being wrong is okay. It doesn't mean that a p-value of .06 is irrelevant. This is a continuum. It's not usually acknowledged as that, but it is. Type one error depends on sample size: the larger the sample of the population you take, the less variability there's going to be.
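To make the type one error idea concrete, here is a minimal simulation sketch, not from the talk: it repeatedly draws two samples from the same population and counts how often a t-test declares a "significant" difference at p < .05. Every such result is a false positive by construction, and the sample sizes and population parameters here are purely illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): estimate the type I error rate
# by drawing two samples from the SAME population many times. Any p < .05
# here is a false positive, since no real difference exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
alpha = 0.05            # the arbitrary one-in-20 cutoff from the talk
n_trials = 10_000
false_positives = 0

for _ in range(n_trials):
    a = rng.normal(loc=50.0, scale=10.0, size=30)  # sample 1
    b = rng.normal(loc=50.0, scale=10.0, size=30)  # sample 2, same population
    _, p = stats.ttest_ind(a, b)                   # two-sample t-test
    if p < alpha:
        false_positives += 1

print(f"False-positive rate: {false_positives / n_trials:.3f}")  # ~0.05
```

Running this shows the false-positive rate hovering near the chosen cutoff, which is exactly what accepting a one-in-20 chance of being wrong means.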
You need to understand how much variability there is within your sample, and that affects your type one error as well. Study design and the selection and standardization of outcome measures will help rein in the variability.

Type two error, or beta error, is the chance that you missed a real difference because the samples were too variable for your statistical techniques. Basically, you lost a signal because there was too much noise, or your filters for reducing that noise were inadequate. Limitations of statistical techniques, limitations of the population parameters, and outcome measures all play into this profoundly; all of these lead to type two error. How do you power a study? The simple answer is: before you start it.

And the minimal clinically important difference, this is something you should be thinking about every time you design a study. The operation was a success, but the patient died. How are you measuring what an outcome is? An MCID is the smallest difference in the score on your outcome measure that would lead to a change in the patient's management or a change in the patient's perception of the outcome. There are various ways of deciding that, and I'll leave that for later.

But basically, if you want to talk about study design in one slide: ask a well-formulated question. Don't expect any of your secondary outcome measures to lead to anything but questions for subsequent studies. Choose your outcome measure wisely; you need to understand what the MCID is, and your outcome measure needs to be reliable, responsive, and valid. If you don't know what those terms mean, you should look them up before you take your next in-service, because those are test questions. Your pre-study power analysis is critical, because if you have an underpowered study, you're not really going to be able to eliminate your beta error. You may not show anything. You may miss an important difference, and then you're gonna rely on other people down the road doing systematic reviews and meta-analyses to clean up your garbage. And don't worry, and this is probably the most important thing, don't worry if your study can't be completed in the course of a one-year fellowship, or in the time from when you decide you want to go into hand surgery as a PGY-3 or PGY-4. Get to be part of something that's going to be important. It may take three, four, five years for the publication to actually get out there, but it's very important to get your name on something worthwhile.

So, in conclusion: if you want to be a good doctor, if you want to have any control over your own practice, if you want to inspire confidence in your patients and your peers, if you want to stand up to scrutiny at the hands of lawyers or board examiners, and if you want to be able to sleep at night, you need to critique, synthesize, and use evidence to guide your practice, and you need to develop these tools now. Keep your eyes on the prize. Thanks. Thank you.
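As a concrete illustration of the pre-study power analysis the speaker describes, here is a minimal sketch, not from the talk: it treats a hypothetical MCID as the smallest effect worth detecting, expresses it as an effect size, and solves for the per-group sample size. The MCID and standard deviation values are assumptions chosen for illustration.

```python
# Minimal pre-study power calculation (hypothetical numbers): express the
# MCID as Cohen's d (MCID / standard deviation of the outcome measure),
# then solve for the per-group sample size at alpha = .05 and 80% power.
from statsmodels.stats.power import TTestIndPower

mcid = 10.0               # assumed MCID on some outcome scale
sd = 20.0                 # assumed standard deviation of that outcome
effect_size = mcid / sd   # Cohen's d = 0.5

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,           # accepted type I error rate
    power=0.80,           # 1 - beta: chance of detecting an MCID-sized effect
)
print(f"About {n_per_group:.0f} patients per group")  # ~64
```

An underpowered study is one where this step was skipped or ignored: enroll fewer patients per group than the calculation calls for, and the chance of missing a real, MCID-sized difference (beta error) climbs quickly.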
Video Summary
In this video, the speaker, who is the Vice Chair of the Evidence-Based Medicine Committee, discusses the importance of evidence-based medicine (EBM) in clinical practice. They highlight how residency and fellowship programs often fail to adequately prepare doctors for real-life patient management and emphasize the need to prioritize learning about study design, critically assessing the literature, and applying EBM to guide treatment decisions. The speaker stresses that bias is present in every study and medical decision-making can be challenging and confusing. They also discuss the importance of understanding statistics and outcome measures in interpreting and designing studies. The speaker concludes by emphasizing the importance of using evidence to guide practice, inspire confidence in patients and peers, and stand up to scrutiny.
Keywords
evidence-based medicine
clinical practice
residency programs
study design
statistics