I often hear coaches and clinicians claim that they are “evidence-based.”
Everyone talks about it. Even Barbell Medicine (BBM), a group that I have the utmost respect for, put out a great podcast on epistemic responsibility, in which they argue that we owe it to our patients, and indeed the world, to believe things for which there is sufficient evidence. I completely agree: it is imperative that we check ourselves, implement evidence in practice, and continually learn. However, I think some believe that as long as they are up to date with research, they are automatically practicing at the top of their license. I am here to make the case that being evidence-based is not enough.
I mean, how is “evidence” truly defined and weighted?
What does it really mean to be evidence-based?
Before I delve into this topic, I want to share my background, which may provide some insight into my interest in it. If my life story sounds uninteresting, skip ahead to the next heading.
When I started out in strength and conditioning (S&C), I worked in the University of Nebraska’s athletic department, where there were two tribes. About half of the full-time strength coaches were heavily influenced by the Postural Restoration Institute (PRI) and the other half were not. PRI is a way of looking at the human body, recognizing patterns, and explaining the pain experience through a kinesiopathological model8 (that is a quick-and-dirty description that doesn’t do PRI justice).
As a new, moldable intern, I felt as though both tribes were trying to win my heart and get my agreeable personality on their side. On top of that, I didn’t know much about fitness beyond what the average “gym bro” knew. And the icing on the cake was that I was a long-time sufferer of low back pain whose response to different treatments had been poor.
So, when the S&C coaches were preaching the gospel of how PRI could darn near cure cancer, I figured it couldn’t hurt to try, and it wasn’t long before I had joined the PRI tribe.
With this new-found hope in PRI, I did all the back exercises they told me to do, breathed the way they taught me, and even slept in the positions that were “good”. Unfortunately, none of it seemed to help. After several months without any change in my symptoms, I started developing some skepticism.
At this point, I still wanted answers, so I went to the non-PRI tribe and asked their opinion of PRI. They had many reasons for disliking it, but none of those reasons were especially convincing either. The non-PRI tribe leaned on loose interpretations of anatomy, adaptation theories, and the like to disprove PRI, which was essentially the same kind of evidence the pro-PRI tribe had. What the pro-PRI tribe did have was testimonials, an expert clinician revered as a pioneer, and clinical results; just no results for me.
At this point, I was more confused than ever, but I kept listening to both sides. One tribe said our anatomy is like this, which makes us do that, and that’s why you have pain. The other side said, well, we are also like this, which makes us like that, and that’s why the first claim was nonsense. Both kept using the same type of reasoning to argue their points: a small shred of truth, or evidence, followed by a leap to a very distant conclusion. The problem was that both tribes were technically “evidence-based”. So, where does that get us?
What is Evidence?
In my story, I felt stuck between two opposing tribes who both had some level of evidence to support their stance. Whether the evidence was strong, or of high quality (figure 1), was another question. This happens all the time: we take a small shred of truth, call it “evidence”, and extrapolate a conclusion. With this method in mind, we can justify anything and everything in clinical practice and call ourselves evidence-based1-3. Because of this dilemma, we must define what evidence even is. Furthermore, we need to ask ourselves: is merely having evidence enough to call ourselves “evidence-based”?
So, to answer my rhetorical question, “what is evidence?”... evidence can literally be anything we observe. Whether we observe it in the clinic or lab, on the internet, at a seminar, or during a webinar, it can be labelled as evidence. In the first study I referenced, researchers found that leech therapy seemed to have some positive effects for people suffering from chronic low back pain. I can’t dispute the results by saying it didn’t happen. However, could I start leeching every one of my patients with low back pain and declare that I am practicing in an evidence-based manner?
Similarly, could an “alternative therapist” who relies solely on clinical observations declare themselves evidence-based as well? This predicament leaves us in a pickle: we want to be evidence-based, yet because we have never really narrowed down what that means, the term has been interpreted and “repurposed” into whatever makes sense to each individual.
Let us go back to figure 1 and look at the hierarchy of evidence, which can be a guide toward being “less wrong”. The problem is that there are so many poorly conducted RCTs and systematic reviews/meta-analyses (SRMAs) out there that the hierarchy pyramid may be only a small part of a very large solution5-7,14.
What does it mean to be evidence-based?
Evidence-based practice doesn’t stop at just having evidence. We need to look at the totality of evidence (which includes clinical observations) and apply our appraisal and critical-thinking skills. Having only low-quality evidence can give us tunnel vision and justify continuing to do what we’ve always done9,10. As David Sackett (a developer of EBP/EBM) imagined it, evidence-based practice was supposed to help us reduce costs and improve quality of care by keeping clinicians up to date4. To improve quality of care, we need to sift through the scientific literature to see what else is out there, what works better than what we are currently doing, why something works, and what other factors play a role. We can ask questions that a curious researcher may have already answered. That gives us a glance at the whole picture instead of one piece of a 100,000-piece puzzle.
Imagine that, looking at the whole picture allows us to see the whole picture.
I know it is impossible to sift through everything that is published, as there are more publications every day6,7, and I do not pretend to know or have read everything that is out there. In Erik Meira’s course (highly recommended), he offered great insight into a practical way to apply this. He asked: do we need to read every single article to know what is going on in the news? We can see some headlines, parts of articles, and a few news segments and still have a good idea of what is going on. Then, if we want to draw conclusions about a current event, we can read further on it. [Side note: Erik has a phenomenal, short blog post on this topic that I HIGHLY recommend you read.]
Also, systematic reviews and meta-analyses can help us cover a given topic with a broad brush. They gather the relevant literature on a topic (depending on inclusion/exclusion criteria) and give us an idea of what the science is generally saying. SRMAs have limitations11,12, but they are a great way to get a 30,000-foot view.
Since having any evidence gives people the ability to call themselves “evidence-based”, we either need to redefine what it means to be evidence-based, or we need to accept that being evidence-based is not enough15. Either way, we owe it to our patients to have evidence to support what we do, and we also owe it to them to think critically. If we lose our ability to question ourselves and become complacent, we fall into a trap where we “make the same mistakes with increasing confidence over an impressive number of years”13.
Our patients deserve better, and we can do better.
Thank you for reading!
Nate Wong, SPT, Creighton University
1. Hohmann CD, Stange R, Steckhan N, et al. The effectiveness of leech therapy in chronic low back pain. Dtsch Arztebl Int. 2018;115(47):785-792.
2. Cuijpers P, Cristea I. How to prove that your therapy is effective, even when it is not: a guideline. Epidemiol Psychiatr Sci. 2016;25:428-435.
3. Hartman SE. Why do ineffective treatments seem helpful? A brief review. Chiropr Osteopat. 2009;17:10.
4. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71-72.
5. Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82-86.
6. Greenhalgh T, Howick J, Maskrey N; Evidence Based Medicine Renaissance Group. Evidence based medicine: a movement in crisis? BMJ. 2014;348:g3725.
7. Fontelo P, Liu F. A review of recent publication trends from top publishing countries. Syst Rev. 2018;7(1):147.
8. Lehman GJ. The role and value of symptom-modification approaches in musculoskeletal practice. J Orthop Sports Phys Ther. 2018;48(6):430-435.
9. Isaacs D, Fitzgerald D. Seven alternatives to evidence based medicine. BMJ. 1999;319(7225):1618.
10. Cook C. Emotional-based practice. J Man Manip Ther. 2011;19(2):63-65.
11. Travers MJ, Murphy MC, Debenham JR, et al. Should this systematic review and meta-analysis change my practice? Part 1: exploring treatment effect and trustworthiness. Br J Sports Med. 2019;pii: bjsports-2018-099958. [Epub ahead of print]
12. Chevret S, Ferguson ND, Bellomo R. Are systematic reviews and meta-analyses still useful research? No. Intensive Care Med. 2018;44(4):515-517.
13. O'Donnell M. A Sceptic's Medical Dictionary. London: BMJ Books; 1997.
14. Yitschaky O, Yitschaky M, Zadik Y. Case report on trial: Do you, Doctor, swear to tell the truth, the whole truth and nothing but the truth? J Med Case Rep. 2011;5:179.
15. Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. Lancet. 2017;390:415-423.