Decision making in medicine can be difficult. The best decisions are based on a combination of 1) the best available evidence; 2) the clinician’s experience, knowledge, and skills; and 3) the patient’s individual circumstances, wants, and needs.
Evidence can come in many forms: systematic reviews and meta-analyses (SRMAs), randomized controlled trials (RCTs), observational controlled trials (OCTs), epidemiological analyses, case series, and patient anecdotes.
For a long time, RCTs were considered the ‘gold standard’ of evidence. In this type of clinical trial, participants are randomly assigned to either a treatment group or a control group. You may have heard the term RCT with reference to the various medications being considered as possible COVID-19 treatments.
Another term you may have heard is ‘meta-analysis’ or SRMA. SRMAs involve systematically reviewing the published literature to combine patient data (qualitative and quantitative) from numerous studies of a particular medical intervention to reach a conclusion that has greater statistical power than any single study. By combining data from multiple studies, SRMAs examine a larger and more diverse pool of subjects, and can detect accumulated effects that are too small for any single study to identify.
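The statistical-power claim can be illustrated with a toy fixed-effect, inverse-variance pooling calculation — the standard weighting scheme used in many meta-analyses. The effect sizes and standard errors below are hypothetical, chosen purely for illustration:

```python
# Toy illustration of why pooling studies increases statistical power,
# using fixed-effect inverse-variance meta-analysis. All numbers below
# are hypothetical and for illustration only.

def pool_fixed_effect(effects, std_errors):
    """Combine per-study effect estimates, weighting each by its precision (1/SE^2)."""
    weights = [1.0 / se**2 for se in std_errors]
    total_weight = sum(weights)
    pooled_effect = sum(w * e for w, e in zip(weights, effects)) / total_weight
    # The pooled standard error is always <= the smallest single-study SE,
    # which is the formal sense in which pooling yields greater power.
    pooled_se = (1.0 / total_weight) ** 0.5
    return pooled_effect, pooled_se

# Three hypothetical trials of the same intervention (e.g., log odds ratios):
effects = [0.40, 0.60, 0.50]
std_errors = [0.20, 0.25, 0.15]

pooled, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {pooled:.3f}, pooled SE = {se:.3f}")
```

Here the pooled standard error comes out smaller than that of any of the three individual studies, so the combined estimate is more precise than any single trial could be on its own.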
However, both RCTs and SRMAs can suffer from the same flaws as other study designs and, unfortunately, these flaws are sometimes intentional due to the financial influences on both researchers and journal editors.
For example, SRMAs can lead to inaccurate results because of biased inclusion or exclusion of study data, flawed analyses, or the exclusion of unpublished studies.
The problems with RCTs are numerous, not least of which is the heavy influence of the pharmaceutical industry on the design and execution of studies, particularly the larger, well-funded studies published in high-impact medical journals. Studies can be designed to produce a particular set of results, and often are, to the point that RCTs frequently fail to reflect real-world clinical outcomes.
“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines,” wrote Dr. Marcia Angell in her book The Truth About the Drug Companies: How They Deceive Us and What to Do About It. “I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of the New England Journal of Medicine (NEJM),” she continued.
Angell wrote that medical journals have become “primarily a marketing machine to sell drugs of dubious benefit,” and suggested that the pharmaceutical industry has so much wealth and power that it is able to co-opt any institution that might stand in its way. This, she says, includes Congress, the FDA, academic medical centers, “and the medical profession itself.”
Angell and her husband, Arnold Relman — a Harvard professor who also served as editor of the NEJM — warned of the undue influence of the pharmaceutical industry for years, as did many other doctors, researchers, and medical ethicists.
“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue… In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data,” Richard Horton, Editor-in-Chief of The Lancet, wrote in 2015.
In 2005, Stanford professor John P. Ioannidis wrote an article entitled “Why Most Published Research Findings Are False,” in which he states, “it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.”
In a 2014 paper, Ioannidis analyzed over 9,000 published meta-analyses in biomedicine and found that 1 in 5 were flawed beyond repair. Another 1 in 3 were redundant and unnecessary, while many others were decent but had “non-informative” evidence. Good and truly informative meta-analyses, he found, were a small minority (3%).
An analysis in 2011 looked at the overall quality of evidence behind 41 guidelines put out by the Infectious Diseases Society of America between 1994 and 2010. Only 14% of the guidelines were based on the supposed ‘gold standard’ of RCTs. Nearly 40% were based on expert opinion alone.
So where, you might ask, does that leave us when it comes to decision-making in healthcare?
Just as a jury is asked to review and evaluate all the evidence put forth during a trial, so too must clinicians look at all the best available evidence before them. We propose that researchers rely on the ‘totality of evidence’ and incorporate data from basic science, pharmacology, epidemiology, clinical experience, OCTs, RCTs, and SRMAs.
A common type of observational study is an OCT, in which investigators retrospectively assess health outcomes among groups of participants according to a research plan or protocol. The outcomes of subjects who received a medical intervention (such as a drug, device, or procedure) are compared to the outcomes of subjects who did not. In these comparisons, subjects are not randomly assigned to specific interventions by the investigator (as in a prospective RCT). Many discount the value of OCT findings out of an excessive concern that the results can be distorted by unmeasured confounders, such as certain characteristics or behaviors of the patients and their treating physicians.
Although such concerns are valid when relying on the results of a single OCT, the reality is that findings based on data from groups of OCTs are, on average, consistent with the findings from groups of RCTs. Unfortunately, in modern medicine, this fact is rarely taught to or appreciated by physicians and researchers.
“The whole art of medicine is in observation,” said Dr. William Osler, a Canadian physician often described as the father of modern medicine.
A study published in 2014 looked at healthcare outcomes assessed with observational study designs compared with those assessed with randomized trials. The researchers reported that: “On average, there is little evidence for significant effect estimate differences between observational studies and RCTs, regardless of specific observational study design, heterogeneity, or inclusion of studies of pharmacological interventions.”
The American Thoracic Society (ATS), in an official 2020 research statement, said: “Observational studies can provide evidence in representative and diverse patient populations. Quality observational studies should be sought in the development of ATS clinical practice guidelines, and in medical decision making.”