Randomized Trials vs Meta-analyses: Which Is the Better Bet?


Heidi Nelson, MD

Natalie G. Coburn, MD, MPH

Randomized Trials and Meta-analyses

[Randomized controlled trials] provide the highest level of evidence because they contain the least amount of bias.

—Heidi Nelson, MD

Most people underestimate the importance of the systematic review behind [meta-analysis]. I think this is where the true power of the methodology comes into play.

—Natalie G. Coburn, MD, MPH

Two surgical oncology experts squared off in a “Great Debate” at the 2014 Society of Surgical Oncology (SSO) Annual Cancer Symposium in Phoenix. Heidi Nelson, MD, Professor of Surgery at the Mayo Clinic in Rochester, Minnesota, argued for the superiority of randomized controlled trials in providing a “higher level of medical evidence,” while Natalie G. Coburn, MD, MPH, Associate Professor and Division Head of General Surgery at Sunnybrook Health Sciences Centre, University of Toronto, made the case for why meta-analyses have the upper hand.1

The Bias Factor

Dr. Nelson outlined what she called “critical design issues” with meta-analyses, many of which have to do with bias. First, all meta-analyses are retrospective, which itself introduces bias. Second, the studies for inclusion must be identified and selected, and there will be heterogeneity among the selected studies.

“One laparoscopic study might not be asking the same question as another laparoscopic study,” Dr. Nelson explained by way of example. “They may have differences in terms of the specificity of the studies.”

Third, there is the matter of access to the information, which introduces what Dr. Nelson referred to as data availability bias.

Dr. Nelson led the Clinical Outcomes of Surgical Therapy (COST) Study Group for a trial that compared laparoscopically assisted and open colectomy for colon cancer.2 “At least once a year, I get a request from someone who is doing a meta-analysis and they want our data from the COST trial,” she said. “It’s a lot of work to try to get that data, get it formatted, explain what it means, and send it out. Then you just cross your fingers and have to trust that [the meta-analysis authors] understand it and that they do the right thing with it.”

Dr. Nelson highlighted results of a study published in 2012 that assessed publication bias, selection bias, and unavailable data in meta-analyses. In more than half the cases (16 of 31 meta-analyses), the requested individual participant data could not be obtained, yet only 31% of these meta-analyses mentioned this as a potential limitation. Even among the meta-analyses that did obtain data, a mean of 87% of the requested trial data was retrieved.3

“The reality is that most of the time, people don’t get the data that they think they are going to get, or they don’t get enough of it,” she said.

Another limitation of a meta-analysis is publication bias. Dr. Nelson cited data from a 2008 paper that looked at the selective publication of U.S. Food and Drug Administration–registered antidepressant trials and how that bias influenced efficacy. She explained that 97% of the studies published showed positive results, whereas only 12% of studies with negative results made it to print. The absence of nonpublished data increased the positive effect anywhere from 11% to 69%.4

“If you don’t have that negative data in the literature to examine, you automatically inflate the positive results, and there’s no way you can compensate for that,” she said.
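The inflation Dr. Nelson describes can be shown with a minimal simulation. The numbers below are invented for illustration: 40 hypothetical small trials of a treatment with a modest true effect, where only trials with a positive estimate get “published.”

```python
import random

random.seed(0)

# Simulate 40 small trials of a treatment with a true effect of 0.20.
# Each trial reports a noisy estimate of that effect.
true_effect = 0.20
trials = [random.gauss(true_effect, 0.25) for _ in range(40)]

# Naive "publication" rule: only trials with a positive estimate appear in print.
published = [t for t in trials if t > 0]

all_mean = sum(trials) / len(trials)
pub_mean = sum(published) / len(published)

print(f"mean effect, all trials:     {all_mean:.2f}")
print(f"mean effect, published only: {pub_mean:.2f}")
```

Because the filtered-out trials are exactly the unfavorable ones, the pooled estimate from the published subset is biased upward, and no amount of careful pooling of the published trials alone can recover the true effect.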

Ultimately, a meta-analysis is only as good as the contributing randomized controlled trials that it is evaluating. Dr. Nelson pointed out that “if you have a bunch of poor-quality randomized, controlled trials, it doesn’t make the meta-analysis data any better. Low quality plus low quality doesn’t equal high quality.”

The primary difference between randomized controlled trials and meta-analyses, she stated, is that the former “provide the highest level of evidence because they contain the least amount of bias. Randomized controlled trials reduce bias, while meta-analyses increase bias.”

If randomized controlled trials were designed well and conducted properly, she stated, their results would concur, and there would be no need for meta-analysis.

Finally, Dr. Nelson likened reading and interpreting a meta-analysis to “an insurance policy. You have to be able to get down to the fine print,” adding, “it’s not only some of the fine print; it’s all the fine print. You have to be able to sort it out.”

All About the Methodology

“When you think of meta-analyses, you think of the forest plot and the beautiful set of lines and what that means to you,” Dr. Coburn said. “But what most people underestimate is the importance of the systematic review behind it. I think this is where the true power of the methodology comes into play.”

Dr. Coburn pointed out that when the levels for evidence-based care are laid out in the pyramid formation, systematic reviews/meta-analyses are at the top, followed by randomized controlled trials, cohort studies, case-control studies, case series/case reports, and editorials/expert opinion.

Dr. Coburn acknowledged that randomized controlled trials deserve a place near the top of the pyramid because of their strengths, such as the ability to minimize patient selection bias and confounding bias, a superior ability to determine the effects of an intervention, and their tendency to encourage professional collaboration. But the idea that a randomized controlled trial is better than a meta-analysis stems from “a misunderstanding and a bit of misinterpretation of how the methodology works for meta-analyses and systematic reviews,” she said.

A systematic review has very specific steps, beginning with the establishment of a question. Once that has been set, a search strategy can be designed. It’s during this step that publication bias can be reduced using multiple sources such as trial registries, conference proceedings, published literature, and contacting the primary author.

“You can go after these things and you can minimize the chance of missing negative [trial results],” Dr. Coburn explained.

Next comes study selection, and Dr. Coburn emphasized that “this is not fantasy football. You don’t just get to pick your favorite studies. There’s a methodology here,” and that includes producing a CONSORT (CONsolidated Standards Of Reporting Trials) or PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) diagram that explains which studies were included or excluded and why. The idea is that if future investigators revisit the meta-analysis, they will be able to reproduce the results, she explained.

The subsequent step is to assess the quality of studies using a grading system to evaluate issues such as bias, concealment, and follow-up. The meta-analysis authors will then select and extract variables. Dr. Coburn referred to an editorial by Dr. Nelson in which she wrote, “[small, inadequate randomized clinical trials] … can be combined via a meta-analysis to gain a more precise estimate of potential impact.”5 These combined data allow for subgroup and secondary analysis, which sometimes cannot be done in a randomized controlled trial that was powered for a single primary outcome.
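The combining step can be sketched with the standard fixed-effect, inverse-variance model, in which each trial is weighted by the inverse of its squared standard error. The trial estimates below are hypothetical, chosen only to show how pooling several imprecise trials yields a more precise combined estimate.

```python
import math

# Hypothetical trials: (effect estimate, standard error).
# Each small trial is imprecise on its own.
trials = [(0.30, 0.20), (0.15, 0.25), (0.25, 0.18), (0.10, 0.30)]

# Fixed-effect inverse-variance pooling: weight each trial by 1 / SE^2.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled estimate.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
print(f"pooled SE {pooled_se:.3f} is smaller than any single trial's SE")
```

The pooled standard error here is smaller than that of any individual trial, which is the “more precise estimate of potential impact” that combining small trials buys.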

“Often, we come across things in post-hoc analysis that were simply not powered for in the original trial,” Dr. Coburn said. “How does this work? What’s the hocus pocus behind this?” she added. “It’s driven by numbers. The more patients you put in, the tighter your confidence interval. The tighter your confidence interval, the lower your P value, which lowers the risk that you got to these results by chance alone.”
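The “driven by numbers” point follows from the fact that the standard error of a mean shrinks as 1/√n, so the width of a 95% confidence interval does too. A quick numeric sketch, assuming a measurement with a standard deviation of 1.0:

```python
import math

def ci_width(n, sd=1.0):
    """Width of a 95% confidence interval for a mean,
    given sample size n and standard deviation sd."""
    return 2 * 1.96 * sd / math.sqrt(n)

# Pooling trials multiplies n; quadrupling n halves the interval,
# since the standard error shrinks as 1 / sqrt(n).
for n in (100, 400, 2000):
    print(f"n = {n:>4}: 95% CI width = {ci_width(n):.3f}")
```

Quadrupling the pooled sample size halves the confidence interval, which in turn pushes a genuine effect's P value down.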

Inadequate power is a common weakness of a randomized controlled trial. Dr. Coburn cited a 2007 study that looked at the statistical power of negative randomized controlled trials presented at ASCO Annual Meetings from 1995 to 2003.6 According to that research, “more than half of negative randomized controlled trials presented at ASCO Annual Meetings do not have an adequate sample to detect a medium-size treatment effect,” she explained. But “only 10% of the abstracts stated that they were underpowered. The rest just reported themselves as being negative. And I wonder as I watch some of the presentations [at the SSO meeting] this past week, why we’re so quick to call a trial negative, and why we often don’t admit they are just underpowered.”
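The underpowering problem can be made concrete with a standard normal-approximation power calculation for a two-arm trial. The function below is a textbook sketch, not the method used in the cited study; the sample sizes are illustrative.

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(n_per_arm, d, z_alpha=1.96):
    """Approximate power of a two-arm comparison of means for a
    standardized effect size d (normal approximation, two-sided
    alpha = 0.05)."""
    noncentrality = d * math.sqrt(n_per_arm / 2)
    return normal_cdf(noncentrality - z_alpha)

# A "medium" standardized effect (d = 0.5): 30 patients per arm is
# badly underpowered, and roughly 64 per arm are needed to reach the
# conventional 80% power.
for n in (30, 64, 100):
    print(f"n = {n:>3} per arm: power = {power_two_sample(n, 0.5):.2f}")
```

A “negative” trial with 30 patients per arm has roughly a coin flip's chance of detecting a real medium-sized effect, which is Dr. Coburn's point about calling such trials negative rather than underpowered.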

Finally, meta-analyses are more likely to reflect actual practice vs the strict protocols that guide a randomized controlled trial, Dr. Coburn said. By evaluating results from multiple trials done globally, meta-analyses give clinicians the chance to see if those findings apply outside the trial and in a real-world setting.


Dr. Nelson responded to Dr. Coburn’s argument by emphasizing the strengths of randomized controlled trials, specifically:

  • Focused hypothesis
  • Rigorous inclusion and exclusion criteria
  • Adequate sample to avoid overestimates or underestimates of effect size
  • Randomization to eliminate bias
  • Quality assurance and quality controls

“When you do a post-hoc analysis, you can’t tell anything about quality assurance and quality control,” Dr. Nelson said. “But when you do a prospective, randomized trial, you can actually look at [quality]. There’s nothing about that in a meta-analysis.” She pointed out that a properly randomized trial provides definitive evidence, answers specific questions, has a low risk for misinterpretation, and a low risk for selection/publication bias.

“It might be nice to have lots of fireworks [with a meta-analysis]. But one well done randomized trial is going to address the … issue,” she said.

To illustrate the influence of meta-analysis, Dr. Coburn described studies done on using corticosteroids to reduce the risk of death in preterm delivery. She explained that the topic had been covered in 2 decades of studies during the 1970s and 1980s, including a dozen randomized controlled trials. Yet the uptake of corticosteroids for this condition among gynecologists was only 20% to 40% by the early 1990s.

It was only after a 1991 meta-analysis and a 1994 National Institutes of Health consensus conference that it became standard of care to give a single course of corticosteroids in preterm labor to reduce the risk of death and respiratory distress syndrome in infants.7,8 “I know that’s a bit far off oncology,” she said, “but I think [this illustrates] the power of meta-
analysis.” ■

Disclosure: Drs. Nelson and Coburn reported no potential conflicts of interest.


1. Nelson H, Coburn NG: Randomized clinical trials versus meta-analysis: Which is a higher level of medical evidence? 2014 SSO Annual Cancer Symposium. Presented March 15, 2014.

2. Clinical Outcomes of Surgical Therapy Study Group: A comparison of laparoscopically assisted and open colectomy for colon cancer. N Engl J Med 350:2050-2059, 2004.

3. Ahmed I, Sutton AJ, Riley RD: Assessment of publication bias, selection bias, and unavailable data in meta-analyses using individual participant data: A database survey. BMJ 344:d7762, 2012.

4. Turner EH, Matthews AM, Linardatos E, et al: Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 358:252-260, 2008.

5. Nelson H, Ballman K: Achieving the right volume of randomized controlled trials. Ann Surg 258:208-209, 2013.

6. Bedard PL, Krzyzanowska MK, et al: Statistical power of negative randomized controlled trials presented at American Society for Clinical Oncology annual meetings. J Clin Oncol 25:3482-3487, 2007.

7. Crowley P, Chalmers I, Keirse MJ: The effects of corticosteroid administration before preterm delivery: An overview of the evidence from controlled trials. Br J Obstet Gynaecol 97:11-25, 1990.

8. National Institutes of Health Consensus Development Conference Statement: The Effect of Corticosteroids for Fetal Maturation on Perinatal Outcomes. February 28–March 2, 1994. Available at