Reining in the nation’s runaway medical costs was an underlying theme of President Obama’s health-care reform platform. Citing projects like The Dartmouth Atlas of Health Care, which documented large gaps in the quality, costs, and outcomes of health services around the country, the administration’s health-care policy proponents warned of the immediate need for better value in our health-care system. In 2009, $1.1 billion of the President’s stimulus package was earmarked for comparative effectiveness research. Many in the oncology community initially viewed comparative effectiveness research as an overarching government program that would ultimately ration oncologists’ treatment options in order to save money.
Health-care policy experts have made a strong case that comparative effectiveness research is an important instrument for comparing the value of competing strategies so that patients, providers, and policy makers can be offered appropriate recommendations for optimal care. However, while acceptance of comparative effectiveness research is growing, operational as well as political and cultural challenges remain.
Simply defined, comparative effectiveness research is a methodology that attempts to frame the delivery of health care by comparing the benefits and harms of available diagnostic, prognostic, and therapeutic strategies in representative patients to define the most effective, safe, and cost-effective approach. To get a better understanding of comparative effectiveness research and its relationship with oncology services, The ASCO Post recently spoke with Gary Lyman, MD, Professor of Medicine at Duke University and the Duke Cancer Institute, where he is Director of the Comparative Effectiveness and Outcomes Research Program.
Informed Decisions, Rational Choices
How do randomized controlled trials fit in with comparative effectiveness research?
First off, it is important to note that comparative effectiveness research has been applied across the spectrum of cancer needs, from prevention, screening, and diagnosis all the way through supportive care and end-of-life issues. In today’s difficult and rapidly changing health-care environment, we need alternative sources of evidence to guide the evaluation and approval of new interventions while also addressing the need to make informed public health decisions. Randomized controlled clinical trials and meta-analyses of such trials are still the gold standard and backbone of comparative effectiveness research and the most definitive way to assess treatment efficacy.
However, the challenge with relying on randomized trials is that we simply don’t have trials that directly compare the options for the vast majority of clinical questions. Moreover, randomized controlled trials tend to be restricted in terms of eligibility and design, which limits the generalizability of their findings to the broader cancer population. To further evaluate effectiveness and assess less common or delayed toxicities in the broad cancer population, additional sources of evidence are needed to guide the evaluation and approval of new interventions and to better guide clinical decisions in the oncology setting.
The goal of comparative effectiveness research is to go beyond the boundaries of randomized trials and gather all the credible evidence on a given clinical question so that clinicians can make more rational choices in treatment selection. Comparative effectiveness research is not a license to do away with clinical trials; it is a way to make the best use of comparative data, and randomized trials will always be a component in that process.
Placing Value Over Cost
Linking cost savings with the central mission of comparative effectiveness research has perhaps given many in the oncology community the impression that they are going to be overly scrutinized about the costs of the care they deliver. Have we cleared up that concern?
Although we don’t specifically link cost savings to comparative effectiveness research, identifying interventions that provide the most value to patients is fundamental, and any cost benefits may certainly be a desired outcome. But cost is a relative issue. For instance, selecting the most appropriate targeted agent based on biomarkers associated with response may increase effectiveness while at the same time reducing harmful and expensive complications. So, more selective use of exciting yet expensive technologies may also reduce health-care costs by increasing the value of cancer care.
That said, the issue that oncologists have wrestled with when it comes to comparative effectiveness research is one of oversight. In other words, who gathers the evidence, and what do they do with it? The concern is that knowledgeable clinicians may not have downstream oversight in reviewing the evidence that will ultimately be used to formulate decisions about care.
Community oncologists want to know how much control policymakers or Congressional committees will have over their clinical care decisions. Those concerns still need to be addressed. Needless to say, there are some members of the oncology community who believe that comparative effectiveness research is a backdoor to rationing care, which it is not.
Most importantly, it is essential that the oncology community and our professional society, ASCO, have a prominent seat at the comparative effectiveness research table, defining the endpoints and the rigorous processes needed to interpret the evidence. Oncologists are the ones taking care of cancer patients in the clinic. If we don’t take the lead in this initiative to assess care, then other parties, such as payers or government policymakers, will. That would not be a good result, so we need to be at the head of this discussion.
When I began speaking about comparative effectiveness research several years ago, there was almost total pushback from oncology audiences. I don’t get that feeling anymore. The oncology community is beginning to truly understand that comparative effectiveness research is a tool for obtaining and evaluating the best real-world information available on real-world patients that can ultimately translate into better care.
Big Data
There’s been a lot of interest in the development of rapid learning systems since the release of the IOM’s report, The Learning Healthcare System. Does comparative effectiveness research dovetail with exciting initiatives such as ASCO’s CancerLinQ?
For one, the hope in integrating rapid learning health systems with comparative effectiveness research is that these systems will synchronize with and adapt to real-time data mining, which can serve as a comparative effectiveness research evidence base. Naturally, this type of large-scale system depends on widespread adoption of electronic health records, which is still a ways down the road.
It’s important to note that although it makes intuitive sense to mine real-time clinical data, it is vital to understand both the strengths and limitations of the tools you use to gather the evidence. Ultimately, the rapid learning system at the heart of ASCO’s CancerLinQ uses observational data, which needs careful quality control and can be biased by known as well as unknown confounding factors. That said, our traditional methods of gathering data from meetings and journals are out of date almost by the following week. Therefore, the goal of rapid learning systems such as CancerLinQ is to provide a robust data bank of clinically dynamic evidence that we’ll be able to tap into to see how patients across similar settings are responding to their treatments. This is but one part of the comparative effectiveness research framework, and like all rapidly culled observational data, it needs to be used wisely, with a full understanding of both its strengths and limitations.
Risk-to-Reward Equation
Will comparative effectiveness research have a demonstrable effect on the practice of oncology moving forward?
I think it already has. The challenge we still have to grapple with is rapidly escalating health-care costs. We all agree that assessing the value of the cancer care we deliver demands balancing effectiveness and safety within a risk-to-reward equation when it comes to treating our patients. So the value component of the equation is already embedded in our culture. Our next step forward is placing cost into that value equation. But we need to take the lead in that part of the discussion. And to that end, comparative effectiveness research is a tool we should embrace. ■
Disclosure: Dr. Lyman is a member of the ASCO Board of Directors.