There is no doubt that this is a halcyon period in oncology. The unraveling of the genome has been tremendously important and has finally helped us move treatment selection from an era of rational empiricism to one of refined, molecular prognostication.
In the care of breast cancer, the impact of our understanding of BRCA1 and BRCA2, and of the genes that predict response to agents targeting HER2/neu, is unquestioned. Similarly, in colorectal cancer, application of our understanding of thymidylate synthase and dihydropyrimidine dehydrogenase and their relationship to the efficacy and toxicity of the fluoropyrimidines is clear. There are innumerable examples of important genomic predictive and prognostic applications in the hematologic malignancies, using well-standardized techniques.
Two pharmaceutical companies approached the U.S. Food and Drug Administration (FDA) requesting that the labeling of their anti–epidermal growth factor receptor therapies be restricted to patients whose tumors express wild-type KRAS, because of the lack of response among tumors with mutated KRAS. There is also exciting information about the relevance of ALK rearrangements in a subset of lung cancers.
Making Old Mistakes
That said, I am increasingly concerned about the current stance on genomics in oncology, as we seem to be making old mistakes. I think back to the early days of the evolution of chemotherapy: novel agents were tested in a sequential fashion, progressing stepwise from phase I to phase II to phase III, after the presentation of encouraging preclinical information to decision networks. Support was provided by the National Institutes of Health and/or the pharmaceutical industry, usually with the provision of free drug and support for the costs of clinical trials. We now know the imperfections and abuses of that system, but it actually made sense overall.
So what’s the problem in the new, genomic era? First, we have a collection of laboratories with the capacity to provide “routine” genetic testing, offered to oncologists in either clinical practices or academic centers, and sometimes to patients themselves. In the former setting, where testing is ordered by oncologists, my concern is that the vast majority of algorithms for the use of gene-expression studies remain unproven, supported only by level 2/3 or presumptive evidence.
Need for Standardization
In addition, there is no national standardization of the assays for gene expression, although we know that the quality of the techniques employed really does matter. Yet these laboratories seek reimbursement routinely, either from centers engaged in investigative work or from patients directly, at great cost, and our system seems content to go along with this. This seems sadly reminiscent of the early days of direct chemosensitivity testing.
Would it not make more sense for our national system of health care to require the same type of corporate investment that has characterized the development of unproven anticancer agents? Should there not be a system of standardization, both of the assay systems and of the clinical trial mechanisms employed to assess them? Can someone explain to me the evidence-based standard algorithm for routine mathematical evaluation of gene-expression studies, including a way of dealing with the range of expressed mutations? Once that is done, perhaps we can start to consider a standard approach to the clinical interpretation of next-generation sequencing.
Need for Regulation
We have recently heard that the FDA has taken a stance on direct-to-patient marketing of gene-expression studies. That action may be the first regulatory step to limit such implementation and its potential to create anxiety and panic among uninformed lay populations who have not had the benefit of counseling on the implications of these studies.
As is widely recognized, there is increasing urgency for us to think carefully about profligate expenditure in health care, which is becoming a huge component of the national debt. If we consider that more than 200,000 patients are diagnosed with lung cancer each year, and that a typical charge for a panel of genetic testing is in the range of $1,500 to $10,000 or more, it really isn’t rocket science to work out that the potential costs to the nation, for unproven sets of tests, will be vast. It is high time that some form of regulation, predicated on technology and interpretation standards as well as level 1 clinical evidence, be introduced.
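To put rough numbers on that claim, and assuming for illustration that each of those 200,000 patients received just one panel at the quoted prices, the arithmetic works out to roughly 200,000 × $1,500, or about $300 million per year at the low end, and 200,000 × $10,000, or about $2 billion at the high end, before any repeat or confirmatory testing is considered.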
Speaking of the FDA, I note that it has recently granted authorization for a next-generation sequencer, and it will be important to see whether and how the agency plans to be involved in quality assurance and standardization as this complex and lucrative technology evolves.
Need for Speed
The other domain of serious concern is the need for speed. Some of those at the forefront of genomic research seem to be in a real hurry to publish preliminary data—we see this on a regular basis in grant applications in which “hypothesis-generating” underpowered genomic studies are added to well-designed clinical trials. The purported intent of these translational studies is to help to model future studies, but, in fact, they have so little statistical power that they are unlikely to be informative at all.
Against a background in which we are just beginning to understand the promiscuity of interactions between targeted therapies and gene pathways, as well as the extraordinarily complex interactions and relationships between genes and different genetic pathways, we are seeing increased use of trial designs that are extrapolated from fundamental Bayesian theory. The Bayesians have made great contributions to our understanding of parsimonious use of limited patient resources, but it may well be that the range of discontinuation designs, complicated by inadequate understanding of the aforementioned pathways and interactions, will lead us to discard important agents. It seems to me that this is exactly the time not to underpower studies in haste, but rather to secure clearly defined, reproducible pathway information before experimenting with less conventional, statistically weak trial algorithms.
Need for Rational Thinking
In closing, let me be very clear: I strongly support the importance of learning how to apply and implement routine genetic testing (of patients and tumor samples). At the Levine Cancer Institute, we have convened a working party of clinicians, molecular biologists, ethicists, and genomic biostatisticians to try to develop a rational stance and incorporate its principles into our system of electronic treatment pathways.
However, at a national level, understanding that our budget is under siege and that health-care dollars are not limitless, I simply hope that, as a community, we raise the level of rational thinking about how to do this well. I also hope that government will involve real experts before setting up its system of oversight and regulation. ■
Dr. Raghavan is President, Levine Cancer Institute, Charlotte, North Carolina.
Disclaimer: This commentary represents the views of the author and may not necessarily reflect the views of ASCO.