
A Blueprint for Drug/Diagnostic Development: Expansion and Use of Curated Genetic Databases





In a continuation of a 2014 conference that explored regulatory considerations and strategies for next-generation sequencing, the Friends of Cancer Research, with support from Alexandria Real Estate Equities, Inc, Pasadena, California, met to discuss the challenges of coordinating drug and diagnostic development, specifically the use of curated genetic databases.

Ellen V. Sigal, PhD, Chair and Founder of Friends of Cancer Research, introduced the gathering by noting that high-throughput genomic technologies, including next-generation sequencing, allow for rapid assessment of many analytes and can help predict patients’ risk of developing certain cancers and how they might respond to therapies. “There are many advantages of high-throughput sequencing over that of a single analyte, but demonstrating its adequacy for clinical use is challenging, particularly the tension between the need to ensure validity and the practical limitations of submitting data for every possible variant.”

She added that in 2011, the U.S. Food and Drug Administration (FDA) released a draft guidance on the codevelopment of targeted therapies and companion diagnostics. The next year, Friends of Cancer Research identified the aspects of codevelopment that would most benefit from increased clarity in such a guidance. But much work remains to be done. “This is a really big deal,” Dr. Sigal said. “Patients always come first, so we have to get it right.”

A curated genetic database focuses on genomes that have been completely sequenced and that have an active research community to contribute and oversee gene-specific data. Information includes nomenclature, chromosomal localization, gene products and their attributes (eg, protein interactions), associated markers, phenotypes, and interactions, as well as links to citations, sequences, variation details, maps, expression reports, homologs, protein domain content, and external databases.
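
To make these elements concrete, a minimal sketch of how such a record might be structured follows; it is illustrative only, and every field name is hypothetical rather than drawn from any particular database schema.

```python
# Hypothetical sketch of a curated gene record; field names are illustrative
# and do not reflect any specific database schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CuratedGeneRecord:
    symbol: str                        # official nomenclature, eg "EGFR"
    chromosomal_location: str          # eg "7p11.2"
    gene_products: List[str]           # transcripts/proteins and their attributes
    protein_interactions: List[str]    # interacting partners
    associated_phenotypes: List[str]   # markers, phenotypes, and interactions
    citations: List[str]               # literature references
    external_links: Dict[str, str] = field(default_factory=dict)
    # external_links can point to sequences, maps, expression reports,
    # homologs, protein domain content, and other databases.
```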

A significant stumbling block to this work is that there are as yet no evidence-based guidelines about sample collection, preparation, analysis, clinical reporting, or data storage and protection, said Lynne Zydowsky, PhD, President and Cofounder, Alexandria Summit, and Chief Science Advisor to the CEO, Alexandria Real Estate Equities, Inc.

Minimum Core Data Elements

The two sets of panelists agreed that a significant challenge lies in defining the minimum core data elements required to interpret the clinical significance of variants. In other words, how should information be collected and processed to capture the functional consequences of variants and other clinical details?

Matthew Marton, PhD, Director, Sr. Principal Scientist, Translational Biomarkers, Merck & Co, talked about the relationship between genotypes and phenotypes. “Both germline and somatic variants can be detected in tumors. The latter are the usual targets of cancer therapy, although some germline variants also are targetable (eg, BRCA1/2 variants with PARP inhibitors).”

Zivana Tezak, PhD, Associate Director for Science and Technology, Office of In Vitro Diagnostics and Radiological Health, FDA Center for Devices and Radiological Health, added that risk variants for hereditary cancer are classified in the same way as those for other inherited diseases: benign, likely benign, unknown, likely pathogenic, and pathogenic. Somatic variants, however, require a different classification system from germline variants: they are typically classified as cancer “driver” or “passenger” mutations, or as variants whose role in treatment response is not yet known. Driver variants should be included in genetic databases, but their interpretation is affected by factors such as clonal vs subclonal status and the level of copy-number amplification.
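
For illustration, the two classification schemes could be encoded as separate enumerations; this is a minimal sketch whose category names simply mirror the terms used above.

```python
# Minimal sketch of the two classification schemes described above:
# germline variants on a pathogenicity scale, somatic variants as
# driver/passenger/unknown. Purely illustrative.
from enum import Enum


class GermlineClassification(Enum):
    BENIGN = "benign"
    LIKELY_BENIGN = "likely benign"
    UNKNOWN = "unknown"
    LIKELY_PATHOGENIC = "likely pathogenic"
    PATHOGENIC = "pathogenic"


class SomaticClassification(Enum):
    DRIVER = "driver"        # interpretation also depends on clonal vs subclonal status
    PASSENGER = "passenger"  # and the level of copy-number amplification
    UNKNOWN = "unknown"      # role in treatment response not established
```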

Dr. Marton added that biomarkers or variants can be grouped into classes and interpreted on the basis of clinical or preclinical data (eg, all EGFR exon 19 deletions). Variant groups with clinically relevant phenotypes are associated with factors such as diagnosis, prognosis, prediction of response to therapy, adverse events, and demography (age, gender, ethnicity).

All this would appear to be part of a significant move forward, he said. But there’s a big hitch: ensuring the validity and comparability of the data, which depend on factors such as sequencing technology, sample type (blood, tissue, etc), and analytic performance parameters. In addition, there are as yet no assay- and platform-specific standards to ensure adequate analytic performance for different variant types, such as small or large deletions, insertions, and rearrangements, or for variants in challenging genomic contexts.
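
A rough sketch of how such comparability metadata might be recorded alongside each variant is shown below; all field names are hypothetical and stand in for whatever a real database would actually require.

```python
# Hypothetical sketch of the metadata that could accompany a variant entry
# so that results from different assays remain comparable. Field names are
# illustrative, not a proposed standard.
from dataclasses import dataclass
from typing import List


@dataclass
class VariantEntryMetadata:
    variant_description: str       # eg "EGFR exon 19 deletion"
    sequencing_technology: str     # platform and assay used to detect the variant
    sample_type: str               # blood, fresh-frozen tissue, FFPE, etc.
    analytic_sensitivity: float    # performance for this variant class (eg indels)
    analytic_specificity: float
    genomic_context_notes: str     # eg repetitive region, large rearrangement
    evidence_citations: List[str]  # literature supporting the interpretation
    submitting_laboratory: str     # useful when discordant calls must be adjudicated
```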

Roman Yelensky, PhD, Vice President, Biomarker and Companion Diagnostic Development, Foundation Medicine, Cambridge, Massachusetts, said that if these parameters are captured when a variant is submitted, a database would be consistent and accurate. Some variants fall into clear categories of driver or passenger mutations, but others have conflicting classifications among researchers, clinicians, and laboratories. This happens for a variety of reasons, but the fact that it happens at all means that a mechanism must be created to adjudicate variants, particularly those with clinical implications, those that affect regulatory and reimbursement decisions, and those that affect treatment and/or prognosis.

One way to do this, he said (and other panelists agreed), is to establish a group of subject matter experts to arbitrate inconsistent results and finalize the variant assignment based on the most current evidence. Such efforts are already underway for germline variant classification, where groups of scientists have been organized to apply standards. They are trying to develop a “master” classification for an individual or group of variants by reviewing pertinent literature. The classification system is regularly updated, with an ultimate goal of creating standards that can be applied broadly among databases to minimize variability. There are, however, still significant discrepancies.

Jeff Allen, PhD, Friends of Cancer Research Executive Director, put it succinctly: “Since there is such enormous variability in gene panels, what is their ultimate use?”

“Anthem health plans do cover individual gene tests when clinically indicated, regardless of the testing methodology used,” said Jennifer Malin, MD, PhD, Staff Vice President, Clinical Strategy, Anthem, Inc. “However, entire panels that include tests that are investigational and not medically necessary are not covered.”

The Centers for Medicare & Medicaid Services (CMS) echoed that sentiment. “We need analytical and clinical validity, and we, as a public health agency, support this type of research,” said Tamara Syrek Jensen, JD, Director for the Coverage and Analysis Group at CMS.

Framework to Evaluate Strength of Evidence

Girish Putcha, MD, PhD, Director, Laboratory Science, Palmetto GBA, Columbia, South Carolina, said that assessing levels of evidence for genotype-phenotype associations is essential to the appropriate application of any proposed database. He suggested a grading system based on clinical and preclinical evidence. “Classification for a given variant is not intended to be static and will change as new evidence becomes available. Therefore, we have to continually re-evaluate it.”

The system he proposed:

  • Level 1A: FDA-approved for a patient’s tumor type with indication and outcomes associated with a specific biomarker
  • Level 1B: Adequately powered prospective study with biomarker selection/stratification or a meta-analysis, demonstrating that biomarker predicts tumor response (or resistance) to a drug or that a drug is clinically more or less effective
  • Level 2A: Robust demonstration that a biomarker is associated with tumor response or resistance to a drug in a patient’s tumor type
  • Level 2B: Single or few unusual responders showing a biomarker associated with response or resistance to a drug
  • Level 3A: Available clinical data demonstrating that a biomarker predicts tumor response to drug in a different tumor type
  • Level 3B: Preclinical data demonstrating that a biomarker predicts cell-based response to a drug
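
As a purely illustrative aid, the proposed hierarchy could be represented as a simple ordered lookup; the short descriptions below paraphrase the list above and carry no regulatory standing.

```python
# Illustrative encoding of the proposed evidence levels; descriptions paraphrase
# the list above. "1A" is the strongest level, "3B" the weakest.
EVIDENCE_LEVELS = {
    "1A": "FDA-approved in the patient's tumor type, outcomes tied to the biomarker",
    "1B": "Adequately powered prospective study or meta-analysis with biomarker selection",
    "2A": "Robust association with response or resistance in the patient's tumor type",
    "2B": "Single or few unusual responders showing the association",
    "3A": "Clinical data from a different tumor type",
    "3B": "Preclinical (cell-based) data only",
}


def stronger_evidence(level_a: str, level_b: str) -> bool:
    """Return True if level_a represents stronger evidence than level_b."""
    order = list(EVIDENCE_LEVELS)  # insertion order: strongest first
    return order.index(level_a) < order.index(level_b)
```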

Panelists noted several problems with the system. First, words like “robust” or “adequate” are essentially meaningless. Second, “statistically significant” is always a tricky phrase, especially in rare diseases where there are too few cases to mean much, noted Mya Thomae, RAC, CQA, Vice President of Regulatory Affairs, Illumina, Inc. Third, levels of evidence may not be mutually exclusive. Fourth, clinical validity has never been adequately defined. Finally, characterization of the variants is not clear.

Database Use in Context

“What actual good are these databases, and what can be done with them?” asked Dr. Yelensky. Even if all challenges are addressed and met, how can the data be used? There are three possibilities, all of which have problems: scientific research (including publication), regulatory approval, and reimbursement.

“Associations with level 3 evidence could be used for further investigation and early clinical trials. Level 2 associations could support clinical validity, although FDA review would be required to assess the need for and feasibility of additional studies. Associations with level 1 evidence may not require additional FDA review, only analytic validation of variant detection; they are sufficient for diagnosis and reimbursement,” he said.

Still, such database use remains an iffy proposition. For example, is a level-based framework appropriate to guide regulatory approval, and how could it translate to other contexts, such as reimbursement decisions? Even more problematic, if multiple databases meet FDA requirements but contain discordant results, how would the FDA resolve the discrepancy, and how would products that relied on those data be affected?

Larry Norton, MD, Deputy Physician-in-Chief, Breast Cancer Programs and Medical Director, Evelyn H. Lauder Breast Center, Memorial Sloan Kettering Cancer Center, said, “If you don’t have prospective randomized trials, you don’t really have data.” He then immediately acknowledged the impracticality of large trials in the current climate of individual and small-number mutations for any given cancer.

Janet Woodcock, MD, Director, FDA Center for Drug Evaluation and Research, agreed in a slightly different way: “Nonrandomized trials have led us down the garden path many times in the past, so now we need additional ways to generate large-scale knowledge. Variability is rife, and data can differ depending on the tests used. We need similar answers to the same questions posed in different ways.”

Data Sharing and Publicly Available Databases

Shashikant Kulkarni, MS, PhD, Director, Cytogenetics and Molecular Pathology, Washington University School of Medicine, noted that a significant amount of data generated from genetic testing is not publicly available for two major reasons. First, clinical laboratories have limited resources to collate and upload their findings. Second, sponsors who build proprietary databases are reluctant to go public.

“Given that accurate interpretation of genomic data is essential to patients, developing incentives to help all laboratories [large and small, public and private] to share data should make it easier for everyone to perform more effective diagnostic tests,” he said.

To this end, he suggested the following measures:

  • Establish or upgrade databases to include the costs of creating pipelines that facilitate data transfer.
  • Give contributors to public databases access to priority review paths to assist in intensive development and accelerated review.
  • Provide competitive protection (similar to patenting) that confers a time-bound competitive advantage.
  • Share data published in peer-reviewed literature with at least one publicly available database.

Dr. Kulkarni noted that recent FDA programs to expedite the development and review of new drugs (Fast Track designation, Breakthrough Therapy designation, accelerated approval, and priority review) have been a boon to the treatment of serious and life-threatening diseases, especially in oncology, where targeted therapy depends heavily on codevelopment of central data exchanges, a process that cannot be accomplished within the compressed time frame of these faster development tracks.

However, an expedited review path could be created to ensure that the drug and central data exchanges reach the market at close to the same time. To this end, aggregated publicly available databases that support the clinical relevance of variants in the central data exchange genes may enable cleared or approved products to reach the market quickly. Including data submission requirements as a qualifying criterion for expedited review could help establish a feedback loop and give sponsors the framework they need to share proprietary data sources. ■

Disclosure: Drs. Sigal, Zydowsky, Tezak, Yelensky, Allen, Malin, Putcha, Norton, Woodcock, and Kulkarni reported no potential conflicts of interest. Dr. Marton is an employee and stock owner of Merck. Ms. Thomae is an employee of Illumina, Inc.
