Cancer Informatics: A Future Necessity, but Challenges Abound


The National Cancer Policy Forum of the Institute of Medicine (IOM) recently convened a workshop on cancer informatics to examine and discuss the needs and challenges facing biomedical researchers, challenges that will in turn affect the way oncology is practiced in the future.

“This is a time of huge scientific opportunity, but significant challenges abound in collection, integration, analysis, and exchange of biomedical data. The problem is growing steadily worse as the amount of data increases exponentially,” said Sharon B. Murphy, MD, Scholar-in-Residence, IOM Board on Health Care Services, and Chair of the workshop.

Workshop objectives were to frame the problem and assess the need for a system of cancer informatics, to raise awareness about the challenges, gaps, and opportunities, and to discuss a vision for transforming the enterprise.

Issues Are Many and Complex

“Because of the opportunities provided by molecular understanding of disease, we now have transformative potential for cancer treatment and prevention. Leaders in biomedicine want to link research and care into a seamless process by using advances in information technology [IT],” said Marcia A. Kean, MBA, Chairman of Strategic Initiatives, Feinstein Kean Healthcare, Cambridge, Massachusetts.

But, she added, practitioners need evidence-based treatments, which in turn require new information systems and the reconfiguration of much existing data. These systems must allow for data liquidity—that is, the rapid, seamless, secure exchange of standards-based information among authorized individual and institutional senders and recipients (see the sketch following this list), which will:

  • Allow clinicians to have access to all the data they need for decision-making and identification of patients at risk for poor outcomes
  • Allow researchers access to data needed to support discovery of molecular signatures that identify populations at risk, are predictive of response to therapy, provide prognosis and risk of recurrence information, and facilitate identification of new targets by drug developers that can be used for nonresponders
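
To make the idea concrete, here is a minimal Python sketch of what standards-based, authorized exchange might look like. The recipient registry, function name, and FHIR-style payload are illustrative assumptions, not a description of any system discussed at the workshop.

```python
# Minimal sketch of "data liquidity": a standards-based payload released
# only to authorized recipients. The registry and record shape below are
# hypothetical, loosely modeled on FHIR-style JSON resources.
import json

AUTHORIZED_RECIPIENTS = {"hospital-a.example.org", "registry-b.example.org"}

def release_record(record: dict, recipient: str) -> str:
    """Serialize a record for exchange, refusing unauthorized recipients."""
    if recipient not in AUTHORIZED_RECIPIENTS:
        raise PermissionError(f"{recipient} is not an authorized recipient")
    # A shared, agreed-upon schema is what makes the data "liquid."
    return json.dumps(record)

observation = {
    "resourceType": "Observation",          # FHIR-style resource label
    "code": {"text": "clinical stage group"},
    "valueString": "Stage II",
}
print(release_record(observation, "hospital-a.example.org"))
```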

Lawrence N. Shulman, MD, Chief Medical Officer, Dana-Farber Cancer Institute, said that the old days are over. “We are at a point in cancer research where we have the opportunity for a truly transformative approach that will accelerate not only our understanding of the biology of cancer, but also development of new, more effective therapies.”

Critical Challenges

There are technical obstacles as well, said Ms. Kean. First, the sheer quantity of data is overwhelming. Second, data are not uniform: there are no standard vocabularies, which makes it difficult to translate from one discipline to another. Third, software vendors have no incentive to build interconnected systems that would undermine their profit potential, and IT systems developed in academia are site-specific and cannot serve a larger enterprise. Fourth, oncology research increasingly hinges on deciphering the intricate networks of molecular pathways that predict response to treatment; this creates a growing need to integrate genetics, genomics, proteomics, and other fields into new drug development, which in turn demands new computational tools and algorithms.
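
The vocabulary problem in particular lends itself to a small illustration. The toy Python sketch below, with an entirely hypothetical mapping table, shows why two sites cannot pool data until their local terms are mapped to a shared code system (here, an ICD-10-style code).

```python
# Toy illustration of the missing-standard-vocabulary problem: two sites
# record the same diagnosis under different local terms, so pooling their
# data requires an explicit mapping to a shared code system.
# The mapping table below is hypothetical.
LOCAL_TO_STANDARD = {
    "site_a": {"IDC breast": "C50.9"},             # ICD-10: malignant neoplasm of breast
    "site_b": {"invasive ductal ca": "C50.9"},
}

def to_standard(site: str, local_term: str) -> str:
    """Translate a site-specific term into the shared code, or fail loudly."""
    try:
        return LOCAL_TO_STANDARD[site][local_term]
    except KeyError:
        raise ValueError(f"no standard mapping for {local_term!r} at {site}")

# The two sites' records become comparable only after translation:
assert to_standard("site_a", "IDC breast") == to_standard("site_b", "invasive ductal ca")
```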

George Poste, DVM, PhD, Del E. Webb Chair in Health Innovation, Arizona State University, noted that in biomedical research, raw data are increasingly cheap, but if they are ill-defined, not standardized, and statistically underpowered, they are of little value. He described the critical challenges:

  • Negative findings are often highly instructive but rarely reported.
  • Discovery phase knowledge is accelerating, but there is little successful translation to clinical practice.
  • Products with uncertain and/or limited efficacy are a continuing problem.
  • Powerful research tools do not compensate for poor sampling and weak analytical and statistical rigor.

Health IT and Electronic Health Records

Electronic health records are generally looked on with favor. Their advantages are the amount and variety of data they can store and their ready amenability to analysis.

But the many gaps in privacy protection need to be addressed. Deven McGraw, JD, LLM, MPH, Director of the Health Privacy Project, Center for Democracy and Technology, Washington, DC, said that if privacy and security are built into electronic health records from the outset, consumers and professionals will be more likely to adopt them.

She said that the Health Insurance Portability and Accountability Act (HIPAA) in its current form will not be sufficient. It is a foundation on which to base new privacy protections, but it was designed to regulate only the sharing of information among health-care providers. New privacy policies need to be wider and deeper. Policymakers should focus on consistency of regulations across governing jurisdictions, strengthening federal law to go beyond storage and transmission, consumer participation in and control of what is contained in electronic health records, and control of marketing and commercial use.

Last year, the Centers for Medicare & Medicaid Services (CMS) began reimbursing providers for adopting systems that support electronic health records. The agency’s criteria center on reporting requirements and quality measures, but the systems are not required to communicate with one another. The absence of such provisions, said Ms. Kean, disincentivizes the health informatics and IT developer communities from solving the challenges of robust information exchange.

Because so many data are required to make clinical decisions, Dr. Shulman said, electronic health records “can facilitate practice, but they are not yet practical because of the current state of technology. Many elements are missing, and those that exist are often not in a structured format.”

He said that Dana-Farber shares electronic health records with seven institutions and listed the essential data elements, all of which must be encoded and collated: demographics, tumor type and staging, genomic and molecular data, treatment plan, tumor response, toxicity, patient-reported outcomes, disease progression, and survival.
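
As a rough illustration of what “encoded and collated” implies, here is a minimal Python sketch of such a record as a structured type. The field names are hypothetical; a production system would bind each field to a standard vocabulary rather than free text.

```python
# Hypothetical structured record covering the data elements Dr. Shulman
# lists. Field names are illustrative; real systems would use coded values.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class OncologyRecord:
    demographics: dict
    tumor_type: str
    stage: str
    genomic_findings: list = field(default_factory=list)   # e.g., variant calls
    treatment_plan: Optional[str] = None
    tumor_response: Optional[str] = None                   # e.g., a RECIST category
    toxicities: list = field(default_factory=list)
    patient_reported_outcomes: list = field(default_factory=list)
    progression_date: Optional[str] = None
    survival_status: Optional[str] = None

record = OncologyRecord(
    demographics={"age": 54, "sex": "F"},
    tumor_type="breast, invasive ductal",
    stage="IIB",
)
```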

He added that patients will report adverse events more accurately if they can do so on their home computers as they experience symptoms, which is possible only with a patient portal built into the electronic health record system.

Experience in the Real World

William S. Dalton, MD, PhD, President and CEO, Moffitt Cancer Center, asked one of the ultimate questions: “How do we develop an integrated system that can deal with an overwhelming amount of information to support personalized medicine—and that is usable by clinicians?”

The requisite network will involve providers, patients, and researchers, all contributing to and sharing information from a clinically annotated repository of tumor and normal specimens that forms the basis for evidence-based care.

Moffitt is now collaborating with Oracle Corporation and 18 other hospitals to build a secure data analysis platform called Total Cancer Care. It aggregates and analyzes data on diagnosis, treatment, follow-up, and biospecimens for thousands of people, and it supports analysis of DNA sequencing data alongside other sources of patient information, in the hope of yielding information about outcomes in patients who share common biomarkers.

But there are gaps in the system. “At a time of huge scientific opportunity, it is challenging to work with so much data, and the problem is worsening,” said Brad Pollock, MPH, PhD, Henry B. Dielman Distinguished University Chair, and Chairman of the Department of Epidemiology, University of Texas Health Science Center, San Antonio.

He said that a paradigm shift is occurring with the ever-increasing availability of data. We are “moving from a traditional research framework of ‘hypotheses in search of data’ to ‘data in search of hypotheses.’ Historically, most novel clinical oncology discoveries have been made using a hypothesis-driven research framework employing study designs that use data efficiently.”

Electronic health records often suffer from identity crises: different databases are in use within the same NCI-designated cancer center and do not routinely interoperate. “We need to bring clinical data up to the same standards as high-quality research data, and we need better data mining and filtering approaches for sorting through massive datasets,” said Dr. Pollock.
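
Dr. Pollock’s caution about data-driven discovery can be made concrete with a small simulation. The Python sketch below screens many purely random “markers” against an outcome: roughly 5% pass a naive p < .05 filter by chance, while almost none survive a Bonferroni correction, which is why filtering approaches and statistical rigor matter. The data are simulated; no real dataset or test from the workshop is implied.

```python
# "Data in search of hypotheses," simulated: screen 1,000 random binary
# markers against a random binary outcome. Without correction, ~5% of
# markers look "significant" purely by chance; Bonferroni removes nearly all.
import math
import random

random.seed(0)
n_patients, n_markers, alpha = 500, 1000, 0.05
outcome = [random.random() < 0.5 for _ in range(n_patients)]   # simulated response
markers = [[random.random() < 0.3 for _ in range(n_patients)]  # simulated marker calls
           for _ in range(n_markers)]

def two_prop_p(marker, outcome):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    n1 = sum(marker)
    n0 = len(marker) - n1
    if n1 == 0 or n0 == 0:
        return 1.0
    p1 = sum(y for x, y in zip(marker, outcome) if x) / n1
    p0 = sum(y for x, y in zip(marker, outcome) if not x) / n0
    p = sum(outcome) / len(outcome)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n0))
    z = (p1 - p0) / se if se else 0.0
    return math.erfc(abs(z) / math.sqrt(2))

naive = sum(two_prop_p(m, outcome) < alpha for m in markers)
bonferroni = sum(two_prop_p(m, outcome) < alpha / n_markers for m in markers)
print(f"naive 'hits': {naive} of {n_markers}; after Bonferroni: {bonferroni}")
```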

Envisioning the Future

Leroy Hood, MD, PhD, President, Institute for Systems Biology, Seattle, who was one of the early pioneers in technology development for the Human Genome Project, said that within a decade, every human being will be surrounded by a virtual cloud of billions of data points that will make possible what he calls “P4 medicine”:

  • Predictive—based on health history, DNA sequencing, and regular multiparameter blood measurements, which will provide the knowledge to predict one or more diseases
  • Preventive—design of therapeutic and preventive drugs and vaccines tailored to individuals’ predictive characteristics
  • Personalized—using unique individual genetic variations to mandate particular treatments
  • Participatory—patient-driven social networks surrounding disease and wellness

“P4 medicine will have enormous societal import and will transform every part of health care. It will shift the focus from illness to wellness and involve a new set of technologies and ways of looking at health,” he said.

He added that “Genomic identification will be standard in medical records, and when combined with phenotypic data, will provide the means to make inferences about an individual’s health and treatment of disease.” For instance:

  • Nanotechnology approaches to protein measurement will result in tests that can evaluate 50 specific proteins from 50 organs as a way of evaluating health rather than just disease.
  • Detailed analyses via digitization from a single cell—even a single molecule (transcriptomes, RNAomes, proteomes, metabolomes)—will reveal normal as well as disease mechanisms, meaning that eventually the cost of care will drop to a point where it can be exported to the developing world.
  • Computational and mathematical tools will deal with staggering amounts of data. For example, the information obtained from millions of people with billions of data points can provide deep fundamental insights into predictive medicine, if all of it can be stored.

Collaborations Can Work

August Calhoun, PhD, Vice President, Dell Healthcare and Life Sciences Services, said that Dell is the first company of its kind to commit millions of dollars in technology and manpower to address childhood cancer through personalized medicine. It has joined with the Translational Genomics Research Institute (TGen), Phoenix, to improve treatment of pediatric neuroblastoma. “Our role is to support research to identify and share personalized treatments and to expand the first FDA-approved personalized medicine trial for pediatric cancer.”

Dell’s cloud computing technology—which enables the sharing of resources, software, and data over a network—will increase TGen’s computing capacity for sequencing and analysis by 1,200%.

Spyro Mousses, PhD, Director and Professor, TGen Center for Biointelligence, said that more and more health-care providers see the advantage of moving data to the cloud. Cloud computing eliminates many of the silos in health information management while offering significant efficiency and cost savings.

“Storage represents about 20% of the costs of health IT, so the cloud allows organizations to use only the space they need immediately,” he said.

McKesson Specialty Health, launched after the acquisition of US Oncology, continues to drive clinical decision support and workflow efficiencies in iKnowMed, an affordable electronic health record for community-based practices, said Asif Ahmad, Senior Vice President, Information and Technology Services, McKesson Specialty Health.

iKnowMed is now available to 1,200 physicians and more than 10,000 users and contains almost a million patient records. It provides immediate 24/7 access to patient charts, cancer diagnosis and staging for all patients, an extensive regimen library to facilitate evidence-based treatment decisions, safety alerts when medication conflicts or high toxicity levels occur, and identification of appropriate clinical trials. The system also streamlines billing, reimbursement, and scheduling.
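
To give a feel for how such a safety alert might work, here is a toy Python sketch of a medication-conflict check. The interaction table is illustrative only (it is not clinical guidance), and iKnowMed’s actual rules engine is proprietary and not shown here.

```python
# Toy medication-conflict check of the kind an oncology EHR might run when
# a new order is entered. The interaction table is purely illustrative.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "capecitabine"}): "INR elevation risk",  # illustrative entry
}

def check_order(new_drug: str, current_meds: list[str]) -> list[str]:
    """Return alert messages for any known conflicts with current medications."""
    alerts = []
    for med in current_meds:
        pair = frozenset({new_drug.lower(), med.lower()})
        if pair in KNOWN_INTERACTIONS:
            alerts.append(f"ALERT: {new_drug} + {med}: {KNOWN_INTERACTIONS[pair]}")
    return alerts

for alert in check_order("capecitabine", ["warfarin", "ondansetron"]):
    print(alert)
```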

“When our technological expertise is combined with clinical data, delivery of patient care is transformed,” said Mr. Ahmad. “The business end of a practice can be revolutionized to enhance clinical outcomes, financial results, and operational efficiency.”

Meeting the Challenges

Dr. Poste suggested that because data are often trapped in hierarchical organizational structures, more time should be devoted to rapid, real-time data access and personnel training in these skills. Moreover, new organizational structures, alliances, and business models will have to be devised.

Ms. Kean proposed a coalition of stakeholders. “Disparate sectors of the research and clinical communities have a common need for data exchange that affects the overall health enterprise. We envision a nonprofit membership organization consisting of commercial, academic, and consumer stakeholders. It would encompass information-based biomedicine in which all data about clinical care are used to develop new treatments. It would nurture a community dedicated to an open framework, as well as support a flourishing system of biomedical organizations.” For example:

  • Any organization with a data exchange need could bring a project to the coalition and trigger some or all of its activities.
  • All data would be recorded in standards-based formats and curated through appropriate structures.
  • Open frameworks would define standards by which technology components can interoperate.
  • The coalition would begin with well-defined projects supported by a user with a specific need. Solutions could be built in small increments, and users would provide feedback for the next cycle of development.
  • It could serve as a broker in the biomedical system, linking those who need digital frameworks with those who provide them.
  • Patient advocacy groups could share information about the molecular underpinnings of disease by linking biospecimen repositories.
  • Pharmaceutical researchers could interpret and share data that lead to identification and validation of significant biomarkers. They also could gain a national clinical trials infrastructure. Pharmaceutical marketers could collaborate on reports of adverse reactions. ■

Disclosure: Dr. Poste is a member of the Board of Directors for Caris Life Sciences. Ms. Kean disclosed that Feinstein Kean Healthcare is a contractor to various government entities, including the National Cancer Institute, as well as to numerous life science innovator companies. Dr. Dalton is CEO of Moffitt Cancer Center and subsidiary company M2Gen. Drs. Murphy, Pollock, Shulman, and Calhoun, Ms. McGraw, and Mr. Ahmad reported no potential conflicts of interest.

SIDEBAR: Lessons Learned from caBIG

The Cancer Biomedical Informatics Grid (caBIG) is an NCI program that was launched in 2004 in reaction to a “health information tsunami,” said Daniel R. Masys, MD, Chair of the caBIG Oversight Committee and Affiliate Professor in the Department of Biomedical Informatics and Medical Education, University of Washington, Seattle.
