Lapses in oversight compromise omics results
US board calls for tighter oversight of omics-based tests.
Loosely dubbed ‘omics’, large-scale studies of genes, proteins and other molecular characteristics of whole organisms have been lauded as a potential way to detect disease and to evaluate how people respond to drugs. In theory, the approach could pave the way to personalized treatments. But developing reliable clinical tests from omics findings has taken longer than expected.
In a bid to improve how omics-based clinical tests are developed and evaluated, the US National Cancer Institute (NCI) in Bethesda, Maryland, asked the Institute of Medicine (IOM) to investigate after an analysis by NCI biostatistician Lisa McShane revealed that data used in studies by cancer researcher Anil Potti had been altered. That discovery led to the closure of clinical trials, the retraction of more than two dozen published papers, a misconduct investigation into Potti’s conduct, and lawsuits against Potti and his former institution, Duke University in Durham, North Carolina. His studies had linked changes in the expression of patients' genes to how they responded to cancer treatments, and independent statisticians had raised concerns about the published papers before clinical trials based on them were initiated (see 'Cancer trial errors revealed').
The IOM report, issued today, details how multiple failures of the systems that Duke relied on to ensure research oversight and integrity allowed the clinical trials based on Potti’s data to proceed. And, the report warns, “Many of these failures stemmed from problems that may exist at other institutions,” including problems with trial design in omics studies, and with research oversight, data management and systems for managing conflicts of interest more broadly.
Wake-up call
McShane says that she hopes the report will serve as a wake-up call to the field. “A lot of what the committee is saying is that we need to put more rigour into the process of developing these tests,” she says. “These basic principles that statisticians should know have got lost in the rush to embrace these new technologies, and we’ve seen what the danger can be.”
For instance, the IOM underscores that several types of validation should be done before a test enters a clinical trial. This should guard against the danger of 'overfitting' — the likelihood that investigators will develop a computational model that appears to link particular omics signatures with clinical outcomes even when no true link exists. It also recommends that investigators release all the data and methods necessary for other investigators to independently verify the test, with no further changes being made until the test is verified. These steps were not followed consistently in Potti’s studies.
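The statistical danger the report describes can be seen in a minimal sketch (not from the report; all data and numbers here are invented for illustration). If a signature is selected from thousands of genes using the same samples it is then evaluated on, purely random data can look predictive — a pattern that only independent validation exposes:

```python
# Sketch of overfitting in signature discovery: random "expression" data
# with random outcomes, so there is genuinely nothing to predict.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_genes = 40, 40, 5000

X_train = rng.standard_normal((n_train, n_genes))
y_train = rng.integers(0, 2, n_train)
X_test = rng.standard_normal((n_test, n_genes))
y_test = rng.integers(0, 2, n_test)

# Pick the 10 genes most correlated with outcome *on the training set only*.
corr = np.array([abs(np.corrcoef(X_train[:, j], y_train)[0, 1])
                 for j in range(n_genes)])
top = np.argsort(corr)[-10:]

# Simple nearest-centroid classifier built from the selected genes.
c0 = X_train[y_train == 0][:, top].mean(axis=0)
c1 = X_train[y_train == 1][:, top].mean(axis=0)

def predict(X):
    d0 = ((X[:, top] - c0) ** 2).sum(axis=1)
    d1 = ((X[:, top] - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

train_acc = (predict(X_train) == y_train).mean()
test_acc = (predict(X_test) == y_test).mean()
print(f"apparent (training) accuracy: {train_acc:.2f}")  # typically high
print(f"independent test accuracy:    {test_acc:.2f}")   # typically near 0.5
```

The apparent accuracy on the discovery samples is inflated because the gene selection step has already "seen" their outcomes; on fresh samples the signature performs at roughly chance level, which is why the IOM insists on validation with independent data before a test reaches patients.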
The report committee also notes that if a test is to be used to make decisions about patients’ medical care, the US Food and Drug Administration (FDA) must review it — a fact that many researchers and institutions don’t realize. Duke University's institutional review board, for example, mistakenly allowed Potti’s studies to proceed without an FDA review.
The report also points out that conflicts of interest (COI) among investigators were not disclosed to clinical-trial participants at Duke. “A key lesson learned from the Duke case study is that COI, though subject to multiple layers of oversight in most institutions, can still contaminate research integrity,” says the report, detailing several steps that institutions should take to strengthen oversight of these conflicts, such as designating particular officials to oversee compliance with FDA requirements for omics-based tests and to respond to serious criticism of their investigators' work.
The report also suggests steps that journals should take to prevent future cases such as Potti's from unfolding, noting that biostatisticians Keith Baggerly and Kevin Coombes of the MD Anderson Cancer Center in Houston, Texas, could not find all the relevant data they needed when they tried to reproduce Potti’s published studies in 2006. The journals did not do enough to ensure that Potti’s original work was sound, and did not pay enough attention to the concerns raised, the report adds. As a result, the studies were cited hundreds of times before their eventual retraction, years after Baggerly and Coombes questioned them.
“We heard over and over from institutional review committee chairs that the fact that these papers were published in such prestigious journals led to the acceptance of their results, and that is a serious problem,” says Gilbert Omenn, director of the University of Michigan Center for Computational Medicine and Bioinformatics in Ann Arbor, who chaired the IOM committee.
Baggerly says that he agrees with many of the report's recommendations: "Had these been in place earlier, I suspect many of the problems encountered might have been short-circuited."
Omenn adds that he hopes the committee’s report will shake up the field for the better. “There is an orderly way to develop these kinds of tests that people are going to have to learn to follow,” Omenn says. “In the end, it will be much more efficient, surely more reliable and a lot safer for patients.”
Follow Erika on Twitter at @Erika_Check.