Regulatory agency struggles under the weight of genomic data
- Journal: Nature Medicine
- Volume: 19
- Page: 385
- Year published: 2013
- DOI: 10.1038/nm0413-385
With the precipitously falling price of genome sequencing, generating reams of data is easy these days—analyzing all those As, Ts, Cs and Gs is the hard part. Yet it's not just physicians and scientists who face the analytical bottleneck posed by high-throughput sequencing (HTS). Regulators are now receiving huge batches of sequence data in support of new drug applications—and they are struggling to figure out what to do with them.
High-throughput sequencing technology produces millions of sequences at once in an automated, parallel fashion. Because of this parallelism, it is roughly 1,000 times faster than older Sanger sequencing and allows researchers to obtain an individual's entire genome in a matter of days. The technology has paved the way for identifying targets for cancer drugs, such as mutations in the protein kinase BRAF in melanoma (N. Engl. J. Med. 364, 2507–2516, 2011) or in HER2 in breast cancer (Cancer Discov. 3, 224–237, 2013).
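To get a sense of the data volumes involved, consider the FASTQ format in which most HTS instruments deliver their reads. The short Python sketch below simply tallies reads and bases in a FASTQ file; the file name is a hypothetical placeholder.

```python
# A minimal sketch of the scale involved: counting reads and bases in a
# FASTQ file, the standard output format of most HTS instruments.
# "sample_reads.fastq" is a hypothetical file name used for illustration.

def count_fastq(path):
    """Return (read_count, base_count) for a FASTQ file.

    FASTQ stores each read as four lines: a header, the sequence,
    a separator ('+'), and per-base quality scores.
    """
    reads = bases = 0
    with open(path) as handle:
        for i, line in enumerate(handle):
            if i % 4 == 1:  # the second line of each record is the sequence
                reads += 1
                bases += len(line.strip())
    return reads, bases

if __name__ == "__main__":
    reads, bases = count_fastq("sample_reads.fastq")
    print(f"{reads:,} reads, {bases:,} bases")
    # A single human whole-genome run at 30x coverage yields on the
    # order of a billion reads and tens of billions of bases.
```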
“The science of HTS is moving fast, [and] applications will continue to grow that require certain infrastructure,” says Carolyn Wilson, the associate director for research at the Center for Biologics Evaluation and Research (CBER), a division of the FDA in Rockville, Maryland. The FDA now plans to build that infrastructure through a newly established 'genomics working group', launched earlier this year and tasked with developing approaches for handling and storing genomic data both internally and with relevant external partners.
On 27 February, Wilson, who chairs the group, presented an update to the agency's Science Board where she emphasized the need to develop the infrastructure and bioinformatic tools to meet today's demands. “We have been working to bring different components of the agency together to store this data better and to develop a strategic plan for evaluating HTS data quality and interpreting it for regulatory decision making,” she told Nature Medicine. To this end, the working group has also partnered with other branches of the US government, including the National Center for Biotechnology Information and the National Institute of Standards and Technology, to establish quality standards.
Delineating data benchmarks is important, notes George Weinstock, associate director of the Genome Institute at Washington University in St. Louis. He cites a lack of reliable off-the-shelf software to analyze genomic data. As a result, most academic labs use various homegrown tools, and these different programs often disagree in the results they produce from the same data sets. “Standards for both data quality as well as data analysis may be required,” he says.
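The disagreement Weinstock describes can be made concrete by comparing the variant calls that two pipelines produce from the same sample. The Python sketch below shows how a simple concordance check might look; the call sets, positions and alleles are invented for illustration, not real pipeline output.

```python
# Two hypothetical variant-call sets from the same sample, produced by
# two different analysis pipelines. Each call is a tuple of
# (chromosome, position, reference allele, alternate allele).
pipeline_a = {("chr7", 140453136, "A", "T"),
              ("chr17", 39723335, "G", "A"),
              ("chr1", 115258747, "C", "T")}
pipeline_b = {("chr7", 140453136, "A", "T"),
              ("chr17", 39723335, "G", "C"),  # same site, different call
              ("chr12", 25398284, "C", "A")}

concordant = pipeline_a & pipeline_b  # calls both pipelines agree on
only_a = pipeline_a - pipeline_b      # calls unique to pipeline A
only_b = pipeline_b - pipeline_a      # calls unique to pipeline B

print(f"concordant: {len(concordant)}")
print(f"unique to A: {len(only_a)}, unique to B: {len(only_b)}")
# Without shared quality and analysis standards, a reviewer has no
# principled way to decide which of the discordant calls to trust.
```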
According to Wilson, the FDA has been receiving an increasing amount of HTS data from industry and academia, but she would not comment on whether the agency has used such data to support approval of specific drugs, citing confidentiality. She did note, however, that the FDA anticipates it will use more genomic data for in-house research in the future. For example, the CBER plans to use HTS to detect emerging infectious agents in the nation's blood and tissue supplies and also to assess the safety and quality of products such as vaccines. Additionally, the FDA is planning to use HTS to evaluate how drug-resistant viruses develop after antiviral drug treatments.
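As one hypothetical illustration of the antiviral-resistance monitoring the agency describes, the sketch below tracks the frequency of a known resistance allele in deep-sequencing read pileups across treatment timepoints. All positions, alleles and counts here are invented for illustration.

```python
# A hedged sketch of the kind of analysis described for antiviral
# resistance: tracking the frequency of a resistance-associated allele
# in deep-sequencing reads over the course of treatment.

def resistant_fraction(pileup, resistant_allele):
    """Fraction of reads carrying the resistant allele at one site."""
    if not pileup:
        return 0.0
    return pileup.count(resistant_allele) / len(pileup)

# Simulated per-base read pileups at a single resistance-associated
# position, sampled at three timepoints during therapy.
timepoints = {
    "day 0":  list("C" * 98 + "T" * 2),
    "day 14": list("C" * 80 + "T" * 20),
    "day 28": list("C" * 35 + "T" * 65),
}

for day, pileup in timepoints.items():
    frac = resistant_fraction(pileup, "T")
    print(f"{day}: resistant allele at {frac:.0%}")
# A rising frequency of the resistant allele over time would flag
# treatment-driven selection of a drug-resistant viral population.
```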
Although it's unlikely that the FDA will make proprietary drug approval data available in the near future, some researchers still dream of access to genomic information handled by the agency. “The key is for the FDA to put out [these data] into the market as raw data so that it can be reanalyzed by the public,” says Michael Becich, chairman of the department of biomedical informatics at the University of Pittsburgh School of Medicine in Pennsylvania. “It would be very useful if the FDA considered this in their framework.”
[Image] Peak performance: FDA gets a grasp on DNA data. (Credit: Alfred Pasieka / Science Source)