Drowning Research Scientists, Meet Life Preserver
Today’s pharmaceutical and biotechnology companies have more tools for experimentation and analysis than ever before. The problem is that they are drowning in a sea of data.
Modern research involves an enormous amount of highly complex and distributed information, from sources as diverse as laboratory instruments and published literature to molecular simulations and 3D images. Locked up in various system, application, and equipment “silos,” this data is often difficult to access, integrate, and manage. As a result, research scientists can easily spend countless hours finding and gathering necessary information, preparing data for analysis, and then collating, formatting, and distributing results. Disjointed information management in turn causes productivity and decision-making to suffer, raises research costs, and ultimately delays breakthrough discoveries.
A new approach is needed to allow the industry to quickly unlock the data vital to its experimental progress, and to use that data more effectively to speed the cycle of innovation.
Moving Towards a Solution
Over the past decade, supply chain and customer relationship management (CRM) systems have helped streamline manufacturing, sales, and marketing activities via automated workflows and collaborative information sharing. Taking a similar, enterprise-level view of drug discovery presents a compelling opportunity for scientific and clinical research organizations. But retrofitting traditional business intelligence, data management, or product lifecycle management tools is not the answer. These “one size fits all” technologies were built for transactional data, which is generally structured and numerical in nature, and they cannot accommodate advanced scientific analysis.
On the other side of the coin, point tools designed for the scientific market, such as electronic lab notebooks (ELN), often only solve part of the data management problem. By focusing on specific disciplines, these types of solutions can lead to the isolation of research information in software from one vendor or another, hampering process automation and requiring IT intervention to integrate and transfer data between multiple applications.
Scientific Information Management
Today, technology advances such as service-oriented architecture (SOA) are presenting new opportunities for an optimized approach to scientific information management that unifies an organization’s entire knowledge base. An open and standards-based platform can support the integration of multiple sources of information in a plug-and-play environment, which enables users to link their preferred technologies and components together to build workflows that incorporate data in a variety of formats, as well as services that originate in diverse systems and applications.
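The plug-and-play idea described above can be illustrated with a small sketch: if every component consumes and produces a common record format, steps that wrap different data silos can be chained freely into a workflow. The component names, threshold, and annotation data below are invented for illustration and do not represent any particular vendor's platform.

```python
from typing import Callable, Dict, List

Record = Dict[str, object]              # common exchange format for all components
Step = Callable[[List[Record]], List[Record]]

def pipeline(*steps: Step) -> Step:
    """Compose independent components into one workflow."""
    def run(records: List[Record]) -> List[Record]:
        for step in steps:
            records = step(records)
        return records
    return run

# Illustrative components, each wrapping one data "silo":
def parse_instrument_rows(records):
    # normalize raw instrument readings into the common record format
    return [{"gene": r["gene"], "expression": float(r["value"])} for r in records]

def filter_expressed(records):
    # keep genes above an arbitrary, illustrative expression threshold
    return [r for r in records if r["expression"] > 1.0]

def annotate(records):
    # attach context from a second source (here, a hard-coded lookup)
    notes = {"TP53": "tumor suppressor"}
    return [dict(r, note=notes.get(r["gene"], "")) for r in records]

workflow = pipeline(parse_instrument_rows, filter_expressed, annotate)
result = workflow([{"gene": "TP53", "value": "2.5"},
                   {"gene": "GAPDH", "value": "0.4"}])
# result: [{"gene": "TP53", "expression": 2.5, "note": "tumor suppressor"}]
```

Because each step only depends on the shared record format, a user could swap in a different parser or annotation source without touching the rest of the workflow, which is the essence of the standards-based, plug-and-play environment described above.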
Translational medicine is the practice of using, or “translating,” isolated genomic research into a clinical setting. By leveraging technologies like next generation sequencing and gene expression analysis to pinpoint biomarkers that indicate disease or non-disease states, researchers can improve the effectiveness of drug discovery. For example, biomarkers can help researchers better understand how a potential therapy will affect a subset of the population – say those predisposed to colon cancer – before live clinical trials even begin.
But with more than 20,000 genes in a single cell, finding the right biomarker can be a daunting task. Not only do researchers conduct their own experiments, they also need to analyze and compare their findings with data from collaborators, as well as with information gleaned from a host of sources including tissue samples from healthy and diseased patients, statistical models, text documents, academic literature, previous clinical trial documents, patents, and more. The critical challenge is how to quickly bring all this data together to make better decisions. With scientific information management, researchers can improve the speed and accuracy of their work.
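A toy version of the screening problem just described can make the scale concrete: rank genes by how much their mean expression shifts between diseased and healthy tissue samples. The expression values here are invented, and real biomarker discovery applies far more rigorous statistics; this only sketches the comparison a researcher must run across tens of thousands of genes.

```python
from statistics import mean

# Illustrative expression measurements (replicates per gene) from
# healthy and diseased tissue samples -- values are made up.
healthy  = {"TP53": [1.0, 1.2, 0.9], "KRAS": [2.0, 2.1, 1.9]}
diseased = {"TP53": [3.1, 2.8, 3.3], "KRAS": [2.0, 2.2, 2.1]}

def fold_change(gene: str) -> float:
    """Ratio of mean diseased expression to mean healthy expression."""
    return mean(diseased[gene]) / mean(healthy[gene])

# Rank all measured genes by fold change; the top entries are the
# first biomarker candidates to investigate further.
candidates = sorted(healthy, key=fold_change, reverse=True)
# candidates[0] == "TP53" -- the largest expression shift in this toy data
```

With 20,000-plus genes and many sample sources, the same ranking has to be repeated across integrated datasets, which is where automated workflows pay off.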
Linking a powerful and diverse range of data management functionalities on a single platform promises to revolutionize research efficiency. Through the integration of disparate data formats, applications, and algorithms from multiple research areas, systems, and sources (these may include anything from lab notes and text-based academic literature, to DNA sequence data and molecular models), organizations can create automated workflows that streamline highly complex research projects. The automated aspect of this integration is key: it enables researchers to leverage all the rich data sources available to them (both within and outside the organization) without the time and expense involved in writing custom software for each workflow.
An ability to analyze complex scientific information related to molecular biology, genomics, chemistry, and more is critical to transforming raw data into knowledge. Thus, an extensive array of statistical methods should be available, ranging from simple statistical indicators to advanced modeling methods. Sophisticated text analytics capabilities are also important, so that scientists can quickly wade through available literature to find relevant information that gives context to their research. For example, in gene expression research, scientists will need to find out more about their biomarker candidates. Which have been involved in clinical trials in the past? What diseases are they associated with? Automation is again a critical component here, so that the delays and lost productivity associated with IT building and deploying analytical capabilities can be avoided.
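The literature questions posed above (which diseases is a candidate gene associated with?) can be sketched as a very simple text-analytics pass: scan a corpus of abstracts for mentions of the gene and collect co-occurring disease terms. The abstracts and term list below are invented for illustration; production text mining would use curated ontologies and proper entity recognition rather than substring matching.

```python
import re

# Invented mini-corpus of literature abstracts.
abstracts = [
    "TP53 mutations are frequent in colon cancer and were tracked in a phase II trial.",
    "KRAS signalling was studied in pancreatic cancer cell lines.",
]

# Illustrative disease vocabulary to match against.
diseases = ["colon cancer", "pancreatic cancer"]

def disease_context(gene: str) -> list:
    """Return disease terms co-occurring with a gene across the corpus."""
    hits = set()
    for text in abstracts:
        if re.search(rf"\b{gene}\b", text):          # gene mentioned?
            hits.update(d for d in diseases if d in text.lower())
    return sorted(hits)

# disease_context("TP53") -> ["colon cancer"]
```

Even this toy example shows why such lookups must be automated: a researcher with dozens of biomarker candidates cannot manually read every relevant abstract, trial record, and patent.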
In order to make discoveries, different stakeholders such as bench scientists and bioinformatics specialists need the freedom to look at, manipulate, and analyze data in different ways. Thus, a flexible approach to information delivery is required: one that empowers all levels of users to view information in the manner most effective for their needs, which may range from simple dashboards to sophisticated 3D visualization. Rather than relying on standard templates, users should be able to configure what they want to see and how it is presented. This degree of flexibility leaves room for the innovation so vital to these initiatives, while still providing a framework for faster decision-making and ultimately faster results.
About the Author
Scott Markel, PhD, is the Principal Bioinformatics Architect at Accelrys. He is a Vice President and member of the Board of Directors of the International Society for Computational Biology.