When human biases distort drug discovery decision-making, software tools can guide thinking towards objective answers.
People are notoriously poor at making decisions based on complex uncertain data when there is a lot at stake. In drug discovery, poor decisions can mean wasting effort by synthesizing and testing compounds that fail, or throwing out perfectly good compounds in error, reducing the opportunities to find new therapies and sources of profit. However, making good decisions in this context is challenging for several reasons, including the need to balance multiple, often conflicting, criteria for a successful drug; the abundance of data available on many properties; and the uncertainty in the relevance and accuracy of the available data, particularly in early discovery.
Psychological research has demonstrated that reproducible biases affecting human decision-making—known as cognitive biases—are deeply ingrained and hard to overcome. Drug discovery leaders receive a great deal of conflicting advice on ways to improve productivity and boost the rate of successful drug launches; however, if these psychological barriers can be overcome, the improvement to decision-making will enhance R&D performance.
This article details four of the most common cognitive biases and considers the risks they pose to R&D decision-making. It is possible, within a creative environment that involves a lot of chance, to "make your own luck" through rational design of processes and criteria. Also discussed is how computational tools can encourage objective consideration of screening and compound selection strategies. Tools to help teams make compound selection decisions that consider not just the importance of selection criteria, but also the uncertainty in the information being provided, are essential in overcoming psychological barriers to good discovery decisions.
Make your own luck
In the scientific world, cognitive biases undermine objectivity in decision-making. For example, people are naturally optimistic about their projects, and are more likely to notice and focus on characteristics that confirm their existing interpretation of previous information. This effect, termed confirmation bias, is all too common in scientific R&D. It can be very hard to propose, prioritize, and act on tests that don't support the original idea. This can lead to premature focus on a small number of possibilities, with the potential to miss opportunities, and also a tendency to kill chemistries or projects too late, leading to additional cost and delay.
The effects of over-optimism can be mitigated through peer review and independent membership of strategic decision-making groups (e.g. for candidate selection). However, other biases may be more insidious in their impact on daily work.
The second of the most common biases, availability bias, comes from our short-term memories, limitations of personal experience, and a tendency to react to the most recent and vivid information, neglecting long-run chances of a problem. For a rare safety liability, assessed with an imperfect test, many, if not most, safety concerns will turn out to be false alarms. This is a simple statistical truth that needs to be considered in the planning of experiments.
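This statistical truth follows directly from Bayes' theorem. The numbers below are illustrative assumptions, not values from the article: a liability present in 2% of compounds, screened by a test with 90% sensitivity and 85% specificity.

```python
# Illustrative (assumed) numbers for a rare liability and an imperfect test.
prevalence = 0.02    # 2% of compounds truly carry the liability
sensitivity = 0.90   # test flags 90% of compounds that have it
specificity = 0.85   # test clears 85% of compounds that do not

# Overall probability that the test raises an alarm.
p_alarm = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Bayes' theorem: probability an alarm is a true positive.
p_real = prevalence * sensitivity / p_alarm
print(f"{p_real:.0%} of alarms are real")   # -> 11% of alarms are real
```

Even with a reasonably sensitive test, roughly nine out of ten alarms here are false, purely because the underlying liability is rare.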
Despite this, people tend to overreact to the latest information—the problem they just saw. This can lead to inadvertent loss of perfectly good product opportunities. There are, however, a number of methods that can improve the way tests are planned and used. Examples include: capturing the underlying prevalence of risk factors (known as 'prior' probabilities) and the reliabilities of predictions and assays more systematically; and helping scientists to include information such as the cost of late-stage failure and the value of a successful project in their decision-making process.
A third consistent barrier to rational decision-making occurs when scientists excessively focus on certainty when considering probabilities, wasting resources on generating data that has little impact on the eventual decision. When there are multiple sources of risk, each with different probabilities and impacts, people tend to focus on the low-probability, high-impact items. What this means for drug discovery is that some of the parameters for optimization are likely to be neglected. For example, if there is 95% chance of activity—which is essential—and a 50% chance of good bioavailability—which is fairly important—most people will spend undue effort trying for 100% certainty of activity when, overall, increasing the chance of bioavailability to 70% could be worth more.
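The arithmetic behind this trade-off can be made explicit. Assuming the two properties are independent (a simplification), the overall chance of success is the product of the individual chances:

```python
p_activity, p_bioavail = 0.95, 0.50      # chances from the example above

baseline    = p_activity * p_bioavail    # 0.475
perfect_act = 1.00 * p_activity * p_bioavail / p_activity  # 0.500: certainty of activity
better_bio  = p_activity * 0.70          # 0.665: raise bioavailability instead

# Raising bioavailability to 70% buys far more overall success
# than eliminating the last 5% of activity risk.
```

Closing the 5-point gap on activity adds at most 2.5 points of overall success, while the bioavailability improvement adds 19.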
It is therefore clear that decision-making related to choice of compounds and further screening has to be based on a combination of evidence and judgment. One method of applying an effective mix of methods is the probabilistic scoring approach employed by the StarDrop software platform from Optibrium. This guides compound selection decisions in drug discovery, considering not just the importance of the selection criteria across multiple compound properties, but also the uncertainty in the information being provided. Uncertainties in the overall score are calculated and can be used to establish when one compound can be confidently chosen over another, as illustrated in Figure 1.
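The underlying idea can be sketched simply: one compound is a confident pick over another only when the gap between their scores exceeds their combined uncertainty. This is a minimal illustration of the principle, not StarDrop's actual algorithm; the function name and the two-standard-error rule are assumptions.

```python
import math

def confidently_better(score_a, err_a, score_b, err_b, k=2.0):
    """True if compound A's score exceeds B's by more than k combined
    standard errors (simplified sketch, not StarDrop's actual method)."""
    combined_err = math.sqrt(err_a ** 2 + err_b ** 2)
    return (score_a - score_b) > k * combined_err

# Clearly separated scores: a confident choice.
print(confidently_better(0.70, 0.05, 0.50, 0.05))   # True
# Overlapping uncertainties: no confident winner; gather more data instead.
print(confidently_better(0.70, 0.10, 0.60, 0.10))   # False
```

In the second case the honest answer is that the data cannot distinguish the compounds, which is itself a useful decision input.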
A fourth source of bias is calibration bias, the tendency towards overconfidence in our forecasting ability (whether human or computer-assisted). In drug discovery, decision-makers considering how to eliminate compounds may underestimate the importance of the trade-off between false negatives and false positives, resulting in excessive costs of late failures or lost opportunities to develop valuable products. Getting better at planning and forecasting requires feedback to provide the self-awareness of performance. The problem here is that, in R&D, the timescales are so long that feedback happens at the organizational—not the individual—level. There needs to be a way to use organization-level learning to help individuals and teams practice in a simulated environment. Then they can get rapid feedback over a very wide range of cases on how decisions in their "microworld" of research may play out in the development and commercial arenas. A simple example applied to choice of screening sequences, depending on method reliability and potential downstream consequences, is illustrated in Figure 2.
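The kind of comparison involved can be sketched as a simple expected-cost model. All numbers and the no-false-positive assumption below are illustrative, not taken from the article or Figure 2.

```python
LATE_FAIL = 1000.0   # assumed downstream cost of a bad compound slipping through

def expected_cost(p_bad, assays):
    """Expected cost per compound of running `assays` in order.

    Each assay is (run_cost, sensitivity). Bad compounds are caught and
    removed with probability `sensitivity`; good compounds are assumed
    always to pass (no false positives -- a deliberate simplification).
    """
    cost = 0.0
    p_bad_alive = p_bad          # compound is bad and still in the pipeline
    p_good_alive = 1.0 - p_bad   # good compounds are never filtered out here
    for run_cost, sensitivity in assays:
        cost += (p_bad_alive + p_good_alive) * run_cost
        p_bad_alive *= 1.0 - sensitivity
    return cost + p_bad_alive * LATE_FAIL

cheap, reliable = (10.0, 0.60), (100.0, 0.95)
print(expected_cost(0.30, [cheap, reliable]))   # cheap filter first: ~98.0
print(expected_cost(0.30, [reliable, cheap]))   # reliable assay first: ~113.2
```

With these assumed numbers, running the cheap, less reliable filter first is the better sequence: both orderings end with the same residual risk, but the cheap-first sequence spends the expensive assay on fewer compounds.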
Project teams will accept guidance to improve objectivity in decision-making through visualization and feedback, not through a black box. Corporate and industry experience, correctly harnessed through an appropriate decision-making framework and applied with an understanding of why people think and act as they do, will help researchers to overcome the limitations of human nature and improve their chances of success in drug discovery research.
About the Author
Matthew Segall has led teams developing predictive ADME models and state-of-the-art intuitive decision-support and visualization tools for drug discovery. He was responsible for ADME and ADMET services at Inpharmatica and BioFocus DPI, including the StarDrop software platform and in 2009 led a management buyout of the StarDrop business to found Optibrium.
Andrew Chadwick is Principal Consultant (Life Sciences) at Tessella.
Matthew Segall, Optibrium Ltd., Cambridge, U.K. and Andrew Chadwick, Tessella plc, Burton upon Trent, U.K.
This article was published in Drug Discovery & Development magazine: Vol. 14, No. 1, January, 2011, pp. 18-19.