The definition of assay validation can vary depending on who you talk to and what the assay is being used for.

The second in a two-part series on assay development and validation

Patrick McGee, Senior Editor

Developing an assay—working out its kinetics and pharmacology and ensuring it is properly set up—can be a tricky process. But the challenge isn't over when development is complete, because researchers must then validate the assay to ensure it is working correctly and supplying accurate data. Some argue that a common mistake is trying to develop and validate an assay at the same time, while others believe it is difficult to separate the two. For this story, Drug Discovery & Development spoke with researchers in pharmaceutical and biotechnology companies and academic labs to get their perspectives on the methods they use to validate assays, as well as some of the common mistakes people make when performing assay validation.

[Photo caption: Researchers often underestimate the amount of reagent needed when moving an assay into an ultra-high-throughput screening laboratory such as the one shown above. (Source: Peter Hodder, PhD)]

Michael Bleavins, PhD,
Pfizer Global Research and Development

Depending on how it's being used, assay validation can mean a number of different things, Bleavins says. "You're really looking for an assay that does what you need it to do. What you would need in an assay if you're ranking early discovery compounds is nowhere near as extensive as what you would need if you were developing and validating an assay that affected a safety parameter that determined if you went to the next dose in a clinical trial."

One important part of validating assays, he adds, is getting access to the right samples, and if it is a human assay, it is vital to have a good history on the samples. For example, if the assay is testing for osteoarthritis, it needs to be known that arthritis was the only issue for the patient the sample came from. It would also help to know the patient's treatment and medication history, as well as other factors such as body weight. "It's just a question of getting good sample sets so that you have enough information to separate how much of it is the disease you're looking at and how much of it can be confounded by other diseases they have or medications or basic medical history."

Angela Cacace, PhD,
and Jonathan O'Connell, PhD,
Bristol-Myers Squibb

Cacace says once an assay is designed and is known to exhibit the appropriate pharmacology, moving on to the validation phase is very straightforward. "Then, it's more a question of how robust your assay is. What we've done is streamlined our approach for assay validation whereby we're looking at the stability of all our reagents well ahead of the assay validation stage so we understand how each peripheral piece of equipment works on the automated systems," Cacace says. "We've done all of the appropriate validation and that's occurred almost in parallel with assay development. I think that by sort of consolidating the assay design validation, you've actually streamlined that process."

O'Connell agrees, saying there is no clear line drawn between assay development and assay validation. "It's a continually evolving process," he says. It starts with the initial assay design, determining the signal and making sure the pharmacology is right, amongst other things. Researchers then move this to the bench and begin running stability tests, checking, for example, whether reagents held at diluted concentrations overnight remain active. "When you figure out each of those pieces, then you move into the final stage of validation, which is actually doing mock runs on the fully automated platform. There's no real line between the two; one just evolves into the other."

Cacace says one thing they do is develop a cell line with the assay endpoint in mind. From the moment a target is proposed and it goes into a cell line, they are identifying cell lines with the appropriate characteristics for high-throughput screening, and the pharmacological validation for the whole process is streamlined, at least for GPCR assay design.

Ralph Garippa, PhD,
Roche Discovery Technologies

For Garippa, a key component of assay validation is monitoring conditions. For example, some of the receptors are proteins that they express in cells. "Very often, they'll start to run down or they'll be overgrown with a population that doesn't express as well. So what you thought was your baseline will begin to run down," he says. Another common problem is that some very sensitive proteins begin to degrade even when kept at temperatures as low as minus 70°C. When this data is tracked over days and weeks, it becomes clear that the activity of the enzymes, or of the viral transfection systems being used, runs down over time.

"That's something that you have to compensate for. Or you have to take your experiment off line, make new, fresh reagent, re-titer it, and then begin your experiment again." A key to spotting these trends is software visualization tools that let investigators sit at their desks, track volumes of data over time from their computers, and look for very subtle trends that would never be seen by examining the data on a day-to-day basis alone.

Douglas Auld, PhD,
National Institutes of Health

Auld says there are a number of things that researchers commonly do not take into account when validating assays, and one of those is the robustness needed to perform the kind of small-volume, high-throughput screening on the robotic systems typically used in their labs. Often, the number of samples measured may not be sufficient to get a statistically meaningful measure of the high-throughput screen performance. "An n=3 experiment may be good enough to provide evidence for a particular phenomenon that is part of a research project, but it is not good enough to judge the performance of a high-throughput screen. We ask for data that is at least derived from a 96-well plate."

Auld says the need for a sufficient number of measurements to calculate such parameters as Z factors is also commonly overlooked. In addition, while quality control of reagents is a standard operating procedure in industry, it is not something that many academic labs are prepared to do routinely, so that can be an issue, as can underestimating the amount of reagent needed to screen hundreds of thousands of wells.
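The Z factors Auld mentions quantify how well an assay's positive and negative controls separate relative to their scatter. As an illustration only—the formula and the ~0.5 rule of thumb are standard in the HTS literature rather than taken from this article, and the control readings below are invented—a Z'-factor calculation might look like:

```python
import statistics

def z_prime(pos, neg):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above roughly 0.5 are usually taken to indicate a robust screen."""
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sd_p, sd_n = statistics.stdev(pos), statistics.stdev(neg)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical control readings from one validation plate
positive = [980, 1010, 995, 1005, 990, 1000]   # max-signal controls
negative = [105, 95, 100, 98, 102, 100]        # background controls
print(round(z_prime(positive, negative), 2))   # prints 0.95
```

With only n=3 per control, the standard deviations in this formula are poorly estimated, which is one way to read Auld's insistence on at least a 96-well plate of data.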

Another potential issue when it comes to assay validation is incubation times that are not optimal for screening. Sometimes they are shorter or longer than what is optimal for the 1,536-well plates that they run in their labs, Auld adds. "With respect to incubation times of the assay protocol, sometimes there are steps that are too long (> 48 hrs) for miniaturized assays because of evaporation problems and they have to be modified. Or they are too short to allow for a consistent timing during batch processing of plates. Some of these things are not taken into account when it comes to putting an assay on an automated system."

Carol Ann Homon, PhD,
Boehringer Ingelheim Pharmaceuticals Inc.

Homon says her labs have a set protocol of factors to look at when validating assays, with slight variations depending on whether it is a cellular or a molecular target. "Some of the things are identical and some are specific to that differential between those two types of assays. We spell these out . . . when we're working with a therapeutic area lab," she says. Homon adds that they are stringent about their assays because if the program succeeds, it will move forward with the same basic assays, and they don't want to have to redevelop the assay after the screen for hit-to-lead use.

Another thing they look for when validating assays is reproducing the same level of signal, something that can be difficult in today's research environment because just about everything is based on fluorescence. "Everybody's fluorescence is relative fluorescence, so this company's units are different from that company's units. We just have to agree upon a certain reader and then we reproduce the signal," Homon says. They also look at a number of other factors, including background, noise, signal-to-background, and signal-to-noise. "We track every one of these things when we're doing high-throughput screening."
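The background, noise, signal-to-background, and signal-to-noise figures Homon tracks are simple plate statistics. A minimal sketch, using common definitions and invented relative-fluorescence readings (neither the numbers nor the function name come from the article):

```python
import statistics

def signal_metrics(signal, background):
    """Basic plate QC metrics of the kind tracked during HTS validation.
    signal_to_background: ratio of mean signal to mean background.
    signal_to_noise: separation of the means relative to background scatter."""
    mu_s = statistics.mean(signal)
    mu_b = statistics.mean(background)
    sd_b = statistics.stdev(background)
    return {
        "signal_to_background": mu_s / mu_b,
        "signal_to_noise": (mu_s - mu_b) / sd_b,
    }

# Hypothetical relative-fluorescence readings on one agreed-upon reader
m = signal_metrics([5000, 5100, 4950, 5050], [250, 230, 270, 250])
print(round(m["signal_to_background"], 1))  # prints 20.1
```

Because the units are relative, as Homon notes, these ratios are only comparable when the same reader and settings are used from run to run.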