These excerpts from a recent Webcast on quantitative polymerase chain reaction for gene expression analysis involve experts from industry and academia discussing their experiences with, and data gained from, the method.
Senior Editor Patrick McGee recently hosted a Webcast entitled "How reliable is your qPCR data?" Quantitative PCR is a powerful and sensitive technology for the quantification and validation of genetic data. Despite the power of qPCR, however, a number of key considerations need to be addressed, from sample preparation through data analysis. Although the topic of the day was overcoming the challenges of qPCR, from pre-assay through data analysis, panelists limited their comments to gene expression analysis. The full version of the Webcast is available for viewing at www.dddmag.com/qpcr.
The panel of experts who joined McGee for the Webcast included Stephen Bustin, PhD, professor of Molecular Science at Barts and the London Queen Mary's School of Medicine and Dentistry at the University of London. His research group focuses on molecular oncology and has spent the last eight years working on applying molecular techniques such as qRT-PCR to the biology of colorectal cancer. Mark Anderson, PhD, research and development scientist, Invitrogen Corp., has extensive experience in analyzing and developing PCR technology and he discussed qPCR assay design and troubleshooting. Maurice Exner, PhD, research and development manager in infectious diseases at Quest Diagnostics, is responsible for directing research efforts to develop new clinical diagnostic assays for infectious diseases. These assays primarily use automated nucleic acid extraction methods coupled with various nucleic acid amplification techniques, particularly qPCR.
Stephen Bustin: Real-time quantitative RT-PCR is often described as a benchmark technology, or as the gold standard, for the quantification of mRNA. However, despite the advances that have been made in reagents and instrumentation, it is still not a trivial matter to get the quantification of mRNA right. A successful qRT-PCR assay depends on addressing key considerations at every step, from sample preparation through data analysis.
There are numerous publications, workshops, and meetings designed to educate and teach people how to carry out qRT-PCR assays properly. However, it is not clear whether that is being done today. I did a survey of papers published late last year and this year, in high-, medium-, and low-impact-factor journals, on human cancer studies using biopsies. I asked a number of questions under certain parameters, and the results have been quite disconcerting. Only 12% of individuals use laser capture microdissection. Very few people report any quality analysis of their RNA, and while approximately half of the reports show some kind of quantification, no one looks for inhibition of the RT-PCR assay. These results are similar to those I got from the questionnaire I submitted to the qPCR meeting this year in London. The conclusion is that there are a lot of problems with how people do their template analysis.
Patrick McGee: What do you think of running plasma DNA as a control?
Maurice Exner: I do a lot of that, but my work involves target detection, not necessarily gene expression. Plasma DNA works very well there as a normalizing quantity standard, but for gene expression, I think it would be difficult to get an accurate result.
Mark Anderson: It is always very important to run positive as well as negative controls. For gene expression, plasma DNA is not usually the best choice. If you have a sample that you know is giving a consistent positive signal, I think that would be the preferable positive control in these types of assays.
Digging Deeper Into qPCR
Following the Webcast, Maurice Exner answered some additional questions submitted by the audience. More questions are available for viewing at forums.dddmag.com
Generally, it doesn't fail, but the reaction efficiency may decrease due to limiting amounts of enzyme or other reagents.
Can qPCR detect one copy of target nucleic acid in one sample?
It is capable of detecting a single target; however, because of sampling error, a sample containing a single nucleic acid copy per unit of reaction volume will not be positive all the time.
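The sampling-error point can be made quantitative with Poisson statistics: if a sample averages one template copy per reaction volume, the chance that a given aliquot contains no copies at all is e^-1, about 37%. A minimal sketch (the function name is mine, not from the Q&A):

```python
import math

def detection_probability(mean_copies_per_reaction: float) -> float:
    """P(aliquot contains at least one template copy), assuming Poisson sampling."""
    return 1.0 - math.exp(-mean_copies_per_reaction)

for copies in (0.5, 1.0, 2.0, 5.0):
    print(f"{copies} copies/reaction -> "
          f"{detection_probability(copies):.0%} of replicates expected positive")
```

At an average of one copy per reaction, only about 63% of replicates are expected to come up positive even with a perfectly sensitive assay.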
Why are random primers less susceptible to secondary structures?
With respect to primer binding, one reason to use random primers is that they are shorter (i.e., hexamers), and this diminishes the chances of having interfering secondary structure. Another reason is that there are multiple primers, which increases the chances of binding to a region unaffected by secondary structure.
When multiplexing two gene targets with four primers, do you need to double all reagents like dNTP and enzymes?
This may have to be optimized and tested for every situation. Generally, using only two targets will not require additional enzyme/dNTPs, since the amounts will still be in excess.
If I design primers and TaqMan probes, is it a good idea to test the primers using SYBR Green and, only if they look good, test them in the presence of probes (without SYBR Green)?
It is a good idea to look for primer dimers, either with SYBR Green or with gel analysis. It is also useful to add the probe to determine whether it, too, contributes to primer dimers.
Do you find once validated primer pairs lose their quality over time? For example, does PCR efficiency go down over time? If so, why?
Primers and probes will degrade over time with storage, and probes are sensitive to light and freeze-thaws, and this can decrease your efficiency over time.
When doing a multiplex assay with different labels, do you have to set different thresholds for every label when analyzing the data?
Generally it is a good idea to set the threshold separately for each different label. After reviewing your data, you may find that the same threshold can be used, but you should not assume that it will be so.
What's the best way to compare qPCR data from different runs with the same or different standard curves?
If the standard curves have been validated and are reproducible, the data should be directly comparable. If two different standard curves are used, they should be compared using external standards, and if the two curves are calibrated against each other, the results should be directly comparable.
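For context, quantification against a standard curve is an interpolation of the unknown's Ct on the fitted line Ct = slope × log10(quantity) + intercept; runs are comparable when their curves are calibrated against the same external standards. A hypothetical sketch of that arithmetic (function names and the idealized dilution-series numbers are mine):

```python
def fit_standard_curve(log10_quantities, cts):
    """Least-squares fit of Ct vs log10(quantity); returns (slope, intercept)."""
    n = len(cts)
    mx = sum(log10_quantities) / n
    my = sum(cts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(log10_quantities, cts))
    den = sum((x - mx) ** 2 for x in log10_quantities)
    slope = num / den
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Interpolate an unknown's starting quantity from its Ct."""
    return 10 ** ((ct - intercept) / slope)

dilution_logs = [6.0, 5.0, 4.0, 3.0]   # log10 copies of a 10-fold series
cts = [15.00, 18.32, 21.64, 24.96]     # idealized Cts (slope -3.32)
slope, intercept = fit_standard_curve(dilution_logs, cts)
print(f"slope {slope:.2f}; unknown at Ct 20 -> "
      f"{quantify(20.0, slope, intercept):.0f} copies")
```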
What level is considered a low qPCR efficiency? What does it mean when the efficiency is at 75% for real-time PCR? What does it mean if an efficiency calculation is coming out too high (more than 120%)? What are the limits of efficiency for a good assay?
Everybody seems to agree that you should try to get an assay efficiency close to 100%. How low can you go in terms of efficiency and still be able to use the data? And how do you normalize if the endogenous and target assays have different efficiencies?
How would you explain a real-time RT-PCR reaction with an efficiency of about 140%?
Generally, PCR efficiencies should be between 90% and 110%. With primer/probe optimization, one should be able to achieve these levels. When the efficiency is low, the assay sensitivity will be affected and quantitation results may be questionable. Low efficiency can result from poor primer/target binding, from primer dimer formation between oligonucleotides in the reaction mix, or from unoptimized oligonucleotide and reagent concentrations in a reaction. PCR efficiencies of 120% generally result from having a nonlinear reaction, which could mean that the reaction has too much target. Ideally, choose CT values between 23 and 35 to build a curve to monitor PCR efficiency.
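These percentages map directly onto the standard-curve slope via the conventional formula E = 10^(-1/slope) - 1, so a slope near -3.32 corresponds to ~100%, and slopes between roughly -3.6 and -3.1 fall in the 90-110% window. A hedged sketch of that formula, along with the standard efficiency-corrected (Pfaffl-style) expression ratio relevant to the normalization question asked above (function names are mine):

```python
def efficiency_from_slope(slope: float) -> float:
    """Amplification efficiency from a standard-curve slope (Ct vs log10 quantity).
    A slope of about -3.32 gives ~1.0 (100%): the product doubles every cycle."""
    return 10 ** (-1.0 / slope) - 1.0

def efficiency_corrected_ratio(e_target, e_ref, dct_target, dct_ref):
    """Pfaffl-style expression ratio when target and reference assays differ in
    efficiency: (1+E_t)^dCt_t / (1+E_r)^dCt_r, with dCt = Ct(control) - Ct(sample).
    With both efficiencies at 100%, this reduces to the familiar 2^-ddCt form."""
    return (1.0 + e_target) ** dct_target / (1.0 + e_ref) ** dct_ref

print(f"slope -3.32 -> {efficiency_from_slope(-3.32):.1%}")  # ~100%
print(f"slope -3.60 -> {efficiency_from_slope(-3.60):.1%}")  # ~90%
```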
How do you feel about setting one's own threshold value vs. using the ones determined by the software? For one study is it better to set the threshold value as the same value throughout the study or change it appropriately for each individual assay?
Automatic threshold settings can be useful, as they always adjust for background. However, if you have one "bad well" it can throw off the values for all of your other wells. It is a good idea to do multiple runs and then determine an average threshold. You may want to set this as a permanent value or you may use the instrument's automatic setting, but any significant variation from the average value should be investigated. If you have good PCR efficiencies and reactions with good exponential growth curves, setting thresholds manually should be ok and you should be able to use different settings on different days or on different runs.
MA: With LCM, it's very difficult to do traditional types of sample prep. There are several types of kits out there . . . that are useful for very small, even down to single-cell, samples.
PM: Do you ever recommend RNase treating and cDNA cleanup after cDNA synthesis and two-step qRT-PCR?
MA: I think it really depends on which kind of RT you are using. Some RTs already have RNase H activity and some don't. There have been variable reports on whether or not that is helpful. I've personally never found it to be extremely useful to treat the samples with RNase after the cDNA synthesis. Again, if you are using two-step, usually you are diluting your cDNA material into your PCR. That's one of the best cleanup methods possible.
SB: My feeling would be the less you do with your sample during the cDNA and during the PCR, the better.
ME: If you have a well-designed assay, it shouldn't be a big issue.
PM: How many genes can you multiplex with [Invitrogen Corp.'s] LUX primers and which are the recommended fluorophores?
MA: It really depends on which instrument you are using. The instrument is generally one of the biggest limitations when you are multiplexing. With LUX, there is a specific set of guidelines, and it's very instrument-specific; without outlining all of those right now, it's up to three, or possibly four, targets depending on the platform you are using.
PM: Is it possible to produce any kind of meaningful quantification when a template is from formalin-fixed paraffin-embedded tissue, whereas a standard curve is generated from a cell line?
SB: Of course you can. All you are doing is a quantification against a standard curve. What you can't necessarily do is compare the results you are getting from your formalin-fixed material against a sample obtained using fresh frozen material. But the standard curve itself? Once you have prepared your RNA, I think it is perfectly acceptable to quantitate against that. What you will find is that you will get far more copy numbers. So, I think the issue is not whether you can use a standard curve; the issue is whether the results are going to be valid in terms of what's happening with the RNA.
PM: Maurice, how would you explain a real-time RT-PCR reaction with an efficiency of about 140%?
ME: Efficiency calculations are not necessarily accurate all the time. They're close. You can often see 105%, 110% because the calculations are not perfect, but 140%, that's difficult. In some cases, you can max out your system at the high end, so your efficiency will be different. You might only have a one- or two-cycle difference between samples at the very high end of the range. That can change things because your efficiency may drop or you may level off at a high template concentration. It would depend on how much of a linear range you use to determine your efficiency from.
PM: Mark and Stephen, I saw you smiling when that 140% figure came up. Anything you would like to add on that?
MA: I think that's my experience as well. When you're getting efficiency numbers greater than 100%, that usually indicates a saturation of the RT or the PCR, and you see less of a cycle threshold [CT] difference between higher dilutions. That can sometimes throw off the slope on that high end and create artificially high efficiency. It is very important when you are looking at your standard curves to look for even spacing between all of them. If you do see compression on the upper end, throw out those points and recalculate the efficiency.
ME: Right, you want to look at the linear area of your slope if you can.
PM: Which is the best housekeeping gene when studying cancer?
SB: This is a question that always comes up, and I think the only answer is that there is none. It depends on which cancer you are looking at. It depends on what you are doing. The best thing to do, if you want a reference gene, is to go and get a panel, check for yourself which of the reference genes are the least variable, using geNorm or something like that, and then use that. That's the only answer. There have been several papers suggesting that there are three or four, or one or two, that are universally applicable, but they don't agree with each other.
PM: How do you determine the integrity of your RNA?
SB: We do a 3':5' assay, in our case with GAPDH. What we are looking for is a 3':5' ratio of roughly one from an oligo(dT) reverse-transcribed template.
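Assuming roughly equal amplification efficiency for the two amplicons, that 3':5' ratio can be estimated from the Ct difference between the two assays as (1+E)^(Ct5' - Ct3'). A minimal sketch (the function name is mine, not Bustin's):

```python
def three_to_five_ratio(ct_3prime: float, ct_5prime: float,
                        efficiency: float = 1.0) -> float:
    """Estimated 3':5' amplicon ratio from the Cts of two assays on the same
    oligo(dT)-primed cDNA. Intact RNA gives similar Cts (ratio ~1); degraded
    RNA under-represents the 5' end, so its Ct rises and the ratio exceeds 1."""
    return (1.0 + efficiency) ** (ct_5prime - ct_3prime)
```

A 5' Ct two cycles later than the 3' Ct, for example, implies a ratio of about 4, flagging degradation.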
PM: Stephen, which is the best way to extract RNA from biopsies?
SB: It really depends on what you are doing. We're using less and less material. We think it is extremely important not to have to extract RNA at all. You need to lyse and do your PCR or
PM: Do you have any recommendations for tissue RNA preservation?
SB: You freeze it at –70 °C or –80 °C and occasionally check the integrity. RNA does degrade, so we find that if you go back to RNA after many years, you get different results. The real problem is how do you archive your material? Some people recommend using cDNA. I'm not sure that's the best way either.
PM: How many duplicates are best in qPCR?
SB: People tend to show triplicates when they've done an experiment twice. As Mark said, it is not as important that you can reproduce your RT-PCR; what you need to show is that your biological replicates are valid. Rather than doing 10 or five or three replicate RT-PCRs, you should take two separate samples and then do duplicate assays on those. That will tell you how valid your results are.
PM: How do you determine whether to accept a positive NTC or not?
MA: From a pure data analysis standpoint, I think the CT value means nothing. You have to really see the curve. The quantity value is going to tell you something. If you have a valid curve, and it's a very high quantitation, then you really have to think about it. But if it's a valid curve with very high CT, then it might be acceptable.
PM: What is the best method for determining whether changes in mRNA levels are biologically significant?
SB: I think we need to consider again that we are looking at mRNA levels. We're not looking at a whole expression profile. So, there has to be additional validation experiments that suggest if we see an increase or decrease in the mRNA, that is in some way reflected in increases or decreases of protein. You should not look at your RT-PCR in isolation. You need to look at it in the larger context of the cell doing things.
MA: I think you just have to understand your system very well and use common sense.
PM: How harmful to RNA integrity is preheating RNA to 80 °C before RT? Is there any quantitative data available?
MA: I agree with Stephen in that the least number of steps, the better. It's something that I would only implement in a case where there are extreme difficulties in priming in your RT. The first thing that you should do would be to try out different priming strategies or different primer sequences. If you just can't get around that, preheating is the last resort. Depending on contamination and things that come through with your sample, you can damage your RNA, but if that's the only choice you have, that is one suggestion that you can try.
PM: How do you determine a threshold?
ME: There are a lot of different ways. Sometimes it will depend on the instrument you use. Thresholds are chosen differently by different instrumentation. A lot of times, the standard is to choose so many standard deviations above your background. That's pretty reproducible, but it is something you have to validate. If you run your assay multiple times and you are always getting the same threshold, the same level above background, you are probably okay with that, but you do have to ensure that it is appropriate for the type of CTs you are seeing.
MA: It is important to set your threshold in the log view instead of the linear view. You can get a more accurate threshold that way.
SB: A key question is whether this is an objective assessment. There is a lot of subjectivity in how you get your data.
This article was published in G & P magazine: Vol. 6, No. 2, March, 2006, pp. 17-20.