New Tools for a Tough Job
During the first few decades of the molecular biology revolution, one thing has become abundantly clear: nature abhors protein chemists. While their colleagues have bounded ahead in the study of nucleic acids, sequencing genes, then genomes, and now entire ecosystems with apparent ease, protein science has had to struggle at every turn.
Indeed, the biology of proteins seems almost perfectly designed to thwart analysis. Protein function depends on complex 3D interactions, often governed by obscure post-translational modifications and Byzantine networks of other proteins. Even complete genome sequences provide little more than vague hints at how any of these interactions might work.
Over the past few years, though, protein scientists have gradually adapted old techniques and developed a few new ones, yielding a new generation of tools that are finally starting to crack some of the field’s toughest problems. Protein analysis is still a tough job, but a sampling of these new methods suggests that it might at least be getting easier.
In many areas of protein analysis, change has been more evolutionary than revolutionary. Capillary electrophoresis (CE) is an old idea that researchers and product developers are now overhauling with new technology. Like slab gel electrophoresis, CE relies on an electric current to pull a sample through a conductive medium, causing molecules to separate based on charge and mass. Because the separation occurs inside a tiny capillary tube, though, CE can separate molecules much faster and more distinctly than traditional slab gels. CE is also much easier to automate than other forms of electrophoresis.
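The physics behind that separation can be sketched with a back-of-the-envelope calculation. In CE, an analyte’s apparent velocity is the product of the field strength and the sum of its electrophoretic and electroosmotic mobilities; migration time is just the distance to the detector divided by that velocity. All of the numbers below are hypothetical, chosen only to illustrate the relationship, not to describe any particular instrument.

```python
# Illustrative CE migration-time estimate: t = L_d / ((mu_ep + mu_eof) * E),
# with field strength E = V / L_t. All values are assumed for illustration.

V = 20_000.0        # applied voltage (V)
L_t = 0.50          # total capillary length (m)
L_d = 0.40          # capillary length to the detector (m)
mu_eof = 5.0e-8     # electroosmotic mobility (m^2/V*s), assumed
mu_ep = 2.0e-8      # analyte electrophoretic mobility (m^2/V*s), assumed

E = V / L_t                   # field strength (V/m)
v = (mu_ep + mu_eof) * E      # apparent velocity toward the detector (m/s)
t = L_d / v                   # migration time (s)

print(f"field = {E:.0f} V/m, velocity = {v:.2e} m/s, migration time = {t:.0f} s")
```

The key point is the scaling: because a capillary tolerates a much higher field strength than a slab gel without overheating, migration times shrink from hours to minutes or seconds.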
The minuscule size of a CE capillary originally made it hard to detect the molecules coming out the end, but highly sensitive assays have finally gotten around that problem, and CE now lies at the core of rapid nucleic acid sequencing machines and quality control protocols for protein-based drugs. Ed Horton, PhD, senior marketing manager for analytical products at Beckman Coulter Life Sciences in Indianapolis, explains that for these applications, traditional slab gels "have been replaced pretty uniformly throughout the industry with capillary electrophoresis."
In an effort to bring the technique into other areas of protein analysis, Beckman is currently developing the CESI-8000 system, which combines capillary electrophoresis with electrospray ionization for mass spectrometry (MS). Horton sees it as a natural pairing: "Capillary electrophoresis has exquisite separation capability ... but it doesn’t characterize those peaks." Researchers can feed the CE output into a mass spectrometer for analysis already, but the interface between the two systems is troublesome.
Previous CE-MS connections have required wrapping the output of the CE capillary in a sheath of conductive liquid to complete the electrical circuit. This sheath flow dilutes the sample, though, drastically reducing the system’s overall sensitivity. The CESI-8000’s design, on the other hand, eliminates the sheath flow, allowing protein chemists to perform entirely new categories of analysis. "We really see that as being a driver for doing intact protein characterization, very high speed proteomics, things that are ... difficult if not impossible to do with current technology," says Horton.
Other equipment makers are also pushing the boundaries of CE. Agilent Technologies of Santa Clara, Calif., already offers CE systems designed to connect to the company’s own mass spectrometers for a turnkey system. Meanwhile, companies such as Sebia in Norcross, Ga., continue to cater to customers who need fully automated CE systems for routine quality control testing.
While CE and MS can help researchers identify proteins, studying their 3D structures requires a different set of tools: nuclear magnetic resonance (NMR) and X-ray crystallography. Both techniques can yield high-resolution molecular structures, but they aren’t interchangeable.
"They’re perfect complements," says John Cavanagh, PhD, professor of molecular and structural biochemistry at North Carolina State University in Raleigh, N.C., adding that "NMR will do the same things as X-ray, but it will do them slower and not quite as big a molecular weight, but there’s also things you can do with NMR that you can’t do with X-ray."
One major strength of NMR is its ability to show the dynamic changes in protein structures as the molecules float in a solution, a setting that mimics the proteins’ natural environment. X-ray crystallography produces only static images of whatever conformation was capable of forming a crystal. Proteins that do crystallize well can yield X-ray structures in a few days; NMR studies can drag on for weeks.
As an NMR spectroscopist, Cavanagh has watched the technology evolve from revealing 2D structures in the 1970s to three dimensions in the 1990s. Along the way, scientists figured out how to add isotopic labels to proteins very efficiently, boosting the technique’s resolution. More recently, a method called transverse relaxation optimized spectroscopy (TROSY) has allowed NMR to probe larger proteins.
While researchers continued to refine NMR methods, equipment makers were hard at work improving the gear. More powerful magnets and cryogenically cooled probes have increased the instruments’ sensitivity while reducing noise, allowing investigators to study more complex molecules at higher resolutions. At the same time, biochemists have started to borrow NMR techniques from other areas of chemistry. "Solid-state NMR applications ... have traditionally been mostly in the materials studies, but over the last couple of years have made significant inroads also in protein analysis," says Clemens Anklin, PhD, vice president of applications at Bruker Biospin in Billerica, Mass.
As the name implies, solid-state NMR allows researchers to study proteins that aren’t in solution. Anklin explains that the method is particularly good for solving the structures of membrane-bound proteins, which have been hard to study with traditional X-ray crystallography and solution NMR methods. Bruker is a major supplier of NMR equipment, and Anklin and his colleagues are keen to continue pushing the technology’s limits across numerous fields. "There is almost no area of chemistry where NMR is not used," he says.
The steady stream of technological developments has also improved user-friendliness, giving a wider group of researchers access to the technique. "Ten years ago you had to be an ultraspecialist ... and I really didn’t have many biochemists and molecular biologists working for me, it was all hard-core structure people," says Cavanagh, adding "now my lab is almost completely turned over to general biochemists and general molecular biologists."
Similar trends have been underway in X-ray crystallography, where improvements in automated pipetting systems and other standard laboratory tools have made it easier to set up and manage large-scale crystallization experiments. Indeed, companies such as TTP LabTech in Royston, U.K. have even introduced specialized automation platforms specifically for crystallographers.
With the recent progress in protein analysis, biochemists now find themselves confronting a new problem. "All the technologies people are implementing just generate that much more data, the throughput’s gone up, and so we’re all sort of being flooded by data," says Andrew Emili, PhD, professor of proteomics and bioinformatics at the University of Toronto.
Emili and his colleagues are among several groups trying to deal with the genomic and proteomic data glut. Modern computing power makes it easy to search large databases very quickly, but in biology simply identifying correlations isn’t enough. "In protein-protein interaction networks [the challenge is] finding the signal in the noise. What are the relevant interactions and the nonspecific ones or artifacts in the assays that we use?" asks Emili.
To address that, the researchers are building algorithms to assign confidence scores to particular interactions, based on the known biology of the system being studied. But that raises a bootstrapping problem: without an initial set of known interactions, it’s impossible to know which other interactions are real. "We need what’s called a gold standard or reference set of interactions, but it turns out to be a little challenging to find a set of complexes or interactions that everyone can agree upon as being real and something that we should expect to see in our data," says Emili.
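The general shape of that kind of scoring can be illustrated with a toy example: score each candidate interaction by how often it recurs across replicate experiments, then check the high-confidence set against a reference list. The data, the scoring rule, and the gold-standard set below are all invented for illustration; they are not Emili’s actual algorithm.

```python
# Toy sketch of confidence scoring for protein-protein interactions.
# Scoring rule (detection frequency across replicates), the example
# pull-down data, and the gold-standard set are all hypothetical.
from collections import Counter

# Each replicate experiment reports a set of detected interacting pairs.
experiments = [
    {("A", "B"), ("A", "C"), ("B", "D")},
    {("A", "B"), ("B", "D"), ("C", "E")},
    {("A", "B"), ("A", "C"), ("C", "E")},
]

# Hypothetical reference set of interactions agreed to be real.
gold_standard = {("A", "B"), ("B", "D")}

counts = Counter(pair for exp in experiments for pair in exp)
scores = {pair: n / len(experiments) for pair, n in counts.items()}

# Keep only pairs seen in at least two of three replicates, then
# estimate precision against the reference set.
confident = {pair for pair, s in scores.items() if s >= 2 / 3}
precision = len(confident & gold_standard) / len(confident)

print(sorted(confident), f"precision = {precision:.2f}")
```

Even this toy version shows the bootstrapping problem Emili describes: the precision estimate is only as trustworthy as the gold-standard set it is measured against.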
The Human Proteome Organization highlighted that problem by sending identical test samples of complex protein mixtures to multiple biochemists for analysis. The labs reached widely differing conclusions about the contents. (Bell AW, et al. Nat Methods. 2009;6:423-430.) "There was a lot of hair pulling and people saying ‘well, these methods are all failing because they don’t agree,’ but in practice after looking at everyone’s ... raw data and how they interpreted the raw data, it’s probably all an issue of data analysis; people applied different algorithms," says Emili.
Besides settling on more reproducible data analysis techniques, Emili advocates more studies that look directly at the physical interactions of drugs and drug-like compounds with their protein targets.
Unfortunately, even with new equipment and strategies, protein analysis is likely to remain difficult. As Emili explains, "there’s a lot of emphasis now on high-performance instruments and somehow that will lead to more reliable information, but knowledge extraction from large-scale data sets—even if you used the most high-performance instrument—I don’t think it’s any easier."
About the Author
Originally trained as a virologist, Alan Dove is now a science journalist whose work appears regularly in a variety of trade and scientific journals and online publications. He also co-hosts the popular podcast "This Week in Virology."