Mass spectrometry-based proteomics at Nature Methods

Published in Protocols & Methods


A look back at highlights in proteomics technology developments published in Nature Methods.

The last decade has seen amazing advances in mass spectrometry-based proteomics technology as well as ever-expanding use of the technology for varied biological applications. Here we take a look back at some proteomics technology development highlights published in Nature Methods over the last 10 years. (A second entry covering biological applications of mass spectrometry-based proteomics is planned for the near future; stay tuned.)

Sample preparation

The first step in a successful proteomics experiment is sample preparation. In 2009 Matthias Mann’s lab published a filter-aided sample preparation (FASP) method that is widely used by the proteomics community. In 2014 the same lab published an optimized approach that performs all sample processing tasks in a single enclosed tube.

Proteins are digested into peptides for ‘shotgun’ proteomics analysis. While trypsin is most widely used, it also comes with known limitations. Albert Heck and colleagues and Neil Kelleher and colleagues described useful alternatives to trypsin.

Proteomics researchers are always striving for higher sensitivity. John Yates’s lab’s DigDeAPr method and Bernhard Kuster’s lab’s use of DMSO to enhance electrospray response allow researchers to do deeper proteomic analysis.

Quantitative methods

Proteomics researchers want to quantify, as well as identify, peptides and proteins. Stable isotope labeling, either through metabolic incorporation or chemical labeling during sample preparation, enables researchers to quantitatively compare multiple samples. Spiking in labeled concatenated signature peptides into samples enables absolute quantification, as shown by Robert Beynon and colleagues.
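The arithmetic behind spike-in absolute quantification is simple: because the labeled standard is added at a known amount and co-elutes with its unlabeled counterpart, the light/heavy intensity ratio converts directly to an absolute quantity. A minimal sketch, with hypothetical names and values:

```python
# Sketch of absolute quantification with a stable isotope-labeled
# spike-in. Assumes MS intensities for the endogenous ("light") and
# labeled ("heavy") forms of a signature peptide; all values hypothetical.

def absolute_amount(light_intensity, heavy_intensity, spiked_fmol):
    """Endogenous peptide amount (fmol) from the light/heavy intensity ratio.

    The heavy peptide is spiked in at a known amount, so the observed
    intensity ratio scales that amount to the endogenous quantity.
    """
    return (light_intensity / heavy_intensity) * spiked_fmol

# A 2:1 light/heavy ratio with 50 fmol of heavy standard spiked in
print(absolute_amount(2.0e6, 1.0e6, 50.0))  # prints 100.0
```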

The SILAC metabolic method has proved to be extremely popular, and we have published applications of SILAC for quantifying proteins and phosphorylation sites in human tissues, and in nematodes (Larance et al. and Fredens et al.).

A limitation of SILAC is that it cannot be used to compare more than three samples at one time. Joshua Coon and colleagues provided a clever way around this with their NeuCode SILAC approach, which in theory could enable up to 39-plex experiments.

Chemical labeling approaches (such as iTRAQ and TMT) currently offer higher multiplexing capability than SILAC, but can suffer from problems of quantitative accuracy. Coon’s lab and Steven Gygi’s lab each provided methods to obtain accurate quantitative data in multiplexed experiments.

Shotgun data analysis

In a typical ‘shotgun’ proteomics (discovery-based) experiment, MS/MS fragmentation spectra are generated for all peptides that can be detected by the mass spectrometer. The proteins are identified by matching these experimental spectra to theoretical or actual MS/MS peptide spectra found in databases. Well-performing tools to do this and methods to control for false discoveries are therefore crucial.

To generate good proteomics data, one must get the most out of the mass spectrometer itself. The HCD method from Stevan Horning and Matthias Mann and colleagues and a decision tree algorithm from the Coon lab enable researchers to obtain improved MS/MS data for protein identification.

We have published tools for peptide identification – Percolator, SpectraST, and MS-Cluster – and quantitative data analysis (Census). Lennart Martens’ group showed that combining various data processing workflows leads to greater proteome coverage. Proteogenomics-type approaches using custom databases generated from genomic data are becoming popular, as they allow novel peptides not found in standard protein databases to be identified (see Evans et al. and Branca et al.).

Researchers must be careful to not overinterpret their proteomics data. Gygi’s lab wrote a useful Perspective on the target-decoy approach for determining false discovery rate, a metric that has become broadly adopted by the field.
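The core idea of the target-decoy approach can be shown in a few lines: peptide-spectrum matches (PSMs) are searched against a combined database of real ("target") and reversed or shuffled ("decoy") sequences, and decoy hits above a score cutoff estimate the number of incorrect target hits. A minimal sketch, with hypothetical data:

```python
# Minimal sketch of target-decoy FDR estimation. Each PSM is a
# (score, is_decoy) tuple; the function name and example values
# here are hypothetical, not from any particular search engine.

def estimate_fdr(psms, threshold):
    """Estimate FDR among PSMs scoring at or above `threshold`.

    Under the target-decoy assumption, decoy hits model incorrect
    target hits, so FDR ~= decoys / targets above the cutoff.
    """
    targets = sum(1 for score, is_decoy in psms
                  if score >= threshold and not is_decoy)
    decoys = sum(1 for score, is_decoy in psms
                 if score >= threshold and is_decoy)
    return decoys / targets if targets else 0.0

# Hypothetical PSM list: (search-engine score, matched a decoy sequence?)
psms = [(92, False), (88, False), (85, True), (80, False),
        (74, False), (70, True), (65, False), (60, True)]

print(estimate_fdr(psms, 80))  # 1 decoy / 3 targets ≈ 0.33
```

In practice one scans the score threshold until the estimated FDR drops below a chosen level (commonly 1%) and reports only the PSMs above that cutoff.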

In order to keep tools sharp and highlight areas for development, it is important to systematically put them to the test. In 2005, Gygi’s lab performed a comparison of three platforms. In 2009, a large group of researchers tested their ability to identify proteins in a small test sample. This analysis highlighted common problems that occur especially during data analysis in proteomics investigations.

Targeted proteomics

Targeted proteomics, which we chose as our Method of the Year in 2012, offers a fundamentally different way of analyzing data compared to discovery-based proteomics. Targeted approaches, most commonly selected reaction monitoring (SRM), utilize mass spectrometry assays to identify and quantify peptides selected to represent proteins of interest, akin to Western blotting, but in a multiplexed fashion.

These SRM assays can be laborious to generate, however. Methods for high-throughput SRM assay generation are therefore important (see Picotti et al., Stergachis et al. and Kennedy et al.). In 2008 Ruedi Aebersold’s group set up a database of assays for the yeast proteome, called SRMAtlas, which has since grown to include assays for M. tuberculosis and human. Amanda Paulovich and colleagues just this year presented the CPTAC Assay Portal, a new repository of analytically validated targeted proteomics assays.

Statistical validation is just as important in targeted proteomics as it is in discovery-based proteomics. Aebersold’s lab developed the mProphet tool and also provided a useful guide to SRM in their 2012 Review.

Biological applications of targeted proteomics are growing. Bart Deplancke and colleagues showed that transcription factors could be followed during cellular differentiation using SRM. Olga Vitek’s group showed that targeted proteins could be quantified using sparse reference labeling. In this current 10th Anniversary issue, Claus Jørgensen’s group reports a quantitative method for monitoring human kinases, and Paola Picotti’s lab describes a panel of assays to quantify ‘sentinel’ proteins reporting on 188 different yeast processes.

Data-independent analysis

Our very first issue in October 2004 featured an interesting paper from Yates and colleagues describing a data-independent mass spectrometry scanning approach for acquiring MS/MS spectra. In contrast to the common data-dependent approach, where the most prominent peptide ions are selected for MS/MS, the data-independent approach can enable more reproducible results as it overcomes issues of peptide ion sampling stochasticity. It took nearly a decade for this clever idea to really catch on, but within the last year or so, we have published practical data-independent analysis implementations from Michael MacCoss’s and Stefan Tenzer’s labs.

Anne-Claude Gingras and Stephen Tate and colleagues, along with Aebersold and colleagues, showed how a quantitative targeted data-independent analysis method called SWATH provides advantages for analyzing protein interactomes by affinity purification-mass spectrometry.

We look forward to many more strong advances in mass spectrometry-based proteomics in the decade to come!
