Illuminating the unknown
Ensuring metagenomic data integrity
Metagenomic approaches use processes and workflows similar to those of conventional studies (e.g., PCR and qPCR). In both cases, the first step is to obtain, isolate, and purify a nucleic acid sample – DNA for genomic studies and RNA for transcriptomic investigations. This sample is then amplified (and sequenced, in the case of next-generation sequencing techniques), with the end product read and measured using specialized instrumentation. Finally, software is employed to process, compile, and analyze the resultant raw data.
What sets metagenomics apart from conventional approaches is scale. When designing and executing a metagenomics workflow, investigators must not only optimize nucleic acid yields pre- and post-amplification, but also ensure that the post-amplification product represents the original sample as accurately as possible. Gene expression magnitude and proportion come into play across the thousands of organisms, each with a unique genetic profile, potentially present in a single sample. Metagenomics studies are therefore considerably more challenging than conventional, single-organism microbial studies.
What is bias and how is it introduced?
Unfortunately, bias – the systematic distortion of measured data values from the true values of the original sample – is present to some degree in all experimental processes, and metagenomics is no exception. From sample acquisition to sequencing and read assembly, bias can be introduced at any stage of the typical metagenomics workflow (1). To start, whether a sample is truly representative of the greater community it belongs to depends on sampling location and frequency. For example, when studying the gut microbiome, a fecal sample will yield a different microbiota than one obtained from the intestinal mucosa. Sample composition can be further biased by how samples are stored and transported to the laboratory.
Extracting nucleic acids for metagenomic studies typically first requires liberating them from their cellular enclosures. Cell membranes and walls are broken down through chemical, enzymatic, or mechanical means. However, microbes differ in how easily they are lysed, resulting in dramatic differences in nucleic acid yield proportions. Changing extraction techniques can result in as much as a 10-fold difference in the measured proportion of a given taxon from the same sample (2). As such, it is important for researchers to understand – and compensate for – the inherent biases introduced by their extraction protocol and/or reagents of choice (3).
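As a rough illustration of how differential lysis skews results, consider the minimal sketch below. The taxa, counts, and lysis efficiencies are hypothetical, not measured values:

```python
# Hypothetical illustration: differential lysis efficiency distorts the
# measured community composition relative to the true one.
true_counts = {"Firmicutes": 500, "Bacteroidetes": 400, "Actinobacteria": 100}

# Assumed fraction of cells lysed per taxon under a given protocol;
# thick-walled Gram-positive taxa often lyse less efficiently.
lysis_efficiency = {"Firmicutes": 0.2, "Bacteroidetes": 0.9, "Actinobacteria": 0.3}

measured = {t: n * lysis_efficiency[t] for t, n in true_counts.items()}
total_true = sum(true_counts.values())
total_measured = sum(measured.values())

for taxon in true_counts:
    true_frac = true_counts[taxon] / total_true
    measured_frac = measured[taxon] / total_measured
    print(f"{taxon}: true {true_frac:.1%}, measured {measured_frac:.1%}, "
          f"distortion {measured_frac / true_frac:.2f}x")
```

Even with only three taxa, the hardest-to-lyse organism ends up badly underrepresented, which is the kind of distortion a well-chosen extraction protocol aims to minimize.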
Sources of bias in shotgun sequencing
Similarly, individual sequencing techniques possess their own biases. Primer construction, amplification protocol, genome size, and even whether the nucleic acid sample is single- or double-stranded have all been identified as sources of bias (3–5). For example, while shotgun sequencing creates random fragments for subsequent read generation, randomness does not automatically equate to uniformity, potentially resulting in the preferential amplification of some genomic or transcriptomic regions over others.
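One way to see whether "random" fragmentation actually yielded uniform coverage is to bin aligned read starts along the genome and check their spread. The sketch below is illustrative only; it simulates biased read positions rather than reading real aligner output:

```python
# Illustrative sketch: flag non-uniform shotgun coverage via the
# coefficient of variation (CV) of reads per genomic bin.
import random
import statistics

genome_length = 1_000_000
bin_size = 10_000
n_bins = genome_length // bin_size

# Simulated read-start positions skewed toward the genome start
# (in practice these would come from an aligner's output).
random.seed(0)
read_starts = [int(random.triangular(0, genome_length, 0)) for _ in range(50_000)]

coverage = [0] * n_bins
for pos in read_starts:
    coverage[min(pos // bin_size, n_bins - 1)] += 1

mean_cov = statistics.mean(coverage)
cv = statistics.stdev(coverage) / mean_cov
print(f"mean reads per bin: {mean_cov:.0f}, CV: {cv:.2f}")
# A CV near 0 indicates uniform coverage; a large CV suggests
# preferential amplification of some regions.
```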
Sources of bias in 16S rRNA sequencing
Likewise, 16S sequencing relies on the 16S ribosomal RNA (rRNA) gene as a phylogenetic marker to determine microbiome composition (3). The approach targets conserved regions that flank the hypervariable regions of the bacterial 16S rRNA gene and has been a mainstay of sequence-based bacterial analysis for decades (7). Because primer choice and the hypervariable region targeted both influence which taxa are detected, they too can skew the measured community profile. Analysis of the internal transcribed spacer (ITS) region similarly allows the profiling of fungal genomes (8).
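To make the primer dependence concrete, here is a small sketch that expands a degenerate 16S primer's IUPAC ambiguity codes into a regular expression and scans a toy reference for a binding site. The primer shown is the commonly cited 515F sequence (verify against your own protocol), and the reference fragment is invented:

```python
# Sketch: locate a degenerate 16S primer in a reference sequence by
# expanding IUPAC ambiguity codes into a regex.
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "M": "[AC]", "K": "[GT]",
         "S": "[CG]", "W": "[AT]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def primer_to_regex(primer: str) -> re.Pattern:
    return re.compile("".join(IUPAC[base] for base in primer.upper()))

primer_515f = "GTGYCAGCMGCCGCGGTAA"          # widely used V4 forward primer
reference = "ACGTGTGTCAGCAGCCGCGGTAATACGGAG"  # invented 16S fragment

match = primer_to_regex(primer_515f).search(reference)
print("binding site:", match.span() if match else "no perfect match")
# Taxa whose 16S sequence mismatches the primer are under-amplified,
# biasing the measured community profile.
```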
Awareness leads to countermeasures
Bias is cumulative: a distortion introduced during sample preparation will be amplified during sequencing and highlighted during analysis. It is therefore critical for scientists to understand potential sources of bias and to develop a thorough series of controls to compensate for them. Positive and negative controls can be used to identify variability between experimental runs using the same protocol and sample, while resources such as the Microbiome Quality Control project can help demonstrate how changes in protocol translate to changes in the final result. Finally, researchers need to be aware that efforts to detect certain organisms of interest (e.g., a pathogen) may mask many others, creating a biased portrait of the microbial community (1). While fully removing bias may be impossible, understanding and mitigating it will prove essential if metagenomics is to become a clinical diagnostic tool (1, 6).
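A positive control built from a mock community of known composition makes such checks quantitative. The sketch below is hypothetical (the taxa and fractions are invented) and simply reports the per-taxon log2 deviation of observed from expected proportions:

```python
# Hypothetical sketch: compare a sequenced mock-community control
# against its known composition to quantify protocol-induced distortion.
import math

expected = {"E. coli": 0.25, "S. aureus": 0.25,
            "P. aeruginosa": 0.25, "B. subtilis": 0.25}
observed = {"E. coli": 0.40, "S. aureus": 0.10,
            "P. aeruginosa": 0.35, "B. subtilis": 0.15}

for taxon, exp in expected.items():
    obs = observed[taxon]
    print(f"{taxon}: expected {exp:.0%}, observed {obs:.0%}, "
          f"log2 deviation {math.log2(obs / exp):+.2f}")
# Deviations of similar sign and size across runs point to systematic
# bias (e.g., lysis efficiency); run-to-run scatter indicates noise.
```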
To maximize yield, nucleic acids must first be liberated from their cellular sources. Mechanical homogenization is recommended to avoid introducing bias during cell lysis and can be accomplished using the Powerlyzer 24 Homogenizer (110/120V). The choice of beads and lysis chemistry is essential to achieving the best yield and representation.
Keys to success:
- Fast and powerful homogenization
- High throughput capacity
- Reproducible results
Different source materials, whether feces, soil, tissue, or water, present different challenges when it comes to isolating DNA and RNA. Find a protocol or kit (such as the QIAamp PowerFecal Pro DNA Kit) optimized to your needs for best results.
Keys to success:
- Optimized for specific sample type
- Ability to remove PCR inhibitors
- High yield recovery
NGS library preparation
Reagent and primer selection play a large part in the generation of a proper NGS library, one that minimizes bias and maximizes read depth. Find a kit that aligns with your amplification and sequencing needs.
Keys to success:
- Even coverage
- High read quality
- Works even with low-input concentrations
A good software suite goes a long way in turning metagenomics data interpretation from a chore into a joy. The best will have specific tools and features to cover all of your needs, from taxonomy to polymorphism analysis.
Keys to success:
- Intuitive interface
- Sensitive and powerful