Microarray Analysis

Overview

The PBRC Genomics Core Facility (GCF) offers the Illumina whole-genome expression array system for human, mouse, and rat. The system is housed in a fully equipped, secure laboratory. The GCF offers sample quality control, labeling, hybridization, scanning, quantitation, and limited data analysis.

Commercial microarrays in standard microscope slide format may be scanned using a Perkin-Elmer ScanArray 5000.

Panther (for gene annotation, biological pathways, molecular functions, and protein families) and Spotfire (for pattern visualization) are the recommended programs for microarray data analysis. Spotfire is only available through the Genomics Core Facility.

Gene validation is supported through access to Applied Biosystems 7900HT Sequence Detection Systems for quantitative real-time PCR.

For help in designing a microarray experiment, please contact us. Investigators considering running microarrays in our facility must discuss the technical and statistical aspects of their experimental design with us before the experiment is performed, i.e., before RNA is isolated.


Experimental Design

Microarrays are a powerful tool when the experiment is designed properly. Both biological and technical variation must be addressed adequately to produce meaningful data. Consider the following questions before you begin:

  • What type of tissue/cells do you plan to use? Is tissue heterogeneity a concern?
  • How easy or difficult is it to obtain quality RNA from your samples?
  • What RNA isolation technique is appropriate for your chosen tissues or cells?
  • How much RNA can you actually obtain from your sample?
  • What is your experiment timeline?
  • How many replicates are needed to obtain statistical significance?
  • Who will analyze and interpret your array results (a lab technician, a statistician, yourself)?
  • Researchers should review the MIAME standards on the MGED (Microarray Gene Expression Data Society) website. MIAME (Minimum Information about a Microarray Experiment) is the standard for submitting microarray data for publication.

Discovery vs. Validation

Many investigators look for validation of previous quantitative PCR data in their microarray results. While detecting the known up- or down-regulation of a transcript on your arrays may be comforting, failure to detect a similar expression level does not mean the array has failed. Microarray analysis is designed for the discovery of potential gene targets; quantitative PCR is the gold standard for validation and quantitation.

Discrepancies between microarray and quantitative PCR fold changes may be caused by:

  • the more limited dynamic range of microarray detection (see the illustration following this list)
  • a quantitative PCR primer/probe set designed to detect a different region of the target transcript
  • poor primer/probe design
  • inefficiency of the PCR reaction
  • detection of non-specific amplicons
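
As a rough illustration of the dynamic range point above, the Python sketch below uses entirely hypothetical abundances and an assumed scanner saturation limit; it is not a model of any particular instrument, only a demonstration of how saturation can compress an array-observed fold change relative to quantitative PCR.

    # Illustrative only: hypothetical abundances and an assumed saturation
    # limit, showing how detector saturation compresses an observed fold change.
    SATURATION = 60000.0  # assumed upper limit of the scanner's dynamic range

    def observed_signal(true_abundance, scale=10.0):
        """Map a true transcript abundance to a saturating array signal."""
        return min(true_abundance * scale, SATURATION)

    control, treated = 500.0, 25000.0  # a true 50-fold difference
    array_fold = observed_signal(treated) / observed_signal(control)
    print(f"True fold change: {treated / control:.0f}x")     # 50x
    print(f"Array-observed fold change: {array_fold:.0f}x")  # 12x (compressed)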

Potential causes of signal variation in microarray experiments:

  • Varying RNA quality
  • Varying cDNA synthesis efficiency
  • Dye-bias if using two-color fluorescence dye labeling systems
  • Inconsistent hybridization conditions due to poor temperature control
  • Inadequate mixing or distribution of hybridization solution
  • Alternate sequence present with a high binding affinity for a probe
  • Contaminating, unlabeled cDNA that competitively binds with a probe
  • Contaminating protein or chemical that either enhances or quenches the chemiluminescence and/or fluorescence
  • Debris on the slide

Most of these causes can be avoided through careful laboratory technique or corrected through data normalization. Some, however, are inherent to microarray technology and can be neither detected nor corrected.
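
As one example of data normalization, the sketch below implements a simple quantile normalization in Python (one of the approaches compared in Bolstad et al. 2003, listed in the References). The signal matrix is made up, ties are broken arbitrarily, and this is not necessarily the normalization the GCF applies.

    # A minimal quantile normalization sketch: every array (column) is forced
    # to share the same empirical signal distribution. Values are illustrative.
    import numpy as np

    def quantile_normalize(signals):
        """signals: genes x arrays matrix of raw intensities."""
        ranks = np.argsort(np.argsort(signals, axis=0), axis=0)   # rank of each gene within its array
        mean_by_rank = np.mean(np.sort(signals, axis=0), axis=1)  # mean intensity at each rank across arrays
        return mean_by_rank[ranks]                                 # replace each value with its rank's mean

    raw = np.array([[5.0, 4.0, 3.0],
                    [2.0, 1.0, 4.0],
                    [3.0, 4.0, 6.0],
                    [4.0, 2.0, 8.0]])
    print(quantile_normalize(raw))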


Replication

  • Biological replicates are multiple arrays hybridized with RNA from different biological samples, or pools of samples, representing the same treatment or control/treatment combination. These arrays capture both biological and technical variation.

  • Technical replicates are multiple arrays hybridized with the same RNA sample. The only differences in measurements are due to technical differences in array and reagent manufacturing and processing.

If experimental cost is an issue, consider spending your money on biological replicates. Technical replication only demonstrates the technical proficiency of the person performing the experiment and the consistency of manufacturing quality. The Applied Biosystems 1700 system has been shown to have the best correlation among technical replicates (>0.995) of nine commercial and one in-house microarray platforms (Nature Biotechnology, Vol. 24, No. 7, July 2006, pages 832-840).

Design your experiment to limit variability to only those conditions you are testing. Match animals, environmental conditions, timing, etc. The number of replicates must be determined by the amount of natural, uncontrollable variation present in your experimental samples. Human samples will exhibit much more variation than inbred mice or rats, which in turn may exhibit more variation than cell culture samples. The greater the natural variability, the more replicates are needed (see Pooling for an alternative way of dealing with variability). You must have enough replicates to perform appropriate statistical analyses. Finally, you must balance the cost of the experiment against the benefit derived from increasing the replicate number. The Power of Replicates, an Illumina Gene Expression Profiling Technical Note, discusses the advantages of using replicates to discern true differences from random variation. Remember: in most cases, you will be generating information that will fuel your research goals for years to come, and additional funding to produce quality data is wisely spent.
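
As a back-of-the-envelope guide (not the facility's method) to how natural variability drives replicate number, the sketch below applies the standard two-sample normal approximation for sample size. Both sigma (the standard deviation of log2 expression between biological replicates) and delta (the log2 fold change you want to detect) are assumptions you must supply for your own system, and alpha would normally be tightened to account for testing thousands of genes.

    # Two-sample sample-size approximation:
    #   n per group ~ 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    from math import ceil
    from scipy.stats import norm

    def replicates_per_group(sigma, delta, alpha=0.05, power=0.80):
        """Approximate biological replicates per group for one gene."""
        z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
        z_power = norm.ppf(power)          # desired statistical power
        return ceil(2 * (z_alpha + z_power) ** 2 * sigma ** 2 / delta ** 2)

    # Detecting a 2-fold change (delta = 1 on the log2 scale):
    print(replicates_per_group(sigma=0.3, delta=1.0))  # tight cell culture: ~2 arrays per group
    print(replicates_per_group(sigma=1.0, delta=1.0))  # outbred human samples: ~16 arrays per group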


Pooling

Pooled samples are commonly used when individual samples yield low amounts of RNA (alternatively, use an RNA amplification labeling protocol) or as a way to dampen random individual variation when the experimental organism has a high degree of genetic variability. Pooled biological replicates are recommended for good statistical analysis: pool equal numbers of samples, and use several independent pools to test each experimental condition.
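
The simulation below (assumed values, not facility data) illustrates why pooling damps individual variation: approximating a pool as the average of its members, the spread between pools of k animals is roughly the individual biological spread divided by the square root of k.

    # Pools of 5 hypothetical animals: between-pool spread shrinks ~1/sqrt(5).
    import numpy as np

    rng = np.random.default_rng(0)
    individual_log2 = rng.normal(loc=8.0, scale=1.0, size=(1000, 5))  # per-animal log2 expression
    pooled_log2 = individual_log2.mean(axis=1)                        # each row averaged into one pool

    print(round(individual_log2.std(), 2))  # ~1.0, individual variation
    print(round(pooled_log2.std(), 2))      # ~0.45, variation between pools of 5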


Submission Requirements

The quality of the submitted RNA must be confirmed by:

  • 260/280 ratio between 1.8 and 2.0,
  • 260/230 ratio ≥ 1.8,
  • RNA Integrity Number (RIN) > 7 (preferably > 8).
  • Labeling protocol: Illumina, single amplification
  • Template amount per array (total RNA): 100-500 ng
  • Recommended submission concentration: 50 ng/µl
  • Volume*: 15 µl
*The volume requested is enough for one labeling reaction. More RNA may be requested if the labeling reaction fails.
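
For convenience, the sketch below encodes the acceptance guidelines listed above as a simple self-check; the field names and example values are hypothetical, the concentration and volume checks are interpretations of the recommendations in the table, and the GCF performs its own QC in all cases.

    # Minimal pre-submission self-check against the guidelines above.
    def passes_submission_qc(ratio_260_280, ratio_260_230, rin,
                             conc_ng_per_ul, volume_ul):
        return (1.8 <= ratio_260_280 <= 2.0 and
                ratio_260_230 >= 1.8 and
                rin > 7 and               # RIN > 8 preferred
                conc_ng_per_ul >= 50 and  # recommended submission concentration
                volume_ul >= 15)          # enough for one labeling reaction

    print(passes_submission_qc(1.95, 2.1, 8.4, 60, 20))  # True
    print(passes_submission_qc(1.95, 1.2, 8.4, 60, 20))  # False: 260/230 too low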

Please refer to the document RNA Sample Requirements for Microarray Analysis for detailed information about RNA extraction and quality control. All submissions will be checked for quality and quantity by Agilent analysis before and after labeling. The submitter will be notified if any sample does not meet adequate quality standards for microarray analysis. The submitter will assume financial responsibility for any microarray failure if the decision is made to proceed with substandard samples.


Analysis

At this time, the GCF provides normalized signal data. It does not provide further analysis of Illumina arrays.
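
Downstream analysis is therefore up to the investigator. As one hedged example, the sketch below runs per-gene t-tests on a hypothetical matrix of normalized log2 signals and controls the false discovery rate with the Benjamini-Hochberg procedure (Benjamini & Hochberg 1995, listed in the References); the data, group sizes, and thresholds are illustrative only.

    # Per-gene two-group comparison with Benjamini-Hochberg FDR control.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    log2_signals = rng.normal(8.0, 1.0, size=(1000, 6))  # 1000 genes; 3 control + 3 treated arrays
    log2_signals[:50, 3:] += 1.5                          # spike in 50 truly changed genes

    t_stat, p = stats.ttest_ind(log2_signals[:, :3], log2_signals[:, 3:], axis=1)

    # Benjamini-Hochberg adjusted p-values (step-up procedure).
    order = np.argsort(p)
    m = len(p)
    stepped = p[order] * m / np.arange(1, m + 1)           # raw BH quantities in sorted order
    adjusted = np.minimum.accumulate(stepped[::-1])[::-1]  # enforce monotonicity from the largest p down
    q = np.empty(m)
    q[order] = np.clip(adjusted, 0.0, 1.0)

    print((q < 0.05).sum(), "genes significant at 5% FDR")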


References

Butte, A. (2002) The use and analysis of microarray data. Nature Reviews Drug Discovery 1, 951-960.

Bolstad, B. M., Irizarry, R. A., Astrand, M., and Speed, T. P. (2003) A comparison of normalization methods for high density oligonucleotide array data based on bias and variance. Bioinformatics 19(2), 185-193.

Troyanskaya, O., Cantor, M., Sherlock, G., Brown, P., Hastie, T., Tibshirani, R., Botstein, D., and Altman, R. B. (2001) Missing value estimation methods for DNA microarrays. Bioinformatics 17(6), 520-525.

Pollard, K. S., Dudoit, S., and van der Laan, M. J. (2004) Multiple testing procedures: R multtest package and applications to genomics. U.C. Berkeley Division of Biostatistics Working Paper Series, Working Paper 164.

Benjamini, Y. and Hochberg, Y. (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B 57, 289-300.
