In Part 1, several normalization methods were used on a developing mouse lens TMT study that spanned three 6-plex TMT labeling experiments. It was clear that an IRS-like procedure is critical to combining data from multiple TMT experiments because the different TMT experiments act like different batches. We also saw that increases in expression of several highly abundant lens proteins during the time course created a compositional bias in the samples that could be corrected by procedures like TMM.
In Part 2, we compared two rather different conditions, early development (E15 plus E18) versus later development (P6 plus P9), because what is known about the lens gives us reasonable expectations for the differences. In Part 3, we looked at using ratios of samples to a standard as an alternative to scaling experiments based on the standard values.
In this installment, we will compare P0 to P3 with and without IRS normalization. These samples should be relatively similar, so TMM normalization should be appropriate. We will stick with the exact test in edgeR so that we are not changing too many things at once.
Robinson, M.D., McCarthy, D.J. and Smyth, G.K., 2010. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics, 26(1), pp.139-140.
Robinson, M.D. and Oshlack, A., 2010. A scaling normalization method for differential expression analysis of RNA-seq data. Genome biology, 11(3), p.R25.
Data from:
Khan, S.Y., Ali, M., Kabir, F., Renuse, S., Na, C.H., Talbot, C.C., Hackett, S.F. and Riazuddin, S.A., 2018. Proteome Profiling of Developing Murine Lens Through Mass Spectrometry. Investigative ophthalmology & visual science, 59(1), pp.100-107.
# Analysis of IOVS mouse lens data (Supplemental Table S01):
# Khan, Shahid Y., et al. "Proteome Profiling of Developing Murine Lens Through Mass Spectrometry."
# Investigative Ophthalmology & Visual Science 59.1 (2018): 100-107.
# load libraries
library(tidyverse) # modern R packages for big data analysis
library(limma) # edgeR will load this if we do not
library(edgeR)
# read the Supplemental 01 file (saved as a CSV export from XLSX file)
data_start <- read_csv("iovs-58-13-55_s01.csv")
# filter out proteins not seen in all three runs
data_no_na <- na.omit(data_start)
# fix the column headers
col_headers <- colnames(data_no_na)
col_headers <- str_replace(col_headers, " {2,3}", " ")
col_headers <- str_replace(col_headers, "Reporter ion intensities ", "")
colnames(data_no_na) <- col_headers
# save the annotation columns (gene symbol and protein accession) for later and remove from data frame
annotate_df <- data_no_na[1:2]
data_raw <- as.data.frame(data_no_na[3:20])
row.names(data_raw) <- annotate_df$`Protein Accession No.`
# separate the TMT data by experiment
exp1_raw <- data_raw[c(1:6)]
exp2_raw <- data_raw[c(7:12)]
exp3_raw <- data_raw[c(13:18)]
# figure out the global scaling value
target <- mean(c(colSums(exp1_raw), colSums(exp2_raw), colSums(exp3_raw)))
# do the sample loading normalization before the IRS normalization
# there is a different correction factor for each column
# seems like a loop could be used here somehow...
norm_facs <- target / colSums(exp1_raw)
exp1_sl <- sweep(exp1_raw, 2, norm_facs, FUN = "*")
norm_facs <- target / colSums(exp2_raw)
exp2_sl <- sweep(exp2_raw, 2, norm_facs, FUN = "*")
norm_facs <- target / colSums(exp3_raw)
exp3_sl <- sweep(exp3_raw, 2, norm_facs, FUN = "*")
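The comment above wishes for a loop; one way is to put the data frames in a list and `lapply` a small helper over it. This is just a sketch with made-up toy frames (`toy_exps`, `toy_target`, and `sl_norm` are stand-in names; the real code would use `exp1_raw`, `exp2_raw`, and `exp3_raw`):

```r
# sketch: sample-loading normalization over a list of experiments
# (toy_exps holds made-up 4-protein frames standing in for the real TMT data)
toy_exps <- list(
  exp1 = data.frame(A = c(100, 200, 300, 400), B = c(110, 190, 310, 390)),
  exp2 = data.frame(C = c( 90, 210, 280, 420), D = c(105, 195, 305, 395))
)

# common target: grand mean of all the column sums
toy_target <- mean(unlist(lapply(toy_exps, colSums)))

# scale each column so that its sum equals the target
sl_norm <- function(df, target) sweep(df, 2, target / colSums(df), FUN = "*")
toy_sl <- lapply(toy_exps, sl_norm, target = toy_target)

# every column sum now equals the common target
unlist(lapply(toy_sl, colSums))
```

The list-plus-`lapply` pattern scales to any number of TMT experiments without copy-pasting the sweep blocks.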
# make a pre-IRS data frame after sample loading norms
data_sl <- cbind(exp1_sl, exp2_sl, exp3_sl)
# make working frame with row sums from each frame
irs <- tibble(rowSums(exp1_sl), rowSums(exp2_sl), rowSums(exp3_sl))
colnames(irs) <- c("sum1", "sum2", "sum3")
# get the geometric average intensity for each protein
irs$average <- apply(irs, 1, function(x) exp(mean(log(x))))
# compute the scaling factor vectors
irs$fac1 <- irs$average / irs$sum1
irs$fac2 <- irs$average / irs$sum2
irs$fac3 <- irs$average / irs$sum3
# make new data frame with normalized data
data_irs <- exp1_sl * irs$fac1
data_irs <- cbind(data_irs, exp2_sl * irs$fac2)
data_irs <- cbind(data_irs, exp3_sl * irs$fac3)
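A quick way to convince yourself that IRS did what it should: after the correction, each protein's summed intensity is, by construction, the same geometric average in every experiment. A self-contained sketch with toy numbers (`toy1_sl` and `toy2_sl` are made-up stand-ins, not the real `exp1_sl`/`exp2_sl` frames):

```r
# toy post-SL frames for two experiments (3 proteins x 2 channels each)
toy1_sl <- data.frame(a = c(10, 20, 30), b = c(12, 18, 33))
toy2_sl <- data.frame(c = c(40, 22, 15), d = c(44, 20, 14))

# geometric mean of the per-experiment row sums, protein by protein
geo_avg <- exp(rowMeans(log(cbind(rowSums(toy1_sl), rowSums(toy2_sl)))))

# apply the IRS factors (each protein's row is scaled within each experiment)
toy1_irs <- toy1_sl * (geo_avg / rowSums(toy1_sl))
toy2_irs <- toy2_sl * (geo_avg / rowSums(toy2_sl))

# after IRS, every protein's summed intensity agrees across experiments
all.equal(rowSums(toy1_irs), rowSums(toy2_irs))  # TRUE
```

Running the same check on the real `data_irs` frame is a cheap safeguard against indexing mistakes in the factor columns.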
We start with a data frame where the rows are the protein expression values and the columns are the biological samples. We need to get the data into an edgeR DGEList object, the container that holds the count data and the sample mapping information. We have to tell edgeR which samples belong to which groups, and when we perform the statistical testing we will specify which two groups are being compared. We will analyze the SL data without IRS first. edgeR works better if we load all 18 samples into the DGEList object and estimate dispersions on the full dataset; we will limit the pairwise comparison to the P0 versus P3 samples below.
# set up the sample mapping
group <- rep(c("E15", "E18", "P0", "P3", "P6", "P9"), 3)
# make group into factors and set the order
group <- factor(group, levels = c("E15", "E18", "P0", "P3", "P6", "P9"))
# create a DGEList object with our data
y_sl <- DGEList(counts = data_sl, group = group)
# we need to run TMM norm and estimate the dispersion
y_sl <- calcNormFactors(y_sl)
y_sl <- estimateDisp(y_sl)
y_sl$samples
plotBCV(y_sl, main = "Biological variation SL only (no IRS)")
# we need to transform the SL data with TMM factors for later plotting
sl_tmm <- calcNormFactors(data_sl)
data_sl_tmm <- sweep(data_sl, 2, sl_tmm, FUN = "/") # this is data after SL and TMM on original scale
We have loaded the data, performed TMM normalization, and estimated dispersion factors for the statistical modeling. Now we will perform edgeR's exact test (similar in spirit to Fisher's exact test, but adapted to overdispersed count data) between the P0 and P3 groups.
# the exact test object has columns like fold-change, CPM, and p-values
et_sl <- exactTest(y_sl, pair = c("P0", "P3"))
summary(decideTestsDGE(et_sl)) # this counts up, down, and unchanged genes (here it is proteins)
We might expect that P0 and P3 are not too different, but there probably should be some changes in expression.
# the topTags function adds the BH FDR values to an exactTest data frame. Make sure we do not change the row order!
tt_sl <- topTags(et_sl, n = Inf, sort.by = "none")
tt_sl <- tt_sl$table # tt_sl is a list. We just need the data frame table
# add the default value as a new column
tt_sl$candidate <- "no"
tt_sl[which(tt_sl$FDR <= 0.10 & tt_sl$FDR > 0.05), dim(tt_sl)[2]] <- "low"
tt_sl[which(tt_sl$FDR <= 0.05 & tt_sl$FDR > 0.01), dim(tt_sl)[2]] <- "med"
tt_sl[which(tt_sl$FDR <= 0.01), dim(tt_sl)[2]] <- "high"
tt_sl$candidate <- factor(tt_sl$candidate, levels = c("high", "med", "low", "no"))
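As an aside, the repeated `which()` assignments above can be collapsed into a single `cut()` call; a sketch on made-up FDR values (the breaks reproduce the same right-closed bins):

```r
# sketch: the same FDR binning done with cut() on toy values
fdr <- c(0.005, 0.03, 0.07, 0.5)
candidate <- cut(fdr,
                 breaks = c(0, 0.01, 0.05, 0.10, 1),
                 labels = c("high", "med", "low", "no"))
as.character(candidate)  # "high" "med" "low" "no"
```

`cut()` returns a factor with the levels already in the supplied label order, so the explicit `factor()` call is not needed.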
# what does tt_sl look like?
head(tt_sl)
Non-differentially-expressed proteins should produce a uniformly distributed (flat) background of p-values at all values from 0.0 to 1.0. Any true differential candidates will form a second distribution peaked at very small p-values. This is a good example of what the p-value distribution should not look like! The variances are so large compared to the means that the distribution is skewed.
# what does the test p-value distribution look like?
ggplot(tt_sl, aes(PValue)) +
geom_histogram(bins = 100, fill = "white", color = "black") +
geom_hline(yintercept = mean(hist(tt_sl$PValue, breaks = 100, plot = FALSE)$counts[26:100])) +
ggtitle("P0 vs P3 (SLNorm only) p-value distribution")
# for plotting results, we will use the average intensities for the 3 P0 and the 3 P3 samples
P0 <- c(3, 9, 15)
P3 <- c(4, 10, 16)
de_sl <- data.frame(rowMeans(data_sl_tmm[P0]), rowMeans(data_sl_tmm[P3]), tt_sl$candidate)
colnames(de_sl) <- c("P0", "P3", "candidate")
volcano_sl <- data.frame(log2(rowMeans(data_sl[P0])/rowMeans(data_sl[P3])), log10(tt_sl$FDR)*(-1), tt_sl$candidate)
colnames(volcano_sl) <- c("FoldChange", "FDR", "candidate")
We will make three types of plots to visualize the DE candidates: MA (mean-difference) plots, which are common in genomics; scatter plots (my favorite); and a volcano plot. For the MA plots and scatter plots, we will also separate the plots by candidate status.
The solid lines mark equal expression (the unity line) and the dotted lines mark plus/minus 2-fold changes. Since we have no candidates, all of the points are non-candidates (orange), and the separate facets look the same as the combined plot.
# start with MA plot
library(scales)
temp <- data.frame(log2((de_sl$P0 + de_sl$P3)/2), log2(de_sl$P3/de_sl$P0), de_sl$candidate)
colnames(temp) <- c("Ave", "FC", "candidate")
ggplot(temp, aes(x = Ave, y = FC)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_continuous("FC (P3 / P0)") +
scale_x_continuous("Ave_intensity") +
ggtitle("Without IRS P0 vs P3 (MA plot)") +
geom_hline(yintercept = 0.0, color = "black") + # one-to-one line
geom_hline(yintercept = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_hline(yintercept = -1.0, color = "black", linetype = "dotted") # 2-fold down
# make separate MA plots
ggplot(temp, aes(x = Ave, y = FC)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_continuous("FC (P3 / P0)") +
scale_x_continuous("Ave_intensity") +
geom_hline(yintercept = 0.0, color = "black") + # one-to-one line
geom_hline(yintercept = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_hline(yintercept = -1.0, color = "black", linetype = "dotted") + # 2-fold down
facet_wrap(~ candidate) +
ggtitle("Without IRS, separated by candidate (MA plots)")
# make the combined candidate correlation plot
ggplot(de_sl, aes(x = P0, y = P3)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_log10() +
scale_x_log10() +
ggtitle("Without IRS P0 vs P3") +
geom_abline(intercept = 0.0, slope = 1.0, color = "black") + # one-to-one line
geom_abline(intercept = 0.301, slope = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_abline(intercept = -0.301, slope = 1.0, color = "black", linetype = "dotted") # 2-fold down
# make separate correlation plots
ggplot(de_sl, aes(x = P0, y = P3)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_log10() +
scale_x_log10() +
geom_abline(intercept = 0.0, slope = 1.0, color = "black") + # one-to-one line
geom_abline(intercept = 0.301, slope = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_abline(intercept = -0.301, slope = 1.0, color = "black", linetype = "dotted") + # 2-fold down
facet_wrap(~ candidate) +
ggtitle("Without IRS, separated by candidate")
# make a volcano plot
ggplot(volcano_sl, aes(x = FoldChange, y = FDR)) +
geom_point(aes(color = candidate, shape = candidate)) +
xlab("Fold-Change (Log2)") +
ylab("-Log10 FDR") +
ggtitle("Without IRS Volcano Plot")
# create a DGEList object with the IRS data
y_irs <- DGEList(counts = data_irs, group = group)
# we need to normalize and estimate the dispersion terms (global and local)
y_irs <- calcNormFactors(y_irs)
y_irs <- estimateDisp(y_irs)
y_irs$samples
plotBCV(y_irs, main = "Biological variation with IRS")
# we need to transform the IRS data with TMM factors for later plotting
irs_tmm <- calcNormFactors(data_irs)
data_irs_tmm <- sweep(data_irs, 2, irs_tmm, FUN = "/") # this is data after IRS and TMM on original scale
# the exact test object has columns like fold-change, CPM, and p-values
et_irs <- exactTest(y_irs, pair = c("P0", "P3"))
summary(decideTestsDGE(et_irs)) # this counts up, down, and unchanged genes
# the topTags function adds the BH FDR values to an exactTest data frame. Make sure we do not change the row order!
tt_irs <- topTags(et_irs, n = Inf, sort.by = "none")
tt_irs <- tt_irs$table # tt_irs is a list. We just need the data frame table
# add the default value as a new column
tt_irs$candidate <- "no"
tt_irs[which(tt_irs$FDR <= 0.10 & tt_irs$FDR > 0.05), dim(tt_irs)[2]] <- "low"
tt_irs[which(tt_irs$FDR <= 0.05 & tt_irs$FDR > 0.01), dim(tt_irs)[2]] <- "med"
tt_irs[which(tt_irs$FDR <= 0.01), dim(tt_irs)[2]] <- "high"
tt_irs$candidate <- factor(tt_irs$candidate, levels = c("high", "med", "low", "no"))
# what does tt_irs look like?
head(tt_irs)
# what does the test p-value distribution look like?
ggplot(tt_irs, aes(PValue)) +
geom_histogram(bins = 100, fill = "white", color = "black") +
geom_hline(yintercept = mean(hist(tt_irs$PValue, breaks = 100, plot = FALSE)$counts[26:100])) +
ggtitle("P0 vs P3 (after IRS) p-value distribution")
The solid lines mark equal expression (the unity line) and the dotted lines mark plus/minus 2-fold changes. Proteins with FDR values less than 0.01 are "high" significance candidates (orange), proteins with FDR between 0.05 and 0.01 are "medium" (sea green), proteins with FDR between 0.10 and 0.05 are "low" (teal), and proteins with FDR greater than 0.10 are non-candidates ("no", in purple).
# for plotting results, we will use the average intensities for the P0 and the P3 samples
de_irs <- data.frame(rowMeans(data_irs_tmm[P0]), rowMeans(data_irs_tmm[P3]), tt_irs$candidate)
colnames(de_irs) <- c("P0", "P3", "candidate")
volcano_irs <- data.frame(-1*tt_irs$logFC, -1*log10(tt_irs$FDR), tt_irs$candidate)
colnames(volcano_irs) <- c("FoldChange", "FDR", "candidate")
# start with MA plot
temp <- data.frame(log2((de_irs$P0 + de_irs$P3)/2), log2(de_irs$P3/de_irs$P0), de_irs$candidate)
colnames(temp) <- c("Ave", "FC", "candidate")
ggplot(temp, aes(x = Ave, y = FC)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_continuous("FC (P3 / P0)") +
scale_x_continuous("Ave_intensity") +
ggtitle("After IRS P0 vs P3 (MA plot)") +
geom_hline(yintercept = 0.0, color = "black") + # one-to-one line
geom_hline(yintercept = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_hline(yintercept = -1.0, color = "black", linetype = "dotted") # 2-fold down
# make separate MA plots
ggplot(temp, aes(x = Ave, y = FC)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_continuous("FC (P3 / P0)") +
scale_x_continuous("Ave_intensity") +
geom_hline(yintercept = 0.0, color = "black") + # one-to-one line
geom_hline(yintercept = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_hline(yintercept = -1.0, color = "black", linetype = "dotted") + # 2-fold down
facet_wrap(~ candidate) +
ggtitle("After IRS, separated by candidate (MA plots)")
# make the combined candidate correlation plot
ggplot(de_irs, aes(x = P0, y = P3)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_log10() +
scale_x_log10() +
ggtitle("After IRS P0 vs P3") +
geom_abline(intercept = 0.0, slope = 1.0, color = "black") + # one-to-one line
geom_abline(intercept = 0.301, slope = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_abline(intercept = -0.301, slope = 1.0, color = "black", linetype = "dotted") # 2-fold down
# make separate correlation plots
ggplot(de_irs, aes(x = P0, y = P3)) +
geom_point(aes(color = candidate, shape = candidate)) +
scale_y_log10() +
scale_x_log10() +
geom_abline(intercept = 0.0, slope = 1.0, color = "black") + # one-to-one line
geom_abline(intercept = 0.301, slope = 1.0, color = "black", linetype = "dotted") + # 2-fold up
geom_abline(intercept = -0.301, slope = 1.0, color = "black", linetype = "dotted") + # 2-fold down
facet_wrap(~ candidate) +
ggtitle("After IRS, separated by candidate")
# make a volcano plot
ggplot(volcano_irs, aes(x = FoldChange, y = FDR)) +
geom_point(aes(color = candidate, shape = candidate)) +
xlab("Fold-Change (Log2)") +
ylab("-Log10 FDR") +
ggtitle("After IRS Volcano Plot")
Even though the time delta is just 3 days, we should still have more expression of major lens proteins at P3 compared to P0. We do indeed see some up-regulated proteins starting to emerge.
To finish, we will look at the data in basic correlation plots without candidate highlighting, adding marginal intensity histograms.
library(ggExtra)
# add marginal distribution histograms to basic correlation plot (good starting point)
ggplot()
corr_plot <- ggplot(de_sl, aes(x = log10(P0), y = log10(P3))) +
geom_point() + ggtitle("Before IRS")
ggMarginal(corr_plot, type = "histogram")
ggplot()
corr_plot <- ggplot(de_irs, aes(x = log10(P0), y = log10(P3))) +
geom_point() + ggtitle("After IRS")
ggMarginal(corr_plot, type = "histogram")
As we saw in Part 2, the average values across TMT experiments do not depend too much on whether the IRS method is done or not. The variance, a key component in the statistical testing, is dramatically different with and without IRS.
Reference standards are truly needed in TMT experiments. Either IRS normalization or ratios to a common reference channel are required to remove the random MS2 sampling effect. Although some data summary visualizations suggest that standard normalization methods are comparable to the IRS method, those methods completely fail to prepare the data for statistical testing: the high variance from random MS2 sampling simply kills the tests.
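To make the last point concrete, here is a small simulation (made-up numbers and variable names, not the lens data) of how random MS2 sampling inflates the per-protein CVs across TMT experiments, and how an IRS-style correction removes that component:

```r
# sketch: random MS2 sampling vs an IRS-style correction (simulated data)
set.seed(1)
n <- 1000
true_prot <- runif(n, 1e4, 1e6)                 # true protein abundances
# each of 3 "batches" gets its own random per-protein sampling factor
fac <- replicate(3, exp(rnorm(n, sd = 0.5)))
# two channels per batch share the batch's sampling factor, plus small biological noise
make_batch <- function(f) sapply(1:2, function(i) true_prot * f * exp(rnorm(n, sd = 0.05)))
batches <- lapply(1:3, function(b) make_batch(fac[, b]))

all6 <- do.call(cbind, batches)
cv <- function(m) apply(m, 1, sd) / rowMeans(m)
median(cv(all6))   # large: the per-batch sampling factors dominate

# IRS-style correction: scale each batch to the geometric mean of the references
refs <- sapply(batches, rowMeans)               # per-batch reference values
geo <- exp(rowMeans(log(refs)))                 # geometric mean across batches
irs <- do.call(cbind, lapply(1:3, function(b) batches[[b]] * (geo / refs[, b])))
median(cv(irs))    # small: only the biological noise remains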
# log the R session for records
sessionInfo()