This notebook reads the quantitative summary file created by the Mascot Daemon and shows some alternative visualizations, normalization, and differential expression testing. The data was used in a recent blog where the new quantitative summary functionality was explored. I also did a full re-analysis with my pipeline and have a PAW notebook for the IPG 3.7 to 4.9 data to compare to this notebook. I suggest opening both notebook files in side-by-side browser windows. The notebooks are very similar, and you can see the underlying data similarities and differences as you scroll down the notebooks.
The data is from the PXD004163 PRIDE archive. The U1810 cell line was from non-small cell lung cancer. The cells were treated with several different microRNAs. The miRNAs were either non-targeting controls (n=3), miR-372-3p mimic (n=3), miR-191-5p mimic (n=2), or miR-519c-3p mimics (n=2). Extracted proteins were digested with trypsin in a FASP protocol. The samples were labeled with tandem mass tags (TMT) 10-plex and analyzed in a large-scale isoelectric point fractionation. One separation was pH 3-10 and the other was pH 3.7-4.9. Each separation produced 72 fractions that were run with a C18 reverse-phase column, electrospray ionization, and DDA mass spectrometry using a Thermo Q Exactive instrument. MS1 scans had a resolution of 70K and the top-5 MS2 scans had a resolution of 35K (which might be a little low for fully resolving the N- and C- forms of the TMT tags). Other instrument settings seemed okay for a Q Exactive doing TMT.
Sample Key:
Channel | Sample Key | Simplified Key |
---|---|---|
126C | siCtrl A | A_Ctrl |
127N | miR 191 B | B_191 |
127C | miR 372 A | A_372 |
128N | miR 519c A | A_519c |
128C | siCtrl B | B_Ctrl |
129N | miR 372 B | B_372 |
129C | miR 519c B | B_519c |
130N | siCtrl C | C_Ctrl |
130C | miR 191 A | A_191 |
131N | miR 372 C | C_372 |
The data is described in this Oncogene paper, and more details on the samples are in the PRIDE description. The Matrix Science blog has a link to download the quantitative summary tab-delimited file. That PXD004163.txt file for the IPG 3-10 range data was the starting point for this analysis. The summary has protein-aggregated reporter ion quantities where each row is a protein or protein group, and the columns at the right of the table are the quantities for the individual channels. There is also a column to flag possible contaminant proteins. There are some additional human keratins that should probably be added to the contaminants.
The column totals from the PAW analysis (averaged) were 12,099,627,853. The column total average in the Mascot quantitative summary was 67,842,137. The protein level summary values from Mascot are reporter ion peak areas rather than peak heights. The values are 178 times smaller on average. The PAW processing uses reporter ion peak heights. Like most TMT datasets, there are not many missing data values. Like all data, the "missingness" is protein abundance dependent. The lowest abundance proteins have the missing data. I did some processing of the PXD004163.txt file in Excel to exclude contaminants and rank the proteins by decreasing average reporter ion value. A value of 5.0 looked like a good lower abundance cutoff. That left a handful of proteins with a few zero values. Zeros were replaced with 0.5 to avoid mathematical errors in the notebook analysis.
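For reference, those Excel prep steps could be reproduced in R along these lines. This is just a sketch: the `Contaminant` flag and the channel column names passed in `tmt_cols` are hypothetical stand-ins, not the actual headers in PXD004163.txt.

```r
# Sketch of the Excel prep: drop contaminants, rank by average reporter ion
# signal, apply the 5.0 lower abundance cutoff, and replace zeros with 0.5.
# Column names here are made-up stand-ins for the real table headers.
library(dplyr)

prep_table <- function(df, tmt_cols, cutoff = 5.0, zero_sub = 0.5) {
    df %>%
        filter(!Contaminant) %>%                              # exclude flagged contaminants
        mutate(ave = rowMeans(across(all_of(tmt_cols)))) %>%  # average reporter ion value
        filter(ave >= cutoff) %>%                             # lower abundance cutoff
        arrange(desc(ave)) %>%                                # rank by decreasing average
        mutate(across(all_of(tmt_cols),
                      ~ifelse(.x == 0, zero_sub, .x))) %>%    # replace zeros
        select(-ave)
}
```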
Normalizations and statistical testing were done in edgeR using a Jupyter notebook, similar to the analysis used for the PAW pipeline data.
# library imports
library(tidyverse)
library(scales)
library(limma)
library(edgeR)
library(psych)
library(gridExtra) # for grid.arrange used in pvalue_plots
# ================== TMM normalization from DGEList object =====================
apply_tmm_factors <- function(y, color = NULL, plot = TRUE) {
    # computes the TMM normalized data from the DGEList object
    # y - DGEList object
    # returns a dataframe with normalized intensities

    # compute and print "sample loading" normalization factors
    lib_facs <- mean(y$samples$lib.size) / y$samples$lib.size
    cat("\nLibrary size factors:\n",
        sprintf("%-5s -> %f\n", colnames(y$counts), lib_facs))

    # compute and print TMM normalization factors
    tmm_facs <- 1/y$samples$norm.factors
    cat("\nTrimmed mean of M-values (TMM) factors:\n",
        sprintf("%-5s -> %f\n", colnames(y$counts), tmm_facs))

    # compute and print the final correction factors
    norm_facs <- lib_facs * tmm_facs
    cat("\nCombined (lib size and TMM) normalization factors:\n",
        sprintf("%-5s -> %f\n", colnames(y$counts), norm_facs))

    # compute the normalized data as a new data frame
    tmt_tmm <- as.data.frame(sweep(y$counts, 2, norm_facs, FUN = "*"))
    colnames(tmt_tmm) <- str_c(colnames(y$counts), "_tmm")

    # visualize results and return data frame
    if (plot == TRUE) {
        boxplot(log10(tmt_tmm), col = color, notch = TRUE, main = "TMM Normalized data")
    }
    tmt_tmm # return the data frame
}
# ================= reformat edgeR test results ================================
collect_results <- function(df, tt, x, xlab, y, ylab) {
    # Computes new columns and extracts some columns to make results frame
    # df - data in data.frame
    # tt - top tags table from edgeR test
    # x - columns for first condition
    # xlab - label for x
    # y - columns for second condition
    # ylab - label for y
    # returns a new dataframe

    # condition average vectors
    ave_x <- rowMeans(df[x])
    ave_y <- rowMeans(df[y])

    # FC, direction, candidates
    fc <- ifelse(ave_y > ave_x, (ave_y / ave_x), (-1 * ave_x / ave_y))
    direction <- ifelse(ave_y > ave_x, "up", "down")
    candidate <- cut(tt$FDR, breaks = c(-Inf, 0.01, 0.05, 0.10, 1.0),
                     labels = c("high", "med", "low", "no"))

    # make data frame
    temp <- cbind(df[c(x, y)],
                  data.frame(logFC = tt$logFC, FC = fc,
                             PValue = tt$PValue, FDR = tt$FDR,
                             ave_x = ave_x, ave_y = ave_y,
                             direction = direction, candidate = candidate,
                             Acc = tt$genes))

    # fix column headers for averages
    names(temp)[names(temp) %in% c("ave_x", "ave_y")] <- str_c("ave_", c(xlab, ylab))

    temp # return the data frame
}
# =============== p-value plots ================================================
pvalue_plots <- function(results, ylim, title) {
    # Makes p-value distribution plots
    # results - results data frame
    # ylim - ymax for expanded view
    # title - plot title
    p_plot <- ggplot(results, aes(PValue)) +
        geom_histogram(bins = 100, fill = "white", color = "black") +
        geom_hline(yintercept = mean(hist(results$PValue, breaks = 100,
                                          plot = FALSE)$counts[26:100]))

    # we will need an expanded plot
    p1 <- p_plot + ggtitle(str_c(title, " p-value distribution"))
    p2 <- p_plot + coord_cartesian(xlim = c(0, 1.0), ylim = c(0, ylim)) +
        ggtitle("p-values expanded")
    grid.arrange(p1, p2, nrow = 2) # from the gridExtra package
}
# ============= log2 fold-change distributions =================================
log2FC_plots <- function(results, range, title) {
    # Makes faceted log2FC plots by candidate
    # results - results data frame
    # range - plus/minus log2 x-axis limits
    # title - plot title
    ggplot(results, aes(x = logFC, fill = candidate)) +
        geom_histogram(binwidth = 0.1, color = "black") +
        facet_wrap(~candidate) +
        ggtitle(title) +
        coord_cartesian(xlim = c(-range, range))
}
# ========== Setup for MA and volcano plots ====================================
transform <- function(results, x, y) {
    # Makes a data frame with some transformed columns
    # results - results data frame
    # x - columns for x condition
    # y - columns for y condition
    # returns new data frame
    df <- data.frame(log10((results[x] + results[y])/2),
                     log2(results[y] / results[x]),
                     results$candidate,
                     -log10(results$FDR))
    colnames(df) <- c("A", "M", "candidate", "P")
    df # return the data frame
}
# ========== MA plots using ggplot =============================================
MA_plots <- function(results, x, y, title) {
    # makes MA-plot DE candidate ggplots
    # results - data frame with edgeR results and some condition average columns
    # x - string for x-axis column
    # y - string for y-axis column
    # title - title string to use in plots
    # returns a list of plots

    # uses transformed data
    temp <- transform(results, x, y)

    # 2-fold change lines
    ma_lines <- list(geom_hline(yintercept = 0.0, color = "black"),
                     geom_hline(yintercept = 1.0, color = "black", linetype = "dotted"),
                     geom_hline(yintercept = -1.0, color = "black", linetype = "dotted"))

    # make main MA plot
    ma <- ggplot(temp, aes(x = A, y = M)) +
        geom_point(aes(color = candidate, shape = candidate)) +
        scale_y_continuous(paste0("logFC (", y, "/", x, ")")) +
        scale_x_continuous("Ave_intensity") +
        ggtitle(title) +
        ma_lines

    # make separate MA plots
    ma_facet <- ggplot(temp, aes(x = A, y = M)) +
        geom_point(aes(color = candidate, shape = candidate)) +
        scale_y_continuous(paste0("log2 FC (", y, "/", x, ")")) +
        scale_x_continuous("log10 Ave_intensity") +
        ma_lines +
        facet_wrap(~ candidate) +
        ggtitle(str_c(title, " (separated)"))

    # make the plots visible
    print(ma)
    print(ma_facet)
}
# ========== MA plots by species using ggplot ==================================
MA_species <- function(results, x, y, title) {
    # makes MA-plot ggplots colored by species
    # (requires a "species" column; used with multi-species datasets,
    # not called in this single-species notebook)
    # results - data frame with edgeR results and some condition average columns
    # x - string for x-axis column
    # y - string for y-axis column
    # title - title string to use in plots
    # returns a list of plots

    # uses transformed data
    temp <- transform(results, x, y)

    # 2-fold change lines
    ma_lines <- list(geom_hline(yintercept = 0.0, color = "black"),
                     geom_hline(yintercept = 1.0, color = "black", linetype = "dotted"),
                     geom_hline(yintercept = -1.0, color = "black", linetype = "dotted"))

    # make main MA plot
    ma <- ggplot(temp, aes(x = A, y = M)) +
        geom_point(aes(color = species, shape = species)) +
        scale_y_continuous(paste0("logFC (", y, "/", x, ")")) +
        scale_x_continuous("Ave_intensity") +
        ggtitle(title) +
        ma_lines

    # make separate MA plots
    ma_facet <- ggplot(temp, aes(x = A, y = M)) +
        geom_point(aes(color = species, shape = species)) +
        scale_y_continuous(paste0("log2 FC (", y, "/", x, ")")) +
        scale_x_continuous("log10 Ave_intensity") +
        ma_lines +
        facet_wrap(~ species) +
        ggtitle(str_c(title, " (separated by species)"))

    # make the plots visible
    print(ma)
    print(ma_facet)
}
# ========== Scatter plots using ggplot ========================================
scatter_plots <- function(results, x, y, title) {
    # makes scatter-plot DE candidate ggplots
    # results - data frame with edgeR results and some condition average columns
    # x - string for x-axis column
    # y - string for y-axis column
    # title - title string to use in plots
    # returns a list of plots

    # 2-fold change lines
    scatter_lines <- list(geom_abline(intercept = 0.0, slope = 1.0, color = "black"),
                          geom_abline(intercept = 0.301, slope = 1.0,
                                      color = "black", linetype = "dotted"),
                          geom_abline(intercept = -0.301, slope = 1.0,
                                      color = "black", linetype = "dotted"),
                          scale_y_log10(),
                          scale_x_log10())

    # make main scatter plot
    scatter <- ggplot(results, aes_string(x, y)) +
        geom_point(aes(color = candidate, shape = candidate)) +
        ggtitle(title) +
        scatter_lines

    # make separate scatter plots
    scatter_facet <- ggplot(results, aes_string(x, y)) +
        geom_point(aes(color = candidate, shape = candidate)) +
        scatter_lines +
        facet_wrap(~ candidate) +
        ggtitle(str_c(title, " (separated)"))

    # make the plots visible
    print(scatter)
    print(scatter_facet)
}
# ========== Scatter plots by species using ggplot =============================
scatter_species <- function(results, x, y, title) {
    # makes scatter-plot ggplots colored by species
    # (requires a "species" column; used with multi-species datasets,
    # not called in this single-species notebook)
    # results - data frame with edgeR results and some condition average columns
    # x - string for x-axis column
    # y - string for y-axis column
    # title - title string to use in plots
    # returns a list of plots

    # 2-fold change lines
    scatter_lines <- list(geom_abline(intercept = 0.0, slope = 1.0, color = "black"),
                          geom_abline(intercept = 0.301, slope = 1.0,
                                      color = "black", linetype = "dotted"),
                          geom_abline(intercept = -0.301, slope = 1.0,
                                      color = "black", linetype = "dotted"),
                          scale_y_log10(),
                          scale_x_log10())

    # make main scatter plot
    scatter <- ggplot(results, aes_string(x, y)) +
        geom_point(aes(color = species, shape = species)) +
        ggtitle(title) +
        scatter_lines

    # make separate scatter plots
    scatter_facet <- ggplot(results, aes_string(x, y)) +
        geom_point(aes(color = species, shape = species)) +
        scatter_lines +
        facet_wrap(~ species) +
        ggtitle(str_c(title, " (separated by species)"))

    # make the plots visible
    print(scatter)
    print(scatter_facet)
}
# ========== Volcano plots using ggplot ========================================
volcano_plot <- function(results, x, y, title) {
    # makes a volcano plot
    # results - a data frame with edgeR results
    # x - string for the x-axis column
    # y - string for y-axis column
    # title - plot title string

    # uses transformed data
    temp <- transform(results, x, y)

    # build the plot
    ggplot(temp, aes(x = M, y = P)) +
        geom_point(aes(color = candidate, shape = candidate)) +
        xlab("log2 FC") +
        ylab("-log10 FDR") +
        ggtitle(str_c(title, " Volcano Plot"))
}
# ============== individual protein expression plots ===========================
# function to extract the identifier part of the accession
get_identifier <- function(accession) {
    identifier <- str_split(accession, "::", simplify = TRUE)
    identifier[, 2]
}

# function to set the Jupyter notebook plot dimensions
set_plot_dimensions <- function(width_choice, height_choice) {
    options(repr.plot.width = width_choice, repr.plot.height = height_choice)
}
plot_top_tags <- function(results, nleft, nright, top_tags) {
    # results should have data first, then test results (two condition summary table)
    # nleft, nright are number of data points in each condition
    # top_tags is number of up and number of down top DE candidates to plot

    # get top upregulated
    up <- results %>%
        filter(logFC >= 0) %>%
        arrange(FDR)
    up <- up[1:top_tags, ]

    # get top downregulated
    down <- results %>%
        filter(logFC < 0) %>%
        arrange(FDR)
    down <- down[1:top_tags, ]

    # pack them together and make a bar plot for each protein
    proteins <- rbind(up, down)
    color <- c(rep("red", nleft), rep("blue", nright))
    for (row_num in 1:nrow(proteins)) {
        row <- proteins[row_num, ]
        vec <- as.vector(unlist(row[1:(nleft + nright)]))
        names(vec) <- colnames(row[1:(nleft + nright)])
        title <- str_c(get_identifier(row$Acc), ", int: ", scientific(mean(vec), 2),
                       ", FDR: ", scientific(row$FDR, digits = 3),
                       ", FC: ", round(row$FC, digits = 1),
                       ", ", row$candidate)
        barplot(vec, col = color, main = title,
                cex.main = 1.0, cex.names = 0.7, cex.lab = 0.7)
    }
}
# ============== CV function ===================================================
CV <- function(df) {
    # Computes CVs of data frame rows
    # df - data frame
    # returns vector of CVs (%)
    ave <- rowMeans(df)    # compute averages
    sd <- apply(df, 1, sd) # compute standard deviations
    100 * sd / ave         # compute CVs in percent (last value gets returned)
}
The data was gently prepped in Excel to remove contaminants, decoys, and proteins without any reporter ion intensities. We set a lower abundance cutoff (average reporter signal of 5.0) to exclude a few of the lowest abundance proteins that had excessive missing data. We replaced any remaining zero values with a value of 0.5. We have an accessions column and the 10 TMT channels.
# load the prepped data and check the table
data_all <- read_tsv("edgeR_input.txt")

# save the accessions for edgeR so we can double check that results line up
accessions <- data_all$Accession

# get just the TMT columns
data_tmt <- select(data_all, -Accession)

# see how many rows of data we have
length(accessions)
We are defining the groups that will be compared explicitly and using all of the samples for variance estimates. We will put the data into a data frame, grouped by treatment. We will define some column indexes for each condition, set some colors for plotting, and see how the data cluster by treatment.
# define the groups
C <- 1:3
x372 <- 4:6
y191 <- 7:8
z519c <- 9:10
# set some colors by condition
colors = c(rep('red', length(C)), rep('blue', length(x372)),
rep('green', length(y191)), rep('black', length(z519c)))
We will load the data into edgeR data structures and call the calcNormFactors function to perform the library size and trimmed mean of M-values (TMM) normalizations. We will double check if the TMM normalization changed the clustering that we had above.
We need to use the edgeR normalization factors to produce the TMM-normalized data that the statistical testing will work with. EdgeR uses the normalization factors in its statistical modeling but does not output the normalized intensities, so we compute them with the apply_tmm_factors function.
# get the biological sample data into a DGEList object
group = c(rep("C", length(C)), rep("x372", length(x372)),
rep("y191", length(y191)), rep("z519c", length(z519c)))
y <- DGEList(counts = data_tmt, group = group, genes = accessions)
# run TMM normalization (also includes a library size factor)
y <- calcNormFactors(y)
tmt_tmm <- apply_tmm_factors(y, color = colors)
# check the clustering
plotMDS(y, col = colors, main = "all samples after TMM")
We could have compositional differences between sample groups that the TMM factors would correct for. We have a couple of channels that need some modest library size adjustments. We do not have any TMM factors that are much different from 1.0, so there are no compositional differences of note. The box plots are very similar in size and well aligned. TMM usually gets the medians (box centers) aligned well.
The CVs are a very useful metric in these large-scale experiments. Cell cultures often have lower CVs than tissue samples. CVs also differ quite a bit between TMT acquisition methods (MS2 versus SPS MS3).
# put CVs in data frames to simplify plots and summaries
cv_tmm <- data.frame(C = CV(tmt_tmm[C]), x372 = CV(tmt_tmm[x372]),
y191 = CV(tmt_tmm[y191]), z519c = CV(tmt_tmm[z519c]))
# see what the median CV values are
medians <- apply(cv_tmm, 2, FUN = median)
print("Final median CVs by condition (%)")
round(medians, 2)
# also look at CV distributions
boxplot(cv_tmm, col = c("red", "blue", "green", "black"), notch = TRUE,
main = "TMM Normalized data")
Condition | This data | PAW processing |
---|---|---|
Control | 7.0% | 6.1% |
miR-372-3p | 7.9% | 6.9% |
miR-191-5p | 4.5% | 4.2% |
miR-519c-3p | 4.6% | 3.8% |
CVs were smaller from the PAW processing.
One of the most powerful features of edgeR (and limma) is computing variances across larger numbers of genes (proteins) to get more robust variance estimates for small replicate experiments. Here, we have 10 samples across all conditions to improve the variance estimates and reduce false positive differential expression (DE) candidates. The edgeR estimateDisp function does all of this, and a visualization function lets us check the result.
We loaded the data into the DGEList object y a few cells above and did the normalization step. We need to estimate the dispersion parameters before we can do the actual statistical testing. This only needs to be done once. Each exact test will take two conditions and compare them using the normalization factors and dispersion estimates saved in y.
# compute dispersions and plot BCV
y <- estimateDisp(y)
plotBCV(y, main = "BCV plot of IRS/TMM normalized data")
The trended dispersion plot from the PAW processing looks "tighter".
We will specify the pair of interest for the exact test in edgeR and use the experiment-wide dispersion. The decideTestsDGE call will tell us how many up and down regulated candidates we have at an FDR of 0.10. The topTags call does the Benjamini-Hochberg multiple testing correction. We save the test results in tt. We use a handy MD (mean-difference) plotting function from limma to visualize the DE candidates, and then check the p-value distribution.
# the exact test object has columns like fold-change, CPM, and p-values
et <- exactTest(y, pair = c("C", "x372"))
# this counts up, down, and unchanged genes (proteins) at 10% FDR
summary(decideTestsDGE(et, p.value = 0.10))
# the topTags function adds the BH FDR values to an exactTest data frame
# make sure we do not change the row order (the sort.by parameter)!
topTags(et)$table
tt <- topTags(et, n = Inf, sort.by = "none")
# make an MD plot (like MA plot)
plotMD(et, p.value = 0.10)
abline(h = c(-1, 1), col = "black")
# check the p-value distribution
ggplot(tt$table, aes(PValue)) +
geom_histogram(bins = 100, fill = "white", color = "black") +
geom_hline(yintercept = mean(hist(et$table$PValue, breaks = 100,
plot = FALSE)$counts[26:100])) +
ggtitle("Control versus 372 p-value distribution")
We have 533 candidates; however, the fold-changes are mostly much less than 2-fold. The top tags have quite small p-values which must be due to small variances. The p-value distribution looks typical when we have true DE candidates - a low p-value spike and a flat background distribution.
We had 785 candidates in the PAW processing. The non-changing proteins (the black points in the MD plot) seemed tighter as well.
We will add the statistical testing results (logFC, p-values, and FDR), condition intensity averages, and candidate status to the results data frame (which has the TMM-normalized data) and also accumulate all three comparisons into all_results.
We will make MA plots, scatter plots, and a volcano plot using ggplot2. We will also look at the distribution of log2 expression ratios separated by differential expression (DE) category. The Benjamini-Hochberg corrected edgeR p-values are used to categorize the DE candidates: no > 0.10 > low > 0.05 > med > 0.01 > high.
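The threshold notation above corresponds to the cut() call inside collect_results; as a small standalone sketch with made-up FDR values:

```r
# FDR values fall into candidate categories via cut() with the intervals
# (-Inf, 0.01], (0.01, 0.05], (0.05, 0.10], and (0.10, 1.0]
fdr <- c(0.005, 0.03, 0.08, 0.5)
candidate <- cut(fdr, breaks = c(-Inf, 0.01, 0.05, 0.10, 1.0),
                 labels = c("high", "med", "low", "no"))
# candidate is now the factor: high, med, low, no
```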
# get the results summary
results <- collect_results(tmt_tmm, tt$table, C, "C", x372, "x372")
# make column names unique by adding comparison (for the accumulated frame)
results_temp <- results
colnames(results_temp) <- str_c(colnames(results), "_C_372")
# accumulate the testing results
all_results <- results_temp
# see how many candidates by category
results %>% count(candidate)
# plot log2 fold-changes by category
ggplot(results, aes(x = logFC, fill = candidate)) +
geom_histogram(binwidth=0.1, color = "black") +
facet_wrap(~candidate) +
coord_cartesian(xlim = c(-3, 3)) +
ggtitle("Control vs 372 logFC distributions by candidate")
We have many comparisons to visualize, so we will use functions to generate a series of plots. We will make: an MA plot with candidates highlighted by color, faceted MA plots separated by candidate status, a scatter plot with candidates highlighted by color, faceted scatter plots separated by candidate status, and a volcano plot with candidates highlighted by color. The solid black lines in the MA and scatter plots are the 1-to-1 lines; the dotted lines are 2-fold change lines.
# make MA plots
MA_plots(results, "ave_C", "ave_x372", "Control versus 372")
# make scatter plots
scatter_plots(results, "ave_C", "ave_x372", "Control versus 372")
# make a volcano plot
volcano_plot(results, "ave_C", "ave_x372", "Control versus 372")
Although the DE candidates have rather small expression differences, they do seem to have over and under expression that differs from the unchanged background. We do seem to have a good balance between up and down regulation.
The data from the PAW processing looks qualitatively similar. There are more candidates and the purple non-changing proteins are tighter to the 1-to-1 lines.
We will rank the DE candidates by p-values (FDR) and look at the top 10 in each expression direction.
# look at the top candidates
set_plot_dimensions(6, 3.5)
plot_top_tags(results, 3, 3, 10)
set_plot_dimensions(7, 7)
# the exact test object has columns like fold-change, CPM, and p-values
et <- exactTest(y, pair = c("C", "y191"))
# this counts up, down, and unchanged genes (proteins) at 10% FDR
summary(decideTestsDGE(et, p.value = 0.10))
# the topTags function adds the BH FDR values to an exactTest data frame
# make sure we do not change the row order (the sort.by parameter)!
topTags(et)$table
tt <- topTags(et, n = Inf, sort.by = "none")
# make an MD plot (like MA plot)
plotMD(et, p.value = 0.10)
abline(h = c(-1, 1), col = "black") # 2-fold change lines
# check the p-value distribution
ggplot(tt$table, aes(PValue)) +
geom_histogram(bins = 100, fill = "white", color = "black") +
geom_hline(yintercept = mean(hist(et$table$PValue, breaks = 100,
plot = FALSE)$counts[26:100])) +
ggtitle("Control vs 191 p-value distribution")
We have 138 candidates. The other plots look pretty okay. The PAW processing had 210 candidates and the p-value distribution looked better.
We will add the statistical testing results (logFC, p-values, and FDR), condition intensity averages, and candidate status to the results data frame (which has the TMM-normalized data) and also accumulate all three comparisons into all_results.
# get the results summary
results <- collect_results(tmt_tmm, tt$table, C, "C", y191, "y191")
# make column names unique by adding comparison
results_temp <- results
colnames(results_temp) <- str_c(colnames(results), "_C_191")
# accumulate the testing results
all_results <- cbind(all_results, results_temp)
# see how many candidates by category
results %>% count(candidate)
# plot log2 fold-changes by category
ggplot(results, aes(x = logFC, fill = candidate)) +
geom_histogram(binwidth=0.1, color = "black") +
facet_wrap(~candidate) +
coord_cartesian(xlim = c(-3, 3)) +
ggtitle("Control versus 191 logFC distributions by candidate")
# make MA plots
MA_plots(results, "ave_C", "ave_y191", "Control versus 191")
# make scatter plots
scatter_plots(results, "ave_C", "ave_y191", "Control versus 191")
# make a volcano plot
volcano_plot(results, "ave_C", "ave_y191", "Control versus 191")
We have fewer DE candidates, but the plots are all pretty similar to what we saw in the first comparison. We do have some larger fold-changes for over-expressed proteins. The plots for this comparison are also quite similar to what we saw with the PAW/Comet analysis.
# look at the top candidates
set_plot_dimensions(6, 3.5)
plot_top_tags(results, 3, 2, 10)
set_plot_dimensions(7, 7)
This is a 3 versus 2 comparison, but we are using the trended variance computed from all 10 channels. This should stabilize the p-values and be more conservative.
Compare the non-targeting controls to the miR-519c-3p mimics. This is another 3 by 2 comparison.
# the exact test object has columns like fold-change, CPM, and p-values
et <- exactTest(y, pair = c("C", "z519c"))
# this counts up, down, and unchanged genes (proteins) at 10% FDR
summary(decideTestsDGE(et, p.value = 0.10))
# the topTags function adds the BH FDR values to an exactTest data frame
# make sure we do not change the row order (the sort.by parameter)!
topTags(et)$table
tt <- topTags(et, n = Inf, sort.by = "none")
# make an MD plot (like MA plot)
plotMD(et, p.value = 0.10)
abline(h = c(-1, 1), col = "black")
# check the p-value distribution
ggplot(tt$table, aes(PValue)) +
geom_histogram(bins = 100, fill = "white", color = "black") +
geom_hline(yintercept = mean(hist(et$table$PValue, breaks = 100,
plot = FALSE)$counts[26:100])) +
ggtitle("Control versus 519c p-value distribution")
We have more DE candidates for this comparison (734). The MDS cluster plots and CVs indicated that the two 519c samples were very similar. We are still using the trended variance from all 10 channels. We might have an even larger number of positive testing results if we allowed the lower 519c variance to drive candidates. The increased number of candidates suggests that we might have some larger fold-changes here.
The PAW processing had over 1,100 candidates.
We will add the statistical testing results (logFC, p-values, and FDR), condition intensity averages, and candidate status to the results data frame (which has the TMM-normalized data) and also accumulate all three comparisons into all_results.
# get the results summary
results <- collect_results(tmt_tmm, tt$table, C, "C", z519c, "z519c")
# make column names unique by adding comparison
results_temp <- results
colnames(results_temp) <- str_c(colnames(results), "_C_519")
# accumulate the testing results
all_results <- cbind(all_results, results_temp)
# see how many candidates by category
results %>% count(candidate)
# plot log2 fold-changes by category
ggplot(results, aes(x = logFC, fill = candidate)) +
geom_histogram(binwidth=0.1, color = "black") +
facet_wrap(~candidate) +
coord_cartesian(xlim = c(-3, 3)) +
ggtitle("Control versus 519c logFC distributions by candidate")
# make MA plots
MA_plots(results, "ave_C", "ave_z519c", "Control versus 519c")
# make scatter plots
scatter_plots(results, "ave_C", "ave_z519c", "Control versus 519c")
# make a volcano plot
volcano_plot(results, "ave_C", "ave_z519c", "Control versus 519c")
We might have some more proteins with slightly larger fold-changes, but it is not anything very dramatic. The compressed intensities in MS2 TMT data really alter the statistical testing. Because of the low variance, even modest increases in the differences between means have a strong influence on the testing results.
# look at the top candidates
set_plot_dimensions(6, 3.5)
plot_top_tags(results, 3, 2, 10)
set_plot_dimensions(7, 7)
The individual protein expression plots do seem to have larger expression changes for the top tags. We still do not get much more than 2-fold changes for any of the proteins.
Notebooks let us provide more context on the analysis than a plain R script can provide. We can easily add quality control visualizations, normalization checks, and multiple types of expression visualizations. We are not really limited, although too many redundant data views may dilute the storytelling a little. Keep in mind that the plots you find compelling may be the ones that another reader finds appalling. Censoring a data analysis story down to your favorite visualizations is probably a worse choice than having a notebook be a bit longer. Navigation aids can help.
Another notebook advantage is to compare different data processing results. The Mascot quantitative summary data for the protein-level reporter ion values are different from the PAW processing. The PAW processing uses summation for the protein aggregation. I am not sure what is being done in Mascot Daemon. The PAW processing produced lower median CVs and considerably more differential expression candidates. Many other aspects of the data characteristics seemed quite similar. How shotgun quantitative data is processed is pretty critical and there are many choices along the way. This blog on how we put Humpty Dumpty back together discusses some of the details.
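Summation aggregation itself is simple to express; a minimal sketch (with invented peptide intensities and hypothetical column names) of rolling peptide-level reporter ion intensities up to protein totals:

```r
# Roll up peptide-level reporter ion intensities to protein-level totals
# by summing within each protein (a PAW-style summation aggregation).
# The data frame below is invented for illustration.
library(dplyr)

peptides <- data.frame(protein = c("P1", "P1", "P2"),
                       ch_126  = c(100, 50, 30),
                       ch_127N = c(80, 40, 60))

protein_totals <- peptides %>%
    group_by(protein) %>%
    summarize(across(starts_with("ch_"), sum), .groups = "drop")
```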
What about the biology? I don't know. I do not do cancer research. The paper did not present any results from the second or third comparisons (only the controls and the miR-372-3p mimics). Performing all comparisons was easy to do here. Given the way that MS2 reporter ion data seems "squeezed" towards the diagonal, false negatives might be more of a concern with this type of TMT data.
One final note. Reporter ion intensities from MS2 scans do not have much expression difference dynamic range. The bench protocols for these experiments are complicated and hard to do perfectly. There should be some reasonable sample-to-sample variability. MS2 TMT data is so non-variable that it has to be an artifact. I wrote a blog about TMT ratio distortions. The more MS2 TMT data I look at, the more convinced I am that the data is much more biased than what the newer SPS MS3 methods produce.
Write the all_results frame to a TSV file.
write.table(all_results, "edgeR_Mascot_results.txt", sep = "\t",
            row.names = FALSE, na = " ")
sessionInfo()