tl;dr The default transcript filter function parameters in Sleuth are suitable for a single-factor, two-level RNASeq experiment. If you are running a two-factor experiment (e.g. knockout vs. wild type, plus control vs. treatment), or an experiment with multiple factor levels (e.g. a time series), you should probably use a filter function such as the one described below. You will retain more true-positive differentially expressed genes without generating many new false positives.
________________________________
I've been a heavy user of Kallisto and Sleuth for RNASeq analysis for some time, and was used to seeing output similar to the following when loading up a dataset:
> so <- sleuth_prep(meta, ~ condition+cell_line+condition:cell_line)
reading in kallisto results
............
normalizing est_counts
26036 targets passed the filter
normalizing tpm
merging in metadata
normalizing bootstrap samples
summarizing bootstraps
I hadn't given much consideration to how the "filter" statistic was generated until I had a time-series experiment with 5 time points, where we had a priori knowledge that a particular transcript was activated only at the last two time points. This transcript did not show up in the Sleuth analysis with any p-value, let alone a significant one. A few days later, in a two-factor experiment (growth condition and cell line), some known transcripts were missing as well.
The default filtering function in Sleuth (called basic_filter) requires a transcript to have at least 5 mapped reads in at least 47% of the samples. This reduces spurious identification of differential expression among near-zero-abundance transcripts, while still retaining transcripts that are moderately but consistently expressed in only one of two factor levels: each level covers 50% of the samples, just above the 47% cutoff, so expressed-in-control-only or expressed-in-treatment-only transcripts survive.
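For reference, basic_filter boils down to a one-liner over a transcript's vector of estimated counts across samples. This is paraphrased from the sleuth source, so check your installed version for the exact definition:
> basic_filter <- function(row, min_reads = 5, min_prop = 0.47){
    # keep the transcript if at least min_prop of the samples
    # have at least min_reads estimated counts
    mean(row >= min_reads) >= min_prop
  }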
If I have two two-level factors in my RNASeq experiment (with the typical 3 replicates per group, that's 12 samples), this filter eliminates transcripts expressed only in the samples captured by the interaction term, such as condition:cell_line in the above example. Here's the metadata:
> meta
        sample                 path condition cell_line
1    NSC_Ctl_1   NSC_Ctl_1.kallisto       NSC       Ctl
2    NSC_Ctl_2   NSC_Ctl_2.kallisto       NSC       Ctl
3    NSC_Ctl_3   NSC_Ctl_3.kallisto       NSC       Ctl
4     NSC_KO_1    NSC_KO_1.kallisto       NSC        KO
5     NSC_KO_2    NSC_KO_2.kallisto       NSC        KO
6     NSC_KO_3    NSC_KO_3.kallisto       NSC        KO
7  Odiff_Ctl_1 Odiff_Ctl_1.kallisto        OD       Ctl
8  Odiff_Ctl_2 Odiff_Ctl_2.kallisto        OD       Ctl
9  Odiff_Ctl_3 Odiff_Ctl_3.kallisto        OD       Ctl
10  Odiff_KO_1  Odiff_KO_1.kallisto        OD        KO
11  Odiff_KO_2  Odiff_KO_2.kallisto        OD        KO
12  Odiff_KO_3  Odiff_KO_3.kallisto        OD        KO
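To make the problem concrete, here's what the default filter does with a hypothetical transcript expressed only in the three OD:KO replicates (samples 10-12 above; the counts are invented for illustration):
> row <- c(0, 0, 0,  0, 0, 0,  0, 0, 0,  20, 25, 18)
> mean(row >= 5)
[1] 0.25
> basic_filter(row)
[1] FALSE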
As that quick check shows, the condition:cell_line term gleans information from only 3 (25%) of the samples (i.e. those that are OD:KO). Let's change the filter to only require >=5 reads in 25% of the samples...
> so <- sleuth_prep(meta, ~ condition+cell_line+condition:cell_line,
filter_fun=function(x){basic_filter(x, 5, 0.25)})
reading in kallisto results
............
normalizing est_counts
36320 targets passed the filter
normalizing tpm
merging in metadata
normalizing bootstrap samples
summarizing bootstraps
Whoa! We just increased the number of transcripts passing the filter by nearly 40% (26,036 to 36,320). That invites a flood of false positives into the differential expression results and, just as importantly, degrades the q-values of the genes from our original, default-filtered analysis. A smarter filter would require that, for at least one level of some factor, 100% of the samples at that level have at least 5 reads, i.e. keep any transcript that all replicate samples of some factor level moderately express.
[Puts on thinking cap, writes several failed attempts, then...]
> design_filter <- function(design, row, min_reads=5, min_prop=0.47){
    # Keep the transcript if any column of the design matrix has a level
    # whose samples ALL have >= min_reads, or if the default filter would
    # have kept it anyway.
    sum(apply(design, 2, function(x){
      y <- as.factor(x)
      # proportion of samples with >= min_reads, per level of this column
      prop_per_level <- tapply(row, y, function(f){sum(f >= min_reads)})/
                        tapply(row, y, length)
      max(prop_per_level) == 1 || basic_filter(row, min_reads, min_prop)
    })) > 0
  }
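Before wiring it into Sleuth, a quick sanity check with a design matrix built by hand via model.matrix(), using the hypothetical OD:KO-only counts from earlier:
> X <- model.matrix(~ condition+cell_line+condition:cell_line, meta)
> row <- c(0, 0, 0,  0, 0, 0,  0, 0, 0,  20, 25, 18) # OD:KO-only, as before
> basic_filter(row)
[1] FALSE
> design_filter(X, row)
[1] TRUE
The interaction column of X splits the samples into OD:KO (3 samples) and everything else (9 samples); all three OD:KO replicates have >=5 reads, so the maximum per-level proportion for that column is 1 and the transcript is kept.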
To pass in the design matrix that my new filter requires, I can just reuse the one generated by my first call to sleuth_prep(), rather than constructing it myself. That's probably not a bad idea in any case, since it lets us directly compare how many transcripts pass the new filter versus the default one.
> so_redux <- sleuth_prep(meta, ~cell_line*condition,
filter_fun=function(x){design_filter(so$design,x)})
reading in kallisto results
............
normalizing est_counts
26370 targets passed the filter
normalizing tpm
merging in metadata
normalizing bootstrap samples
summarizing bootstraps
Although for this dataset the new filter can also be satisfied by just ~25% of the samples showing moderate expression, the added constraint that those 25% must cover all replicates of some factor level means adding just 334 transcripts to the analysis instead of more than 10,000. This seems much more reasonable to me, and my known true positive transcript suddenly appeared. #winning
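If you want to see exactly which transcripts the new filter admitted, the sleuth object records the passing targets; in the versions I've used this lives in filter_df (adjust the field name if your version differs):
> newly_admitted <- setdiff(so_redux$filter_df$target_id, so$filter_df$target_id)
> length(newly_admitted)
[1] 334
Because design_filter() falls back on basic_filter(), everything that passed the default filter also passes the new one, so the difference is exactly the 334 added transcripts.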
Note that design_filter() should work for any set of nominal (categorical) factors, but not for quantitative covariates. If your model includes quantitative terms, pass in only the factor-derived columns of the design matrix by column-slicing it in the so_redux code above.
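For example, if the metadata also carried a quantitative covariate (a hypothetical rin column for RNA integrity, say), you could slice its column out of the design matrix before filtering:
> X <- model.matrix(~ condition*cell_line + rin, meta)
> X_nominal <- X[, colnames(X) != "rin"] # keep only the factor-derived columns
> so2 <- sleuth_prep(meta, ~ condition*cell_line + rin,
    filter_fun=function(x){design_filter(X_nominal, x)})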