Biases in CNV detection:
- GC content
- exon capture and amplification efficiency
- latent systemic artifacts
Steps:
- Start from sorted and indexed BAM files (35) produced by the same pipeline; reads are filtered on mappability, exon size and a minimum-coverage cutoff, and depth of coverage is then computed.
- Next, normalize the depth of coverage with a log-linear model that decomposes the depth-of-coverage matrix into effects due to GC content, exon capture and amplification efficiency, and other latent systemic factors. Normalization produces a "control coverage" for each exon of each sample, i.e. the coverage expected in the absence of a CNV, against which the observed coverage will be compared.
- Then, the observed coverage for each exon and each sample is compared to the corresponding estimated control coverage in a Poisson likelihood-based segmentation algorithm, which returns a segmentation of the genome into regions of homogeneous copy number.
- Finally, a direct estimate of the relative copy number, in terms of fold change from the expected control value, can be used for genotyping.
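As a toy illustration of the last step, the fold change of observed over control coverage can be converted to an integer copy-number state by rounding (a minimal sketch; the function name and the rounding rule are illustrative, not CODEX's actual genotyping procedure):

```python
import numpy as np

def fold_change_to_copy_number(observed, control):
    """Estimate relative copy number as 2 * observed / control
    (2 = diploid) and round to the nearest integer state.

    observed, control: per-exon coverage arrays for one sample.
    """
    fold = observed / control              # 1.0 means no CNV
    cn = np.rint(2.0 * fold).astype(int)   # e.g. fold 0.5 -> CN 1
    return fold, cn

# toy example: exon 2 looks like a heterozygous deletion,
# exon 3 like a duplication
obs = np.array([100.0, 52.0, 148.0])
ctl = np.array([100.0, 100.0, 100.0])
fold, cn = fold_change_to_copy_number(obs, ctl)
print(cn)  # [2 1 3]
```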
Sample selection and target filtering
- All samples should come from the same capture and sequencing platform (reducing artifacts);
- Exons are filtered in four steps: (1) coverage: exons with mean depth below 20 across all samples are removed; (2) short exons (<20 bp); (3) exons that are hard to align to (mappability < 0.9); (4) extreme GC content (<20% or >80%);
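The four exon-level filters above can be sketched as a single boolean mask over the exon set (a minimal sketch; the function name and array layout are my own, not CODEX's implementation):

```python
import numpy as np

def filter_exons(coverage, exon_len, mappability, gc,
                 min_cov=20.0, min_len=20, min_mapp=0.9,
                 gc_lo=20.0, gc_hi=80.0):
    """Apply the four exon-level filters described above.

    coverage:    (n_exons, n_samples) raw depth-of-coverage matrix
    exon_len:    per-exon length in bp
    mappability: per-exon mappability score in [0, 1]
    gc:          per-exon GC percentage
    Returns a boolean mask of exons that pass all filters.
    """
    keep = coverage.mean(axis=1) >= min_cov      # (1) mean coverage >= 20
    keep &= exon_len >= min_len                  # (2) exon length >= 20 bp
    keep &= mappability >= min_mapp              # (3) mappability >= 0.9
    keep &= (gc >= gc_lo) & (gc <= gc_hi)        # (4) GC within 20-80%
    return keep
```

The mask can then be used to subset the coverage matrix before normalization.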
Read depth normalization
Due to the extremely high level of systemic bias in WES data, normalization is crucial in WES CNV calling.
CODEX’s multi-sample normalization model takes as input the WES depth of coverage, exon-wise GC content and sample-wise total number of reads.
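A sketch of the form such a Poisson log-linear model takes, assembled from the description above (the notation here is my own summary and may differ from the paper's):

```latex
Y_{ij} \sim \mathrm{Poisson}(\lambda_{ij}), \qquad
\lambda_{ij} = N_j \,\beta_i \, f(\mathrm{GC}_i)\,
\exp\!\Big(\sum_{k=1}^{K} g_{ik} h_{jk}\Big),
```

where \(Y_{ij}\) is the observed coverage of exon \(i\) in sample \(j\), \(N_j\) is the sample-wise total number of reads, \(\beta_i\) captures exon-wise capture and amplification efficiency, \(f\) is the GC-content bias function, and \(g_{ik} h_{jk}\) are the \(K\) latent Poisson factors. The fitted \(\hat{\lambda}_{ij}\) is the "control coverage" against which observed coverage is compared.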
Poisson latent factors and choice of K
Some causes of bias in CNV detection can be measured directly (e.g. GC content, mappability, exon size), but others are hard to measure directly, such as biases introduced by capture, library preparation and sequencing, or by the samples themselves; these are called latent factors.
The number of latent factors, K, is critical: if K is too large, signals from true CNVs are absorbed and masked; if K is too small, artifacts are not removed and contaminate the results.
CODEX evaluates candidate values of K with two statistics: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC).
The final choice of K is made using BIC.
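Given the fitted Poisson log-likelihood at each candidate K, choosing K by BIC amounts to penalizing the likelihood by the number of latent-factor parameters (a minimal sketch; the parameter count K * (n_exons + n_samples) is a simplification, and CODEX's exact penalty may differ):

```python
import numpy as np

def bic_choose_K(loglik, n_exons, n_samples, K_values):
    """Pick the number of latent factors K by BIC.

    loglik:   fitted Poisson log-likelihood for each candidate K
    K_values: the candidate K values, in the same order
    BIC = 2 * loglik - n_params * log(n_observations); larger is better.
    """
    loglik = np.asarray(loglik, dtype=float)
    K = np.asarray(K_values, dtype=float)
    n_params = K * (n_exons + n_samples)          # latent-factor d.o.f.
    bic = 2 * loglik - n_params * np.log(n_exons * n_samples)
    return int(np.asarray(K_values)[np.argmax(bic)])
```

Because the BIC penalty grows faster than AIC's, it tends to pick a smaller K, matching the conservative normalization mentioned below.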
Both CoNIFER and XHMM(28) use latent factor models to remove systemic bias, but their models assume continuous measurements with Gaussian noise structure, while CODEX is based on a Poisson log-linear model, which is more suitable for modeling the discrete counts in WES data, especially when there is high variance in depth of coverage between exons.
CNV detection and copy number estimation
Proper normalization sets the stage for accurate segmentation and CNV calling. For germline CNV detection in normal samples, many CNVs are short and extend over only one or two exons. In this case, simple gene- or exon-level thresholding is sufficient.
For longer CNVs and for copy number estimation in tumors where the events are expected to be large and exhibit nested structure, we propose a Poisson likelihood-based recursive segmentation algorithm.
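The core of such a segmentation is a Poisson likelihood-ratio score for each candidate segment, comparing "the segment shares a common fold change from control" against "fold change = 1 (no CNV)". Below is a toy single-pass scan (my own sketch, not CODEX's code; CODEX applies this kind of scan recursively to handle nested events):

```python
import numpy as np

def segment_score(y, mu, s, e):
    """Poisson log-likelihood ratio that exons s..e (inclusive) share
    a common fold change from control, versus fold change 1.

    y:  observed per-exon coverage; mu: control (expected) coverage.
    """
    Y, M = y[s:e + 1].sum(), mu[s:e + 1].sum()
    fold = Y / M                       # MLE of the segment fold change
    # 2 * (log-likelihood at the MLE - log-likelihood under fold = 1)
    return 2.0 * (Y * np.log(fold) - (fold - 1.0) * M)

def best_segment(y, mu):
    """Exhaustively scan for the highest-scoring segment
    (a toy O(n^2) version of the search)."""
    n = len(y)
    return max((segment_score(y, mu, s, e), s, e)
               for s in range(n) for e in range(s, n))

# toy example: exons 2-3 at half the expected coverage
y = np.array([100.0, 100.0, 50.0, 50.0, 100.0])
mu = np.full(5, 100.0)
score, s, e = best_segment(y, mu)
print(s, e)  # 2 3
```

The returned segment's fold change then feeds directly into the copy-number estimate described above.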
Discussion
The distinguishing features of CODEX compared to existing methods are:
- (i) CODEX does not require matched normal samples as controls for normalization;
- (ii) The Poisson log-linear model fits the WES count data better than SVD-based approaches;
- (iii) Dependence on GC content is modeled by a flexible non-parametric function in CODEX allowing it to capture non-linear biases;
- (iv) CODEX implements the BIC criterion for choosing the number of latent variables, which gives a conservative normalization on simulated and real data sets;
- (v) Compared to HMM-based segmentation procedures, the segmentation procedure in CODEX is completely off-the-shelf and does not require a large training set;
- (vi) CODEX estimates relative copy number, which can be converted to genotypes by thresholding, rather than returning only broad categorizations (deletion, duplication and copy-number-neutral states).
Reference: CODEX: a normalization and copy number variation detection method for whole exome sequencing.