1 Data merging: cat
This step is done before filtering because of two common scenarios: (1) the same sample was sequenced in separate (supplementary) batches, producing two or more .fastq files; (2) the same sample was sequenced in a single batch but on different lanes, again producing two or more .fastq files. My merging workflow is as follows:
① Upload the data to the server
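For example, a minimal transfer sketch (my own addition; user@server and the destination directory are placeholders for your own account and project path, and scp works just as well):
rsync -av --progress *.fq.gz user@server:/path/to/raw_data/    # copy the raw .fq.gz files to the server
After the transfer, ls on the server should list the raw files: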
ls
E150016110_L01_722_1.fq.gz  E150016110_L01_722_2.fq.gz  E150016110_L01_799_1.fq.gz  E150016110_L01_799_2.fq.gz
V350200193_L04_855_1.fq.gz  V350200193_L04_855_2.fq.gz  V350200193_L04_888_1.fq.gz  V350200193_L04_888_2.fq.gz
② Extract the file names to be merged
ls | cut -d . -f 1 | sort | uniq > test1.txt
cat test1.txt
E150016110_L01_722_1
E150016110_L01_722_2
E150016110_L01_799_1
E150016110_L01_799_2
V350200193_L04_855_1
V350200193_L04_855_2
V350200193_L04_888_1
V350200193_L04_888_2
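Before pairing anything up, a quick sanity check (my own addition, assuming the _1/_2 read-pair naming shown above) that every R1 file has a matching R2:
# Strip the trailing read number; every remaining prefix should occur exactly twice (R1 + R2).
ls *.fq.gz | sed -E 's/_[12]\.fq\.gz$//' | sort | uniq -c | awk '$1 != 2 {print "unpaired prefix:", $2}'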
③ Use Excel to line up the file names so that the files to be merged correspond one to one, then write a cat merging script as shown below (an alternative that skips Excel is sketched after step ④):
touch test.sh
vim test.sh
#!/bin/bash
cat E150016110_L01_722_1.fq.gz V350200193_L04_888_1.fq.gz > merge1_1.fq.gz;
cat E150016110_L01_722_2.fq.gz V350200193_L04_888_2.fq.gz > merge1_2.fq.gz;
cat E150016110_L01_799_1.fq.gz V350200193_L04_855_1.fq.gz > merge2_1.fq.gz;
cat E150016110_L01_799_2.fq.gz V350200193_L04_855_2.fq.gz > merge2_2.fq.gz;
④ Run the script: bash test.sh
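As an alternative to step ③, the cat script can also be generated from a plain-text pairing table instead of Excel. This is my own hedged sketch, not part of the original workflow; it assumes a hypothetical tab-separated file pairs.txt with one sample per line: output prefix, first run prefix, second run prefix (e.g. merge1<TAB>E150016110_L01_722<TAB>V350200193_L04_888).
#!/bin/bash
# Generate the cat commands from pairs.txt (hypothetical file, see above).
while IFS=$'\t' read -r out run1 run2; do
    for r in 1 2; do
        echo "cat ${run1}_${r}.fq.gz ${run2}_${r}.fq.gz > ${out}_${r}.fq.gz"
    done
done < pairs.txt > test.sh
Review the generated test.sh, then run it exactly as in step ④.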
2 Data filtering and quality control: fastp
Because the data came from BGI sequencing, I initially tried SOAPnuke, but after a lot of fiddling I found that fastp [2-3] trims adapters more accurately, on par with Trimmomatic, while being faster than Trimmomatic and second only to SOAPnuke in speed [2-3, 7]; all things considered, I chose fastp. In addition, almost everything SOAPnuke can do, fastp can do as well. As for Trimmomatic, I have not looked into it much; it is an older, Java-based tool, and I have not used it so far.
Example: fastp -i input1.fastq -o output1.fastq -I input2.fastq -O output2.fastq \
-h output.html -j output.json -q 10 -u 10 -l 150 -w 16 \
--adapter_sequence AAGTCGGAGGCCAAGCGGTCTTAGGAAGACAA \
--adapter_sequence_r2 AAGTCGGATCGTAGCCATGTCGTTCTGTGAGCCAAGGAGTTG
# -h: name of the output HTML report; -j: name of the output JSON report;
# -q: base-quality threshold; -u: maximum proportion of low-quality bases allowed per read (a read is discarded once its proportion of low-quality bases reaches this value);
# -l: reads shorter than this length are discarded; -w: number of worker threads;
# --adapter_sequence: adapter sequence for read 1 (ask your sequencing provider for it); --adapter_sequence_r2: adapter sequence for read 2
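Not part of fastp itself, but a quick paired-end sanity check I add after a run like the example above (file names taken from that example): the filtered R1 and R2 outputs should contain the same number of reads, since fastp discards both mates of a failing pair; a mismatch usually means the inputs were not a proper pair.
# Count reads (4 FASTQ lines per read) in each filtered output and compare.
r1=$(( $(wc -l < output1.fastq) / 4 ))
r2=$(( $(wc -l < output2.fastq) / 4 ))
echo "R1: ${r1} reads, R2: ${r2} reads"
[ "${r1}" -eq "${r2}" ] || echo "WARNING: read counts differ -- check the input pairing"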
Example batch-processing script (a variant that discovers the samples automatically is sketched after it):
#!/bin/bash
for i in 1 2;do
{
fastp -i /input_path/merge${i}_1.fq.gz -I /input_path/merge${i}_2.fq.gz -o /output_path/merge${i}_1.fq.gz -O /output_path/merge${i}_2.fq.gz \
-h /output_path/merge${i}.html -j /output_path/merge${i}.json \
-q 10 -u 10 -l 150 -w 16 \
--adapter_sequence AAGTCGGAGGCCAAGCGGTCTTAGGAAGACAA --adapter_sequence_r2 AAGTCGGATCGTAGCCATGTCGTTCTGTGAGCCAAGGAGTTG
}
done
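The loop above hard-codes the sample indices. A hedged alternative (my own sketch, assuming the merge<N>_1.fq.gz / merge<N>_2.fq.gz naming and the /input_path, /output_path placeholders used above) derives them from the input directory instead:
#!/bin/bash
# Discover sample indices from the R1 files rather than listing them by hand.
for r1 in /input_path/merge*_1.fq.gz; do
    i=$(basename "$r1" | sed -E 's/^merge([0-9]+)_1\.fq\.gz$/\1/')
    fastp -i /input_path/merge${i}_1.fq.gz -I /input_path/merge${i}_2.fq.gz \
    -o /output_path/merge${i}_1.fq.gz -O /output_path/merge${i}_2.fq.gz \
    -h /output_path/merge${i}.html -j /output_path/merge${i}.json \
    -q 10 -u 10 -l 150 -w 16 \
    --adapter_sequence AAGTCGGAGGCCAAGCGGTCTTAGGAAGACAA --adapter_sequence_r2 AAGTCGGATCGTAGCCATGTCGTTCTGTGAGCCAAGGAGTTG
done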
The following is what I use myself (a post-run output check is sketched after it):
#!/bin/bash
for i in 001 002 ...;do
{
singularity exec --bind /data ~/lyz/software/software.sif/Reseq_genek.sif \
fastp -i /data/csp/lyz/cas_raw/Cas${i}_1.fq.gz -I /data/csp/lyz/cas_raw/Cas${i}_2.fq.gz \
-o /data/csp/lyz/cas_clean/Cas${i}_1.fq.gz -O /data/csp/lyz/cas_clean/Cas${i}_2.fq.gz \
-h /data/csp/lyz/cas_clean/Cas${i}.html -j /data/csp/lyz/cas_clean/Cas${i}.json \
-q 10 -u 10 -l 150 -w 16 \
--adapter_sequence AAGTCGGAGGCCAAGCGGTCTTAGGAAGACAA --adapter_sequence_r2 AAGTCGGATCGTAGCCATGTCGTTCTGTGAGCCAAGGAGTTG
}
done
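After the loop finishes, a hedged post-run check (my own addition, reusing the /data/csp/lyz/cas_clean paths above) that every expected fastp output exists and is non-empty:
#!/bin/bash
for i in 001 002; do    # extend this list to match the samples processed above
    for f in /data/csp/lyz/cas_clean/Cas${i}_1.fq.gz /data/csp/lyz/cas_clean/Cas${i}_2.fq.gz \
             /data/csp/lyz/cas_clean/Cas${i}.html /data/csp/lyz/cas_clean/Cas${i}.json; do
        [ -s "$f" ] || echo "MISSING or EMPTY: $f"
    done
done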
References:
[1] Merging .gz files with cat: https://blog.csdn.net/weixin_31956641/article/details/116553743
[2] fastp introduction and comparison with other tools: https://www.jianshu.com/p/c3e74c6b8a2b
[3] fastp paper, "fastp: an ultra-fast all-in-one FASTQ preprocessor", Bioinformatics: https://doi.org/10.1093/bioinformatics/bty560
[4] fastp command reference (Chinese), based on the manual: https://blog.csdn.net/sinat_32872729/article/details/94440265
[5] fastp GitHub manual: https://github.com/OpenGene/fastp#adapters
[6] Example of batch quality control and filtering with fastp: https://www.jianshu.com/p/d83870fa4944
[7] SOAPnuke GitHub tutorial: https://github.com/BGI-flexlab/SOAPnuke