fastp is a very fast quality-control tool for sequencing data. The following notes on its parameters are organized from the project's GitHub documentation.
github: https://github.com/OpenGene/fastp
Basic usage:
Single-end data
fastp -i in.fq -o out.fq
Paired-end data
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz
Reports are written in HTML and JSON format.
Output to STDOUT
--stdout
# write passing reads to standard output; for paired-end data, read1 and read2 are interleaved, i.e. record1-R1 -> record1-R2 -> record2-R1 -> record2-R2 -> record3-R1 -> record3-R2 ...
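For example, a minimal sketch that pipes the interleaved output into a downstream command (the filenames and the gzip step are illustrative):
fastp -i in.R1.fq.gz -I in.R2.fq.gz --stdout | gzip > clean.interleaved.fq.gz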
Use STDIN as input
--stdin
# read input from standard input; for interleaved paired-end data, also specify --interleaved_in
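For example, a sketch of reading interleaved PE data from a pipe (the upstream cat and the filenames are illustrative):
cat in.interleaved.fq | fastp --stdin --interleaved_in -o out.R1.fq.gz -O out.R2.fq.gz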
Store unpaired reads from paired-end data
--unpaired1 # store read1 whose mate read2 failed the filters while read1 itself passed
--unpaired2 # store read2 whose mate read1 failed the filters while read2 itself passed
# the two options may point to the same file, so that unpaired read1 and read2 are written together
Store filtered-out reads
--failed_out # store reads that fail the filters; the failure reason is appended to the read name
# if unpaired reads are not stored elsewhere, they are also written here, with the reason paired_read_is_failing
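For example, a sketch that keeps unpaired and failed reads alongside the normal output (all filenames are illustrative):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz \
      --unpaired1 unpaired.fq.gz --unpaired2 unpaired.fq.gz --failed_out failed.fq.gz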
Process only part of the reads
--reads_to_process # set how many reads/pairs to process
Do not overwrite existing files
--dont_overwrite # refuse to overwrite existing files
Filtering
# Quality filtering, enabled by default
-Q, --disable_quality_filtering # disable quality filtering
-n, --n_base_limit # limit the number of N bases per read
-u, --unqualified_percent_limit # maximum percentage of low-quality bases allowed, default 40, i.e. 40%
-q, --qualified_quality_phred # bases with quality below this value are counted as low quality, default 15, i.e. Q15
-e, --average_qual # reads with an average quality below this value are discarded, default 0, i.e. no requirement
# Length filtering, enabled by default
-L, --disable_length_filtering # disable length filtering
-l, --length_required # minimum read length
--length_limit # maximum read length, default 0 means no limit
# Low-complexity filtering, disabled by default
# complexity is defined as the percentage of bases that differ from the next base
-y, --low_complexity_filter # enable the low-complexity filter
-Y, --complexity_threshold # range 0~100, default 30, i.e. at least 30% complexity is required
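For example, a sketch with somewhat stricter filtering thresholds (the values are illustrative, not recommendations):
fastp -i in.fq.gz -o out.fq.gz -q 20 -u 30 -n 3 -l 50 -y -Y 40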
Adapter trimming
# Adapter trimming is enabled by default, with automatic adapter detection
-A, --disable_adapter_trimming # disable adapter trimming
# For single-end data, fastp auto-detects the adapter by analyzing the first 1M reads; the adapter can also be specified manually, and once a sequence is given, auto-detection is turned off
-a, --adapter_sequence # specify the adapter sequence
# For paired-end data, the adapter can be identified from the overlap between read1 and read2, so adapter sequences usually do not need to be specified; they can still be given explicitly for read1 and read2. When fastp cannot find the overlap (e.g. because of low-quality bases), trimming falls back to the specified adapter sequences.
--adapter_sequence # adapter sequence for read1
--adapter_sequence_r2 # adapter sequence for read2
--detect_adapter_for_pe # enable adapter sequence auto-detection for PE data (sequence-based, separate from the overlap method)
# fastp has built-in adapter sequences to help detection; a FASTA file of adapter sequences can also be supplied to trim multiple adapters
--adapter_fasta # specify a FASTA file containing adapter sequences
# adapter sequences in the FASTA must be at least 6 bp long, otherwise they are skipped
# any sequence you want removed can be included, e.g. polyA
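For example, two sketches: enabling sequence-based adapter detection for PE data, and trimming by a user-supplied adapter FASTA (adapters.fa is an illustrative filename):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz --detect_adapter_for_pe
fastp -i in.fq.gz -o out.fq.gz --adapter_fasta adapters.fa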
Read trimming
# Quality-based cutting of reads
# fastp cuts reads based on the mean base quality inside a sliding window; there are three modes
#1.
-5, --cut_front # disabled by default; slide a window from the 5' end, drop the bases in the window if its mean quality is below the threshold and keep sliding, otherwise stop
--cut_front_window_size # window size
--cut_front_mean_quality # mean quality threshold
# leading N bases are also trimmed
#2.
-3, --cut_tail # disabled by default; slide a window from the 3' end, same logic as above
--cut_tail_window_size
--cut_tail_mean_quality
#3.
-r, --cut_right # slide a window from the 5' end; when a window with mean quality below the threshold is found, drop the bases in that window and everything to its right, then stop
--cut_right_window_size # window size
--cut_right_mean_quality # mean quality threshold
# if the per-mode window size and mean quality options above are not set, the values of the following shared options are used:
-W, --cut_window_size
-M, --cut_mean_quality
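For example, a sketch of the sliding-window mode using the shared window/quality options (the thresholds are illustrative):
fastp -i in.fq.gz -o out.fq.gz --cut_right -W 4 -M 20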
# Global trimming: cut a fixed number of bases from the 5' or 3' end of every read
-f, --trim_front1 # trim n bp from the 5' end of read1 (or of SE reads)
-t, --trim_tail1 # trim n bp from the 3' end of read1 (or of SE reads)
-F, --trim_front2 # trim n bp from the 5' end of read2
-T, --trim_tail2 # trim n bp from the 3' end of read2
# if -F and -T are not specified, they default to the values of -f and -t
-b, --max_len1 # if read1 is longer than this value, trim the excess from its 3' end
-B, --max_len2 # if read2 is longer than this value, trim the excess from its 3' end
# if -b is specified but -B is not, -B defaults to the value of -b
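For example, a sketch of fixed-length global trimming (the lengths are illustrative; -F and -T are left to follow -f and -t):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz -f 5 -t 3 -b 100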
# PolyG trimming
# polyG trimming is performed automatically for Illumina NextSeq/NovaSeq data (detected from the machine ID)
-g, --trim_poly_g # force polyG trimming for all data
-G, --disable_trim_poly_g # disable polyG trimming
--poly_g_min_len # minimum polyG length to detect, default 10
# PolyX trimming, disabled by default
-x, --trim_poly_x # enable polyX trimming
--poly_x_min_len # minimum polyX length to detect, default 10
# when both polyG and polyX trimming are enabled, polyG is trimmed first, then polyX
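For example, a sketch that forces both polyG and polyX trimming (filenames are illustrative):
fastp -i in.fq.gz -o out.fq.gz -g -x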
UMI processing
# append the UMI to the read name; if the UMI is located inside the read, it is trimmed from the read
-U, --umi # enable UMI preprocessing
--umi_loc # location of the UMI, one of {index1, index2, read1, read2, per_index, per_read}
--umi_len # UMI length, required when umi_loc is read1/read2/per_read
--umi_prefix # add a prefix to the UMI
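For example, a sketch that extracts an 8 bp UMI from the start of read1 (the length and prefix are illustrative):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz -U --umi_loc read1 --umi_len 8 --umi_prefix UMI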
Base correction for paired-end data
# Base correction for PE data, disabled by default
# fastp performs overlap analysis of the read pair; for a mismatched base pair, if one base has high quality and the other has very low quality, the low-quality base is corrected to the high-quality one and assigned the same quality value
-c, --correction # enable correction
# overlap detection must satisfy all three of the following conditions:
--overlap_len_require # minimum overlap length, default 30
--overlap_diff_limit # maximum number of mismatched bases in the overlap, default 5
--overlap_diff_percent_limit # maximum percentage of mismatched bases in the overlap, default 20%
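For example, a sketch that enables correction with a stricter minimum overlap (the value is illustrative):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz -c --overlap_len_require 40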
Order of the operations in fastp that affect read length
1, UMI preprocessing (--umi)
2, global trimming at front (--trim_front)
3, global trimming at tail (--trim_tail)
4, quality pruning at 5' (--cut_front)
5, quality pruning by sliding window (--cut_right)
6, quality pruning at 3' (--cut_tail)
7, trim polyG (--trim_poly_g, enabled by default for NovaSeq/NextSeq data)
8, trim adapter by overlap analysis (enabled by default for PE data)
9, trim adapter by adapter sequence (--adapter_sequence, --adapter_sequence_r2. For PE data, this step is skipped if last step succeeded)
10, trim polyX (--trim_poly_x)
11, trim to max length (--max_len1, --max_len2)
Output file splitting
-s, --split # split the output into n files; each output filename gets a sequential split-number prefix
-S, --split_by_lines # split by limiting the number of lines in each output file; the actual line count may be slightly higher
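For example, a sketch that splits the output into 4 files (per the naming rule in the help text below, this yields 0001.out.fq.gz, 0002.out.fq.gz, ...):
fastp -i in.fq.gz -o out.fq.gz -s 4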
Merging paired-end reads
# merging relies on detecting the overlap between read1 and read2, so the overlap options used for base correction, --overlap_len_require (default 30), --overlap_diff_limit (default 5) and --overlap_diff_percent_limit (default 20%), also affect merging
-m, --merge # enable merging
--merged_out # file to store the merged reads
--include_unmerged # also write the reads that would go to --out1, --out2, --unpaired1 and --unpaired2 into the --merged_out file
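For example, a sketch that merges overlapping pairs while keeping unmerged reads in separate files (filenames are illustrative):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -m --merged_out merged.fq.gz -o unmerged.R1.fq.gz -O unmerged.R2.fq.gz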
Duplication rate
# Duplication rate evaluation, enabled by default
# reads with completely identical bases are considered duplicates; reads containing N bases are never considered duplicates
--dont_eval_duplication # disable duplication rate evaluation
# because of the hashing algorithm, the duplication estimate is not exact; different accuracy levels can be set, and higher levels are more accurate but need more memory and time
--dup_calc_accuracy # accuracy level, 1~6; default 1 without deduplication, default 3 when deduplication is enabled
# Deduplication, disabled by default
-D, --dedup # enable deduplication
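For example, a sketch that enables deduplication with a higher accuracy level (the level is illustrative):
fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz -D --dup_calc_accuracy 4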
All parameters (fastp --help output)
usage: fastp -i <in1> -o <out1> [-I <in1> -O <out2>] [options...]
options:
# I/O options
-i, --in1 read1 input file name (string)
-o, --out1 read1 output file name (string [=])
-I, --in2 read2 input file name (string [=])
-O, --out2 read2 output file name (string [=])
--unpaired1 for PE input, if read1 passed QC but read2 not, it will be written to unpaired1. Default is to discard it. (string [=])
--unpaired2 for PE input, if read2 passed QC but read1 not, it will be written to unpaired2. If --unpaired2 is same as --unpaired1 (default mode), both unpaired reads will be written to this same file. (string [=])
--failed_out specify the file to store reads that cannot pass the filters. (string [=])
--overlapped_out for each read pair, output the overlapped region if it has no any mismatched base. (string [=])
-m, --merge for paired-end input, merge each pair of reads into a single read if they are overlapped. The merged reads will be written to the file given by --merged_out, the unmerged reads will be written to the files specified by --out1 and --out2. The merging mode is disabled by default.
--merged_out in the merging mode, specify the file name to store merged output, or specify --stdout to stream the merged output (string [=])
--include_unmerged in the merging mode, write the unmerged or unpaired reads to the file specified by --merge. Disabled by default.
-6, --phred64 indicate the input is using phred64 scoring (it'll be converted to phred33, so the output will still be phred33)
-z, --compression compression level for gzip output (1 ~ 9). 1 is fastest, 9 is smallest, default is 4. (int [=4])
--stdin input from STDIN. If the STDIN is interleaved paired-end FASTQ, please also add --interleaved_in.
--stdout output passing-filters reads to STDOUT. This option will result in interleaved FASTQ output for paired-end input. Disabled by default.
--interleaved_in indicate that <in1> is an interleaved FASTQ which contains both read1 and read2. Disabled by default.
--reads_to_process specify how many reads/pairs to be processed. Default 0 means process all reads. (int [=0])
--dont_overwrite don't overwrite existing files. Overwritting is allowed by default.
--fix_mgi_id the MGI FASTQ ID format is not compatible with many BAM operation tools, enable this option to fix it.
# adapter trimming options
-A, --disable_adapter_trimming adapter trimming is enabled by default. If this option is specified, adapter trimming is disabled
-a, --adapter_sequence the adapter for read1. For SE data, if not specified, the adapter will be auto-detected. For PE data, this is used if R1/R2 are found not overlapped. (string [=auto])
--adapter_sequence_r2 the adapter for read2 (PE data only). This is used if R1/R2 are found not overlapped. If not specified, it will be the same as <adapter_sequence> (string [=])
--adapter_fasta specify a FASTA file to trim both read1 and read2 (if PE) by all the sequences in this FASTA file (string [=])
--detect_adapter_for_pe by default, the adapter sequence auto-detection is enabled for SE data only, turn on this option to enable it for PE data.
# global trimming options
-f, --trim_front1 trimming how many bases in front for read1, default is 0 (int [=0])
-t, --trim_tail1 trimming how many bases in tail for read1, default is 0 (int [=0])
-b, --max_len1 if read1 is longer than max_len1, then trim read1 at its tail to make it as long as max_len1. Default 0 means no limitation (int [=0])
-F, --trim_front2 trimming how many bases in front for read2. If it's not specified, it will follow read1's settings (int [=0])
-T, --trim_tail2 trimming how many bases in tail for read2. If it's not specified, it will follow read1's settings (int [=0])
-B, --max_len2 if read2 is longer than max_len2, then trim read2 at its tail to make it as long as max_len2. Default 0 means no limitation. If it's not specified, it will follow read1's settings (int [=0])
# duplication evaluation and deduplication
-D, --dedup enable deduplication to drop the duplicated reads/pairs
--dup_calc_accuracy accuracy level to calculate duplication (1~6), higher level uses more memory (1G, 2G, 4G, 8G, 16G, 24G). Default 1 for no-dedup mode, and 3 for dedup mode. (int [=0])
--dont_eval_duplication don't evaluate duplication rate to save time and use less memory.
# polyG tail trimming, useful for NextSeq/NovaSeq data
-g, --trim_poly_g force polyG tail trimming, by default trimming is automatically enabled for Illumina NextSeq/NovaSeq data
--poly_g_min_len the minimum length to detect polyG in the read tail. 10 by default. (int [=10])
-G, --disable_trim_poly_g disable polyG tail trimming, by default trimming is automatically enabled for Illumina NextSeq/NovaSeq data
# polyX tail trimming
-x, --trim_poly_x enable polyX trimming in 3' ends.
--poly_x_min_len the minimum length to detect polyX in the read tail. 10 by default. (int [=10])
# per read cutting by quality options
-5, --cut_front move a sliding window from front (5') to tail, drop the bases in the window if its mean quality < threshold, stop otherwise.
-3, --cut_tail move a sliding window from tail (3') to front, drop the bases in the window if its mean quality < threshold, stop otherwise.
-r, --cut_right move a sliding window from front to tail, if meet one window with mean quality < threshold, drop the bases in the window and the right part, and then stop.
-W, --cut_window_size the window size option shared by cut_front, cut_tail or cut_sliding. Range: 1~1000, default: 4 (int [=4])
-M, --cut_mean_quality the mean quality requirement option shared by cut_front, cut_tail or cut_sliding. Range: 1~36 default: 20 (Q20) (int [=20])
--cut_front_window_size the window size option of cut_front, default to cut_window_size if not specified (int [=4])
--cut_front_mean_quality the mean quality requirement option for cut_front, default to cut_mean_quality if not specified (int [=20])
--cut_tail_window_size the window size option of cut_tail, default to cut_window_size if not specified (int [=4])
--cut_tail_mean_quality the mean quality requirement option for cut_tail, default to cut_mean_quality if not specified (int [=20])
--cut_right_window_size the window size option of cut_right, default to cut_window_size if not specified (int [=4])
--cut_right_mean_quality the mean quality requirement option for cut_right, default to cut_mean_quality if not specified (int [=20])
# quality filtering options
-Q, --disable_quality_filtering quality filtering is enabled by default. If this option is specified, quality filtering is disabled
-q, --qualified_quality_phred the quality value that a base is qualified. Default 15 means phred quality >=Q15 is qualified. (int [=15])
-u, --unqualified_percent_limit how many percents of bases are allowed to be unqualified (0~100). Default 40 means 40% (int [=40])
-n, --n_base_limit if one read's number of N base is >n_base_limit, then this read/pair is discarded. Default is 5 (int [=5])
-e, --average_qual if one read's average quality score <avg_qual, then this read/pair is discarded. Default 0 means no requirement (int [=0])
# length filtering options
-L, --disable_length_filtering length filtering is enabled by default. If this option is specified, length filtering is disabled
-l, --length_required reads shorter than length_required will be discarded, default is 15. (int [=15])
--length_limit reads longer than length_limit will be discarded, default 0 means no limitation. (int [=0])
# low complexity filtering
-y, --low_complexity_filter enable low complexity filter. The complexity is defined as the percentage of base that is different from its next base (base[i] != base[i+1]).
-Y, --complexity_threshold the threshold for low complexity filter (0~100). Default is 30, which means 30% complexity is required. (int [=30])
# filter reads with unwanted indexes (to remove possible contamination)
--filter_by_index1 specify a file contains a list of barcodes of index1 to be filtered out, one barcode per line (string [=])
--filter_by_index2 specify a file contains a list of barcodes of index2 to be filtered out, one barcode per line (string [=])
--filter_by_index_threshold the allowed difference of index barcode for index filtering, default 0 means completely identical. (int [=0])
# base correction by overlap analysis options
-c, --correction enable base correction in overlapped regions (only for PE data), default is disabled
--overlap_len_require the minimum length to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. 30 by default. (int [=30])
--overlap_diff_limit the maximum number of mismatched bases to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. 5 by default. (int [=5])
--overlap_diff_percent_limit the maximum percentage of mismatched bases to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. Default 20 means 20%. (int [=20])
# UMI processing
-U, --umi enable unique molecular identifier (UMI) preprocessing
--umi_loc specify the location of UMI, can be (index1/index2/read1/read2/per_index/per_read, default is none (string [=])
--umi_len if the UMI is in read1/read2, its length should be provided (int [=0])
--umi_prefix if specified, an underline will be used to connect prefix and UMI (i.e. prefix=UMI, UMI=AATTCG, final=UMI_AATTCG). No prefix by default (string [=])
--umi_skip if the UMI is in read1/read2, fastp can skip several bases following UMI, default is 0 (int [=0])
# overrepresented sequence analysis
-p, --overrepresentation_analysis enable overrepresented sequence analysis.
-P, --overrepresentation_sampling One in (--overrepresentation_sampling) reads will be computed for overrepresentation analysis (1~10000), smaller is slower, default is 20. (int [=20])
# reporting options
-j, --json the json format report file name (string [=fastp.json])
-h, --html the html format report file name (string [=fastp.html])
-R, --report_title should be quoted with ' or ", default is "fastp report" (string [=fastp report])
# threading options
-w, --thread worker thread number, default is 3 (int [=3])
# output splitting options
-s, --split split output by limiting total split file number with this option (2~999), a sequential number prefix will be added to output name ( 0001.out.fq, 0002.out.fq...), disabled by default (int [=0])
-S, --split_by_lines split output by limiting lines of each file with this option(>=1000), a sequential number prefix will be added to output name ( 0001.out.fq, 0002.out.fq...), disabled by default (long [=0])
-d, --split_prefix_digits the digits for the sequential number padding (1~10), default is 4, so the filename will be padded as 0001.xxx, 0 to disable padding (int [=4])
# help
-?, --help print this message