Document splitting methods
1 Fixed length / separators
e.g. RecursiveCharacterTextSplitter, CharacterTextSplitter
CharacterTextSplitter: splits the text sequentially into fixed-length chunks, with some overlap between adjacent chunks
RecursiveCharacterTextSplitter: splits recursively by a priority-ordered list of separators (e.g. \n\n, \n, brackets, ...), which makes it well suited to nested structures such as bracketed quotations
TokenTextSplitter: splits sequentially by token count, also with some overlap
You can also split directly on a single separator:
docs = text.split(".")
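The fixed-length-with-overlap behavior described above can be sketched in plain Python (a minimal illustration of the idea, not the LangChain implementation; boundaries here are raw character offsets):

```python
def fixed_size_chunks(text, chunk_size=64, chunk_overlap=20):
    """Cut text into fixed-length chunks; adjacent chunks share
    chunk_overlap characters so context is not lost at boundaries."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = fixed_size_chunks("abcdefghij" * 10, chunk_size=10, chunk_overlap=4)
print(len(chunks))          # number of chunks produced
print(chunks[0], chunks[1]) # tail of chunk 0 repeats at head of chunk 1
```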
CharacterTextSplitter:
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=64,
    chunk_overlap=20,
)
docs = text_splitter.create_documents([text])  # text: the string to split
print(docs)
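The priority-ordered recursion behind RecursiveCharacterTextSplitter can be sketched like this (a simplified greedy version for illustration; the real LangChain class additionally handles chunk overlap and separator retention):

```python
def recursive_split(text, separators, chunk_size):
    """Split on the first separator found; any piece still over budget
    is split again with the remaining, finer-grained separators."""
    if len(text) <= chunk_size:
        return [text]
    for i, sep in enumerate(separators):
        if sep in text:
            chunks, buf = [], ""
            for piece in text.split(sep):
                cand = buf + sep + piece if buf else piece
                if len(cand) <= chunk_size:
                    buf = cand
                    continue
                if buf:
                    chunks.append(buf)
                    buf = ""
                if len(piece) > chunk_size:
                    chunks.extend(recursive_split(piece, separators[i + 1:], chunk_size))
                else:
                    buf = piece
            if buf:
                chunks.append(buf)
            return chunks
    # no separator left: fall back to hard fixed-length cuts
    return [text[j:j + chunk_size] for j in range(0, len(text), chunk_size)]

parts = recursive_split("aaa\n\nbbb\nccc ddd", ["\n\n", "\n", " "], 5)
print(parts)  # ['aaa', 'bbb', 'ccc', 'ddd']
```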
2 Splitting by document format
MarkdownHeaderTextSplitter: for Markdown files
LatexTextSplitter: for LaTeX files
PyPDFLoader: for PDF files (a loader that yields one Document per page)
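The header-based grouping that MarkdownHeaderTextSplitter performs can be sketched in plain Python (a minimal illustration, not LangChain's API; header nesting levels are dropped for brevity):

```python
def split_markdown_by_headers(md_text):
    """Group lines under the most recent '#'-style header."""
    sections, header, lines = [], None, []
    for line in md_text.splitlines():
        if line.startswith("#"):
            if header is not None or lines:
                sections.append((header, "\n".join(lines).strip()))
            header, lines = line.lstrip("#").strip(), []
        else:
            lines.append(line)
    sections.append((header, "\n".join(lines).strip()))
    return sections

sections = split_markdown_by_headers("# A\nfoo\n## B\nbar")
print(sections)  # [('A', 'foo'), ('B', 'bar')]
```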
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("xxx.pdf")
pages = loader.load()  # one Document per page
3 Rule-based sentence splitters
NLTK or spaCy
NLTK's sentence splitting is based on a trained model plus rules, a technique called Sentence Boundary Detection (SBD). It uses punctuation, abbreviations, numbers, and other language-specific cues to locate sentence boundaries; it relies on linguistic rules and patterns rather than on the semantic relationship between sentences.
Splitting English sentences:
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize
text = "Hello, how are you? I'm doing well. Thanks for asking."
sentences = sent_tokenize(text)
print(sentences)
# ['Hello, how are you?', "I'm doing well.", 'Thanks for asking.']
Splitting Chinese sentences: NLTK's punkt model does not cover Chinese, but a simple rule on Chinese end-of-sentence punctuation works. (jieba is a word-segmentation library, not a sentence splitter, so it is not needed for this step.)
import re

def chinese_sent_tokenize(text):
    # Split after Chinese end-of-sentence punctuation, keeping the mark.
    sentences = re.split(r'(?<=[。!?])', text)
    return [s for s in sentences if s.strip()]

sentences = chinese_sent_tokenize("今天天气很好。我们去公园吧!")
print(sentences)  # ['今天天气很好。', '我们去公园吧!']
4 Semantic splitting
- cross-segment models based on BERT
- seqModel:
An example: nlp_bert_document-segmentation_chinese-base
https://modelscope.cn/models/iic/nlp_bert_document-segmentation_chinese-base/summary
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

p = pipeline(
    task=Tasks.document_segmentation,
    model='damo/nlp_bert_document-segmentation_chinese-base')
result = p(documents=text)  # text: the document string to segment
print(result[OutputKeys.TEXT])
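The cross-segment idea above can be sketched as follows: slide over every adjacent sentence pair and let a binary classifier decide whether a topic boundary lies between them. Here boundary_score is a hypothetical placeholder standing in for a fine-tuned BERT pair classifier:

```python
def segment_by_boundaries(sentences, boundary_score, threshold=0.5):
    """Cross-segment style segmentation: boundary_score(left, right)
    returns P(topic boundary) for a pair of adjacent sentences.
    (boundary_score is a placeholder for a real trained classifier.)"""
    segments, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        if boundary_score(sentences[i - 1], sentences[i]) >= threshold:
            segments.append(current)
            current = []
        current.append(sentences[i])
    segments.append(current)
    return segments

# Toy scorer: pretend sentences starting with "Topic" open a new topic.
toy_score = lambda left, right: 1.0 if right.startswith("Topic") else 0.0
result = segment_by_boundaries(
    ["Topic A intro", "details", "Topic B intro", "more"], toy_score)
print(result)  # [['Topic A intro', 'details'], ['Topic B intro', 'more']]
```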
References:
https://zhuanlan.zhihu.com/p/673906072
https://zhuanlan.zhihu.com/p/666273413
https://blog.csdn.net/hmywillstronger/article/details/130073676
LangChain + LLM local knowledge base:
https://blog.csdn.net/v_JULY_v/article/details/131552592
seqModel:
https://blog.csdn.net/weixin_48827824/article/details/126952959
From cross-segment to seqModel:
https://blog.csdn.net/v_JULY_v/article/details/135386202