Continuing from the previous chapter, this section takes a closer look at Transformers. For complete details, see the official documentation.
AutoClass
An AutoClass is a shortcut that automatically retrieves a pretrained model's architecture from its name or path; all you have to do is pick the AutoClass appropriate for your task.
AutoTokenizer
A tokenizer preprocesses text into an array of numbers the model can consume. Several rules govern the tokenization process, including how to split a word, so the tokenizer must be instantiated with the same model name as the model you load to ensure the rules match. Instantiate a tokenizer as follows:
from transformers import AutoTokenizer
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
Pass your text to the tokenizer:
encoding = tokenizer("We are very happy to show you the 🤗 Transformers library.")
print(encoding)
{'input_ids': [101, 11312, 10320, 12495, 19308, 10114, 11391, 10855, 10103, 100, 58263, 13299, 119, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
input_ids are the numeric ids of each token, and attention_mask marks which tokens the model should attend to.
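To see the mapping in the other direction, you can decode the ids back into text (a small illustrative check, not part of the original example; the special tokens added by the tokenizer become visible in the result):
print(tokenizer.decode(encoding["input_ids"]))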
AutoModel
Transformers provides a simple, unified way to load pretrained model instances; you only need to pick the AutoModel that matches the task. For text (or sequence) classification, for example, load AutoModelForSequenceClassification:
from transformers import AutoModelForSequenceClassification
model_name = "nlptown/bert-base-multilingual-uncased-sentiment"
pt_model = AutoModelForSequenceClassification.from_pretrained(model_name)
Now build a preprocessed batch of inputs and pass it directly to the model, unpacking the dictionary with **:
pt_batch = tokenizer(
["We are very happy to show you the 🤗 Transformers library.", "We hope you don't hate it."],
padding=True,
truncation=True,
max_length=512,
return_tensors="pt",
)
pt_outputs = pt_model(**pt_batch)
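As an aside (not part of the original snippet), for pure inference you would normally disable gradient tracking to save memory and compute:
import torch

with torch.no_grad():
    pt_outputs = pt_model(**pt_batch)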
The model outputs its final activations in the logits attribute. Apply the softmax function to the logits to get the probabilities:
from torch import nn
pt_predictions = nn.functional.softmax(pt_outputs.logits, dim=-1)
print(pt_predictions)
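To turn these probabilities into readable predictions, you can take the argmax and look the indices up in the model config's id2label mapping (an illustrative step; for this checkpoint the labels should correspond to star ratings):
predicted_ids = pt_predictions.argmax(dim=-1)
print([pt_model.config.id2label[i.item()] for i in predicted_ids])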
Saving a model
Once your model is fine-tuned, you can save it together with its tokenizer using PreTrainedModel.save_pretrained():
pt_save_directory = "./pt_save_pretrained"
tokenizer.save_pretrained(pt_save_directory)
pt_model.save_pretrained(pt_save_directory)
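When you are ready to use the model again, reload it (and its tokenizer) from the saved directory with from_pretrained():
pt_model = AutoModelForSequenceClassification.from_pretrained("./pt_save_pretrained")
tokenizer = AutoTokenizer.from_pretrained("./pt_save_pretrained")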
Model configuration
You can change how a model is built by specifying its attributes in a configuration, such as the number of hidden layers or the number of attention heads; the configuration is supplied when the model is initialized.
AutoConfig.from_pretrained() loads the configuration of a pretrained model, and any attribute passed as a keyword argument overrides the pretrained value:
from transformers import AutoConfig
my_config = AutoConfig.from_pretrained("distilbert/distilbert-base-uncased", n_heads=12)
Then create a model from the custom configuration with AutoModel.from_config():
from transformers import AutoModel
my_model = AutoModel.from_config(my_config)
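Note that from_config builds the model with randomly initialized weights; no pretrained weights are loaded. As an alternative (shown with DistilBertModel purely as an illustration), the same configuration can be passed directly to a concrete model class:
from transformers import DistilBertModel

my_model = DistilBertModel(my_config)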
Training / fine-tuning
All models are standard torch.nn.Module instances, so you can train them in an ordinary PyTorch training loop. For PyTorch, Transformers also provides a Trainer class that implements the basic training loop and adds extra features such as distributed training:
from transformers import AutoModelForSequenceClassification
# Load a pretrained model
model = AutoModelForSequenceClassification.from_pretrained("distilbert/distilbert-base-uncased")
# Define the training hyperparameters in TrainingArguments
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir="path/to/save/folder/",
learning_rate=2e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=2,
)
# Load a preprocessing class, such as a tokenizer, image processor, feature extractor, or processor
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
# Load a dataset
from datasets import load_dataset
dataset = load_dataset("rotten_tomatoes")
# Define a function that tokenizes the dataset
def tokenize_dataset(dataset):
    return tokenizer(dataset["text"])
# Apply the tokenization function over the entire dataset with map
dataset = dataset.map(tokenize_dataset, batched=True)
# Use DataCollatorWithPadding to create padded batches of examples from the dataset
from transformers import DataCollatorWithPadding
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=dataset["train"],
eval_dataset=dataset["test"],
tokenizer=tokenizer,
data_collator=data_collator,
)
trainer.train()
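After training finishes, you can evaluate on the eval set and save the fine-tuned checkpoint (both are standard Trainer methods):
metrics = trainer.evaluate()
print(metrics)
trainer.save_model("path/to/save/folder/")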
For sequence-to-sequence tasks such as translation or summarization, use the Seq2SeqTrainer and Seq2SeqTrainingArguments classes instead.
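As a rough sketch of what that looks like (using t5-small and a tiny in-memory dataset purely for illustration, not from the original text), a summarization fine-tune mirrors the classification example above with the Seq2Seq classes swapped in:
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A toy dataset so the sketch is self-contained; replace with a real summarization dataset.
raw = Dataset.from_dict({
    "text": ["summarize: The 🤗 Transformers library provides thousands of pretrained models."],
    "summary": ["Transformers provides pretrained models."],
})

def preprocess(examples):
    # Tokenize the inputs and the target summaries, and attach the targets as labels.
    model_inputs = tokenizer(examples["text"], truncation=True)
    labels = tokenizer(text_target=examples["summary"], truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

dataset = raw.map(preprocess, batched=True, remove_columns=["text", "summary"])

training_args = Seq2SeqTrainingArguments(
    output_dir="path/to/save/folder/",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=2,
    predict_with_generate=True,  # run generate() when evaluating
)

# DataCollatorForSeq2Seq pads inputs and labels to the longest example in each batch.
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    eval_dataset=dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
trainer.train()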