Resources for Pretrained Models

Comprehensive collections

pretrained-models · GitHub Topics: https://github.com/topics/pretrained-models

DeepDA: https://github.com/jindongwang/transferlearning/tree/master/code/DeepDA
DeepDG: https://github.com/jindongwang/transferlearning/tree/master/code/DeepDG

CV

TensorFlow Hub

TensorFlow Hub: https://tfhub.dev/
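
A minimal loading sketch, assuming tensorflow and tensorflow_hub are installed; the module handle is one published MobileNetV2 checkpoint, chosen here purely as an example:

import tensorflow as tf
import tensorflow_hub as hub

# Wrap a MobileNetV2 ImageNet feature extractor as a frozen Keras layer
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/5",
    trainable=False, input_shape=(224, 224, 3))

# Stack a fresh classification head on top for transfer learning
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(10),  # 10 = number of target classes (example)
])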

TIMM

GitHub - rwightman/pytorch-image-models: PyTorch image models, scripts, pretrained weights -- ResNet, ResNeXT, EfficientNet, EfficientNetV2, NFNet, Vision Transformer, MixNet, MobileNet-V3/V2, RegNet, DPN, CSPNet, and more

Usage:

import timm
import torch
import torch.nn as nn

# Build a ResNet-34; pretrained=True loads the ImageNet weights
model = timm.create_model('resnet34', pretrained=True)
x = torch.randn(1, 3, 224, 224)
model.eval()
model(x).shape  # torch.Size([1, 1000])

# Or, for fine-tuning, replace the classification head
model.fc = nn.Linear(model.fc.in_features, n_classes)  # n_classes is the number of target classes
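
To check which architectures ship with weights before creating one, you can query timm.list_models (the 'resnet*' wildcard below is just an example filter):

import timm

# Enumerate the ResNet variants that come with pretrained weights
print(timm.list_models('resnet*', pretrained=True))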

NLP

Downloading and using pretrained models - an article by 张耀灵 on Zhihu
https://zhuanlan.zhihu.com/p/515599304

How to download Hugging Face transformers pretrained models locally and use them? - an article by 于晨晨 on Zhihu
https://zhuanlan.zhihu.com/p/147144376

Download .pth files for PyTorch deep-learning pretrained models - Zhihu (zhihu.com)

Hugging Face

Hugging Face – The AI community building the future: https://huggingface.co/
Hugging Face · GitHub: https://github.com/huggingface

>>> from transformers import AutoTokenizer, AutoModel

>>> # Download (and cache) the tokenizer and weights from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")

>>> # Tokenize a sentence and run a forward pass
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
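
Tying into the Zhihu posts above on downloading models to a local machine: save_pretrained and from_pretrained also accept a plain directory path, so a model can be fetched once and then loaded offline. A minimal sketch (the local path is an arbitrary example):

from transformers import AutoTokenizer, AutoModel

# First run: download from the Hub, then keep a copy on disk
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./bert-base-uncased-local")
model.save_pretrained("./bert-base-uncased-local")

# Later runs: load entirely from the local directory, no network needed
tokenizer = AutoTokenizer.from_pretrained("./bert-base-uncased-local")
model = AutoModel.from_pretrained("./bert-base-uncased-local")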

GitHub - ymcui/Chinese-BERT-wwm: Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm series of models)
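
These checkpoints are also published on the Hugging Face Hub and, per the repo README, load with the stock BERT classes (hfl/chinese-bert-wwm-ext is one of the released variants):

from transformers import BertTokenizer, BertModel

# Load the whole-word-masking Chinese BERT released by the HFL group
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-bert-wwm-ext")
model = BertModel.from_pretrained("hfl/chinese-bert-wwm-ext")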

Others

From Intel:

GitHub - openvinotoolkit/open_model_zoo: Pre-trained Deep Learning models and demos (high quality and extremely fast)

Mainly PyTorch, but a bit dated:
GitHub - Cadene/pretrained-models.pytorch: Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
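
Its usage pattern, sketched along the lines of the repo README (assuming the pretrainedmodels package from that repo is installed):

import pretrainedmodels

# List the available architectures, then build one with ImageNet weights
print(pretrainedmodels.model_names)
model = pretrainedmodels.__dict__['resnext101_32x4d'](
    num_classes=1000, pretrained='imagenet')
model.eval()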
