Task description
You are given 2,000 high-resolution images annotated with vehicle scene classification labels. Use this data to build and train a model, then apply it to label the images in the test set.
Label information
0: bus
1: taxi
2: truck
3: family sedan
4: minibus
5: jeep
6: SUV
7: heavy truck
8: racing car
9: fire engine
File information
train.rar: training set
val.rar: validation set
test.rar: test set
Preface
This was Lab 1 of the Machine Learning elective I took in the first semester of my junior year. As a first attempt at writing this kind of code I inevitably drew on many excellent examples found online; this post is mainly a record. The approach is transfer learning with PyTorch; the PyTorch version is 1.0.1.
>>> import torch
>>> print(torch.__version__)
1.0.1
The complete project files and attachments have been uploaded to GitHub; the link is at the end of this post.
Implementation
Configuration file
Create a config.py file to hold a few parameters so they are easy to change:
# -*- coding:utf-8 -*-
# @time : 2019.12.02
# @IDE : pycharm
# @author : wangzhebufangqi
# @github : https://github.com/wangzhebufangqi
# number of classes in the dataset
NUM_CLASSES = 10
# batch size used during training
BATCH_SIZE = 32
# number of training epochs
NUM_EPOCHS = 25
## location of the pretrained model
# download: https://download.pytorch.org/models/resnet50-19c8e357.pth
PRETRAINED_MODEL = './resnet50-19c8e357.pth'
## save path for the training output, by default under trained_models/
TRAINED_MODEL = 'trained_models/vehicle-10_record.pth'
# dataset locations
TRAIN_DATASET_DIR = './vehicle-10/train'
VALID_DATASET_DIR = './vehicle-10/val'
Data processing
The dataset is organized as follows: train contains 140 images per class, val contains 20 images per class, and test has no class subfolders.
Data augmentation:
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),  # random crop, resized to 256x256
    transforms.RandomRotation(degrees=15),                     # random rotation
    transforms.RandomHorizontalFlip(),                         # random horizontal flip
    transforms.CenterCrop(size=224),                           # center crop to 224x224
    transforms.ToTensor(),                                     # convert to tensor
    transforms.Normalize([0.485, 0.456, 0.406],                # normalize with ImageNet stats
                         [0.229, 0.224, 0.225])
])
test_valid_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])
])
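As a side note, ToTensor scales pixel values from [0, 255] to [0, 1], and Normalize then applies (x - mean) / std per channel using the ImageNet statistics above. A minimal NumPy sketch of that arithmetic (the pixel value 128 is just an illustrative assumption):

```python
import numpy as np

# ImageNet channel means and standard deviations used above
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# an illustrative RGB pixel with raw value 128 in every channel
raw = np.array([128, 128, 128], dtype=np.float64)

scaled = raw / 255.0                 # what ToTensor does: [0, 255] -> [0, 1]
normalized = (scaled - mean) / std   # what Normalize does, per channel

print(normalized)
```

This is also why an image tensor has to be denormalized before plotting it, as done in the sanity check further below.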
Load the data with DataLoader:
train_datasets = datasets.ImageFolder(train_directory, transform=train_transforms)
train_data_size = len(train_datasets)
train_data = torch.utils.data.DataLoader(train_datasets, batch_size=batch_size, shuffle=True)
valid_datasets = datasets.ImageFolder(valid_directory, transform=test_valid_transforms)
valid_data_size = len(valid_datasets)
valid_data = torch.utils.data.DataLoader(valid_datasets, batch_size=batch_size, shuffle=True)
print(train_data_size, valid_data_size)
The printed train_data_size and valid_data_size are:
1400 200
That completes the data-loading part.
You can sanity-check it like this:
# quick test of the data-loading code
if __name__ == "__main__":
    import numpy as np
    import matplotlib.pyplot as plt

    for images, labels in train_data:
        print(labels)
    for images, labels in train_data:
        img = images[0].numpy()
        img = np.transpose(img, (1, 2, 0))
        # undo the Normalize transform so the image displays correctly
        img = img * np.array([0.229, 0.224, 0.225]) + np.array([0.485, 0.456, 0.406])
        plt.imshow(np.clip(img, 0, 1))
        plt.show()
The code above outputs train_data_size // batch_size batches of labels and images. For example, if train contained 80 images and batch_size were 40, it would print two label tensors and display two images.
Training
Transfer learning
We use the pretrained resnet50 model, chosen mainly because it is fairly small: the .pth file is under 100 MB. Be sure to freeze its parameters.
resnet50 = models.resnet50(pretrained=True)
for param in resnet50.parameters():
param.requires_grad = False
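A quick way to confirm the freeze worked is to count trainable parameters. A minimal sketch, with a toy two-layer model standing in for resnet50 to keep it self-contained:

```python
import torch.nn as nn

# toy model standing in for resnet50
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# freeze everything, exactly as done for the resnet50 backbone above
for param in model.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(trainable, total)  # 0 46
```

After the fc head is replaced below, only the head's parameters would show up as trainable.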
To fit the ten-class dataset described above, replace the final layer of resnet50: feed the input features of the original fully connected layer into a 256-unit linear layer, follow it with ReLU and Dropout, then a 256x10 linear layer, and finish with a LogSoftmax over the 10 classes.
You can compare the model before and after the change with print(resnet50).
for param in resnet50.parameters():
    param.requires_grad = False

fc_inputs = resnet50.fc.in_features
resnet50.fc = nn.Sequential(
    nn.Linear(fc_inputs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, 10),
    nn.LogSoftmax(dim=1)
)
Define the loss function and optimizer:
loss_func = nn.NLLLoss()
optimizer = optim.Adam(resnet50.parameters())
Both are provided by PyTorch and can be used as-is. Since the backbone is frozen, Adam effectively only updates the new fc head.
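NLLLoss is used here because the model's head ends in LogSoftmax: NLLLoss applied to log-probabilities is exactly cross-entropy. A small NumPy sketch of that identity (the logits are made-up numbers):

```python
import numpy as np

# made-up logits for one sample over 3 classes; the true class is 2
logits = np.array([1.0, 2.0, 0.5])
target = 2

# LogSoftmax: log of the softmax probabilities
log_probs = logits - np.log(np.sum(np.exp(logits)))

# NLLLoss picks out the negative log-probability of the true class
nll = -log_probs[target]

# the same value computed as cross-entropy directly from the logits
cross_entropy = -logits[target] + np.log(np.sum(np.exp(logits)))

print(nll, cross_entropy)
```

So LogSoftmax + NLLLoss is equivalent to raw logits + nn.CrossEntropyLoss; the document's choice just makes the model output log-probabilities directly.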
The training loop:
def train_and_valid(model, loss_function, optimizer, epochs=25):
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # use GPU if available
    record = []
    best_acc = 0.0
    best_epoch = 0
    for epoch in range(epochs):  # train for `epochs` rounds
        epoch_start = time.time()
        print("Epoch: {}/{}".format(epoch + 1, epochs))
        model.train()  # training phase
        train_loss = 0.0
        train_acc = 0.0
        valid_loss = 0.0
        valid_acc = 0.0
        for i, (inputs, labels) in enumerate(train_data):
            inputs = inputs.to(device)
            labels = labels.to(device)
            # remember to zero the gradients
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = loss_function(outputs, labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item() * inputs.size(0)
            ret, predictions = torch.max(outputs.data, 1)
            correct_counts = predictions.eq(labels.data.view_as(predictions))
            acc = torch.mean(correct_counts.type(torch.FloatTensor))
            train_acc += acc.item() * inputs.size(0)
        with torch.no_grad():
            model.eval()  # validation phase
            for j, (inputs, labels) in enumerate(valid_data):
                inputs = inputs.to(device)
                labels = labels.to(device)
                outputs = model(inputs)
                loss = loss_function(outputs, labels)
                valid_loss += loss.item() * inputs.size(0)
                ret, predictions = torch.max(outputs.data, 1)
                correct_counts = predictions.eq(labels.data.view_as(predictions))
                acc = torch.mean(correct_counts.type(torch.FloatTensor))
                valid_acc += acc.item() * inputs.size(0)
        avg_train_loss = train_loss / train_data_size
        avg_train_acc = train_acc / train_data_size
        avg_valid_loss = valid_loss / valid_data_size
        avg_valid_acc = valid_acc / valid_data_size
        record.append([avg_train_loss, avg_valid_loss, avg_train_acc, avg_valid_acc])
        if avg_valid_acc > best_acc:  # track the epoch with the best validation accuracy
            best_acc = avg_valid_acc
            best_epoch = epoch + 1
        epoch_end = time.time()
        print("Epoch: {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}%, \n\t\tValidation: Loss: {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(
            epoch + 1, avg_train_loss, avg_train_acc * 100, avg_valid_loss, avg_valid_acc * 100,
            epoch_end - epoch_start))
        print("Best Accuracy for validation : {:.4f} at epoch {:03d}".format(best_acc, best_epoch))
        # torch.save(model, 'trained_models/vehicle-10_model_' + str(epoch + 1) + '.pth')
    return model, record
This is more or less a standard template for a training loop.
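Note that the loop tracks best_acc but, with the torch.save line commented out, never actually checkpoints the model. One common pattern (a sketch, not part of the original code; filename and accuracies are made up) is to save the state_dict whenever validation accuracy improves, shown here with a toy model:

```python
import torch
import torch.nn as nn

# toy stand-in for the fine-tuned resnet50
model = nn.Linear(4, 2)

best_acc = 0.0
# pretend these are per-epoch validation accuracies
for epoch, acc in enumerate([0.55, 0.72, 0.68, 0.80]):
    if acc > best_acc:
        best_acc = acc
        # save only the weights of the best epoch so far
        torch.save(model.state_dict(), 'best_model.pth')

# restore the best weights later
model.load_state_dict(torch.load('best_model.pth'))
print(best_acc)
```

Saving the state_dict rather than the whole model (as the commented-out line does) keeps the checkpoint independent of the surrounding class definitions.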
Results
if __name__ == '__main__':
    num_epochs = config.NUM_EPOCHS
    trained_model, record = train_and_valid(resnet50, loss_func, optimizer, num_epochs)
    torch.save(record, config.TRAINED_MODEL)

    record = np.array(record)
    plt.plot(record[:, 0:2])
    plt.legend(['Train Loss', 'Valid Loss'])
    plt.xlabel('Epoch Number')
    plt.ylabel('Loss')
    plt.ylim(0, 1)
    plt.savefig('loss.png')
    plt.show()

    plt.plot(record[:, 2:4])
    plt.legend(['Train Accuracy', 'Valid Accuracy'])
    plt.xlabel('Epoch Number')
    plt.ylabel('Accuracy')
    plt.ylim(0, 1)
    plt.savefig('accuracy.png')
    plt.show()
The losses and accuracies are recorded per epoch and plotted with matplotlib.
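For the unlabeled test set, prediction boils down to running each preprocessed image through the model in eval mode and taking the argmax. A minimal sketch, using a toy model and a random tensor in place of the fine-tuned resnet50 and a real test image (the class list matches the labels above):

```python
import torch
import torch.nn as nn

CLASS_NAMES = ['bus', 'taxi', 'truck', 'family sedan', 'minibus',
               'jeep', 'SUV', 'heavy truck', 'racing car', 'fire engine']

def predict(model, image_tensor):
    """Return the predicted class name for one preprocessed image tensor."""
    model.eval()
    with torch.no_grad():
        # add a batch dimension: (3, 224, 224) -> (1, 3, 224, 224)
        outputs = model(image_tensor.unsqueeze(0))
        _, pred = torch.max(outputs, 1)
    return CLASS_NAMES[pred.item()]

# toy stand-in for the fine-tuned resnet50; a random tensor stands in
# for a test image after test_valid_transforms
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
fake_image = torch.rand(3, 224, 224)
print(predict(toy_model, fake_image))
```

With the real model, image_tensor would come from opening a test image with PIL and applying test_valid_transforms.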
My machine is quite underpowered: with batch_size set to 32, 25 epochs on the CPU took about 600 s each. Still, the final result was quite good: validation accuracy peaked at 93% in epoch 10.
......
Epoch: 21/25
Epoch: 021, Training: Loss: 0.2828, Accuracy: 90.3571%,
Validation: Loss: 0.2828, Accuracy: 90.0000%, Time: 590.9682s
Best Accuracy for validation : 0.9300 at epoch 010
Epoch: 22/25
Epoch: 022, Training: Loss: 0.3188, Accuracy: 88.8571%,
Validation: Loss: 0.3188, Accuracy: 88.5000%, Time: 595.5310s
Best Accuracy for validation : 0.9300 at epoch 010
Epoch: 23/25
Epoch: 023, Training: Loss: 0.2980, Accuracy: 91.0000%,
Validation: Loss: 0.2980, Accuracy: 88.5000%, Time: 588.0105s
Best Accuracy for validation : 0.9300 at epoch 010
Epoch: 24/25
Epoch: 024, Training: Loss: 0.2783, Accuracy: 88.8571%,
Validation: Loss: 0.2783, Accuracy: 92.0000%, Time: 615.0971s
Best Accuracy for validation : 0.9300 at epoch 010
Epoch: 25/25
Epoch: 025, Training: Loss: 0.2596, Accuracy: 88.9286%,
Validation: Loss: 0.2596, Accuracy: 92.0000%, Time: 595.2022s
Best Accuracy for validation : 0.9300 at epoch 010
Process finished with exit code 0
Postscript
Before settling on resnet50 I also tried densenet169, vgg16, and others, but those models were probably too large for my machine: every run froze.
The complete project files have been uploaded to GitHub: https://github.com/wangzhebufangqi/MachineLearning/tree/master/lab1
Reference links
https://blog.csdn.net/heiheiya/article/details/103028543
https://blog.csdn.net/qq_18668137/article/details/80883350
https://zhuanlan.zhihu.com/p/81688220
https://zhuanlan.zhihu.com/p/67220643