Preface
In the previous article, 深度学习 - Tensorflow on iOS 入门 + MNIST, we trained a model with TensorFlow and compiled the TensorFlow iOS library to link into the project. The whole process was fairly cumbersome, the app size grew because of the TensorFlow library, and TensorFlow currently has no GPU support on iOS.
This article translates the main content of Speeding Up TensorFlow with Metal Performance Shaders and implements a demo, available on Github (MNISToniOSWithoutTFlib).
Why use Metal Performance Shaders
Metal has been available since iOS 8; it lets you write kernels against Apple's API and hand image computations to the GPU for much better performance. In iOS 10, Apple added CNN support to Metal Performance Shaders, a higher-level API dedicated to accelerating deep learning operations such as convolution and pooling on the GPU, which is much faster than running them on the CPU.
Using Metal also means we no longer need to link the TensorFlow lib, which reduces the package size and avoids the complicated project setup.
Given these two points, if you can use Metal, you basically should.
Main steps
Build the network with Metal and load the trained parameters -> feed the input data through the trained network -> read the output
1. Define the network with Metal and load the trained parameters
This step rewrites, in Metal, the network we previously trained in train_metal.py, including the input, the convolution layers, pooling, the fully connected layers and the output.
One thing to watch out for is that Metal and TensorFlow lay out the weights differently. In TensorFlow the layout is:
[{source/kernel}Height][{source/kernel}Width][inputChannels][outputChannels]
[0, 1, 2, 3]
In Metal the layout is:
[outputChannels][{source/kernel}Height][{source/kernel}Width][inputChannels]
[3, 0, 1, 2]
So when exporting the trained model we need to transpose the parameters with TensorFlow:
with open('W_conv1', 'w') as f:
    W_conv1_p = tf.transpose(W_conv1, perm=[3, 0, 1, 2])
    f.write(session.run(W_conv1_p).tobytes())
The remaining parameters, including b_conv1, W_conv2 and so on, are exported the same way; see train_metal.py for the details.
With the format sorted out, we rewrite the network with Metal.
The network we trained has the following structure:
input -> conv1 -> pool1 -> conv2 -> pool2 -> fc1 -> fc2 -> softmax
When training with TensorFlow:
# conv1 (5x5, 1 -> 32 channels) + pool1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# conv2 (5x5, 32 -> 64 channels) + pool2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
# fc1: flatten 7*7*64 -> 1024
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# fc2: 1024 -> 10, followed by softmax
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2, name="softmax")
Correspondingly, in Metal we first build the same network and pass in the parameters we already trained, so that we can run predictions later:
- (void)initMetal:(id)nDevice {
    // Load the trained parameters exported by train_metal.py (raw float32).
    float *conv1weights = loadTensor(@"W_conv1", 5 * 5 * 1 * 32);
    float *conv1biases = loadTensor(@"b_conv1", 32);
    float *conv2weights = loadTensor(@"W_conv2", 5 * 5 * 32 * 64);
    float *conv2biases = loadTensor(@"b_conv2", 64);
    float *fc1weights = loadTensor(@"W_fc1", 7 * 7 * 64 * 1024);
    float *fc1biases = loadTensor(@"b_fc1", 1024);
    float *fc2weights = loadTensor(@"W_fc2", 1024 * 10);
    float *fc2biases = loadTensor(@"b_fc2", 10);

    id<MTLDevice> device = nDevice;
    // ReLU activation shared by conv1, conv2 and fc1.
    const MPSCNNNeuronReLU *reluUnit = [[MPSCNNNeuronReLU alloc] initWithDevice:device a:0];

    // conv1: 5x5 kernel, 1 -> 32 channels, ReLU
    self.conv1descriptor = [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth:5 kernelHeight:5 inputFeatureChannels:1 outputFeatureChannels:32 neuronFilter:reluUnit];
    self.conv1layer = [[MPSCNNConvolution alloc] initWithDevice:device convolutionDescriptor:self.conv1descriptor kernelWeights:conv1weights biasTerms:conv1biases flags:MPSCNNConvolutionFlagsNone];
    self.conv1outdescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:kImageSide height:kImageSide featureChannels:32];

    // pool1: 2x2 max pooling, stride 2
    self.pool1layer = [[MPSCNNPoolingMax alloc] initWithDevice:device kernelWidth:2 kernelHeight:2 strideInPixelsX:2 strideInPixelsY:2];
    self.pool1layer.offset = (MPSOffset){1, 1, 0};
    self.pool1layer.edgeMode = MPSImageEdgeModeClamp;
    self.pool1outdescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:kImageSide2 height:kImageSide2 featureChannels:32];

    // conv2: 5x5 kernel, 32 -> 64 channels, ReLU
    self.conv2descriptor = [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth:5 kernelHeight:5 inputFeatureChannels:32 outputFeatureChannels:64 neuronFilter:reluUnit];
    self.conv2layer = [[MPSCNNConvolution alloc] initWithDevice:device convolutionDescriptor:self.conv2descriptor kernelWeights:conv2weights biasTerms:conv2biases flags:MPSCNNConvolutionFlagsNone];
    self.conv2outdescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:kImageSide2 height:kImageSide2 featureChannels:64];

    // pool2: 2x2 max pooling, stride 2
    self.pool2layer = [[MPSCNNPoolingMax alloc] initWithDevice:device kernelWidth:2 kernelHeight:2 strideInPixelsX:2 strideInPixelsY:2];
    self.pool2layer.offset = (MPSOffset){1, 1, 0};
    self.pool2layer.edgeMode = MPSImageEdgeModeClamp;
    self.pool2outdescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:kImageSide4 height:kImageSide4 featureChannels:64];

    // fc1: 7*7*64 -> 1024, ReLU (a fully connected layer is a convolution whose kernel covers the whole input)
    self.fc1descriptor = [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth:kImageSide4 kernelHeight:kImageSide4 inputFeatureChannels:64 outputFeatureChannels:1024 neuronFilter:reluUnit];
    self.fc1layer = [[MPSCNNFullyConnected alloc] initWithDevice:device convolutionDescriptor:self.fc1descriptor kernelWeights:fc1weights biasTerms:fc1biases flags:MPSCNNConvolutionFlagsNone];
    self.fc1outdescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:1 height:1 featureChannels:1024];

    // fc2: 1024 -> 10, no activation (softmax is applied separately)
    self.fc2descriptor = [MPSCNNConvolutionDescriptor cnnConvolutionDescriptorWithKernelWidth:1 kernelHeight:1 inputFeatureChannels:1024 outputFeatureChannels:kOutputs neuronFilter:nil];
    self.fc2layer = [[MPSCNNFullyConnected alloc] initWithDevice:device convolutionDescriptor:self.fc2descriptor kernelWeights:fc2weights biasTerms:fc2biases flags:MPSCNNConvolutionFlagsNone];
    self.fc2outdescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:1 height:1 featureChannels:kOutputs];

    // softmax over the 10 class scores
    self.softmaxOutput = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat16 width:1 height:1 featureChannels:kOutputs];
    self.softmaxLayer = [[MPSCNNSoftMax alloc] initWithDevice:device];

    // input: 28x28, 1 channel, float32
    self.inputDescriptor = [MPSImageDescriptor imageDescriptorWithChannelFormat:MPSImageFeatureChannelFormatFloat32 width:kImageSide height:kImageSide featureChannels:1];

    self.pendingBuffers = [[NSMutableArray alloc] init];
    self.results = [[NSMutableArray alloc] init];
}
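initMetal relies on a loadTensor helper that isn't shown here. A minimal sketch, assuming the weights are bundled as the raw little-endian float32 files written out by train_metal.py (the demo's actual implementation may differ slightly):
static float *loadTensor(NSString *name, NSUInteger count) {
    // Look up the raw weight file (e.g. "W_conv1") in the app bundle.
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:nil];
    NSData *data = [NSData dataWithContentsOfFile:path];
    NSCAssert(data.length == count * sizeof(float), @"unexpected size for %@", name);
    // Copy the float32 values into a heap buffer the layers can keep using.
    float *buffer = (float *)malloc(count * sizeof(float));
    memcpy(buffer, data.bytes, count * sizeof(float));
    return buffer;
}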
2. Feed the image into the network (prediction)
After building the network in step 1, we can feed it data. Here the data is the digit drawn by hand on the canvas: we take the UIImage, turn it into a 28*28 grayscale image, and feed it into the network.
Processing the input data:
// Scale the drawing to 28x28 and convert it to grayscale.
UIImage *scaledImage = [self scaleImage:drawedImage];
UIImage *image = [self convertImageToGrayScale:scaledImage];
// Convert the grayscale pixels into a float array (one float per pixel).
float *data = [self getGrayPixelFromImage:image atX:0 andY:0 count:kInputLength];

// Metal requires a real device with GPU support; bail out if it's unavailable.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
if (device == nil) {
    NSLog(@"no metal support");
    return;
}
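The helpers above are ordinary UIKit/Core Graphics code: scaleImage: resizes the drawing to 28x28 and convertImageToGrayScale: drops the color information. The part that matters for the network is getGrayPixelFromImage:atX:andY:count:, which turns the pixels into the float array the input texture expects. A minimal sketch (the demo's implementation may differ, for example in whether it inverts the pixels):
- (float *)getGrayPixelFromImage:(UIImage *)image atX:(int)x andY:(int)y count:(int)count {
    // Render the grayscale image into an 8-bit, one-channel bitmap.
    int width = (int)CGImageGetWidth(image.CGImage);
    int height = (int)CGImageGetHeight(image.CGImage);
    unsigned char *pixels = (unsigned char *)calloc(width * height, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width,
                                                 colorSpace, (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Convert each byte to a float in [0, 1], starting at the (x, y) offset (0, 0 here).
    // MNIST expects bright strokes on a dark background, so invert if the canvas is the opposite.
    float *result = (float *)malloc(count * sizeof(float));
    for (int i = 0; i < count; i++) {
        result[i] = pixels[y * width + x + i] / 255.0f;
    }
    free(pixels);
    return result;   // caller is responsible for free()ing this buffer
}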
Run the network: here we chain the layers defined earlier end to end and feed in the data:
id<MTLCommandQueue> queue = [device newCommandQueue];
id<MTLCommandBuffer> buffer = [queue commandBuffer];

// Copy the 28x28 float pixels into the input image's texture.
MPSImage *inputImage = [[MPSImage alloc] initWithDevice:device imageDescriptor:self.inputDescriptor];
[inputImage.texture replaceRegion:MTLRegionMake2D(0, 0, kImageSide, kImageSide) mipmapLevel:0 withBytes:data bytesPerRow:sizeof(float) * kImageSide];

// Let MPS pre-allocate the temporary images used between layers.
[MPSTemporaryImage prefetchStorageWithCommandBuffer:buffer imageDescriptorList:@[self.conv1outdescriptor, self.pool1outdescriptor, self.conv2outdescriptor, self.pool2outdescriptor, self.fc1outdescriptor, self.fc2outdescriptor]];

// Encode the layers back to back: conv1 -> pool1 -> conv2 -> pool2 -> fc1 -> fc2 -> softmax.
MPSTemporaryImage *c1o = [MPSTemporaryImage temporaryImageWithCommandBuffer:buffer imageDescriptor:self.conv1outdescriptor];
[self.conv1layer encodeToCommandBuffer:buffer sourceImage:inputImage destinationImage:c1o];
MPSTemporaryImage *p1o = [MPSTemporaryImage temporaryImageWithCommandBuffer:buffer imageDescriptor:self.pool1outdescriptor];
[self.pool1layer encodeToCommandBuffer:buffer sourceImage:c1o destinationImage:p1o];
MPSTemporaryImage *c2o = [MPSTemporaryImage temporaryImageWithCommandBuffer:buffer imageDescriptor:self.conv2outdescriptor];
[self.conv2layer encodeToCommandBuffer:buffer sourceImage:p1o destinationImage:c2o];
MPSTemporaryImage *p2o = [MPSTemporaryImage temporaryImageWithCommandBuffer:buffer imageDescriptor:self.pool2outdescriptor];
[self.pool2layer encodeToCommandBuffer:buffer sourceImage:c2o destinationImage:p2o];
MPSTemporaryImage *fc1tdi = [MPSTemporaryImage temporaryImageWithCommandBuffer:buffer imageDescriptor:self.fc1outdescriptor];
[self.fc1layer encodeToCommandBuffer:buffer sourceImage:p2o destinationImage:fc1tdi];
MPSTemporaryImage *fc2tdi = [MPSTemporaryImage temporaryImageWithCommandBuffer:buffer imageDescriptor:self.fc2outdescriptor];
[self.fc2layer encodeToCommandBuffer:buffer sourceImage:fc1tdi destinationImage:fc2tdi];

// The softmax result goes into a regular MPSImage so we can read it back on the CPU.
__block MPSImage *resultImage = [[MPSImage alloc] initWithDevice:device imageDescriptor:self.softmaxOutput];
[self.softmaxLayer encodeToCommandBuffer:buffer sourceImage:fc2tdi destinationImage:resultImage];

// Submit the work to the GPU and block until it finishes.
[buffer commit];
[buffer waitUntilCompleted];
3. Read the result
// MPSImage stores channels in slices of 4; the 10 softmax outputs occupy 3 slices.
const size_t numSlices = (resultImage.featureChannels + 3) / 4;
float16_t halfs[numSlices * 4];   // float16_t comes from <arm_neon.h> on arm64
NSLog(@"size of float16_t %lu", sizeof(float16_t));
// Read each 1x1x4 group of half floats back from the texture, slice by slice.
for (size_t i = 0; i < numSlices; i += 1) {
    [resultImage.texture getBytes:&halfs[i * 4] bytesPerRow:8 bytesPerImage:8 fromRegion:MTLRegionMake3D(0, 0, 0, 1, 1, 1) mipmapLevel:0 slice:i];
    for (size_t j = i * 4; j < i * 4 + 4; j++) {
        NSLog(@"half %zu %f", j, halfs[j]);
    }
}
// Pick the class with the highest probability.
int bestIndex = -1;
float bestProbability = 0;
for (auto i = 0; i < kOutputs; i++) {
    const auto probability = halfs[i];
    if (probability > bestProbability) {
        bestProbability = probability;
        bestIndex = i;
    }
}
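bestIndex is the predicted digit, since the ten output channels are in the same order as the one-hot labels 0-9 used during training. For example:
if (bestIndex >= 0) {
    NSLog(@"Predicted digit: %d (probability %f)", bestIndex, bestProbability);
}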
Finally
In the app I used the trained parameters from Speeding Up TensorFlow with Metal Performance Shaders, and I found that 6, 8 and 9 are recognized quite poorly; try digits like 1, 3 and 5 instead. Just download the demo and it runs without any extra libraries; it requires iOS 10 or later.
On an iPhone 6s I ran 10000 predictions in a loop with this demo, which took 8.76855 seconds. In the previous demo, which linked the TensorFlow library and ran on the CPU, 10000 iterations took 26.4693 seconds.
The author of Speeding Up TensorFlow with Metal Performance Shaders ran the MNIST test set once:
On my iPad Pro, this took 3.29s, down from 5.4s (a 40% improvement).
I'm not sure whether there's something wrong with the way I wrote the loop...
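For reference, a straightforward way to time such a benchmark is to wrap the prediction code above in a method (here called predictDigit:, a hypothetical name) and measure it with CACurrentMediaTime() from QuartzCore:
// Hypothetical benchmark loop; predictDigit: stands in for the prediction code shown above.
CFTimeInterval start = CACurrentMediaTime();
for (int i = 0; i < 10000; i++) {
    [self predictDigit:data];
}
NSLog(@"10000 predictions took %f seconds", CACurrentMediaTime() - start);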
Have Fun :)