1. Overview
AVFoundation exposes read and write access to the underlying media data through two core classes: AVAssetReader and AVAssetWriter.
AVAssetReader reads media samples from an AVAsset instance and must be configured with one or more AVAssetReaderOutput instances.
AVAssetReaderOutput is an abstract class with three concrete subclasses: AVAssetReaderTrackOutput, which reads samples from a single AVAssetTrack; AVAssetReaderAudioMixOutput, which reads mixed samples from multiple audio tracks; and AVAssetReaderVideoCompositionOutput, which reads composited frames from multiple video tracks. Internally, the outputs load the next available sample on background threads to reduce request latency, but AVAssetReader is still not recommended for real-time work such as playback.
AVAssetWriter encodes media samples and writes them to a file. It must be configured with one or more AVAssetWriterInput instances; each input handles one media type and produces a separate AVAssetTrack in the output. For video samples, an AVAssetWriterInputPixelBufferAdaptor should be used for optimal performance.
A simple usage example follows.
- Preparing to read the media asset
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject]; // Get the first video track
self.assetReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
NSDictionary *readerOutputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}; // Decompress video frames to 32-bit BGRA
AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:readerOutputSettings];
[self.assetReader addOutput:trackOutput];
[self.assetReader startReading]; // Prepare to read samples from the AVAsset; returns NO on failure
- Preparing to write the media asset
self.assetWriter = [[AVAssetWriter alloc] initWithURL:[self outputURL] fileType:AVFileTypeQuickTimeMovie error:nil]; // Specify the destination URL and file type
NSDictionary *writeOutputSettings = @{
    AVVideoCodecKey : AVVideoCodecH264,      // Video codec
    AVVideoWidthKey : @1280,
    AVVideoHeightKey : @720,
    AVVideoCompressionPropertiesKey : @{     // Encoder parameters
        AVVideoAverageBitRateKey : @10500000,
        AVVideoProfileLevelKey : AVVideoProfileLevelH264Main31
    }
};
AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:writeOutputSettings];
[self.assetWriter addInput:writerInput];
[self.assetWriter startWriting]; // Prepare to write; returns NO on failure
- Writing in pull mode
dispatch_queue_t queue = dispatch_queue_create("writer", NULL); // Write on a serial queue
[self.assetWriter startSessionAtSourceTime:kCMTimeZero]; // Start the writing session at the beginning of the movie
[writerInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{ // Block is invoked whenever the input is ready for more samples
    BOOL complete = NO;
    while ([writerInput isReadyForMoreMediaData] && !complete) {
        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer]; // Pull the next sample from the reader output
        if (sampleBuffer) {
            BOOL result = [writerInput appendSampleBuffer:sampleBuffer]; // Append the sample to the writer input
            CFRelease(sampleBuffer);
            complete = !result;
        } else {
            [writerInput markAsFinished]; // No more samples; mark the input as finished
            complete = YES;
        }
    }
    if (complete) {
        [self.assetWriter finishWritingWithCompletionHandler:^{ // Finish writing
            if (self.assetWriter.status == AVAssetWriterStatusCompleted) { // Check whether the write succeeded
                NSLog(@"complete");
            } else {
                NSLog(@"fail");
            }
        }];
    }
}];
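Both this example and the recording code in section 3.3 call [self outputURL], which is never defined here. A minimal sketch of such a helper (the file name and location are assumptions):

- (NSURL *)outputURL {
    // Hypothetical helper: a writable file URL in the temp directory.
    // AVAssetWriter fails if a file already exists at the URL, so remove any leftover first.
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.mov"];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager fileExistsAtPath:path]) {
        [fileManager removeItemAtPath:path error:nil];
    }
    return [NSURL fileURLWithPath:path];
}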
2. Building an Audio Waveform View
An audio waveform view visualizes the audio waveform so that users can inspect and edit audio tracks. The main steps are:
- Reading: decompress and read the audio data
- Reducing: the number of samples read is enormous, so they must be reduced by taking the min, max, or average of each block of samples
- Rendering: draw the reduced samples in the UI (not AVFoundation-specific; a sketch follows section 2.2)
2.1 Reading Audio Samples
First, initialize the reader and its track output:
NSError *error = nil;
AVAssetReader *assetReader = [[AVAssetReader alloc] initWithAsset:asset error:&error]; // Configure the AVAssetReader
if (!assetReader) {
    NSLog(@"Error creating asset reader: %@", [error localizedDescription]);
    return nil;
}
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject]; // Grab the first audio track
NSDictionary *outputSettings = @{
    AVFormatIDKey : @(kAudioFormatLinearPCM), // Read as linear PCM, an uncompressed sample format
    AVLinearPCMIsBigEndianKey : @NO,          // Little-endian byte order
    AVLinearPCMIsFloatKey : @NO,              // Signed integer samples
    AVLinearPCMBitDepthKey : @(16)            // 16-bit depth
};
AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:outputSettings];
[assetReader addOutput:trackOutput];
[assetReader startReading];
Then loop over the samples, copying their bytes into an NSMutableData:
NSMutableData *sampleData = [NSMutableData data];
while (assetReader.status == AVAssetReaderStatusReading) { // Keep reading while samples remain
    CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer]; // Read one audio sample buffer
    if (sampleBuffer) {
        CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(sampleBuffer); // Get the underlying block buffer (no retained reference)
        size_t length = CMBlockBufferGetDataLength(blockBufferRef); // Length of the sample data in bytes
        SInt16 *sampleBytes = malloc(length); // Heap-allocate the copy buffer (length is in bytes; a stack array this size could overflow for long tracks)
        CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, sampleBytes); // Copy the sample bytes out of the block buffer
        [sampleData appendBytes:sampleBytes length:length]; // Accumulate into the NSMutableData
        free(sampleBytes);
        CMSampleBufferInvalidate(sampleBuffer);
        CFRelease(sampleBuffer); // Release the sample buffer
    }
}
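The loop simply ends when copyNextSampleBuffer returns NULL, which can mean either completion or failure; a check like the following distinguishes the two (self.sampleData is the property read in section 2.2):

// NULL from copyNextSampleBuffer is ambiguous; consult the reader's status.
if (assetReader.status == AVAssetReaderStatusCompleted) {
    self.sampleData = sampleData; // Keep the raw samples for filteredSamplesForSize:
} else {
    NSLog(@"Failed reading audio samples: %@", assetReader.error);
}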
2.2 Reducing the Audio Samples
Screen space is limited while the number of samples read is enormous, so the data must be reduced. The basic idea is to split the samples into fixed-size bins and find the largest sample in each bin.
- (NSArray *)filteredSamplesForSize:(CGSize)size {
    NSMutableArray *filteredSamples = [[NSMutableArray alloc] init];
    NSUInteger sampleCount = self.sampleData.length / sizeof(SInt16); // Number of samples: total byte length divided by the size of one SInt16 sample
    NSUInteger binSize = MAX((NSUInteger)1, sampleCount / (NSUInteger)size.width); // Samples per bin, one bin per horizontal point (guard against a zero bin size)
    SInt16 *bytes = (SInt16 *)self.sampleData.bytes; // Base address of the sample data
    for (NSUInteger i = 0; i < sampleCount; i += binSize) { // Iterate over the bins
        NSUInteger thisBinSize = MIN(binSize, sampleCount - i); // Clamp the final bin so it never reads past the end of the data
        SInt16 sampleBin[thisBinSize];
        for (NSUInteger j = 0; j < thisBinSize; j++) { // Iterate over the samples in this bin
            sampleBin[j] = CFSwapInt16LittleToHost(bytes[i + j]); // Samples were read little-endian, so swap to host byte order
        }
        SInt16 value = [self maxValueInArray:sampleBin ofSize:thisBinSize]; // Largest sample in the bin
        [filteredSamples addObject:@(value)]; // Collect the reduced value
    }
    return filteredSamples;
}
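The maxValueInArray:ofSize: helper is referenced but never shown; a minimal sketch is a linear scan for the largest magnitude:

- (SInt16)maxValueInArray:(SInt16[])values ofSize:(NSUInteger)size {
    SInt32 maxValue = 0; // Use a wider type so abs(INT16_MIN) cannot overflow
    for (NSUInteger i = 0; i < size; i++) {
        SInt32 amplitude = abs(values[i]);
        if (amplitude > maxValue) {
            maxValue = amplitude;
        }
    }
    return (SInt16)MIN(maxValue, INT16_MAX);
}

As for the third step, rendering, which the overview notes is independent of AVFoundation: a minimal drawRect: sketch, assuming a UIView subclass that owns the sample data, strokes each reduced sample as a vertical line scaled to the view's height:

- (void)drawRect:(CGRect)rect {
    NSArray *samples = [self filteredSamplesForSize:self.bounds.size]; // One reduced sample per horizontal point
    CGFloat midY = CGRectGetMidY(self.bounds);
    CGFloat scale = (self.bounds.size.height / 2) / INT16_MAX; // Map the SInt16 range onto half the view height
    UIBezierPath *path = [UIBezierPath bezierPath];
    for (NSUInteger i = 0; i < samples.count; i++) {
        CGFloat amplitude = [samples[i] shortValue] * scale;
        [path moveToPoint:CGPointMake(i, midY - amplitude)];
        [path addLineToPoint:CGPointMake(i, midY + amplitude)];
    }
    [[UIColor whiteColor] setStroke]; // Stroke color is arbitrary here
    [path stroke];
}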
3. Advanced Capture Recording
AVCaptureVideoDataOutput cannot record its output as conveniently as AVCaptureMovieFileOutput and must be paired with AVAssetWriter; in exchange, it exposes every frame for real-time processing, which makes it far more flexible and powerful. Recording media with AVCaptureVideoDataOutput involves the following pieces:
- Initialization
- Rendering a live preview
- Recording the media output
3.1 Initialization
Initialize the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput:
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)}; // Output 32-bit BGRA pixels
self.videoDataOutput.videoSettings = outputSettings;
self.videoDataOutput.alwaysDiscardsLateVideoFrames = NO; // Defaults to YES, which immediately drops any frame that arrives while captureOutput:didOutputSampleBuffer:fromConnection: is still busy with the previous one; NO gives the delegate extra time to process samples, at a performance cost
[self.videoDataOutput setSampleBufferDelegate:self queue:self.dispatchQueue]; // Deliver delegate callbacks on a serial queue
if ([self.captureSession canAddOutput:self.videoDataOutput]) {
    [self.captureSession addOutput:self.videoDataOutput];
} else {
    return NO;
}
self.audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[self.audioDataOutput setSampleBufferDelegate:self queue:self.dispatchQueue];
if ([self.captureSession canAddOutput:self.audioDataOutput]) {
    [self.captureSession addOutput:self.audioDataOutput];
} else {
    return NO;
}
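Both outputs deliver their callbacks on self.dispatchQueue, whose creation is omitted above; the assumed setup is a single serial queue (the label is hypothetical):

// Hypothetical creation of the serial queue shared by both data outputs.
self.dispatchQueue = dispatch_queue_create("com.example.capture.output", DISPATCH_QUEUE_SERIAL);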
3.2 Rendering a Live Preview
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (captureOutput == self.videoDataOutput) { // Make sure this callback is from the video output
        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // Get the pixel buffer for this frame
        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer options:nil]; // Wrap it in a CIImage for preview rendering
    }
}
In practice the preview often needs a filter effect, which CIFilter provides. First, build the available filters from their names:
+ (NSArray *)filterNames {
return @[@"CIPhotoEffectChrome",
@"CIPhotoEffectFade",
@"CIPhotoEffectInstant",
@"CIPhotoEffectMono",
@"CIPhotoEffectNoir",
@"CIPhotoEffectProcess",
@"CIPhotoEffectTonal",
@"CIPhotoEffectTransfer"];
}
+ (CIFilter *)filterForDisplayName:(NSString *)displayName {
for (NSString *name in [self filterNames]) {
if ([name containsString:displayName]) {
return [CIFilter filterWithName:name];
}
}
return nil;
}
Then apply the CIFilter to the source CIImage:
[self.filter setValue:sourceImage forKey:kCIInputImageKey];
CIImage *filteredImage = self.filter.outputImage;
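This snippet, and the recording code in section 3.3, assume a self.ciContext (and later a self.colorSpace) whose creation is never shown. A minimal sketch of the assumed setup, using an EAGL-backed Core Image context so filtering stays on the GPU (the GLKView and drawableBounds are hypothetical):

// Assumed setup: an EAGL-backed CIContext plus the color space used when
// rendering filtered frames into a pixel buffer.
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.ciContext = [CIContext contextWithEAGLContext:eaglContext];
self.colorSpace = CGColorSpaceCreateDeviceRGB();

// Inside the draw cycle of a GLKView sharing eaglContext, the filtered image
// can be drawn directly (rects are in pixels):
[self.ciContext drawImage:filteredImage
                   inRect:drawableBounds // hypothetical rect covering the view's drawable
                 fromRect:filteredImage.extent];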
3.3 Recording the Media Output
Before recording, initialize the video and audio writer inputs:
NSError *error = nil;
NSString *fileType = AVFileTypeQuickTimeMovie;
self.assetWriter = [AVAssetWriter assetWriterWithURL:[self outputURL] fileType:fileType error:&error]; // Create the writer
if (!self.assetWriter || error) {
    NSString *formatString = @"Could not create AVAssetWriter: %@";
    NSLog(@"%@", [NSString stringWithFormat:formatString, error]);
    return;
}
self.assetWriterVideoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:self.videoSettings];
self.assetWriterVideoInput.expectsMediaDataInRealTime = YES; // Tell the input to optimize for real-time data
UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
self.assetWriterVideoInput.transform = THTransformForDeviceOrientation(orientation); // Correct for device orientation
NSDictionary *attributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA), // Matching the AVCaptureVideoDataOutput pixel format guarantees maximum efficiency
    (id)kCVPixelBufferWidthKey : self.videoSettings[AVVideoWidthKey],
    (id)kCVPixelBufferHeightKey : self.videoSettings[AVVideoHeightKey],
    (id)kCVPixelBufferOpenGLESCompatibilityKey : (id)kCFBooleanTrue
};
self.assetWriterInputPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:self.assetWriterVideoInput sourcePixelBufferAttributes:attributes]; // AVAssetWriterInputPixelBufferAdaptor provides an optimized pixel buffer pool
if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {
    [self.assetWriter addInput:self.assetWriterVideoInput];
} else {
    NSLog(@"Unable to add video input.");
    return;
}
self.assetWriterAudioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:self.audioSettings];
self.assetWriterAudioInput.expectsMediaDataInRealTime = YES;
if ([self.assetWriter canAddInput:self.assetWriterAudioInput]) {
    [self.assetWriter addInput:self.assetWriterAudioInput];
} else {
    NSLog(@"Unable to add audio input.");
}
Note that for the settings dictionaries used to initialize the video and audio AVAssetWriterInput instances, iOS 7 added convenience methods on the capture data outputs (see the sketch after this list):
- recommendedVideoSettingsForAssetWriterWithOutputFileType: returns video settings suited to the given file type
- recommendedAudioSettingsForAssetWriterWithOutputFileType: returns audio settings suited to the given file type
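A short sketch of how these methods might populate the settings used above, assuming QuickTime output as in the writer setup:

// Ask each capture output for writer settings matched to the session's
// current configuration (available since iOS 7).
self.videoSettings = [self.videoDataOutput recommendedVideoSettingsForAssetWriterWithOutputFileType:AVFileTypeQuickTimeMovie];
self.audioSettings = [self.audioDataOutput recommendedAudioSettingsForAssetWriterWithOutputFileType:AVFileTypeQuickTimeMovie];

With the writer configured, each sample buffer delivered by the capture delegate is processed and written as follows: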
if (!self.isWriting) { // Bail out if we are not currently recording
    return;
}
CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer); // Get the format description for this sample
CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDesc); // Get its media type
if (mediaType == kCMMediaType_Video) {
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // Presentation timestamp of the frame
    if (self.firstSample) { // On the first frame, start the writer
        if ([self.assetWriter startWriting]) {
            [self.assetWriter startSessionAtSourceTime:timestamp]; // Start the session at the current timestamp
        } else {
            NSLog(@"Failed to start writing.");
        }
        self.firstSample = NO;
    }
    CVPixelBufferRef outputRenderBuffer = NULL;
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterInputPixelBufferAdaptor.pixelBufferPool; // Grab the adaptor's pixel buffer pool
    CVReturn err = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &outputRenderBuffer); // Create an empty CVPixelBufferRef; the filtered frame will be rendered into it
    if (err) {
        NSLog(@"Unable to obtain a pixel buffer from the pool.");
        return;
    }
    CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // Pixel buffer of the current sample
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer options:nil]; // Wrap it in a CIImage
    [self.activeFilter setValue:sourceImage forKey:kCIInputImageKey]; // Feed it to the active filter
    CIImage *filteredImage = self.activeFilter.outputImage;
    if (!filteredImage) {
        filteredImage = sourceImage;
    }
    [self.ciContext render:filteredImage toCVPixelBuffer:outputRenderBuffer bounds:filteredImage.extent colorSpace:self.colorSpace]; // Render the CIImage into the empty pixel buffer
    if (self.assetWriterVideoInput.readyForMoreMediaData) {
        if (![self.assetWriterInputPixelBufferAdaptor appendPixelBuffer:outputRenderBuffer withPresentationTime:timestamp]) { // Append to the adaptor, completing this frame
            NSLog(@"Error appending pixel buffer.");
        }
    }
    CVPixelBufferRelease(outputRenderBuffer); // Release the rendered buffer
}
else if (!self.firstSample && mediaType == kCMMediaType_Audio) {
    if (self.assetWriterAudioInput.isReadyForMoreMediaData) {
        if (![self.assetWriterAudioInput appendSampleBuffer:sampleBuffer]) { // Audio samples are appended directly
            NSLog(@"Error appending audio sample buffer.");
        }
    }
}
The details are spelled out in the comments above. When recording finally ends, finishWritingWithCompletionHandler: must still be called:
[self.assetWriter finishWritingWithCompletionHandler:^{
    if (self.assetWriter.status == AVAssetWriterStatusCompleted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            NSURL *fileURL = [self.assetWriter outputURL]; // With the file URL in hand, save the movie to the photo library
        });
    } else {
        NSLog(@"Failed to write movie: %@", self.assetWriter.error);
    }
}];
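The save step hinted at in that comment is not shown; a minimal sketch using the era-appropriate ALAssetsLibrary API (AssetsLibrary framework; an assumption, not the original code), given the fileURL obtained above:

// Hypothetical save step: write the finished movie to the Saved Photos album.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:fileURL]) {
    [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        if (error) {
            NSLog(@"Error saving movie to photo library: %@", error);
        }
    }];
}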