Export
To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.
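For the simple cases, an AVAssetExportSession round trip takes only a few lines. The sketch below is illustrative rather than taken from this guide: the asset and URL placeholders must be supplied, and it assumes the chosen preset and output file type are compatible with the asset (you can verify this with exportPresetsCompatibleWithAsset: and supportedFileTypes).

```objc
// A minimal export sketch: trim an asset to its first half and write a QuickTime movie.
AVAsset *asset = <#AVAsset that you want to export#>;
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
    initWithAsset:asset presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = <#NSURL for the output file#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
// Optionally trim: export only the first half of the asset.
CMTime halfDuration = CMTimeMultiplyByFloat64(asset.duration, 0.5);
exportSession.timeRange = CMTimeRangeMake(kCMTimeZero, halfDuration);
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusFailed) {
        // Handle exportSession.error here.
    }
}];
```

Because the session exports asynchronously, check the status in the completion handler rather than immediately after the call.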
Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.
Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer's inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
Reading an Asset
Each AVAssetReader object can be associated with only a single asset at a time, but that asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.
Creating the Asset Reader
All you need to initialize an AVAssetReader object is the asset that you want to read.
NSError *outError;
AVAsset *someAsset = <#AVAsset that you want to read#>;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];
BOOL success = (assetReader != nil);
Note: Always check that the asset reader returned to you is non-nil to ensure that the asset reader was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.
Setting Up the Asset Reader Outputs
After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO. In this way, you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.
If you want to read media data from only one or more tracks of your asset and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, using a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:
AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];
Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.
You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes for reading media data that has been mixed or composited together using an AVAudioMix object or an AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.
With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the audio tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code displays how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.
AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;
// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];
// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];
Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.
The video composition output behaves in much the same way: you can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:
AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;
// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;
// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];
// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };
// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];
// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;
// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];
Reading the Asset's Media Data
To start reading after setting up all of the outputs you need, call the startReading method on your asset reader. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:
// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
    // Copy the next sample buffer from the reader output.
    CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
    if (sampleBuffer)
    {
        // Do something with sampleBuffer here.
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
    }
    else
    {
        // Find out why the asset reader output couldn't copy another sample buffer.
        if (self.assetReader.status == AVAssetReaderStatusFailed)
        {
            NSError *failureError = self.assetReader.error;
            // Handle the error here.
        }
        else
        {
            // The asset reader output has read all of its samples.
            done = YES;
        }
    }
}
Writing an Asset
The AVAssetWriter class writes media data from multiple sources to a single file of a specified file format.
You don't need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.
Creating the Asset Writer
To create an asset writer, specify the URL for the output file and the desired file type. The following code displays how to initialize an asset writer to create a QuickTime movie:
NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&outError];
BOOL success = (assetWriter != nil);
Setting Up the Asset Writer Inputs
For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:
// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};
// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
    AVSampleRateKey : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};
// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];
Note: If you want the media data to be written in the format in which it was stored, pass nil in the outputSettings parameter. Pass nil only if the asset writer was initialized with a fileType of AVFileTypeQuickTimeMovie.
Your asset writer input can optionally include some metadata or specify a different transform for a particular track using the metadata and transform properties, respectively. For an asset writer input whose data source is a video track, you can maintain the video's original transform in the output file by doing the following:
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;
Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.
When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For maximum efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer object working in the RGB domain that will use CGImage objects to create its pixel buffers.
NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];
Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.
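The adaptor's pixelBufferPool vends buffers that match the sourcePixelBufferAttributes given at creation. As a sketch of how the pool might be used (this assumes writing has already started, so the pool is non-nil; the presentation time and the drawing step are placeholders):

```objc
// Create a pixel buffer from the adaptor's pool and append it.
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
    inputPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
if (result == kCVReturnSuccess && pixelBuffer != NULL) {
    // Fill pixelBuffer here, e.g. by drawing a CGImage into it.
    if ([inputPixelBufferAdaptor.assetWriterInput isReadyForMoreMediaData]) {
        [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                              withPresentationTime:<#CMTime for this frame#>];
    }
    CVPixelBufferRelease(pixelBuffer);
}
```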
Writing Media Data
When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions, and the time range of each session defines the time range of the media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don't want to include media data from the first half of the asset, you would do the following:
CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
// Implementation continues.
Normally, to end a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end it simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:
// Prepare the asset writer for writing.
[self.assetWriter startWriting];
// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
    while ([self.assetWriterInput isReadyForMoreMediaData])
    {
        // Get the next sample buffer.
        CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
        if (nextSampleBuffer)
        {
            // If it exists, append the next sample buffer to the output file.
            [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
            CFRelease(nextSampleBuffer);
            nextSampleBuffer = nil;
        }
        else
        {
            // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
            [self.assetWriterInput markAsFinished];
            break;
        }
    }
}];
The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.
Reencoding Assets
You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet displays how to use a single asset writer input to write media data supplied by a single asset reader output:
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
    while ([self.assetWriterInput isReadyForMoreMediaData])
    {
        // Get the asset reader output's next sample buffer.
        CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
        if (sampleBuffer != NULL)
        {
            // If it exists, append this sample buffer to the output file.
            BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
            // Check for errors that may have occurred when appending the new sample buffer.
            if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
            {
                NSError *failureError = self.assetWriter.error;
                // Handle the error.
            }
        }
        else
        {
            // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
            if (self.assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = self.assetReader.error;
                // Handle the error here.
            }
            else
            {
                // The asset reader output must have vended all of its samples. Mark the input as finished.
                [self.assetWriterInput markAsFinished];
                break;
            }
        }
    }
}];
Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset
This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:
- Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
- Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
- Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
- Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
- Use a dispatch group to be notified about completion of the reencoding process
- Allow a user to cancel the reencoding process once it has begun
Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.
Handling the Initial Setup
Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.
NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation), and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation. Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.
self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;
// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{
    // Once the tracks have finished loading, dispatch the work to the main serialization queue.
    dispatch_async(self.mainSerializationQueue, ^{
        // Due to asynchronous nature, check to see if user has already cancelled.
        if (self.cancelled)
            return;
        BOOL success = YES;
        NSError *localError = nil;
        // Check for success of loading the asset's tracks.
        success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
        if (success)
        {
            // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
            NSFileManager *fm = [NSFileManager defaultManager];
            NSString *localOutputPath = [self.outputURL path];
            if ([fm fileExistsAtPath:localOutputPath])
                success = [fm removeItemAtPath:localOutputPath error:&localError];
        }
        if (success)
            success = [self setupAssetReaderAndAssetWriter:&localError];
        if (success)
            success = [self startAssetReaderAndWriter:&localError];
        if (!success)
            [self readingAndWritingDidFinishSuccessfully:success withError:localError];
    });
}];
When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation.
All that remains now is to implement the cancellation process and the three custom methods at the end of the previous code listing.
Initializing the Asset Reader and Writer
The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.
- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
    BOOL success = (self.assetReader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
        success = (self.assetWriter != nil);
    }
    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];
        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
            [self.assetReader addOutput:self.assetReaderAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
                AVSampleRateKey : [NSNumber numberWithInteger:44100],
                AVChannelLayoutKey : channelLayoutAsData,
                AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
            };
            self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
            [self.assetWriter addInput:self.assetWriterAudioInput];
        }
        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            NSDictionary *decompressionVideoSettings = @{
                (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
            };
            self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
            [self.assetReader addOutput:self.assetReaderVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
            if ([videoFormatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                        AVVideoCleanApertureWidthKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                        AVVideoCleanApertureHeightKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                        AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                        AVVideoCleanApertureVerticalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                    };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                        AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                        AVVideoPixelAspectRatioVerticalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                    };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264.
            NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                AVVideoCodecKey : AVVideoCodecH264,
                AVVideoWidthKey : [NSNumber numberWithDouble:trackDimensions.width],
                AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
            }];
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
            [self.assetWriter addInput:self.assetWriterVideoInput];
        }
    }
    return success;
}
Reencoding the Asset
Provided that the asset reader and writer were successfully initialized and configured, call the startAssetReaderAndWriter: method described in Handling the Initial Setup. This method is where the actual reading and writing of the asset takes place.
- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
    BOOL success = YES;
    // Attempt to start the asset reader.
    success = [self.assetReader startReading];
    if (!success)
        *outError = [self.assetReader error];
    if (success)
    {
        // If the reader started successfully, attempt to start the asset writer.
        success = [self.assetWriter startWriting];
        if (!success)
            *outError = [self.assetWriter error];
    }
    if (success)
    {
        // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
        self.dispatchGroup = dispatch_group_create();
        [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
        self.audioFinished = NO;
        self.videoFinished = NO;
        if (self.assetWriterAudioInput)
        {
            // If there is audio to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(self.dispatchGroup);
            // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
            [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.audioFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next audio sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;
                    if (oldFinished == NO)
                    {
                        [self.assetWriterAudioInput markAsFinished];
                    }
                    dispatch_group_leave(self.dispatchGroup);
                }
            }];
        }
        if (self.assetWriterVideoInput)
        {
            // If we had video to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(self.dispatchGroup);
            // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
            [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (self.videoFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next video sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;
                    if (oldFinished == NO)
                    {
                        [self.assetWriterVideoInput markAsFinished];
                    }
                    dispatch_group_leave(self.dispatchGroup);
                }
            }];
        }
        // Set up the notification that the dispatch group will send when the audio and video work have both finished.
        dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;
            // Check to see if the work has finished due to cancellation.
            if (self.cancelled)
            {
                // If so, cancel the reader and writer.
                [self.assetReader cancelReading];
                [self.assetWriter cancelWriting];
            }
            else
            {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([self.assetReader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [self.assetReader error];
                }
                // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                if (finalSuccess)
                {
                    finalSuccess = [self.assetWriter finishWriting];
                    if (!finalSuccess)
                        finalError = [self.assetWriter error];
                }
            }
            // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
            [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
        });
    }
    // Return success here to indicate whether the asset reader and writer were started successfully.
    return success;
}
During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.
Handling Completion
To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully: method is called, with parameters indicating whether or not the reencoding completed successfully. If the process didn't finish successfully, the asset reader and writer are both canceled and any UI related tasks are dispatched to the main queue.
- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
    if (!success)
    {
        // If the reencoding process failed, we need to cancel the asset reader and writer.
        [self.assetReader cancelReading];
        [self.assetWriter cancelWriting];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to failure.
        });
    }
    else
    {
        // Reencoding was successful, reset booleans.
        self.cancelled = NO;
        self.videoFinished = NO;
        self.audioFinished = NO;
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to success.
        });
    }
}
Handling Cancellation
Using multiple serialization queues, you can allow the user of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue, where the cancelled property is set to YES. You might associate the cancel method from the following code listing with a button on your UI.
- (void)cancel
{
    // Handle cancellation asynchronously, but serialize it with the main queue.
    dispatch_async(self.mainSerializationQueue, ^{
        // If we had audio data to reencode, we need to cancel the audio work.
        if (self.assetWriterAudioInput)
        {
            // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
            dispatch_async(self.rwAudioSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.audioFinished;
                self.audioFinished = YES;
                if (oldFinished == NO)
                {
                    [self.assetWriterAudioInput markAsFinished];
                }
                // Leave the dispatch group since the audio work is finished now.
                dispatch_group_leave(self.dispatchGroup);
            });
        }
        if (self.assetWriterVideoInput)
        {
            // Handle cancellation asynchronously again, but this time serialize it with the video queue.
            dispatch_async(self.rwVideoSerializationQueue, ^{
                // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                BOOL oldFinished = self.videoFinished;
                self.videoFinished = YES;
                if (oldFinished == NO)
                {
                    [self.assetWriterVideoInput markAsFinished];
                }
                // Leave the dispatch group, since the video work is finished now.
                dispatch_group_leave(self.dispatchGroup);
            });
        }
        // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
        self.cancelled = YES;
    });
}
Asset Output Settings Assistant
The AVOutputSettingsAssistant class aids in creating output settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets. Listing 5-1 shows an example that uses the output settings assistant.
Listing 5-1  AVOutputSettingsAssistant sample
AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];
if (audioFormat != NULL)
[outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];
CMFormatDescriptionRef videoFormat = [self getVideoFormat];
if (videoFormat != NULL)
[outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];
CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];
[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];