iOS Development: Recording Video with AVCapture

1. AVCaptureDevice

First, we need to determine whether the device supports a front and a rear camera. For this we use AVCaptureDevice; let's see what the developer documentation says.

  1. An AVCaptureDevice object represents a physical capture device and the properties associated with that device. You use a capture device to configure the properties of the underlying hardware. A capture device also provides input data (such as audio or video) to an AVCaptureSession object.
  2. You use the methods of the AVCaptureDevice class to enumerate the available devices, query their capabilities, and be informed about when devices come and go. Before you attempt to set properties of a capture device (its focus mode, exposure mode, and so on), you must first acquire a lock on the device using the lockForConfiguration: method. You should also query the device’s capabilities to ensure that the new modes you intend to set are valid for that device. You can then set the properties and release the lock using the unlockForConfiguration method. You may hold the lock if you want all settable device properties to remain unchanged. However, holding the device lock unnecessarily may degrade capture quality in other applications sharing the device and is not recommended.
  1. AVCaptureDevice represents a hardware capture device and provides input to an AVCaptureSession.
  2. To use an AVCaptureDevice, first enumerate the devices the system supports, then pick the camera you need by its position (front or rear).
  3. To set properties on an AVCaptureDevice object, call lockForConfiguration: first, and call unlockForConfiguration when you are done.

First, enumerate the available devices for the required media type and pick the camera at the desired position:

//Return the camera device at the given position
- (AVCaptureDevice *)getCaptureDeviceWithCameraPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
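
Note that devicesWithMediaType: was deprecated in iOS 10. If you target newer systems, the same lookup can be done with AVCaptureDeviceDiscoverySession; a minimal sketch:

```objc
//iOS 10+: look up a camera via a discovery session instead of the
//deprecated devicesWithMediaType:
- (AVCaptureDevice *)cameraAtPosition:(AVCaptureDevicePosition)position {
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                               mediaType:AVMediaTypeVideo
                                                                position:position];
    return discovery.devices.firstObject;
}
```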

If you need to change properties of the AVCaptureDevice, you can do it like this:

NSError *error = nil;
if ([self.captureDevice lockForConfiguration:&error]) {
    //Configure the device here (focus mode, exposure mode, and so on)
    [self.captureDevice unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", error);
}

2. Input (AVCaptureDeviceInput)

Let's look at Apple's documentation:

  1. A capture input that provides media from a capture device to a capture session.
  2. AVCaptureDeviceInput is a concrete sub-class of AVCaptureInput you use to capture data from an AVCaptureDevice object.
  1. AVCaptureDeviceInput is a concrete subclass of AVCaptureInput.
  2. AVCaptureDeviceInput captures data from an AVCaptureDevice and feeds it to the AVCaptureSession.
    Here is its lazy initialization:
- (AVCaptureDeviceInput *)deviceInput {
    if (!_deviceInput) {
        NSError *error = nil;
        //Initialize the input with the capture device chosen earlier
        _deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.captureDevice error:&error];
        if (!_deviceInput) {
            NSLog(@"Could not create device input: %@", error);
        }
    }
    return _deviceInput;
}

3. AVCaptureVideoDataOutput

  1. You can use a video data output to process uncompressed frames from the video being captured or to access compressed frames.
  2. An instance of AVCaptureVideoDataOutput produces video frames you can process using other media APIs. You can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.

AVCaptureVideoDataOutput processes the captured video frames; you can work with each frame in the delegate method captureOutput:didOutputSampleBuffer:fromConnection:.

- (AVCaptureVideoDataOutput *)videoOutput {
    if (!_videoOutput) {
        _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        //Deliver sample buffers on a dedicated serial queue
        dispatch_queue_t queue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
        [_videoOutput setSampleBufferDelegate:self queue:queue];
    }
    return _videoOutput;
}

4. AVCaptureSession

With an input and an output in place, we need an AVCaptureSession to coordinate them.

  1. An object that manages capture activity and coordinates the flow of data from input devices to capture outputs.
  2. To perform a real-time or offline capture, you instantiate an AVCaptureSession object and add appropriate inputs (such as AVCaptureDeviceInput), and outputs (such as AVCaptureMovieFileOutput).
  3. You invoke startRunning to start the flow of data from the inputs to the outputs, and invoke stopRunning to stop the flow.
  4. The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive).
  1. AVCaptureSession coordinates the inputs and the outputs.
  2. You add an input (such as AVCaptureDeviceInput) and an output (such as AVCaptureMovieFileOutput) to an AVCaptureSession object to perform real-time or offline capture.
  3. Call startRunning on the session to start the flow of data from input to output, and stopRunning to stop it.
  4. Because startRunning is a blocking call, run it on a serial queue so that it does not block the main queue and freeze the UI.
    Below is the lazy initialization of the AVCaptureSession:
- (AVCaptureSession *)captureSession {
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc] init];
        //Add the input
        if ([_captureSession canAddInput:self.deviceInput]) {
            [_captureSession addInput:self.deviceInput];
        }
        //Add the output
        if ([_captureSession canAddOutput:self.videoOutput]) {
            [_captureSession addOutput:self.videoOutput];
        }
    }
    return _captureSession;
}

Calling startRunning (I haven't tried this part myself; readers, please point out any problems):

dispatch_async(dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL), ^{
    //dispatch_async, not dispatch_sync: a sync dispatch would block the
    //calling (main) thread, which is exactly what point 4 above warns against
    [self.captureSession startRunning];
    //Other session operations
});
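
One prerequisite not mentioned above: on iOS 10 and later the app must declare NSCameraUsageDescription in Info.plist, and the user must grant camera access, or the session produces no frames. A sketch of requesting access before starting (the queue label is arbitrary):

```objc
//Request camera permission, then start the session off the main queue
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Camera access denied");
        return;
    }
    dispatch_async(dispatch_queue_create("sessionQueue", DISPATCH_QUEUE_SERIAL), ^{
        [self.captureSession startRunning];
    });
}];
```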

5. Preview view: AVCaptureVideoPreviewLayer

  1. AVCaptureVideoPreviewLayer is a subclass of CALayer that you use to display video as it is being captured by an input device.
  2. You use this preview layer in conjunction with an AV capture session

AVCaptureVideoPreviewLayer is a subclass of CALayer used to display the video as it is being captured; it works together with an AVCaptureSession.

- (AVCaptureVideoPreviewLayer *)previewLayer {
    if (!_previewLayer) {
        _previewLayer = [[AVCaptureVideoPreviewLayer alloc]initWithSession:self.captureSession];
        [_previewLayer setFrame:CGRectMake(0, 64, CGRectGetWidth([UIScreen mainScreen].bounds), CGRectGetHeight([UIScreen mainScreen].bounds) - 64 - CGRectGetHeight(self.takePhotoBtn.frame) - 50)];
        [self.view.layer addSublayer:_previewLayer];
        
        _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
    return _previewLayer;
}

Relationship diagram

[Figure: relationship diagram of the AVCapture components]

With all of the parts above in place, you can already see the live picture in previewLayer...
Next comes recording and saving to the photo library.



6. Storing the video

1. Write the recorded video into the app's local sandbox
2. Add the video from the sandbox to the photo library
3. Generate a cover image for the video (its first frame)

Here we use AVAssetWriter and AVAssetWriterInput to write the video into the local sandbox.

6.1 First, let's look at how AVAssetWriter is used

You use an AVAssetWriter object to write media data to a new file of a specified audiovisual container type, such as a QuickTime movie file or an MPEG-4 file, with support for automatic interleaving of media data for multiple concurrent tracks.

An AVAssetWriter object writes media data to a file of a specified container type, such as a QuickTime movie file or an MPEG-4 file.

  1. You can get the media data for one or more assets from instances of AVAssetReader or even from outside the AV Foundation API set. Media data is presented to AVAssetWriter for writing in the form of CMSampleBuffers (see CMSampleBuffer). Sequences of sample data appended to the asset writer inputs are considered to fall within “sample-writing sessions.” You must call startSessionAtSourceTime: to begin one of these sessions.
  2. Using AVAssetWriter, you can optionally re-encode media samples as they are written. You can also optionally write metadata collections to the output file.
  3. You can only use a given instance of AVAssetWriter once to write to a single file. If you want to write to files multiple times, you must use a new instance of AVAssetWriter each time.
  1. AVAssetWriter can receive media data from AVAssetReader (offline data) or from elsewhere in the AVFoundation API (real-time data). The data is presented to AVAssetWriter in the form of CMSampleBuffers; sequences of samples appended to the writer's inputs fall within "sample-writing sessions", and you must call startSessionAtSourceTime: to begin such a session.
    Note the sentence "You must call startSessionAtSourceTime: to begin one of these sessions." That is, every writing pass of an AVAssetWriter (presumably the writer also has an input-to-output pipeline internally, called a session) must begin with this call:
if (self.writer.status == AVAssetWriterStatusUnknown) {
    CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [self.writer startWriting];
    [self.writer startSessionAtSourceTime:startTime];
}
  1. You can optionally re-encode media samples as they are written, and optionally write metadata collections to the output file.
  2. A given AVAssetWriter instance writes only a single file; to write several files (for example, separate audio and video files), create a new AVAssetWriter each time.
    Both the audio and the video data arrive through the same delegate selector (video via AVCaptureVideoDataOutputSampleBufferDelegate, audio via AVCaptureAudioDataOutputSampleBufferDelegate):
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    //Use the output parameter to tell whether this is audio or video
}

So separate AVAssetWriter objects are used to write the audio data and the video data.
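
The session built earlier only carries video. To actually receive audio sample buffers in the delegate, the session also needs a microphone input and an AVCaptureAudioDataOutput. A sketch; the audioOutput property is an assumption, kept so the two outputs can be told apart in the delegate:

```objc
//Assumed helper: wire audio capture into the existing session.
//self.audioOutput is an illustrative property, not from the original code.
- (void)setupAudioCapture {
    AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:mic error:&error];
    if (audioInput && [self.captureSession canAddInput:audioInput]) {
        [self.captureSession addInput:audioInput];
    }
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    dispatch_queue_t audioQueue = dispatch_queue_create("audioQueue", DISPATCH_QUEUE_SERIAL);
    [audioOutput setSampleBufferDelegate:self queue:audioQueue];
    if ([self.captureSession canAddOutput:audioOutput]) {
        [self.captureSession addOutput:audioOutput];
    }
    //Keep a reference so the delegate can compare: output == self.audioOutput
    self.audioOutput = audioOutput;
}
```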

Here are the commonly used AVAssetWriter methods:

//Initializer: each new video needs a new AVAssetWriter and a new local output URL
+ (nullable instancetype)assetWriterWithURL:(NSURL *)outputURL fileType:(AVFileType)outputFileType error:(NSError * _Nullable * _Nullable)outError;
//Add an AVAssetWriterInput
- (void)addInput:(AVAssetWriterInput *)input;
//Start writing
- (BOOL)startWriting;
//Begin a sample-writing session at the given source time;
//a CMTime is a rational number representing value/timescale seconds
- (void)startSessionAtSourceTime:(CMTime)startTime;

6.2 AVAssetWriterInput

  1. You use AVAssetWriterInput to append media samples, packaged as CMSampleBufferRef objects, to a single track of the AVAssetWriter's output file (e.g. [self.writerInput appendSampleBuffer:sampleBuffer]).
  2. When the media data comes from a real-time source, you must first set expectsMediaDataInRealTime to YES.
    When it comes from a non-real-time source (such as AVAssetReader), set it to NO and drive the input with requestMediaDataWhenReadyOnQueue:usingBlock: instead.
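
For the non-real-time case, the pull model could be sketched like this; self.readerOutput is an assumed AVAssetReaderOutput supplying the samples:

```objc
//Offline writing: let the input pull samples whenever it is ready.
//self.readerOutput is an assumed property, not from the original code.
dispatch_queue_t writeQueue = dispatch_queue_create("offlineWriteQueue", DISPATCH_QUEUE_SERIAL);
[self.writerInput requestMediaDataWhenReadyOnQueue:writeQueue usingBlock:^{
    while (self.writerInput.readyForMoreMediaData) {
        CMSampleBufferRef buffer = [self.readerOutput copyNextSampleBuffer];
        if (buffer) {
            [self.writerInput appendSampleBuffer:buffer];
            CFRelease(buffer);
        } else {
            //No more samples: close the input and finish the file
            [self.writerInput markAsFinished];
            [self.writer finishWritingWithCompletionHandler:^{
                NSLog(@"offline write finished");
            }];
            break;
        }
    }
}];
```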

Lazy initialization of the AVAssetWriterInput object:

- (AVAssetWriterInput *)writerInput {
    if (!_writerInput) {
        //Recording settings: codec, resolution, and so on.
        //SCREEN_WIDTH / SCREEN_HEIGHT are macros for the screen size in points;
        //multiplying by 2 gives the pixel size on a 2x Retina screen
        NSDictionary *settings = @{AVVideoCodecKey: AVVideoCodecH264,
                                   AVVideoWidthKey: @(SCREEN_WIDTH * 2),
                                   AVVideoHeightKey: @(SCREEN_HEIGHT * 2)};
        _writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:settings];
        //Recording captures data from a real-time source
        _writerInput.expectsMediaDataInRealTime = YES;
    }
    return _writerInput;
}

When using it, check readyForMoreMediaData first, then append:

if (self.writerInput.readyForMoreMediaData) {
    BOOL success = [self.writerInput appendSampleBuffer:sampleBuffer];
    if (!success) {
        [self.writer finishWritingWithCompletionHandler:^{
            NSLog(@"finished");
        }];
    } else {
        NSLog(@"succeed!");
    }
}

However, AVAssetWriterInput can only write the data out after AVAssetWriter has finished processing each frame, so the write routine generally looks like this:

- (void)writeInSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (CMSampleBufferDataIsReady(sampleBuffer)) {
        //AVAssetWriterStatusUnknown means this writing pass has not started yet
        if (self.writer.status == AVAssetWriterStatusUnknown) {
            CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            [self.writer startWriting];
            [self.writer startSessionAtSourceTime:startTime];
        }
        
        if (self.writerInput.readyForMoreMediaData) {
            BOOL success = [self.writerInput appendSampleBuffer:sampleBuffer];
            if (!success) {
                [self.writer finishWritingWithCompletionHandler:^{
                    NSLog(@"finished");
                }];
            } else {
                NSLog(@"succeed!");
            }
        }
    }
}

6.3 Writing to the sandbox

Every video frame passes through the didOutputSampleBuffer method, so that is where we do the processing.

#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
  1. The delegate receives this message whenever the output captures and outputs a new video frame, decoded or re-encoded according to the output's video settings; you can also process the frame further with other APIs.
  2. The method is invoked on the queue set via the output's sampleBufferCallbackQueue property. It is called frequently, so it must be efficient enough to keep up with capture, otherwise frames are dropped.
  3. If you need to reference the CMSampleBufferRef object outside this method, you must pair CFRetain(sampleBuffer) with CFRelease(sampleBuffer) so the buffer is not reclaimed while you use it.
  4. Retaining CMSampleBufferRef objects for a long time (CFRetain(sampleBuffer)) causes memory problems, so avoid holding them longer than necessary.
    Example:
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    //Once capture has started, the frame data arrives here
    if (output == self.videoOutput) {
        CFRetain(sampleBuffer);
        [self writeInSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
    }
}

6.4 Saving to the photo library

This requires #import <Photos/Photos.h>.

//Save to the photo library
- (void)saveToAlbum {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:self.videoURL];
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        if (success) {
            NSLog(@"Saved to the photo library");
        } else {
            NSLog(@"Save failed: %@", error);
        }
    }];
}
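
Saving will fail unless the user has granted photo-library access (and NSPhotoLibraryUsageDescription is present in Info.plist), so it is worth requesting authorization first; for example:

```objc
//Request photo-library permission before saving
[PHPhotoLibrary requestAuthorization:^(PHAuthorizationStatus status) {
    if (status == PHAuthorizationStatusAuthorized) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [self saveToAlbum];
        });
    } else {
        NSLog(@"Photo library access denied");
    }
}];
```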

If you need a preview image, grab the first frame of the video as the thumbnail.
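
A first-frame thumbnail can be generated with AVAssetImageGenerator. A sketch, assuming self.videoURL points at the saved file:

```objc
//Generate a cover image from the first frame of the saved video
- (UIImage *)coverImageForVideo {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:self.videoURL options:nil];
    AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES; //respect the video's orientation
    NSError *error = nil;
    CMTime actualTime;
    CGImageRef cgImage = [generator copyCGImageAtTime:kCMTimeZero actualTime:&actualTime error:&error];
    if (!cgImage) {
        NSLog(@"Could not generate cover image: %@", error);
        return nil;
    }
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
```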

Demo地址: https://github.com/YuePei/Camera
