This uses the AVFoundation framework.
Face-detection approach: an AVCaptureSession manages everything, an Input feeds it camera data, an Output delivers the captured objects, and the result is displayed on a layer.
The following classes are used:
1. AVCaptureSession: inherits from NSObject and is the core class of AVFoundation. It manages the video and audio fed in by AVCaptureInput objects and coordinates the capture outputs (AVCaptureOutput).
2. AVCaptureDeviceInput: captures data from an AVCaptureDevice object.
3. AVCaptureVideoDataOutput: inherits from AVCaptureOutput; attaches to an AVCaptureSession instance and delivers the captured video frames (e.g. for files or live processing).
4. AVCaptureMetadataOutput: inherits from AVCaptureOutput; used here for face detection.
Declare the objects:
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureDeviceInput *input;
@property (nonatomic, strong) AVCaptureMetadataOutput *MetadataOutput;
@property (nonatomic, strong) AVCaptureVideoDataOutput *videoOutput;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
Initialize each object:
- (void)deviceInit {
    // 1. Get the input device (the camera); front/back can be switched via the position parameter
    NSArray *devices = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:AVCaptureDevicePositionBack].devices;
    AVCaptureDevice *deviceF = devices.firstObject;
    // 2. Create the input object from the device
    self.input = [[AVCaptureDeviceInput alloc] initWithDevice:deviceF error:nil];
    // 3. Metadata output; its delegate receives the detected face metadata
    self.MetadataOutput = [[AVCaptureMetadataOutput alloc] init];
    // 4. Video data output; its delegate receives the raw frames
    self.videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // Set the delegate that renders/processes each live video frame
    [_videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    self.session = [[AVCaptureSession alloc] init];
    // 5. Set the output quality (640x480 is sufficient for face detection)
    if ([self.session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        [self.session setSessionPreset:AVCaptureSessionPreset640x480];
    }
    // 6. Add the input and outputs to the session
    [self.session beginConfiguration];
    if ([self.session canAddInput:_input]) {
        [self.session addInput:_input];
    }
    if ([self.session canAddOutput:_MetadataOutput]) {
        [self.session addOutput:_MetadataOutput];
    }
    if ([self.session canAddOutput:_videoOutput]) {
        [self.session addOutput:_videoOutput];
    }
    // metadataObjectTypes must be set after the output has been added to the session
    [self.MetadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
    [self.MetadataOutput setMetadataObjectsDelegate:self queue:dispatch_queue_create("face", NULL)];
    // rectOfInterest uses normalized coordinates (0-1), not view coordinates; (0,0,1,1) scans the whole frame
    self.MetadataOutput.rectOfInterest = CGRectMake(0, 0, 1, 1);
    [self.session commitConfiguration];
    // 7. Create the preview layer
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    _previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    _previewLayer.frame = self.view.bounds;
    [self.view.layer insertSublayer:_previewLayer atIndex:0];
    // 8. Start scanning
    [self.session startRunning];
}
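Before this runs, the app needs camera permission (and an NSCameraUsageDescription entry in Info.plist), or the session will deliver no frames. A minimal sketch of a call site; the `startCapture` wrapper name is hypothetical, but the authorization APIs are standard AVFoundation:

```objc
// Hypothetical entry point: check/request camera access, then run deviceInit above.
- (void)startCapture {
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if (status == AVAuthorizationStatusAuthorized) {
        [self deviceInit];
    } else if (status == AVAuthorizationStatusNotDetermined) {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                // The completion handler runs on an arbitrary queue; hop to main for UI setup
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self deviceInit];
                });
            }
        }];
    }
}
```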
Two delegate protocols are involved:
<AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureMetadataOutputObjectsDelegate>
Their delegate methods are as follows.
Called for every captured video frame, at the camera's frame rate:
AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
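A sketch of this frame callback, wrapping each frame's pixel buffer in a CIImage (what you then do with the image, filtering, ML, recording, is up to the app):

```objc
// Per-frame callback: extract the pixel buffer and wrap it in a CIImage.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }
    CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // ... render or analyze `frame` here; do not retain sampleBuffer beyond this call
}
```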
Called only when a face is detected; it returns just the face metadata:
AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection;
Once faces are recognized, objects like these are delivered:
( "<AVMetadataFaceObject: 0x282d44da0, faceID=7, bounds={0.6,0.6 0.2x0.3}, rollAngle=210.0, yawAngle=0.0, time=235358881818541>",
"<AVMetadataFaceObject: 0x282d45020, faceID=5, bounds={0.2,0.3 0.2x0.3}, rollAngle=210.0, yawAngle=0.0, time=235358881818541>")
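The bounds above are in normalized metadata coordinates, so to draw on screen they must be converted to preview-layer coordinates. A sketch of the metadata callback, assuming the `previewLayer` property above; `transformedMetadataObjectForMetadataObject:` does the coordinate conversion:

```objc
// Metadata callback: convert each face's bounds into preview-layer coordinates.
// Note: this is delivered on the "face" serial queue, so hop to main for UI work.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    for (AVMetadataObject *object in metadataObjects) {
        if (![object isKindOfClass:[AVMetadataFaceObject class]]) { continue; }
        AVMetadataFaceObject *face = (AVMetadataFaceObject *)
            [self.previewLayer transformedMetadataObjectForMetadataObject:object];
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"faceID=%ld bounds=%@", (long)face.faceID, NSStringFromCGRect(face.bounds));
            // e.g. position a highlight view at face.bounds here
        });
    }
}
```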