// Video recording, photo capture, and overlaying a time watermark while shooting
https://github.com/GiantFans/CwbVideoRecord
GPUImage
https://github.com/BradLarson/GPUImage
GPUImage uses OpenGL ES 2.0 shaders to perform image and video manipulation much faster than could be done in CPU-bound routines. However, it hides the complexity of interacting with the OpenGL ES API in a simplified Objective-C interface. This interface lets you define input sources for images and video, attach filters in a chain, and send the resulting processed image or video to the screen, to a UIImage, or to a movie on disk.
Images or frames of video are uploaded from source objects, which are subclasses of GPUImageOutput. These include GPUImageVideoCamera (for live video from an iOS camera), GPUImageStillCamera (for taking photos with the camera), GPUImagePicture (for still images), and GPUImageMovie (for movies). Source objects upload still image frames to OpenGL ES as textures, then hand those textures off to the next objects in the processing chain.
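As a minimal sketch of that source-to-target flow using a still-image source (the image name "sample.jpg" is a placeholder; useNextFrameForImageCapture and imageFromCurrentFramebuffer are the capture calls used by recent GPUImage versions):

UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

[stillImageSource addTarget:sepiaFilter];
[sepiaFilter useNextFrameForImageCapture]; // tell the filter to keep its next framebuffer for capture
[stillImageSource processImage];           // upload the image as a texture and run it through the chain

UIImage *filteredImage = [sepiaFilter imageFromCurrentFramebuffer];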
Filters
Filters and other subsequent elements in the chain conform to the GPUImageInput protocol, which lets them take in the supplied or processed texture from the previous link in the chain and do something with it. Objects one step further down the chain are considered targets, and processing can be branched by adding multiple targets to a single output or filter.
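As a hedged sketch of such branching (videoCamera, filter, filteredView, and movieWriter are placeholder names for objects set up as shown elsewhere in this section):

// A single filter's output can feed two targets at once.
[videoCamera addTarget:filter];
[filter addTarget:filteredView]; // branch 1: render the filtered frames onscreen
[filter addTarget:movieWriter];  // branch 2: encode the same frames to a movie file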
For example, an application that takes in live video from the camera, converts that video to a sepia tone, then displays the video onscreen would set up a chain looking something like the following:
GPUImageVideoCamera -> GPUImageSepiaFilter -> GPUImageView
Filtering live video
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
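For example (these three constants are the fill modes GPUImageView defines):

// Crop to fill the whole view while preserving aspect ratio; the other
// options are kGPUImageFillModeStretch and kGPUImageFillModePreserveAspectRatio
// (which letterboxes with black bars).
filteredVideoView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;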
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
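For instance, blending two still images with GPUImageOverlayBlendFilter might look like the sketch below; the image names are placeholders, and the source added as a target first supplies the blend's first (base) input:

GPUImagePicture *baseImage = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"base.jpg"]];
GPUImagePicture *overlayImage = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]];
GPUImageOverlayBlendFilter *blendFilter = [[GPUImageOverlayBlendFilter alloc] init];

[baseImage addTarget:blendFilter];    // first input of the blend
[overlayImage addTarget:blendFilter]; // second input of the blend

[blendFilter useNextFrameForImageCapture];
[baseImage processImage];
[overlayImage processImage];
UIImage *blendedImage = [blendFilter imageFromCurrentFramebuffer];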
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, as in the following:
videoCamera.audioEncodingTarget = movieWriter;
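A fuller sketch of recording with audio might look like this (the output path and size are placeholder choices; initWithMovieURL:size:, startRecording, and finishRecording are GPUImageMovieWriter's standard calls):

NSString *moviePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Movie.m4v"];
[[NSFileManager defaultManager] removeItemAtPath:moviePath error:nil]; // the writer needs a path with no existing file
NSURL *movieURL = [NSURL fileURLWithPath:moviePath];

GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(480.0, 640.0)];
[customFilter addTarget:movieWriter];

videoCamera.audioEncodingTarget = movieWriter; // route microphone audio into the movie file
[movieWriter startRecording];

// ... later, when recording should stop:
[customFilter removeTarget:movieWriter];
videoCamera.audioEncodingTarget = nil;
[movieWriter finishRecording];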
Capturing and filtering a still photo
To capture and filter still photos, you can use a process similar to the one for filtering video. Instead of a GPUImageVideoCamera, you use a GPUImageStillCamera:
stillCamera = [[GPUImageStillCamera alloc] init];
stillCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
filter = [[GPUImageGammaFilter alloc] init];
[stillCamera addTarget:filter];
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];
[stillCamera startCameraCapture];
This will give you a live, filtered feed of the still camera's preview video. Note that this preview video is only provided on iOS 4.3 and higher, so you may need to set that as your deployment target if you wish to have this functionality.
When you want to capture a photo, you use a callback block like the following:
[stillCamera capturePhotoProcessedUpToFilter:filter withCompletionHandler:^(UIImage *processedImage, NSError *error){
    // Encode the filtered image as JPEG.
    NSData *dataForJPEGFile = UIImageJPEGRepresentation(processedImage, 0.8);

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];

    NSError *error2 = nil;
    if (![dataForJPEGFile writeToFile:[documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"] options:NSAtomicWrite error:&error2])
    {
        return;
    }
}];
The above code captures a full-size photo processed by the same filter chain used in the preview view and saves that photo to disk as a JPEG in the application's documents directory.
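If you only need the encoded JPEG data, GPUImageStillCamera also provides capturePhotoAsJPEGProcessedUpToFilter:withCompletionHandler:, which hands back NSData directly and skips the intermediate UIImage; a sketch reusing the names from the block above:

[stillCamera capturePhotoAsJPEGProcessedUpToFilter:filter withCompletionHandler:^(NSData *processedJPEG, NSError *error){
    // Write the already-encoded JPEG straight to disk.
    NSString *filePath = [documentsDirectory stringByAppendingPathComponent:@"FilteredPhoto.jpg"];
    [processedJPEG writeToFile:filePath options:NSAtomicWrite error:nil];
}];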
Author: CC老师_HelloCoder
Source: 简书 (Jianshu), https://www.jianshu.com/p/39d84e51712f and https://www.jianshu.com/p/09968f6254b7
Copyright belongs to the author. Contact the author for authorization before commercial reuse; credit the source for non-commercial reuse.