In the previous post I studied the files under Base. Today I'll look at the files under Outputs.
1. RenderView
First, let's look at RenderView, the class that displays a texture on the phone screen.
```swift
public class RenderView: MTKView, ImageConsumer {

    public let sources = SourceContainer()
    public let maximumInputs: UInt = 1
    var currentTexture: Texture?                      /// The current texture
    var renderPipelineState: MTLRenderPipelineState!  /// The render pipeline state

    public override init(frame frameRect: CGRect, device: MTLDevice?) {
        super.init(frame: frameRect, device: sharedMetalRenderingDevice.device)
        commonInit()
    }

    public required init(coder: NSCoder) {
        super.init(coder: coder)
        commonInit()
    }

    private func commonInit() {
        framebufferOnly = false
        autoResizeDrawable = true
        self.device = sharedMetalRenderingDevice.device
        let (pipelineState, _) = generateRenderPipelineState(device: sharedMetalRenderingDevice, vertexFunctionName: "oneInputVertex", fragmentFunctionName: "passthroughFragment", operationName: "RenderView")
        self.renderPipelineState = pipelineState
        enableSetNeedsDisplay = false
        isPaused = true
    }

    public func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt) {
        self.drawableSize = CGSize(width: texture.texture.width, height: texture.texture.height)
        currentTexture = texture
        self.draw()
    }

    public override func draw(_ rect: CGRect) {
        if let currentDrawable = self.currentDrawable, let imageTexture = currentTexture {
            let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer()
            let outputTexture = Texture(orientation: .portrait, texture: currentDrawable.texture)
            commandBuffer?.renderQuad(pipelineState: renderPipelineState, inputTextures: [0: imageTexture], outputTexture: outputTexture)
            commandBuffer?.present(currentDrawable)
            commandBuffer?.commit()
        }
    }
}
```
That's all of its code, just a few dozen lines. `RenderView` inherits from `MTKView` and conforms to `ImageConsumer`, which means it only receives texture data. It's easy to follow: a texture arrives through `newTextureAvailable()`, which calls `draw()` to render that texture into the view.
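For context, a consumer in this kind of pipeline is anything that can receive textures. A plausible sketch of the `ImageConsumer` protocol follows; the exact definition is not shown in this post, so treat the member list as an assumption based on how RenderView uses it:

```swift
/// Sketch of what an ImageConsumer protocol plausibly looks like in this
/// framework; Texture and SourceContainer are the framework's own types.
public protocol ImageConsumer: AnyObject {
    /// How many upstream sources may feed this consumer (RenderView uses 1)
    var maximumInputs: UInt { get }
    /// Bookkeeping for the upstream sources attached to this consumer
    var sources: SourceContainer { get }
    /// Called by an upstream source whenever a new texture is ready
    func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt)
}
```

Conforming to a consumer-only protocol is what makes RenderView a sink: it sits at the end of a chain and never produces textures of its own.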
Let's discuss the properties set in `commonInit()`:

- `framebufferOnly`: if true (the default), the view allocates its drawable textures with only the usage flags needed for display, which lets Core Animation optimize them for presentation; however, you cannot sample, read, or write those textures. To support sampling and pixel read/write operations (at a performance cost), set this to false.
- `autoResizeDrawable`: if true, the view automatically resizes its underlying color, depth, stencil, and multisample textures whenever the view is resized. If false, you must set `drawableSize` explicitly to change their size. The default is true.
- `enableSetNeedsDisplay`: if true, the view behaves like a regular UIView and responds to calls to `setNeedsDisplay()`; once marked for display, the view redraws on each pass through the app's event loop. Setting this to true also pauses the MTKView's internal render loop, making updates event-driven. The default is false.
- `isPaused`: if false, the delegate receives `drawInMTKView` messages (or a subclass receives `drawRect` messages) at the rate given by `preferredFramesPerSecond`, driven by an internal timer. The default is false.
MTKView is a MetalKit class whose backing layer is a CAMetalLayer, responsible for rendering content to the screen.
Drive modes

MTKView offers three rendering modes, controlled by two properties:

- Default mode: `isPaused` and `enableSetNeedsDisplay` are both false; rendering is driven by an internal timer.
- `isPaused` and `enableSetNeedsDisplay` are both true: rendering is driven by view update notifications, such as a call to `setNeedsDisplay()`.
- `isPaused` is true and `enableSetNeedsDisplay` is false: rendering is driven by explicitly calling MTKView's `draw()` method.

Rendering methods

- Subclass MTKView and implement `drawRect:`
- Set the MTKView's delegate and implement `drawInMTKView:`
The text above is adapted from: https://blog.csdn.net/weixin_34010949/article/details/91370893
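The three drive modes above can be sketched directly as configuration code (which mode you want depends on who should own the frame clock):

```swift
import MetalKit

let mtkView = MTKView(frame: .zero, device: MTLCreateSystemDefaultDevice())

// Mode 1 (default): an internal timer drives rendering
// at preferredFramesPerSecond.
mtkView.isPaused = false
mtkView.enableSetNeedsDisplay = false

// Mode 2: event-driven; the view redraws only after setNeedsDisplay().
mtkView.isPaused = true
mtkView.enableSetNeedsDisplay = true

// Mode 3 (what RenderView uses): nothing happens until draw() is
// called explicitly.
mtkView.isPaused = true
mtkView.enableSetNeedsDisplay = false
// mtkView.draw()  // render one frame on demand
```

RenderView picks mode 3 so that a frame is rendered exactly when a new texture arrives, rather than on a timer.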
2. PictureOutput

Exports a texture as an image and writes it to a specified path. It comes down to two main methods:
```swift
/// Set the save path and the image file format
public func saveNextFrameToURL(_ url: URL, format: PictureFileFormat) {
    onlyCaptureNextFrame = true
    encodedImageFormat = format
    self.url = url // Create an intentional short-term retain cycle to prevent deallocation before next frame is captured
    /// On successful encoding, write the data to the specified path
    encodedImageAvailableCallback = { imageData in
        do {
            try imageData.write(to: self.url, options: .atomic)
        } catch {
            // TODO: Handle this better
            print("WARNING: Couldn't save image with error:\(error)")
        }
    }
}
```
```swift
/// Receive a new texture
public func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt) {
    ...
    if let imageCallback = encodedImageAvailableCallback {
        let cgImageFromBytes = texture.cgImage()
        let imageData: Data
        let image = UIImage(cgImage: cgImageFromBytes, scale: 1.0, orientation: .up)
        switch encodedImageFormat {
        case .png: imageData = image.pngData()! // TODO: Better error handling here
        case .jpeg: imageData = image.jpegData(compressionQuality: 0.8)! // TODO: Be able to set image quality
        }
        imageCallback(imageData)
        if onlyCaptureNextFrame {
            encodedImageAvailableCallback = nil
        }
    }
}
```
The key part is the method that converts the texture into a CGImage, implemented inside the Texture type:
```swift
extension Texture {
    func cgImage() -> CGImage {
        // Flip and swizzle the image
        guard let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer() else { fatalError("Could not create command buffer on image rendering.") }
        let outputTexture = Texture(device: sharedMetalRenderingDevice.device, orientation: self.orientation, width: self.texture.width, height: self.texture.height)
        commandBuffer.renderQuad(pipelineState: sharedMetalRenderingDevice.colorSwizzleRenderState, uniformSettings: nil, inputTextures: [0: self], useNormalizedTextureCoordinates: true, outputTexture: outputTexture)
        commandBuffer.commit()
        /// Block the current thread until the command buffer finishes executing
        commandBuffer.waitUntilCompleted()

        // Grab texture bytes, generate CGImageRef from them
        /// Total image size in bytes
        let imageByteSize = texture.height * texture.width * 4
        /// Allocate a buffer of that size
        let outputBytes = UnsafeMutablePointer<UInt8>.allocate(capacity: imageByteSize)
        /// Copy the texture data into outputBytes
        outputTexture.texture.getBytes(outputBytes, bytesPerRow: MemoryLayout<UInt8>.size * texture.width * 4, bytesPerImage: 0, from: MTLRegionMake2D(0, 0, texture.width, texture.height), mipmapLevel: 0, slice: 0)
        guard let dataProvider = CGDataProvider(dataInfo: nil, data: outputBytes, size: imageByteSize, releaseData: dataProviderReleaseCallback) else { fatalError("Could not create CGDataProvider") }
        let defaultRGBColorSpace = CGColorSpaceCreateDeviceRGB()
        return CGImage(width: texture.width,            // image width
                       height: texture.height,          // image height
                       bitsPerComponent: 8,             // bits per color component
                       bitsPerPixel: 32,                // bits per pixel
                       bytesPerRow: 4 * texture.width,  // bytes per row
                       space: defaultRGBColorSpace,
                       bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                       provider: dataProvider,
                       decode: nil,
                       shouldInterpolate: false,
                       intent: .defaultIntent)!
    }
}
```
The MTLTexture's bytes are fetched via `getBytes()`, and a CGImage is built from them. With that we have the texture's image data, ready to be written to disk, displayed, or used however we like.
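Putting the two methods together, here is a hypothetical usage sketch. The chaining operator and the filter chain itself are assumptions in the GPUImage style and are not shown in this post:

```swift
import Foundation

let pictureOutput = PictureOutput()
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("frame.png")

// Ask for the next frame to be encoded as PNG and written to outputURL
pictureOutput.saveNextFrameToURL(outputURL, format: .png)

// Attach it downstream of some source/filter chain (operator assumed):
// camera --> filter --> pictureOutput

// On the next frame, newTextureAvailable(_:fromSourceIndex:) converts the
// texture to a CGImage, encodes it, and the callback writes it atomically.
```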
3. MovieOutput

Writes a continuous stream of incoming textures out as a video file at a specified path. Let's go straight to the core methods:
```swift
public func newTextureAvailable(_ texture: Texture, fromSourceIndex: UInt) {
    ...
    /// Request an empty CVPixelBuffer from the pool
    var pixelBufferFromPool: CVPixelBuffer? = nil
    let pixelBufferStatus = CVPixelBufferPoolCreatePixelBuffer(nil, assetWriterPixelBufferInput.pixelBufferPool!, &pixelBufferFromPool)
    guard let pixelBuffer = pixelBufferFromPool, (pixelBufferStatus == kCVReturnSuccess) else { return }
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    /// Fill the CVPixelBuffer with the texture data
    renderIntoPixelBuffer(pixelBuffer, texture: texture)
    if (!assetWriterPixelBufferInput.append(pixelBuffer, withPresentationTime: frameTime)) {
        print("Problem appending pixel buffer at time: \(frameTime)")
    }
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
}

/// Fill the CVPixelBuffer with the texture data
func renderIntoPixelBuffer(_ pixelBuffer: CVPixelBuffer, texture: Texture) {
    /// Get the base address of the pixel buffer
    guard let pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer) else {
        print("Could not get buffer bytes")
        return
    }
    /// Get the number of bytes per row
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let outputTexture: Texture
    if (Int(round(self.size.width)) != texture.texture.width) && (Int(round(self.size.height)) != texture.texture.height) {
        let commandBuffer = sharedMetalRenderingDevice.commandQueue.makeCommandBuffer()
        outputTexture = Texture(device: sharedMetalRenderingDevice.device, orientation: .portrait, width: Int(round(self.size.width)), height: Int(round(self.size.height)), timingStyle: texture.timingStyle)
        commandBuffer?.renderQuad(pipelineState: renderPipelineState, inputTextures: [0: texture], outputTexture: outputTexture)
        commandBuffer?.commit()
        commandBuffer?.waitUntilCompleted()
    } else {
        outputTexture = texture
    }
    let region = MTLRegionMake2D(0, 0, outputTexture.texture.width, outputTexture.texture.height)
    /// Copy the texture data into pixelBufferBytes
    outputTexture.texture.getBytes(pixelBufferBytes, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
}
```
The steps:

1. Receive the new texture.
2. Request an empty CVPixelBuffer from the pool.
3. Get the CVPixelBuffer's base address and write the texture data into it.
4. Append the filled CVPixelBuffer to assetWriterPixelBufferInput.

Note: `CVPixelBufferLockBaseAddress()` must be called to lock the pixel buffer before calling `CVPixelBufferGetBaseAddress()`.
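That lock/unlock pairing is easy to get wrong on early returns. A small sketch of a hypothetical helper (not part of this framework) that scopes the lock with `defer`:

```swift
import CoreVideo

/// Hypothetical helper: run `body` with the pixel buffer's base address
/// locked. `defer` guarantees the unlock runs even if `body` returns
/// early or throws.
func withLockedBaseAddress(_ pixelBuffer: CVPixelBuffer,
                           _ body: (UnsafeMutableRawPointer, Int) throws -> Void) rethrows {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    // Use the buffer's own bytesPerRow: rows may be padded beyond width * 4
    try body(base, CVPixelBufferGetBytesPerRow(pixelBuffer))
}
```

This is also why `renderIntoPixelBuffer` copies with `CVPixelBufferGetBytesPerRow(pixelBuffer)` rather than computing the row stride from the width.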
That wraps up Outputs. Writing this down to encourage myself.
Love life, record life!