How is the SharedClient that describes an Android application's UI metadata created?
In fact, SharedClient has been removed from recent versions of the Android source tree; it only exists in early releases. Even in Android 4.4 the class is already gone.
//http://aosp.opersys.com/xref/android-2.3_r1/xref/frameworks/base/include/private/surfaceflinger/SharedBufferStack.h#127
class SharedClient
{
public:
SharedClient();
~SharedClient();
status_t validate(size_t token) const;
private:
friend class SharedBufferBase;
friend class SharedBufferClient;
friend class SharedBufferServer;
// FIXME: this should be replaced by a lock-less primitive
Mutex lock;
Condition cv;
SharedBufferStack surfaces[ SharedBufferStack::NUM_LAYERS_MAX ];
};
So what did SharedClient do in those early versions?
An Android app and the SurfaceFlinger service run in different processes and talk over Binder. Binder, however, cannot carry large payloads (the transaction buffer is limited to 1MB minus 8KB, i.e. 1016KB). An app can have several windows and the data volume is fairly large, so Binder alone is a poor fit; Android therefore used its anonymous shared memory mechanism instead.
On every connection between an Android app and SurfaceFlinger, a block of anonymous shared memory was attached to carry the UI metadata, and that region was wrapped as the SharedClient.
In other words, SharedClient existed to solve the data-transfer problem between an Android app and the SurfaceFlinger service.
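As a side note, the basic idea of such a region can be sketched with plain Linux primitives. The snippet below uses memfd_create plus mmap purely as an illustration of "anonymous shared memory reachable through a file descriptor"; it is not the ashmem driver Android actually used, it assumes glibc 2.27+ (where memfd_create is exposed via sys/mman.h), and the region name is made up.
// Minimal sketch (not Android's ashmem implementation): create an anonymous
// shared-memory region and map it. A second process that received this fd
// over IPC could mmap the same pages and see the same bytes.
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    const size_t kSize = 4096;                           // one page of "UI metadata"
    int fd = memfd_create("ui-metadata", MFD_CLOEXEC);   // hypothetical region name
    if (fd < 0 || ftruncate(fd, kSize) < 0) { perror("memfd"); return 1; }

    void* addr = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    std::strcpy(static_cast<char*>(addr), "shared UI metadata");
    std::printf("%s\n", static_cast<char*>(addr));

    munmap(addr, kSize);
    close(fd);
    return 0;
}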
So the question can be rephrased: after SharedClient was retired, how does Android move data between an app and the SurfaceFlinger service? How do they communicate?
With that question in mind, let's dig into the SurfaceFlinger source.
There is a diagram in the official documentation. It involves Metadata, WindowManager, SurfaceFlinger, buffer data, Surface.cpp, GLConsumer.cpp, IGraphicBufferProducer, and so on.
Source: https://www.jianshu.com/p/f96ab6646ae3
From the app to SurfaceFlinger
The app first asks SurfaceFlinger for a buffer through the Surface interface. Note that no buffer is allocated when the Surface is created; only when the app draws for the first time does dequeueBuffer make SurfaceFlinger allocate one, and on the app side importBuffer maps that memory into the app's process.
After that the app obtains its canvas with dequeueBuffer and submits the finished drawing with queueBuffer.
The canvas the app draws on is provided by SurfaceFlinger and is a block of shared memory; when the app obtains it, the shared memory is mapped into the app's own address space. The app draws on the canvas and hands the result to SurfaceFlinger. This hand-over does not copy the memory; it returns control of the shared memory to SurfaceFlinger. SurfaceFlinger passes the shared buffers of multiple apps to the HWC service for composition, and the HWC service hands the composited data to DRM for output, which is how an app's frame ends up on the screen. To use time more efficiently there is more than one such shared buffer, usually two or three, which is the familiar double or triple buffering.
We therefore need a mechanism that manages who controls each buffer at any moment, and that mechanism is the BufferQueue.
In this article I will skip the Surface details and focus on how data gets from the app to SurfaceFlinger.
So the key to understanding how the data is transferred is understanding the BufferQueue.
BufferQueue
Data is not copied from the app to SurfaceFlinger; it is passed by handle. Keep that sentence in mind, we will come back to it.
BufferQueues are the glue between Android's graphics components. They are a pair of queues that mediate the constant cycle of buffers from producer to consumer. Once the producer hands over its buffers, SurfaceFlinger is responsible for compositing everything onto the display.
See the figure below for the BufferQueue communication flow.
Combining this figure with the official documentation quoted above, a few points can be summarized:
1. SurfaceFlinger creates and owns the BufferQueue data structure.
2. When the app needs a buffer, it requests a free one from the BufferQueue by calling dequeueBuffer().
3. The app draws the UI into the buffer and returns it to the queue by calling queueBuffer().
4. SurfaceFlinger takes the buffer with acquireBuffer() and consumes its contents.
So how is all of this actually done?
Source: https://www.jianshu.com/p/730dd558c269
The buffers in a SharedBufferStack only describe UI metadata; they do not hold the actual UI data. The real UI data lives in GraphicBuffers, which we will describe later. So, to fully describe a UI, every in-use buffer in the SharedBufferStack has a corresponding GraphicBuffer holding the real UI data. When the SurfaceFlinger service renders Buffer-1 and Buffer-2, it finds their corresponding GraphicBuffers and can then draw the corresponding UI.
When an Android app needs to update a Surface, it finds the SharedBufferStack associated with it and takes a free buffer from the tail of the free-buffer list; suppose its index is index. The app then asks the SurfaceFlinger service to allocate a GraphicBuffer for the buffer with that index.
SharedBufferStack exists in the early source code but is gone now.
So, nowadays, how is the memory an app draws its UI into allocated?
Source: https://www.jianshu.com/p/3c61375cc15b
In the BufferQueue design a buffer can be in one of the following states:
FREE: the buffer can be handed to the application for drawing.
DEQUEUED: control of the buffer has been handed to the application side; the app may draw into it in this state.
QUEUED: the application has finished drawing and control of the buffer is back in SurfaceFlinger's hands.
ACQUIRED: the buffer has been handed to the HWC service for composition; control now belongs to the HWC service.
A buffer starts out FREE. When the producer successfully requests it with dequeueBuffer it becomes DEQUEUED; when the app finishes drawing, queueBuffer moves it to QUEUED; when SurfaceFlinger takes it with acquireBuffer and hands it to the HWC service for composition it becomes ACQUIRED; after composition releaseBuffer puts it back to FREE. The state transitions are shown in the figure below.
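In code form, the same life cycle can be condensed into a small self-contained sketch (an illustration only, not the real BufferQueue implementation; the names are made up):
// Minimal sketch of the FREE -> DEQUEUED -> QUEUED -> ACQUIRED -> FREE cycle.
#include <array>
#include <cassert>
#include <cstdio>

enum class BufferState { FREE, DEQUEUED, QUEUED, ACQUIRED };

struct Slot { BufferState state = BufferState::FREE; };

int main() {
    std::array<Slot, 3> slots;          // "triple buffering"

    Slot& s = slots[0];
    assert(s.state == BufferState::FREE);

    s.state = BufferState::DEQUEUED;    // producer: dequeueBuffer()
    // ... the app draws into the buffer here ...
    s.state = BufferState::QUEUED;      // producer: queueBuffer()
    s.state = BufferState::ACQUIRED;    // consumer: acquireBuffer() (SurfaceFlinger/HWC)
    s.state = BufferState::FREE;        // consumer: releaseBuffer()

    std::printf("cycle complete\n");
    return 0;
}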
BufferSlot
Each application layer is called a Layer inside SurfaceFlinger, and every Layer owns its own BufferQueue. Each BufferQueue holds multiple buffers; Android currently allows at most 64 buffers per Layer, a maximum defined in frameworks/native/gui/BufferQueueDefs.h, and each buffer is represented by a BufferSlot structure.
The important members of a BufferSlot are:
struct BufferSlot {
......
BufferState mBufferState;        // current state of this buffer: FREE/DEQUEUED/QUEUED/ACQUIRED
....
sp<GraphicBuffer> mGraphicBuffer;  // the actual storage backing the buffer
......
uint64_t mFrameNumber;           // frame number of the last queue of this slot; consulted when the app calls dequeueBuffer to request a slot
......
sp<Fence> mFence;                // discussed in the Fence chapter
.....
};
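Roughly speaking, "picking a slot" in dequeueBuffer means finding a slot whose state is FREE. The sketch below is a simplified, standalone illustration of that idea only; the field and function names are hypothetical and the real logic also weighs frame numbers, fences, and the consumer's limits.
// Simplified illustration of choosing a free slot (not the real dequeue logic).
#include <array>
#include <cstdint>
#include <cstdio>

enum class BufferState { FREE, DEQUEUED, QUEUED, ACQUIRED };

struct SlotSketch {
    BufferState state = BufferState::FREE;
    uint64_t frameNumber = 0;
    bool hasGraphicBuffer = false;   // stands in for sp<GraphicBuffer> mGraphicBuffer
};

int findFreeSlot(std::array<SlotSketch, 64>& slots) {   // 64 == max buffers per Layer
    for (int i = 0; i < static_cast<int>(slots.size()); ++i) {
        if (slots[i].state == BufferState::FREE) return i;
    }
    return -1;  // no free slot: the producer would have to wait
}

int main() {
    std::array<SlotSketch, 64> slots{};
    std::printf("dequeued slot %d\n", findFreeSlot(slots));
    return 0;
}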
SurfaceFlinger is not only the consumer that creates and owns the BufferQueue data structure; it also provides the buffer storage itself, the GraphicBuffer (the GraphicBuffer is the buffer, the real storage behind it).
With those basics in place, let's look at what happens after dequeueBuffer is called: how the buffer is allocated and what the basic BufferQueue flow looks like.
1. dequeueBuffer
Let's start reading the code where the app requests a buffer via dequeueBuffer. On its first dequeueBuffer the app will also call requestBuffer:
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/gui/Surface.cpp
sp<IGraphicBufferProducer> mGraphicBufferProducer;
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/gui/Surface.cpp
Surface::Surface(const sp<IGraphicBufferProducer>& bufferProducer, bool controlledByApp,
const sp<IBinder>& surfaceControlHandle)
: mGraphicBufferProducer(bufferProducer),
//mGraphicBufferProducer is initialized to bufferProducer
....
}
int Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd) {
....
// Try to dequeue a buffer. At this point the corresponding Layer's slot in SurfaceFlinger has no buffer yet, so SurfaceFlinger's reply will carry the BUFFER_NEEDS_REALLOCATION flag
status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence, dqInput.width,
dqInput.height, dqInput.format,
dqInput.usage, &mBufferAge,
dqInput.getTimestamps ?
&frameTimestamps : nullptr);
....
if ((result & IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION) || gbuf == nullptr) {
...
//If the dequeueBuffer result carries the BUFFER_NEEDS_REALLOCATION flag, issue a requestBuffer
result = mGraphicBufferProducer->requestBuffer(buf, &gbuf);
...
}
....
}
When mGraphicBufferProducer->dequeueBuffer(&buf, &fence, dqInput.width, dqInput.height, dqInput.format, dqInput.usage, &mBufferAge, dqInput.getTimestamps ? &frameTimestamps : nullptr) finds that the allocated slot has no GraphicBuffer yet, it goes and allocates one:
//frameworks/native/libs/gui/BufferQueueProducer.cpp
status_t BufferQueueProducer::dequeueBuffer(int* outSlot, sp<android::Fence>* outFence,
uint32_t width, uint32_t height, PixelFormat format,
uint64_t usage, uint64_t* outBufferAge,
FrameEventHistoryDelta* outTimestamps) {
if ((buffer == NULL) ||
buffer->needsReallocation(width, height, format, BQ_LAYER_COUNT, usage))//check whether a GraphicBuffer has already been allocated
{
......
returnFlags |= BUFFER_NEEDS_REALLOCATION;//a buffer needs to be allocated, so set the flag
}
......
if (returnFlags & BUFFER_NEEDS_REALLOCATION) {
......
//create a new GraphicBuffer for the corresponding slot
sp<GraphicBuffer> graphicBuffer = new GraphicBuffer(
width, height, format, BQ_LAYER_COUNT, usage,
{mConsumerName.string(), mConsumerName.size()});
......
mSlots[*outSlot].mGraphicBuffer = graphicBuffer;//attach the GraphicBuffer to the slot
......
}
......
return returnFlags;//note: on the app's first request, the GraphicBuffer is already created and attached to the slot by the time dequeueBuffer returns, but the flags returned to the app still carry BUFFER_NEEDS_REALLOCATION
}
So when BufferQueueProducer::dequeueBuffer runs, a GraphicBuffer is created, but the app side does not yet hold a usable buffer (the GraphicBuffer); the new GraphicBuffer was merely attached to the corresponding slot.
Execution then moves on to mGraphicBufferProducer->requestBuffer(buf, &gbuf).
// frameworks/native/libs/gui/IGraphicBufferProducer.cpp
class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>
{
...
virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
Parcel data, reply;
data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
data.writeInt32(bufferIdx);
status_t result =remote()->transact(REQUEST_BUFFER, data, &reply);
if (result != NO_ERROR) {
return result;
}
bool nonNull = reply.readInt32();
if (nonNull) {
*buf = new GraphicBuffer();
result = reply.read(**buf);
if(result != NO_ERROR) {
(*buf).clear();
return result;
}
}
result = reply.readInt32();
return result;
}
The call then goes from the Bp (proxy) side to the Bn (stub) side.
// frameworks/native/libs/gui/IGraphicBufferProducer.cpp
status_t BnGraphicBufferProducer::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
switch(code) {
case REQUEST_BUFFER: {
CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
int bufferIdx = data.readInt32();
sp<GraphicBuffer> buffer;
int result = requestBuffer(bufferIdx, &buffer);
reply->writeInt32(buffer != nullptr);
if (buffer != nullptr) {
reply->write(*buffer);
}
reply->writeInt32(result);
return NO_ERROR;
}
...
}
return BBinder::onTransact(code, data, reply, flags);
}
After receiving a result flagged with BUFFER_NEEDS_REALLOCATION, the app side calls BufferQueueProducer::requestBuffer to obtain the information about that buffer:
//frameworks/native/libs/gui/include/gui/BufferQueueProducer.h
sp<BufferQueueCore> mCore;
// This references mCore->mSlots. Lock mCore->mMutex while accessing.
BufferQueueDefs::SlotsType& mSlots;
// frameworks/native/libs/gui/BufferQueueProducer.cpp
BufferQueueProducer::BufferQueueProducer(const sp<BufferQueueCore>& core,
bool consumerIsSurfaceFlinger) :
mCore(core),
mSlots(core->mSlots),
...{}
status_t BufferQueueProducer::requestBuffer(int slot, sp<GraphicBuffer>* buf) {
ATRACE_CALL();
BQ_LOGV("requestBuffer: slot %d", slot);
std::lock_guard<std::mutex> lock(mCore->mMutex);
...
mSlots[slot].mRequestBufferCalled = true;
*buf = mSlots[slot].mGraphicBuffer;
return NO_ERROR;
}
The Binder transaction triggered by mGraphicBufferProducer->requestBuffer(buf, &gbuf), through which the GraphicBuffer ends up usable on the app side, is fairly hard to follow and differs from an ordinary Binder call. To understand how this exchange delivers a GraphicBuffer you need some understanding of dma-buf.
For an introduction to the dma-buf mechanism, see: dma-buf 由浅入深(一) —— 最简单的 dma-buf 驱动程序.
Concept
dma-buf exists to solve buffer sharing between drivers; it is essentially the combination of a buffer and a file, i.e. a dma-buf is both a block of physical buffer memory and a Linux file. The buffer is the content and the file is the medium, and only through that file can the same buffer circulate between different drivers.
A typical dma-buf application diagram looks like this:
Conventionally, the module that allocates the buffer is called the exporter and the module that uses it is called the importer or user. In this series of articles, however, importer refers specifically to kernel-space users and user to user-space users.
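The reason "buffer as a file" works across processes is that a file descriptor can be handed to another process, which then refers to the same underlying kernel object. The sketch below shows the generic Linux SCM_RIGHTS mechanism over a Unix socket pair, purely for illustration; Binder performs the equivalent fd translation inside the kernel, and a real dma-buf or gralloc fd would take the place of the tmpfile used here.
// Generic fd passing over a Unix socket with SCM_RIGHTS.
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

static void send_fd(int sock, int fd) {
    char dummy = 'x';
    iovec iov{&dummy, 1};
    char ctrl[CMSG_SPACE(sizeof(int))] = {};
    msghdr msg{};
    msg.msg_iov = &iov; msg.msg_iovlen = 1;
    msg.msg_control = ctrl; msg.msg_controllen = sizeof(ctrl);
    cmsghdr* cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET; cm->cmsg_type = SCM_RIGHTS;
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    std::memcpy(CMSG_DATA(cm), &fd, sizeof(int));
    sendmsg(sock, &msg, 0);
}

static int recv_fd(int sock) {
    char dummy;
    iovec iov{&dummy, 1};
    char ctrl[CMSG_SPACE(sizeof(int))] = {};
    msghdr msg{};
    msg.msg_iov = &iov; msg.msg_iovlen = 1;
    msg.msg_control = ctrl; msg.msg_controllen = sizeof(ctrl);
    if (recvmsg(sock, &msg, 0) <= 0) return -1;
    int fd = -1;
    std::memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(int));
    return fd;  // a new fd in this process, referring to the same open file
}

int main() {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    if (fork() == 0) {                       // "consumer" process
        int fd = recv_fd(sv[1]);
        char buf[16] = {};
        pread(fd, buf, sizeof(buf) - 1, 0);
        std::printf("child read: %s\n", buf);
        _exit(0);
    }
    int fd = fileno(tmpfile());              // stand-in for a dma-buf/gralloc fd
    pwrite(fd, "hello", 5, 0);
    send_fd(sv[0], fd);
    wait(nullptr);
    return 0;
}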
2. Creating the GraphicBuffer
To figure out how mGraphicBufferProducer->requestBuffer(buf, &gbuf) actually gives the app side a GraphicBuffer, let's look at the GraphicBuffer class.
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/include/ui/GraphicBuffer.h
class GraphicBufferMapper;
class GraphicBuffer
: public ANativeObjectBase<ANativeWindowBuffer, GraphicBuffer, RefBase>,
public Flattenable<GraphicBuffer>
GraphicBuffer inherits from ANativeWindowBuffer and Flattenable: the former is the graphics buffer type used by ANativeWindow, the latter is what allows the object to be serialized into a Parcel for Binder transport.
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/GraphicBuffer.cpp
GraphicBuffer::GraphicBuffer()
: BASE(), mOwner(ownData), mBufferMapper(GraphicBufferMapper::get()),
mInitCheck(NO_ERROR), mId(getUniqueId()), mGenerationNumber(0)
{
width =
height =
stride =
format =
usage_deprecated = 0;
usage = 0;
layerCount = 0;
handle = nullptr;
}
status_t GraphicBuffer::initWithSize(uint32_t inWidth, uint32_t inHeight,
PixelFormat inFormat, uint32_t inLayerCount, uint64_t inUsage,
std::string requestorName)
{
GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
uint32_t outStride = 0;
status_t err = allocator.allocate(inWidth, inHeight, inFormat, inLayerCount,
inUsage, &handle, &outStride, mId,
std::move(requestorName));
...
}
return err;
}
A core class appears in this initialization: GraphicBufferAllocator, the buffer allocator. It is the part inside the GraphicBuffer shell that, via allocate, actually produces the underlying block of memory. Afterwards GraphicBufferMapper::getTransportSize is called to record the size in the mapper. Note the key parameter of allocate, handle, which comes from ANativeWindowBuffer:
//http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/nativebase/include/nativebase/nativebase.h#96
const native_handle_t* handle;
// the struct is defined in system/core/libcutils/include/cutils/native_handle.h
//http://aosp.opersys.com/xref/android-12.0.0_r2/xref/system/core/libcutils/include/cutils/native_handle.h#34
typedef struct native_handle
{
int version; /* sizeof(native_handle_t) */
int numFds; /* number of file-descriptors at &data[0] */
int numInts; /* number of ints at &data[numFds] */
#if defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wzero-length-array"
#endif
int data[0]; /* numFds + numInts ints */
#if defined(__clang__)
#pragma clang diagnostic pop
#endif
} native_handle_t;
native_handle_t is in effect the handle of the GraphicBuffer.
This handle is what the official documentation means when it says buffers are always passed around by handle.
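To make the layout concrete: data[] holds the numFds file descriptors first, followed by the numInts integers. The snippet below re-declares an equivalent struct locally just for illustration (the zero-length array relies on the same GCC/Clang extension the real header uses), and the fd and metadata values are made up.
// How data[] is laid out: numFds file descriptors followed by numInts ints.
#include <cstdio>
#include <cstdlib>

struct my_native_handle {          // mirrors native_handle_t's layout
    int version;
    int numFds;
    int numInts;
    int data[0];                   // numFds fds, then numInts ints (compiler extension)
};

int main() {
    int numFds = 1, numInts = 3;
    auto* h = static_cast<my_native_handle*>(
        std::malloc(sizeof(my_native_handle) + sizeof(int) * (numFds + numInts)));
    h->version = sizeof(my_native_handle);
    h->numFds = numFds;
    h->numInts = numInts;
    h->data[0] = 42;                                     // pretend fd
    h->data[1] = 800; h->data[2] = 600; h->data[3] = 0;  // pretend metadata ints

    const int* fds  = &h->data[0];
    const int* ints = &h->data[h->numFds];
    std::printf("fd=%d w=%d h=%d flags=%d\n", fds[0], ints[0], ints[1], ints[2]);
    std::free(h);
    return 0;
}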
3. GraphicBufferAllocator
Now let's look at how GraphicBufferAllocator is initialized.
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/GraphicBufferAllocator.cpp
ANDROID_SINGLETON_STATIC_INSTANCE( GraphicBufferAllocator )
Mutex GraphicBufferAllocator::sLock;
KeyedVector<buffer_handle_t,
GraphicBufferAllocator::alloc_rec_t> GraphicBufferAllocator::sAllocList;
GraphicBufferAllocator::GraphicBufferAllocator() : mMapper(GraphicBufferMapper::getInstance()) {
// load gralloc allocators from the newest version down, stopping at the first that succeeds
mAllocator = std::make_unique<const Gralloc4Allocator>(
reinterpret_cast<const Gralloc4Mapper&>(mMapper.getGrallocMapper()));
// create the Gralloc4Allocator
...
}
The ANDROID_SINGLETON_STATIC_INSTANCE macro simply makes this a singleton. The constructor does two things:
1. initialize the GraphicBufferMapper;
2. create the Gralloc4Allocator.
3.1 GraphicBufferMapper
Let's see what the GraphicBufferMapper initialization does.
//http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/GraphicBufferMapper.cpp
GraphicBufferMapper::GraphicBufferMapper() {
mMapper = std::make_unique<const Gralloc4Mapper>();
...
}
Initialization of Gralloc4Mapper:
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/Gralloc4.cpp#90
Gralloc4Mapper::Gralloc4Mapper() {
mMapper = IMapper::getService();
...
}
Essentially this talks to hwservicemanager in the HAL layer to obtain the IMapper service.
Finally, let's look at IMapper.hal to see which methods this HAL exposes to the upper layers; here are the important ones.
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/hardware/interfaces/graphics/mapper/4.0/IMapper.hal#23
interface IMapper {
// BufferDescriptorInfo describes the attributes of a graphics buffer (width, height, format, ...)
struct BufferDescriptorInfo {
string name; // name of the buffer, used for debugging/tracing
uint32_t width; // number of pixel columns in the allocated buffer; note it does not give the offset between the same column on adjacent rows, which is the stride
uint32_t height; // number of pixel rows in the allocated buffer
uint32_t layerCount; // number of image layers in the allocated buffer
PixelFormat format; // pixel format (see frameworks/native/libs/ui/include/ui/PixelFormat.h)
bitfield<BufferUsage> usage; // buffer usage flags (see frameworks/native/libs/ui/include/ui/GraphicBuffer.h)
uint64_t reservedSize; // size in bytes of the reserved region associated with the buffer
};
/**
* Creates a buffer descriptor that IAllocator can use to allocate buffers.
* It mainly does two things:
* 1. validates the parameters (whether the device supports them);
* 2. repackages the BufferDescriptorInfo, essentially turning it into a byte stream that can be passed to IAllocator.
*/
createDescriptor(BufferDescriptorInfo description)
generates (Error error,
BufferDescriptor descriptor);
/**
* Converts a raw buffer handle into an imported buffer handle so it can be used in the calling process.
* When a GraphicBuffer allocated by another process is passed to the current process, this call maps it
* into the current process and prepares it for the subsequent lock.
*/
importBuffer(handle rawHandle) generates (Error error, pointer buffer);
/**
* A buffer handle returned by importBuffer() must be freed with freeBuffer() once it is no longer used.
*/
freeBuffer(pointer buffer) generates (Error error);
/**
* Locks the given accessRegion of the buffer for the specified CPU usage; after lock the buffer can be read and written.
*/
lock(pointer buffer,
uint64_t cpuUsage,
Rect accessRegion,
handle acquireFence)
generates (Error error,
pointer data);
/**
* Unlocks the buffer to indicate that all CPU access to it has finished.
*/
unlock(pointer buffer) generates (Error error, handle releaseFence);
/**
* Gets the buffer metadata for the given MetadataType.
*/
get(pointer buffer, MetadataType metadataType)
generates (Error error,
vec<uint8_t> metadata);
/**
* Sets the buffer metadata for the given MetadataType.
*/
set(pointer buffer, MetadataType metadataType, vec<uint8_t> metadata)
generates (Error error);
};
1. importBuffer: produce a usable buffer.
2. freeBuffer: release the buffer.
3. lock: lock the buffer.
4. unlock: unlock the buffer.
Through the IMapper service, the graphics memory referenced by a buffer handle gets mapped into the running process. Accessing buffer data generally follows this flow:
importBuffer -> lock -> read/write the GraphicBuffer -> unlock -> freeBuffer.
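On the application side the same pattern is visible through the public NDK AHardwareBuffer API (the NDK counterpart of GraphicBuffer, available since API level 26 and linked via libnativewindow). This is only a rough usage sketch of allocate -> lock -> write -> unlock -> release, not the framework's internal path, and the function name is made up:
// NDK-level sketch of the allocate -> lock -> write -> unlock -> release flow.
#include <android/hardware_buffer.h>
#include <cstdint>
#include <cstring>

bool drawSolidColor() {
    AHardwareBuffer_Desc desc = {};
    desc.width  = 256;
    desc.height = 256;
    desc.layers = 1;
    desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
    desc.usage  = AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
                  AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;

    AHardwareBuffer* buffer = nullptr;
    if (AHardwareBuffer_allocate(&desc, &buffer) != 0) return false;

    void* addr = nullptr;  // CPU-visible mapping of the graphics buffer
    if (AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN,
                             -1 /* no fence to wait on */, nullptr, &addr) != 0) {
        AHardwareBuffer_release(buffer);
        return false;
    }

    AHardwareBuffer_describe(buffer, &desc);                  // stride may differ from width
    std::memset(addr, 0xFF, desc.stride * desc.height * 4);   // fill with white

    AHardwareBuffer_unlock(buffer, nullptr);                   // CPU access finished
    AHardwareBuffer_release(buffer);                           // drop our reference
    return true;
}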
3.2 Gralloc4Allocator
The GraphicBufferAllocator constructor creates a Gralloc4Allocator object, passing a Gralloc4Mapper object as a parameter:
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/Gralloc4.cpp
Gralloc4Allocator::Gralloc4Allocator(const Gralloc4Mapper& mapper) : mMapper(mapper) {
mAllocator = IAllocator::getService();
if (mAllocator == nullptr) {
ALOGW("allocator 4.x is not supported");
return;
}
}
The Gralloc4Allocator constructor obtains the gralloc-allocator HAL service, which is a binderized HAL service.
For now the takeaway is simply: through GraphicBufferAllocator and Gralloc4Allocator we can use the functionality of the gralloc-allocator HAL.
The GraphicBuffer class diagram is shown below:
- GraphicBuffer: corresponds to a graphics buffer allocated by gralloc (it may also be ordinary memory, depending on the gralloc implementation). It extends the ANativeWindowBuffer struct, and its core member is the handle pointing at the graphics memory (native_handle_t* handle). The graphics buffer itself is shared across processes; what travels across processes are the GraphicBuffer's key attributes, so the receiving process can rebuild a GraphicBuffer that points at the same underlying graphics memory.
- GraphicBufferAllocator: talks down to the gralloc allocator HAL service; it is a per-process singleton responsible for allocating graphics buffers shared between processes.
- GraphicBufferMapper: talks down to the gralloc mapper HAL service; it is a per-process singleton responsible for mapping GraphicBuffers allocated by GraphicBufferAllocator into the current process's address space.
The concrete allocation done by GraphicBufferAllocator lives in the HAL layer, and so does the mapping GraphicBufferMapper performs; we will not go deeper into them here.
Now let's see how a GraphicBuffer is shared across processes.
3.3 Sharing a GraphicBuffer across processes
In the graphics system the producer and the final consumer are usually not in the same process, so the GraphicBuffer has to cross process boundaries to share its data. Let's first summarize the flow:
First, the producing process uses GraphicBuffer::flatten to store the key attributes of the ANativeWindowBuffer into two arrays, buffer and fds; this is really just serialization before the Binder transfer.
Second, buffer and fds are transferred across processes, normally via Binder IPC.
Then, the consuming process uses GraphicBuffer::unflatten to rebuild an ANativeWindowBuffer in its own process; the key step is rebuilding the ANativeWindowBuffer.handle member, which effectively maps the producer's GraphicBuffer into the consumer process.
Finally, the GraphicBuffer is operated on following the basic importBuffer -> lock -> read/write -> unlock -> freeBuffer flow; a toy version of this split-and-rebuild idea is sketched below.
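The essence of flatten/unflatten is: plain integers go into one array, file descriptors go into another, both are shipped, and the receiver rebuilds an equivalent handle. The toy version below (no Binder, no GraphicBuffer; all names are made up) only demonstrates that idea:
// Toy flatten/unflatten: ints go into 'buf', fds go into 'fds', and the
// receiving side rebuilds an equivalent handle from the two arrays.
#include <cstdio>
#include <vector>

struct ToyHandle {
    std::vector<int> fds;   // e.g. the dma-buf / gralloc fd(s)
    std::vector<int> ints;  // e.g. width, height, stride, format...
};

void flatten(const ToyHandle& h, std::vector<int>& buf, std::vector<int>& fds) {
    buf.clear(); fds.clear();
    buf.push_back(static_cast<int>(h.ints.size()));
    buf.insert(buf.end(), h.ints.begin(), h.ints.end());
    fds = h.fds;                       // fds travel on a separate channel
}

ToyHandle unflatten(const std::vector<int>& buf, const std::vector<int>& fds) {
    ToyHandle h;
    int numInts = buf[0];
    h.ints.assign(buf.begin() + 1, buf.begin() + 1 + numInts);
    h.fds = fds;                       // in real life these are *translated* fds
    return h;
}

int main() {
    ToyHandle producerSide{{42}, {1080, 1920, 1088, 1 /*pretend format*/}};
    std::vector<int> buf, fds;
    flatten(producerSide, buf, fds);
    ToyHandle consumerSide = unflatten(buf, fds);
    std::printf("rebuilt: fd=%d %dx%d stride=%d\n",
                consumerSide.fds[0], consumerSide.ints[0],
                consumerSide.ints[1], consumerSide.ints[2]);
    return 0;
}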
Now let's walk through this flow in the code.
The GraphicBuffer's pixel data is far too large to go through a Binder transaction, so how can it be returned over Binder at all? When we call requestBuffer on IGraphicBufferProducer, the call reaches BnGraphicBufferProducer::onTransact and finally BufferQueueProducer::requestBuffer.
// frameworks/native/libs/gui/IGraphicBufferProducer.cpp
class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>
{
...
virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
Parcel data, reply;
data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
data.writeInt32(bufferIdx);
status_t result =remote()->transact(REQUEST_BUFFER, data, &reply);
...
bool nonNull = reply.readInt32();
if (nonNull) {
*buf = new GraphicBuffer();
result = reply.read(**buf);
if(result != NO_ERROR) {
(*buf).clear();
return result;
}
}
result = reply.readInt32();
return result;
}
}
// frameworks/native/libs/gui/IGraphicBufferProducer.cpp
status_t BnGraphicBufferProducer::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
switch(code) {
case REQUEST_BUFFER: {
CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
int bufferIdx = data.readInt32();
sp<GraphicBuffer> buffer;
int result = requestBuffer(bufferIdx, &buffer);
reply->writeInt32(buffer != nullptr);
if (buffer != nullptr) {
reply->write(*buffer);
}
reply->writeInt32(result);
return NO_ERROR;
}
...
}
return BBinder::onTransact(code, data, reply, flags);
}
// frameworks/native/libs/gui/BufferQueueProducer.cpp
status_t BufferQueueProducer::requestBuffer(int slot, sp<GraphicBuffer>* buf) {
...
mSlots[slot].mRequestBufferCalled = true;
*buf = mSlots[slot].mGraphicBuffer;
return NO_ERROR;
}
From requestBuffer to BufferQueueProducer::requestBuffer, this is exactly a flatten -> unflatten -> importBuffer -> lock -> read/write GraphicBuffer sequence.
In this Binder exchange the SurfaceFlinger side writes an fd, the app side reads back a translated fd', and the Binder driver performs the fd translation in between.
How exactly? First look at Parcel's reply.read and reply.write.
//http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/binder/Parcel.cpp
status_t Parcel::read(FlattenableHelperInterface& val) const
{
...
err = val.unflatten(buf, len, fds, fd_count);
...
}
status_t Parcel::write(const FlattenableHelperInterface& val)
{
...
err = val.flatten(buf, len, fds, fd_count);
...
}
requestBuffer ----> BnGraphicBufferProducer::onTransact ----> reply->write(*buffer) ----> Parcel::write ----> val.flatten(buf, len, fds, fd_count) ----> GraphicBuffer::flatten.
Here val is the GraphicBuffer produced by BufferQueueProducer::requestBuffer.
// frameworks/native/libs/ui/GraphicBuffer.cpp
status_t GraphicBuffer::flatten(void*& buffer, size_t& size, int*& fds, size_t& count) const {
...
int32_t* buf = static_cast<int32_t*>(buffer);
buf[0] = 'GB01';
buf[1] = width;
buf[2] = height;
...
if (handle) {
...
memcpy(fds, handle->data, static_cast<size_t>(mTransportNumFds) * sizeof(int));
memcpy(buf + 13, handle->data + handle->numFds,
static_cast<size_t>(mTransportNumInts) * sizeof(int));
}
...
}
memcpy is the memory-copy function; here it copies the contents of handle->data into fds.
void *memcpy(void *dest, const void *src, size_t n);
It copies n bytes starting at the source address src to the destination address dest; memcpy can copy anything: character arrays, integers, structs, objects, and so on.
The handle here is ANativeWindowBuffer::handle, which is initialized when initWithHandle is called.
// frameworks/native/libs/ui/GraphicBuffer.cpp
GraphicBuffer::GraphicBuffer(const native_handle_t* inHandle, HandleWrapMethod method,
uint32_t inWidth, uint32_t inHeight, PixelFormat inFormat,
uint32_t inLayerCount, uint64_t inUsage, uint32_t inStride)
: GraphicBuffer() {
mInitCheck = initWithHandle(inHandle, method, inWidth, inHeight, inFormat, inLayerCount,
inUsage, inStride);
}
ANativeWindowBuffer::handle = inHandle;
native_handle_t is the upper-layer abstraction that can be passed between processes; it wraps private_handle_t.
numFds = 1 means there is one file descriptor: fd.
numInts = 8 means it is followed by eight int values: magic, flags, size, offset, base, lockState, writeOwner, pid.
The server process writes the handle member of the GraphicBuffer it created back to the client process that requested the graphics buffer, and the client can then read the information describing that buffer as follows.
After the server returns the reply, the app side reads the native_handle_t data via reply.read(**buf).
// frameworks/native/libs/gui/IGraphicBufferProducer.cpp
virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
...
if (nonNull) {
//create a new GraphicBuffer
*buf = new GraphicBuffer();
result = reply.read(**buf);
if(result != NO_ERROR) {
(*buf).clear();
return result;
}
}
...
}
reply.read(**buf)---->Parcel::read---->val.unflatten(buf, len, fds, fd_count)---->GraphicBuffer::unflatten
//http://aosp.opersys.com/xref/android-12.0.0_r2/xref/frameworks/native/libs/ui/GraphicBuffer.cpp#101
status_t GraphicBuffer::unflatten(void const*& buffer, size_t& size, int const*& fds,
size_t& count) {
...
native_handle* h =
native_handle_create(static_cast<int>(numFds), static_cast<int>(numInts));
...
memcpy(h->data, fds, numFds * sizeof(int));
memcpy(h->data + numFds, buf + flattenWordCount, numInts * sizeof(int));
handle = h;
...
if (handle != nullptr) {
buffer_handle_t importedHandle;
status_t err = mBufferMapper.importBuffer(handle, uint32_t(width), uint32_t(height),
uint32_t(layerCount), format, usage, uint32_t(stride), &importedHandle);
...
native_handle_close(handle);
native_handle_delete(const_cast<native_handle_t*>(handle));
handle = importedHandle;
mBufferMapper.getTransportSize(handle, &mTransportNumFds, &mTransportNumInts);
}
...
}
Two functions matter here: native_handle_create and importBuffer.
native_handle_create constructs a native_handle object in the app-side process.
// http://aosp.opersys.com/xref/android-12.0.0_r2/xref/system/core/libcutils/native_handle.cpp#38
native_handle_t* native_handle_create(int numFds, int numInts) {
if (numFds < 0 || numInts < 0 || numFds > NATIVE_HANDLE_MAX_FDS ||
numInts > NATIVE_HANDLE_MAX_INTS) {
errno = EINVAL;
return NULL;
}
size_t mallocSize = sizeof(native_handle_t) + (sizeof(int) * (numFds + numInts));
native_handle_t* h = static_cast<native_handle_t*>(malloc(mallocSize));
if (h) {
h->version = sizeof(native_handle_t);
h->numFds = numFds;
h->numInts = numInts;
}
return h;
}
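A short usage sketch of these libcutils helpers, assuming an Android build environment that provides cutils/native_handle.h; the fd and the metadata values are stand-ins:
// Usage sketch for the libcutils handle helpers: close() releases the fds the
// handle carries, delete() frees the struct allocated by native_handle_create.
#include <cutils/native_handle.h>
#include <fcntl.h>
#include <unistd.h>

native_handle_t* makeToyHandle() {
    native_handle_t* h = native_handle_create(1 /*numFds*/, 3 /*numInts*/);
    if (h == nullptr) return nullptr;
    h->data[0] = open("/dev/null", O_RDWR);  // stand-in for a gralloc/dma-buf fd
    h->data[1] = 1080;                       // pretend width
    h->data[2] = 1920;                       // pretend height
    h->data[3] = 0;                          // pretend flags
    return h;
}

void destroyToyHandle(native_handle_t* h) {
    native_handle_close(h);    // closes data[0..numFds)
    native_handle_delete(h);   // frees the allocation itself
}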
importBuffer is implemented in the HAL layer; here is its interface.
// IMapper.hal
/**
* Converts a raw buffer handle into an imported buffer handle so it can be used in the calling process.
* When a GraphicBuffer allocated by another process is passed to the current process, this call maps it into the current process and prepares it for the subsequent lock.
*/
importBuffer(handle rawHandle) generates (Error error, pointer buffer);
Earlier Android versions called mBufferMapper.registerBuffer instead; its job was to map the created graphics buffer into the client process's address space so the client could draw directly into the mapped region.
After the client process has read the native_handle describing the graphics buffer created by the server, it uses the registerBuffer function of the GraphicBufferMapper object mBufferMapper to map that buffer into the client's address space.
Earlier, GraphicBuffer::flatten ran memcpy(fds, handle->data, static_cast<size_t>(mTransportNumFds) * sizeof(int)), copying handle->data into fds; here, after GraphicBuffer::unflatten, memcpy(h->data, fds, numFds * sizeof(int)) copies fds back into handle->data.
That brings the whole handle across. importBuffer is then called to turn the handle from a hidl_handle into a usable private_handle_t.
At this point the private_handle_t data (magic, flags, size, offset, base, lockState, writeOwner, pid) has been copied to the client process. The server (SurfaceFlinger) allocated a region of memory as the Surface's drawing buffer; how does the client draw into it? The two processes need to share that memory, which is exactly what GraphicBufferMapper does by mapping the allocated graphics buffer into the client's address space. With a shared buffer, both sides operate on the same block of physical memory.
There is another important piece here, ION, which we have not gone into.
Source: https://www.jianshu.com/p/3bfc0053d254
In general, drawing a frame breaks down into these steps:
1. dequeueBuffer: obtain a slot for the buffer, or produce one. Under the hood IGraphicBufferProducer copies the GraphicBuffer handle via flatten, and that handle is then used to locate the underlying shared memory.
2. lock: bind the buffer's shared-memory address; through the handle, the memory allocated in the SurfaceFlinger process is found in GrallocImportedBufferPool.
3. queueBuffer: put the buffer into mActiveBuffer, recompute the dequeue and acquire counts, push the GraphicBuffer onto mQueue for consumption, and finally invoke the onFrameAvailable callback to notify the consumer.
4. unlock: unlock the buffer and undo the shared-memory mapping.
Several fd translations are involved here; don't worry about them for now. The key point is that a piece of shared memory is allocated through ION and the fd is handed to the app process so it can map the same physical memory.
In more recent versions ION itself has been replaced as well: in Android 12, GKI 2.0 replaces the ION allocator with DMA-BUF heaps.
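For reference, a user-space DMA-BUF heap allocation boils down to a single ioctl on /dev/dma_heap/<heap-name>. The sketch below assumes a kernel (5.6 or later) that exposes the system heap and that the caller has permission to open it; it is an illustration, not gralloc's actual allocation path:
// Minimal user-space sketch of a DMA-BUF heap allocation. The returned fd is
// a dma-buf that can be mmap'ed here or passed to another process.
#include <linux/dma-heap.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int heap = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
    if (heap < 0) { perror("open dma_heap"); return 1; }

    dma_heap_allocation_data alloc = {};
    alloc.len = 4096;                      // one page
    alloc.fd_flags = O_RDWR | O_CLOEXEC;
    if (ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0) {
        perror("DMA_HEAP_IOCTL_ALLOC");
        return 1;
    }

    // Map the dma-buf into this process; a consumer receiving alloc.fd over
    // IPC could map the very same physical pages.
    void* addr = mmap(nullptr, alloc.len, PROT_READ | PROT_WRITE, MAP_SHARED,
                      alloc.fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }
    std::printf("dma-buf fd=%d mapped at %p\n", (int)alloc.fd, addr);

    munmap(addr, alloc.len);
    close(alloc.fd);
    close(heap);
    return 0;
}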
4. queueBuffer
Once the client/app has obtained a usable buffer via dequeueBuffer, it can fill it with data. When it is done, it hands the buffer back to the BufferQueue by calling queueBuffer.
status_t BufferQueueProducer::queueBuffer(int slot,
const QueueBufferInput &input, QueueBufferOutput *output) {
ATRACE_CALL();
ATRACE_BUFFER_INDEX(slot);
int64_t requestedPresentTimestamp;
bool isAutoTimestamp;
android_dataspace dataSpace;
Rect crop(Rect::EMPTY_RECT);
int scalingMode;
uint32_t transform;
uint32_t stickyTransform;
sp<Fence> acquireFence;
bool getFrameTimestamps = false;
// save the buffer information packed into the input passed over from Surface
input.deflate(&requestedPresentTimestamp, &isAutoTimestamp, &dataSpace,
&crop, &scalingMode, &transform, &acquireFence, &stickyTransform,
&getFrameTimestamps);
const Region& surfaceDamage = input.getSurfaceDamage();
const HdrMetadata& hdrMetadata = input.getHdrMetadata();
if (acquireFence == nullptr) {
BQ_LOGE("queueBuffer: fence is NULL");
return BAD_VALUE;
}
auto acquireFenceTime = std::make_shared<FenceTime>(acquireFence);
switch (scalingMode) {
case NATIVE_WINDOW_SCALING_MODE_FREEZE:
case NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW:
case NATIVE_WINDOW_SCALING_MODE_SCALE_CROP:
case NATIVE_WINDOW_SCALING_MODE_NO_SCALE_CROP:
break;
default:
BQ_LOGE("queueBuffer: unknown scaling mode %d", scalingMode);
return BAD_VALUE;
}
// callback interfaces used to notify the consumer
sp<IConsumerListener> frameAvailableListener;
sp<IConsumerListener> frameReplacedListener;
int callbackTicket = 0;
uint64_t currentFrameNumber = 0;
BufferItem item;
{ // Autolock scope
std::lock_guard<std::mutex> lock(mCore->mMutex);
// has the BufferQueue been abandoned?
if (mCore->mIsAbandoned) {
BQ_LOGE("queueBuffer: BufferQueue has been abandoned");
return NO_INIT;
}
// does the BufferQueue have a connected producer?
if (mCore->mConnectedApi == BufferQueueCore::NO_CONNECTED_API) {
BQ_LOGE("queueBuffer: BufferQueue has no connected producer");
return NO_INIT;
}
// is the slot index valid, and is the BufferSlot in the DEQUEUED state?
if (slot < 0 || slot >= BufferQueueDefs::NUM_BUFFER_SLOTS) {
BQ_LOGE("queueBuffer: slot index %d out of range [0, %d)",
slot, BufferQueueDefs::NUM_BUFFER_SLOTS);
return BAD_VALUE;
} else if (!mSlots[slot].mBufferState.isDequeued()) {
BQ_LOGE("queueBuffer: slot %d is not owned by the producer "
"(state = %s)", slot, mSlots[slot].mBufferState.string());
return BAD_VALUE;
} else if (!mSlots[slot].mRequestBufferCalled) { // was requestBuffer called?
BQ_LOGE("queueBuffer: slot %d was queued without requesting "
"a buffer", slot);
return BAD_VALUE;
}
// If shared buffer mode has just been enabled, cache the slot of the
// first buffer that is queued and mark it as the shared buffer.
if (mCore->mSharedBufferMode && mCore->mSharedBufferSlot ==
BufferQueueCore::INVALID_BUFFER_SLOT) {
mCore->mSharedBufferSlot = slot;
mSlots[slot].mBufferState.mShared = true;
}
BQ_LOGV("queueBuffer: slot=%d/%" PRIu64 " time=%" PRIu64 " dataSpace=%d"
" validHdrMetadataTypes=0x%x crop=[%d,%d,%d,%d] transform=%#x scale=%s",
slot, mCore->mFrameCounter + 1, requestedPresentTimestamp, dataSpace,
hdrMetadata.validTypes, crop.left, crop.top, crop.right, crop.bottom,
transform,
BufferItem::scalingModeName(static_cast<uint32_t>(scalingMode)));
// the GraphicBuffer being queued
const sp<GraphicBuffer>& graphicBuffer(mSlots[slot].mGraphicBuffer);
// build a rect from the current GraphicBuffer's width and height
Rect bufferRect(graphicBuffer->getWidth(), graphicBuffer->getHeight());
// create the crop rectangle
Rect croppedRect(Rect::EMPTY_RECT);
// the crop region becomes the intersection of crop and bufferRect
crop.intersect(bufferRect, &croppedRect);
if (croppedRect != crop) {
BQ_LOGE("queueBuffer: crop rect is not contained within the "
"buffer in slot %d", slot);
return BAD_VALUE;
}
// Override UNKNOWN dataspace with consumer default
if (dataSpace == HAL_DATASPACE_UNKNOWN) {
dataSpace = mCore->mDefaultBufferDataSpace;
}
mSlots[slot].mFence = acquireFence;
// mark the queued BufferSlot as QUEUED
mSlots[slot].mBufferState.queue();
// Increment the frame counter and store a local version of it
// for use outside the lock on mCore->mMutex.
++mCore->mFrameCounter;
currentFrameNumber = mCore->mFrameCounter;
mSlots[slot].mFrameNumber = currentFrameNumber;
// pack the BufferSlot's information into a BufferItem, which will be pushed onto the queue below
item.mAcquireCalled = mSlots[slot].mAcquireCalled;
item.mGraphicBuffer = mSlots[slot].mGraphicBuffer;
item.mCrop = crop;
item.mTransform = transform &
~static_cast<uint32_t>(NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY);
item.mTransformToDisplayInverse =
(transform & NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY) != 0;
item.mScalingMode = static_cast<uint32_t>(scalingMode);
item.mTimestamp = requestedPresentTimestamp;
item.mIsAutoTimestamp = isAutoTimestamp;
item.mDataSpace = dataSpace;
item.mHdrMetadata = hdrMetadata;
item.mFrameNumber = currentFrameNumber;
item.mSlot = slot;
item.mFence = acquireFence;
item.mFenceTime = acquireFenceTime;
item.mIsDroppable = mCore->mAsyncMode ||
(mConsumerIsSurfaceFlinger && mCore->mQueueBufferCanDrop) ||
(mCore->mLegacyBufferDrop && mCore->mQueueBufferCanDrop) ||
(mCore->mSharedBufferMode && mCore->mSharedBufferSlot == slot);
item.mSurfaceDamage = surfaceDamage;
item.mQueuedBuffer = true;
item.mAutoRefresh = mCore->mSharedBufferMode && mCore->mAutoRefresh;
item.mApi = mCore->mConnectedApi;
mStickyTransform = stickyTransform;
// Cache the shared buffer data so that the BufferItem can be recreated.
if (mCore->mSharedBufferMode) {
mCore->mSharedBufferCache.crop = crop;
mCore->mSharedBufferCache.transform = transform;
mCore->mSharedBufferCache.scalingMode = static_cast<uint32_t>(
scalingMode);
mCore->mSharedBufferCache.dataspace = dataSpace;
}
output->bufferReplaced = false;
if (mCore->mQueue.empty()) {
// if mQueue is empty, push straight onto mQueue; no need to worry about blocking
// When the queue is empty, we can ignore mDequeueBufferCannotBlock
// and simply queue this buffer
mCore->mQueue.push_back(item);
//grab BufferQueueCore's callback interface; onFrameAvailable will be called on it below to notify the consumer
frameAvailableListener = mCore->mConsumerListener;
} else {
// When the queue is not empty, we need to look at the last buffer
// in the queue to see if we need to replace it
const BufferItem& last = mCore->mQueue.itemAt(
mCore->mQueue.size() - 1);
if (last.mIsDroppable) {
// check whether the last BufferItem can be dropped
if (!last.mIsStale) {
mSlots[last.mSlot].mBufferState.freeQueued();
// After leaving shared buffer mode, the shared buffer will
// still be around. Mark it as no longer shared if this
// operation causes it to be free.
if (!mCore->mSharedBufferMode &&
mSlots[last.mSlot].mBufferState.isFree()) {
mSlots[last.mSlot].mBufferState.mShared = false;
}
// Don't put the shared buffer on the free list.
if (!mSlots[last.mSlot].mBufferState.isShared()) {
mCore->mActiveBuffers.erase(last.mSlot);
mCore->mFreeBuffers.push_back(last.mSlot);
output->bufferReplaced = true;
}
}
// Make sure to merge the damage rect from the frame we're about
// to drop into the new frame's damage rect.
if (last.mSurfaceDamage.bounds() == Rect::INVALID_RECT ||
item.mSurfaceDamage.bounds() == Rect::INVALID_RECT) {
item.mSurfaceDamage = Region::INVALID_REGION;
} else {
item.mSurfaceDamage |= last.mSurfaceDamage;
}
// replace the last BufferItem in the queue with the current one
// Overwrite the droppable buffer with the incoming one
mCore->mQueue.editItemAt(mCore->mQueue.size() - 1) = item;
//grab the callback interface; since this is a replacement, onFrameReplaced will be called instead
frameReplacedListener = mCore->mConsumerListener;
} else {
// push straight onto mQueue
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
}
}
// the buffer has been queued; enqueueing is complete
mCore->mBufferHasBeenQueued = true;
// mDequeueCondition is a C++ condition variable used for wait/wake-up; notify_all here wakes any thread blocked in wait
mCore->mDequeueCondition.notify_all();
mCore->mLastQueuedSlot = slot;
//output parameters, used later by Surface
output->width = mCore->mDefaultWidth;
output->height = mCore->mDefaultHeight;
output->transformHint = mCore->mTransformHintInUse = mCore->mTransformHint;
output->numPendingBuffers = static_cast<uint32_t>(mCore->mQueue.size());
output->nextFrameNumber = mCore->mFrameCounter + 1;
ATRACE_INT(mCore->mConsumerName.string(),
static_cast<int32_t>(mCore->mQueue.size()));
#ifndef NO_BINDER
mCore->mOccupancyTracker.registerOccupancyChange(mCore->mQueue.size());
#endif
// Take a ticket for the callback functions
callbackTicket = mNextCallbackTicket++;
VALIDATE_CONSISTENCY();
} // Autolock scope
// It is okay not to clear the GraphicBuffer when the consumer is SurfaceFlinger because
// it is guaranteed that the BufferQueue is inside SurfaceFlinger's process and
// there will be no Binder call
if (!mConsumerIsSurfaceFlinger) {
item.mGraphicBuffer.clear();
}
// Update and get FrameEventHistory.
nsecs_t postedTime = systemTime(SYSTEM_TIME_MONOTONIC);
NewFrameEventsEntry newFrameEventsEntry = {
currentFrameNumber,
postedTime,
requestedPresentTimestamp,
std::move(acquireFenceTime)
};
addAndGetFrameTimestamps(&newFrameEventsEntry,
getFrameTimestamps ? &output->frameTimestamps : nullptr);
// Call back without the main BufferQueue lock held, but with the callback
// lock held so we can ensure that callbacks occur in order
int connectedApi;
sp<Fence> lastQueuedFence;
{ // scope for the lock
std::unique_lock<std::mutex> lock(mCallbackMutex);
while (callbackTicket != mCurrentCallbackTicket) {
mCallbackCondition.wait(lock);
}
//notify the consumer; which callback is used depends on whether the last BufferItem was replaced above
if (frameAvailableListener != nullptr) {
frameAvailableListener->onFrameAvailable(item);
} else if (frameReplacedListener != nullptr) {
frameReplacedListener->onFrameReplaced(item);
}
connectedApi = mCore->mConnectedApi;
lastQueuedFence = std::move(mLastQueueBufferFence);
mLastQueueBufferFence = std::move(acquireFence);
mLastQueuedCrop = item.mCrop;
mLastQueuedTransform = item.mTransform;
++mCurrentCallbackTicket;
mCallbackCondition.notify_all();
}
// Wait without lock held
if (connectedApi == NATIVE_WINDOW_API_EGL) {
// Waiting here allows for two full buffers to be queued but not a
// third. In the event that frames take varying time, this makes a
// small trade-off in favor of latency rather than throughput.
lastQueuedFence->waitForever("Throttling EGL Production");
}
return NO_ERROR;
}
The queueBuffer flow mainly does two things:
1. set the corresponding BufferSlot's state to QUEUED;
2. create a BufferItem, copy the GraphicBuffer's information into it, and enqueue it onto BufferQueueCore's mQueue, so the consumer can later take GraphicBuffers out of mQueue in FIFO order.
Once the producer has written its data and handed the buffer back to the BufferQueue, the consumer is notified that it can be used.
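One detail worth separating out: mCore->mDequeueCondition.notify_all() does not notify the consumer (onFrameAvailable/onFrameReplaced do that); it wakes producer threads blocked inside dequeueBuffer waiting for a slot to become available. The standard C++ wait/notify pattern behind it looks roughly like this (a stripped-down illustration with made-up names, not the real BufferQueue locking):
// Stripped-down illustration of the wait/notify pattern behind mDequeueCondition.
#include <condition_variable>
#include <mutex>
#include <thread>
#include <cstdio>

std::mutex m;
std::condition_variable dequeueCondition;
int freeBuffers = 0;                      // how many buffers can be dequeued

void producerWaitForFreeBuffer() {
    std::unique_lock<std::mutex> lock(m);
    dequeueCondition.wait(lock, [] { return freeBuffers > 0; });
    --freeBuffers;                        // "dequeue" one buffer
    std::printf("producer: got a buffer\n");
}

void consumerReleaseBuffer() {
    {
        std::lock_guard<std::mutex> lock(m);
        ++freeBuffers;                    // a buffer was handed back / consumed
    }
    dequeueCondition.notify_all();        // wake any blocked dequeue
}

int main() {
    std::thread producer(producerWaitForFreeBuffer);
    consumerReleaseBuffer();
    producer.join();
    return 0;
}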
5. acquireBuffer and releaseBuffer
BufferQueueConsumer, representing the consumer side, obtains an image buffer with acquireBuffer and releases it with releaseBuffer.
5.1 acquireBuffer
status_t BufferQueueConsumer::acquireBuffer(BufferItem* outBuffer,
nsecs_t expectedPresent, uint64_t maxFrameNumber) {
ATRACE_CALL();
int numDroppedBuffers = 0;
sp<IProducerListener> listener;
{
std::unique_lock<std::mutex> lock(mCore->mMutex);
// Check that the consumer doesn't currently have the maximum number of
// buffers acquired. We allow the max buffer count to be exceeded by one
// buffer so that the consumer can successfully set up the newly acquired
// buffer before releasing the old one.
// check whether the number of acquired buffers already exceeds the limit
int numAcquiredBuffers = 0;
for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isAcquired()) {
++numAcquiredBuffers;
}
}
const bool acquireNonDroppableBuffer = mCore->mAllowExtraAcquire &&
numAcquiredBuffers == mCore->mMaxAcquiredBufferCount + 1;
if (numAcquiredBuffers >= mCore->mMaxAcquiredBufferCount + 1 &&
!acquireNonDroppableBuffer) {
BQ_LOGE("acquireBuffer: max acquired buffer count reached: %d (max %d)",
numAcquiredBuffers, mCore->mMaxAcquiredBufferCount);
return INVALID_OPERATION;
}
bool sharedBufferAvailable = mCore->mSharedBufferMode &&
mCore->mAutoRefresh && mCore->mSharedBufferSlot !=
BufferQueueCore::INVALID_BUFFER_SLOT;
// In asynchronous mode the list is guaranteed to be one buffer deep,
// while in synchronous mode we use the oldest buffer.
// check whether BufferQueueCore's mQueue is empty
if (mCore->mQueue.empty() && !sharedBufferAvailable) {
return NO_BUFFER_AVAILABLE;
}
// get an iterator into BufferQueueCore's mQueue
BufferQueueCore::Fifo::iterator front(mCore->mQueue.begin());
// If expectedPresent is specified, we may not want to return a buffer yet.
// If it's specified and there's more than one buffer queued, we may want
// to drop a buffer.
// Skip this if we're in shared buffer mode and the queue is empty,
// since in that case we'll just return the shared buffer.
if (expectedPresent != 0 && !mCore->mQueue.empty()) {
// expectedPresent says when this buffer is expected to appear on screen.
// If the buffer's desired present time is earlier than expectedPresent, we acquire and return the buffer.
// If we don't want to show it until after expectedPresent, PRESENT_LATER can be returned.
// The 'expectedPresent' argument indicates when the buffer is expected
// to be presented on-screen. If the buffer's desired present time is
// earlier (less) than expectedPresent -- meaning it will be displayed
// on time or possibly late if we show it as soon as possible -- we
// acquire and return it. If we don't want to display it until after the
// expectedPresent time, we return PRESENT_LATER without acquiring it.
//
// to be safe, if expectedPresent is more than one second past the buffer's desired present time, we do not defer the acquire
// To be safe, we don't defer acquisition if expectedPresent is more
// than one second in the future beyond the desired present time
// (i.e., we'd be holding the buffer for a long time).
//
// NOTE: Code assumes monotonic time values from the system clock
// are positive.
// check whether some frames should be dropped, mainly by comparing the timestamps with expectedPresent
// Start by checking to see if we can drop frames. We skip this check if
// the timestamps are being auto-generated by Surface. If the app isn't
// generating timestamps explicitly, it probably doesn't want frames to
// be discarded based on them.
while (mCore->mQueue.size() > 1 && !mCore->mQueue[0].mIsAutoTimestamp) {
const BufferItem& bufferItem(mCore->mQueue[1]);
// If dropping entry[0] would leave us with a buffer that the
// consumer is not yet ready for, don't drop it.
if (maxFrameNumber && bufferItem.mFrameNumber > maxFrameNumber) {
break;
}
// If entry[1] is timely, drop entry[0] (and repeat). We apply an
// additional criterion here: we only drop the earlier buffer if our
// desiredPresent falls within +/- 1 second of the expected present.
// Otherwise, bogus desiredPresent times (e.g., 0 or a small
// relative timestamp), which normally mean "ignore the timestamp
// and acquire immediately", would cause us to drop frames.
//
// We may want to add an additional criterion: don't drop the
// earlier buffer if entry[1]'s fence hasn't signaled yet.
nsecs_t desiredPresent = bufferItem.mTimestamp;
// desiredPresent is more than 1 second earlier than expectedPresent, or later than expectedPresent
if (desiredPresent < expectedPresent - MAX_REASONABLE_NSEC ||
desiredPresent > expectedPresent) {
// This buffer is set to display in the near future, or
// desiredPresent is garbage. Either way we don't want to drop
// the previous buffer just to get this on the screen sooner.
BQ_LOGV("acquireBuffer: nodrop desire=%" PRId64 " expect=%"
PRId64 " (%" PRId64 ") now=%" PRId64,
desiredPresent, expectedPresent,
desiredPresent - expectedPresent,
systemTime(CLOCK_MONOTONIC));
break;
}
BQ_LOGV("acquireBuffer: drop desire=%" PRId64 " expect=%" PRId64
" size=%zu",
desiredPresent, expectedPresent, mCore->mQueue.size());
// handle the buffer that is being dropped
if (!front->mIsStale) {
// Front buffer is still in mSlots, so mark the slot as free
// mark the corresponding BufferSlot as FREE
mSlots[front->mSlot].mBufferState.freeQueued();
// After leaving shared buffer mode, the shared buffer will
// still be around. Mark it as no longer shared if this
// operation causes it to be free.
if (!mCore->mSharedBufferMode &&
mSlots[front->mSlot].mBufferState.isFree()) {
mSlots[front->mSlot].mBufferState.mShared = false;
}
// mActiveBuffers: BufferSlots that have a GraphicBuffer attached and are not FREE;
// mFreeBuffers: BufferSlots that have a GraphicBuffer attached and are FREE;
// Don't put the shared buffer on the free list
if (!mSlots[front->mSlot].mBufferState.isShared()) {
mCore->mActiveBuffers.erase(front->mSlot); // remove from mActiveBuffers
mCore->mFreeBuffers.push_back(front->mSlot);// add to mFreeBuffers
}
if (mCore->mBufferReleasedCbEnabled) {
listener = mCore->mConnectedProducerListener; // set the producer's listener
}
++numDroppedBuffers; // count how many buffers were dropped
}
mCore->mQueue.erase(front);// remove from mQueue
front = mCore->mQueue.begin();// reset front and go around the while loop again
}
// See if the front buffer is ready to be acquired
nsecs_t desiredPresent = front->mTimestamp;
bool bufferIsDue = desiredPresent <= expectedPresent ||
desiredPresent > expectedPresent + MAX_REASONABLE_NSEC;
bool consumerIsReady = maxFrameNumber > 0 ?
front->mFrameNumber <= maxFrameNumber : true;
if (!bufferIsDue || !consumerIsReady) {
BQ_LOGV("acquireBuffer: defer desire=%" PRId64 " expect=%" PRId64
" (%" PRId64 ") now=%" PRId64 " frame=%" PRIu64
" consumer=%" PRIu64,
desiredPresent, expectedPresent,
desiredPresent - expectedPresent,
systemTime(CLOCK_MONOTONIC),
front->mFrameNumber, maxFrameNumber);
ATRACE_NAME("PRESENT_LATER");
return PRESENT_LATER;
}
BQ_LOGV("acquireBuffer: accept desire=%" PRId64 " expect=%" PRId64 " "
"(%" PRId64 ") now=%" PRId64, desiredPresent, expectedPresent,
desiredPresent - expectedPresent,
systemTime(CLOCK_MONOTONIC));
}
// reaching this point means everything that should be dropped has been dropped; what remains can be displayed
int slot = BufferQueueCore::INVALID_BUFFER_SLOT;
if (sharedBufferAvailable && mCore->mQueue.empty()) {
// make sure the buffer has finished allocating before acquiring it
// shared-buffer-mode handling
mCore->waitWhileAllocatingLocked(lock);
slot = mCore->mSharedBufferSlot;
// Recreate the BufferItem for the shared buffer from the data that
// was cached when it was last queued.
outBuffer->mGraphicBuffer = mSlots[slot].mGraphicBuffer;
outBuffer->mFence = Fence::NO_FENCE;
outBuffer->mFenceTime = FenceTime::NO_FENCE;
outBuffer->mCrop = mCore->mSharedBufferCache.crop;
outBuffer->mTransform = mCore->mSharedBufferCache.transform &
~static_cast<uint32_t>(
NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY);
outBuffer->mScalingMode = mCore->mSharedBufferCache.scalingMode;
outBuffer->mDataSpace = mCore->mSharedBufferCache.dataspace;
outBuffer->mFrameNumber = mCore->mFrameCounter;
outBuffer->mSlot = slot;
outBuffer->mAcquireCalled = mSlots[slot].mAcquireCalled;
outBuffer->mTransformToDisplayInverse =
(mCore->mSharedBufferCache.transform &
NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY) != 0;
outBuffer->mSurfaceDamage = Region::INVALID_REGION;
outBuffer->mQueuedBuffer = false;
outBuffer->mIsStale = false;
outBuffer->mAutoRefresh = mCore->mSharedBufferMode &&
mCore->mAutoRefresh;
} else if (acquireNonDroppableBuffer && front->mIsDroppable) {
BQ_LOGV("acquireBuffer: front buffer is not droppable");
return NO_BUFFER_AVAILABLE;
} else {
// get the slot index from front
slot = front->mSlot;
*outBuffer = *front;
}
ATRACE_BUFFER_INDEX(slot);
BQ_LOGV("acquireBuffer: acquiring { slot=%d/%" PRIu64 " buffer=%p }",
slot, outBuffer->mFrameNumber, outBuffer->mGraphicBuffer->handle);
if (!outBuffer->mIsStale) {
mSlots[slot].mAcquireCalled = true;
// Don't decrease the queue count if the BufferItem wasn't
// previously in the queue. This happens in shared buffer mode when
// the queue is empty and the BufferItem is created above.
if (mCore->mQueue.empty()) {
mSlots[slot].mBufferState.acquireNotInQueue();
} else {
// change the BufferState to acquired
mSlots[slot].mBufferState.acquire();
}
mSlots[slot].mFence = Fence::NO_FENCE;
}
// If the buffer has previously been acquired by the consumer, set
// mGraphicBuffer to NULL to avoid unnecessarily remapping this buffer
// on the consumer side
if (outBuffer->mAcquireCalled) {
outBuffer->mGraphicBuffer = nullptr;
}
//remove this buffer from mQueue
mCore->mQueue.erase(front);
// We might have freed a slot while dropping old buffers, or the producer
// may be blocked waiting for the number of buffers in the queue to
// decrease.
mCore->mDequeueCondition.notify_all();
ATRACE_INT(mCore->mConsumerName.string(),
static_cast<int32_t>(mCore->mQueue.size()));
#ifndef NO_BINDER
mCore->mOccupancyTracker.registerOccupancyChange(mCore->mQueue.size());
#endif
VALIDATE_CONSISTENCY();
}
// callback to notify the producer
if (listener != nullptr) {
for (int i = 0; i < numDroppedBuffers; ++i) {
listener->onBufferReleased();
}
}
return NO_ERROR;
}
It mainly does the following:
1. check whether BufferQueueCore's mQueue is empty; mQueue is the container that BufferQueueProducer enqueued buffers into in queueBuffer;
2. take out the corresponding BufferSlot (applying some rules and possibly dropping some buffers);
3. change the BufferState to acquired;
4. remove the buffer from mQueue.
5.2 releaseBuffer
status_t BufferQueueConsumer::releaseBuffer(int slot, uint64_t frameNumber,
const sp<Fence>& releaseFence, EGLDisplay eglDisplay,
EGLSyncKHR eglFence) {
ATRACE_CALL();
ATRACE_BUFFER_INDEX(slot);
if (slot < 0 || slot >= BufferQueueDefs::NUM_BUFFER_SLOTS ||
releaseFence == nullptr) {
BQ_LOGE("releaseBuffer: slot %d out of range or fence %p NULL", slot,
releaseFence.get());
return BAD_VALUE;
}
sp<IProducerListener> listener;
{ // Autolock scope
std::lock_guard<std::mutex> lock(mCore->mMutex);
// If the frame number has changed because the buffer has been reallocated,
// we can ignore this releaseBuffer for the old buffer.
// Ignore this for the shared buffer where the frame number can easily
// get out of sync due to the buffer being queued and acquired at the
// same time.
if (frameNumber != mSlots[slot].mFrameNumber &&
!mSlots[slot].mBufferState.isShared()) {
return STALE_BUFFER_SLOT;
}
if (!mSlots[slot].mBufferState.isAcquired()) {
BQ_LOGE("releaseBuffer: attempted to release buffer slot %d "
"but its state was %s", slot,
mSlots[slot].mBufferState.string());
return BAD_VALUE;
}
mSlots[slot].mEglDisplay = eglDisplay;
mSlots[slot].mEglFence = eglFence;
mSlots[slot].mFence = releaseFence;
mSlots[slot].mBufferState.release();//back to the FREE state
// After leaving shared buffer mode, the shared buffer will
// still be around. Mark it as no longer shared if this
// operation causes it to be free.
if (!mCore->mSharedBufferMode && mSlots[slot].mBufferState.isFree()) {
mSlots[slot].mBufferState.mShared = false;
}
// Don't put the shared buffer on the free list.
if (!mSlots[slot].mBufferState.isShared()) {
mCore->mActiveBuffers.erase(slot);// remove from mActiveBuffers
mCore->mFreeBuffers.push_back(slot);//add to mFreeBuffers
}
if (mCore->mBufferReleasedCbEnabled) {
listener = mCore->mConnectedProducerListener; // set the listener
}
BQ_LOGV("releaseBuffer: releasing slot %d", slot);
// wake up waiting threads
mCore->mDequeueCondition.notify_all();
VALIDATE_CONSISTENCY();
} // Autolock scope
// Call back without lock held
if (listener != nullptr) {
listener->onBufferReleased(); //notify the producer
}
return NO_ERROR;
}
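Putting the consumer side together, the loop is simply: acquire the oldest queued item, use it, release it. A toy, purely illustrative version (the real consumer is SurfaceFlinger driving the BufferQueueConsumer code above; all names here are made up):
// Toy consumer loop: acquire the front of the queue, "use" it, release it.
#include <cstdio>
#include <deque>

struct ToyItem { int slot; };

int main() {
    std::deque<ToyItem> queue = {{0}, {1}, {2}};   // buffers queued by the app

    while (!queue.empty()) {
        ToyItem item = queue.front();              // acquireBuffer(): take the front item
        queue.pop_front();
        std::printf("compositing slot %d\n", item.slot);   // hand to HWC / GPU
        // releaseBuffer(): the slot goes back to FREE so the app can dequeue it again
    }
    return 0;
}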
That covers the rough BufferQueue flow. Back to the original question: how do the app and SurfaceFlinger transfer data?
I think this summary puts it well.
Source: https://www.jianshu.com/p/f96ab6646ae3
The canvas the app draws on is provided by SurfaceFlinger and is a block of shared memory; when the app obtains it, the shared memory is mapped into the app's own address space. The app draws on the canvas and hands the finished result to SurfaceFlinger; this hand-over does not copy the memory to SurfaceFlinger, it returns control of the shared memory to SurfaceFlinger.
Finally, here is a flow chart of the BufferQueue to recap.
For more source-level detail on the BufferQueue, see:
Android 12(S) 图像显示系统 - 开篇
References:
Android图形系统(八)-app与SurfaceFlinger共享UI元数据过程
SurfaceFlinger 原理分析
SurfaceFlinger中的SharedClient
《深入理解android内核设计思想》
Android画面显示流程分析(2)
Android画面显示流程分析(3)
显示框架之app与SurfaceFlinger通信
android Gui系统之SurfaceFlinger(2)---BufferQueue
Android 重学系列 GraphicBuffer的诞生
Android 重学系列 图元的消费
Android graphics(二) bufferqueue
[Android禅修之路] 解读 GraphicBuffer 开篇
[Android禅修之路] 解读 GraphicBuffer 之 Ion 驱动层
Android P 图形显示系统(十) BufferQueue(一)
AndroidQ 图形系统(7)GraphicBuffer内存分配与Gralloc
android graphic(8)—surface申请GraphicBuffer过程
surfaceflinger分析
Android P 图形显示系统(十一) BufferQueue(二)
Android 12(S) 图像显示系统 - 解读Gralloc架构及GraphicBuffer创建/传递/释放(十四)
Android GraphicBuffer分配过程