SurfaceFlinger and Choreographer are the main building blocks of the Android graphics system, and both are subscribers to the VSYNC signal. SurfaceFlinger composites buffers from its various data sources and writes the result to the framebuffer for display, while Choreographer ultimately posts to ViewRootImpl to drive the measure and draw passes of the view hierarchy.
SurfaceFlinger
SurfaceFlinger's role is to accept buffers of data from multiple sources, composite them, and send them to the display. Once upon a time this was done with software blitting to a hardware framebuffer (e.g. /dev/graphics/fb0), but those days are long gone.
Google's official documentation spells out SurfaceFlinger's responsibilities clearly, but how does it fulfill them?
Startup
When Android starts the SystemServer process, it calls the system_init function in frameworks/base/cmds/system_server/library/system_init.cpp:
extern "C" status_t system_init()
{
ALOGI("Entered system_init()");
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
ALOGI("ServiceManager: %p\n", sm.get());
sp<GrimReaper> grim = new GrimReaper();
sm->asBinder()->linkToDeath(grim, grim.get(), 0);
char propBuf[PROPERTY_VALUE_MAX];
property_get("system_init.startsurfaceflinger", propBuf, "1");
if (strcmp(propBuf, "1") == 0) {
// Start the SurfaceFlinger
SurfaceFlinger::instantiate();
}
...
}
//If the system_init.startsurfaceflinger property is set to "1", SurfaceFlinger starts as a thread inside the SystemServer process
If the property is "0" instead, a surfaceflinger service is declared in init.rc, and init, the very first Linux process (PID 1), starts SurfaceFlinger as a standalone process:
# Set this property so surfaceflinger is not started by system_init
setprop system_init.startsurfaceflinger 0
service surfaceflinger /system/bin/surfaceflinger
class main
user system
group graphics
onrestart restart zygote
shell@android:/ # ps | grep surface
system 110 1 54448 10732 ffffffff 40076710 S /system/bin/surfaceflinger
//surfaceflinger's parent PID is 1: init really is its parent
VSYNC generation and dispatch
However SurfaceFlinger is started, its init method is called after the instance is created:
void SurfaceFlinger::init() {
ALOGI( "SurfaceFlinger's main thread ready to run. "
"Initializing graphics H/W...");
Mutex::Autolock _l(mStateLock);
// initialize EGL for the default display
mEGLDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(mEGLDisplay, NULL, NULL);
// start the EventThread
sp<VSyncSource> vsyncSrc = new DispSyncSource(&mPrimaryDispSync,
vsyncPhaseOffsetNs, true, "app");
mEventThread = new EventThread(vsyncSrc);
sp<VSyncSource> sfVsyncSrc = new DispSyncSource(&mPrimaryDispSync,
sfVsyncPhaseOffsetNs, true, "sf");
mSFEventThread = new EventThread(sfVsyncSrc);
mEventQueue.setEventThread(mSFEventThread);
// Initialize the H/W composer object. There may or may not be an
// actual hardware composer underneath.
mHwc = new HWComposer(this,
*static_cast<HWComposer::EventHandler *>(this));
// get a RenderEngine for the given display / config (can't fail)
mRenderEngine = RenderEngine::create(mEGLDisplay, mHwc->getVisualID());
// retrieve the EGL context that was selected/created
mEGLContext = mRenderEngine->getEGLContext();
LOG_ALWAYS_FATAL_IF(mEGLContext == EGL_NO_CONTEXT,
"couldn't create EGLContext");
// initialize our non-virtual displays
for (size_t i=0 ; i<DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES ; i++) {
DisplayDevice::DisplayType type((DisplayDevice::DisplayType)i);
// set-up the displays that are already connected
if (mHwc->isConnected(i) || type==DisplayDevice::DISPLAY_PRIMARY) {
// All non-virtual displays are currently considered secure.
bool isSecure = true;
createBuiltinDisplayLocked(type);
wp<IBinder> token = mBuiltinDisplays[i];
sp<IGraphicBufferProducer> producer;
sp<IGraphicBufferConsumer> consumer;
BufferQueue::createBufferQueue(&producer, &consumer,
new GraphicBufferAlloc());
sp<FramebufferSurface> fbs = new FramebufferSurface(*mHwc, i,
consumer);
int32_t hwcId = allocateHwcDisplayId(type);
sp<DisplayDevice> hw = new DisplayDevice(this,
type, hwcId, mHwc->getFormat(hwcId), isSecure, token,
fbs, producer,
mRenderEngine->getEGLConfig());
if (i > DisplayDevice::DISPLAY_PRIMARY) {
// FIXME: currently we don't get blank/unblank requests
// for displays other than the main display, so we always
// assume a connected display is unblanked.
ALOGD("marking display %zu as acquired/unblanked", i);
hw->setPowerMode(HWC_POWER_MODE_NORMAL);
}
mDisplays.add(token, hw);
}
}
// make the GLContext current so that we can create textures when creating Layers
// (which may happen before we render something)
getDefaultDisplayDevice()->makeCurrent(mEGLDisplay, mEGLContext);
mEventControlThread = new EventControlThread(this);
mEventControlThread->run("EventControl", PRIORITY_URGENT_DISPLAY);
// set a fake vsync period if there is no HWComposer
if (mHwc->initCheck() != NO_ERROR) {
mPrimaryDispSync.setPeriod(16666667);
}
// initialize our drawing state
mDrawingState = mCurrentState;
// set initial conditions (e.g. unblank default device)
initializeDisplays();
// start boot animation
startBootAnim();
}
The init method first initializes EGL for the default display, then instantiates two EventThreads and the HWComposer object.
Since Android 4.4, SurfaceFlinger has been the hub that distributes VSYNC. When it receives a VSYNC signal, generated by the HWComposer hardware or simulated in software, it handles it in its own onVSyncReceived method. init also starts the EventControlThread, which SurfaceFlinger uses to enable and disable the hardware-generated VSYNC signal.
mEventThread is the remote socket server for the app side represented by Choreographer, while mSFEventThread serves SurfaceFlinger's own VSYNC handling. Each holds mPrimaryDispSync through its own DispSyncSource.
mPrimaryDispSync is a DispSync instance that owns a DispSyncThread. Once a client successfully subscribes to VSYNC, EventThread calls DispSyncSource's setVSyncEnabled, which in turn calls DispSync's addEventListener, handing DispSync the mPhaseOffset and a callback. From this offset and the display refresh period, DispSync uses the DispSyncThread to schedule when the callback fires; the callback travels through DispSyncSource's onDispSyncEvent (defined in SurfaceFlinger.cpp) into EventThread's onVSyncEvent method.
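DispSync's scheduling boils down to simple modular arithmetic: a listener with phase offset p fires at the first time after "now" that is congruent to the VSYNC reference time plus p, modulo the period. A minimal sketch of that computation (the helper name and signature are illustrative, not the AOSP code):

```cpp
#include <cstdint>

// Given the last hardware VSYNC reference time, the display period, and a
// listener's phase offset (all in nanoseconds), return the next time after
// `now` at which this listener should fire. Hypothetical helper mirroring
// the idea inside DispSync, not the real implementation.
int64_t nextEventTime(int64_t refTime, int64_t period, int64_t phase, int64_t now) {
    int64_t base = refTime + phase;             // earliest aligned candidate
    if (now < base) return base;
    int64_t ticks = (now - base) / period + 1;  // round up to the next tick
    return base + ticks * period;
}
```

With a 60 Hz period of 16666667 ns, an "app" listener with a 1 ms offset fires 1 ms after each hardware VSYNC, which is how the "app" and "sf" sources are staggered against each other.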
void EventThread::onVSyncEvent(nsecs_t timestamp) {
Mutex::Autolock _l(mLock);
mVSyncEvent[0].header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
mVSyncEvent[0].header.id = 0;
mVSyncEvent[0].header.timestamp = timestamp;
mVSyncEvent[0].vsync.count++;
mCondition.broadcast();
}
//EventThread inherits from Thread; the Condition broadcast wakes the thread blocked in waitForEvent
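The hand-off between this producer and the loop that consumes it is a classic condition-variable pattern: onVSyncEvent publishes the event under the lock and broadcasts, while waitForEvent (called from threadLoop, shown next) blocks until an event is pending. A minimal sketch with standard C++ primitives standing in for the AOSP Thread/Condition classes (the VsyncMailbox type is invented for illustration):

```cpp
#include <condition_variable>
#include <mutex>

// Toy stand-in for EventThread's event hand-off.
struct VsyncMailbox {
    std::mutex lock;
    std::condition_variable cond;
    long long timestamp = 0;
    int pending = 0;                       // mirrors mVSyncEvent[0].vsync.count

    void onVSyncEvent(long long ts) {      // producer: the DispSync callback
        std::lock_guard<std::mutex> l(lock);
        timestamp = ts;
        ++pending;
        cond.notify_all();                 // Condition::broadcast()
    }

    long long waitForEvent() {             // consumer: the thread loop
        std::unique_lock<std::mutex> l(lock);
        cond.wait(l, [this] { return pending > 0; });
        --pending;
        return timestamp;
    }
};
```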
bool EventThread::threadLoop() {
DisplayEventReceiver::Event event;
Vector< sp<EventThread::Connection> > signalConnections;
signalConnections = waitForEvent(&event);
// dispatch events to listeners...
const size_t count = signalConnections.size();
for (size_t i=0 ; i<count ; i++) {
const sp<Connection>& conn(signalConnections[i]);
// now see if we still need to report this event
status_t err = conn->postEvent(event);
...
//Once onVSyncEvent has populated mVSyncEvent, EventThread dispatches by calling postEvent on each signaled Connection
status_t EventThread::Connection::postEvent(
const DisplayEventReceiver::Event& event) {
ssize_t size = DisplayEventReceiver::sendEvents(mChannel, &event, 1);
return size < 0 ? status_t(size) : status_t(NO_ERROR);
}
ssize_t DisplayEventReceiver::sendEvents(const sp<BitTube>& dataChannel,
Event const* events, size_t count)
{
return BitTube::sendObjects(dataChannel, events, count);
}
ssize_t BitTube::sendObjects(const sp<BitTube>& tube,
void const* events, size_t count, size_t objSize)
{
const char* vaddr = reinterpret_cast<const char*>(events);
ssize_t size = tube->write(vaddr, count*objSize);
...
}
ssize_t BitTube::write(void const* vaddr, size_t size)
{
ssize_t err, len;
do {
len = ::send(mSendFd, vaddr, size, MSG_DONTWAIT | MSG_NOSIGNAL);
// cannot return less than size, since we're using SOCK_SEQPACKET
err = len < 0 ? errno : 0;
} while (err == EINTR);
return err == 0 ? len : -err;
}
//The chain bottoms out in BitTube::write, which send()s the payload over the mSendFd descriptor. We will meet this BitTube again shortly.
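The transport behind this is easy to reproduce. A minimal BitTube-like sketch (an assumed simplification of the AOSP class; tubeWrite and the Event layout are illustrative): create a SOCK_SEQPACKET socketpair and push a fixed-size event struct through it with a non-blocking send(), the same shape as BitTube::write above.

```cpp
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>

// Stand-in for DisplayEventReceiver::Event (illustrative layout).
struct Event { int type; long long timestamp; };

// Same retry-on-EINTR shape as BitTube::write: returns bytes written, or -errno.
ssize_t tubeWrite(int sendFd, const void* vaddr, size_t size) {
    ssize_t len;
    int err;
    do {
        len = ::send(sendFd, vaddr, size, MSG_DONTWAIT | MSG_NOSIGNAL);
        err = len < 0 ? errno : 0;
    } while (err == EINTR);
    return err == 0 ? len : -err;
}
```

SOCK_SEQPACKET keeps each event as one atomic record, so a reader always receives whole Event structs, never partial ones.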
Back in init, the display information is then set up. Both setPowerMode and setPeriod update mPrimaryDispSync's refresh period. After initializeDisplays the display subsystem is ready, and startBootAnim launches the boot animation.
Receiving VSYNC
The first time a strong reference to the SurfaceFlinger instance is taken, onFirstRef is called:
void SurfaceFlinger::onFirstRef()
{
mEventQueue.init(this);
}
//mEventQueue is a MessageQueue instance
SurfaceFlinger receives VSYNC through this MessageQueue, whose init method looks like this:
void MessageQueue::init(const sp<SurfaceFlinger>& flinger)
{
mFlinger = flinger;
mLooper = new Looper(true);
mHandler = new Handler(*this);
}
//Creates a Looper and a Handler
As noted above, mSFEventThread handles SurfaceFlinger's own VSYNC precisely because init calls mEventQueue.setEventThread(mSFEventThread):
void MessageQueue::setEventThread(const sp<EventThread>& eventThread)
{
mEventThread = eventThread;
mEvents = eventThread->createEventConnection();
mEventTube = mEvents->getDataChannel();
mLooper->addFd(mEventTube->getFd(), 0, Looper::EVENT_INPUT,
MessageQueue::cb_eventReceiver, this);
}
//getDataChannel hands back the BitTube object mentioned above
void BitTube::init(size_t rcvbuf, size_t sndbuf) {
int sockets[2];
if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
size_t size = DEFAULT_SOCKET_BUFFER_SIZE;
setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
// since we don't use the "return channel", we keep it small...
setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
fcntl(sockets[0], F_SETFL, O_NONBLOCK);
fcntl(sockets[1], F_SETFL, O_NONBLOCK);
mReceiveFd = sockets[0];
mSendFd = sockets[1];
} else {
mReceiveFd = -errno;
ALOGE("BitTube: pipe creation failed (%s)", strerror(-mReceiveFd));
}
}
BitTube creates a pair of connected socket descriptors with socketpair, and its getFd method:
int BitTube::getFd() const
{
return mReceiveFd;
}
returns mReceiveFd. MessageQueue adds this descriptor to the Looper via addFd. When the VSYNC event dispatch described above send()s data to mSendFd, the Looper's epoll machinery wakes up and invokes the MessageQueue::cb_eventReceiver callback. If Looper is unfamiliar, see "Android中的Looper与epoll".
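The wake-up mechanics can be demonstrated with plain epoll (a sketch of the idea, not Looper's actual implementation; waitReadable is an invented helper): register the receive end of a socketpair for input events, and a send() on the other end makes epoll_wait return, which is the point at which Looper would call cb_eventReceiver.

```cpp
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

// Block for up to timeoutMs waiting for receiveFd to become readable.
bool waitReadable(int receiveFd, int timeoutMs) {
    int epfd = epoll_create1(0);
    if (epfd < 0) return false;
    epoll_event ev{};
    ev.events = EPOLLIN;                    // like Looper::EVENT_INPUT
    ev.data.fd = receiveFd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, receiveFd, &ev);
    epoll_event out{};
    int n = epoll_wait(epfd, &out, 1, timeoutMs);
    close(epfd);
    return n == 1 && (out.events & EPOLLIN);
}
```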
MessageQueue's cb_eventReceiver ultimately reaches SurfaceFlinger's onMessageReceived. The message posted for a VSYNC is MessageQueue::INVALIDATE; if the window state changed or a new buffer was latched, signalRefresh then posts MessageQueue::REFRESH:
void SurfaceFlinger::onMessageReceived(int32_t what) {
ATRACE_CALL();
switch (what) {
case MessageQueue::TRANSACTION: {
handleMessageTransaction();
break;
}
case MessageQueue::INVALIDATE: {
bool refreshNeeded = handleMessageTransaction();
refreshNeeded |= handleMessageInvalidate();
refreshNeeded |= mRepaintEverything;
if (refreshNeeded) {
// Signal a refresh if a transaction modified the window state,
// a new buffer was latched, or if HWC has requested a full
// repaint
signalRefresh();
}
break;
}
case MessageQueue::REFRESH: {
handleMessageRefresh();
break;
}
}
}
Finally, SurfaceFlinger's handleMessageRefresh computes the sizes and visible regions of the latest Layers filled by apps and composites them into the framebuffer for the display hardware to scan out.
Choreographer
Choreographer, the "choreographer" of the UI, is held directly by ViewRootImpl and paces the drawing of the app's UI.
Choreographer's constructor checks the debug.choreographer.vsync system property to decide whether the VSYNC mechanism is enabled. If it is, Choreographer creates a FrameDisplayEventReceiver, which runs the constructor of its parent class, DisplayEventReceiver.java:
public DisplayEventReceiver(Looper looper) {
if (looper == null) {
throw new IllegalArgumentException("looper must not be null");
}
mMessageQueue = looper.getQueue();
mReceiverPtr = nativeInit(new WeakReference<DisplayEventReceiver>(this), mMessageQueue);
mCloseGuard.open("dispose");
}
nativeInit then crosses JNI into the nativeInit function of android_view_DisplayEventReceiver.cpp, which creates a NativeDisplayEventReceiver. Through the MessageQueue passed into nativeInit, its parent class DisplayEventDispatcher obtains the app main thread's Looper, after which DisplayEventDispatcher's initialize method runs:
status_t DisplayEventDispatcher::initialize() {
status_t result = mReceiver.initCheck();
if (result) {
ALOGW("Failed to initialize display event receiver, status=%d", result);
return result;
}
int rc = mLooper->addFd(mReceiver.getFd(), 0, Looper::EVENT_INPUT,
this, NULL);
if (rc < 0) {
return UNKNOWN_ERROR;
}
return OK;
}
It, too, listens for incoming VSYNC signals via addFd. mReceiver here is a DisplayEventReceiver, which can be thought of as the local proxy for the cross-process remote object.
DisplayEventReceiver::DisplayEventReceiver() {
sp<ISurfaceComposer> sf(ComposerService::getComposerService());
if (sf != NULL) {
mEventConnection = sf->createDisplayEventConnection();
if (mEventConnection != NULL) {
mDataChannel = mEventConnection->getDataChannel();
}
}
}
int DisplayEventReceiver::getFd() const {
if (mDataChannel == NULL)
return NO_INIT;
return mDataChannel->getFd();
}
DisplayEventReceiver first obtains the local proxy of SurfaceFlinger through ComposerService, then calls createDisplayEventConnection to get a Connection held by EventThread; subscription through the BitTube this Connection holds works exactly as described for SurfaceFlinger above.
The createDisplayEventConnection method lives in SurfaceFlinger:
sp<IDisplayEventConnection> SurfaceFlinger::createDisplayEventConnection() {
return mEventThread->createEventConnection();
}
So the connection is indeed established through mEventThread, not mSFEventThread. With that, Choreographer has successfully subscribed to VSYNC.
When a VSYNC signal arrives, DisplayEventDispatcher, which inherits from LooperCallback, has its handleEvent method called, which in turn invokes NativeDisplayEventReceiver's dispatchVsync method:
void NativeDisplayEventReceiver::dispatchVsync(nsecs_t timestamp, int32_t id, uint32_t count) {
JNIEnv* env = AndroidRuntime::getJNIEnv();
ScopedLocalRef<jobject> receiverObj(env, jniGetReferent(env, mReceiverWeakGlobal));
if (receiverObj.get()) {
ALOGV("receiver %p ~ Invoking vsync handler.", this);
env->CallVoidMethod(receiverObj.get(),
gDisplayEventReceiverClassInfo.dispatchVsync, timestamp, id, count);
ALOGV("receiver %p ~ Returned from vsync handler.", this);
}
mMessageQueue->raiseAndClearException(env, "dispatchVsync");
}
The JNI side uses CallVoidMethod to call dispatchVsync on the Java DisplayEventReceiver, which calls FrameDisplayEventReceiver's onVsync, and finally Choreographer's doFrame starts the app's UI drawing work.
Summary
Before Android 4.4, Choreographer and SurfaceFlinger were called back at the same moment a VSYNC arrived; there was no offset mechanism. As a result, once Choreographer had a new frame ready, SurfaceFlinger could not composite it into the framebuffer until the next VSYNC, and the display could not show it until the VSYNC after that. This not only cost time (two full VSYNC periods) but also caused contention for the CPU and other resources, since both started at once. Google therefore introduced the offset mechanism in 4.4 to mitigate the problem, as the official documentation explains:
Application and SurfaceFlinger render loops should be synchronized to the hardware VSYNC. On a VSYNC event, the display begins showing frame N while SurfaceFlinger begins compositing windows for frame N+1. The app handles pending input and generates frame N+2.
Synchronizing with VSYNC delivers consistent latency. It reduces errors in apps and SurfaceFlinger and the drifting of displays in and out of phase with each other. This, however, does assume application and SurfaceFlinger per-frame times don’t vary widely. Nevertheless, the latency is at least two frames.
To remedy this, you may employ VSYNC offsets to reduce the input-to-display latency by making application and composition signal relative to hardware VSYNC. This is possible because application plus composition usually takes less than 33 ms.
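The arithmetic behind this can be made concrete with a toy model (the constants and the one-period pipelining condition below are assumptions for illustration, not values from any device or from AOSP):

```cpp
#include <cstdint>

const int64_t kPeriodNs = 16666667;  // ~60 Hz refresh

// Without offsets: the app renders during frame N, SurfaceFlinger composites
// during frame N+1, and the display scans the result out at frame N+2,
// an input-to-display latency of two full periods.
int64_t latencyWithoutOffsetsNs() {
    return 2 * kPeriodNs;
}

// With offsets: the app wakes at vsync + appPhaseNs and SurfaceFlinger at
// vsync + sfPhaseNs. If the app's buffer is ready before SurfaceFlinger runs
// and composition finishes before the next hardware VSYNC, the frame makes
// the very next scanout instead of waiting an extra period.
bool fitsInOnePeriod(int64_t appPhaseNs, int64_t appWorkNs,
                     int64_t sfPhaseNs, int64_t sfWorkNs) {
    return appPhaseNs + appWorkNs <= sfPhaseNs     // buffer latched in time
        && sfPhaseNs + sfWorkNs <= kPeriodNs;      // composition done in time
}
```

For example, with a 1 ms app offset, 4 ms of app work, a 6 ms SurfaceFlinger offset, and 5 ms of composition, the frame completes well before the next VSYNC; double the app work to 8 ms and the buffer misses the composition deadline, costing an extra period.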