As we know, inter-process communication on Android is built on Binder: a system service first registers its Binder with ServiceManager via addService(), an app then obtains that service's Binder through ServiceManager's getService(), and from there cross-process calls look just like calls on a local interface.
This article covers only the Java layer and the native layer. The core code paths involved are:
frameworks/base/core/jni
frameworks/native/libs/binder
frameworks/base/core/java/android/os
I. Service Registration
Service registration goes through ServiceManager's addService() method, which is the entry point exposed to callers. It lives in frameworks/base/core/java/android/os/ServiceManager.java. Let's start with the logic of addService():
public static void addService(String name, IBinder service) {
try {
getIServiceManager().addService(name, service, false);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}
As addService() shows, the real work is delegated to addService() on the object returned by getIServiceManager(), so let's look at getIServiceManager() first:
1.getIServiceManager()
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}
This method returns an IServiceManager instance, obtained in two steps:
a. Call BinderInternal.getContextObject();
b. Pass the result of step a to ServiceManagerNative.asInterface() to obtain the IServiceManager.
a.BinderInternal.getContextObject()
The class lives at frameworks/base/core/java/android/os/BinderInternal.java:
public static final native IBinder getContextObject();
This is a native method; its JNI implementation is in frameworks/base/core/jni/android_util_Binder.cpp:
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
return javaObjectForIBinder(env, b);
}
Following the call chain, the next stop is ProcessState, implemented in frameworks/native/libs/binder/ProcessState.cpp.
a.1.ProcessState.getContextObject()
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
return getStrongProxyForHandle(0);
}
Following the call chain, execution reaches getStrongProxyForHandle():
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
Parcel data;
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
So the first call to ProcessState.getContextObject() returns BpBinder(0). Handle 0 is the reference to servicemanager, the housekeeper of the whole Binder system.
a.2.javaObjectForIBinder()
The BpBinder(0) returned in a.1 is then passed to javaObjectForIBinder():
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
// ... (lookup of an existing proxy elided)
//create a new BinderProxy object
object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
if (object != NULL) {
LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
//store the BpBinder pointer in the BinderProxy.mObject field
env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
val->incStrong((void*)javaObjectForIBinder);
jobject refObject = env->NewGlobalRef(
env->GetObjectField(object, gBinderProxyOffsets.mSelf));
//attach the BinderProxy information to the BpBinder's mObjects member
val->attachObject(&gBinderProxyOffsets, refObject,
jnienv_to_javavm(env), proxy_cleanup);
sp<DeathRecipientList> drl = new DeathRecipientList;
drl->incStrong((void*)javaObjectForIBinder);
//BinderProxy.mOrgue records the death-notification list
env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));
android_atomic_inc(&gNumProxyRefs);
incRefsCreated(env);
}
return object;
}
The main job here is to create a BinderProxy object and save the BpBinder's address in the BinderProxy's mObject field. So BinderInternal.getContextObject() ultimately returns a BinderProxy object.
b.ServiceManagerNative.asInterface()
The class lives at frameworks/base/core/java/android/os/ServiceManagerNative.java:
static public IServiceManager asInterface(IBinder obj) {
if (obj == null) {
return null;
}
IServiceManager in =
(IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}
return new ServiceManagerProxy(obj);
}
It returns a ServiceManagerProxy object:
class ServiceManagerProxy implements IServiceManager {
public ServiceManagerProxy(IBinder remote) {
mRemote = remote;
}
// ...
}
From the constructor argument we can see that mRemote is the BinderProxy object, which corresponds to BpBinder(0): the binder proxy pointing at the native-layer servicemanager.
So getIServiceManager() is ultimately equivalent to new ServiceManagerProxy(new BinderProxy()), which means getIServiceManager().addService() is equivalent to ServiceManagerProxy.addService().
The real work of ServiceManager.addService() is therefore handed to ServiceManagerProxy's member mRemote (the BinderProxy object), and the BinderProxy in turn reaches the BpBinder object through JNI. In other words, the core of the Java-layer binder framework is carried out by the native layer underneath.
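The same asInterface() pattern applies to any registered service, not just servicemanager itself. A minimal sketch, assuming a hypothetical IDemoService AIDL interface and service name (ServiceManager.getService() is a hidden framework API, shown only for illustration):
// Hypothetical names: IDemoService and "demo.service"; ServiceManager is a hidden API.
public final class DemoClient {
    public static IDemoService obtain() {
        IBinder b = ServiceManager.getService("demo.service"); // a BinderProxy wrapping BpBinder(handle)
        // queryLocalInterface() returns null for a remote binder, so asInterface()
        // wraps it in a Stub.Proxy whose mRemote is exactly this BinderProxy
        return IDemoService.Stub.asInterface(b);
    }
}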
2.addService()
This addService() is the real entry point for service registration; it is implemented inside ServiceManagerProxy:
public void addService(String name, IBinder service, boolean allowIsolated)
throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}
Registering a service requires that the process already has a local service (a Binder), i.e. a local Binder implementation, typically written with AIDL. For the details see the article "Android IPC 之Binder分析", which walks through such an implementation, so it is not repeated here (a registration sketch follows the list below). Once implemented, the Binder is passed into addService(), which does two main things:
1. Obtain Parcel objects and call writeStrongBinder() to flatten the service;
2. Call mRemote.transact(), i.e. BinderProxy's transact() method.
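As a concrete illustration of that prerequisite, here is what a minimal local Binder implementation and its registration could look like; IDemoService, DemoService and "demo.service" are made-up names, and ServiceManager.addService() is a hidden API normally reserved for system processes:
// Sketch only: IDemoService / DemoService / "demo.service" are hypothetical.
public final class DemoService extends IDemoService.Stub {
    @Override
    public int getValue() {   // runs on a binder thread of the server process
        return 42;
    }

    public static void register() {
        // new DemoService() chains to Binder(), whose native init() attaches
        // the JavaBBinderHolder discussed below
        ServiceManager.addService("demo.service", new DemoService());
    }
}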
a.writeStrongBinder()
The method lives in frameworks/base/core/java/android/os/Parcel.java:
public final void writeStrongBinder(IBinder val) {
nativeWriteStrongBinder(mNativePtr, val);
}
It calls a native method whose implementation is in frameworks/base/core/jni/android_os_Parcel.cpp:
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
if (err != NO_ERROR) {
signalExceptionForError(env, clazz, err);
}
}
}
Following the call chain, ibinderForJavaObject(env, object) is called first; let's take a look:
a.1.ibinderForJavaObject()
The method is implemented in frameworks/base/core/jni/android_util_Binder.cpp:
sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
if (obj == NULL) return NULL;
//--------------- Analysis 1 ----------------------------
if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
JavaBBinderHolder* jbh = (JavaBBinderHolder*)env->GetLongField(obj, gBinderOffsets.mObject);
return jbh != NULL ? jbh->get(env, obj) : NULL;
}
//--------------- Analysis 2 -------------------------------
if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
return (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject);
}
ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
return NULL;
}
The method does two things, analyzed below:
Analysis 1: check whether the object is a Binder. Here it is, so the field described by gBinderOffsets.mObject is read; it holds a JavaBBinderHolder (more on this shortly), whose get() method produces the return value.
Analysis 2: check whether the object is a BinderProxy; if so, return the value stored in gBinderProxyOffsets.mObject, i.e. the BpBinder.
Regarding Analysis 1, one question arises: when was that mObject field populated with a JavaBBinderHolder?
Recall that before addService() runs, the local Binder object must already exist, and creating it implicitly invokes the no-argument constructor of the parent class Binder. Here is that constructor:
private native final void init();
public Binder() {
init();
// ...
}
The Binder constructor calls init(), a native method handled in the JNI layer at frameworks/base/core/jni/android_util_Binder.cpp:
static void android_os_Binder_init(JNIEnv* env, jobject obj)
{
JavaBBinderHolder* jbh = new JavaBBinderHolder();
// ...
jbh->incStrong((void*)android_os_Binder_init);
env->SetLongField(obj, gBinderOffsets.mObject, (jlong)jbh);
}
Inside this method a JavaBBinderHolder instance is created and its pointer is stored in the Binder object's field described by gBinderOffsets.mObject, which is exactly what the later addService() relies on.
Back to Analysis 1: after the JavaBBinderHolder is retrieved, its get() is called to produce the result:
a.1.1.JavaBBinderHolder.get()
sp<JavaBBinder> get(JNIEnv* env, jobject obj)
{
AutoMutex _l(mLock);
sp<JavaBBinder> b = mBinder.promote();
if (b == NULL) {
b = new JavaBBinder(env, obj);
mBinder = b;
}
return b;
}
get() creates a JavaBBinder object if one does not already exist; JavaBBinder derives from BBinder, implemented in frameworks/native/libs/binder/Binder.cpp.
So data.writeStrongBinder(service) is ultimately equivalent to parcel->writeStrongBinder(new JavaBBinder(env, obj)). Inside the native Parcel, the obtained IBinder is then flattened by flatten_binder():
a.2.flatten_binder
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const wp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
if (binder != NULL) {
sp<IBinder> real = binder.promote();
if (real != NULL) {
IBinder *local = real->localBinder();
if (!local) {
//remote binder
BpBinder *proxy = real->remoteBinder();
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_WEAK_HANDLE;
obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
obj.handle = handle;
obj.cookie = 0;
} else {
//local binder; this is the branch taken here
obj.type = BINDER_TYPE_WEAK_BINDER;
obj.binder = reinterpret_cast<uintptr_t>(binder.get_refs());
obj.cookie = reinterpret_cast<uintptr_t>(binder.unsafe_get());
}
return finish_flatten_binder(real, obj, out);
}
;;;;;;;;
obj.type = BINDER_TYPE_BINDER;
obj.binder = 0;
obj.cookie = 0;
return finish_flatten_binder(NULL, obj, out);
} else {
obj.type = BINDER_TYPE_BINDER;
obj.binder = 0;
obj.cookie = 0;
return finish_flatten_binder(NULL, obj, out);
}
}
For a binder entity (a local Binder), the cookie field records the pointer to the entity; for a binder proxy, the handle field records the proxy's handle.
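The flatten/unflatten pair can be observed from Java with a simple Parcel round trip inside one process: writeStrongBinder() flattens the local Binder, and readStrongBinder() hands back the very same object because the parcel never crosses the driver. A minimal sketch using only public Parcel APIs:
// Same-process round trip with public APIs only.
static void parcelRoundTrip() {
    Binder local = new Binder();
    Parcel p = Parcel.obtain();
    p.writeStrongBinder(local);        // flattened as a local binder entity
    p.setDataPosition(0);
    IBinder out = p.readStrongBinder();
    // out == local in the same process; a reader in another process would get a BinderProxy instead
    p.recycle();
}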
b.mRemote.transact()
After the series of data.writeXX() calls, mRemote.transact() performs the actual transaction. As analyzed earlier, mRemote is a BinderProxy object (here, the proxy for servicemanager). Here is the BinderProxy implementation:
final class BinderProxy implements IBinder {
public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
// ...
try {
return transactNative(code, data, reply, flags);
} finally {
if (tracingEnabled) {
Trace.traceEnd(Trace.TRACE_TAG_ALWAYS);
}
}
}
public native boolean transactNative(int code, Parcel data, Parcel reply,
int flags) throws RemoteException;
//register a callback for server-side binder death
public native void linkToDeath(DeathRecipient recipient, int flags)
throws RemoteException;
//called when the server-side binder dies; it in turn invokes the client's locally implemented binderDied()
private static final void sendDeathNotice(DeathRecipient recipient) {
try {
recipient.binderDied();
} catch (RuntimeException exc) {
// the framework logs and swallows exceptions thrown by binderDied()
}
}
}
Inside BinderProxy, transact() calls transactNative(), which lands in the JNI layer at frameworks/base/core/jni/android_util_Binder.cpp:
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
//convert the Java Parcels to native Parcels
Parcel* data = parcelForJavaObject(env, dataObj);
Parcel* reply = parcelForJavaObject(env, replyObj);
//gBinderProxyOffsets.mObject holds the BpBinder(0) created earlier
IBinder* target = (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);
bool time_binder_calls;
int64_t start_millis;
//call BpBinder::transact(), which goes through the native layer into the binder driver
status_t err = target->transact(code, *data, reply, flags);
return JNI_FALSE;
}
So a BinderProxy.transact() call at the Java layer is ultimately carried out by BpBinder::transact() at the native layer.
Brief summary
Service registration therefore boils down to BpBinder sending the ADD_SERVICE_TRANSACTION command, i.e. exchanging data with the binder driver.
II. transact(): Client to Driver
As analyzed above, addService() ends with BpBinder sending the ADD_SERVICE_TRANSACTION command to the binder driver, and every other cross-process call works the same way. Let's look at the interaction in detail, starting from frameworks/native/libs/binder/BpBinder.cpp.
1.BpBinder.cpp
a.transact()
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
Following the call chain, this reaches IPCThreadState's transact(). Note the mHandle argument: it is the handle passed in when the BpBinder was created, i.e. the reference to the server. The implementation is in frameworks/native/libs/binder/IPCThreadState.cpp.
2.IPCThreadState.cpp
a.transact()
status_t IPCThreadState::transact(int32_t handle,uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();//check the data for errors
flags |= TF_ACCEPT_FDS;
if (err == NO_ERROR) {
//-------------------- Analysis 1 ------------------------------------
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
//by default the call is not oneway, i.e. it waits for the server's reply
if ((flags & TF_ONE_WAY) == 0) {
if (reply) {
//---------- Analysis 2 -----------
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
//oneway: do not block after sending
err = waitForResponse(NULL, NULL);
}
return err;
}
transact() does two main things, analyzed below:
Analysis 1: call writeTransactionData() to write the data;
Analysis 2: call waitForResponse() to send the data and wait for the result.
a.1.writeTransactionData()
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
tr.target.handle = handle;//handle referencing the server
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
mOut.writeInt32(cmd);//cmd is BC_TRANSACTION
mOut.write(&tr, sizeof(tr));//write the binder_transaction_data
return NO_ERROR;
}
writeTransactionData() first fills in a binder_transaction_data, then writes the BC_TRANSACTION command and that struct into mOut. At this point the data has only been buffered; no interaction with the driver has happened yet. The actual interaction takes place in waitForResponse():
a.2.waitForResponse()
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
switch (cmd) {
// ... (command handling, shown in full further below)
}
}
return err;
}
waitForResponse() first calls talkWithDriver() to interact with the driver (which eventually leads to the server-side method being invoked, as discussed later), then reads mIn for the server's result, which arrives with the BR_REPLY command. Let's look at talkWithDriver() first:
a.3.talkWithDriver()
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
binder_write_read bwr;
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
//point bwr's write_buffer at mOut's data
bwr.write_buffer = (uintptr_t)mOut.data();
// ...
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
//the actual interaction with the binder driver
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
} while (err == -EINTR);
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
return NO_ERROR;
}
return err;
}
Inside talkWithDriver(), the ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) call is what actually talks to the driver. Through a series of steps the server receives the request and processes it, and the result comes back with the BR_REPLY command, which is again handled in waitForResponse(). Let's return to that method, this time in full:
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
//after talking to the driver, mIn holds the BR commands to be processed
cmd = (uint32_t)mIn.readInt32();
switch (cmd) {
//after the driver successfully receives BC_TRANSACTION it first returns BR_TRANSACTION_COMPLETE
case BR_TRANSACTION_COMPLETE:
//for a oneway call both reply and acquireResult are null, so finish right away
if (!reply && !acquireResult) goto finish;
break;
;;;;;;;;;;;
//for a non-oneway call, keep waiting for the driver to return BR_REPLY
case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
// ...
}
} else {
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
//leave the loop and return the result
goto finish;
}
}

finish:
return err;
}
The command is read with mIn.readInt32(). After the client has talked to the driver via talkWithDriver(), the driver, upon receiving BC_TRANSACTION, first returns the BR_TRANSACTION_COMPLETE command:
For a oneway call it goes straight to finish and the transaction is over. The callback interfaces AMS uses to notify apps are all oneway, so a slow app callback cannot block system_server.
For a non-oneway call, BR_TRANSACTION_COMPLETE only causes a break; the loop runs talkWithDriver() again and waits for the driver to return BR_REPLY.
When the driver returns BR_REPLY, mIn.read() extracts the binder_transaction_data and fills reply, so after calling mRemote.transact() the client can obtain the result via reply.readXX():
mRemote.transact(Stub.TRANSACTION_getXXX, _data, _reply, 0);
_reply.readException();
_result = _reply.readInt();
return _result;
What remains is for the driver to deliver the request to the server and to carry the server's result back to the client, where it is received and processed in waitForResponse() as shown above.
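At the Java layer the TF_ONE_WAY flag corresponds to IBinder.FLAG_ONEWAY. A minimal sketch of the two call styles using the raw transact() API; the descriptor and transaction code are hypothetical and error handling is trimmed:
static final int TRANSACTION_ping = IBinder.FIRST_CALL_TRANSACTION; // hypothetical code

// Two-way call: blocks in waitForResponse() until BR_REPLY arrives.
static void callTwoWay(IBinder remote) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    try {
        data.writeInterfaceToken("com.example.IDemoService"); // hypothetical descriptor
        remote.transact(TRANSACTION_ping, data, reply, 0);
        reply.readException();
    } finally {
        reply.recycle();
        data.recycle();
    }
}

// Oneway call: returns right after BR_TRANSACTION_COMPLETE; no reply is read.
static void callOneWay(IBinder remote) throws RemoteException {
    Parcel data = Parcel.obtain();
    try {
        data.writeInterfaceToken("com.example.IDemoService");
        remote.transact(TRANSACTION_ping, data, null, IBinder.FLAG_ONEWAY);
    } finally {
        data.recycle();
    }
}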
III. transact(): Driver to Server
When an app starts, AMS asks Zygote over a socket to fork the new process, and during process startup a ProcessState is created. ProcessState is a singleton: one per process. The class lives at frameworks/native/libs/binder/ProcessState.cpp.
1.ProcessState.cpp
#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2) //1MB - 8KB
#define DEFAULT_MAX_BINDER_THREADS 15 //default maximum number of concurrent binder threads
sp<ProcessState> ProcessState::self()
{
gProcess = new ProcessState("/dev/binder");
return gProcess;
}
ProcessState::ProcessState(const char *driver)
: mDriverName(String8(driver))
, mDriverFD(open_driver(driver))
, ..........
{
if (mDriverFD >= 0) {
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
}
}
static int open_driver(const char *driver)
{
//open the /dev/binder device and establish the channel to the kernel binder driver
int fd = open(driver, O_RDWR | O_CLOEXEC);
if (fd >= 0) {
int vers = 0;
status_t result = ioctl(fd, BINDER_VERSION, &vers);
// ...
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
//tell the binder driver, via ioctl, the maximum number of binder threads it may request
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
}
return fd;
}
When ProcessState is constructed it calls open_driver() to open /dev/binder. Because ProcessState is a singleton, a process opens the binder device only once; its member mDriverFD records the driver's fd, which is used for all subsequent interactions with the binder device.
mmap then maps a chunk of memory into both kernel space and the process's address space; the default size of this binder buffer is 1 MB minus 8 KB (two pages). The default upper limit on binder threads is DEFAULT_MAX_BINDER_THREADS (15), i.e. at most 16 binder threads counting the main binder thread.
After ProcessState is created, startThreadPool() is called; let's look at it:
a.startThreadPool()
void ProcessState::startThreadPool()
{
AutoMutex _l(mLock);
if (!mThreadPoolStarted) {
mThreadPoolStarted = true;
spawnPooledThread(true);
}
}
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
ALOGV("Spawning new pooled thread, name=%s\n", name.string());
sp<Thread> t = new PoolThread(isMain);
t->run(name.string());
}
}
When isMain is true the thread never exits; when it is false the thread exits once its work is done. Following the call chain, a PoolThread is created and started; once a C++ Thread is running, its threadLoop() is invoked:
class PoolThread : public Thread
{
public:
explicit PoolThread(bool isMain)
: mIsMain(isMain)
{
}
protected:
virtual bool threadLoop()
{
IPCThreadState::self()->joinThreadPool(mIsMain);
//return false: the thread exits after this runs once
return false;
}
const bool mIsMain;
};
Inside PoolThread's threadLoop(), IPCThreadState::self()->joinThreadPool(mIsMain) is executed, which brings us to IPCThreadState:
2.IPCThreadState.cpp
The same class is used on the server side, just with different logic; let's revisit it and look at the server-side path:
a.joinThreadPool()
void IPCThreadState::joinThreadPool(bool isMain)
{
//BC_ENTER_LOOPER: the application-created binder main thread enters the looper;
//BC_REGISTER_LOOPER: a new looper thread created at the driver's request
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
status_t result;
do {
processPendingDerefs();
//process the next command, blocking until one is available
result = getAndExecuteCommand();
//a non-main thread exits when it is no longer needed
if(result == TIMED_OUT && !isMain) {
break;
}
} while (result != -ECONNREFUSED && result != -EBADF);
//this thread leaves the looper
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
The method runs a do..while loop that calls getAndExecuteCommand(), which, as its name suggests, fetches the next command to execute, waiting if there is none. Here is its implementation:
b.getAndExecuteCommand()
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
//interact with the driver
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
//if the driver returned data, read the cmd
cmd = mIn.readInt32();
//before handling, increment mProcess->mExecutingThreadsCount
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs == 0) {
mProcess->mStarvationStartTimeMs = uptimeMillis();
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
//handle the cmd in executeCommand()
result = executeCommand(cmd);
//after handling, decrement mProcess->mExecutingThreadsCount
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs != 0) {
int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
if (starvationTimeMs > 100) {
ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
mProcess->mMaxThreads, starvationTimeMs);
}
mProcess->mStarvationStartTimeMs = 0;
}
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
return result;
}
getAndExecuteCommand() first calls talkWithDriver() to interact with the driver, waiting as long as there is nothing to do. When the driver returns data, the cmd is read and handed to executeCommand().
Note that mProcess->mExecutingThreadsCount is updated before and after executeCommand(). The watchdog relies on this counter: blockUntilThreadAvailable() checks whether the number of binder threads currently executing in a system service has reached the maximum:
void IPCThreadState::blockUntilThreadAvailable()
{
pthread_mutex_lock(&mProcess->mThreadCountLock);
while (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads) {
ALOGW("Waiting for thread to be free. mExecutingThreadsCount=%lu mMaxThreads=%lu\n",
static_cast<unsigned long>(mProcess->mExecutingThreadsCount),
static_cast<unsigned long>(mProcess->mMaxThreads));
pthread_cond_wait(&mProcess->mThreadCountDecrement, &mProcess->mThreadCountLock);
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
If mExecutingThreadsCount has reached mMaxThreads, the caller has to wait. The default in ProcessState.cpp is #define DEFAULT_MAX_BINDER_THREADS 15, but SystemServer.java raises it via BinderInternal.setMaxThreads(), which eventually updates mMaxThreads in ProcessState.cpp:
// maximum number of binder threads used for system_server
// will be higher than the system default
private static final int sMaxBinderThreads = 31;
// Increase the number of binder threads in system_server
BinderInternal.setMaxThreads(sMaxBinderThreads);
Now back to executeCommand():
c.executeCommand()
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
RefBase::weakref_type* refs;
status_t result = NO_ERROR;
switch ((uint32_t)cmd) {
// ...
case BR_TRANSACTION:
{
binder_transaction_data tr;
result = mIn.read(&tr, sizeof(tr));
Parcel buffer;
// ...
Parcel reply;
status_t error;
if (tr.target.ptr) {
if (reinterpret_cast<RefBase::weakref_type*>(
tr.target.ptr)->attemptIncStrong(this)) {
//handle the request from the client
error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
&reply, tr.flags);
reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
} else {
error = UNKNOWN_TRANSACTION;
}
} else {
error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
}
if ((tr.flags & TF_ONE_WAY) == 0) {
//send back the result
sendReply(reply, 0);
} else {
LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
}
}
break;
// ...
//when the server process crashes or its binder dies, the driver finds the corresponding
//BpBinder in each client process and notifies that client; the Java layer then only needs
//to handle the DeathRecipient callback
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writePointer((uintptr_t)proxy);
}
break;
}
return result;
}
When a client request arrives, the driver delivers it to the server with the BR_TRANSACTION command, and executeCommand() does two main things:
1. Handle the client's request by calling BBinder's transact();
2. Send the result back afterwards via sendReply().
Let's look at the first step. Here the BBinder is the JavaBBinder created via init() when the Java-layer Binder was constructed; JavaBBinder derives from BBinder, implemented in frameworks/native/libs/binder/Binder.cpp:
status_t BBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
status_t err = NO_ERROR;
switch (code) {
case PING_TRANSACTION:
reply->writeInt32(pingBinder());
break;
default:
err = onTransact(code, data, reply, flags);
break;
}
return err;
}
BBinder::transact() dispatches to onTransact(), and since the object is a JavaBBinder the override in JavaBBinder runs; it is implemented in frameworks/base/core/jni/android_util_Binder.cpp:
class JavaBBinder : public BBinder {
// ...
virtual status_t onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
{
JNIEnv* env = javavm_to_jnienv(mVM);
jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
code, reinterpret_cast<jlong>(&data), reinterpret_cast<jlong>(reply), flags);
// ...
return res != JNI_FALSE ? NO_ERROR : UNKNOWN_TRANSACTION;
}
};
onTransact() calls up into the Java layer, invoking Binder's execTransact(), implemented in frameworks/base/core/java/android/os/Binder.java:
private boolean execTransact(int code, long dataObj, long replyObj,
int flags) {
Parcel data = Parcel.obtain(dataObj);
Parcel reply = Parcel.obtain(replyObj);
boolean res;
try {
res = onTransact(code, data, reply, flags);
} catch (RemoteException|RuntimeException e) {
} finally {
}
reply.recycle();
data.recycle();
return res;
}
execTransact() then calls onTransact(), which brings us back to onTransact() of the locally implemented AIDL Stub class:
@Override
public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException {
switch (code) {
case TRANSACTION_getXXX:{
data.enforceInterface(descriptor);
int _arg0;
_arg0 = data.readInt();
int _arg1;
_arg1 = data.readInt();
int _result = this.getXXX(_arg0, _arg1);
reply.writeNoException();
reply.writeInt(_result);
return true;
}
default:
return super.onTransact(code, data, reply, flags);
}
}
The local getXXX() implementation produces _result, which is then written into reply and returned; the matching client-side proxy method is sketched right after this paragraph.
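For reference, the client half of this exchange is the AIDL-generated Stub.Proxy method; a sketch of what it could look like for the getXXX(int, int) placeholder used above (it performs the writes that end up in writeTransactionData() and the reads that consume the BR_REPLY data):
// Sketch of the generated proxy side for the hypothetical getXXX(int, int).
@Override
public int getXXX(int a, int b) throws android.os.RemoteException {
    android.os.Parcel _data = android.os.Parcel.obtain();
    android.os.Parcel _reply = android.os.Parcel.obtain();
    int _result;
    try {
        _data.writeInterfaceToken(DESCRIPTOR);
        _data.writeInt(a);
        _data.writeInt(b);
        mRemote.transact(Stub.TRANSACTION_getXXX, _data, _reply, 0); // BinderProxy -> BpBinder
        _reply.readException();
        _result = _reply.readInt();
    } finally {
        _reply.recycle();
        _data.recycle();
    }
    return _result;
}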
At this point the first task inside executeCommand(), the transact() handling, is complete.
The second task is sendReply(reply, 0), which sends back the result:
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
status_t err;
status_t statusBuffer;
err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
return waitForResponse(NULL, NULL);
}
sendReply() mirrors the client's sending logic: it first calls writeTransactionData(), this time with the BC_REPLY command, and then waitForResponse(), which runs talkWithDriver() to interact with the driver. The client simply receives and processes the data when BR_REPLY (produced by the driver) arrives; the server, after receiving the driver's BR_TRANSACTION_COMPLETE, goes straight to finish because both arguments passed to waitForResponse() are null. With that, the path from the driver to the server is complete.
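The same server-side path can also be exercised without AIDL by overriding Binder.onTransact() directly; execTransact() shown above simply dispatches to this override. A minimal hand-written sketch with a hypothetical descriptor and transaction code:
// Hand-written service without AIDL; DESCRIPTOR and TRANSACTION_add are made up.
public final class RawDemoBinder extends Binder {
    static final String DESCRIPTOR = "com.example.RawDemo";
    static final int TRANSACTION_add = IBinder.FIRST_CALL_TRANSACTION;

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == TRANSACTION_add) {        // delivered here via BR_TRANSACTION -> execTransact()
            data.enforceInterface(DESCRIPTOR);
            int sum = data.readInt() + data.readInt();
            reply.writeNoException();
            reply.writeInt(sum);              // flows back to the client through sendReply(BC_REPLY)
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }
}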
IV. Obtaining a Binder via bindService()
We have seen that addService goes through the housekeeper servicemanager, traveling along transact() --> ... --> talkWithDriver() --> ... --> execTransact() --> onTransact() until it reaches the server.
To call a server's interface, the client first needs the server's binder proxy. For named (registered) binders this is simply ServiceManager.getService(); here we briefly analyze how a binder proxy is obtained through bindService().
When process A communicates with process B via bindService(), it first obtains the AMS proxy through ServiceManager; AMS then drives process B's handleBindService() to obtain the Binder, and process B hands the Binder back through AMS's publishService(), which calls back process A's onServiceConnected() and delivers the Binder to A. For the full flow, see the article Android 进程通信bindService详解.
The main question here is: what exactly does process A receive as process B's Binder? Let's start from process B calling AMS's publishService():
case TRANSACTION_publishService:
{
data.enforceInterface(DESCRIPTOR);
android.os.IBinder _arg0;
_arg0 = data.readStrongBinder();
android.content.Intent _arg1;
if ((0!=data.readInt())) {
_arg1 = android.content.Intent.CREATOR.createFromParcel(data);
}
else {
_arg1 = null;
}
android.os.IBinder _arg2;
_arg2 = data.readStrongBinder();
this.publishService(_arg0, _arg1, _arg2);
reply.writeNoException();
return true;
}
Once this onTransact() runs in the AMS process, we can see that _arg2 is an IBinder obtained via readStrongBinder(), which calls down to nativeReadStrongBinder(), implemented in frameworks/base/core/jni/android_os_Parcel.cpp:
static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
if (parcel != NULL) {
return javaObjectForIBinder(env, parcel->readStrongBinder());
}
return NULL;
}
Following the call chain, javaObjectForIBinder() is invoked with parcel->readStrongBinder() as its argument, so let's look at readStrongBinder() first, implemented in frameworks/native/libs/binder/Parcel.cpp:
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
readNullableStrongBinder(&val);
return val;
}
status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
return unflatten_binder(ProcessState::self(), *this, val);
}
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->type) {
case BINDER_TYPE_BINDER:
// the requesting process and the service are in the same process
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
case BINDER_TYPE_HANDLE:
//the requesting process and the service are in different processes
*out = proc->getStrongProxyForHandle(flat->handle);
//a BpBinder object was created inside getStrongProxyForHandle()
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
Since processes A and B are different processes, the BINDER_TYPE_HANDLE branch is taken and out comes from ProcessState's getStrongProxyForHandle(); let's look at what it returns:
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
// ...
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
// Special case for context manager...
// ...
Parcel data;
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
return getStrongProxyForHandle(0);
}
We are back in a familiar method: when obtaining ServiceManager's binder the call was getStrongProxyForHandle(0), handle 0; for other processes the handle value differs, but what is returned is still a BpBinder object.
Having covered parcel->readStrongBinder(), let's look at javaObjectForIBinder() again, implemented in frameworks/base/core/jni/android_util_Binder.cpp:
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
if (object != NULL) {
jobject res = jniGetReferent(env, object);
if (res != NULL) {
ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
return res;
}
LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
android_atomic_dec(&gNumProxyRefs);
val->detachObject(&gBinderProxyOffsets);
env->DeleteGlobalRef(object);
}
object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
if (object != NULL) {
LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
// The proxy holds a reference to the native object.
// store val.get() in BinderProxy's mObject; transactNative() will use it later
env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
val->incStrong((void*)javaObjectForIBinder);
// ...
}
return object;
}
javaObjectForIBinder() returns a BinderProxy, with val.get() (the BpBinder) stored in its mObject field. So the IBinder delivered to onServiceConnected() in process A is a BinderProxy, i.e. mRemote ends up being a BinderProxy object, and calls into process B's logic go through that BinderProxy, following exactly the flow described in sections II and III.
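Tying this back to application code: the IBinder delivered to onServiceConnected() is exactly such a BinderProxy, and the usual Stub.asInterface() call wraps it in the proxy described in section I. A minimal client-side sketch (IDemoService and the component name are hypothetical):
// Process A binding to a service in process B; IDemoService and the component are hypothetical.
public class DemoClientActivity extends Activity {
    private IDemoService mService;

    private final ServiceConnection mConnection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder binder) {
            // 'binder' is a BinderProxy whose mObject points to a BpBinder for B's service
            mService = IDemoService.Stub.asInterface(binder);
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            mService = null;
        }
    };

    void bind() {
        Intent intent = new Intent();
        intent.setComponent(new ComponentName("com.example.b", "com.example.b.DemoService"));
        bindService(intent, mConnection, Context.BIND_AUTO_CREATE);
    }
}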
V. The Binder Driver
1. Overview
The binder driver is specific to Android, but its underlying architecture is that of an ordinary Linux driver. It registers as a misc device, a virtual character device that touches no real hardware and only manages device memory. Its main entry points are device initialization (binder_init), open (binder_open), memory mapping (binder_mmap), and data operations (binder_ioctl).
2. System Calls
For a user-space program to reach the kernel driver it must trap into kernel mode through a system call (syscall). For example, opening the binder driver follows the chain open() -> __open() -> binder_open(): open() is the user-space function, __open() is the corresponding syscall handler, which looks up and invokes the kernel binder driver's binder_open(). The other user-to-kernel transitions follow essentially the same pattern.
So a user-space open() ultimately reaches the driver's binder_open(); mmap() and ioctl() work the same way, each crossing from user mode into kernel mode through a system call.
3. How It Works
After transact() reaches the driver via talkWithDriver(), the core driver-side logic runs in binder_ioctl_write_read(), which proceeds as follows:
1. Copy the user-space data ubuf into the kernel-space bwr;
2. If bwr's write buffer contains data, run binder_thread_write; on write failure, copy bwr back to user space and exit;
3. If bwr's read buffer contains data, run binder_thread_read; on read failure, likewise copy bwr back to user space and exit;
4. Copy the kernel data bwr back to the user-space ubuf.
4. Communication Protocol
As described earlier, after the client executes transact() it interacts with the driver through talkWithDriver(), and the driver in turn interacts with the server through the server's talkWithDriver(). A complete exchange uses the protocol codes below.
The binder protocol codes are carried inside the IPC data and fall into two groups:
BINDER_COMMAND_PROTOCOL: binder request codes, prefixed with "BC_" (BC codes for short), passed from the IPC layer down to the binder driver;
BC_TRANSACTION: the client sends request data to the binder driver;
BC_REPLY: the server sends reply data to the binder driver;
BC_ENTER_LOOPER: sent to the driver when the binder main thread (started from the application layer) is created; joinThreadPool() creates the binder main thread;
BC_REGISTER_LOOPER: a new binder thread is created because the driver asked for one; joinThreadPool() creates these non-main binder threads;
BC_EXIT_LOOPER: a binder thread exits (the binder main thread never exits); when joinThreadPool() times out on a non-main binder thread, that thread exits;
BINDER_RETURN_PROTOCOL: binder response codes, prefixed with "BR_" (BR codes for short), passed from the binder driver up to the IPC layer;
BR_TRANSACTION: the binder driver delivers request data to the server;
BR_REPLY: the binder driver delivers reply data to the client;
BR_TRANSACTION_COMPLETE: after the client sends BC_TRANSACTION to the driver, it receives BR_TRANSACTION_COMPLETE confirming the request was sent successfully; likewise, after the server sends BC_REPLY, it receives BR_TRANSACTION_COMPLETE confirming the reply was sent successfully;
BR_SPAWN_LOOPER: the driver has detected that no thread in the process is waiting for incoming transactions; on receiving this command the process must spawn and register a new binder thread. ProcessState creates the Thread, which internally calls joinThreadPool(false) in IPCThreadState.cpp and writes BC_REGISTER_LOOPER to the driver; after its commands are handled the thread exits;
BR_DEAD_BINDER: the binder driver notifies the client that the server's binder has died; the chain is executeCommand(cmd) [IPCThreadState.cpp] --> sendObituary() [BpBinder.cpp] --> reportOneDeath() --> binderDied() [android_util_Binder.cpp] --> sendDeathNotice() [BinderProxy.java] --> DeathRecipient.binderDied() [client]; a linkToDeath() registration sketch follows this list.
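A client registers for that death notification with IBinder.linkToDeath(); a minimal sketch (the handling logic is hypothetical):
// Register a DeathRecipient on a remote binder obtained earlier.
static void watchRemote(IBinder remote) throws RemoteException {
    remote.linkToDeath(new IBinder.DeathRecipient() {
        @Override
        public void binderDied() {
            // invoked on a binder thread after the driver reports BR_DEAD_BINDER;
            // typical handling: clean up the proxy and try to rebind
        }
    }, 0 /* flags */);
}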
VI. Summary
1. When the client calls xx.Stub.asInterface(IBinder mRemote), mRemote is a BinderProxy object whose mObject field points to a BpBinder; when the client invokes a method, the eventual mRemote.transact() goes through transactNative() and is really BpBinder's transact().
2. IPCThreadState contains both the client-side and server-side logic and talks to the driver through talkWithDriver(): the client's request is written to the driver; on the server, joinThreadPool() runs getAndExecuteCommand() to read from the driver, dispatches the request to the Java-layer Binder implementation through JavaBBinder's transact(), and writes the result back to the driver via sendReply().
3. In waitForResponse() the client calls talkWithDriver() to interact with the driver; on success it receives BR_TRANSACTION_COMPLETE. For a oneway call the communication ends there; otherwise it blocks in talkWithDriver() again until BR_REPLY arrives, processes the data, and finally reads the result with reply.readXX().
Finally, the whole request-and-handling flow from client to server is summarized in a flow chart (not reproduced in this text version).
Many thanks to the following excellent articles:
http://gityuan.com/2015/11/01/binder-driver/
http://gityuan.com/2015/11/02/binder-driver-2/