2. Creating the IBinder Context
The so-called IBinder context here really means obtaining the proxy for a binder server, which is then used to call the server's interface; in other words, this "context" is simply the server's proxy IBinder.
The diagram above shows the flow of creating an IBinder proxy, which makes the code easier to follow.
The analysis now starts from addService.
The flow chart below shows part of how AMS (ActivityManagerService) is started and registered.
As is well known, the SystemServer process starts a large number of system services during boot, including the familiar AMS, PMS, and so on. Many of these services are eventually registered with servicemanager via addService, and the call flow is much the same as in the diagram above.
The flow chart above shows four services being added; all of them end up in ServiceManager.addService.
2.1 addService
//ServiceManager.java frameworks\base\core\java\android\os
public static void addService(String name, IBinder service, boolean allowIsolated,
        int dumpPriority) {
    try {
        getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}
addService has several overloads, but they all end up calling the one above.
Two parameters deserve attention:
name: the name of the service, used later to look the service up
service: an IBinder, which means that any service registered with servicemanager must be a Binder implementation
The method itself does only two things:
- getIServiceManager() obtains the binder proxy for servicemanager
- addService is then called on that proxy to register the service with servicemanager (a minimal native-side equivalent is sketched below)
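For comparison, native services register themselves through the equivalent C++ API. The following is only a sketch of that usage, not code from the flow being traced; the service name "demo.service" and the DemoService class are invented for illustration:
//Hypothetical example (not from AOSP): native-side service registration.
#include <binder/Binder.h>           // BBinder
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

//Anything registered with servicemanager must be an IBinder; on the native
//side that means deriving from BBinder.
class DemoService : public BBinder {};

int main() {
    //defaultServiceManager() follows the same path analyzed below:
    //getContextObject(handle 0) -> BpBinder -> IServiceManager proxy.
    sp<IServiceManager> sm = defaultServiceManager();
    sm->addService(String16("demo.service"), new DemoService()); // name + IBinder
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool(); // serve incoming transactions
    return 0;
}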
This section first looks at how the binder proxy is obtained:
private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative
            .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}
It first calls getContextObject() to obtain the IBinder that refers to servicemanager, and then calls asInterface() on it to produce the IServiceManager proxy.
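The native layer has the same two-step pattern. Roughly (a sketch for illustration, not part of the Java flow being traced):
#include <binder/IInterface.h>       // interface_cast
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>

//Step 1: get the raw IBinder for handle 0; step 2: wrap it in an interface
//proxy. interface_cast<IServiceManager>() plays the role that
//ServiceManagerNative.asInterface() plays on the Java side.
android::sp<android::IServiceManager> getServiceManagerProxy() {
    android::sp<android::IBinder> binder =
            android::ProcessState::self()->getContextObject(nullptr); // BpBinder(0)
    return android::interface_cast<android::IServiceManager>(binder);
}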
getContextObject()
This call goes straight into native code:
//android_util_Binder.cpp frameworks\base\core\jni
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    //Convert the native IBinder into a Java IBinder and return it to the Java layer.
    return javaObjectForIBinder(env, b);
}
ProcessState::self()->getContextObject(NULL)
//ProcessState.cpp frameworks\native\libs\binder
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    //Handle 0 means the target server is service_manager.
    return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != nullptr) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                    return nullptr;
            }

            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
The function does three things:
- Check whether the handle already exists in mHandleToObject; if not, create a handle_entry for it.
- Check whether the IBinder stored in that handle_entry is null. If it is not null, a proxy for the server behind this handle already exists, so simply take a weak reference on it and reuse it.
- Otherwise, i.e. the IBinder is null or a reference cannot be acquired, create a new BpBinder and take a reference on it.
- Finally, return the result to the caller.
1. lookupHandleLocked(handle)
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = nullptr;
        e.refs = nullptr;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return nullptr;
    }
    return &mHandleToObject.editItemAt(handle);
}
If handle >= N, the target handle is not yet in mHandleToObject, so new (empty) handle_entry slots are inserted up to and including that handle.
The handle accessed here is 0; on the first access a handle_entry is created and placed at index 0.
struct handle_entry {
    IBinder* binder;
    RefBase::weakref_type* refs;
};
binder: the proxy object (BpBinder) for the server this handle refers to
refs: the weak-reference bookkeeping for that proxy
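The lookup-or-grow behavior of mHandleToObject can be summarized with a small standalone sketch (hypothetical code, not libbinder):
#include <cstdint>
#include <vector>

//Hypothetical stand-in for ProcessState::handle_entry.
struct Entry {
    void* binder = nullptr;   // proxy for this handle, if one exists yet
    void* refs   = nullptr;   // its weak-reference bookkeeping, if any
};

//Every handle that is ever looked up gets a slot; missing slots are filled
//with empty entries, mirroring lookupHandleLocked().
Entry* lookupHandle(std::vector<Entry>& table, int32_t handle) {
    if (table.size() <= static_cast<size_t>(handle))
        table.resize(handle + 1);          // insert empty entries up to 'handle'
    return &table[handle];                 // may still be an empty slot
}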
2. Null check on the IBinder
Assume this is the first time a service is being added to servicemanager; the IBinder for handle 0 is then null, and execution falls straight through to step 3.
3. Creating the BpBinder
For handle 0, a PING_TRANSACTION is sent first to check whether servicemanager is alive; if it is not, registration cannot possibly succeed, so the function returns immediately.
Next the BpBinder is created:
b = BpBinder::create(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
After creation, the BpBinder is first stored in e->binder so it can be reused directly next time, and is then assigned to result and returned to the caller.
2.2 Creating the BpBinder
//BpBinder.cpp frameworks\native\libs\binder
BpBinder* BpBinder::create(int32_t handle) {
    int32_t trackedUid = -1; // (the uid-tracking logic has been trimmed here)
    return new BpBinder(handle, trackedUid);
}

BpBinder::BpBinder(int32_t handle, int32_t trackedUid)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(nullptr)
    , mTrackedUid(trackedUid)
{
    IPCThreadState::self()->incWeakHandle(handle, this);
}
Creation is fairly simple: a BpBinder instance is constructed, the handle is recorded in mHandle, and the constructor then calls back into IPCThreadState::incWeakHandle.
The main job of incWeakHandle is to add a weak reference for handle 0; the actual reference counting is done by the binder driver in the kernel:
void IPCThreadState::incWeakHandle(int32_t handle, BpBinder *proxy)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
    // Create a temp reference until the driver has handled this command.
    proxy->getWeakRefs()->incWeak(mProcess.get());
    mPostWriteWeakDerefs.push(proxy->getWeakRefs());
}
From the earlier analysis of ProcessState initialization we know that the commands written into mOut are later flushed to the kernel through talkWithDriver, which is where they are actually executed.
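In essence, talkWithDriver wraps the bytes queued in mOut in a binder_write_read descriptor and hands them to the driver with a single BINDER_WRITE_READ ioctl. A simplified sketch of that step (not the real IPCThreadState code; the uapi header path may differ between platforms):
#include <linux/android/binder.h>   // binder_write_read, BINDER_WRITE_READ, BC_INCREFS
#include <sys/ioctl.h>
#include <cstddef>
#include <cstdint>

//For incWeakHandle the out buffer contains just [BC_INCREFS][handle].
int flushCommands(int binderFd, const void* outData, size_t outSize) {
    binder_write_read bwr = {};
    bwr.write_buffer   = reinterpret_cast<binder_uintptr_t>(outData);
    bwr.write_size     = outSize;
    bwr.write_consumed = 0;
    bwr.read_buffer    = 0;   // no read buffer supplied, so the driver will
    bwr.read_size      = 0;   // only run binder_thread_write (see below)
    bwr.read_consumed  = 0;
    //Handled by binder_ioctl() -> binder_ioctl_write_read() in the kernel.
    return ioctl(binderFd, BINDER_WRITE_READ, &bwr);
}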
Next we move into the kernel.
2.2.1 Kernel handling: binder_ioctl
The code path starts at binder_ioctl, which was briefly introduced in the Binder driver initialization article; here it is covered in more detail.
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    //Per-process private data (binder_proc); see the Binder driver initialization article.
    struct binder_proc *proc = filp->private_data;
    //A binder_thread is looked up here, and created if it does not exist yet.
    struct binder_thread *thread;
    //cmd is the ioctl command sent by talkWithDriver: BINDER_WRITE_READ.
    unsigned int size = _IOC_SIZE(cmd);
    //ubuf is the user-space address of the data built by the client, i.e. the
    //binder_write_read struct passed in through talkWithDriver.
    void __user *ubuf = (void __user *)arg;

    thread = binder_get_thread(proc);

    switch (cmd) {
    case BINDER_WRITE_READ:
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        break;
    }
Two things are analyzed here:
- creation of the binder_thread
- handling of the client's commands
1. Creating the binder_thread
From the Binder driver initialization article we know that when a process opens /dev/binder, a binder_proc is created for that process. For the addService call analyzed here the client process is system_server, and a binder_thread is now looked up or created under that binder_proc.
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
    struct binder_thread *thread;
    struct binder_thread *new_thread;

    thread = binder_get_thread_ilocked(proc, NULL);
    if (!thread) {
        new_thread = kzalloc(sizeof(*thread), GFP_KERNEL);
        thread = binder_get_thread_ilocked(proc, new_thread);
    }
    return thread;
}
The function has two parts: it first checks whether a binder_thread already exists for the calling thread; if not, it allocates a new one and initializes it. Both the lookup and the creation go through the same helper, binder_get_thread_ilocked, with the second parameter indicating whether a new thread should be created, so the two paths can be analyzed together:
static struct binder_thread *binder_get_thread_ilocked(
        struct binder_proc *proc, struct binder_thread *new_thread)
{
    struct binder_thread *thread = NULL;
    struct rb_node *parent = NULL;
    struct rb_node **p = &proc->threads.rb_node;

    //Part one: check whether the thread already exists. The key is current->pid,
    //compared against the pid of each binder_thread already in this binder_proc.
    while (*p) {
        parent = *p;
        thread = rb_entry(parent, struct binder_thread, rb_node);

        if (current->pid < thread->pid)
            p = &(*p)->rb_left;
        else if (current->pid > thread->pid)
            p = &(*p)->rb_right;
        else
            return thread;
    }
    //Part two: if new_thread is non-NULL, go through the creation path. "Creation"
    //is simply initializing the fields of the pre-allocated binder_thread and
    //linking it into the red-black tree.
    if (!new_thread)
        return NULL;
    thread = new_thread;
    binder_stats_created(BINDER_STAT_THREAD);
    thread->proc = proc;
    thread->pid = current->pid;
    get_task_struct(current);
    thread->task = current;
    atomic_set(&thread->tmp_ref, 0);
    init_waitqueue_head(&thread->wait);
    INIT_LIST_HEAD(&thread->todo);
    rb_link_node(&thread->rb_node, parent, p);
    rb_insert_color(&thread->rb_node, &proc->threads);
    thread->looper_need_return = true;
    thread->return_error.work.type = BINDER_WORK_RETURN_ERROR;
    thread->return_error.cmd = BR_OK;
    thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
    thread->reply_error.cmd = BR_OK;
    INIT_LIST_HEAD(&new_thread->waiting_thread_node);
    return thread;
}
The inline comments cover the details; a simplified model of the lookup-or-create logic is sketched below.
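Conceptually, proc->threads is an ordered set of binder_thread entries keyed by pid, and binder_get_thread is a find-or-insert on it. A hypothetical illustration (std::map standing in for the kernel red-black tree; not actual kernel code):
#include <map>

//Hypothetical stand-in for struct binder_thread, keyed by the task's pid.
struct BinderThread {
    int pid;
    // todo list, wait queue, error state, ... omitted
};

//The first ioctl made by a thread creates its binder_thread; later ioctls
//from the same thread find the existing entry.
BinderThread* getThread(std::map<int, BinderThread>& threads, int pid) {
    auto it = threads.find(pid);                            // lookup pass
    if (it == threads.end())
        it = threads.emplace(pid, BinderThread{pid}).first; // create pass
    return &it->second;
}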
2. Handling the client's commands
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    //Copy the user-space binder_write_read struct into kernel space. Only the
    //descriptor itself is copied here, not the buffers it points to.
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }

    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
    }
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        trace_binder_read_done(ret);
        binder_inner_proc_lock(proc);
        if (!binder_worklist_empty_ilocked(&proc->todo))
            binder_wakeup_proc_ilocked(proc);
        binder_inner_proc_unlock(proc);
    }

    //When done, copy the (updated) descriptor back to user space.
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
There are two main actions here, a write and a read. Whether each runs depends on the write_size and read_size carried in the binder_write_read passed in by the client; in our case only write data was supplied, so only binder_thread_write is executed:
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    //The binder_proc's context. From the Binder driver initialization article we know
    //that the device /dev/binder has a single context; its creation is analyzed in detail
    //in the servicemanager article, and this is that same context.
    struct binder_context *context = proc->context;
    //buffer points to the write_buffer carried in binder_write_read, i.e. the command stream.
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    //consumed is the amount of data already processed, so ptr points to the data still to
    //be handled; consumed is 0 here, so ptr points to the start of the command stream.
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;

        //Read the command, which here is BC_INCREFS.
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        //Move ptr past the command just read.
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_INCREFS:
        case BC_ACQUIRE:
        case BC_RELEASE:
        case BC_DECREFS: {
            uint32_t target;
            bool strong = cmd == BC_ACQUIRE || cmd == BC_RELEASE;
            //BC_INCREFS means "add a reference", so increment is true.
            bool increment = cmd == BC_INCREFS || cmd == BC_ACQUIRE;
            struct binder_ref_data rdata;

            //target identifies the target server; it is the next value in the stream,
            //i.e. the handle passed in from user space.
            if (get_user(target, (uint32_t __user *)ptr))
                return -EFAULT;

            ptr += sizeof(uint32_t);
            ret = -1;
            //Adding a reference with target 0...
            if (increment && !target) {
                //...i.e. the target is service_manager: take a ref on ctx_mgr_node.
                struct binder_node *ctx_mgr_node;

                mutex_lock(&context->context_mgr_node_lock);
                ctx_mgr_node = context->binder_context_mgr_node;
                if (ctx_mgr_node)
                    ret = binder_inc_ref_for_node(
                            proc, ctx_mgr_node,
                            strong, NULL, &rdata);
                mutex_unlock(&context->context_mgr_node_lock);
            }
            //For any other target, add or drop a ref by handle.
            if (ret)
                ret = binder_update_ref_for_handle(
                        proc, target, increment, strong,
                        &rdata);
            break;
        }
        }
    }
Only the BC_INCREFS case is analyzed here.
For binder_context_mgr_node, see section 3 of the servicemanager analysis.
As the code shows, target 0 is handled differently from ordinary targets; here we only look at the target == 0 path, i.e. binder_inc_ref_for_node.
static int binder_inc_ref_for_node(struct binder_proc *proc,
            struct binder_node *node,
            bool strong,
            struct list_head *target_list,
            struct binder_ref_data *rdata)
{
    struct binder_ref *ref;
    struct binder_ref *new_ref = NULL;
    int ret = 0;

    binder_proc_lock(proc);
    //Look for an existing binder_ref from this proc to the node.
    ref = binder_get_ref_for_node_olocked(proc, node, NULL);
    if (!ref) {
        //No ref yet: allocate one and insert it into the proc's ref trees.
        binder_proc_unlock(proc);
        new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
        if (!new_ref)
            return -ENOMEM;
        binder_proc_lock(proc);
        ref = binder_get_ref_for_node_olocked(proc, node, new_ref);
    }
    //Increment the ref (a weak increment here, since strong is false for BC_INCREFS).
    ret = binder_inc_ref_olocked(ref, strong, target_list);
    *rdata = ref->data;
    binder_proc_unlock(proc);
    if (new_ref && ref != new_ref)
        /*
         * Another thread created the ref first so
         * free the one we allocated
         */
        kfree(new_ref);
    return ret;
}