Binder Driver Analysis (Part 1): Data Structures

Before reading the Binder Driver Analysis series, it is assumed that you are already reasonably familiar with how Binder is used.
You can find an example of Binder usage here (https://github.com/weidongshan/APP_0003_Binder_C_App). The code there is application-level code; it shows how an application talks to the binder driver and will help you follow the material below.

In AOSP, the binder driver source lives at common/drivers/android/binder.c.

When the Binder driver is used, the example above eventually ends up calling the binder_call function, whose signature is:

int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)

binder_call packs these arguments into a binder_write_read structure and then communicates with the Binder driver via ioctl; a simplified sketch follows the parameter list below.

binder_call takes several important parameters:

struct binder_io *msg: pointer to a binder_io structure holding the Binder request data to send (the arguments you provide).
struct binder_io *reply: pointer to a binder_io structure that receives the target service's response (the return value).
uint32_t target: the handle of the target service, identifying which Binder service to call (which service).
uint32_t code: the transaction code, specifying which operation to perform (which function of that service).
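
To make this concrete, here is a simplified sketch of binder_call, adapted from the C binder helper code that the example repo is based on (frameworks/native/cmds/servicemanager/binder.c in AOSP). Error handling is stripped out, and the real version loops on the ioctl until a reply has been parsed:

/* Simplified sketch of binder_call(): pack the request into a
 * binder_write_read and hand it to the driver with one ioctl. */
int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    uint32_t readbuf[32];

    /* Which handle, which transaction code, and where the serialized
     * binder_io payload (data + offsets) lives. */
    writebuf.cmd = BC_TRANSACTION;
    writebuf.txn.target.handle = target;
    writebuf.txn.code = code;
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0;
    writebuf.txn.offsets_size = ((char *) msg->offs) - ((char *) msg->offs0);
    writebuf.txn.data.ptr.buffer = (uintptr_t) msg->data0;
    writebuf.txn.data.ptr.offsets = (uintptr_t) msg->offs0;

    bwr.write_size = sizeof(writebuf);
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;
    bwr.read_size = sizeof(readbuf);
    bwr.read_consumed = 0;
    bwr.read_buffer = (uintptr_t) readbuf;

    /* A single BINDER_WRITE_READ ioctl both sends the transaction and reads
     * back driver commands; binder_parse() then unpacks the reply into *reply.
     * (The real code retries in a loop and checks every return value.) */
    if (ioctl(bs->fd, BINDER_WRITE_READ, &bwr) < 0)
        return -1;
    return binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
}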

When the client talks to the binder driver, the driver uses the target parameter, i.e. the handle, to find the client's reference to the service provided by the server process. That reference is represented by struct binder_ref:

/**
 * struct binder_ref - struct to track references on nodes
 * @data:        binder_ref_data containing id, handle, and current refcounts
 * @rb_node_desc: node for lookup by @data.desc in proc's rb_tree
 * @rb_node_node: node for lookup by @node in proc's rb_tree
 * @node_entry:  list entry for node->refs list in target node
 *               (protected by @node->lock)
 * @proc:        binder_proc containing ref
 * @node:        binder_node of target node. When cleaning up a
 *               ref for deletion in binder_cleanup_ref, a non-NULL
 *               @node indicates the node must be freed
 * @death:       pointer to death notification (ref_death) if requested
 *               (protected by @node->lock)
 *
 * Structure to track references from procA to target node (on procB). This
 * structure is unsafe to access without holding @proc->outer_lock.
 */
struct binder_ref {
    /* Lookups needed: */
    /*   node + proc => ref (transaction) */
    /*   desc + proc => ref (transaction, inc/dec ref) */
    /*   node => refs + procs (proc exit) */
    struct binder_ref_data data;
    struct rb_node rb_node_desc;
    struct rb_node rb_node_node;
    struct hlist_node node_entry;
    struct binder_proc *proc;
    struct binder_node *node;
    struct binder_ref_death *death;
};
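
The lookup by handle is a plain red-black tree walk over the calling process's refs_by_desc tree. The following is a simplified sketch of binder_get_ref_olocked() in binder.c, with locking and the strong-reference check omitted:

/* Simplified from binder_get_ref_olocked() in binder.c: walk the calling
 * process's refs_by_desc rbtree and return the ref whose descriptor
 * (i.e. handle) matches. */
static struct binder_ref *binder_get_ref_sketch(struct binder_proc *proc,
                                                u32 desc)
{
    struct rb_node *n = proc->refs_by_desc.rb_node;
    struct binder_ref *ref;

    while (n) {
        ref = rb_entry(n, struct binder_ref, rb_node_desc);

        if (desc < ref->data.desc)
            n = n->rb_left;
        else if (desc > ref->data.desc)
            n = n->rb_right;
        else
            return ref;   /* found: this is the client's reference */
    }
    return NULL;
}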

The service itself, as provided by the server process, is represented in the driver by struct binder_node:

/**
 * struct binder_node - binder node bookkeeping
 * @debug_id:             unique ID for debugging
 *                        (invariant after initialized)
 * @lock:                 lock for node fields
 * @work:                 worklist element for node work
 *                        (protected by @proc->inner_lock)
 * @rb_node:              element for proc->nodes tree
 *                        (protected by @proc->inner_lock)
 * @dead_node:            element for binder_dead_nodes list
 *                        (protected by binder_dead_nodes_lock)
 * @proc:                 binder_proc that owns this node
 *                        (invariant after initialized)
 * @refs:                 list of references on this node
 *                        (protected by @lock)
 * @internal_strong_refs: used to take strong references when
 *                        initiating a transaction
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @local_weak_refs:      weak user refs from local process
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @local_strong_refs:    strong user refs from local process
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @tmp_refs:             temporary kernel refs
 *                        (protected by @proc->inner_lock while @proc
 *                        is valid, and by binder_dead_nodes_lock
 *                        if @proc is NULL. During inc/dec and node release
 *                        it is also protected by @lock to provide safety
 *                        as the node dies and @proc becomes NULL)
 * @ptr:                  userspace pointer for node
 *                        (invariant, no lock needed)
 * @cookie:               userspace cookie for node
 *                        (invariant, no lock needed)
 * @has_strong_ref:       userspace notified of strong ref
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @pending_strong_ref:   userspace has acked notification of strong ref
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @has_weak_ref:         userspace notified of weak ref
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @pending_weak_ref:     userspace has acked notification of weak ref
 *                        (protected by @proc->inner_lock if @proc
 *                        and by @lock)
 * @has_async_transaction: async transaction to node in progress
 *                        (protected by @lock)
 * @sched_policy:         minimum scheduling policy for node
 *                        (invariant after initialized)
 * @accept_fds:           file descriptor operations supported for node
 *                        (invariant after initialized)
 * @min_priority:         minimum scheduling priority
 *                        (invariant after initialized)
 * @inherit_rt:           inherit RT scheduling policy from caller
 * @txn_security_ctx:     require sender's security context
 *                        (invariant after initialized)
 * @async_todo:           list of async work items
 *                        (protected by @proc->inner_lock)
 *
 * Bookkeeping structure for binder nodes.
 */
struct binder_node {
    int debug_id;
    spinlock_t lock;
    struct binder_work work;
    union {
        struct rb_node rb_node;
        struct hlist_node dead_node;
    };
    struct binder_proc *proc;
    struct hlist_head refs;
    int internal_strong_refs;
    int local_weak_refs;
    int local_strong_refs;
    int tmp_refs;
    binder_uintptr_t ptr;
    binder_uintptr_t cookie;
    struct {
        /*
         * bitfield elements protected by
         * proc inner_lock
         */
        u8 has_strong_ref:1;
        u8 pending_strong_ref:1;
        u8 has_weak_ref:1;
        u8 pending_weak_ref:1;
    };
    struct {
        /*
         * invariant after initialization
         */
        u8 sched_policy:2;
        u8 inherit_rt:1;
        u8 accept_fds:1;
        u8 txn_security_ctx:1;
        u8 min_priority;
    };
    bool has_async_transaction;
    struct list_head async_todo;
};
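
A binder_node is created the first time a server passes a binder object into the driver (for example when it registers a service with service_manager). The sketch below condenses the relevant assignments from binder_init_node_ilocked() in binder.c; insertion into proc->nodes, locking, refcounting and the scheduling-related fields are omitted:

/* Condensed from binder_init_node_ilocked() in binder.c: fill in a freshly
 * allocated node for the flat_binder_object (fp) that the server passed in. */
static void binder_init_node_sketch(struct binder_proc *proc,
                                    struct binder_node *node,
                                    struct flat_binder_object *fp)
{
    node->debug_id = atomic_inc_return(&binder_last_id);
    node->proc = proc;            /* the server process owns this node       */
    node->ptr = fp->binder;       /* userspace pointer of the binder object  */
    node->cookie = fp->cookie;    /* userspace cookie that goes with it      */
    node->work.type = BINDER_WORK_NODE;
    node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
    spin_lock_init(&node->lock);
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
}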

So a service reference (binder_ref) leads to the service's binder_node; the binder_ref structure indeed contains a pointer to a binder_node. Once the binder_node (the service provided by the server) has been found, the driver still needs to reach the server process itself, which is represented in the driver as:

/**
 * struct binder_proc - binder process bookkeeping
 * @proc_node:            element for binder_procs list
 * @threads:              rbtree of binder_threads in this proc
 *                        (protected by @inner_lock)
 * @nodes:                rbtree of binder nodes associated with
 *                        this proc ordered by node->ptr
 *                        (protected by @inner_lock)
 * @refs_by_desc:         rbtree of refs ordered by ref->desc
 *                        (protected by @outer_lock)
 * @refs_by_node:         rbtree of refs ordered by ref->node
 *                        (protected by @outer_lock)
 * @waiting_threads:      threads currently waiting for proc work
 *                        (protected by @inner_lock)
 * @pid                   PID of group_leader of process
 *                        (invariant after initialized)
 * @tsk                   task_struct for group_leader of process
 *                        (invariant after initialized)
 * @deferred_work_node:   element for binder_deferred_list
 *                        (protected by binder_deferred_lock)
 * @deferred_work:        bitmap of deferred work to perform
 *                        (protected by binder_deferred_lock)
 * @outstanding_txns:     number of transactions to be transmitted before
 *                        processes in freeze_wait are woken up
 *                        (protected by @inner_lock)
 * @is_dead:              process is dead and awaiting free
 *                        when outstanding transactions are cleaned up
 *                        (protected by @inner_lock)
 * @is_frozen:            process is frozen and unable to service
 *                        binder transactions
 *                        (protected by @inner_lock)
 * @sync_recv:            process received sync transactions since last frozen
 *                        bit 0: received sync transaction after being frozen
 *                        bit 1: new pending sync transaction during freezing
 *                        (protected by @inner_lock)
 * @async_recv:           process received async transactions since last frozen
 *                        (protected by @inner_lock)
 * @freeze_wait:          waitqueue of processes waiting for all outstanding
 *                        transactions to be processed
 *                        (protected by @inner_lock)
 * @todo:                 list of work for this process
 *                        (protected by @inner_lock)
 * @stats:                per-process binder statistics
 *                        (atomics, no lock needed)
 * @delivered_death:      list of delivered death notification
 *                        (protected by @inner_lock)
 * @max_threads:          cap on number of binder threads
 *                        (protected by @inner_lock)
 * @requested_threads:    number of binder threads requested but not
 *                        yet started. In current implementation, can
 *                        only be 0 or 1.
 *                        (protected by @inner_lock)
 * @requested_threads_started: number binder threads started
 *                        (protected by @inner_lock)
 * @tmp_ref:              temporary reference to indicate proc is in use
 *                        (protected by @inner_lock)
 * @default_priority:     default scheduler priority
 *                        (invariant after initialized)
 * @debugfs_entry:        debugfs node
 * @alloc:                binder allocator bookkeeping
 * @context:              binder_context for this proc
 *                        (invariant after initialized)
 * @inner_lock:           can nest under outer_lock and/or node lock
 * @outer_lock:           no nesting under inner or node lock
 *                        Lock order: 1) outer, 2) node, 3) inner
 * @binderfs_entry:       process-specific binderfs log file
 * @oneway_spam_detection_enabled: process enabled oneway spam detection
 *                        or not
 *
 * Bookkeeping structure for binder processes
 */
struct binder_proc {
    struct hlist_node proc_node;
    struct rb_root threads;
    struct rb_root nodes;
    struct rb_root refs_by_desc;
    struct rb_root refs_by_node;
    struct list_head waiting_threads;
    int pid;
    struct task_struct *tsk;
    struct hlist_node deferred_work_node;
    int deferred_work;
    int outstanding_txns;
    bool is_dead;
    bool is_frozen;
    bool sync_recv;
    bool async_recv;
    wait_queue_head_t freeze_wait;

    struct list_head todo;
    struct binder_stats stats;
    struct list_head delivered_death;
    int max_threads;
    int requested_threads;
    int requested_threads_started;
    int tmp_ref;
    struct binder_priority default_priority;
    struct dentry *debugfs_entry;
    struct binder_alloc alloc;
    struct binder_context *context;
    spinlock_t inner_lock;
    spinlock_t outer_lock;
    struct dentry *binderfs_entry;
    bool oneway_spam_detection_enabled;
};

The binder_node structure in turn holds a pointer to its binder_proc. With that, a client process can go from a handle all the way to the server process, append data to one of the server's work lists, and wake it up.
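
This chain is exactly what binder_transaction() walks when a client sends a request. A condensed illustration, reusing the ref-lookup sketch shown earlier (the real code additionally handles reference counting, locking, thread selection and errors):

/* Condensed illustration of the target lookup in binder_transaction();
 * not the literal driver code. */
static struct binder_proc *binder_resolve_target_sketch(struct binder_proc *client,
                                                        uint32_t handle)
{
    struct binder_ref *ref;
    struct binder_node *target_node;

    ref = binder_get_ref_sketch(client, handle);  /* handle -> binder_ref      */
    if (!ref)
        return NULL;
    target_node = ref->node;                      /* binder_ref -> binder_node */

    /* binder_transaction() then builds a binder_transaction, queues its work
     * on the target proc's (or a chosen binder thread's) todo list, and wakes
     * up a waiting binder thread in the server process. */
    return target_node->proc;                     /* binder_node -> binder_proc */
}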

In practice, the server process creates multiple threads to handle the data sent by clients, which is why binder_proc contains a struct rb_root threads member. The rb_* prefix tells us that binder_proc manages its threads with a red-black tree. In the binder driver, each of these threads is represented by struct binder_thread:

/**
 * struct binder_thread - binder thread bookkeeping
 * @proc:                 binder process for this thread
 *                        (invariant after initialization)
 * @rb_node:              element for proc->threads rbtree
 *                        (protected by @proc->inner_lock)
 * @waiting_thread_node:  element for @proc->waiting_threads list
 *                        (protected by @proc->inner_lock)
 * @pid:                  PID for this thread
 *                        (invariant after initialization)
 * @looper:               bitmap of looping state
 *                        (only accessed by this thread)
 * @looper_needs_return:  looping thread needs to exit driver
 *                        (no lock needed)
 * @transaction_stack:    stack of in-progress transactions for this thread
 *                        (protected by @proc->inner_lock)
 * @todo:                 list of work to do for this thread
 *                        (protected by @proc->inner_lock)
 * @process_todo:         whether work in @todo should be processed
 *                        (protected by @proc->inner_lock)
 * @return_error:         transaction errors reported by this thread
 *                        (only accessed by this thread)
 * @reply_error:          transaction errors reported by target thread
 *                        (protected by @proc->inner_lock)
 * @wait:                 wait queue for thread work
 * @stats:                per-thread statistics
 *                        (atomics, no lock needed)
 * @tmp_ref:              temporary reference to indicate thread is in use
 *                        (atomic since @proc->inner_lock cannot
 *                        always be acquired)
 * @is_dead:              thread is dead and awaiting free
 *                        when outstanding transactions are cleaned up
 *                        (protected by @proc->inner_lock)
 * @task:                 struct task_struct for this thread
 * @prio_lock:            protects thread priority fields
 * @prio_next:            saved priority to be restored next
 *                        (protected by @prio_lock)
 * @prio_state:           state of the priority restore process as
 *                        defined by enum binder_prio_state
 *                        (protected by @prio_lock)
 *
 * Bookkeeping structure for binder threads.
 */
struct binder_thread {
    struct binder_proc *proc;
    struct rb_node rb_node;
    struct list_head waiting_thread_node;
    int pid;
    int looper;              /* only modified by this thread */
    bool looper_need_return; /* can be written by other thread */
    struct binder_transaction *transaction_stack;
    struct list_head todo;
    bool process_todo;
    struct binder_error return_error;
    struct binder_error reply_error;
    wait_queue_head_t wait;
    struct binder_stats stats;
    atomic_t tmp_ref;
    bool is_dead;
    struct task_struct *task;
    spinlock_t prio_lock;
    struct binder_priority prio_next;
    enum binder_prio_state prio_state;
};
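
Whenever a thread of the process enters the driver, the driver looks up (or creates) its binder_thread in proc->threads, keyed by the calling thread's pid. A simplified sketch of the lookup half of binder_get_thread_ilocked() in binder.c:

/* Simplified from binder_get_thread_ilocked() in binder.c: find the
 * binder_thread for the current task; locking and the creation path for a
 * thread entering the driver for the first time are omitted. */
static struct binder_thread *binder_find_thread_sketch(struct binder_proc *proc)
{
    struct rb_node *n = proc->threads.rb_node;
    struct binder_thread *thread;

    while (n) {
        thread = rb_entry(n, struct binder_thread, rb_node);

        if (current->pid < thread->pid)
            n = n->rb_left;
        else if (current->pid > thread->pid)
            n = n->rb_right;
        else
            return thread;   /* this task already has a binder_thread */
    }
    return NULL;             /* the real code allocates and inserts one here */
}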

With the relationships between these data structures in mind, we can walk through how a client process looks up a server process. The procedure breaks down into the following steps:

  1. The server creates a binder_node in the driver for each service it provides, with binder_node.proc = the server process.
  2. service_manager creates a binder_ref in the driver with binder_ref.node pointing at that binder_node; the ref's descriptor (binder_ref.data.desc, the handle) is assigned starting from 1. In user space, service_manager keeps a list of services, where each entry holds a name (the service name) and a handle (the descriptor of the corresponding binder_ref).
  3. The client asks service_manager for a service simply by sending the service name.
  4. service_manager maps the name to a handle; the driver uses that handle to find service_manager's binder_ref, follows it to the binder_node, and then creates a new binder_ref for the client with binder_ref.node pointing at the same binder_node; this new ref's descriptor is likewise assigned starting from 1. The important concept here is that every process gets its own binder_ref pointing at the same binder_node. See the diagram below:


    [Figure: several client processes, each with its own binder_ref, all pointing to the same binder_node in the server process]

  5. The binder driver returns the descriptor of the newly created binder_ref to the client; this descriptor is the client's handle.
  6. The client uses that handle to find its binder_ref, follows binder_ref -> binder_node -> binder_proc, and can then write data onto the binder_proc.todo list and wake up the server process to handle it. A user-space sketch of the lookup in steps 3-5 follows.
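
From the client's side, steps 3-5 amount to one binder_call to service_manager. The sketch below is adapted from svcmgr_lookup() in the C service manager client code that the example repo builds on; SVC_MGR_NAME and SVC_MGR_CHECK_SERVICE come from that code:

/* Adapted from svcmgr_lookup(): ask service_manager for a service by name
 * and receive back a handle for it. */
uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512 / 4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);           /* strict mode header */
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);    /* step 3: send the service name */

    /* target is the handle that refers to service_manager itself (0). */
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    /* Step 5: the reply carries the reference the driver created for this
     * client; bio_get_ref() extracts its handle. */
    handle = bio_get_ref(&reply);
    if (handle)
        binder_acquire(bs, handle);    /* take a strong reference on it */

    binder_done(bs, &msg, &reply);
    return handle;
}

The handle returned here is exactly the descriptor of the client's newly created binder_ref, and it is what later gets passed as target to binder_call when invoking that service (step 6).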
