Table of Contents
  1. ServiceManager
    1. main
  2. binder_open
  3. binder_become_context_manager
  4. binder_loop
    1. binder_write
    2. binder_parse
    3. svcmgr_handler
    4. do_add_service
  5. Recommendations

Binder: The Creation of ServiceManager

Following up on the article Binder: A First Look at addService, we already know that the client communicates with the service side through BpBinder's transact method, which in turn hands the data off to IPCThreadState's transact method.

Eventually we arrive at IPCThreadState's writeTransactionData method.

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    // Pack the data into tr
    tr.target.ptr = 0;
    tr.target.handle = handle; // handle = 0 targets ServiceManager
    tr.code = code; // operation code, e.g. ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd); // command code BC_TRANSACTION
    // Write the payload for transmission
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

During this transfer, handle = 0 identifies the target as service_manager on the service side.

Let's now analyze how ServiceManager is created.

ServiceManager

ServiceManager is started along with the Android init process; it is declared in the init.rc file.

Its executable is /system/bin/servicemanager, its source file is service_manager.c, and its process name is /system/bin/servicemanager.

service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

So the entry point for starting ServiceManager is the main function of service_manager.c.

main

frameworks/native/cmds/servicemanager/service_manager.c

int main(int argc, char** argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;

    if (argc > 1) {
        driver = argv[1];
    } else {
        driver = "/dev/binder";
    }

    // Open the binder driver
    bs = binder_open(driver, 128*1024);

    ...

    // Register ServiceManager as the binder context manager
    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    ...

    // Start the binder loop and listen for incoming data
    binder_loop(bs, svcmgr_handler);

    return 0;
}

The main function does three things:

  1. Opens the binder driver via binder_open, requesting a 128 KB memory mapping
  2. Registers ServiceManager as the binder context manager via binder_become_context_manager
  3. Starts the binder loop via binder_loop to listen for incoming data

binder_open

frameworks/native/cmds/servicemanager/binder.c

struct binder_state *binder_open(const char* driver, size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    // Open the binder driver
    bs->fd = open(driver, O_RDWR | O_CLOEXEC);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open %s (%s)\n",
                driver, strerror(errno));
        goto fail_open;
    }

    // Verify the binder version
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr,
                "binder: kernel driver version (%d) differs from user space version (%d)\n",
                vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
        goto fail_open;
    }

    // Record the mmap size (128 KB)
    bs->mapsize = mapsize;
    // Map the driver's buffer into this process's address space
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

This function mainly fills in the three fields of bs, a struct of type binder_state:

struct binder_state
{
    int fd;         // file descriptor for /dev/binder
    void *mapped;   // address of the memory mapping
    size_t mapsize; // size of the mapping
};

So binder_open does the following:

  1. Opens the binder driver
  2. Verifies the binder version (a standalone sketch of this check follows the list)
  3. Records the mmap mapping size, 128 KB by default
  4. Establishes the mmap memory mapping
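
As a quick aside, the version check can be reproduced in a minimal standalone program. This is a sketch rather than AOSP code; it assumes the binder UAPI header is available as <linux/android/binder.h> (the exact path varies by kernel), which defines BINDER_VERSION, BINDER_CURRENT_PROTOCOL_VERSION and struct binder_version:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

int main(void)
{
    // Open the binder driver with the same flags servicemanager uses
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        perror("open /dev/binder");
        return 1;
    }

    // Ask the kernel driver for its protocol version
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) == -1) {
        perror("ioctl BINDER_VERSION");
        close(fd);
        return 1;
    }

    printf("kernel protocol: %d, user space expects: %d\n",
           vers.protocol_version, BINDER_CURRENT_PROTOCOL_VERSION);
    close(fd);
    return 0;
}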

binder_become_context_manager

frameworks/native/cmds/servicemanager/binder.c

int binder_become_context_manager(struct binder_state *bs)
{
    struct flat_binder_object obj;
    // Initialize obj
    memset(&obj, 0, sizeof(obj));
    obj.flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX;

    // Talk to the binder driver
    int result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR_EXT, &obj);

    if (result != 0) {
        android_errorWriteLog(0x534e4554, "121035042");

        // Fall back to the original, non-extended call
        result = ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
    }
    return result;
}

In binder_become_context_manager, ioctl is used to talk to the binder driver. It first tries BINDER_SET_CONTEXT_MGR_EXT with a flat_binder_object (which lets the driver deliver security contexts); if that fails, it falls back to the older BINDER_SET_CONTEXT_MGR with the argument 0. Either way, ServiceManager becomes the binder context manager, the single node identified by handle 0 that centrally handles binder registrations and lookups.
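
For reference, flat_binder_object is the user-space representation of a binder object. The sketch below follows the kernel's binder UAPI header on modern kernels (older kernels used a plain type field instead of hdr, so the exact layout varies by version):

struct flat_binder_object {
    struct binder_object_header hdr; // object type, e.g. BINDER_TYPE_BINDER
    __u32 flags;                     // e.g. FLAT_BINDER_FLAG_TXN_SECURITY_CTX

    union {
        binder_uintptr_t binder;     // local object: pointer to the node
        __u32 handle;                // remote object: handle/reference
    };

    binder_uintptr_t cookie;         // extra data attached to a local object
};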

binder_loop

frameworks/native/cmds/servicemanager/binder.c

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    // Write the BC_ENTER_LOOPER command
    binder_write(bs, readbuf, sizeof(uint32_t));

    // Loop forever, waiting for incoming data
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // Read data from the binder driver
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // Parse the data
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

binder_loop does three things:

  1. First it sends the BC_ENTER_LOOPER command via binder_write, telling the binder driver that this thread is entering the loop
  2. It then loops forever, using ioctl to wait for and read incoming data
  3. Once data is read, it is handed to binder_parse for further parsing

binder_write

frameworks/native/cmds/servicemanager/binder.c

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    // Fill the data into bwr (write side only)
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    // Hand the data to the driver
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

Here the data is packed into bwr, a binder_write_read struct. When writing, the payload goes into write_buffer; when reading, data comes back through read_buffer. The struct therefore describes a bidirectional exchange, so a single ioctl against the binder driver can perform reads, writes, or both. Its definition is shown below.
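
For reference, binder_write_read is defined in the binder UAPI header roughly as follows:

struct binder_write_read {
    binder_size_t    write_size;     // number of bytes to write
    binder_size_t    write_consumed; // bytes the driver consumed (output)
    binder_uintptr_t write_buffer;   // pointer to the outgoing commands
    binder_size_t    read_size;      // capacity of the read buffer
    binder_size_t    read_consumed;  // bytes the driver filled in (output)
    binder_uintptr_t read_buffer;    // pointer to the incoming buffer
};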

Since this is the first entry and the loop is about to start, the first write carries the BC_ENTER_LOOPER command, notifying the binder driver that this thread is entering its loop.

The bwr struct is then handed to the binder driver via ioctl with the BINDER_WRITE_READ request code.
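
Although binder_write only uses the write half, nothing stops both halves from being filled in at once. A minimal sketch, assuming fd is an open /dev/binder descriptor and the UAPI header path from earlier:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Sketch only: submit one command and wait for incoming work
 * in a single syscall. */
int binder_write_and_read(int fd)
{
    struct binder_write_read bwr;
    uint32_t cmd = BC_ENTER_LOOPER;
    uint32_t readbuf[32];

    /* Write half: one outgoing command */
    bwr.write_size     = sizeof(cmd);
    bwr.write_consumed = 0;
    bwr.write_buffer   = (uintptr_t) &cmd;
    /* Read half: a buffer the driver can fill with BR_ commands */
    bwr.read_size      = sizeof(readbuf);
    bwr.read_consumed  = 0;
    bwr.read_buffer    = (uintptr_t) readbuf;

    /* After the call, write_consumed and read_consumed report how much
     * of each buffer the driver actually processed / filled in. */
    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}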

binder_parse

frameworks/native/cmds/servicemanager/binder.c

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    // Address just past the end of the data
    uintptr_t end = ptr + (uintptr_t) size;
    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {

        ...

        case BR_TRANSACTION: {
            struct binder_transaction_data_secctx txn;

            ...

            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, &txn.transaction_data);

                // Invoke func, which here is svcmgr_handler
                res = func(bs, &txn, &msg, &reply);
                if (txn.transaction_data.flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
                } else {
                    // Send the reply back
                    binder_send_reply(bs, &reply, txn.transaction_data.data.ptr.buffer, res);
                }
            }
            break;
        }

        ...

        case BR_REPLY: {

            ...

            break;
        }
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}

binder_parse parses the binder messages: ptr points to the buffer of commands read back from the driver, and func points to svcmgr_handler. So when a request (BR_TRANSACTION) arrives, svcmgr_handler is invoked and the result is sent back to the client via binder_send_reply. This corresponds to the BC_REPLY mentioned in the previous article.

This svcmgr_handler is the callback passed in by binder_loop, shown above; a sketch of binder_send_reply follows.
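
For completeness, here is binder_send_reply, abridged from the same binder.c (comments mine). It batches two commands into one binder_write call: BC_FREE_BUFFER to let the driver reclaim the transaction buffer, then BC_REPLY carrying the result:

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    // First command: let the driver reclaim the transaction buffer
    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    // Second command: the actual reply
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        // Handler failed: reply with just the status code
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        // Success: reply with the data accumulated in reply
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}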

svcmgr_handler

frameworks/native/cmds/servicemanager/service_manager.c

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data_secctx *txn_secctx,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    uint32_t dumpsys_priority;

    struct binder_transaction_data *txn = &txn_secctx->transaction_data;

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        // Look up the service
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid,
                                 (const char*) txn_secctx->secctx);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        dumpsys_priority = bio_get_uint32(msg);

        // Register the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated, dumpsys_priority,
                           txn->sender_pid, (const char*) txn_secctx->secctx))
            return -1;
        break;
    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);
        uint32_t req_dumpsys_priority = bio_get_uint32(msg);
        if (!svc_can_list(txn->sender_pid, (const char*) txn_secctx->secctx, txn->sender_euid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                  txn->sender_euid);
            return -1;
        }
        si = svclist;

        // Walk the service list
        while (si) {
            if (si->dumpsys_priority & req_dumpsys_priority) {
                if (n == 0) break;
                n--;
            }
            si = si->next;
        }
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

svcmgr_handler dispatches the operations on services. For example, the addService operation from the previous article ultimately lands in the SVC_MGR_ADD_SERVICE case.

There, the service is registered via the do_add_service function.

do_add_service

frameworks/native/cmds/servicemanager/service_manager.c

int do_add_service(struct binder_state *bs, const uint16_t *s, size_t len, uint32_t handle,
                   uid_t uid, int allow_isolated, uint32_t dumpsys_priority, pid_t spid, const char* sid) {
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    // Check whether this caller may register the service
    if (!svc_can_register(s, len, spid, sid, uid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
              str8(s, len), handle, uid);
        return -1;
    }

    // Check whether the service is already registered
    si = find_svc(s, len);
    if (si) { // already registered
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                  str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else { // not registered yet
        // Allocate memory for the new entry
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                  str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->dumpsys_priority = dumpsys_priority;
        si->next = svclist;
        // Link it into the svclist registry
        svclist = si;
    }

    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

do_add_service first checks whether the caller is allowed to register the service, then looks it up in the existing svclist. If it is not there yet, memory is allocated for a new entry, which is linked to the head of the svclist registry. The svcinfo node and the lookup helper are sketched below.
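
For reference, svclist is a simple singly linked list of svcinfo nodes; the struct and the find_svc lookup below come from the same service_manager.c (comments mine):

struct svcinfo
{
    struct svcinfo *next;      // next node in svclist
    uint32_t handle;           // binder handle of the service
    struct binder_death death; // death notification record
    int allow_isolated;        // visible to isolated processes?
    uint32_t dumpsys_priority;
    size_t len;                // name length in uint16_t units
    uint16_t name[0];          // UTF-16 service name (flexible array)
};

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    // Linear scan: compare length first, then the UTF-16 name
    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}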

This completes the analysis of the ServiceManager flow. To summarize:

  1. binder_open opens the binder driver and mmaps a 128 KB region of address space
  2. binder_become_context_manager registers ServiceManager as the binder context manager, identified by handle 0
  3. binder_loop starts the loop that waits for and listens to data from clients
  4. Before listening, binder_write tells the binder driver that the thread is entering the loop (BC_ENTER_LOOPER)
  5. ioctl performs all reads and writes against the binder driver
  6. binder_parse parses the received data, distinguishing operations by their BR_ command codes and feeding results back to the client via a reply and BC_ command codes
  7. Parsed requests are dispatched to svcmgr_handler for unified handling: registering, looking up, and validating services
  8. Registered services end up in the svclist registry, ready for later validation and lookup

Recommendations

android_startup: provides a simpler and more efficient way to initialize components at app startup, improving startup speed. It supports all the features of Jetpack App Startup, and additionally offers synchronous and asynchronous waiting, thread control, and multi-process support.

AwesomeGithub: a Github client and pure practice project with componentized development, supporting account/password and authorized login. Written in Kotlin with an MVVM architecture based on JetPack & DataBinding; it uses popular open-source technologies such as Arouter, Retrofit, Coroutine, Glide, Dagger, and Hilt.

flutter_github: a cross-platform Github client built with Flutter, the counterpart of AwesomeGithub.

android-api-analysis: a comprehensive walkthrough of Android knowledge points with detailed demos, helping readers grasp and understand the key points faster.

daily_algorithm: one algorithm a day, from shallow to deep; everyone is welcome to join and improve together.
