Various Ways to Read Android CPU Usage



2023-09-03 14:02

This article covers:

The common ways of obtaining Android CPU usage
The principles behind those common methods
A script of my own that reports each thread's usage on each individual CPU core

1. Common Methods for Obtaining Android CPU Usage, and How They Work

First, how do we view basic CPU information? As many readers will know, the following command does it:

adb shell cat /proc/cpuinfo

For example, on a TV at hand I get the output below. You can see it is a 4-core CPU, along with the corresponding CPU architecture:

processor       : 0
BogoMIPS        : 24.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4
Hardware        : Maserati
processor       : 1
BogoMIPS        : 24.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 2
Hardware        : Maserati
processor       : 2
BogoMIPS        : 24.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant     : 0x0
CPU part        : 0xd08
CPU revision    : 2
Hardware        : Maserati
processor       : 3
BogoMIPS        : 24.00
Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32
CPU implementer : 0x41
CPU architecture: AArch64
CPU variant     : 0x0
CPU part        : 0xd03
CPU revision    : 4
Hardware        : Maserati
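This text is easy to digest programmatically, e.g. to count cores or compare per-core part IDs. A minimal sketch (the `parse_cpuinfo` helper is my own, and the sample input is an abridged, hypothetical /proc/cpuinfo):

```python
def parse_cpuinfo(text):
    """Group /proc/cpuinfo text into one dict per 'processor' entry."""
    cores = []
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, value = (s.strip() for s in line.split(":", 1))
        if key == "processor":
            cores.append({})        # a new core block starts here
        if cores:
            cores[-1][key] = value
    return cores

# Abridged, hypothetical sample of the output above
sample = """processor : 0
CPU part : 0xd03
processor : 1
CPU part : 0xd08
"""
cores = parse_cpuinfo(sample)
print(len(cores), cores[0]["CPU part"])  # → 2 0xd03
```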

Later we will see that most CPU usage data comes from /proc. So what is /proc? Quoting the Linux man-pages:

The proc filesystem is a pseudo-filesystem which provides an interface to kernel data structures. It is commonly mounted at /proc. Most of it is read-only, but some files allow kernel variables to be changed.

The above is basic Linux knowledge, noted here for reference.

1.1 /proc/stat

adb shell cat /proc/stat

Again, on my TV the command above prints the following. Note that the line starting with # is a comment I added; the actual output does not contain it. Since we care about CPU usage, only the first five data lines matter; see the Linux man-pages for the meaning of the remaining lines.

# user nice system idle iowait irq softirq steal guest guest_nice
cpu  9209017 769851 5253355 93211564 47788 0 507580 0 0 0
cpu0 2331920 357020 1753947 22302205 5287 0 499656 0 0 0
cpu1 2280688 26391 794710 24135619 9820 0 2260 0 0 0
cpu2 2289562 26618 782293 24138249 9637 0 3323 0 0 0
cpu3 2306846 359821 1922404 22635490 23043 0 2339 0 0 0
intr 2268122829 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1514463779 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 39918 0 0 0 0 0 0 0 0 0 0 0 0 0 0 24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 8478059 0 1231467 0 63977920 21812722 0 0 0 0 0 0 0 0 22 14476527 0 0 0 0 0 0 0 0 0 0 0 0 0 4489 0 0 3044630 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 30151490 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3126 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
ctxt 3894466714
btime 1504771975
processes 324478
procs_running 1
procs_blocked 0
softirq 593223335 0 272491432 25632652 41701499 0 0 4153 92974658 20514 160398427

The first five data lines are the overall CPU figures followed by each core's figures. The meaning of each column is again documented in the Linux man-pages, as follows. Note that all of these times are cumulative from system boot to the present.

user (1) Time spent in user mode.

nice (2) Time spent in user mode with low priority (nice).

system (3) Time spent in system mode.

idle (4) Time spent in the idle task (i.e., waiting time other than I/O wait).

iowait (since Linux 2.5.41) (5) Time waiting for I/O to complete. This value is not reliable, for the following reasons:
1. The CPU will not wait for I/O to complete; iowait is the time that a task is waiting for I/O to complete. When a CPU goes into idle state for outstanding task I/O, another task will be scheduled on this CPU.
2. On a multi-core CPU, the task waiting for I/O to complete is not running on any CPU, so the iowait of each CPU is difficult to calculate.
3. The value in this field may decrease in certain conditions.

irq (since Linux 2.6.0-test4) (6) Time servicing interrupts.

softirq (since Linux 2.6.0-test4) (7) Time servicing softirqs.

steal (since Linux 2.6.11) (8) Stolen time, which is the time spent in other operating systems when running in a virtualized environment.

guest (since Linux 2.6.24) (9) Time spent running a virtual CPU for guest operating systems under the control of the Linux kernel.

guest_nice (since Linux 2.6.33) (10) Time spent running a niced guest (virtual CPU for guest operating systems under the control of the Linux kernel).

The sum of the first seven fields (user, nice, system, idle, iowait, irq, softirq) is usually taken as the total CPU time. Since these are cumulative values, we simply take two snapshots at two points in time, total_time_old and total_time_new; their difference is the total CPU time over that interval, total_time_delta. If we can also obtain the CPU time a process or thread consumed over the same interval, proc_time_delta, then that process's or thread's CPU usage is 100% * proc_time_delta / total_time_delta.

So how do we read the CPU data of a single process or thread? Read on.

1.2 /proc/[pid]/stat and /proc/[pid]/task/[tid]/stat

adb shell cat /proc/[pid]/stat
adb shell cat /proc/[pid]/task/[tid]/stat

As for obtaining the pid and tid, use the ps command. For example, on the TV at hand, first find the pid of a process, here the ijkplayer demo:

u0_a69 18446 1758 915464 29648 SyS_epoll_ 0000000000 S tv.danmaku.ijk.media.example

Then list that process's threads with the following command:

adb shell ps -t 18446

The result is as follows:

USER   PID   PPID  VSIZE  RSS   WCHAN      PC         NAME
u0_a69 18446 1758  915464 29648 SyS_epoll_ 0000000000 S tv.danmaku.ijk.media.example
u0_a69 18451 18446 915464 29648 do_sigtime 0000000000 S Signal Catcher
u0_a69 18452 18446 915464 29648 poll_sched 0000000000 S JDWP
u0_a69 18453 18446 915464 29648 futex_wait 0000000000 S ReferenceQueueD
u0_a69 18454 18446 915464 29648 futex_wait 0000000000 S FinalizerDaemon
u0_a69 18455 18446 915464 29648 futex_wait 0000000000 S FinalizerWatchd
u0_a69 18456 18446 915464 29648 futex_wait 0000000000 S HeapTaskDaemon
u0_a69 18457 18446 915464 29648 binder_thr 0000000000 S Binder_1
u0_a69 18458 18446 915464 29648 binder_thr 0000000000 S Binder_2
u0_a69 18491 18446 915464 29648 futex_wait 0000000000 S ModernAsyncTask
u0_a69 18495 18446 915464 29648 SyS_epoll_ 0000000000 S RenderThread
u0_a69 18502 18446 915464 29648 futex_wait 0000000000 S mali-mem-purge
u0_a69 18503 18446 915464 29648 futex_wait 0000000000 S mali-utility-wo
u0_a69 18504 18446 915464 29648 futex_wait 0000000000 S mali-utility-wo
u0_a69 18505 18446 915464 29648 futex_wait 0000000000 S mali-utility-wo
u0_a69 18506 18446 915464 29648 futex_wait 0000000000 S mali-utility-wo
u0_a69 18507 18446 915464 29648 poll_sched 0000000000 S mali-cmar-backe
u0_a69 18508 18446 915464 29648 futex_wait 0000000000 S mali-hist-dump
u0_a69 19664 18446 915464 29648 futex_wait 0000000000 S ModernAsyncTask
u0_a69 25018 18446 915464 29648 binder_thr 0000000000 S Binder_3
u0_a69 25026 18446 915464 29648 futex_wait 0000000000 S ModernAsyncTask

Now look at the CPU data of the process, and of an arbitrary thread:

18446 (k.media.example) S 1758 1757 0 0 -1 1077936448 20639 0 1 0 70 18 0 0 20 0 21 0 8405754 937435136 7412 18446744073709551615 1 1 0 0 0 0 4612 0 38136 18446744073709551615 0 0 17 3 0 0 0 0 0 0 0 0 0 0 0 0 0
18495 (RenderThread) S 1758 1757 0 0 -1 1077936192 3474 0 0 0 32 4 0 0 16 -4 21 0 8405783 937435136 7412 18446744073709551615 1 1 0 0 0 0 4612 0 38136 18446744073709551615 0 0 -1 3 0 0 0 0 0 0 0 0 0 0 0 0 0
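One parsing subtlety before going through the fields: comm, the second field, is wrapped in parentheses and may itself contain spaces, so naive whitespace splitting is unsafe. The robust approach is to split at the first "(" and the last ")". A minimal Python sketch (the `parse_stat` helper is my own; the field indices match the list that follows):

```python
def parse_stat(stat_line):
    """Parse a /proc/[pid]/stat line, splitting comm out at the parentheses."""
    open_paren = stat_line.index("(")
    close_paren = stat_line.rindex(")")
    pid = int(stat_line[:open_paren].strip())
    comm = stat_line[open_paren + 1:close_paren]
    rest = stat_line[close_paren + 1:].split()
    # rest[0] is state (field 3); rest[11] and rest[12] are utime/stime (fields 14/15)
    return {"pid": pid, "comm": comm, "state": rest[0],
            "utime": int(rest[11]), "stime": int(rest[12])}

# Truncated copy of the process line shown above
line = ("18446 (k.media.example) S 1758 1757 0 0 -1 1077936448 20639 0 1 0 "
        "70 18 0 0 20 0 21 0 8405754 937435136 7412")
info = parse_stat(line)
print(info["comm"], info["utime"], info["stime"])  # → k.media.example 70 18
```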

That's over 50 fields printed in one go, but don't worry: the Linux man-pages again document each one. First, the meaning of /proc/[pid]/stat, which, as shown below, is the process's status information:

Status information about the process. This is used by ps(1). It is defined in the kernel source file fs/proc/array.c.

And the meaning of /proc/[pid]/task:

This is a directory that contains one subdirectory for each thread in the process. The name of each subdirectory is the numerical thread ID ([tid]) of the thread (see gettid(2)). Within each of these subdirectories, there is a set of files with the same names and contents as under the /proc/[pid] directories.

So it holds per-thread information, and each subdirectory under it has the same layout as /proc/[pid]. Now let's go through those 50-odd fields; for easier reading, I append the value obtained above to each field's description.

(1) pid %d The process ID. 18446

(2) comm %s The thread or process name. (k.media.example)

(3) state %c One of the following characters, indicating process state. Common values are listed below. (S in this example)

R  Running
S  Sleeping in an interruptible wait
D  Waiting in uninterruptible disk sleep
Z  Zombie

(4) ppid %d The PID of the parent of this process. 1758

(5) pgrp %d The process group ID of the process. 1757

(6) session %d The session ID of the process. 0

(7) tty_nr %d The controlling terminal of the process. 0

(8) tpgid %d The ID of the foreground process group of the controlling terminal of the process. -1

(9) flags %u The kernel flags word of the process. For bit meanings, see the PF_* defines in the Linux kernel source file include/linux/sched.h. Details depend on the kernel version. 1077936448

(10) minflt %lu The number of minor faults the process has made which have not required loading a memory page from disk. 20639

(11) cminflt %lu The number of minor faults that the process's waited-for children have made. 0

(12) majflt %lu The number of major faults the process has made which have required loading a memory page from disk. 1

(13) cmajflt %lu The number of major faults that the process's waited-for children have made. 0

(14) utime %lu Amount of time that this process has been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). This includes guest time, guest_time (time spent running a virtual CPU, see below), so that applications that are not aware of the guest time field do not lose that time from their calculations. 70

(15) stime %lu Amount of time that this process has been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). 18

(16) cutime %ld Amount of time that this process's waited-for children have been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). (See also times(2).) This includes guest time, cguest_time (time spent running a virtual CPU, see below). 0

(17) cstime %ld Amount of time that this process's waited-for children have been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). 0

(18) priority %ld The priority, in the range 0 (high) to 39 (low) for non-real-time processes. 20 in this example

(19) nice %ld The nice value (see setpriority(2)), a value in the range 19 (low priority) to -20 (high priority). 0

(20) num_threads %ld Number of threads in this process. 21 in this example

(21) itrealvalue %ld hard coded as 0.

(22) starttime %llu The time the process started after system boot. 8405754

(23) vsize %lu Virtual memory size in bytes. 937435136

(24) rss %ld Resident Set Size: number of pages the process has in real memory. This is just the pages which count toward text, data, or stack space. This does not include pages which have not been demand-loaded in, or which are swapped out. 7412

(25) rsslim %lu Current soft limit in bytes on the rss of the process; see the description of RLIMIT_RSS in getrlimit(2). 18446744073709551615

(26) startcode %lu [PT] The address above which program text can run. 1

(27) endcode %lu [PT] The address below which program text can run. 1

(28) startstack %lu [PT] The address of the start (i.e., bottom) of the stack. 0

(29) kstkesp %lu [PT] The current value of ESP (stack pointer), as found in the kernel stack page for the process. 0

(30) kstkeip %lu [PT] The current EIP (instruction pointer). 0

(31) signal %lu The bitmap of pending signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. 0

(32) blocked %lu The bitmap of blocked signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. 4612

(33) sigignore %lu The bitmap of ignored signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. 0

(34) sigcatch %lu The bitmap of caught signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead. 38136

(35) wchan %lu [PT] This is the "channel" in which the process is waiting. It is the address of a location in the kernel where the process is sleeping. The corresponding symbolic name can be found in /proc/[pid]/wchan. 18446744073709551615

(36) nswap %lu always 0

(37) cnswap %lu always 0

(38) exit_signal %d (since Linux 2.1.22) Signal to be sent to parent when we die. 17

(39) processor %d (since Linux 2.2.8) CPU number last executed on. 3

(40) rt_priority %u (since Linux 2.5.19) Real-time scheduling priority, a number in the range 1 to 99 for processes scheduled under a real-time policy, or 0, for non-real-time processes (see sched_setscheduler(2)). 0

(41) policy %u (since Linux 2.5.19) Scheduling policy (see sched_setscheduler(2)). Decode using the SCHED_* constants in linux/sched.h. 0

(42) delayacct_blkio_ticks %llu (since Linux 2.6.18) Aggregated block I/O delays, measured in clock ticks (centiseconds). 0

(43) guest_time %lu (since Linux 2.6.24) Guest time of the process (time spent running a virtual CPU for a guest operating system), measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). 0

(44) cguest_time %ld (since Linux 2.6.24) Guest time of the process's children, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). 0

(45) start_data %lu (since Linux 3.3) [PT] Address above which program initialized and uninitialized (BSS) data are placed. 0

(46) end_data %lu (since Linux 3.3) [PT] Address below which program initialized and uninitialized (BSS) data are placed. 0

(47) start_brk %lu (since Linux 3.3) [PT] Address above which program heap can be expanded with brk(2). 0

(48) arg_start %lu (since Linux 3.5) [PT] Address above which program command-line arguments (argv) are placed. 0

(49) arg_end %lu (since Linux 3.5) [PT] Address below which program command-line arguments (argv) are placed. 0

(50) env_start %lu (since Linux 3.5) [PT] Address above which program environment is placed. 0

(51) env_end %lu (since Linux 3.5) [PT] Address below which program environment is placed. 0

(52) exit_code %d (since Linux 3.5) [PT] The thread's exit status in the form reported by waitpid(2). 0

Each of the fields above has corresponding code in the kernel, in the do_task_stat function in fs/proc/array.c:

seq_printf(m, "%d (%s) %c", pid_nr_ns(pid, ns), tcomm, state);
seq_put_decimal_ll(m, ' ', ppid);
seq_put_decimal_ll(m, ' ', pgid);
seq_put_decimal_ll(m, ' ', sid);
seq_put_decimal_ll(m, ' ', tty_nr);
seq_put_decimal_ll(m, ' ', tty_pgrp);
seq_put_decimal_ull(m, ' ', task->flags);
seq_put_decimal_ull(m, ' ', min_flt);
seq_put_decimal_ull(m, ' ', cmin_flt);
seq_put_decimal_ull(m, ' ', maj_flt);
seq_put_decimal_ull(m, ' ', cmaj_flt);
seq_put_decimal_ull(m, ' ', cputime_to_clock_t(utime));
seq_put_decimal_ull(m, ' ', cputime_to_clock_t(stime));
seq_put_decimal_ll(m, ' ', cputime_to_clock_t(cutime));
seq_put_decimal_ll(m, ' ', cputime_to_clock_t(cstime));
seq_put_decimal_ll(m, ' ', priority);
....

Although there are many fields, we only care about a few: the name, the pid, the priority, the core the task last ran on, and the CPU time consumed by the process or thread. Specifically, process_total_time = utime + stime + cutime + cstime, i.e. fields (14) through (17) above. It is now clear how to compute the CPU usage of a process or thread: at two points in time, snapshot the overall CPU data from /proc/stat and the process (or thread) CPU data from /proc/[pid]/stat, and compute total_time_delta and process_time_delta. If the usage on one specific core is wanted, /proc/stat also lets us compute core_time_delta. Then process_time_delta * 100% / total_time_delta (or process_time_delta * 100% / core_time_delta) gives the process's (or thread's) overall CPU usage (or its usage on that core).
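The two-snapshot calculation described above can be sketched end to end in a few lines of Python. The helper names are my own, and the snapshot strings below are hypothetical (shortened) /proc/stat and /proc/[pid]/stat lines, not real device output:

```python
def total_cpu_time(stat_cpu_line):
    """Sum the first seven fields (user..softirq) of a /proc/stat 'cpu' line."""
    return sum(int(x) for x in stat_cpu_line.split()[1:8])

def proc_cpu_time(pid_stat_line):
    """utime + stime + cutime + cstime, fields (14)-(17) of /proc/[pid]/stat."""
    rest = pid_stat_line[pid_stat_line.rindex(")") + 1:].split()
    return sum(int(x) for x in rest[11:15])

def proc_cpu_usage(total_old, total_new, proc_old, proc_new):
    """CPU usage of one process over the interval between two snapshots, in %."""
    total_delta = total_cpu_time(total_new) - total_cpu_time(total_old)
    proc_delta = proc_cpu_time(proc_new) - proc_cpu_time(proc_old)
    return 100.0 * proc_delta / total_delta

# Hypothetical snapshots taken at two points in time
total_old = "cpu 1000 0 500 8000 0 0 0 0 0 0"
total_new = "cpu 1100 0 550 8250 0 0 0 0 0 0"
proc_old = "18446 (k.media.example) S 1758 1757 0 0 -1 1077936448 20639 0 1 0 70 18 0 0"
proc_new = "18446 (k.media.example) S 1758 1757 0 0 -1 1077936448 20641 0 1 0 130 38 0 0"

usage = proc_cpu_usage(total_old, total_new, proc_old, proc_new)
print(usage)  # → 20.0
```

For a per-core figure, the same proc_cpu_usage would divide by the delta of a cpu0/cpu1/... line instead of the aggregate cpu line.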

1.3 top

adb shell top

top provides a real-time view of the CPU data.

Usage: top [ -m max_procs ] [ -n iterations ] [ -d delay ] [ -s sort_column ] [ -t ] [ -h ]
    -m num  Maximum number of processes to display. top sorts automatically, e.g. putting the highest CPU consumers first.
    -n num  Updates to show before exiting.
    -d num  Seconds to wait between updates; fractional values give sub-second intervals.
    -s col  Column to sort by (cpu, vss, rss, thr).
    -t      Show threads instead of processes.
    -h      Display this help screen.

Running top -m 5 on the TV at hand gives:

User 5%, System 5%, IOW 0%, IRQ 0%
User 70 + Nice 0 + Sys 70 + Idle 1069 + IOW 1 + IRQ 0 + SIRQ 3 = 1213

  PID PR CPU% S  #THR     VSS     RSS PCY UID      Name
 1728  0   2% S    28  648828K  18764K  fg system   /system/bin/surfaceflinger
26366  2   2% S    31 1004812K 134940K  fg system   com.xxxxxxxxxxx
 1792  0   1% S    61 1640236K  16508K  fg root     /applications/bin/xxxx
 3906  3   0% S    47  935428K  31300K  fg system   com.xxxxxxxxxxxx
25192  1   0% S    60  973844K  36872K  bg system   com.xxxxxxxxxx

At this point you can surely make sense of the first two lines. The column headers that follow mean:
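To make the summary concrete: the percentage line is derived from the raw tick deltas in the line below it by integer division, matching the print_procs code in top.c quoted later. A quick check using the ticks from the output above:

```python
# Recompute top's percentage summary from its raw tick deltas
# (integer division, as in toolbox top.c).
user, nice, sys_, idle, iow, irq, sirq = 70, 0, 70, 1069, 1, 0, 3
total = user + nice + sys_ + idle + iow + irq + sirq   # = 1213

print(f"User {(user + nice) * 100 // total}%, System {sys_ * 100 // total}%, "
      f"IOW {iow * 100 // total}%, IRQ {(irq + sirq) * 100 // total}%")
# → User 5%, System 5%, IOW 0%, IRQ 0%
```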

PID: the process ID
PR: before Android N, the CPU core the task last ran on; from Android N, the priority (device vendors may customize this)
CPU%: CPU usage
S: process state
#THR: number of threads
VSS: Virtual Set Size, the virtual memory used (including memory for shared libraries)
RSS: Resident Set Size, the physical memory actually used (including memory for shared libraries)
PCY: scheduling policy group, SP_BACKGROUND/SP_FOREGROUND
UID: user ID of the process owner
Name: the process name

Adding the -t flag gives:

User 2%, System 2%, IOW 0%, IRQ 0%
User 30 + Nice 0 + Sys 33 + Idle 1195 + IOW 0 + IRQ 0 + SIRQ 2 = 1260

  PID   TID PR CPU% S      VSS     RSS PCY UID      Thread          Proc
29402 29402  2   0% R    4204K   1612K  fg shell    top             top
 1792  2099  1   0% S 1640236K  16508K  fg root     InitHDMIthread  /applications/xxxx
 1039  1039  3   0% S       0K      0K  fg root     irq/202-scaler
29395 29395  0   0% S       0K      0K  fg root     kworker/0:2
 1737  2392  3   0% S  826844K  10920K  fg media    mediaserver     /system/bin/mediaserver

Two new columns appear, TID and Thread, whose meanings are self-evident. So how does top compute CPU usage? As you have probably guessed, it also reads /proc/stat, /proc/[pid]/stat, and /proc/[pid]/task/[tid]/stat. In top's source, system/core/toolbox/top.c, the part that reads the CPU data looks like this, and is quite straightforward:

static void read_procs(void) {
    DIR *proc_dir, *task_dir;
    struct dirent *pid_dir, *tid_dir;
    char filename[64];
    FILE *file;
    int proc_num;
    struct proc_info *proc;
    pid_t pid, tid;
    int i;

    proc_dir = opendir("/proc");
    if (!proc_dir) die("Could not open /proc.\n");

    new_procs = calloc(INIT_PROCS * (threads ? THREAD_MULT : 1), sizeof(struct proc_info *));
    num_new_procs = INIT_PROCS * (threads ? THREAD_MULT : 1);

    file = fopen("/proc/stat", "r");
    if (!file) die("Could not open /proc/stat.\n");
    fscanf(file, "cpu %lu %lu %lu %lu %lu %lu %lu",
           &new_cpu.utime, &new_cpu.ntime, &new_cpu.stime, &new_cpu.itime,
           &new_cpu.iowtime, &new_cpu.irqtime, &new_cpu.sirqtime);
    fclose(file);

    proc_num = 0;
    while ((pid_dir = readdir(proc_dir))) {
        if (!isdigit(pid_dir->d_name[0]))
            continue;
        pid = atoi(pid_dir->d_name);

        struct proc_info cur_proc;

        if (!threads) {
            proc = alloc_proc();
            proc->pid = proc->tid = pid;
            sprintf(filename, "/proc/%d/stat", pid);
            read_stat(filename, proc);
            sprintf(filename, "/proc/%d/cmdline", pid);
            read_cmdline(filename, proc);
            sprintf(filename, "/proc/%d/status", pid);
            read_status(filename, proc);
            read_policy(pid, proc);
            proc->num_threads = 0;
        } else {
            sprintf(filename, "/proc/%d/cmdline", pid);
            read_cmdline(filename, &cur_proc);
            sprintf(filename, "/proc/%d/status", pid);
            read_status(filename, &cur_proc);
            proc = NULL;
        }

        sprintf(filename, "/proc/%d/task", pid);
        task_dir = opendir(filename);
        if (!task_dir) continue;

        while ((tid_dir = readdir(task_dir))) {
            if (!isdigit(tid_dir->d_name[0]))
                continue;

            if (threads) {
                tid = atoi(tid_dir->d_name);
                proc = alloc_proc();
                proc->pid = pid;
                proc->tid = tid;
                sprintf(filename, "/proc/%d/task/%d/stat", pid, tid);
                read_stat(filename, proc);
                read_policy(tid, proc);
                strcpy(proc->name, cur_proc.name);
                proc->uid = cur_proc.uid;
                proc->gid = cur_proc.gid;
                add_proc(proc_num++, proc);
            } else {
                proc->num_threads++;
            }
        }
        closedir(task_dir);

        if (!threads)
            add_proc(proc_num++, proc);
    }

    for (i = proc_num; i < num_new_procs; i++)
        new_procs[i] = NULL;
    closedir(proc_dir);
}

static int read_stat(char *filename, struct proc_info *proc) {
    FILE *file;
    char buf[MAX_LINE], *open_paren, *close_paren;

    file = fopen(filename, "r");
    if (!file) return 1;
    fgets(buf, MAX_LINE, file);
    fclose(file);

    /* Split at first '(' and last ')' to get process name. */
    open_paren = strchr(buf, '(');
    close_paren = strrchr(buf, ')');
    if (!open_paren || !close_paren)
        return 1;

    *open_paren = *close_paren = '\0';
    strncpy(proc->tname, open_paren + 1, THREAD_NAME_LEN);
    proc->tname[THREAD_NAME_LEN-1] = 0;

    /* Scan rest of string. */
    sscanf(close_paren + 1,
           " %c "
           "%*d %*d %*d %*d %*d %*d %*d %*d %*d %*d "
           "%" SCNu64 "%" SCNu64 "%*d %*d %*d %*d %*d %*d %*d "
           "%" SCNu64 "%" SCNu64 "%*d %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d "
           "%d",
           &proc->state, &proc->utime, &proc->stime, &proc->vss, &proc->rss, &proc->prs);
    return 0;
}

The usage calculation is likewise consistent with what we described above. It is also in top.c, and easy to follow:

static void print_procs(void) {
    int i;
    struct proc_info *old_proc, *proc;
    long unsigned total_delta_time;
    struct passwd *user;
    char *user_str, user_buf[20];

    for (i = 0; i < num_new_procs; i++) {
        if (new_procs[i]) {
            old_proc = find_old_proc(new_procs[i]->pid, new_procs[i]->tid);
            if (old_proc) {
                new_procs[i]->delta_utime = new_procs[i]->utime - old_proc->utime;
                new_procs[i]->delta_stime = new_procs[i]->stime - old_proc->stime;
            } else {
                new_procs[i]->delta_utime = 0;
                new_procs[i]->delta_stime = 0;
            }
            new_procs[i]->delta_time = new_procs[i]->delta_utime + new_procs[i]->delta_stime;
        }
    }

    total_delta_time = (new_cpu.utime + new_cpu.ntime + new_cpu.stime + new_cpu.itime
                        + new_cpu.iowtime + new_cpu.irqtime + new_cpu.sirqtime)
                     - (old_cpu.utime + old_cpu.ntime + old_cpu.stime + old_cpu.itime
                        + old_cpu.iowtime + old_cpu.irqtime + old_cpu.sirqtime);

    qsort(new_procs, num_new_procs, sizeof(struct proc_info *), proc_cmp);

    printf("\n\n\n");
    printf("User %ld%%, System %ld%%, IOW %ld%%, IRQ %ld%%\n",
           ((new_cpu.utime + new_cpu.ntime) - (old_cpu.utime + old_cpu.ntime)) * 100 / total_delta_time,
           ((new_cpu.stime) - (old_cpu.stime)) * 100 / total_delta_time,
           ((new_cpu.iowtime) - (old_cpu.iowtime)) * 100 / total_delta_time,
           ((new_cpu.irqtime + new_cpu.sirqtime) - (old_cpu.irqtime + old_cpu.sirqtime)) * 100 / total_delta_time);
    printf("User %ld + Nice %ld + Sys %ld + Idle %ld + IOW %ld + IRQ %ld + SIRQ %ld = %ld\n",
           new_cpu.utime - old_cpu.utime,
           new_cpu.ntime - old_cpu.ntime,
           new_cpu.stime - old_cpu.stime,
           new_cpu.itime - old_cpu.itime,
           new_cpu.iowtime - old_cpu.iowtime,
           new_cpu.irqtime - old_cpu.irqtime,
           new_cpu.sirqtime - old_cpu.sirqtime,
           total_delta_time);
    printf("\n");

    if (!threads)
        printf("%5s %2s %4s %1s %5s %7s %7s %3s %-8s %s\n",
               "PID", "PR", "CPU%", "S", "#THR", "VSS", "RSS", "PCY", "UID", "Name");
    else
        printf("%5s %5s %2s %4s %1s %7s %7s %3s %-8s %-15s %s\n",
               "PID", "TID", "PR", "CPU%", "S", "VSS", "RSS", "PCY", "UID", "Thread", "Proc");

    for (i = 0; i < num_new_procs; i++) {
        proc = new_procs[i];
        if (!proc || (max_procs && (i >= max_procs)))
            break;
        user = getpwuid(proc->uid);
        if (user && user->pw_name) {
            user_str = user->pw_name;
        } else {
            snprintf(user_buf, 20, "%d", proc->uid);
            user_str = user_buf;
        }
        if (!threads) {
            printf("%5d %2d %3" PRIu64 "%% %c %5d %6" PRIu64 "K %6" PRIu64 "K %3s %-8.8s %s\n",
                   proc->pid, proc->prs, proc->delta_time * 100 / total_delta_time, proc->state,
                   proc->num_threads, proc->vss / 1024, proc->rss * getpagesize() / 1024,
                   proc->policy, user_str, proc->name[0] != 0 ? proc->name : proc->tname);
        } else {
            printf("%5d %5d %2d %3" PRIu64 "%% %c %6" PRIu64 "K %6" PRIu64 "K %3s %-8.8s %-15s %s\n",
                   proc->pid, proc->tid, proc->prs, proc->delta_time * 100 / total_delta_time,
                   proc->state, proc->vss / 1024, proc->rss * getpagesize() / 1024,
                   proc->policy, user_str, proc->tname, proc->name);
        }
    }
}

1.4 dumpsys cpuinfo

adb shell dumpsys cpuinfo

On the TV at hand, it produces:

Load: 3.18 / 3.42 / 3.49
CPU usage from 1053590ms to 153542ms ago:
  7% 1792/xxx: 2.7% user + 4.3% kernel
  3.3% 1728/surfaceflinger: 2.3% user + 0.9% kernel
  3.1% 26366/com.xxxx: 2.4% user + 0.7% kernel / faults: 197480 minor
  2.1% 25192/com.xxxx: 1.7% user + 0.4% kernel / faults: 28686 minor
  1.7% 2204/system_server: 1.2% user + 0.4% kernel / faults: 4071 minor
....

dumpsys works through Binder's dump mechanism, as shown in its source, /frameworks/native/cmds/dumpsys/dumpsys.cpp:

sp<IBinder> service = sm->checkService(services[i]);
if (service != NULL) {
    if (N > 1) {
        aout

