Linux commands for checking CPU core count, system process count, and per-process thread count


Multi-core CPU basics on Linux

1. How to confirm whether a system has multiple cores or multiple CPUs on Linux:

#cat /proc/cpuinfo

If more than one entry like the following appears, the machine is multi-core or multi-CPU:

processor : 0

......

processor : 1
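The count can also be read directly instead of scanning the whole file. A minimal sketch, assuming the standard /proc layout and that coreutils' nproc is available (it is on most distributions):

grep -c '^processor' /proc/cpuinfo   # number of logical CPUs
nproc                                # same figure, via coreutils

Note that both count logical CPUs, so hyper-threaded siblings are included.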

2. How to view each CPU's utilization on Linux:

#top -d 1

Then press 1 to show each CPU on its own line:

Cpu0 : 1.0%us, 3.0%sy, 0.0%ni, 96.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
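For a non-interactive view, mpstat reports the same per-CPU breakdown. This assumes the sysstat package is installed, which is not the case everywhere:

mpstat -P ALL 1 3    # all CPUs, 1-second interval, 3 samples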

3. How to check which CPU a process is running on:

#top -d 1

Then press f to enter top's Current Fields configuration screen:

Select: j: P = Last used cpu (SMP)

An extra column P then shows which CPU each process last ran on.

Sam found by experiment that the same process uses different CPU cores at different moments; this is the Linux kernel's SMP scheduling at work.
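The same information is available without top's interactive mode. A sketch, using 3660 as a stand-in pid:

ps -o pid,psr,comm -p 3660   # PSR = processor the task last ran on
taskset -cp 3660             # current CPU affinity list (util-linux)

If those migrations are undesirable, taskset -cp 0 3660 would pin the process to CPU 0.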

4. Configuring the Linux kernel to support multiple cores:

The CONFIG_SMP option must be enabled during kernel configuration so that the kernel is SMP-aware:

Processor type and features ---> Symmetric multi-processing support

To check whether the running kernel supports (and is using) SMP:

#uname -a
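On SMP kernels the uname output usually contains the string "SMP". The build option can also be checked directly, assuming the distribution ships the kernel config under /boot (not all do):

uname -v | grep -o SMP                      # prints SMP on SMP kernels
grep CONFIG_SMP= /boot/config-$(uname -r)   # CONFIG_SMP=y when enabled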

5. SMP load balancing in kernel 2.6:

When tasks are created on an SMP system, they are placed on a given CPU's run queue. In general there is no way to know whether a task will be short-lived or long-running, so the initial assignment of tasks to CPUs may not be ideal.

To keep the task load balanced across CPUs, tasks can be redistributed, moving them from heavily loaded CPUs to lightly loaded ones. The Linux 2.6 scheduler provides this through load balancing: every 200 ms a processor checks whether the CPU loads are unbalanced and, if so, performs a round of task balancing between CPUs.

A slight downside of this process is that the new CPU's cache is cold for a migrated task, so its data must be read back into the cache.

Recall that a CPU cache is local (on-chip) memory offering much faster access than system memory. If a task has been executing on a CPU, the data it works with will be held in that CPU's local cache, which is then said to be hot. If a CPU's local cache holds no data for a task, the cache is said to be cold for it.

Unfortunately, keeping every CPU busy means tasks will sometimes be migrated onto CPUs whose caches are cold for them.

6. How applications can exploit multiple cores:

Developers can place parallelizable code in threads, which an SMP operating system will schedule to run concurrently.

Sam also suggests an approach for code that must execute sequentially: split it into stages, run each stage as a thread, and connect the stages with channels so they form a pipeline. This, too, can greatly improve CPU utilization, as the sketch below illustrates.
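A shell pipeline is the same idea in miniature: each stage is a separate process, the pipes are the channels, and the kernel schedules the stages onto different cores concurrently. (app.log is a hypothetical input file.)

grep ERROR app.log | sort | uniq -c | sort -rn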

=============================

Linux maximum thread limits and querying the current thread count

To check: use ps -fe | grep programname to find the process's pid, then ps -Lf <pid> to list that process's threads. The per-process thread count can be raised by lowering the per-thread stack size with ulimit -s, because

    threads per process = VIRT limit / stack size

On 32-bit x86 the default VIRT limit is 3 GB (the 3G+1G address-space split); on 64-bit x86 it is 64 GB.
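A quick sanity check of that formula, assuming a 64-bit machine with the 64 GB VIRT limit quoted above and that ulimit -s prints a number (it can print "unlimited"):

stack_kb=$(ulimit -s)           # per-thread stack size in KB, commonly 8192
virt_kb=$((64 * 1024 * 1024))   # 64 GB expressed in KB
echo $((virt_kb / stack_kb))    # ~8192 threads per process at the default stack size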

1. Query by process ID:

pstree -p <pid>
top -Hp <pid>

2. Summary of system-wide limits:

View the maximum number of processes:

cat /proc/sys/kernel/pid_max   # 32768 on my 8 GB machine

View the maximum number of threads:

cat /proc/sys/kernel/threads-max   # 61036 on my 8 GB machine

ulimit -s   # shows the default thread stack size, normally 8 MB (8192 KB)

View the number of processes available to a user (max user processes):

ulimit -u                        # e.g. 31508
cat /proc/sys/vm/max_map_count   # e.g. 65530, a theoretical upper bound on threads, since each thread stack needs a map entry
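To see all of these limits at a glance, a small loop over the proc files (assumes a standard /proc layout):

for f in /proc/sys/kernel/pid_max /proc/sys/kernel/threads-max /proc/sys/vm/max_map_count; do
    printf '%-35s %s\n' "$f" "$(cat "$f")"
done
ulimit -u   # max user processes
ulimit -s   # per-thread stack size (KB)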

Kernel parameters under /proc/sys/vm

[wuyaalan@localhost desktop]$ cd /proc/sys/vm/
[wuyaalan@localhost vm]$ ls
block_dump                 hugepages_treat_as_movable  oom_kill_allocating_task
compact_memory             hugetlb_shm_group           overcommit_memory
dirty_background_bytes     laptop_mode                 overcommit_ratio
dirty_background_ratio     legacy_va_layout            page-cluster
dirty_bytes                lowmem_reserve_ratio        panic_on_oom
dirty_expire_centisecs     max_map_count               percpu_pagelist_fraction
dirty_ratio                min_free_kbytes             scan_unevictable_pages
dirty_writeback_centisecs  mmap_min_addr               stat_interval
drop_caches                nr_hugepages                swappiness
extfrag_threshold          nr_overcommit_hugepages     vdso_enabled
extra_free_kbytes          nr_pdflush_threads          vfs_cache_pressure
highmem_is_dirtyable       oom_dump_tasks              would_have_oomkilled

As the listing shows, the proc filesystem exposes a great deal of kernel state, and tuning these parameters is one way to improve system performance. Explanations of some of them follow.

1. block_dump — Setting this to a nonzero value enables block I/O debugging: Linux reports all disk reads and writes and all block dirtyings done to files. This makes it possible to work out why a disk needs to spin up (see /proc/sys/vm/laptop_mode) and to extend battery life. The output goes to the kernel log and can be retrieved with dmesg. If your kernel log level includes debug messages, consider stopping klogd first; otherwise logging block_dump's own output will itself cause disk activity that would not normally be there.

2. dirty_background_ratio — The percentage of total system memory that may hold dirty pages before the pdflush background writeback daemons start writing them out. Raising it keeps modified pages resident in memory longer.

3. dirty_expire_centisecs — How long (in hundredths of a second) dirty data may sit in memory before it is considered old enough to be written out at the next pdflush wakeup.

4. dirty_ratio — The percentage of total system memory at which a process that is generating disk writes will itself start writing out dirty data.

5. dirty_writeback_centisecs — The interval (in hundredths of a second) between the periodic pdflush wakeups that write "old" data back to disk. Setting it to zero disables periodic writeback altogether.

6. drop_caches — Writing to this causes the kernel to drop clean caches, dentries, and inodes from memory, freeing that memory.
To free pagecache: echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes: echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes: echo 3 > /proc/sys/vm/drop_caches
The operation is non-destructive, and dirty objects are not freeable, so run sync first to maximize what can be freed. Added in 2.6.16.

7. hugepages_treat_as_movable — When nonzero, future allocations for the huge-page pool use ZONE_MOVABLE. Huge pages are not movable and so are not allocated from ZONE_MOVABLE by default, and doing so adds no notable external fragmentation since huge pages are the largest contiguous block of interest. Because ZONE_MOVABLE always contains pages that can be migrated or reclaimed, it can satisfy huge-page allocations even after the system has run for a long time, letting an administrator resize the huge-page pool at runtime according to the size of ZONE_MOVABLE.

8. hugetlb_shm_group — The group ID that is allowed to create SysV shared memory segments backed by hugetlb pages.

9. laptop_mode — Controls "laptop mode". When set, any physical disk I/O (which might spin the disk up; see /proc/sys/vm/block_dump) causes Linux to flush all dirty blocks, so that once the disk has spun down it need not spin up again merely to write them. The value is the delay between the disk I/O and the triggered flush; 5 seconds is a sensible value, and 0 disables laptop mode. In laptop mode the kernel uses the I/O system more intelligently to keep the disk in a low-power state: it batches many I/O operations together, with idle periods of up to 10 minutes by default between bursts, and during each burst it does as much work as possible, including heavy readahead, before syncing all buffers.

10. legacy_va_layout — If nonzero, disables the new 32-bit mmap layout; the kernel uses the legacy (2.4) layout for all processes.

11. lowmem_reserve_ratio — The ratio of total pages to free pages for each memory zone.

12. max_map_count — The maximum number of memory map areas a process may have. Map areas are created as a side effect of calling malloc, directly by mmap and mprotect, and when loading shared libraries. Most applications need fewer than a thousand, but certain programs, particularly malloc debuggers, may consume one or two per allocation. The default is 65536.

13. min_free_kbytes — Forces the Linux VM to keep at least this many kilobytes free. The VM uses it to compute a pages_min value for each lowmem zone, each of which reserves free pages in proportion to its size.

14. mmap_min_addr — The amount of low address space that user processes may not mmap. Since kernel NULL-dereference bugs could accidentally operate on data in the first few pages of memory, userspace should not be allowed to write there. The default is 0, enforcing no protection; a value around 64k lets the vast majority of applications work correctly while providing defense in depth against future kernel bugs.

15. nr_hugepages — The number of hugetlb pages reserved for the system.

16. nr_pdflush_threads — The number of currently running pdflush threads; read-only.

17. numa_zonelist_order — NUMA only; where memory is allocated from is controlled by zonelists. In the non-NUMA case the GFP_KERNEL zonelist is ZONE_NORMAL -> ZONE_DMA, so ZONE_DMA is used only when ZONE_NORMAL is unavailable. On a two-node NUMA machine, node 0's GFP_KERNEL zonelist can be ordered (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL, or (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA. Type (A), "node" order, offers the best locality for processes on node 0 but uses ZONE_DMA before ZONE_NORMAL is exhausted, raising the OOM risk for the typically small DMA zone; type (B), "zone" order, sacrifices some locality but is more robust against DMA-zone OOM. Specify "[Nn]ode" for node order, "[Zz]one" for zone order, or "[Dd]efault" for automatic configuration, which selects node order when (1) there is no DMA zone, (2) the DMA zone is more than 50% of available memory, or (3) any node's DMA zone is more than 60% of its local memory and that local memory is large enough; otherwise zone order is selected. The default is recommended unless it causes problems for your system or application.

18. overcommit_memory — Controls overcommit of system memory, possibly allowing processes to allocate (but not use) more memory than is actually available. 0 = heuristic handling (the default, suitable for typical systems): obvious overcommits of address space are refused, seriously wild allocations fail, overcommit still reduces swap usage, and root may allocate slightly more. 1 = always overcommit, appropriate for some scientific applications. 2 = don't overcommit: total committed address space may not exceed swap plus a configurable percentage (default 50) of physical RAM, so in most situations a process will not be killed while using already-allocated memory but will receive errors on allocation instead.

19. overcommit_ratio — The percentage of physical memory included in overcommit calculations: memory allocation limit = swapspace + physmem * (overcommit_ratio / 100), where swapspace is the total size of all swap areas and physmem is the physical memory size.

20. page-cluster — The number of pages written to swap in a single attempt (the swap I/O size), as a logarithm: 0 means 1 page, 1 means 2 pages, 2 means 4 pages, and so on. The default is 3, i.e. eight pages at a time. Tuning it may yield small benefits on swap-intensive workloads.

21. panic_on_oom — Enables or disables panic on out-of-memory. At 0 (the default) the kernel kills a rogue process via oom_kill() and the system usually survives; set it to 1 to panic the system instead.

22. percpu_pagelist_fraction — The maximum fraction of each zone's pages (the high mark, pcp->high) that may be allocated to each per-CPU page list. The minimum value is 8, meaning no more than 1/8 of a zone's pages may sit on any single per-CPU list; a value like 100 allocates 1/100 of each zone per list. Only hot per-CPU page lists are affected. The batch value of each list is updated as a result, set to pcp->high / 4 with an upper limit of PAGE_SHIFT * 8. The initial value is zero, which the kernel does not use at boot to set the high watermarks.

23. stat_interval — The VM statistics update interval; the default is 1. First appeared in kernel 2.6.22.

24. swap_token_timeout — The hold time, in seconds, of the swap-out protection token used by the VM's token-based thrashing control to prevent unnecessary page faults when thrashing; useful for tuning thrashing behavior. Removed in 2.6.20 when the algorithm was improved.

25. swappiness — Sets the kernel's balance between reclaiming pages from the page cache and swapping out process memory; the default is 60. Increase it to make the kernel swap out more process memory and cache more file contents; decrease it to swap less.

26. vdso_enabled — When set (the default), the kernel maps a vDSO page into newly created processes and passes its address to glibc on exec(). The vDSO is a virtual dynamic shared object the kernel exposes at some address in every process's memory to speed up system calls. The mapping address used to be fixed (0xffffe000) but is randomized since 2.6.18, which besides the security benefit also helps debuggers.

27. vfs_cache_pressure — Controls the kernel's tendency to reclaim the memory used for caching directory and inode objects. At the default of 100 the kernel reclaims dentries and inodes at a "fair" rate relative to pagecache and swapcache reclaim; lower values make the kernel prefer to retain dentry and inode caches, while values above 100 make it prefer to reclaim them.
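These tunables can also be read and changed at runtime with sysctl; changes made this way last until reboot, and persistent values belong in /etc/sysctl.conf. A sketch, assuming root access and using vm.swappiness purely as an example:

sysctl vm.swappiness           # read the current value
sysctl -w vm.swappiness=10     # set it until the next reboot (run as root)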

(These default limits scale with the hardware memory size.)

3. Query the thread or process count of a particular program (pid 3660 here):

pstree -p $(ps -e | grep java | awk '{print $1}') | wc -l   # thread count of the java process

pstree -p 3660 | wc -l
ps -p 3660 H     # or: ps H -p 3660

4. Query the number of threads or processes currently in use across the whole system:

pstree -p | wc -l
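Two other system-wide counts that may be useful (the fourth field of /proc/loadavg is running/total scheduling entities):

ps -eLf | wc -l     # one line per thread, plus a header line
cat /proc/loadavg   # e.g. 0.10 0.12 0.09 2/1234 5678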

1. cat /proc/${pid}/status   # shows the pid's thread count (Threads) and other details; PPid = parent pid

2. pstree -p ${pid}   # displays the pid's (child) processes and threads

3. top -p ${pid}, then press H; or run top -bH -d 3 -p ${pid} directly

top -H

The manual says: -H : Threads toggle

Starting top with this option shows one line per thread; without it, top shows one line per process.

4. ps xH

The manual says: H   Show threads as if they were processes

This lists every thread that exists.

5. ps -mp ${pid}

The manual says: m   Show threads after processes

This shows the threads created by a particular process.
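When only the number is wanted, two direct sketches (again using 3660 as a stand-in pid):

grep Threads /proc/3660/status   # e.g. Threads: 25
ps -o nlwp= -p 3660              # NLWP = number of lightweight processes (threads)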


