Common commands for checking Ceph cluster status




Reading guide: 1. Deploy a Ceph cluster from scratch 2. Quick start with Ceph block devices and CephFS 3. Quick start with Ceph object storage 4. Ceph storage cluster and configuration 5. Deploying the Ceph Octopus release on CentOS 7 with cephadm

Once a cluster is up and running, we monitor it with the ceph command-line tool. Monitoring a cluster typically means checking the status of the OSDs, the monitors, the placement groups (PGs), and the metadata servers.
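Besides the one-shot checks below, the cluster can also be watched continuously. A minimal sketch using standard ceph subcommands (their output is not shown in the original article, so run them on your own cluster to see the results):

# Explain the warnings behind the one-word health summary
ceph health detail

# Stream cluster status and log events as they happen (Ctrl-C to stop)
ceph -w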

Health check

[root@ceph-admin ~]# ceph health
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
HEALTH_OK

Status check

[root@ceph-admin ~]# ceph status    # or: ceph -s
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
  cluster:
    id:     23db6d22-b1ce-11ea-b263-1e00940000dc
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-admin,ceph-node1,ceph-node2 (age 4d)
    mgr: ceph-admin.zlwsks(active, since 4d), standbys: ceph-node2.lylyez
    osd: 3 osds: 3 up (since 4d), 3 in (since 4d)
    rgw: 1 daemon active (mytest.myzone.ceph-node1.xykzap)

  task status:

  data:
    pools:   5 pools, 105 pgs
    objects: 228 objects, 5.3 KiB
    usage:   3.1 GiB used, 297 GiB / 300 GiB avail
    pgs:     105 active+clean

Quorum status

[root@ceph-admin ~]# ceph quorum_status
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
{"election_epoch":28,"quorum":[0,1,2],"quorum_names":["ceph-admin","ceph-node1","ceph-node2"],"quorum_leader_name":"ceph-admin","quorum_age":355011,"features":{"quorum_con":"4540138292836696063","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus"]},"monmap":{"epoch":3,"fsid":"23db6d22-b1ce-11ea-b263-1e00940000dc","modified":"2020-06-19T01:46:37.153347Z","created":"2020-06-19T01:42:58.010834Z","min_mon_release":15,"min_mon_release_name":"octopus","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus"],"optional":[]},"mons":[{"rank":0,"name":"ceph-admin","public_addrs":{"addrvec":[{"type":"v2","addr":"10.10.128.174:3300","nonce":0},{"type":"v1","addr":"10.10.128.174:6789","nonce":0}]},"addr":"10.10.128.174:6789/0","public_addr":"10.10.128.174:6789/0","priority":0,"weight":0},{"rank":1,"name":"ceph-node1","public_addrs":{"addrvec":[{"type":"v2","addr":"10.10.128.175:3300","nonce":0},{"type":"v1","addr":"10.10.128.175:6789","nonce":0}]},"addr":"10.10.128.175:6789/0","public_addr":"10.10.128.175:6789/0","priority":0,"weight":0},{"rank":2,"name":"ceph-node2","public_addrs":{"addrvec":[{"type":"v2","addr":"10.10.128.176:3300","nonce":0},{"type":"v1","addr":"10.10.128.176:6789","nonce":0}]},"addr":"10.10.128.176:6789/0","public_addr":"10.10.128.176:6789/0","priority":0,"weight":0}]}}

Monitor check

[root@ceph-admin ~]# ceph mon stat
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
e3: 3 mons at {ceph-admin=[v2:10.10.128.174:3300/0,v1:10.10.128.174:6789/0],ceph-node1=[v2:10.10.128.175:3300/0,v1:10.10.128.175:6789/0],ceph-node2=[v2:10.10.128.176:3300/0,v1:10.10.128.176:6789/0]}, election epoch 28, leader 0 ceph-admin, quorum 0,1,2 ceph-admin,ceph-node1,ceph-node2
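If you need the full monitor map rather than the one-line summary, a standard subcommand not shown in the original article is ceph mon dump, which prints the monmap epoch, fsid, and each monitor's rank, name, and addresses:

# Dump the current monitor map
ceph mon dump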

Checking cluster usage

[root@ceph-admin ~]# ceph df
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED    RAW USED  %RAW USED
hdd    300 GiB  297 GiB  70 MiB  3.1 GiB   1.02
TOTAL  300 GiB  297 GiB  70 MiB  3.1 GiB   1.02

--- POOLS ---
POOL                   ID  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   0 B      0        0 B      0      94 GiB
.rgw.root              2   1.9 KiB  13       2.2 MiB  0      94 GiB
myzone.rgw.log         3   3.4 KiB  207      6 MiB    0      94 GiB
myzone.rgw.control     4   0 B      8        0 B      0      94 GiB
myzone.rgw.meta        5   0 B      0        0 B      0      94 GiB

The columns under RAW STORAGE mean the following:

CLASS: the device class of the OSDs (or the cluster total).
SIZE: the storage capacity managed by the cluster.
AVAIL: the amount of free space available in the cluster.
USED: the amount of raw storage consumed by user data.
RAW USED: the amount of raw storage consumed by user data, internal overhead, and reserved capacity.
%RAW USED: the percentage of raw storage used. Use this figure together with the full ratio and near-full ratio to make sure the cluster is not approaching its capacity.

The columns under POOLS mean the following:

POOL: the name of the pool.
ID: the pool ID.
OBJECTS: the number of objects stored in the pool.
USED: the amount of data stored, in kilobytes unless the figure is suffixed with M for megabytes.
%USED: the percentage of storage used by each pool.
MAX AVAIL: an estimate of how much more data can be written to the pool.
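For more detail than the plain ceph df table, two standard variants can be used (assumed to be available on this Octopus cluster; the exact columns they print depend on the release):

# Per-pool usage with additional columns such as quota information
ceph df detail

# Per-pool object counts and I/O statistics as reported by RADOS
rados df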

Checking OSD status

ceph osd stat or ceph osd dump

[root@ceph-admin ~]# ceph osd stat
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
3 osds: 3 up (since 4d), 3 in (since 4d); epoch: e164
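Only ceph osd stat is shown above. The other command mentioned, ceph osd dump, prints the full OSD map, including the pool definitions and one detailed line per OSD; its output is not reproduced here, so only the command is sketched:

# Dump the complete osdmap: epoch, flags, pools, and per-OSD details
ceph osd dump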

You can also check the OSDs according to their position in the CRUSH map.

[root@ceph-admin ~]# ceph osd tree
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
ID  CLASS  WEIGHT   TYPE NAME            STATUS  REWEIGHT  PRI-AFF
-1         0.29306  root default
-5         0.09769      host ceph-admin
 1    hdd  0.09769          osd.1            up   1.00000  1.00000
-7         0.09769      host ceph-node1
 2    hdd  0.09769          osd.2            up   1.00000  1.00000
-3         0.09769      host ceph-node2
 0    hdd  0.09769          osd.0            up   1.00000  1.00000
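To see utilization laid out along the same CRUSH hierarchy, ceph osd df tree (a standard subcommand not used in the original article) combines the per-OSD usage of ceph osd df with the tree view above:

# Per-OSD size, usage, and PG count, grouped by CRUSH bucket
ceph osd df tree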

Querying via the admin socket

The Ceph admin socket lets you query a daemon through a socket interface. By default, Ceph sockets live under /var/run/ceph. To access a daemon through its admin socket, log in to the host running that daemon and use one of the following commands:

ceph daemon {daemon-name}
ceph daemon {path-to-socket-file}

The daemon name can be looked up with ceph orch ps:

[root@ceph-admin ~]# ceph orch ps
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
NAME                                 HOST        STATUS        REFRESHED  AGE  VERSION  IMAGE NAME                IMAGE ID      CONTAINER ID
alertmanager.ceph-admin              ceph-admin  running (4d)  65s ago    4d   0.21.0   prom/alertmanager         c876f5897d7b  1519fba800d1
crash.ceph-admin                     ceph-admin  running (4d)  65s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  96268d75560d
crash.ceph-node1                     ceph-node1  running (4d)  51s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  88b93a5fc13c
crash.ceph-node2                     ceph-node2  running (4d)  61s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  f28bf8e226a5
grafana.ceph-admin                   ceph-admin  running (4d)  65s ago    4d   6.6.2    ceph/ceph-grafana:latest  87a51ecf0b1c  ffdc94b51b4f
mgr.ceph-admin.zlwsks                ceph-admin  running (4d)  65s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  f2f37c43ad33
mgr.ceph-node2.lylyez                ceph-node2  running (4d)  61s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  296c63eace2e
mon.ceph-admin                       ceph-admin  running (4d)  65s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  9b9fc8886759
mon.ceph-node1                       ceph-node1  running (4d)  51s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  b44f80941aa2
mon.ceph-node2                       ceph-node2  running (4d)  61s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  583fadcf6429
node-exporter.ceph-admin             ceph-admin  running (4d)  65s ago    4d   1.0.1    prom/node-exporter        0e0218889c33  712293a35a3d
node-exporter.ceph-node1             ceph-node1  running (4d)  51s ago    4d   1.0.1    prom/node-exporter        0e0218889c33  5488146a5ec9
node-exporter.ceph-node2             ceph-node2  running (4d)  61s ago    4d   1.0.1    prom/node-exporter        0e0218889c33  610e82d9a2a2
osd.0                                ceph-node2  running (4d)  61s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  1ad0eaa85618
osd.1                                ceph-admin  running (4d)  65s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  2efb75ec9216
osd.2                                ceph-node1  running (4d)  51s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  ceff74685794
prometheus.ceph-admin                ceph-admin  running (4d)  65s ago    4d   2.19.0   prom/prometheus:latest    39d1866a438a  bc21536e7852
rgw.mytest.myzone.ceph-node1.xykzap  ceph-node1  running (4d)  51s ago    4d   15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  40f483714868
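A hypothetical example of the second form, addressing a daemon by its socket file rather than by name. In a cephadm (containerized) deployment the .asok files usually sit in a per-fsid subdirectory of /var/run/ceph on the daemon's host, so the path below is an assumption; adjust it to whatever .asok file actually exists there:

# Query osd.1 by socket path instead of by name (path is an assumed example)
ceph daemon /var/run/ceph/23db6d22-b1ce-11ea-b263-1e00940000dc/ceph-osd.1.asok help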

The commands available on a daemon's socket can be listed with help:

ceph daemon {daemon-name} help

Querying the daemon's version, for example:

[root@ceph-admin ~]# ceph daemon osd.1 version
INFO:cephadm:Inferring fsid 23db6d22-b1ce-11ea-b263-1e00940000dc
INFO:cephadm:Using recent ceph image ceph/ceph:v15
{
    "version": "15.2.3",
    "release": "octopus",
    "release_type": "stable"
}
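Two other admin-socket commands that are commonly useful (standard socket commands, not covered in the original article; their output is omitted here):

# Show the running configuration of osd.1
ceph daemon osd.1 config show

# Dump the internal performance counters of osd.1
ceph daemon osd.1 perf dump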

