Several Ways to Find a Pod's Network Interface and IP Address


2023-04-13 20:56 | Source: web aggregation

When a service misbehaves you need a packet capture to investigate, but the container in the Pod may not ship the tcpdump command, so you cannot capture inside it; or you may not know which veth interface on the host belongs to the Pod, so you cannot capture on the host directly either. You therefore need some way to obtain the right interface information and find the corresponding veth device.

Containerd

Method 1

This method works for containers that include the ip command: use kubectl exec to run it inside the container and read the interface index.

Steps:

1. Run kubectl get pods -owide to get each Pod and the node it runs on.
2. Run kubectl exec to show the container's interfaces and IP addresses. The key detail is the numeric index after the interface name, e.g. eth0@if12.
3. Log in to the corresponding node and run ip addr show, filtering for the host veth interface with that index. That veth is the host side of the Pod's eth0.

root@k8s-master1:~# kubectl get pods -owide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
busybox                             1/1     Running   0          47m   172.16.2.31   k8s-master1   <none>           <none>
nginx-deployment-7fb96c846b-dtb2q   1/1     Running   0          40m   172.16.2.32   k8s-master1   <none>           <none>
nginx-deployment-7fb96c846b-hlt58   1/1     Running   0          40m   172.16.3.6    k8s-node1     <none>           <none>
nginx-deployment-7fb96c846b-m868g   1/1     Running   0          40m   172.16.0.56   k8s-master2   <none>           <none>
root@k8s-master1:~#
root@k8s-master1:~# kubectl exec -ti busybox -- ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
    link/ether 92:d9:97:72:d5:ab brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.31/24 brd 172.16.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::90d9:97ff:fe72:d5ab/64 scope link
       valid_lft forever preferred_lft forever
root@k8s-master1:~#
root@k8s-master1:~# ip addr show | grep "^12:\ veth"
12: veth7cbd10a4@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
root@k8s-master1:~#
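The peer-index matching in the steps above is easy to script. A minimal sketch, run against a sample eth0@ifN line rather than a live cluster (the host-side grep is shown as a comment because it needs a real node):

```shell
# Sample line as "kubectl exec <pod> -- ip addr show" printed it above
line='2: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue'

# The number after "@if" is the ifindex of the veth peer on the host
idx=$(echo "$line" | sed -n 's/.*eth0@if\([0-9]*\):.*/\1/p')
echo "$idx"

# On the node you would then run:
#   ip addr show | grep "^${idx}: veth"
```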

Method 2

This method works for containers that lack the ip command, where kubectl exec cannot be used to run it; instead, enter the container's network namespace from the host to read the interface index.

Steps:

1. Run kubectl get pods -owide to get the Pod and the node it runs on.
2. Log in to the corresponding node and run crictl ps to get the container ID.
3. Run nerdctl -n k8s.io inspect -f '{{.State.Pid}}' <container-id> to get the container PID.
4. Run nsenter -n -t <PID> ip addr show to get the container's interfaces and IP addresses.
5. Run ip addr show on the host, filtering for the veth interface with the matching index.

root@k8s-master1:~# kubectl get pods -owide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
busybox                             1/1     Running   0          47m   172.16.2.31   k8s-master1   <none>           <none>
nginx-deployment-7fb96c846b-dtb2q   1/1     Running   0          40m   172.16.2.32   k8s-master1   <none>           <none>
nginx-deployment-7fb96c846b-hlt58   1/1     Running   0          40m   172.16.3.6    k8s-node1     <none>           <none>
nginx-deployment-7fb96c846b-m868g   1/1     Running   0          40m   172.16.0.56   k8s-master2   <none>           <none>
root@k8s-master1:~#
root@k8s-master1:~# kubectl exec -ti nginx-deployment-7fb96c846b-dtb2q -- ip addr show
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "c4ef4783d5d0d9220984607230817e19b614b07ba245bab55bc430e7bdf3abd3": OCI runtime exec failed: exec failed: unable to start container process: exec: "ip": executable file not found in $PATH: unknown
root@k8s-master1:~#
root@k8s-master1:~# crictl ps | grep dtb2q
3f085e4ef1c4d   295c7be079025   42 minutes ago   Running   nginx   0   3521e0a123229   nginx-deployment-7fb96c846b-dtb2q
root@k8s-master1:~#
root@k8s-master1:~# nerdctl -n k8s.io inspect -f '{{.State.Pid}}' 3f085e4ef1c4d
687162
root@k8s-master1:~#
root@k8s-master1:~# nsenter -n -t 687162 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 5a:b3:fd:c9:06:22 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.2.32/24 brd 172.16.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::58b3:fdff:fec9:622/64 scope link
       valid_lft forever preferred_lft forever
root@k8s-master1:~#
root@k8s-master1:~# ip addr show | grep "^13:\ veth"
13: vethd3fc224d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
root@k8s-master1:~#

Docker

root@k8s-node1:~# docker ps | grep nginx-deployment-7fb96c846b-hlt58
09e994c11b41   295c7be07902   "nginx -g 'daemon of…"   57 seconds ago   Up 56 seconds   k8s_nginx_nginx-deployment-7fb96c846b-hlt58_default_b6d4672e-559f-45c1-bc47-26a3b86e560a_0
9789ce9bba3c   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"   57 seconds ago   Up 57 seconds   k8s_POD_nginx-deployment-7fb96c846b-hlt58_default_b6d4672e-559f-45c1-bc47-26a3b86e560a_0
root@k8s-node1:~#
root@k8s-node1:~# docker inspect -f '{{.State.Pid}}' 09e994c11b41
6944
root@k8s-node1:~#
root@k8s-node1:~# nsenter -n -t 6944 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether da:c5:dc:51:f9:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.3.6/24 brd 172.16.3.255 scope global eth0
       valid_lft forever preferred_lft forever
root@k8s-node1:~#
root@k8s-node1:~# ip addr show | grep "^7:\ veth"
7: veth3b3e7690@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
root@k8s-node1:~#
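Instead of eyeballing interface indexes, the same lookup can read them straight from /sys: inside the container's namespace, eth0's iflink file holds the ifindex of its host-side peer. The sketch below exercises that matching logic against a throwaway directory standing in for /sys/class/net, since it cannot assume a live node; the commented nsenter/grep pair is what you would actually run (directory and index values mirror the transcript above):

```shell
# On a real node (PID obtained via crictl/nerdctl or docker inspect as above):
#   peer=$(nsenter -n -t "$PID" cat /sys/class/net/eth0/iflink)
#   grep -l "^${peer}$" /sys/class/net/veth*/ifindex
# Below, a temporary directory stands in for /sys/class/net so the matching
# logic itself can be run anywhere.
fake=$(mktemp -d)
mkdir -p "$fake/veth7cbd10a4" "$fake/cni0"
echo 12 > "$fake/veth7cbd10a4/ifindex"
echo 5  > "$fake/cni0/ifindex"

peer=12    # value that would come from the container's eth0/iflink
match=$(grep -l "^${peer}$" "$fake"/*/ifindex)
veth=$(basename "$(dirname "$match")")
echo "$veth"
rm -rf "$fake"
```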

Other methods

Inject an ephemeral container into the Pod, then run the ip command in it to get the interfaces and IP addresses. Once injected, an ephemeral container cannot be deleted on its own; it is only removed when the Pod itself is deleted.

root@k8s-master1:~# kubectl get pods -owide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
busybox                             1/1     Running   0          81m   172.16.2.31   k8s-master1   <none>           <none>
nginx-deployment-7fb96c846b-dtb2q   1/1     Running   0          75m   172.16.2.32   k8s-master1   <none>           <none>
nginx-deployment-7fb96c846b-hlt58   1/1     Running   0          75m   172.16.3.6    k8s-node1     <none>           <none>
nginx-deployment-7fb96c846b-m868g   1/1     Running   0          75m   172.16.0.56   k8s-master2   <none>           <none>
root@k8s-master1:~#
root@k8s-master1:~# kubectl exec -ti nginx-deployment-7fb96c846b-dtb2q -- ip addr show
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "92122aa30868a183806c3120ba0aaacce72fea7306e642d7ce4ead3684bb3269": OCI runtime exec failed: exec failed: unable to start container process: exec: "ip": executable file not found in $PATH: unknown
root@k8s-master1:~#
root@k8s-master1:~# kubectl debug -ti nginx-deployment-7fb96c846b-dtb2q --image=busybox:1.28 --target=nginx
Targeting container "nginx". If you don't see processes from this container it may be because the container runtime doesn't support this feature.
Defaulting debug container name to debugger-bvft7.
If you don't see a command prompt, try pressing enter.
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue
    link/ether 5a:b3:fd:c9:06:22 brd ff:ff:ff:ff:ff:ff
    inet 172.16.2.32/24 brd 172.16.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::58b3:fdff:fec9:622/64 scope link
       valid_lft forever preferred_lft forever
/ # exit
Session ended, the ephemeral container will not be restarted but may be reattached using 'kubectl attach nginx-deployment-7fb96c846b-dtb2q -c debugger-bvft7 -i -t' if it is still running
root@k8s-master1:~#
root@k8s-master1:~# ip addr show | grep "^13:\ veth"
13: vethd3fc224d@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
root@k8s-master1:~#
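If you lose track of the debugger's name for reattaching, it can be recovered from the Pod spec; on a live cluster, kubectl get pod <pod> -o jsonpath='{.spec.ephemeralContainers[*].name}' does this directly. The sketch below runs the same extraction with sed against a hypothetical JSON fragment so it works without a cluster:

```shell
# Hypothetical fragment of "kubectl get pod <pod> -o json" output
json='{"spec":{"ephemeralContainers":[{"name":"debugger-bvft7","image":"busybox:1.28"}]}}'

# Crude extraction of the first ephemeral container's name; on a real cluster use:
#   kubectl get pod <pod> -o jsonpath='{.spec.ephemeralContainers[*].name}'
name=$(echo "$json" | sed -n 's/.*"ephemeralContainers":\[{"name":"\([^"]*\)".*/\1/p')
echo "$name"
```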

Once you have the Pod's veth interface on the host, you can capture its traffic with tcpdump.

tcpdump -i vethd3fc224d
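A few commonly useful variations, sketched with the interface name and Pod IP from the examples above (adjust to your own Pod; the capture commands themselves need root on the node, so they appear as comments and only the command string is assembled here):

```shell
# Hypothetical capture parameters taken from the example above
veth=vethd3fc224d
pod_ip=172.16.2.32

# Typical invocations (run as root on the node):
#   tcpdump -i "$veth" -nn                   # live capture, no name resolution
#   tcpdump -i "$veth" -nn host "$pod_ip"    # only this Pod's traffic
#   tcpdump -i "$veth" -w pod.pcap           # save to a file for Wireshark
cmd="tcpdump -i $veth -nn host $pod_ip"
echo "$cmd"
```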


