Kubernetes

How to find a pod container's PID and its cgroup CPU/memory settings (containerd)

백곰곰 2024. 3. 12. 17:50

In this post we find a specific pod's PID on its node and check the values assigned to that container's cgroup.

Finding the pod's PID and checking its cgroup settings

1. Connect to the node where the pod is running

2. List the containers and find the container ID by pod name

# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER           IMAGE               CREATED              STATE               NAME                          ATTEMPT             POD ID              POD
68aa4891cf93f       05455a08881ea       About a minute ago   Running             nsenter                       0                   962745781cdb8       nsenter-a1brmb
acbde9dbbb13d       e4720093a3c13       4 days ago           Running             simple-http                   0                   1bfcc1fdab823       nginx-deployment-test-1-6fbcdc88c9-8mxr9
452eb6a07d711       6a3226f0df713       4 days ago           Running             liveness-probe                0                   46d1a323fb500       ebs-csi-node-zmdgs
e69ee3ec6602e       52b705756054c       4 days ago           Running             node-driver-registrar         0                   46d1a323fb500       ebs-csi-node-zmdgs
94142eaaa7592       e4720093a3c13       4 days ago           Running             simple-http                   0                   436e3dc0f27e6       nginx-deployment-test-1-6fbcdc88c9-6f89c
...
# crictl ps | grep <pod-name>
94142eaaa7592       e4720093a3c13       4 days ago           Running             simple-http                   0                   436e3dc0f27e6       nginx-deployment-test-1-6fbcdc88c9-6f89c
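If you script this lookup, the container ID is the first column of `crictl ps` output, so awk can extract it (crictl also supports filtering by name directly, e.g. `crictl ps --name <container-name>`). A minimal sketch, using a sample line copied from the output above instead of a live node:

```shell
# Extract the container ID (first column) from a crictl ps line.
# On a real node you would pipe `crictl ps | grep <pod-name>` into the same awk.
line='94142eaaa7592       e4720093a3c13       4 days ago   Running   simple-http   0   436e3dc0f27e6   nginx-deployment-test-1-6fbcdc88c9-6f89c'
cid=$(printf '%s\n' "$line" | awk '{print $1}')
echo "$cid"
```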

3. Find the container's PID

# crictl inspect 94142eaaa7592 | grep -i pid
    "pid": 5171,
            "pid": 1
            "type": "pid"
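The first "pid" above (5171) is the container's init process PID as seen from the node; the later matches are the in-namespace PID and the pid-namespace entry. When scripting, the extra grep matches get in the way, so one sketch is to keep only the first one (if jq is installed, `crictl inspect <id> | jq .info.pid` should reach the same field, though that path is worth verifying on your containerd version):

```shell
# Keep only the first "pid" value from crictl inspect output.
# The sample lines are copied from the output above; on a real node you
# would pipe `crictl inspect <container-id>` into the same filter.
sample='    "pid": 5171,
            "pid": 1
            "type": "pid"'
pid=$(printf '%s\n' "$sample" | grep -m1 '"pid"' | grep -oE '[0-9]+')
echo "$pid"
```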


4. Check the container's resource settings

# crictl inspect 94142eaaa7592
...
  "resources": {
      "linux": {
        "cpuPeriod": "100000",
        "cpuQuota": "100000",
        "cpuShares": "1024",
        "cpusetCpus": "",
        "cpusetMems": "",
        "hugepageLimits": [],
        "memoryLimitInBytes": "2147483648",
        "memorySwapLimitInBytes": "2147483648",
        "oomScoreAdj": "-997",
        "unified": {}
      },
      "windows": null
    }
...

For reference, the pod's resources spec:

    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: "1"
        memory: 2Gi


5. Check the cgroup

# cd /proc/5171
# cat /proc/5171/cgroup
11:net_cls,net_prio:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
10:devices:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
9:memory:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
8:freezer:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
7:perf_event:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
6:hugetlb:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
5:blkio:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
4:cpuset:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
3:pids:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
2:cpu,cpuacct:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
1:name=systemd:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope

# check the memory limit
# cd /sys/fs/cgroup/memory/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
# ls | grep limit
# cat memory.limit_in_bytes
2147483648
# cat memory.soft_limit_in_bytes
9223372036854771712   # kernel default value, i.e. no soft limit is set

# check the cpu limit
# cd /sys/fs/cgroup/cpu/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
# cat cpu.cfs_quota_us
100000
# cat cpu.cfs_period_us
100000
# cat cpu.shares
1024
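Dividing cpu.cfs_quota_us by cpu.cfs_period_us gives the effective CPU limit; here both are 100000, i.e. exactly 1 core, matching the pod's limits.cpu: "1". A quick arithmetic check (values copied from the output above):

```shell
# Effective CPU limit in millicores = quota * 1000 / period
quota=100000
period=100000
echo "$(( quota * 1000 / period ))m"
```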


Comparing cgroup values for different pod request/limit settings

Let's see how the resources section of a pod spec is actually reflected in the cgroup.

Case 1)

Spec

    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: "1"
        memory: 2Gi


crictl inspect output

  "resources": {
      "linux": {
        "cpuPeriod": "100000",
        "cpuQuota": "100000",
        "cpuShares": "1024",
        "cpusetCpus": "",
        "cpusetMems": "",
        "hugepageLimits": [],
        "memoryLimitInBytes": "2147483648",
        "memorySwapLimitInBytes": "2147483648",
        "oomScoreAdj": "-997",
        "unified": {}
      },
      "windows": null
    }


Case 2)

Spec

    resources:
      limits:
        cpu: 800m
        memory: 1Gi
      requests:
        cpu: 400m
        memory: 512Mi

crictl inspect output

    "resources": {
      "linux": {
        "cpuPeriod": "100000",
        "cpuQuota": "80000",
        "cpuShares": "409",
        "cpusetCpus": "",
        "cpusetMems": "",
        "hugepageLimits": [],
        "memoryLimitInBytes": "1073741824",
        "memorySwapLimitInBytes": "1073741824",
        "oomScoreAdj": "968",
        "unified": {}
      },
      "windows": null
    }


cgroup values

# cat memory.soft_limit_in_bytes
9223372036854771712
# cat memory.limit_in_bytes
1073741824

# cat cpu.cfs_quota_us
80000
# cat cpu.cfs_period_us
100000
# cat cpu.shares
409


Comparing the two cases, we can see that the pod's memory request is not reflected in any cgroup setting.

For CPU, however, the request value is reflected in cpu.shares.

  - Formula: cpu.shares = requests.cpu (in millicores) * 1024 / 1000 (1 core = 1000m)
     e.g. requests.cpu: 400m -> cpu.shares: 400 * 1024 / 1000 = 409.6, truncated to 409
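The same arithmetic in shell (integer division truncates 409.6 down to 409, which matches the cpuShares value crictl reported for case 2):

```shell
# cpu.shares derived from the CPU request in millicores
millicores=400                          # requests.cpu: 400m
shares=$(( millicores * 1024 / 1000 ))  # truncating integer division
echo "$shares"
```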

The memory limit, meanwhile, is applied as the cgroup hard limit (memory.limit_in_bytes).


References

- crictl installation: GitHub - kubernetes-sigs/cri-tools (CLI and validation tools for the Kubelet Container Runtime Interface) - github.com
- Kubernetes Container Resource Requirements - Part 2: CPU (CPU requests, limits, Guaranteed or Burstable?) - medium.com
- "How are cpu requests and cpu limits applied in Kubernetes?" (Kubernetes에서의 cpu requests, cpu limits는 어떻게 적용될까) - kimmj.github.io
