
Flannel CNI Hands-On

백곰곰 2024. 9. 7. 17:21

I'm participating in Gasida's Kubernetes Advanced Networking Study and will be writing up the topics we cover.
Week 2 was about K8S Flannel CNI & PAUSE; this post walks through the Flannel CNI hands-on.

 

What is CNI?

CNI consists of a specification and libraries for writing plugins that configure network interfaces in Linux containers (a plugin being a program that applies a given network configuration), plus a number of supported plugins. CNI concerns itself only with a container's network connectivity and with removing allocated resources when the container is deleted.

A CNI plugin is responsible for all the work needed to attach a container to the network, including IP management (allocation and reclamation) and setting up routing information.

The CNI Specification

The CNI specification defines the following:

  • A format in which administrators can define network configurations
  • A protocol by which container runtimes make requests to network plugins
  • A procedure for executing plugins based on the supplied configuration
  • A procedure for plugins to delegate functionality to other plugins
  • Data formats in which plugins return results to the runtime
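To make the first item concrete: a network configuration is a JSON document the runtime reads from disk (conventionally under /etc/cni/net.d/). Below is a minimal, hypothetical configuration list — the names and subnet are made up for illustration, and flannel's actual file appears later in this post — chaining a bridge plugin with host-local IPAM and a portmap plugin.

```shell
# Hypothetical minimal CNI network configuration list (illustration only)
cat <<'EOF' > /tmp/10-mynet.conflist
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# every entry in "plugins" (plus the IPAM section) declares a plugin type
grep -c '"type"' /tmp/10-mynet.conflist   # → 3
```

The runtime invokes each plugin in the "plugins" list in order, passing this JSON on stdin together with environment variables such as CNI_COMMAND and CNI_NETNS.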

 



Installing a CNI (Flannel)

We'll walk through the Flannel CNI hands-on using kind.

1) Create the cluster

$ cat <<EOF> kind-cni.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  labels:
    mynode: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    controllerManager:
      extraArgs:
        bind-address: 0.0.0.0
    etcd:
      local:
        extraArgs:
          listen-metrics-urls: http://0.0.0.0:2381
    scheduler:
      extraArgs:
        bind-address: 0.0.0.0
  - |
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0
- role: worker
  labels:
    mynode: worker
- role: worker
  labels:
    mynode: worker2
networking:
  podSubnet: "10.244.0.0/16"      # network range for pods in the cluster; set to kind's default pod CIDR
  serviceSubnet: "10.200.0.0/24"  # network range for services in the cluster
  disableDefaultCNI: true         # do not install kind's default CNI (kindnet)
EOF

$ kind create cluster --config kind-cni.yaml --name myk8s --image kindest/node:v1.30.4
Creating cluster "myk8s" ...
 ✓ Ensuring node image (kindest/node:v1.30.4) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-myk8s"
You can now use your cluster with:

kubectl cluster-info --context kind-myk8s

Thanks for using kind! 😊


$ kubectl cluster-info

Kubernetes control plane is running at https://127.0.0.1:53962
CoreDNS is running at https://127.0.0.1:53962/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ kubectl get nodes -o wide

NAME                  STATUS     ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
myk8s-control-plane   NotReady   control-plane   3m29s   v1.30.4   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.6.31-linuxkit   containerd://1.7.18
myk8s-worker          NotReady   <none>          3m9s    v1.30.4   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.6.31-linuxkit   containerd://1.7.18
myk8s-worker2         NotReady   <none>          3m9s    v1.30.4   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.6.31-linuxkit   containerd://1.7.18

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                                                             NAMES
f6ea2c33cd1b   kindest/node:v1.30.4   "/usr/local/bin/entr…"   4 minutes ago   Up 4 minutes   0.0.0.0:30000-30002->30000-30002/tcp, 127.0.0.1:53962->6443/tcp   myk8s-control-plane
61477778322b   kindest/node:v1.30.4   "/usr/local/bin/entr…"   4 minutes ago   Up 4 minutes                                                                     myk8s-worker2
ed3524884827   kindest/node:v1.30.4   "/usr/local/bin/entr…"   4 minutes ago   Up 4 minutes                                                                     myk8s-worker

## Install the tools we need on each node (container)
$ docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree jq psmisc lsof wget bridge-utils tcpdump iputils-ping htop git nano -y'
$ docker exec -it myk8s-worker  sh -c 'apt update && apt install tree jq psmisc lsof wget bridge-utils tcpdump iputils-ping -y'
$ docker exec -it myk8s-worker2 sh -c 'apt update && apt install tree jq psmisc lsof wget bridge-utils tcpdump iputils-ping -y'

2) Build the CNI plugins (prepare the bridge plugin binary, which flannel delegates to)

$ docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/# apt install golang -y
...
root@myk8s-control-plane:/# git clone https://github.com/containernetworking/plugins
root@myk8s-control-plane:/# cd plugins
root@myk8s-control-plane:/plugins# ls
CONTRIBUTING.md  LICENSE    README.md	  build_linux.sh    go.mod  integration  plugins  test_linux.sh    vendor
DCO		 OWNERS.md  RELEASING.md  build_windows.sh  go.sum  pkg		 scripts  test_windows.sh
root@myk8s-control-plane:/plugins# chmod +x build_linux.sh
root@myk8s-control-plane:/plugins# ./build_linux.sh
Building plugins
  bandwidth
  firewall
  portmap
  sbr
...
root@myk8s-control-plane:/plugins# ls -l bin
total 75772
-rwxr-xr-x 1 root root  4093230 Sep  7 07:04 bandwidth
-rwxr-xr-x 1 root root  4471145 Sep  7 07:04 bridge
-rwxr-xr-x 1 root root 10195915 Sep  7 07:04 dhcp
-rwxr-xr-x 1 root root  4109486 Sep  7 07:04 dummy
...

root@myk8s-control-plane:/plugins# exit

## Copy the bridge binary out to the local machine
docker cp -a myk8s-control-plane:/plugins/bin/bridge .

 

3) Install the CNI

## Check pod status before installing the CNI
kubectl get po -A -owide
NAMESPACE            NAME                                          READY   STATUS    RESTARTS   AGE     IP           NODE                  NOMINATED NODE   READINESS GATES
kube-system          coredns-7db6d8ff4d-2pg5b                      0/1     Pending   0          2m56s   <none>       myk8s-worker          <none>           <none>
kube-system          coredns-7db6d8ff4d-rrtpm                      0/1     Pending   0          2m56s   <none>       myk8s-worker          <none>           <none>
kube-system          etcd-myk8s-control-plane                      1/1     Running   0          3m12s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-apiserver-myk8s-control-plane            1/1     Running   0          3m12s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-controller-manager-myk8s-control-plane   1/1     Running   0          3m12s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-proxy-74c9n                              1/1     Running   0          2m54s   172.18.0.3   myk8s-worker          <none>           <none>
kube-system          kube-proxy-7l56n                              1/1     Running   0          2m54s   172.18.0.2   myk8s-worker2         <none>           <none>
kube-system          kube-proxy-t28gg                              1/1     Running   0          2m56s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-scheduler-myk8s-control-plane            1/1     Running   0          3m12s   172.18.0.4   myk8s-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-7d4d9bdcc5-z6xmq       0/1     Pending   0          2m56s   <none>       myk8s-worker          <none>      

## Install flannel
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

## Copy the bridge binary into each node's /opt/cni/bin
docker cp bridge myk8s-control-plane:/opt/cni/bin/bridge
docker cp bridge myk8s-worker:/opt/cni/bin/bridge
docker cp bridge myk8s-worker2:/opt/cni/bin/bridge

## Verify the installation
$ kubectl get cm kube-flannel-cfg -n kube-flannel -oyaml | kubectl neat
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",          # CNI 네트워크 인터페이스 이름 정의
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,     # 각 파드에서 자신의 트래픽을 루프백으로 받을 수 있도록 허용, 파드가 자신의 IP로 오는 트래픽을 수신할 수 있게 함
            "isDefaultGateway": true # flannel이 파드의 기본 게이트웨이 역할을 수용하도록 설정
          }
        },
        {
          "type": "portmap",   
          "capabilities": {
            "portMappings": true 
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,    ## iptables 기반으로 설정
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
  

kubectl get po -A -owide
NAMESPACE            NAME                                          READY   STATUS    RESTARTS   AGE     IP           NODE                  NOMINATED NODE   READINESS GATES
kube-flannel         kube-flannel-ds-8b7bw                         1/1     Running   0          5m6s    172.18.0.2   myk8s-worker2         <none>           <none>
kube-flannel         kube-flannel-ds-j7dfw                         1/1     Running   0          5m6s    172.18.0.3   myk8s-worker          <none>           <none>
kube-flannel         kube-flannel-ds-kbvcz                         1/1     Running   0          5m7s    172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          coredns-7db6d8ff4d-2pg5b                      1/1     Running   0          6m40s   10.244.2.3   myk8s-worker          <none>           <none>
kube-system          coredns-7db6d8ff4d-rrtpm                      1/1     Running   0          6m40s   10.244.2.2   myk8s-worker          <none>           <none>
kube-system          etcd-myk8s-control-plane                      1/1     Running   0          6m56s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-apiserver-myk8s-control-plane            1/1     Running   0          6m56s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-controller-manager-myk8s-control-plane   1/1     Running   0          6m56s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-proxy-74c9n                              1/1     Running   0          6m38s   172.18.0.3   myk8s-worker          <none>           <none>
kube-system          kube-proxy-7l56n                              1/1     Running   0          6m38s   172.18.0.2   myk8s-worker2         <none>           <none>
kube-system          kube-proxy-t28gg                              1/1     Running   0          6m40s   172.18.0.4   myk8s-control-plane   <none>           <none>
kube-system          kube-scheduler-myk8s-control-plane            1/1     Running   0          6m56s   172.18.0.4   myk8s-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-7d4d9bdcc5-z6xmq       1/1     Running   0          6m40s   10.244.2.4   myk8s-worker          <none>

Pods that were Pending before the CNI was installed have now been assigned IPs.

For reference, the settings in this ConfigMap are written to each node through the DaemonSet's hostPath volumes.

$ kubectl get ds kube-flannel-ds -n kube-flannel -oyaml | kubectl neat
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.25.6
        imagePullPolicy: IfNotPresent
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      dnsPolicy: ClusterFirst
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
        imagePullPolicy: IfNotPresent
        name: install-cni-plugin
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.25.6
        imagePullPolicy: IfNotPresent
        name: install-cni
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: flannel
      serviceAccountName: flannel
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
          type: ""
        name: run
      - hostPath:
          path: /opt/cni/bin
          type: ""
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
          type: ""
        name: cni
      - configMap:
          defaultMode: 420
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate

 

Checking the node configuration)

$ docker exec -it myk8s-worker bash
root@myk8s-worker:~# cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
root@myk8s-worker:~# ls /run/flannel
subnet.env
root@myk8s-worker:~# ls /opt/cni/bin
bridge	flannel  host-local  loopback  portmap	ptp

 

Each node holds the range of IPs to allocate to its pods (podCIDR); the details may differ by CNI. A node's podCIDR cannot be changed once assigned (the node must be recreated).

$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
10.244.0.0/24 10.244.1.0/24 10.244.2.0/24

$ docker exec -it myk8s-control-plane bash
root@myk8s-control-plane:/# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=65485
FLANNEL_IPMASQ=true
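The FLANNEL_MTU value follows directly from the VXLAN backend: each encapsulated packet carries about 50 bytes of overhead, so flanneld advertises the node interface's MTU minus that overhead. A quick check, assuming the kind node's eth0 MTU is 65535 (typical for the Docker bridge in this setup; verify with `ip link` on the node):

```shell
# VXLAN overhead: outer Ethernet(14) + outer IPv4(20) + UDP(8) + VXLAN header(8)
LINK_MTU=65535                       # assumed eth0 MTU inside the kind node
VXLAN_OVERHEAD=$((14 + 20 + 8 + 8))
echo $((LINK_MTU - VXLAN_OVERHEAD))  # → 65485, matching FLANNEL_MTU above
```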

$ kubectl logs kube-controller-manager-myk8s-control-plane -n kube-system | grep -i cidr
I0907 07:30:04.868508       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
I0907 07:30:06.826213       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
I0907 07:30:07.968912       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0907 07:30:07.968915       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0907 07:30:07.968919       1 shared_informer.go:320] Caches are synced for cidrallocator
I0907 07:30:07.973507       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="myk8s-control-plane" podCIDRs=["10.244.0.0/24"]
I0907 07:30:10.689407       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="myk8s-worker" podCIDRs=["10.244.1.0/24"]
I0907 07:30:11.943526       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="myk8s-worker2" podCIDRs=["10.244.2.0/24"]

 

The NodeIPAM controller assigns each node a podCIDR that does not overlap with any other node's.
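The allocation itself is simply slicing the cluster podSubnet (10.244.0.0/16) into fixed-size per-node blocks, /24 by default (the mask is controlled by the kube-controller-manager flag --node-cidr-mask-size). A sketch of the arithmetic, not the controller's actual code:

```shell
# Slice podSubnet 10.244.0.0/16 into per-node /24 blocks in assignment order
# (sketch of the NodeIPAM allocation behavior, not its real implementation)
for i in 0 1 2; do
  echo "node $i -> 10.244.${i}.0/24"
done
```

The order matches the controller-manager log above: the control-plane node registered first and received 10.244.0.0/24, then the workers got 10.244.1.0/24 and 10.244.2.0/24.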

These settings can also be checked on the node itself, as shown below (under ipam), along with the configuration handed to the bridge CNI plugin.

$ kubectl get po -A -owide | grep myk8s-control-plane
kube-flannel         kube-flannel-ds-2fl9b                         1/1     Running   0          75m   172.18.0.3   myk8s-control-plane   <none>           <none>
kube-system          coredns-7db6d8ff4d-flz46                      1/1     Running   0          76m   10.244.0.3   myk8s-control-plane   <none>           <none>
kube-system          coredns-7db6d8ff4d-kznt2                      1/1     Running   0          76m   10.244.0.4   myk8s-control-plane   <none>           <none>
kube-system          etcd-myk8s-control-plane                      1/1     Running   0          76m   172.18.0.3   myk8s-control-plane   <none>           <none>
kube-system          kube-apiserver-myk8s-control-plane            1/1     Running   0          76m   172.18.0.3   myk8s-control-plane   <none>           <none>
kube-system          kube-controller-manager-myk8s-control-plane   1/1     Running   0          76m   172.18.0.3   myk8s-control-plane   <none>           <none>
kube-system          kube-proxy-sdvs8                              1/1     Running   0          76m   172.18.0.3   myk8s-control-plane   <none>           <none>
kube-system          kube-scheduler-myk8s-control-plane            1/1     Running   0          76m   172.18.0.3   myk8s-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-7d4d9bdcc5-s9qpg       1/1     Running   0          76m   10.244.0.2   myk8s-control-plane   <none>           <none>

$ docker exec -it myk8s-control-plane bash

root@myk8s-control-plane:~# cd /var/lib/cni/networks/cbr0
root@myk8s-control-plane:/var/lib/cni/networks/cbr0# ls
10.244.0.2  10.244.0.3	10.244.0.4  last_reserved_ip.0	lock
root@myk8s-control-plane:/var/lib/cni/networks/cbr0# cat 10.244.0.4
4163a41c49860677e39f05ba69d99547b86ac904f8068090aa5309c04006c4b6
eth0

root@myk8s-control-plane:~# cd /var/lib/cni/flannel
root@myk8s-control-plane:/var/lib/cni/flannel# ls
4163a41c49860677e39f05ba69d99547b86ac904f8068090aa5309c04006c4b6  8b4a938c07880a68ad9ddb4b44a5a241ca7407684ac863c7818c9a0477137336  ac5a30441e4a1cfe3c788cd7b01d04faa3ec6af4899cc8316b15f9e359506140
root@myk8s-control-plane:/var/lib/cni/flannel# cat 4163a41c49860677e39f05ba69d99547b86ac904f8068090aa5309c04006c4b6 | jq
{
  "cniVersion": "0.3.1",
  "hairpinMode": true,
  "ipMasq": false,
  "ipam": {
    "ranges": [
      [
        {
          "subnet": "10.244.0.0/24"
        }
      ]
    ],
    "routes": [
      {
        "dst": "10.244.0.0/16"
      }
    ],
    "type": "host-local"
  },
  "isDefaultGateway": true,
  "isGateway": true,
  "mtu": 65485,
  "name": "cbr0",
  "type": "bridge"
}
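This file records what the flannel CNI plugin did for that container: it took the `delegate` block from 10-flannel.conflist, filled in the node's subnet and MTU from /run/flannel/subnet.env, and invoked the bridge plugin with the merged configuration. A rough sketch of that merge in shell (illustrative only, not flannel's actual code):

```shell
# Values flanneld wrote for this node (see /run/flannel/subnet.env above)
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=65485
# Derive the node's pod subnet: 10.244.0.1/24 -> 10.244.0.0/24
SUBNET="${FLANNEL_SUBNET%.*}.0/${FLANNEL_SUBNET#*/}"
echo "delegate to bridge: subnet=${SUBNET} mtu=${FLANNEL_MTU}"
```

That derived subnet is exactly the "ranges" value in the host-local ipam block above, and the MTU matches the bridge config's "mtu" field.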

 

 

