Install k3s and deploy Cilium on Debian 12
A short, simple page for setting up a Kubernetes cluster with k3s and Cilium: a concise quick-start guide that does not dive deep into the underlying technologies. By the end of this document, you will have a Kubernetes cluster ready to use.
Test environment#
For this exercise, I will use a Debian 12 machine with 12 GB of RAM, 6 virtual CPU cores, a 100 GB disk, and a 1 Gbps Ethernet NIC.
The goal of this environment is to build a Kubernetes cluster as close as possible to a production setup. After a few surprises during my tests, I now work directly as “root” instead of using “sudo”. I am entirely setting aside PKI and certificates for now. The objective is a cluster ready to run pods, with the k3s server acting as both “master” and “worker”.
To do this, I will use k3s, Cilium (1.18.1), and Gateway API. Cilium will be installed via Helm (driven by the cilium CLI), while the Gateway API CRDs will be installed with kubectl kustomize. Storage tooling (CSI) is deliberately out of scope for this page.
Prerequisites#
Before installing Kubernetes, make sure your system has a clock synchronized against a reliable time source, a static IP address, and an up-to-date /etc/hosts file. Also check that unusual tools or configurations, such as SELinux, AppArmor, or aggressive hardening, are not blocking package installation.
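As a minimal pre-flight sketch, you can verify that a hostname resolves to a static IP through an /etc/hosts-style file (the sample file, hostname, and IP below are placeholders; on a real node, point the helper at /etc/hosts and your actual hostname):

```shell
# check_hosts_entry FILE HOST prints the IPv4 address mapped to HOST in an
# /etc/hosts-style file, and prints nothing if the entry is missing.
check_hosts_entry() {
  awk -v h="$2" '$1 ~ /^[0-9]/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' "$1"
}

# Demo against a sample file.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1   localhost
192.0.2.10  vps-1
EOF
check_hosts_entry /tmp/hosts.sample vps-1   # prints 192.0.2.10
```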
Installing k3s as a master#
The Rancher team provides a Go binary, downloadable from their website, that installs and configures a working Kubernetes with a single command line. Below is the command I use to bootstrap k3s with a reduced set of built-in components, so that Cilium can be added afterwards:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s - --write-kubeconfig-mode 644 \
--flannel-backend=none \
--disable-kube-proxy \
--disable servicelb \
--disable-network-policy \
--disable traefik
These arguments disable Flannel, kube-proxy, the ServiceLB load balancer, the default network policy controller, and the bundled Traefik ingress controller. Cilium will take over all of these roles in a moment.
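The same settings can be kept in a configuration file instead of a one-shot command line: k3s reads /etc/rancher/k3s/config.yaml at startup. A sketch equivalent to the flags above (key names follow the k3s configuration file convention, where each CLI flag maps to a YAML key):

```yaml
# /etc/rancher/k3s/config.yaml — equivalent of the CLI flags above
write-kubeconfig-mode: "644"
flannel-backend: "none"
disable-kube-proxy: true
disable-network-policy: true
disable:
  - servicelb
  - traefik
```

With this file in place, the installer can be run without the long argument list, and the configuration survives reinstalls.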
With systemctl status k3s, check that the service is in the “active (running)” state. For ease of use, I enable kubectl completion to simplify typing commands: echo 'source <(kubectl completion bash)' >>~/.bashrc.
Run the following two commands to validate access to Kubernetes and its resources: kubectl get nodes && kubectl get pods -A -o wide.
root@vps-1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vps-1 NotReady control-plane,master 15m v1.33.3+k3s1
Kubernetes is running at this point, but the pods are most likely stuck in a “Pending”, “ContainerCreating”, or “CrashLoopBackOff” state:
root@vps-1:~# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5688667fd4-sznbs 0/1 Pending 0 18m <none> <none> <none> <none>
kube-system local-path-provisioner-774c6665dc-nrxd5 0/1 Pending 0 18m <none> <none> <none> <none>
kube-system metrics-server-6f4c6675d5-jnhfk 0/1 Pending 0 18m <none> <none> <none> <none>
When listing the nodes, the status is “NotReady”. The missing network component (CNI) prevents the pods essential to the Kubernetes ecosystem from starting, leaving the cluster unable to accept workloads.
Several prerequisites are needed for Cilium to work properly, notably the /sys/fs/bpf mount (for eBPF) and the Gateway API CRDs. Before installing the CRDs, check the compatibility between your Cilium version and the CRDs, documented on the Cilium website. With Cilium 1.18.1, you can use Gateway API v1.3.0: kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.3.0" | kubectl apply -f -.
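A quick sketch to check the eBPF prerequisite by scanning /proc/mounts (the mount command is shown as a comment because it requires root):

```shell
# Check whether the BPF filesystem is mounted; /proc/mounts fields are:
# device, mountpoint, fstype, options.
if grep -q ' /sys/fs/bpf bpf ' /proc/mounts; then
  bpf_state="mounted"
else
  bpf_state="missing"   # on a real node: mount bpffs -t bpf /sys/fs/bpf
fi
echo "bpf filesystem: $bpf_state"
```

On a stock Debian 12 with k3s, the mount is usually already present; the check simply makes the assumption explicit.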
⚠️ If you hit a connection problem when running kubectl commands, export the following “KUBECONFIG” variable: export KUBECONFIG=/etc/rancher/k3s/k3s.yaml.
Installation of Cilium#
Here we are at last. Since Cilium 1.14.0, it is preferable to use its command-line tool (cilium-cli). We will download the tool, install it, and launch the Cilium deployment. The steps are simple; the CLI does everything.
$ export CLI_ARCH=amd64
$ export CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
$ curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
$ sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
$ tar xzvf cilium-linux-${CLI_ARCH}.tar.gz
$ mv cilium /usr/local/bin/
$ cilium install \
--version 1.18.1 \
--namespace kube-system \
--ipam kubernetes \
--kube-proxy-replacement=true \
--enable-gateway-api \
--set operator.replicas=1 \
--set ipam.operator.clusterPoolIPv4PodCIDRList=10.42.0.0/16 \
--set operator.metrics.enabled=false \
--set k8sServiceHost=127.0.0.1 \
--set k8sServicePort=6443
Make sure you have the right versions, via the cilium version command. As of mid-August 2025, this article is based on the following versions:
$ cilium version
cilium-cli: v0.18.6 compiled with go1.24.5 on linux/amd64
cilium image (default): v1.18.0
cilium image (stable): v1.18.1
cilium image (running): unknown. Unable to obtain cilium version. Reason: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused
$ kubectl version
Client Version: v1.33.3+k3s1
Kustomize Version: v5.6.0
Server Version: v1.33.3+k3s1
With the cilium install command, the tool deploys everything your Kubernetes cluster needs. Give the deployment a few minutes to complete. With watch -d 'kubectl get pods -n kube-system -o wide', you can watch the containers settle into place.
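For reference, the --set flags passed to cilium install above map to Cilium Helm chart values; a hypothetical values.yaml sketch for teams that prefer plain Helm (value names should be double-checked against the chart reference for your Cilium version):

```yaml
# values.yaml — mirror of the `cilium install` flags used above
ipam:
  mode: kubernetes
  operator:
    clusterPoolIPv4PodCIDRList:
      - 10.42.0.0/16
kubeProxyReplacement: true
gatewayAPI:
  enabled: true
operator:
  replicas: 1
  metrics:
    enabled: false
k8sServiceHost: 127.0.0.1
k8sServicePort: 6443
```

This would be consumed with something like helm upgrade --install cilium cilium/cilium -n kube-system -f values.yaml; the cilium-cli route shown above remains the one used in this article.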
Finally, cilium status --wait shows the state of the CNI. The --wait option is the equivalent of a watch, returning once the statuses are “OK”.
root@vps-1:~# cilium status
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Envoy DaemonSet: OK
\__/¯¯\__/ Hubble Relay: disabled
\__/ ClusterMesh: disabled
DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet cilium-envoy Desired: 1, Ready: 1/1, Available: 1/1
Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1
Containers: cilium Running: 1
cilium-envoy Running: 1
cilium-operator Running: 1
clustermesh-apiserver
hubble-relay
Cluster Pods: 6/6 managed by Cilium
Helm chart version: 1.18.1
Image versions cilium quay.io/cilium/cilium:v1.18.1@sha256:65ab17c052d8758b2ad157ce766285e04173722df59bdee1ea6d5fda7149f0e9: 1
cilium-envoy quay.io/cilium/cilium-envoy:v1.34.4-1754895458-68cffdfa568b6b226d70a7ef81fc65dda3b890bf@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2: 1
cilium-operator quay.io/cilium/operator-generic:v1.18.1@sha256:97f4553afa443465bdfbc1cc4927c93f16ac5d78e4dd2706736e7395382201bc: 1
In this status output, watch the “Cilium” and “Operator” states. As long as they are not OK, the CNI is not functional.
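To automate the same check in a provisioning script, one hedged sketch is to parse the status output for the two critical components (the sample heredoc below mimics the output above; on a real node you would capture it with cilium status | tee /tmp/cilium-status.txt):

```shell
# Sample of `cilium status` output, stored for parsing.
cat > /tmp/cilium-status.txt <<'EOF'
Cilium:             OK
Operator:           OK
Hubble Relay:       disabled
EOF

# Collect any critical component whose state is not OK.
not_ok=$(awk -F':[[:space:]]*' '/^(Cilium|Operator):/ && $2 != "OK" { print $1 }' /tmp/cilium-status.txt)
if [ -z "$not_ok" ]; then
  echo "core components OK"
else
  echo "not healthy: $not_ok" >&2
fi
```

In practice, the exit code of cilium status --wait should already be enough to gate a CI step without any text parsing.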
A crucial step: verify that Cilium actually works. Run cilium connectivity test. For several minutes, a battery of tests and a few pods will be launched to exercise the various CNI components and internal and external filtering.
root@vps-1:~# cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [default] Creating namespace cilium-test-1 for connectivity check...
✨ [default] Deploying echo-same-node service...
✨ [default] Deploying DNS test server configmap...
✨ [default] Deploying same-node deployment...
✨ [default] Deploying client deployment...
✨ [default] Deploying client2 deployment...
⌛ [default] Waiting for deployment cilium-test-1/client to become ready...
⌛ [default] Waiting for deployment cilium-test-1/client2 to become ready...
⌛ [default] Waiting for deployment cilium-test-1/echo-same-node to become ready...
⌛ [default] Waiting for pod cilium-test-1/client-645b68dcf7-qlr6z to reach DNS server on cilium-test-1/echo-same-node-69f8679755-8hs65 pod...
⌛ [default] Waiting for pod cilium-test-1/client2-66475877c6-cxkp2 to reach DNS server on cilium-test-1/echo-same-node-69f8679755-8hs65 pod...
⌛ [default] Waiting for pod cilium-test-1/client2-66475877c6-cxkp2 to reach default/kubernetes service...
⌛ [default] Waiting for pod cilium-test-1/client-645b68dcf7-qlr6z to reach default/kubernetes service...
⌛ [default] Waiting for Service cilium-test-1/echo-same-node to become ready...
⌛ [default] Waiting for Service cilium-test-1/echo-same-node to be synchronized by Cilium pod kube-system/cilium-wzzj6
⌛ [default] Waiting for NodePort 51.68.44.195:30096 (cilium-test-1/echo-same-node) to become ready...
ℹ️ Skipping IPCache check
🔭 Enabling Hubble telescope...
⚠️ Unable to contact Hubble Relay, disabling Hubble telescope and flow validation: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:4245: connect: connection refused"
ℹ️ Expose Relay locally with:
cilium hubble enable
cilium hubble port-forward&
ℹ️ Cilium version: 1.18.1
🏃[cilium-test-1] Running 123 tests ...
[=] [cilium-test-1] Test [no-policies] [1/123]
....................
[=] [cilium-test-1] Skipping test [no-policies-from-outside] [2/123] (skipped by condition)
[=] [cilium-test-1] Test [no-policies-extra] [3/123]
..
[=] [cilium-test-1] Test [allow-all-except-world] [4/123]
........
[=] [cilium-test-1] Test [client-ingress] [5/123]
..
[=] [cilium-test-1] Test [client-ingress-knp] [6/123]
..
[=] [cilium-test-1] Test [allow-all-with-metrics-check] [7/123]
..
[=] [cilium-test-1] Test [all-ingress-deny] [8/123]
......
[=] [cilium-test-1] Skipping test [all-ingress-deny-from-outside] [9/123] (skipped by condition)
[=] [cilium-test-1] Test [all-ingress-deny-knp] [10/123]
......
[=] [cilium-test-1] Test [all-egress-deny] [11/123]
........
[=] [cilium-test-1] Test [all-egress-deny-knp] [12/123]
........
[=] [cilium-test-1] Test [all-entities-deny] [13/123]
......
[=] [cilium-test-1] Test [cluster-entity] [14/123]
..
[=] [cilium-test-1] Skipping test [cluster-entity-multi-cluster] [15/123] (skipped by condition)
[=] [cilium-test-1] Test [host-entity-egress] [16/123]
..
[=] [cilium-test-1] Test [host-entity-ingress] [17/123]
[=] [cilium-test-1] Test [echo-ingress] [18/123]
..
[=] [cilium-test-1] Skipping test [echo-ingress-from-outside] [19/123] (skipped by condition)
[=] [cilium-test-1] Test [echo-ingress-knp] [20/123]
..
[=] [cilium-test-1] Test [client-ingress-icmp] [21/123]
..
[=] [cilium-test-1] Test [client-egress] [22/123]
..
[=] [cilium-test-1] Test [client-egress-knp] [23/123]
..
[=] [cilium-test-1] Test [client-egress-expression] [24/123]
..
[=] [cilium-test-1] Test [client-egress-expression-port-range] [25/123]
..
[=] [cilium-test-1] Test [client-egress-expression-knp] [26/123]
..
[=] [cilium-test-1] Test [client-egress-expression-knp-port-range] [27/123]
..
[=] [cilium-test-1] Test [client-with-service-account-egress-to-echo] [28/123]
..
[=] [cilium-test-1] Test [client-with-service-account-egress-to-echo-port-range] [29/123]
..
[=] [cilium-test-1] Test [client-egress-to-echo-service-account] [30/123]
..
[=] [cilium-test-1] Test [client-egress-to-echo-service-account-port-range] [31/123]
..
[=] [cilium-test-1] Test [to-entities-world] [32/123]
......
[=] [cilium-test-1] Test [to-entities-world-port-range] [33/123]
......
[=] [cilium-test-1] Test [to-cidr-external] [34/123]
....
[=] [cilium-test-1] Test [to-cidr-external-knp] [35/123]
....
[=] [cilium-test-1] Skipping test [seq-from-cidr-host-netns] [36/123] (skipped by condition)
[=] [cilium-test-1] Test [echo-ingress-from-other-client-deny] [37/123]
....
[=] [cilium-test-1] Test [client-ingress-from-other-client-icmp-deny] [38/123]
....
[=] [cilium-test-1] Test [client-egress-to-echo-deny] [39/123]
....
[=] [cilium-test-1] Test [client-egress-to-echo-deny-port-range] [40/123]
....
[=] [cilium-test-1] Test [client-ingress-to-echo-named-port-deny] [41/123]
..
[=] [cilium-test-1] Test [client-egress-to-echo-expression-deny] [42/123]
..
[=] [cilium-test-1] Test [client-egress-to-echo-expression-deny-port-range] [43/123]
..
[=] [cilium-test-1] Test [client-with-service-account-egress-to-echo-deny] [44/123]
..
[=] [cilium-test-1] Test [client-with-service-account-egress-to-echo-deny-port-range] [45/123]
..
[=] [cilium-test-1] Test [client-egress-to-echo-service-account-deny] [46/123]
.
[=] [cilium-test-1] Test [client-egress-to-echo-service-account-deny-port-range] [47/123]
.
[=] [cilium-test-1] Test [client-egress-to-cidr-deny] [48/123]
....
[=] [cilium-test-1] Test [client-egress-to-cidrgroup-deny] [49/123]
I0821 09:48:12.430887 17824 warnings.go:110] "Warning: cilium.io/v2alpha1 CiliumCIDRGroup is deprecated; use cilium.io/v2 CiliumCIDRGroup"
I0821 09:48:12.447954 17824 warnings.go:110] "Warning: cilium.io/v2alpha1 CiliumCIDRGroup is deprecated; use cilium.io/v2 CiliumCIDRGroup"
....
I0821 09:48:17.457318 17824 warnings.go:110] "Warning: cilium.io/v2alpha1 CiliumCIDRGroup is deprecated; use cilium.io/v2 CiliumCIDRGroup"
[=] [cilium-test-1] Test [client-egress-to-cidrgroup-deny-by-label] [50/123]
I0821 09:48:18.844599 17824 warnings.go:110] "Warning: cilium.io/v2alpha1 CiliumCIDRGroup is deprecated; use cilium.io/v2 CiliumCIDRGroup"
I0821 09:48:18.870625 17824 warnings.go:110] "Warning: cilium.io/v2alpha1 CiliumCIDRGroup is deprecated; use cilium.io/v2 CiliumCIDRGroup"
....
I0821 09:48:23.744067 17824 warnings.go:110] "Warning: cilium.io/v2alpha1 CiliumCIDRGroup is deprecated; use cilium.io/v2 CiliumCIDRGroup"
[=] [cilium-test-1] Test [client-egress-to-cidr-deny-default] [51/123]
....
[=] [cilium-test-1] Skipping test [clustermesh-endpointslice-sync] [52/123] (skipped by condition)
[=] [cilium-test-1] Test [health] [53/123]
.
[=] [cilium-test-1] Skipping test [north-south-loadbalancing] [54/123] (Feature node-without-cilium is disabled)
[=] [cilium-test-1] Skipping test [pod-to-pod-encryption] [55/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-pod-with-l7-policy-encryption] [56/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-pod-encryption-v2] [57/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-pod-with-l7-policy-encryption-v2] [58/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [node-to-node-encryption] [59/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [seq-egress-gateway] [60/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [seq-egress-gateway-multigateway] [61/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [egress-gateway-excluded-cidrs] [62/123] (Feature enable-egress-gateway is disabled)
[=] [cilium-test-1] Skipping test [seq-egress-gateway-with-l7-policy] [63/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-node-cidrpolicy] [64/123] (Feature cidr-match-nodes is disabled)
[=] [cilium-test-1] Skipping test [north-south-loadbalancing-with-l7-policy] [65/123] (Feature node-without-cilium is disabled)
[=] [cilium-test-1] Skipping test [north-south-loadbalancing-with-l7-policy-port-range] [66/123] (Feature node-without-cilium is disabled)
[=] [cilium-test-1] Test [echo-ingress-l7] [67/123]
......
[=] [cilium-test-1] Skipping test [echo-ingress-l7-via-hostport] [68/123] (skipped by condition)
[=] [cilium-test-1] Test [echo-ingress-l7-named-port] [69/123]
......
[=] [cilium-test-1] Test [client-egress-l7-method] [70/123]
......
[=] [cilium-test-1] Test [client-egress-l7-method-port-range] [71/123]
......
[=] [cilium-test-1] Test [client-egress-l7] [72/123]
........
[=] [cilium-test-1] Test [client-egress-l7-port-range] [73/123]
........
[=] [cilium-test-1] Test [client-egress-l7-named-port] [74/123]
........
[=] [cilium-test-1] Test [client-egress-tls-sni] [75/123]
......
[=] [cilium-test-1] Test [client-egress-tls-sni-denied] [76/123]
......
[=] [cilium-test-1] Test [client-egress-tls-sni-wildcard] [77/123]
......
[=] [cilium-test-1] Test [client-egress-tls-sni-wildcard-denied] [78/123]
..
[=] [cilium-test-1] Test [client-egress-tls-sni-double-wildcard] [79/123]
......
[=] [cilium-test-1] Test [client-egress-tls-sni-double-wildcard-denied] [80/123]
..
[=] [cilium-test-1] Test [client-egress-l7-tls-headers-sni] [81/123]
..
[=] [cilium-test-1] Test [client-egress-l7-tls-headers-other-sni] [82/123]
..
[=] [cilium-test-1] Test [client-egress-l7-set-header] [83/123]
..
[=] [cilium-test-1] Test [client-egress-l7-set-header-port-range] [84/123]
..
[=] [cilium-test-1] Skipping test [echo-ingress-auth-always-fail] [85/123] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test-1] Skipping test [echo-ingress-auth-always-fail-port-range] [86/123] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test-1] Skipping test [echo-ingress-mutual-auth-spiffe] [87/123] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test-1] Skipping test [echo-ingress-mutual-auth-spiffe-port-range] [88/123] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service] [89/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-allow-ingress-identity] [90/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-all] [91/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-backend-service] [92/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-ingress-identity] [93/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-source-egress-other-node] [94/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service] [95/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service-deny-all-ingress] [96/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service-deny-cidr] [97/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service-deny-world-identity] [98/123] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [l7-lb] [99/123] (Feature loadbalancer-l7 is disabled)
[=] [cilium-test-1] Test [dns-only] [100/123]
........
[=] [cilium-test-1] Test [to-fqdns] [101/123]
........
[=] [cilium-test-1] Test [to-fqdns-with-proxy] [102/123]
........
[=] [cilium-test-1] Skipping test [pod-to-controlplane-host] [103/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-k8s-on-controlplane] [104/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-controlplane-host-cidr] [105/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-k8s-on-controlplane-cidr] [106/123] (skipped by condition)
[=] [cilium-test-1] Test [policy-local-cluster-egress] [107/123]
..
[=] [cilium-test-1] Skipping test [local-redirect-policy] [108/123] (Feature enable-local-redirect-policy is disabled)
[=] [cilium-test-1] Skipping test [local-redirect-policy-with-node-dns] [109/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-pod-no-frag] [110/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [seq-bgp-control-plane-v1] [111/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [seq-bgp-control-plane-v2] [112/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [multicast] [113/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [strict-mode-encryption] [114/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [strict-mode-encryption-v2] [115/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [host-firewall-ingress] [116/123] (skipped by condition)
[=] [cilium-test-1] Skipping test [host-firewall-egress] [117/123] (skipped by condition)
[=] [cilium-test-1] Test [seq-client-egress-l7-tls-deny-without-headers] [118/123]
..
[=] [cilium-test-1] Test [seq-client-egress-l7-tls-headers] [119/123]
..
[=] [cilium-test-1] Test [seq-client-egress-l7-extra-tls-headers] [120/123]
....
[=] [cilium-test-1] Test [seq-client-egress-l7-tls-headers-port-range] [121/123]
..
[=] [cilium-test-1] Test [no-unexpected-packet-drops] [122/123]
.
[=] [cilium-test-1] Test [check-log-errors] [123/123]
.........
✅ [cilium-test-1] All 74 tests (295 actions) successful, 49 tests skipped, 0 scenarios skipped.
The goal is a positive result with all tests passing. On failures or missed scenarios, logs are printed to give you clues for troubleshooting.
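If you script this validation, the exit code of cilium connectivity test is the reliable signal; purely as an illustration, here is a sketch extracting the counters from the summary line of the run above:

```shell
# Summary line taken from the run above.
summary='✅ [cilium-test-1] All 74 tests (295 actions) successful, 49 tests skipped, 0 scenarios skipped.'

# Pull the number of executed and skipped tests out of the summary.
passed=$(printf '%s\n' "$summary"  | sed -n 's/.*All \([0-9][0-9]*\) tests.*/\1/p')
skipped=$(printf '%s\n' "$summary" | sed -n 's/.*successful, \([0-9][0-9]*\) tests skipped.*/\1/p')
echo "passed=$passed skipped=$skipped"   # passed=74 skipped=49
```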
Optional, but recommended: Hubble#
To go further, I installed Hubble. This tool monitors and deeply inspects your cluster's network flows thanks to eBPF. You can kick off the deployment quickly by running cilium hubble enable (add --ui for the web interface). New pods will spin up, including, with the UI enabled, one serving a web page where you can observe the flows inside your namespaces, between your pods and services, and so on.
Installing k3s as a worker#
If you have several machines, you can dedicate some of them as “workers”. On your master, retrieve the cluster token (node-token) with the following command: cat /var/lib/rancher/k3s/server/node-token. Now install k3s on the worker, specifying the master's IP and the node-token:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='agent' K3S_URL=https://<ip_master_k3s>:6443 K3S_TOKEN=aaaaa::server:3289azer sh -
On your master node, running kubectl get nodes should now show both servers:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s-worker1 Ready <none> 17h v1.27.4+k3s1
k3s-master1 Ready control-plane,master 17h v1.27.4+k3s1
If not, check the logs on the worker side: it is either a token problem or a network connectivity issue (IP, DNS, hosts file, routing, firewall).
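To repeat the join on several workers, a hedged sketch composing the command from variables (the IP and token below are placeholders, and the token file is simulated for illustration; on a real master it lives at /var/lib/rancher/k3s/server/node-token):

```shell
# Simulated token file for the example.
mkdir -p /tmp/k3s-demo
printf 'K10aaaa::server:3289azer\n' > /tmp/k3s-demo/node-token

K3S_MASTER_IP=192.0.2.10                     # placeholder: your master's IP
K3S_TOKEN=$(cat /tmp/k3s-demo/node-token)

# The command to run on each worker (printed here instead of executed):
printf "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='agent' K3S_URL=https://%s:6443 K3S_TOKEN=%s sh -\n" \
  "$K3S_MASTER_IP" "$K3S_TOKEN"
```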
Additional notes#
When experimenting and testing deployments with k3s, you can quickly saturate your disk space. Remember to purge unused images: list them with sudo k3s crictl images and remove them with sudo k3s crictl rmi --prune.
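A small sketch of a disk-pressure check that could precede the purge (the 90% threshold is an arbitrary example):

```shell
# Read root filesystem usage as a bare percentage (GNU coreutils df).
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
threshold=90

if [ "$usage" -ge "$threshold" ]; then
  echo "disk at ${usage}%: time to purge unused container images"
else
  echo "disk at ${usage}%, below the ${threshold}% threshold"
fi
```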
Sources#
- Documentation officielle : https://docs.cilium.io/en/stable/
- Spectrocloud : https://www.spectrocloud.com/blog/getting-started-with-cilium-for-kubernetes-networking-and-observability
- Keywords
- #kubernetes #k3s #cilium
- Author
- Julien HOMMET
- Category
- linux