Overall layout:
We have the following servers: 3 for master nodes, 2 for worker nodes, and 1 for the load balancer:
192.168.1.120 kub-lb-120
192.168.1.121 kub-master-121
192.168.1.122 kub-master-122
192.168.1.123 kub-master-123
192.168.1.124 kub-work-124
192.168.1.125 kub-work-125
NGINX load balancer
We will use NGINX as a TCP load balancer.
Add the repository:
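A minimal sketch, assuming the official nginx.org repository for CentOS 7 is used, followed by the nginx install itself:

cat <<'EOF' > /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF

yum install -y nginx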
Add the following to the very bottom of /etc/nginx/nginx.conf:
stream {
    upstream stream_backend {
        least_conn;
        server 192.168.1.121:6443;
        server 192.168.1.122:6443;
        server 192.168.1.123:6443;
    }

    server {
        listen 6443;
        proxy_pass stream_backend;
    }
}
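The stream {} block must sit at the top level of nginx.conf, outside the http {} block. The build from the official nginx.org repository includes the stream module; with some distro packages you may need a separate nginx-mod-stream package. It is worth validating the config before restarting:

nginx -t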
Enable and restart the nginx server:
systemctl enable nginx
systemctl restart nginx
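Optionally confirm that nginx is now listening on the API port:

ss -tlnp | grep 6443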
Kubernetes
Prepare the Kubernetes servers, based on CentOS 7 (on all nodes except the load balancer).
Add the Docker repository:
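A minimal sketch, assuming the official Docker CE repository for CentOS is used:

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo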
Install Docker:
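A minimal sketch of the install step, assuming the docker-ce packages from the repository above:

yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker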
Create /etc/docker/daemon.json with the following content (restart Docker afterwards so it picks up the settings):

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "experimental": true
}
Disable swap (the kubelet refuses to start with swap enabled):
swapoff -a
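A minimal sketch of the remaining preparation, assuming the upstream Kubernetes yum repository and the v1.17.2 packages that appear later in this post; the fstab edit assumes a swap entry exists there:

# keep swap off after reboot
sed -i '/ swap / s/^/#/' /etc/fstab

# Kubernetes yum repository (assumption: upstream packages.cloud.google.com repo)
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.17.2 kubeadm-1.17.2 kubectl-1.17.2
systemctl enable kubelet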
Master nodes:
Create the kubeadm directory:
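The path matches the config file location used below:

mkdir -p /etc/kubernetes/kubeadm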
In this directory, create the configuration file for initializing the cluster (/etc/kubernetes/kubeadm/kubeadm-config.yaml), with the following content:
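A minimal sketch of what such a config can look like, assuming the v1beta1 API version that the kubeadm warnings below refer to; the podSubnet value here is only an example and should match the pod network CIDR you plan to use:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
controlPlaneEndpoint: "192.168.1.120:6443"
networking:
  podSubnet: "10.244.0.0/16"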
Field descriptions:
kubernetesVersion    - the Kubernetes version
controlPlaneEndpoint - the API address; we specify the IP of our load balancer
podSubnet            - the network for pods
Initialize the cluster, pointing at our config and passing the --upload-certs flag. Since version 1.14 it has been possible to upload the certificates to etcd.
Note:
kubeadm will generate the certificates, the configs for the Kubernetes components, and the join commands for us. Starting with version 1.14, kubeadm has a join flow for masters and an operator for etcd, which makes it possible to add new masters to the cluster quickly. Now we simply run the master join command on the 2nd (3rd, 4th, ...) server, and kubeadm automatically fetches the certificates and rewrites the etcd config.
When running this command, the following error may occur:
[root@kub-master-121 ~]# kubeadm init --config=/etc/kubernetes/kubeadm/kubeadm-config.yaml --upload-certs
W0126 16:48:07.358693 14441 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0126 16:48:07.358928 14441 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0\u00a0podSubnet"
W0126 16:48:08.378766 14441 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0126 16:48:08.378873 14441 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "kub-master-121" could not be reached
[WARNING Hostname]: hostname "kub-master-121": lookup kub-master-121 on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
Let's run what it recommends:
[root@kub-master-121 ~]# kubeadm config migrate --old-config /etc/kubernetes/kubeadm/kubeadm-config.yaml --new-config /etc/kubernetes/kubeadm/kubeadm-config-new.yaml
W0126 16:50:33.453918 14598 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0\u00a0podSubnet"
W0126 16:50:34.532884 14598 validation.go:28] Cannot validate kubelet config - no validator is available
W0126 16:50:34.532916 14598 validation.go:28] Cannot validate kube-proxy config - no validator is available
The new file looks like this (note that podSubnet did not survive the migration: in the old file the field name contained non-breaking spaces, so it was treated as an unknown field and dropped):
cat /etc/kubernetes/kubeadm/kubeadm-config-new.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: fey0rr.e29qe2vp4mk8sr15
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.121
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kub-master-121
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.120:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Run the initialization with the new config:
[root@kub-master-121 ~]# kubeadm init --config=/etc/kubernetes/kubeadm/kubeadm-config-new.yaml --upload-certs
The output looks like this:
W0126 16:52:19.241100   14605 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0126 16:52:19.241304   14605 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "kub-master-121" could not be reached
        [WARNING Hostname]: hostname "kub-master-121": lookup kub-master-121 on 8.8.8.8:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kub-master-121 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.121 192.168.1.120]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kub-master-121 localhost] and IPs [192.168.1.121 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kub-master-121 localhost] and IPs [192.168.1.121 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0126 16:53:29.134465   14605 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0126 16:53:29.135384   14605 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.510449 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
5f362f91181397c2579e2c0d90befa28b641f58c53fd14a8f06dc1fb4faba53c
[mark-control-plane] Marking the node kub-master-121 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kub-master-121 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fey0rr.e29qe2vp4mk8sr15
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.1.120:6443 --token fey0rr.e29qe2vp4mk8sr15 \
    --discovery-token-ca-cert-hash sha256:dadc52e26eba9da1e96cc005e8339e92426cecd6ce948536b06382cc79a1b07c \
    --control-plane --certificate-key 5f362f91181397c2579e2c0d90befa28b641f58c53fd14a8f06dc1fb4faba53c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.120:6443 --token fey0rr.e29qe2vp4mk8sr15 \
    --discovery-token-ca-cert-hash sha256:dadc52e26eba9da1e96cc005e8339e92426cecd6ce948536b06382cc79a1b07c
Create the directory and place the configuration file for connecting to the Kubernetes API there:
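These are the commands printed at the end of kubeadm init above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After that, kubectl can reach the API: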
[root@kub-master-121 ~]# kubectl get nodes
NAME             STATUS     ROLES    AGE     VERSION
kub-master-121   NotReady   master   2m35s   v1.17.2
Next, copy the join command for the masters and run it on the 2nd and 3rd servers:
kubeadm join 192.168.1.120:6443 --token fey0rr.e29qe2vp4mk8sr15 \
--discovery-token-ca-cert-hash sha256:dadc52e26eba9da1e96cc005e8339e92426cecd6ce948536b06382cc79a1b07c \
--control-plane --certificate-key 5f362f91181397c2579e2c0d90befa28b641f58c53fd14a8f06dc1fb4faba53c
After it completes successfully, we will see the new masters:
kubectl get nodes
[root@kub-master-121 ~]# kubectl get nodes
NAME             STATUS     ROLES    AGE     VERSION
kub-master-121   NotReady   master   9m35s   v1.17.2
kub-master-122   NotReady   master   2m57s   v1.17.2
kub-master-123   NotReady   master   2m45s   v1.17.2
The certificates will be deleted after 2 hours, the join token will be deleted after 24 hours, and the certificate-upload token will be deleted after 1 hour.
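If that window has passed, the certificates can be re-uploaded with the command mentioned in the kubeadm init output above; it prints a fresh certificate key to use with --certificate-key:

kubeadm init phase upload-certs --upload-certs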
Let's look at our tokens:
kubeadm token list
[root@kub-master-121 ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
78v1n8.h0ovwq6dhcu98pao   1h    2020-01-26T18:54:05+06:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
fey0rr.e29qe2vp4mk8sr15   23h   2020-01-27T16:54:05+06:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
Worker nodes are added with the command:
kubeadm join 192.168.1.120:6443 --token fey0rr.e29qe2vp4mk8sr15 \
    --discovery-token-ca-cert-hash sha256:dadc52e26eba9da1e96cc005e8339e92426cecd6ce948536b06382cc79a1b07c
Check:
[root@kub-master-121 ~]# kubectl get nodes
NAME             STATUS     ROLES    AGE     VERSION
kub-master-121   NotReady   master   45m     v1.17.2
kub-master-122   NotReady   master   39m     v1.17.2
kub-master-123   NotReady   master   39m     v1.17.2
kub-work-124     NotReady   <none>   2m18s   v1.17.2
kub-work-125     NotReady   <none>   52s     v1.17.2
All the servers have been added, but they are in the NotReady status. This is because there is no pod network yet. In this example we will use Calico, so let's install it into the cluster.
Go to the official site. At the time of writing, the latest version is:
https://docs.projectcalico.org/v3.11/introduction/
Download it and apply it to the cluster:
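A sketch of the usual flow, assuming the standard manifest location for Calico v3.11; if your podSubnet differs from Calico's default 192.168.0.0/16, adjust CALICO_IPV4POOL_CIDR in the manifest before applying:

curl -O https://docs.projectcalico.org/v3.11/manifests/calico.yaml
kubectl apply -f calico.yaml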
Wait a couple of minutes and check:
kubectl get nodes
[root@kub-master-121 ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
kub-master-121   Ready    master   55m   v1.17.2
kub-master-122   Ready    master   48m   v1.17.2
kub-master-123   Ready    master   48m   v1.17.2
kub-work-124     Ready    <none>   11m   v1.17.2
kub-work-125     Ready    <none>   10m   v1.17.2
The basic installation is complete.
================================================
To generate a token for adding workers:
kubeadm token generate
kubeadm token create <generated-token> --print-join-command --ttl=24h
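The second command prints a ready-to-use join line of the same shape as before (placeholders shown here instead of real values):

kubeadm join 192.168.1.120:6443 --token <generated-token> \
    --discovery-token-ca-cert-hash sha256:<ca-cert-hash>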