This role installs a Kubernetes cluster and works as follows:
the number of masters and workers is defined in /etc/ansible/hosts;
you also specify whether the installation goes through a proxy server, and that's it: the role does the rest by itself, no further edits are needed. Below I'll try to describe each playbook in more detail, what it does and why.
Let's get started.
First, create the directory structure:
mkdir -p /etc/ansible/{playbooks/roles_play,roles/kubernetes/{handlers,tasks,templates}}
cat /etc/ansible/roles/kubernetes/handlers/main.yml
[codesyntax lang="php"]
---
- name: Reload systemd
  command: systemctl daemon-reload

- name: Reload docker
  service: name=docker state=reloaded

- name: Restart docker
  service: name=docker state=started enabled=yes

- name: start etcd
  service: name=etcd state=started enabled=yes

- name: start rsyslog
  service: name=rsyslog state=started enabled=yes

- name: start haproxy
  service: name=haproxy state=started enabled=yes

- name: start kubelet
  service: name=kubelet state=started enabled=yes
[/codesyntax]
Now let's look at the templates:
cat /etc/ansible/roles/kubernetes/templates/config.json
this template is needed so that the proxy can be reached from inside containers:
[codesyntax lang="php"]
{
  "proxies": {
    "default": {
      "httpProxy": "{{ http_proxy }}",
      "httpsProxy": "{{ http_proxy }}",
      "noProxy": "{{groups['kubernetes'] | to_yaml(width=1300)| replace('\n', '')}}"
    }
  }
}
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/http-proxy.conf
this config is needed so that Docker can pull images through the proxy:
[codesyntax lang="php"]
[Service]
Environment="HTTP_PROXY={{ http_proxy }}"
Environment="HTTPS_PROXY={{ https_proxy }}"
Environment="NO_PROXY={{groups.kubernetes | to_yaml(width=1300)| replace('\n', '')}}"
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/docker-proxy.sh
this helper script is a workaround: after http-proxy.conf is added, it strips the square brackets [] from NO_PROXY, because Docker does not handle them correctly:
[codesyntax lang="php"]
#!/bin/bash
cat /etc/systemd/system/docker.service.d/http-proxy.conf | grep NO_PROXY | sed 's|\[||' | sed 's|\]||' > /root/docker-no-proxy.txt
cat /etc/systemd/system/docker.service.d/http-proxy.conf | grep -v 'NO_PROXY' > /root/docker-proxy.txt
cat /root/docker-proxy.txt /root/docker-no-proxy.txt > /etc/systemd/system/docker.service.d/http-proxy.conf
systemctl daemon-reload
systemctl restart docker
[/codesyntax]
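To illustrate the problem the script works around: the Jinja2 to_yaml filter renders the host group as a flow list with square brackets, so the NO_PROXY line looks roughly like the first line below before the script runs and like the second one after it (the addresses here are just examples from the inventory):

[codesyntax lang="php"]
# as rendered by the template (illustrative):
Environment="NO_PROXY=[192.168.1.170, 192.168.1.171, 192.168.1.173]"
# after docker-proxy.sh has stripped the brackets:
Environment="NO_PROXY=192.168.1.170, 192.168.1.171, 192.168.1.173"
[/codesyntax]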
cat /etc/ansible/roles/kubernetes/templates/daemon.json
this one is required for Kubernetes to work:
[codesyntax lang="php"]
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "experimental": true
}
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/etcd.conf
this template is used to install etcd; the number of nodes is taken only from /etc/ansible/hosts, from the kubermaster group:
[codesyntax lang="php"]
# [member]
ETCD_NAME={{ansible_hostname}}
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://{{ansible_default_ipv4.address}}:2380"
ETCD_LISTEN_CLIENT_URLS="http://{{ansible_default_ipv4.address}}:2379,http://127.0.0.1:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://{{ansible_default_ipv4.address}}:2380"
ETCD_INITIAL_CLUSTER={% for host in groups['kubermaster'] %}{{ hostvars[host]['ansible_hostname'] }}=http://{{ hostvars[host]['ansible_default_ipv4']['address'] }}:2380,{% endfor %}
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="ab5f20b33aa4"
ETCD_ADVERTISE_CLIENT_URLS="http://{{ansible_default_ipv4.address}}:2379"
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/haproxy.cfg
template for installing HAProxy; the number of nodes is taken only from /etc/ansible/hosts, from the kubermaster group (since requests are proxied to the masters):
[codesyntax lang="php"]
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend front
    bind *:6443
    option tcplog
    mode tcp
    default_backend backend_servers

backend backend_servers
    mode tcp
    balance roundrobin
{% for item in groups['kubermaster'] %}
    server {{ hostvars[item]['ansible_hostname'] }} {{ hostvars[item]['ansible_default_ipv4']['address'] }}:6443 check
{% endfor %}
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/k8s.conf
these sysctl settings are needed for networking:
[codesyntax lang="php"]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/kubernetes.repo
the Kubernetes package repository:
[codesyntax lang="php"]
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/rsyslog.conf
[codesyntax lang="php"]
# rsyslog configuration file

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

#### GLOBAL DIRECTIVES ####

# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on

# File to store the position in the journal
$IMJournalStateFile imjournal.state

#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages

# The authpriv file has restricted access.
authpriv.*                                              /var/log/secure

# Log all the mail messages in one place.
mail.*                                                  -/var/log/maillog

# Log cron stuff
cron.*                                                  /var/log/cron

# Everybody gets emergency messages
*.emerg                                                 :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log
local2.*                                                /var/log/haproxy.log

# ### begin forwarding rule ###
# The statement between the begin … end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
# ### end of the forwarding rule ###
[/codesyntax]
cat /etc/ansible/roles/kubernetes/templates/shell-for-kuber.sh
this helper script parses the kubeadm init output and writes the join tokens for the masters and the workers:
[codesyntax lang="php"]
#!/bin/bash
cat /root/token.txt | grep -Ei 'kubeadm join|--token|--discovery|--control-plane' | grep '^-' | sed 's|^-||g' | tr -d "'" | head -3 | tr -d '\' | tr -s '\r\n' ' ''' > /root/token-master.txt
cat /root/token.txt | grep -Ei 'kubeadm join|--token|--discovery|--control-plane' | grep '^-' | sed 's|^-||g' | tr -d "'" | tail -2 | tr -d '\' | tr -s '\r\n' ' ''' > /root/token-worker.txt
[/codesyntax]
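To make the head -3 / tail -2 logic clearer: kubeadm init prints two join commands roughly in the form below (the token, hash and certificate-key values here are invented for illustration). The first three lines are the control-plane join and the last two are the worker join, which is what the script collapses into /root/token-master.txt and /root/token-worker.txt:

[codesyntax lang="php"]
kubeadm join 192.168.1.170:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234...cdef \
    --control-plane --certificate-key f890...ad6c

kubeadm join 192.168.1.170:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234...cdef
[/codesyntax]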
cat /etc/ansible/roles/kubernetes/templates/kub.yaml
cluster initialization starts from this file; the masters are taken from the inventory file, from the kubermaster group:
[codesyntax lang="php"]
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{first_master_ip}} # the address the API server listens on
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable # the cluster version we are going to install
apiServer: # the list of hosts kubeadm generates certificates for
  certSANs:
  - 127.0.0.1
{% for item in groups['kubermaster'] %}
  - {{hostvars[item]['ansible_default_ipv4']['address'] }}
{% endfor %}
{% for item in groups['kubermaster'] %}
  - {{hostvars[item]['ansible_hostname'] }}
{% endfor %}
controlPlaneEndpoint: {{first_master_ip}} # address of the master or the load balancer
etcd: # etcd cluster endpoints
  external:
    endpoints:
{% for item in groups['kubermaster'] %}
    - http://{{hostvars[item]['ansible_default_ipv4']['address'] }}:2379
{% endfor %}
networking:
  podSubnet: 192.168.0.0/16 # the pod subnet; each CNI has its own
[/codesyntax]
That's it for the templates.
cat /etc/ansible/roles/kubernetes/tasks/main.yml
[codesyntax lang="php"]
---
- import_tasks: proxy-add.yml
  tags: proxy-add
  when: proxy
- import_tasks: add-to-hosts.yml
  tags: /etc/hosts
- import_tasks: DISABLE-selinux-swap.yml
  tags: disable-selinux-swap
- import_tasks: install-repo.yml
  tags: repo
- import_tasks: preinstall.yml
  tags: preinstall
- import_tasks: install-docker.yml
  tags: installdocker
- import_tasks: docker_proxy.yaml
  tags: docker_proxy
  when: proxy
- import_tasks: etcd.yml
  tags: etcd
- import_tasks: haproxy.yml
  tags: haproxy
- import_tasks: proxy-delete-environment.yml
  tags: proxy-delete-environment
  when: proxy
- import_tasks: kubernetes.yml
  tags: kubernetes
- import_tasks: copy-key.yml
  tags: copy
- import_tasks: kubernetes-master-worker.yml
  tags: master-worker
- import_tasks: proxy-delete.yml
  tags: proxy-delete
  when: proxy
[/codesyntax]
Now let's walk through the task list:
cat /etc/ansible/roles/kubernetes/tasks/proxy-add.yml
add the proxy settings if the installation will go through a proxy:
[codesyntax lang="php"]
- name: PROXY ----- changes for install kubernetes through proxy "{{ http_proxy }}"
  lineinfile: dest=/etc/yum.conf state=present regexp="{{ http_proxy }}" insertafter=EOF line="proxy={{ http_proxy }}"
  ignore_errors: yes

- name: PROXY ----- add proxy to /etc/environment
  blockinfile:
    dest: /etc/environment
    block: |
      export http_proxy="{{ http_proxy }}"
      export https_proxy="{{ http_proxy }}"
    state: present
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/add-to-hosts.yml
add all hosts to /etc/hosts on every node:
[codesyntax lang="php"]
- name: Add all hosts and ip to /etc/hosts
  lineinfile:
    dest: /etc/hosts
    regexp: '{{ hostvars[item].ansible_default_ipv4.address }}.*{{ item }}$'
    line: "{{ hostvars[item].ansible_default_ipv4.address }} {{ hostvars[item].ansible_hostname }}"
    state: present
  become: yes
  with_items: "{{ groups.kubernetes }}"

- name: save only uniq in /etc/hosts to /etc/hosts2
  shell: "/usr/bin/cat /etc/hosts | /usr/bin/awk '!a[$0]++' > /etc/hosts2"

- name: save only uniq
  shell: "mv /etc/hosts2 /etc/hosts"
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/DISABLE-selinux-swap.yml
disable SELinux and swap:
[codesyntax lang="php"]
- name: DISABLE SELINUX
  selinux: state=disabled

- name: Remove swapfile from /etc/fstab
  mount:
    name: swap
    fstype: swap
    state: absent

- name: Disable swap
  command: swapoff -a
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/install-repo.yml
add the repositories:
[codesyntax lang="php"]
- name: Install EPEL repo.
  yum:
    name: https://dl.fedoraproject.org/pub/epel/epel-release-latest-{{ ansible_distribution_major_version }}.noarch.rpm
    state: present

- name: Import EPEL GPG key.
  rpm_key:
    key: /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-{{ ansible_distribution_major_version }}
    state: present

- name: Add Docker repo
  get_url:
    url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /etc/yum.repos.d/docer-ce.repo
  become: yes

- name: Check that the /etc/yum.repos.d/kubernetes.repo exists
  stat:
    path: /etc/yum.repos.d/kubernetes.repo
  register: stat_result

- name: Copy the template repo /etc/yum.repos.d/kubernetes.repo, if it doesnt exist already
  template:
    src: kubernetes.repo
    dest: /etc/yum.repos.d/kubernetes.repo
  when: stat_result.stat.exists == False
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/preinstall.yml
preinstall the standard packages: yum-utils, yum-plugin-priorities, device-mapper-persistent-data, lvm2, python2-pip, rsyslog, ntp. We also set up time synchronization, copy the k8s.conf template for networking, and configure rsyslog by adding the config from the template:
[codesyntax lang="php"]
- name: Install default packages
  yum:
    name: "{{item}}"
    state: present
  with_items:
    - yum-utils
    - yum-plugin-priorities
    - device-mapper-persistent-data
    - lvm2
    - python2-pip
    - rsyslog
    - ntp

- name: Make sure NTP is started up
  service: name=ntpd state=started enabled=yes

- name: force ntp update "systemctl stop ntpd"
  shell: "systemctl stop ntpd"

- name: update ntp "ntpd -q"
  shell: "ntpd -q"
  when: not proxy

- name: start ntp "systemctl start ntpd"
  shell: "systemctl start ntpd"

- name: purge docker-compose package
  yum:
    name: docker-compose
    state: removed

- name: install pip
  yum:
    name: python-pip

- name: install the package, force upgrade
  pip:
    name: pip
    executable: pip
    state: latest

- name: Check that the /etc/sysctl.d/k8s.conf exist
  stat:
    path: /etc/sysctl.d/k8s.conf
  register: stat_result

- name: Copy the template /etc/sysctl.d/k8s.conf , if it doesnt exist already
  template:
    src: /etc/ansible/roles/kubernetes/templates/k8s.conf
    dest: /etc/sysctl.d/k8s.conf
  when: stat_result.stat.exists == False

- name: enable forward
  command: sysctl -p

# Install/configure rsyslog
- name: Delete SYSLOGD_OPTIONS from /etc/sysconfig/rsyslog
  lineinfile:
    path: /etc/sysconfig/rsyslog
    state: absent
    regexp: '^SYSLOGD_OPTIONS'

- name: ADD SYSLOGD_OPTIONS from /etc/sysconfig/rsyslog
  lineinfile:
    path: /etc/sysconfig/rsyslog
    state: present
    line: 'SYSLOGD_OPTIONS="-c 2 -r"'

- name: ADD /etc/rsyslog.conf
  template:
    src: /etc/ansible/roles/kubernetes/templates/rsyslog.conf
    dest: /etc/rsyslog.conf
  notify:
    - start rsyslog
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/install-docker.yml
install Docker and, on top of it, docker-compose; daemon.json is put in place from the template:
[codesyntax lang="php"]
- name: Set var for install docker and docker-compose
  set_fact:
    docker_package_state: "latest"
    docker_install_compose: "True"
    docker_compose_version: "1.22.0"
    docker_compose_path: "/usr/local/bin/docker-compose"

- name: Install Docker
  package:
    name: docker-ce
    state: latest
  become: yes
  notify:
    - Restart docker

- name: Make sure DOCKER is started up
  service: name=docker state=started enabled=yes

- name: Copy the template to /etc/docker/daemon.json
  template:
    src: /etc/ansible/roles/kubernetes/templates/daemon.json
    dest: /etc/docker/daemon.json

- name: restarted docker
  service: name=docker state=restarted enabled=yes

- name: Check current docker-compose version.
  command: docker-compose --version
  register: docker_compose_current_version
  changed_when: false
  failed_when: false

- name: Delete existing docker-compose version if it's different.
  file:
    path: "{{ docker_compose_path }}"
    state: absent
  when: >
    docker_compose_current_version.stdout is defined
    and docker_compose_version not in docker_compose_current_version.stdout

- name: Install Docker Compose (if configured).
  get_url:
    url: https://github.com/docker/compose/releases/download/{{ docker_compose_version }}/docker-compose-Linux-x86_64
    dest: "{{ docker_compose_path }}"
    mode: 0755

- name: install docker-compose stuff with pip
  pip:
    name: " {{ item }}"
  with_items:
    - pyyaml
    - docker-py
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/docker_proxy.yaml
if the installation has to work through a proxy, this playbook adds that capability:
[codesyntax lang="php"]
---
- name: create dir for .docker and service.d
  file:
    path: "{{item}}"
    state: directory
    mode: 0755
  with_items:
    - /root/.docker/
    - /etc/systemd/system/docker.service.d/

- name: copy template for proxy to /root/.docker/config.json
  template:
    src: /etc/ansible/roles/kubernetes/templates/config.json
    dest: /root/.docker/config.json
    mode: 644

- name: copy template for proxy to /etc/systemd/system/docker.service.d/http-proxy.conf
  template:
    src: /etc/ansible/roles/kubernetes/templates/http-proxy.conf
    dest: /etc/systemd/system/docker.service.d/http-proxy.conf
    mode: 644
  notify:
    - Reload systemd
    - Reload docker

- name: copy bash script for proxy to /etc/systemd/system/docker.service.d/http-proxy.conf
  template:
    src: /etc/ansible/roles/kubernetes/templates/docker-proxy.sh
    dest: /root/docker-proxy.sh

- name: Run bash script /root/docker-proxy.sh to do change in /etc/systemd/system/docker.service.d/http-proxy.conf
  shell: /bin/bash /root/docker-proxy.sh

- name: Delete bash script /root/docker-proxy.sh
  file:
    path: /root/docker-proxy.sh
    state: absent

- name: Restart docker on master
  service:
    name: docker
    state: restarted
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/etcd.yml
install etcd on the masters (the kubermaster group); a minimum of three is recommended so that etcd has a quorum:
[codesyntax lang="php"]
- name: install etcd
  yum:
    name: etcd
    state: latest
  when: "inventory_hostname in groups['kubermaster']"

- name: Delete /etc/etcd/etcd.conf
  file:
    path: /etc/etcd/etcd.conf
    state: absent
  when: "inventory_hostname in groups['kubermaster']"

- name: Copy the template /etc/etcd/etcd.conf to group kubermaster
  template:
    src: /etc/ansible/roles/kubernetes/templates/etcd.conf
    dest: /etc/etcd/etcd.conf
  when: "inventory_hostname in groups['kubermaster']"
  notify:
    - start etcd

- name: Make sure ETCD is started up
  service: name=etcd state=started enabled=yes
  when: "inventory_hostname in groups['kubermaster']"
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/haproxy.yml
install HAProxy so that it spreads requests from the workers across the masters (by default everything goes to the server the cluster was initialized from); HAProxy is installed only on the kuberworker group:
[codesyntax lang="php"]
- name: install haproxy
  yum:
    name: haproxy
    state: latest
  when: "inventory_hostname in groups['kuberworker']"
  notify:
    - start haproxy

- name: Make sure HAPROXY is started up
  service: name=haproxy state=started enabled=yes
  when: "inventory_hostname in groups['kuberworker']"

- name: Delete /etc/haproxy/haproxy.cfg
  file:
    path: /etc/haproxy/haproxy.cfg
    state: absent
  when: "inventory_hostname in groups['kuberworker']"

- name: Copy the template /etc/haproxy/haproxy.cfg to group kuberworker
  template:
    src: /etc/ansible/roles/kubernetes/templates/haproxy.cfg
    dest: /etc/haproxy/haproxy.cfg
  when: "inventory_hostname in groups['kuberworker']"

- name: Reload haproxy
  shell: "systemctl reload haproxy"
  when: "inventory_hostname in groups['kuberworker']"
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/proxy-delete-environment.yml
this playbook removes the proxy (if the installation goes through one) from /etc/environment, i.e. it unsets the http_proxy/https_proxy environment variables; this is needed so that the Kubernetes cluster initializes properly:
[codesyntax lang="php"]
- name: PROXY ----- remove proxy ONLY from /etc/environment
  blockinfile:
    dest: /etc/environment
    block: |
      export http_proxy="{{ http_proxy }}"
      export https_proxy="{{ http_proxy }}"
    state: absent
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/kubernetes.yml
this playbook initializes the Kubernetes cluster; let's go through each part in detail:
- name: Set var from first inventory
here we store the first IP address from the kubermaster group in the first_master_ip variable; initialization will be run from that host.
- name: install default packages for kubernetes
install all the required packages.
- name: Start and enable kubelet
start it right away and enable it at boot.
- name: Delete config files if exist on {{first_master_ip}}
if /root/kub-new.yaml or /root/kub.yaml already exist on the server we install from, delete them.
- name: Copy template /etc/ansible/roles/kubernetes/templates/kub.yaml to the first master {{first_master_ip}}
copy the initialization file from the template.
- name: Copy bash script /etc/ansible/roles/kubernetes/templates/shell-for-kuber.sh for parse init kuber text to the master {{first_master_ip}}
copy the helper script for parsing the output.
- name: reconfigure /root/kub.yaml to /root/kub-new.yaml on the {{first_master_ip}}
our kub.yaml uses the older config format (but it is easier to fill in the masters, which is why we copy that one), so we then run a migration to the new format.
- name: INITIAL KUBERNETES CLUSTER
initialize the Kubernetes cluster and store the whole output in the result_of_initial variable.
- name: Create file token.txt
create the file that the result_of_initial output will be written to.
- name: Copy facts of initial kubernetes to files /root/token.txt
write the result_of_initial init output to token.txt.
- name: Run bash script /root/shell-for-kuber.sh to parse result of initial kubernetes
parse the output.
- name: Set var token from kubermaster /root/token-master.txt
store the join command for the masters in the token_kubermaster variable.
- name: Set var token from kuberworker /root/token-worker.txt
store the join command for the workers in the token_kuberworker variable.
- name: create home, copy admin.conf, chown owner of /root/.kube/
create the directory structure and copy the required configs.
- name: Delete files with result of initial, tokens and bash script on the {{first_master_ip}}
delete the scripts and temporary files that are no longer needed (clean up after ourselves).
[codesyntax lang="php"]
- name: Set var from first inventory
  set_fact:
    first_master_ip: "{{ groups['kubermaster'][0] }}"

- name: install default packages for kubernetes
  yum:
    name: "{{item}}"
    state: present
  with_items:
    - kubelet
    - kubeadm
    - kubectl
  notify:
    - start kubelet

- name: Start and enable kubelet
  service: name=kubelet state=started enabled=yes

- name: Delete config files if exist on {{first_master_ip}}
  file:
    path: "{{item}}"
    state: absent
  with_items:
    - /root/kub-new.yaml
    - /root/kub.yaml
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Copy template /etc/ansible/roles/kubernetes/templates/kub.yaml to the first master {{first_master_ip}}
  template:
    src: /etc/ansible/roles/kubernetes/templates/kub.yaml
    dest: /root/kub.yaml
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Copy bash script /etc/ansible/roles/kubernetes/templates/shell-for-kuber.sh for parse init kuber text to the master {{first_master_ip}}
  template:
    src: /etc/ansible/roles/kubernetes/templates/shell-for-kuber.sh
    dest: /root/shell-for-kuber.sh
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: reconfigure /root/kub.yaml to /root/kub-new.yaml on the {{first_master_ip}}
  shell: kubeadm config migrate --old-config /root/kub.yaml --new-config /root/kub-new.yaml
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: INITIAL KUBERNETES CLUSTER
  shell: kubeadm init --config=/root/kub-new.yaml --upload-certs
  register: result_of_initial
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Create file token.txt
  file:
    path: "{{item}}"
    state: touch
    mode: 0644
  with_items:
    - /root/token.txt
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Copy facts of initial kubernetes to files /root/token.txt
  copy:
    content: "{{ result_of_initial | to_nice_yaml }}"
    dest: "/root/token.txt"
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Run bash script /root/shell-for-kuber.sh to parse result of initial kubernetes
  shell: /bin/bash /root/shell-for-kuber.sh
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Set var token from kubermaster /root/token-master.txt
  shell: cat /root/token-master.txt
  register: token_kubermaster
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Set var token from kuberworker /root/token-worker.txt
  shell: cat /root/token-worker.txt
  register: token_kuberworker
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: create home, copy admin.conf, chown owner of /root/.kube/
  become: yes
  become_user: root
  shell: "{{item}}"
  with_items:
    - "mkdir -p /root/.kube"
    - "cp -i /etc/kubernetes/admin.conf /root/.kube/config"
    - "chown root:root /root/.kube/config"
    - "chown root:root /etc/kubernetes/pki"
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Delete files with result of initial, tokens and bash script on the {{first_master_ip}}
  file:
    path: "{{item}}"
    state: absent
  with_items:
    - /root/token.txt
    - /root/shell-for-kuber.sh
    - /root/token-master.txt
    - /root/token-worker.txt
    - /root/kub.yaml
  delegate_to: "{{first_master_ip}}"
  run_once: true

###################
#- name: host
#  debug:
#    msg:
#    - "{{ token_kubermaster.stdout_lines }}"
#    - "{{ token_kuberworker.stdout_lines }}"
#  delegate_to: "{{first_master_ip}}"
#  run_once: true
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/copy-key.yml
here the keys generated during cluster initialization are copied, first to the Ansible control host and then to the remaining masters:
[codesyntax lang="php"]
- name: Delete directory /etc/ansible/roles/kubernetes/templates/pki on localhost ansible
  local_action: file path=/etc/ansible/roles/kubernetes/templates/pki state=absent
  run_once: true

- name: Create directory on /etc/ansible/roles/kubernetes/templates/pki on localhost ansible
  local_action: file path=/etc/ansible/roles/kubernetes/templates/pki state=directory mode=0755
  run_once: true

- name: Create directory /etc/kubernetes/pki
  file:
    path: /etc/kubernetes/pki
    state: directory
    mode: '0755'
  when: "inventory_hostname in groups['kubermaster']"

- name: CHECK all files on the "{{first_master_ip}}" in directory /etc/kubernetes/pki/
  shell: ls /etc/kubernetes/pki/
  register: list_of_key
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: COPY all files from the "{{first_master_ip}}" in directory /etc/kubernetes/pki/ to localhost ansible
  fetch:
    src: /etc/kubernetes/pki/{{item}}
    dest: /etc/ansible/roles/kubernetes/templates/pki/
  with_items: "{{list_of_key.stdout_lines}}"
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: Copy all files from /etc/ansible/roles/kubernetes/templates/pki/{{first_master_ip}}/etc/kubernetes/pki/ to /etc/kubernetes/pki
  template:
    src: /etc/ansible/roles/kubernetes/templates/pki/{{first_master_ip}}/etc/kubernetes/pki/{{item}}
    dest: /etc/kubernetes/pki/{{item}}
  with_items: "{{list_of_key.stdout_lines}}"
  when: "inventory_hostname in groups['kubermaster']"

- name: Delete directory /etc/ansible/roles/kubernetes/templates/pki on localhost ansible
  local_action: file path=/etc/ansible/roles/kubernetes/templates/pki state=absent
  run_once: true
[/codesyntax]
cat /etc/ansible/roles/kubernetes/tasks/kubernetes-master-worker.yml
here the masters and workers are joined; the Calico network manifest is also downloaded and deployed, but since this happens right after everything has been joined, the cluster apparently isn't ready yet and the deploy fails, so afterwards you have to log in and run it by hand (one possible workaround is sketched after the listing below):
kubectl apply -f /root/calico.yaml
[codesyntax lang="php"]
- name: ADD kubernetes master
  become: yes
  become_user: root
  command: "{{ token_kubermaster.stdout}}"
  when: "inventory_hostname in groups['kubermaster']"

- name: create home, copy admin.conf, chown owner of /root/.kube/
  become: yes
  become_user: root
  shell: "{{item}}"
  with_items:
    - "mkdir -p /root/.kube"
    - "cp -i /etc/kubernetes/admin.conf /root/.kube/config"
    - "chown root:root /root/.kube/config"
    - "chown root:root /etc/kubernetes/pki"
  when: "inventory_hostname in groups['kubermaster']"

- name: ADD kubernetes worker
  become: yes
  become_user: root
  command: "{{ token_kuberworker.stdout}}"
  when: "inventory_hostname in groups['kuberworker']"

- name: change kubernetes API address
  replace:
    path: "{{item}}"
    regexp: "^(.*)server:(.*)$"
    replace: "    server: https://127.0.0.1:6443"
  with_items:
    - /root/.kube/config
    - /etc/kubernetes/kubelet.conf
    - /etc/kubernetes/admin.conf
    - /etc/kubernetes/scheduler.conf
    - /etc/kubernetes/controller-manager.conf
  when: "inventory_hostname in groups['kubermaster']"

- name: Restart kubelet and docker on master
  service:
    name: "{{item}}"
    state: restarted
  with_items:
    - kubelet
    - docker
  when: "inventory_hostname in groups['kubermaster']"

- name: Delete files with result of initial, tokens and bash script on the {{first_master_ip}}
  file:
    path: "{{item}}"
    state: absent
  with_items:
    - /root/token.txt
    - /root/shell-for-kuber.sh
    - /root/token-master.txt
    - /root/token-worker.txt
    - /root/kub.yaml
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: PROXY ----- add proxy to /etc/environment for download calico
  blockinfile:
    dest: /etc/environment
    block: |
      export http_proxy="{{ http_proxy }}"
      export https_proxy="{{ http_proxy }}"
    state: present
  when: proxy

- name: Download calico network
  get_url:
    url: https://docs.projectcalico.org/manifests/calico.yaml
    dest: /root/calico.yaml
    mode: 0440
  delegate_to: "{{first_master_ip}}"
  run_once: true

- name: deploy calico network /root/calico.yaml
  shell: kubectl apply -f /root/calico.yaml
  delegate_to: "{{first_master_ip}}"
  run_once: true

#- name: TOKENS for add master and worker
#  debug:
#    msg:
#    - "{{ token_kubermaster.stdout_lines }}"
#    - "{{ token_kuberworker.stdout_lines }}"
#  delegate_to: "{{first_master_ip}}"
#  run_once: true
[/codesyntax]
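One way to avoid that manual step would be to keep retrying the apply until the API server answers, for example with until/retries on the deploy task. This is only an untested sketch, not part of the original role:

[codesyntax lang="php"]
- name: deploy calico network /root/calico.yaml (retry until the cluster answers)
  shell: kubectl apply -f /root/calico.yaml
  register: calico_result
  until: calico_result.rc == 0   # keep retrying while kubectl still fails
  retries: 10
  delay: 30
  delegate_to: "{{first_master_ip}}"
  run_once: true
[/codesyntax]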
cat /etc/ansible/roles/kubernetes/tasks/proxy-delete.yml
remove the proxy settings after the installation:
[codesyntax lang="php"]
- name: PROXY ----- delete from yum.conf proxy "{{ http_proxy }}"
  lineinfile: dest=/etc/yum.conf state=absent regexp="{{ http_proxy }}" insertafter=EOF line="proxy={{ http_proxy }}"
  ignore_errors: yes

- name: PROXY ----- delete proxy from /etc/environment
  blockinfile:
    dest: /etc/environment
    block: |
      export http_proxy="{{ http_proxy }}"
      export https_proxy="{{ http_proxy }}"
    state: absent
[/codesyntax]
cat /etc/ansible/hosts
add your hosts here:
[codesyntax lang="php"]
[kubernetes:children]
kubermaster
kuberworker

[kubermaster]
192.168.1.170
192.168.1.171
192.168.1.172

[kuberworker]
192.168.1.173
192.168.1.174
[/codesyntax]
cat /etc/ansible/playbooks/roles_play/kubernetes.yml
the role is launched from this playbook:
[codesyntax lang="php"]
---
- hosts: kubernetes
  become: true
  ignore_errors: yes
  become_method: sudo
  gather_facts: yes
  vars:
    - proxy: true # here use true/false
    - http_proxy: "http://192.168.1.179:3128"
    - https_proxy: "http://192.168.1.179:3128"
  roles:
    - kubernetes
#  tasks:
#  - include_role:
#      name: name1
#      name: name
#
# Nothing needs to be changed here except whether the installation goes through a proxy (preferably not).
# In /etc/ansible/hosts add the kubernetes group with the kubermaster subgroup (master IP addresses)
# and the kuberworker subgroup (worker IP addresses).
#[kubernetes:children]
#kubermaster
#kuberworker
#[kubermaster]
#[kuberworker]
# etcd will be installed on the masters (so it is better to have three of them); the whole installation
# and all commands are run on a master, namely the first server in the kubermaster group.
[/codesyntax]
Run it like this:
ansible-playbook -u ansible /etc/ansible/playbooks/roles_play/kubernetes.yml
The ansible user must, of course, exist on all machines and have full sudo rights; a minimal example of setting that up follows.
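A rough sketch of how such a user could be prepared on each node (the user name matches the command above, the sudoers file name is arbitrary):

[codesyntax lang="php"]
# run as root on every node
useradd -m ansible
# passwordless sudo for the ansible user
echo 'ansible ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ansible
chmod 0440 /etc/sudoers.d/ansible
# then copy the control host's SSH key, e.g.:
# ssh-copy-id ansible@192.168.1.170
[/codesyntax]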
If you want to test that the installation really works through the proxy server, drop this script onto every machine:
an iptables script that blocks direct access to the outside world.
[codesyntax lang="php"]
cat iptables.sh

#!/bin/bash
### IPTables configuration script ###

# Flush previous rules
iptables -F

# Default policies
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT DROP

# Allow the loopback interface
iptables -A INPUT -i lo -j ACCEPT

# REL, ESTB allow
iptables -A INPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p udp -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow the working ports
# port 22 for everyone
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Ansible
iptables -A INPUT -p tcp -s 192.168.1.177 --dport 22 -j ACCEPT
# the proxy
iptables -A OUTPUT -p tcp -d 192.168.1.179 --dport 3128 -j ACCEPT
# our DNS is Google's
iptables -A OUTPUT -d 8.8.8.8 -j ACCEPT
# kuber servers
iptables -A INPUT -p tcp -s 192.168.1.170,192.168.1.171,192.168.1.172,192.168.1.173,192.168.1.174,192.168.1.175,192.168.1.5 -j ACCEPT
iptables -A OUTPUT -d 192.168.1.170,192.168.1.171,192.168.1.172,192.168.1.173,192.168.1.174,192.168.1.175,192.168.1.5 -j ACCEPT

# Show the rules
iptables -L --line-number
echo
service iptables save
echo
service iptables reload
echo "Done"
[/codesyntax]