cd certs
We repeat the same steps for Gangway.
Launching Dex and Gangway
svc-k8s-ldap-auth
and its password:
zFW4!PxUqd-5JG
and the group:
k8s-access (users who need access will be added to this AD group)
The baseDN values are also taken from AD.
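Before wiring these values into Dex, it can be worth checking the bind account and the search filter with a plain LDAP query. A sketch, assuming `ldapsearch` (from openldap-clients) is available; the DNs, host, and password are the example values from this post:

```shell
# Example bind DN, search base, and filter from this post's Dex config
BIND_DN='CN=svc-k8s-ldap-auth,OU=Service,OU=Accounts,OU=Cellular,OU=Businesses,DC=test,DC=local'
BASE_DN='OU=Bishkek,OU=North,OU=Users,OU=Accounts,OU=Cellular,OU=Businesses,DC=test,DC=local'
FILTER='(objectClass=user)'
# Uncomment to query the real AD server:
# ldapsearch -x -H ldap://test.local:389 -D "$BIND_DN" -w 'zFW4!PxUqd-5JG' \
#   -b "$BASE_DN" "$FILTER" sAMAccountName userPrincipalName displayName
echo "$FILTER"
```

If the query returns the expected user entries, the same DNs and filter should work in the Dex connector config below.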
kind: ConfigMap
apiVersion: v1
metadata:
  name: dex
  namespace: kube-system
data:
  config.yaml: |
    issuer: https://dex2.prod.test.local/
    web:
      http: 0.0.0.0:5556
    staticClients:
    - id: oidc-auth-client
      redirectURIs:
      - 'https://gangway2.prod.test.local/callback'
      name: 'oidc-auth-client'
      secret: "super_strong_password" # shared secret from prerequisites
    connectors:
    - type: ldap
      id: ldap
      name: LDAP
      config:
        host: test.local:389 # address of AD server
        insecureNoSSL: true
        insecureSkipVerify: true
        bindDN: CN=svc-k8s-ldap-auth,OU=Service,OU=Accounts,OU=Cellular,OU=Businesses,DC=test,DC=local
        bindPW: zFW4!PxUqd-5JG # password of user with access to search AD
        userSearch:
          baseDN: OU=Bishkek,OU=North,OU=Users,OU=Accounts,OU=Cellular,OU=Businesses,DC=test,DC=local
          #filter: "(objectClass=person)"
          filter: "(objectClass=user)"
          username: sAMAccountName
          idAttr: sAMAccountName
          emailAttr: userPrincipalName
          nameAttr: displayName
        groupSearch:
          baseDN: OU=Groups,OU=Cellular,OU=Businesses,DC=test,DC=local
          filter: "(objectClass=group)"
          #userAttr: distinguishedName
          userAttr: DN
          groupAttr: member
          #nameAttr: cn
          nameAttr: name
    oauth2:
      skipApprovalScreen: true
    storage:
      type: kubernetes
      config:
        inCluster: true
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat dex_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: dex
    spec:
      containers:
      - command:
        - /usr/local/bin/dex
        - serve
        - /etc/dex/cfg/config.yaml
        image: quay.io/dexidp/dex:v2.16.0
        imagePullPolicy: IfNotPresent
        name: dex
        ports:
        - containerPort: 5556
          name: http
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /etc/dex/cfg
          name: config
      dnsPolicy: ClusterFirst
      serviceAccountName: dex
      restartPolicy: Always
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: config.yaml
            path: config.yaml
          name: dex
        name: config
SINCE my cluster name (test.local) and the AD domain controller name (test.local) are the same (I made a mistake when installing the cluster), we use the following variant instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: dex
    spec:
      containers:
      - command:
        - /usr/local/bin/dex
        - serve
        - /etc/dex/cfg/config.yaml
        image: quay.io/dexidp/dex:v2.16.0
        imagePullPolicy: IfNotPresent
        name: dex
        ports:
        - containerPort: 5556
          name: http
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /etc/dex/cfg
          name: config
      # dnsPolicy: ClusterFirst
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 10.230.144.12
        - 10.230.144.14
        searches:
        - test.local
        options:
        - name: ndots
          value: "2"
        - name: edns0
      serviceAccountName: dex
      restartPolicy: Always
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: config.yaml
            path: config.yaml
          name: dex
        name: config
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat dex_ingress_service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: dex
  namespace: kube-system
spec:
  selector:
    app: dex
  ports:
  - name: dex
    port: 5556
    targetPort: 5556
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dex
  namespace: kube-system
spec:
  tls:
  - hosts:
    - dex2.prod.test.local
    secretName: dex-tls
  rules:
  - host: dex2.prod.test.local # your DNS name for Dex
    http:
      paths:
      - backend:
          serviceName: dex
          servicePort: 5556
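Note that the `extensions/v1beta1` Ingress API was removed in Kubernetes 1.22. On newer clusters the same Ingress would look roughly like this (a sketch; only the apiVersion and the backend/path syntax change):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dex
  namespace: kube-system
spec:
  tls:
  - hosts:
    - dex2.prod.test.local
    secretName: dex-tls
  rules:
  - host: dex2.prod.test.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dex
            port:
              number: 5556
```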
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat dex_rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: dex
  namespace: kube-system
rules:
- apiGroups: ["dex.coreos.com"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dex
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dex
subjects:
- kind: ServiceAccount
  name: dex
  namespace: kube-system
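Similarly, `rbac.authorization.k8s.io/v1beta1` was removed in Kubernetes 1.22. On newer clusters only the apiVersion lines in the manifest above need to change; the rest stays as is:

```yaml
# Same ClusterRole/ClusterRoleBinding manifests, newer API version:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
# ...rules unchanged
```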
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat gangway_configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gangway
  namespace: kube-system
data:
  gangway.yaml: |
    clusterName: "TestProdCluster"
    authorizeURL: "https://dex2.prod.test.local/auth" # replace the domain name with your domain
    tokenURL: "https://dex2.prod.test.local/token" # replace the domain name with your domain
    scopes: ["openid", "profile", "email", "offline_access", "groups"]
    clientID: "oidc-auth-client"
    clientSecret: "super_strong_password" # secret key from prerequisites again; this should match the Dex key
    trustedCAPath: "/opt/ca.crt"
    redirectURL: "https://gangway2.prod.test.local/callback"
    usernameClaim: "email"
    emailClaim: "email"
    apiServerURL: https://10.242.146.30:6443 # this should be your k8s API URL
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat gangway_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gangway
  namespace: kube-system
  labels:
    app: gangway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gangway
  strategy:
  template:
    metadata:
      labels:
        app: gangway
    spec:
      containers:
      - name: gangway
        image: gcr.io/heptio-images/gangway:v3.1.0
        imagePullPolicy: Always
        command: ["gangway", "-config", "/gangway/gangway.yaml"]
        env:
        - name: GANGWAY_SESSION_SECURITY_KEY
          valueFrom:
            secretKeyRef:
              name: gangway-key
              key: sesssionkey
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
          limits:
            cpu: "100m"
            memory: "100Mi"
        volumeMounts:
        - name: gangway
          mountPath: /gangway/
        - name: ca-crt
          mountPath: /opt
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 1
          periodSeconds: 60
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          timeoutSeconds: 1
          periodSeconds: 10
          failureThreshold: 3
      volumes:
      - name: gangway
        configMap:
          name: gangway
      - name: ca-crt
        secret:
          secretName: ca
SINCE my cluster name (test.local) and the AD domain controller name (test.local) are the same (I made a mistake when installing the cluster), we use the following variant instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gangway
  namespace: kube-system
  labels:
    app: gangway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gangway
  strategy:
  template:
    metadata:
      labels:
        app: gangway
    spec:
      containers:
      - name: gangway
        image: gcr.io/heptio-images/gangway:v3.1.0
        imagePullPolicy: Always
        command: ["gangway", "-config", "/gangway/gangway.yaml"]
        env:
        - name: GANGWAY_SESSION_SECURITY_KEY
          valueFrom:
            secretKeyRef:
              name: gangway-key
              key: sesssionkey
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
          limits:
            cpu: "100m"
            memory: "100Mi"
        volumeMounts:
        - name: gangway
          mountPath: /gangway/
        - name: ca-crt
          mountPath: /opt
        livenessProbe:
          httpGet:
            path: /
            port: 8080
          initialDelaySeconds: 20
          timeoutSeconds: 1
          periodSeconds: 60
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /
            port: 8080
          timeoutSeconds: 1
          periodSeconds: 10
          failureThreshold: 3
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 10.230.144.12
        - 10.230.144.14
        searches:
        - test.local
        options:
        - name: ndots
          value: "2"
        - name: edns0
      volumes:
      - name: gangway
        configMap:
          name: gangway
      - name: ca-crt
        secret:
          secretName: ca
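The Gangway Deployment expects a Secret named `gangway-key` with a key spelled `sesssionkey` (three "s", exactly as referenced in the manifest). A sketch of creating it, assuming `openssl` is available:

```shell
# Generate a random 32-byte session key (base64-encoded, 44 characters)
SESSION_KEY=$(openssl rand -base64 32)
echo "key length: ${#SESSION_KEY}"
# Uncomment to create the Secret in the cluster:
# kubectl -n kube-system create secret generic gangway-key \
#   --from-literal=sesssionkey="$SESSION_KEY"
```

The `ca` Secret mounted at `/opt` (for `trustedCAPath: /opt/ca.crt`) is created the same way, from the CA certificate prepared in the certs step.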
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat gangway_ingress_service.yaml
---
kind: Service
apiVersion: v1
metadata:
  name: gangway-svc
  namespace: kube-system
  labels:
    app: gangway
spec:
  type: ClusterIP
  ports:
  - name: "http"
    protocol: TCP
    port: 80
    targetPort: "http"
  selector:
    app: gangway
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gangway
  namespace: kube-system
spec:
  tls:
  - secretName: gangway-tls
    hosts:
    - gangway2.prod.test.local # DNS name previously configured for Gangway
  rules:
  - host: gangway2.prod.test.local # DNS name previously configured for Gangway
    http:
      paths:
      - backend:
          serviceName: gangway-svc
          servicePort: http
We add read-only (view) permissions in the terminal-soft namespace for the users that are members of the AD group k8s-access:
[root@prod-vsrv-kubemaster1 k8s-ad-auth]# cat rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-user-my-test
  namespace: terminal-soft
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: Group
  name: k8s-access # name of the group in AD
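The binding can be checked without logging in as an AD user, via impersonation. A sketch (run with cluster-admin rights; `user@test.local` is a hypothetical member of the group):

```shell
# Build the impersonated permission check for the AD group
NS=terminal-soft
GROUP=k8s-access
CHECK="kubectl auth can-i list pods -n $NS --as=user@test.local --as-group=$GROUP"
echo "$CHECK"
# The cluster should answer "yes" for terminal-soft and "no" for other namespaces.
```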
If the group needs admin rights, use an RBAC manifest like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: k8s-access
Next, enable OIDC on the API server. For a kubeadm-based cluster, add the flags to the kubeadm config:
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
    # everything from here down is new:
    oidc-ca-file: /etc/ssl/certs/ca.crt
    oidc-client-id: oidc-auth-client
    oidc-groups-claim: groups
    oidc-issuer-url: https://dex2.prod.test.local/
    oidc-username-claim: email
Alternatively, add the same flags directly to /etc/kubernetes/manifest/kube-apiserver.manifest:
- --oidc-ca-file=/etc/ssl/certs/ca.crt
- --oidc-client-id=oidc-auth-client
- --oidc-groups-claim=groups
- --oidc-issuer-url=https://dex2.prod.test.local/
- --oidc-username-claim=email
kubectl config set-cluster TestProdCluster --server=https://10.242.146.30:6443 --certificate-authority=ca-TestProdCluster.pem --embed-certs
In my case I have to use:
kubectl config set-cluster TestProdCluster --server=https://10.242.146.30:6443 --insecure-skip-tls-verify=true
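For completeness, a sketch of the per-user kubeconfig commands; after a successful login Gangway's web page prints the exact equivalents of these, with the real id-token and refresh-token values filled in (the user name here is hypothetical):

```shell
# Names matching this post's setup; tokens come from the Gangway login page
CLUSTER=TestProdCluster
OIDC_USER=user@test.local
# kubectl config set-credentials "$OIDC_USER" \
#   --auth-provider=oidc \
#   --auth-provider-arg=idp-issuer-url=https://dex2.prod.test.local/ \
#   --auth-provider-arg=client-id=oidc-auth-client \
#   --auth-provider-arg=client-secret=super_strong_password \
#   --auth-provider-arg=id-token=ID_TOKEN_FROM_GANGWAY \
#   --auth-provider-arg=refresh-token=REFRESH_TOKEN_FROM_GANGWAY
# kubectl config set-context "$CLUSTER" --cluster="$CLUSTER" --user="$OIDC_USER"
# kubectl config use-context "$CLUSTER"
echo "context: $CLUSTER"
```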
That's it: from any machine that has access to the cluster, we can follow this procedure and check the permissions:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "user@test.local" cannot list resource "pods" in API group "" in the namespace "default" |
We get an error, because this user has no permissions in the default namespace.
Now run the command against the namespace we granted access to:
kubectl get pods -n terminal-soft
NAME                                        READY   STATUS    RESTARTS   AGE
deployment-terminal-soft-5bd7f8b6f4-45xh4   1/1     Running   0          10d
deployment-terminal-soft-5bd7f8b6f4-7wgrr   1/1     Running   0          10d
redis-terminal-soft-master-0                2/2     Running   0          154d
redis-terminal-soft-slave-0                 2/2     Running   0          154d
redis-terminal-soft-slave-1                 2/2     Running   0          154d
redis-terminal-soft-slave-2                 2/2     Running   0          154d
Here everything is OK.