Stolon is a PostgreSQL cluster manager with automatic failover, similar to Patroni. We'll run it in Kubernetes.
ANY DATABASE IN A CONTAINER IS EVIL.
We'll install it with Helm.
[root@minikub ~]# mkdir stolon
[root@minikub ~]# cd stolon/
First, fetch the chart archive:
helm fetch stable/stolon
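If the stable repo isn't registered yet (and on newer Helm versions the old stable repo URL no longer resolves), it may need to be added first; a sketch:

helm repo add stable https://charts.helm.sh/stable   # archived location of the stable charts
helm repo update
# on Helm 3, "helm pull stable/stolon" is the equivalent of "helm fetch"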
Unpack it:
[root@minikub stolon]# tar xvfz stolon-1.6.1.tgz
Create the postgres namespace; we'll run our stolon cluster in it:
[root@minikub stolon]# kubectl create namespace postgres
Create the secrets. For the superuser:
kubectl create secret generic pg-su --namespace postgres --from-literal=username='MY_USER' --from-literal=password='MY_PASSWORD'
And for replication:
kubectl create secret generic pg-repl --namespace postgres --from-literal=username='repl_username' --from-literal=password='repl_password'
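To make sure the secrets landed correctly, you can decode them back; a quick sanity check (not part of the original flow):

kubectl -n postgres get secret pg-su -o jsonpath='{.data.username}' | base64 --decode; echo
kubectl -n postgres get secret pg-su -o jsonpath='{.data.password}' | base64 --decode; echo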
Now edit the values file:
/root/stolon/stolon/values.yaml
1. For the superuser:
superuserSecret:
  name: pg-su
  usernameKey: username
  passwordKey: password
2. For replication:
replicationSecret:
  name: pg-repl
  usernameKey: username
  passwordKey: password
3. Remove (or comment out) what's no longer needed:
superuserUsername: "stolon"
## password for the superuser (REQUIRED if superuserSecret is not set)
superuserPassword:
replicationUsername: "repluser"
## password for the replication user (REQUIRED if replicationSecret is not set)
replicationPassword:
4. Change this:
clusterSpec: {}
  # sleepInterval: 1s
  # maxStandbys: 5
to this:
clusterSpec:
  synchronousReplication: true
  minSynchronousStandbys: 1 # quorum-like replication
  maxSynchronousStandbys: 1 # quorum-like replication
  initMode: new
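With minSynchronousStandbys and maxSynchronousStandbys both set to 1, the master always keeps exactly one synchronous standby, so a commit acknowledged to the client survives the loss of the master. Once the cluster is running, the applied spec can be read back with stolonctl from any keeper pod; a sketch, assuming the stolon cluster name matches the Helm release name stolon-pg used below:

kubectl -n postgres exec -it stolon-pg-keeper-0 -- \
    stolonctl --cluster-name stolon-pg \
              --store-backend kubernetes \
              --kube-resource-kind configmap \
              spec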
5. Set the PodDisruptionBudget so that at least two keepers stay available:
podDisruptionBudget:
  minAvailable: 2
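This budget only limits voluntary disruptions (node drains, evictions): Kubernetes will refuse to take a keeper down if fewer than two would remain running. After the install you can inspect it (an assumed check, not from the original notes):

kubectl -n postgres get pdb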
6. Configure pgParameters and set the replica counts to 3 (3 instances in total: 1 master, 1 synchronous standby, and 1 asynchronous standby):
pgParameters:
  max_connections: 100
# …
keeper:
  # …
  replicaCount: 3
# …
proxy:
  # …
  replicaCount: 3
# …
sentinel:
  # …
  replicaCount: 3
7. You could also set up the service-discovery annotations for Prometheus, but I didn't bother:
annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
8. Everything else can stay as is, but my data lives on NFS, so let's also fix up the storage class:
persistence:
  enabled: true
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteOnce
  size: 10Gi
managed-nfs-storage is taken from here:
[root@minikub stolon]# kubectl get storageclasses.storage.k8s.io
NAME                  PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   test.ru/nfs                Retain          Immediate           false                  39h
standard (default)    k8s.io/minikube-hostpath   Delete          Immediate           false                  185d
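For reference, the StorageClass behind that name looks roughly like this (reconstructed from the listing above; it assumes an external NFS client provisioner deployed separately and registered as test.ru/nfs):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: test.ru/nfs   # the external NFS client provisioner
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: false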
That's it; values.yaml now looks like this:
[root@minikub stolon]# cat /root/stolon/stolon/values.yaml
# clusterName:

image:
  repository: sorintlab/stolon
  tag: v0.16.0-pg10
  pullPolicy: IfNotPresent

## Add secrets manually via kubectl on kubernetes cluster and reference here
# pullSecrets:
# - name: "myKubernetesSecret"

# used by create-cluster-job when store.backend is etcd
etcdImage:
  repository: k8s.gcr.io/etcd-amd64
  tag: 2.3.7
  pullPolicy: IfNotPresent

debug: false

# Enable the creation of a shm volume
shmVolume:
  enabled: false

persistence:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadWriteOnce
  size: 10Gi

rbac:
  create: true

serviceAccount:
  create: true
  # The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template
  name:

superuserSecret:
  name: "pg-su"
  usernameKey: username
  passwordKey: password

replicationSecret:
  name: "pg-repl"
  usernameKey: username
  passwordKey: password

superuserPasswordFile:
#superuserUsername: "stolon"
## password for the superuser (REQUIRED if superuserSecret and superuserPasswordFile are not set)
#superuserPassword:
#replicationPasswordFile:
#replicationUsername: "repluser"
## password for the replication user (REQUIRED if replicationSecret and replicationPasswordFile are not set)
#replicationPassword:

## backend could be one of the following: consul, etcdv2, etcdv3 or kubernetes
store:
  backend: kubernetes
  # endpoints: "http://stolon-consul:8500"
  kubeResourceKind: configmap

pgParameters:
  max_connections: "100"

ports:
  stolon:
    containerPort: 5432
  metrics:
    containerPort: 8080

serviceMonitor:
  # When set to true then use a ServiceMonitor to collect metrics
  enabled: false
  # Custom labels to use in the ServiceMonitor to be matched with a specific Prometheus
  labels: {}
  # Set the namespace the ServiceMonitor should be deployed to
  # namespace: default
  # Set how frequently Prometheus should scrape
  # interval: 30s
  # Set timeout for scrape
  # scrapeTimeout: 10s

job:
  autoCreateCluster: true
  autoUpdateClusterSpec: true
  annotations: {}

clusterSpec:
  # sleepInterval: 1s
  # maxStandbys: 5
  synchronousReplication: true
  minSynchronousStandbys: 1 # quorum-like replication
  maxSynchronousStandbys: 1 # quorum-like replication
  initMode: new

## Enable support ssl into postgres, you must specify the certs.
## ref: https://www.postgresql.org/docs/10/ssl-tcp.html
##
tls:
  enabled: false
  rootCa: |-
  serverCrt: |-
  serverKey: |-
  # existingSecret: name-of-existing-secret-to-postgresql

keeper:
  uid_prefix: "keeper"
  replicaCount: 3
  annotations: {}
  resources: {}
  priorityClassName: ""
  fsGroup: ""
  service:
    type: ClusterIP
    annotations: {}
    ports:
      keeper:
        port: 5432
        targetPort: 5432
        protocol: TCP
  nodeSelector: {}
  affinity: {}
  tolerations: []
  volumes: []
  volumeMounts: []
  hooks:
    failKeeper:
      enabled: false
  podDisruptionBudget:
    minAvailable: 2
    # maxUnavailable: 1
  extraEnv: []
  # - name: STKEEPER_LOG_LEVEL
  #   value: "info"

proxy:
  replicaCount: 3
  annotations: {}
  resources: {}
  priorityClassName: ""
  service:
    type: ClusterIP
    # loadBalancerIP: ""
    annotations: {}
    ports:
      proxy:
        port: 5432
        targetPort: 5432
        protocol: TCP
  nodeSelector: {}
  affinity: {}
  tolerations: []
  podDisruptionBudget:
    # minAvailable: 1
    # maxUnavailable: 1
  extraEnv: []
  # - name: STPROXY_LOG_LEVEL
  #   value: "info"
  # - name: STPROXY_TCP_KEEPALIVE_COUNT
  #   value: "0"
  # - name: STPROXY_TCP_KEEPALIVE_IDLE
  #   value: "0"
  # - name: STPROXY_TCP_KEEPALIVE_INTERVAL
  #   value: "0"

sentinel:
  replicaCount: 3
  annotations: {}
  resources: {}
  priorityClassName: ""
  nodeSelector: {}
  affinity: {}
  tolerations: []
  podDisruptionBudget:
    # minAvailable: 1
    # maxUnavailable: 1
  extraEnv: []
  # - name: STSENTINEL_LOG_LEVEL
  #   value: "info"

## initdb scripts
## Specify dictionary of scripts to be run at first boot, the entry point script is create_script.sh
## i.e. you can use pgsql to run sql script on the cluster.
##
# initdbScripts:
#   create_script.sh: |
#     #!/bin/sh
#     echo "Do something."

## nodePostStart scripts
## Specify dictionary of scripts to be run at first boot, the entry point script is postStartScript.sh
## i.e. you can create tablespace directory here.
##
# nodePostStartScript:
#   postStartScript.sh: |
#     #!/bin/bash
#     echo "Do something."
Run the install:
[root@minikub stolon]# helm install stolon-pg --namespace postgres /root/stolon/stolon/ --values /root/stolon/stolon/values.yaml
A healthy install produces output like this:
NAME: stolon-pg
LAST DEPLOYED: Fri Oct 2 12:07:56 2020
NAMESPACE: postgres
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Stolon cluster installed and initialized.

To get superuser password run

    PGPASSWORD=$(kubectl get secret --namespace postgres pg-su -o jsonpath="{.data.password}" | base64 --decode; echo)
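You can also confirm the release is registered (an assumed check, not from the original notes):

helm list --namespace postgres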
Check that the password comes back correctly:
[root@minikub stolon]# kubectl get secret --namespace postgres pg-su -o jsonpath="{.data.password}" | base64 --decode; echo
MY_PASSWORD
As you can see, it's the password we set.
Now let's verify everything.
The volumes were created:
[root@minikub stolon]# kubectl get pvc
No resources found.
[root@minikub stolon]# kubectl get pvc -n postgres
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
data-stolon-pg-keeper-0   Bound    pvc-43fc5a5d-e203-4ebd-bb68-e40738780dc5   10Gi       RWO            managed-nfs-storage   12m
data-stolon-pg-keeper-1   Bound    pvc-6602d824-b3c3-4df8-b7fa-cc49b3d62c17   10Gi       RWO            managed-nfs-storage   12m
data-stolon-pg-keeper-2   Bound    pvc-dcb8ea84-651d-4f1a-8032-a67409fba5c7   10Gi       RWO            managed-nfs-storage   12m
[root@minikub stolon]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                              STORAGECLASS          REASON   AGE
pvc-43fc5a5d-e203-4ebd-bb68-e40738780dc5   10Gi       RWO            Retain           Bound    postgres/data-stolon-pg-keeper-0   managed-nfs-storage            12m
pvc-6602d824-b3c3-4df8-b7fa-cc49b3d62c17   10Gi       RWO            Retain           Bound    postgres/data-stolon-pg-keeper-1   managed-nfs-storage            12m
pvc-dcb8ea84-651d-4f1a-8032-a67409fba5c7   10Gi       RWO            Retain           Bound    postgres/data-stolon-pg-keeper-2   managed-nfs-storage            12m
All the pods came up as well:
[root@minikub stolon]# kubectl get deployments.apps -n postgres
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
stolon-pg-proxy      3/3     3            3           14m
stolon-pg-sentinel   3/3     3            3           14m
[root@minikub stolon]# kubectl get pod -n postgres
NAME                                  READY   STATUS      RESTARTS   AGE
stolon-pg-create-cluster-lgdlq        0/1     Completed   0          14m
stolon-pg-keeper-0                    1/1     Running     0          14m
stolon-pg-keeper-1                    1/1     Running     0          14m
stolon-pg-keeper-2                    1/1     Running     0          14m
stolon-pg-proxy-8458f44864-9hftr      1/1     Running     0          14m
stolon-pg-proxy-8458f44864-9l4jw      1/1     Running     0          14m
stolon-pg-proxy-8458f44864-kzqs4      1/1     Running     0          14m
stolon-pg-sentinel-59f9df4676-6dr9s   1/1     Running     0          14m
stolon-pg-sentinel-59f9df4676-jj94w   1/1     Running     0          14m
stolon-pg-sentinel-59f9df4676-vdct7   1/1     Running     0          14m
The overall picture:
[root@minikub stolon]# kubectl -n postgres get all
NAME                                      READY   STATUS      RESTARTS   AGE
pod/stolon-pg-create-cluster-lgdlq        0/1     Completed   0          14m
pod/stolon-pg-keeper-0                    1/1     Running     0          14m
pod/stolon-pg-keeper-1                    1/1     Running     0          14m
pod/stolon-pg-keeper-2                    1/1     Running     0          14m
pod/stolon-pg-proxy-8458f44864-9hftr      1/1     Running     0          14m
pod/stolon-pg-proxy-8458f44864-9l4jw      1/1     Running     0          14m
pod/stolon-pg-proxy-8458f44864-kzqs4      1/1     Running     0          14m
pod/stolon-pg-sentinel-59f9df4676-6dr9s   1/1     Running     0          14m
pod/stolon-pg-sentinel-59f9df4676-jj94w   1/1     Running     0          14m
pod/stolon-pg-sentinel-59f9df4676-vdct7   1/1     Running     0          14m

NAME                                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/stolon-pg-keeper-headless   ClusterIP   None           <none>        5432/TCP   14m
service/stolon-pg-proxy             ClusterIP   10.103.26.10   <none>        5432/TCP   14m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/stolon-pg-proxy      3/3     3            3           14m
deployment.apps/stolon-pg-sentinel   3/3     3            3           14m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/stolon-pg-proxy-8458f44864      3         3         3       14m
replicaset.apps/stolon-pg-sentinel-59f9df4676   3         3         3       14m

NAME                                READY   AGE
statefulset.apps/stolon-pg-keeper   3/3     14m

NAME                                  COMPLETIONS   DURATION   AGE
job.batch/stolon-pg-create-cluster   1/1           5s         14m
Now connect to the database. First, find the proxy's IP address:
[root@minikub stolon]# kubectl -n postgres get services | fgrep proxy
stolon-pg-proxy   ClusterIP   10.101.142.225   <none>   5432/TCP   4m28s
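Below I run psql by exec-ing into a keeper pod; an alternative sketch, if psql is installed on the host, is to port-forward the proxy service and connect locally:

kubectl -n postgres port-forward svc/stolon-pg-proxy 5432:5432 &
psql --host 127.0.0.1 --port 5432 --username MY_USER -W postgres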
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-0 -- psql --host 10.101.142.225 --port 5432 --username MY_USER -W postgres
MY_USER is the name I gave the user, and MY_PASSWORD is my password.
Password for user MY_USER:
psql (10.12 (Debian 10.12-1.pgdg90+1))
Type "help" for help.

postgres=# \l
                                  List of databases
   Name    |  Owner  | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+---------+----------+------------+------------+---------------------
 postgres  | MY_USER | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | MY_USER | UTF8     | en_US.utf8 | en_US.utf8 | =c/MY_USER          +
           |         |          |            |            | MY_USER=CTc/MY_USER
 template1 | MY_USER | UTF8     | en_US.utf8 | en_US.utf8 | =c/MY_USER          +
           |         |          |            |            | MY_USER=CTc/MY_USER
(3 rows)
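Since we pinned max_connections in pgParameters, the same session is a good place to confirm the parameter reached the server (a hypothetical check; stolon pushes pgParameters out to every keeper):

postgres=# show max_connections;  -- should report 100, as set in pgParameters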
Create a database:
postgres=# CREATE DATABASE test;
CREATE DATABASE
Exit and reconnect to the new database:
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-0 -- psql --host 10.101.142.225 --port 5432 --username MY_USER -W -d test
Create a table and add some data to it:
Password for user MY_USER:
psql (10.12 (Debian 10.12-1.pgdg90+1))
Type "help" for help.

test=#
test=# create table test (id int primary key not null, value text not null);
CREATE TABLE
test=# insert into test values (1, 'value1');
INSERT 0 1
test=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)

test=# \du
                                      List of roles
   Role name   |                         Attributes                          | Member of
---------------+-------------------------------------------------------------+-----------
 MY_USER       | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 repl_username | Replication                                                 | {}
Now let's see what happens if we kill the master:
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-0 -- psql --host 10.101.142.225 --port 5432 --username MY_USER -W -d test
Password for user MY_USER:
psql (10.12 (Debian 10.12-1.pgdg90+1))
Type "help" for help.

test=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f
(1 row)
stolon-pg-keeper-0 is the master here, because pg_is_in_recovery returns false.
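To watch the failover from the client side, you could leave a probe looping against the proxy in yet another window; a sketch (the password is inlined only because this is a test):

# during the failover the proxy briefly drops connections and the query
# errors out, then starts succeeding again once a new master is promoted
while true; do
  kubectl -n postgres exec stolon-pg-keeper-1 -- \
    env PGPASSWORD='MY_PASSWORD' \
    psql --host 10.101.142.225 --port 5432 --username MY_USER -d test \
         -c 'select pg_is_in_recovery();'
  sleep 1
done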
Without closing that shell, open a second window and kill the master:
[root@minikub ~]# kubectl -n postgres delete pod stolon-pg-keeper-0
pod "stolon-pg-keeper-0" deleted
Watch the sentinel logs:
[root@minikub stolon]# kubectl logs -f -n postgres stolon-pg-sentinel-59f9df4676-mcnq6
2020-10-02T08:23:50.507Z  WARN  cmd/sentinel.go:266   no keeper info available  {"db": "c9b80de7", "keeper": "keeper1"}
2020-10-02T08:23:50.507Z  WARN  cmd/sentinel.go:266   no keeper info available  {"db": "7f99e9eb", "keeper": "keeper0"}
2020-10-02T08:23:55.654Z  WARN  cmd/sentinel.go:266   no keeper info available  {"db": "c9b80de7", "keeper": "keeper1"}
2020-10-02T08:24:05.759Z  INFO  cmd/sentinel.go:1267  removing failed synchronous standby  {"masterDB": "7f99e9eb", "db": "c9b80de7"}
2020-10-02T08:24:05.760Z  INFO  cmd/sentinel.go:1305  adding new synchronous standby in good state trying to reach MaxSynchronousStandbys  {"masterDB": "7f99e9eb", "synchronousStandbyDB": "607490bc", "keeper": "keeper2"}
2020-10-02T08:24:05.760Z  INFO  cmd/sentinel.go:1349  merging current and previous synchronous standbys  {"masterDB": "7f99e9eb", "prevSynchronousStandbys": {"c9b80de7":{}}, "synchronousStandbys": {"607490bc":{}}}
2020-10-02T08:24:05.760Z  INFO  cmd/sentinel.go:1353  adding previous synchronous standby  {"masterDB": "7f99e9eb", "synchronousStandbyDB": "c9b80de7", "keeper": "keeper1"}
2020-10-02T08:24:05.760Z  INFO  cmd/sentinel.go:1361  synchronousStandbys changed  {"masterDB": "7f99e9eb", "prevSynchronousStandbys": {"c9b80de7":{}}, "synchronousStandbys": {"607490bc":{},"c9b80de7":{}}}
2020-10-02T08:24:15.871Z  INFO  cmd/sentinel.go:1284  removing synchronous standby in excess  {"masterDB": "7f99e9eb", "db": "607490bc"}
2020-10-02T08:24:15.871Z  INFO  cmd/sentinel.go:1361  synchronousStandbys changed  {"masterDB": "7f99e9eb", "prevSynchronousStandbys": {"607490bc":{},"c9b80de7":{}}, "synchronousStandbys": {"c9b80de7":{}}}
Now check the configs to see who is who:
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-1 cat /stolon-data/postgres/postgresql.conf | fgrep sync
synchronous_standby_names = 'stolon_64676afb'
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-0 cat /stolon-data/postgres/postgresql.conf | fgrep sync
synchronous_standby_names = ''
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-2 cat /stolon-data/postgres/postgresql.conf | fgrep sync
synchronous_standby_names = ''
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-1 cat /stolon-data/postgres/recovery.conf | fgrep slot_name
command terminated with exit code 1
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-0 cat /stolon-data/postgres/recovery.conf | fgrep slot_name
primary_slot_name = 'stolon_64676afb'
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-2 cat /stolon-data/postgres/recovery.conf | fgrep slot_name
primary_slot_name = 'stolon_cdca3cc3'
- keeper-2 - async replica (its primary_slot_name, stolon_cdca3cc3, is not listed in the master's synchronous_standby_names)
- keeper-1 - master (reading its recovery.conf fails because a master has none, and its synchronous_standby_names is set)
- keeper-0 - sync replica (its primary_slot_name, stolon_64676afb, matches the master's synchronous_standby_names)
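Instead of grepping configs, the same picture is available from stolonctl, which ships in the keeper image (a sketch; again it assumes the stolon cluster name matches the Helm release name stolon-pg):

kubectl -n postgres exec -it stolon-pg-keeper-1 -- \
    stolonctl --cluster-name stolon-pg \
              --store-backend kubernetes \
              --kube-resource-kind configmap \
              status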
After all these manipulations, I reconnect to the database and check the data:
[root@minikub stolon]# kubectl -n postgres exec -it stolon-pg-keeper-0 -- psql --host 10.101.142.225 --port 5432 --username MY_USER -W -d test
Password for user MY_USER:
psql (10.12 (Debian 10.12-1.pgdg90+1))
Type "help" for help.

test=# select * from test;
 id | value
----+--------
  1 | value1
(1 row)
As you can see, the data is intact.
Also, when the master moves to another keeper, you can see disk usage change on the NFS server.
Here the master is on keeper-1:
[root@minikub stolon]# du -csh /nfs-client/*
16K     /nfs-client/lost+found
95M     /nfs-client/postgres-data-stolon-pg-keeper-0-pvc-43fc5a5d-e203-4ebd-bb68-e40738780dc5
159M    /nfs-client/postgres-data-stolon-pg-keeper-1-pvc-6602d824-b3c3-4df8-b7fa-cc49b3d62c17
63M     /nfs-client/postgres-data-stolon-pg-keeper-2-pvc-dcb8ea84-651d-4f1a-8032-a67409fba5c7
8.0K    /nfs-client/tmp
And here it's on keeper-0:
[root@minikub stolon]# du -csh /nfs-client/*
16K     /nfs-client/lost+found
191M    /nfs-client/postgres-data-stolon-pg-keeper-0-pvc-43fc5a5d-e203-4ebd-bb68-e40738780dc5
63M     /nfs-client/postgres-data-stolon-pg-keeper-1-pvc-6602d824-b3c3-4df8-b7fa-cc49b3d62c17
95M     /nfs-client/postgres-data-stolon-pg-keeper-2-pvc-dcb8ea84-651d-4f1a-8032-a67409fba5c7
8.0K    /nfs-client/tmp
As you can see, the usage has moved with the master.