We have the following servers with these IP addresses:
- 192.168.1.121
- 192.168.1.122
- 192.168.1.123
- 192.168.1.124
- 192.168.1.125
- 192.168.1.126
- 192.168.1.127
Role assignment:
- Master Servers + etcd cluster:
- 192.168.1.121 (Master 1 + etcd)
- 192.168.1.122 (Master 2 + etcd)
- 192.168.1.123 (Master 3 + etcd)
- Filer Servers:
- 192.168.1.124 (Filer 1)
- 192.168.1.125 (Filer 2)
- Volume Servers:
- 192.168.1.126 (Volume 1)
- 192.168.1.127 (Volume 2)
All servers run Debian 12.
You have root access or sudo privileges.
The ports required for inter-server communication are open (9333, 8080, 8888, 2379, 2380, etc.).
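If a host firewall is active, those ports have to be opened explicitly. A minimal sketch using ufw (assuming ufw is the firewall in use; note that SeaweedFS also talks gRPC on HTTP port + 10000, i.e. 19333, 18080, 18888):

apt install -y ufw
# etcd client/peer traffic, restricted to the cluster subnet
ufw allow proto tcp from 192.168.1.0/24 to any port 2379,2380
# SeaweedFS HTTP and gRPC ports
ufw allow proto tcp from 192.168.1.0/24 to any port 9333,19333,8080,18080,8888,18888
ufw enable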
Determine the latest etcd version:

root@debian:~# ETCD_VERSION=$(curl -s https://api.github.com/repos/etcd-io/etcd/releases/latest | grep tag_name | cut -d '"' -f 4)
root@debian:~# echo "Latest etcd version: $ETCD_VERSION"
Latest etcd version: v3.5.16
Download and extract etcd on all three nodes (192.168.1.121, 192.168.1.122, 192.168.1.123):

wget https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz
tar -xzf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
Install the etcd binaries:

mv etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/
Check the installed etcd version:
etcd --version
Create the etcd user and directories:

mkdir -p /var/lib/etcd/
mkdir /etc/etcd
groupadd --system etcd
useradd -s /sbin/nologin --system -g etcd etcd
chown -R etcd:etcd /var/lib/etcd/
chown -R etcd:etcd /etc/etcd/
Create the systemd service file for etcd:

nano /etc/systemd/system/etcd.service
File contents:

[Unit]
Description=etcd key-value store
Documentation=https://github.com/etcd-io/etcd
After=network-online.target
Wants=network-online.target

[Service]
User=etcd
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
Configure etcd on each server
Create the /etc/etcd/etcd.conf configuration file on each master server, substituting the appropriate IP addresses and node names.
For server 192.168.1.121:
nano /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.121:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.121:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.121:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.121:2380,etcd2=http://192.168.1.122:2380,etcd3=http://192.168.1.123:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.121:2379"
For server 192.168.1.122:
nano /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.122:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.122:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.122:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.121:2380,etcd2=http://192.168.1.122:2380,etcd3=http://192.168.1.123:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.122:2379"
For server 192.168.1.123:
nano /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.123:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.123:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.123:2380"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.1.121:2380,etcd2=http://192.168.1.122:2380,etcd3=http://192.168.1.123:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-1"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.123:2379"
Start and enable the etcd service on all servers:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
Check the etcd cluster status:
etcdctl member list
root@debian:~# etcdctl member list
7dd50737ac19f89c, started, etcd3, http://192.168.1.123:2380, http://192.168.1.123:2379, false
845887864a948ab6, started, etcd2, http://192.168.1.122:2380, http://192.168.1.122:2379, false
cfd148b5195b70f6, started, etcd1, http://192.168.1.121:2380, http://192.168.1.121:2379, false
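Beyond member list, you can confirm that every endpoint answers and see which node is the leader. A quick check with etcdctl (endpoint addresses taken from the configs above):

etcdctl --endpoints=http://192.168.1.121:2379,http://192.168.1.122:2379,http://192.168.1.123:2379 endpoint health
etcdctl --endpoints=http://192.168.1.121:2379,http://192.168.1.122:2379,http://192.168.1.123:2379 endpoint status -w table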
Installing SeaweedFS
Create the user and install the binary on ALL servers.
Create the seaweedfs user:

sudo useradd -m -s /bin/bash seaweedfs
Determine the latest SeaweedFS version:

SEAWEED_VERSION=$(curl -s https://api.github.com/repos/seaweedfs/seaweedfs/releases/latest | grep tag_name | cut -d '"' -f 4)
echo "Latest SeaweedFS version: $SEAWEED_VERSION"
Download and extract SeaweedFS:

wget https://github.com/seaweedfs/seaweedfs/releases/download/${SEAWEED_VERSION}/linux_amd64.tar.gz
tar -zxvf linux_amd64.tar.gz
Install the binary:

mv weed /usr/local/bin/
chmod +x /usr/local/bin/weed
Verify the installation:
weed version
Configuring the SeaweedFS master servers
On servers 192.168.1.121, 192.168.1.122, and 192.168.1.123.
1. Create the master server data directory

mkdir -p /data/seaweedfs/master
chown -R seaweedfs:seaweedfs /data/seaweedfs/master
2. Create the systemd service file for the master server

nano /etc/systemd/system/seaweedfs-master.service
File contents for 192.168.1.121 (-defaultReplication=001 keeps one extra copy of each file on a different server within the same rack):

[Unit]
Description=SeaweedFS Master Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed master \
  -ip=192.168.1.121 \
  -mdir=/data/seaweedfs/master \
  -defaultReplication=001 \
  -peers=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333
Restart=on-failure

[Install]
WantedBy=multi-user.target
For 192.168.1.122:

[Unit]
Description=SeaweedFS Master Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed master \
  -ip=192.168.1.122 \
  -mdir=/data/seaweedfs/master \
  -defaultReplication=001 \
  -peers=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333
Restart=on-failure

[Install]
WantedBy=multi-user.target
For 192.168.1.123:

[Unit]
Description=SeaweedFS Master Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed master \
  -ip=192.168.1.123 \
  -mdir=/data/seaweedfs/master \
  -defaultReplication=001 \
  -peers=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333
Restart=on-failure

[Install]
WantedBy=multi-user.target
3. Start and enable the master server service

systemctl daemon-reload
systemctl enable seaweedfs-master
systemctl start seaweedfs-master
Check the status:

root@debian:~# systemctl status seaweedfs-master
● seaweedfs-master.service - SeaweedFS Master Server
     Loaded: loaded (/etc/systemd/system/seaweedfs-master.service; enabled; preset: enabled)
     Active: active (running) since Sun 2024-09-15 17:09:09 +06; 6s ago
   Main PID: 907 (weed)
      Tasks: 7 (limit: 1094)
     Memory: 12.8M
        CPU: 23ms
     CGroup: /system.slice/seaweedfs-master.service
             └─907 /usr/local/bin/weed master -ip=192.168.1.121 -mdir=/data/seaweedfs/master -defaultReplication=001 -peers=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333

Sep 15 17:09:09 debian systemd[1]: Started seaweedfs-master.service - SeaweedFS Master Server.
Sep 15 17:09:09 debian weed[907]: I0915 17:09:09.872355 file_util.go:27 Folder /data/seaweedfs/master Permission: -rwxr-xr-x
Sep 15 17:09:09 debian weed[907]: I0915 17:09:09.873623 master.go:282 current: 192.168.1.121:9333 peers:192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333
Sep 15 17:09:09 debian weed[907]: I0915 17:09:09.873771 master_server.go:134 Volume Size Limit is 30000 MB
Sep 15 17:09:09 debian weed[907]: I0915 17:09:09.874121 master.go:163 Start Seaweed Master 30GB 3.73 6063a889ed61b4e3ef29360faa5d7623a4a70364 at 192.168.1.121:9333
Sep 15 17:09:09 debian weed[907]: I0915 17:09:09.876676 raft_server.go:119 Starting RaftServer with 192.168.1.121:9333
Sep 15 17:09:09 debian weed[907]: I0915 17:09:09.883302 raft_server.go:168 current cluster leader:
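Once all three masters are running, you can also query the master HTTP API to confirm that the Raft cluster formed and a leader was elected; the /cluster/status endpoint should list the leader and peers (shown here against 192.168.1.121):

curl -s "http://192.168.1.121:9333/cluster/status?pretty=y"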
Configuring the volume servers
On servers 192.168.1.126 and 192.168.1.127:

mkdir -p /data/seaweedfs/volume
chown -R seaweedfs:seaweedfs /data/seaweedfs/volume
Create the systemd service file for the volume server:

nano /etc/systemd/system/seaweedfs-volume.service
File contents for server 192.168.1.126:

[Unit]
Description=SeaweedFS Volume Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed volume \
  -dir=/data/seaweedfs/volume \
  -max=100 \
  -ip=192.168.1.126 \
  -mserver=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333
Restart=on-failure

[Install]
WantedBy=multi-user.target
For server 192.168.1.127:

[Unit]
Description=SeaweedFS Volume Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed volume \
  -dir=/data/seaweedfs/volume \
  -max=100 \
  -ip=192.168.1.127 \
  -mserver=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start and enable the volume server service

systemctl daemon-reload
systemctl enable seaweedfs-volume
systemctl start seaweedfs-volume
Check the service status

root@debian:~# systemctl status seaweedfs-volume
● seaweedfs-volume.service - SeaweedFS Volume Server
     Loaded: loaded (/etc/systemd/system/seaweedfs-volume.service; enabled; preset: enabled)
     Active: active (running) since Sun 2024-09-15 17:48:02 +06; 14s ago
   Main PID: 37494 (weed)
      Tasks: 7 (limit: 1096)
     Memory: 13.8M
        CPU: 24ms
     CGroup: /system.slice/seaweedfs-volume.service
             └─37494 /usr/local/bin/weed volume -dir=/data/seaweedfs/volume -max=100 -ip=192.168.1.126 -mserver=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333

Sep 15 17:48:02 debian systemd[1]: Started seaweedfs-volume.service - SeaweedFS Volume Server.
Sep 15 17:48:02 debian weed[37494]: I0915 17:48:02.883108 file_util.go:27 Folder /data/seaweedfs/volume Permission: -rwxr-xr-x
Sep 15 17:48:02 debian weed[37494]: I0915 17:48:02.890427 disk_location.go:239 Store started on dir: /data/seaweedfs/volume with 0 volumes max 100
Sep 15 17:48:02 debian weed[37494]: I0915 17:48:02.890456 disk_location.go:242 Store started on dir: /data/seaweedfs/volume with 0 ec shards
Sep 15 17:48:02 debian weed[37494]: I0915 17:48:02.890800 volume.go:380 Start Seaweed volume server 30GB 3.73 6063a889ed61b4e3ef29360faa5d7623a4a70364 at 192.168.1.126:8080
Sep 15 17:48:02 debian weed[37494]: I0915 17:48:02.890848 volume_grpc_client_to_master.go:52 Volume server start with seed master nodes: [192.168.1.121:9333 192.168.1.122:9333 192.168.1.123:9333]
Sep 15 17:48:02 debian weed[37494]: I0915 17:48:02.891648 volume_grpc_client_to_master.go:109 Heartbeat to: 192.168.1.121:9333
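A quick way to confirm that the volume servers have registered with the masters is to ask a master for a write assignment; it should answer with a file id and the URL of one of the volume servers (192.168.1.126 or 192.168.1.127):

curl -s "http://192.168.1.121:9333/dir/assign"
# expected: a JSON object whose "url" field points at one of the volume servers, e.g. 192.168.1.126:8080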
Configuring the filer servers
On servers 192.168.1.124 and 192.168.1.125:

mkdir -p /data/seaweedfs/filer
chown -R seaweedfs:seaweedfs /data/seaweedfs/filer
Create the filer.toml configuration file:

mkdir -p /etc/seaweedfs
touch /etc/seaweedfs/filer.toml
chown -R seaweedfs:seaweedfs /etc/seaweedfs
nano /etc/seaweedfs/filer.toml
[leveldb2]
enabled = false

[etcd]
enabled = true
servers = "192.168.1.121:2379,192.168.1.122:2379,192.168.1.123:2379"
username = ""
password = ""
key_prefix = "seaweedfs."
timeout = "3s"
tls_ca_file = ""
tls_client_crt_file = ""
tls_client_key_file = ""
Create the systemd service file for the filer server:

nano /etc/systemd/system/seaweedfs-filer.service
For server 192.168.1.124:

[Unit]
Description=SeaweedFS Filer Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed filer \
  -ip=192.168.1.124 \
  -port=8888 \
  -master=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333 \
  -defaultReplicaPlacement=001
Restart=on-failure

[Install]
WantedBy=multi-user.target
For server 192.168.1.125:

[Unit]
Description=SeaweedFS Filer Server
After=network.target

[Service]
User=seaweedfs
Group=seaweedfs
ExecStart=/usr/local/bin/weed filer \
  -ip=192.168.1.125 \
  -port=8888 \
  -master=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333 \
  -defaultReplicaPlacement=001
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start and enable the filer service

systemctl daemon-reload
systemctl enable seaweedfs-filer
systemctl start seaweedfs-filer
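After the filer starts, it is worth checking both that it answers on port 8888 and that its metadata actually lands in etcd (the "seaweedfs." prefix comes from key_prefix in filer.toml above; keys appear once files or directories are created). A sketch:

systemctl status seaweedfs-filer
curl -I "http://192.168.1.124:8888/"
# after uploading something, filer metadata keys should be visible in etcd:
etcdctl --endpoints=http://192.168.1.121:2379 get --prefix "seaweedfs." --keys-only | head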
Verifying SeaweedFS and accessing the web interfaces
SeaweedFS provides web interfaces for the Master Server and the Filer Server that let you manage the system and monitor its state.
1. Accessing the Master Server web interface
The master server exposes a web interface on port 9333 by default.
- Access URL:
http://<master-IP>:9333
For example, if the master server runs on 192.168.1.121, open in your browser:
http://192.168.1.121:9333
Master web interface features:
- View the list of registered Volume Servers.
- Monitor the state of volumes and their replication.
- See general information about the cluster status.
2. Accessing the Filer Server web interface
The filer server also exposes a web interface, on port 8888 by default.
- Access URL:
http://<filer-IP>:8888
For example, to access the Filer at 192.168.1.124, open:
http://192.168.1.124:8888
Filer web interface features:
- Browse the file system.
- Upload and download files.
- Create and delete directories.
- View and edit file metadata.
3. Testing through the Filer web interface
Uploading a file:
- Open the Filer web interface.
- Navigate to the directory you want to upload the file into.
- Click the "Upload" button.
- Select a file on your computer and confirm the upload.
Downloading a file:
- Find the file in the interface and click it to download it (a curl-based alternative is sketched below).
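The same operations can be scripted against the Filer HTTP API instead of the browser; a minimal sketch with curl (the /test/ directory is just an example path):

# upload a file into /test/ (the filer creates the path if it does not exist)
curl -F file=@/etc/hostname "http://192.168.1.124:8888/test/"
# download it back
curl "http://192.168.1.124:8888/test/hostname"
# list the directory as JSON
curl -H "Accept: application/json" "http://192.168.1.124:8888/test/"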
Additional tools and commands
Using weed shell:
- An interactive shell for working with SeaweedFS.

root@debian:~# weed shell
master: localhost:9333 filer:
I0915 18:47:48.255833 masterclient.go:228 master localhost:9333 redirected to leader 192.168.1.121:9333
.master: localhost:9333 filers: [192.168.1.124:8888 192.168.1.125:8888]
>
List the volumes:

> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:12/200 active:12 free:188 remote:0)
  DataCenter DefaultDataCenter hdd(volume:12/200 active:12 free:188 remote:0)
    Rack DefaultRack hdd(volume:12/200 active:12 free:188 remote:0)
      DataNode 192.168.1.126:8080 hdd(volume:6/100 active:6 free:94 remote:0)
        Disk hdd(volume:6/100 active:6 free:94 remote:0)
          volume id:6 size:272 file_count:1 replica_placement:1 version:3 modified_at_second:1726404295
          volume id:1 size:8 replica_placement:1 version:3 modified_at_second:1726403744
          volume id:2 size:912 file_count:3 replica_placement:1 version:3 modified_at_second:1726404235
          volume id:3 size:327712 file_count:1 replica_placement:1 version:3 modified_at_second:1726404240
          volume id:4 size:8 replica_placement:1 version:3 modified_at_second:1726403744
          volume id:5 size:327744 file_count:1 delete_count:1 deleted_byte_count:327668 replica_placement:1 version:3 modified_at_second:1726404167
        Disk hdd total size:656656 file_count:6 deleted_file:1 deleted_bytes:327668
      DataNode 192.168.1.126:8080 total size:656656 file_count:6 deleted_file:1 deleted_bytes:327668
      DataNode 192.168.1.127:8080 hdd(volume:6/100 active:6 free:94 remote:0)
        Disk hdd(volume:6/100 active:6 free:94 remote:0)
          volume id:1 size:8 replica_placement:1 version:3 modified_at_second:1726403744
          volume id:2 size:912 file_count:3 replica_placement:1 version:3 modified_at_second:1726404235
          volume id:3 size:327712 file_count:1 replica_placement:1 version:3 modified_at_second:1726404240
          volume id:4 size:8 replica_placement:1 version:3 modified_at_second:1726403744
          volume id:5 size:327744 file_count:1 delete_count:1 deleted_byte_count:327668 replica_placement:1 version:3 modified_at_second:1726404167
          volume id:6 size:272 file_count:1 replica_placement:1 version:3 modified_at_second:1726404295
        Disk hdd total size:656656 file_count:6 deleted_file:1 deleted_bytes:327668
      DataNode 192.168.1.127:8080 total size:656656 file_count:6 deleted_file:1 deleted_bytes:327668
    Rack DefaultRack total size:1313312 file_count:12 deleted_file:2 deleted_bytes:655336
  DataCenter DefaultDataCenter total size:1313312 file_count:12 deleted_file:2 deleted_bytes:655336
  total size:1313312 file_count:12 deleted_file:2 deleted_bytes:655336
>
Check the cluster:

> cluster.check
Topology volumeSizeLimit:30000 MB hdd(volume:12/200 active:12 free:188 remote:0)
the cluster has 2 filers: [192.168.1.124:8888 192.168.1.125:8888]
the cluster has 2 volume servers: [192.168.1.126:8080 192.168.1.127:8080]
checking master localhost:9333 to volume server 192.168.1.126:8080 … ok round trip 0.535ms clock delta 6.289ms
checking master localhost:9333 to volume server 192.168.1.127:8080 … ok round trip 0.555ms clock delta 4.627ms
checking volume server 192.168.1.126:8080 to master localhost:9333 … rpc error: code = Unknown desc = ping master localhost:9333: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:19333: connect: connection refused"
checking volume server 192.168.1.127:8080 to master localhost:9333 … rpc error: code = Unknown desc = ping master localhost:9333: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:19333: connect: connection refused"
checking filer 192.168.1.124:8888 to master localhost:9333 … rpc error: code = Unknown desc = ping master localhost:9333: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:19333: connect: connection refused"
checking filer 192.168.1.125:8888 to master localhost:9333 … rpc error: code = Unknown desc = ping master localhost:9333: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:19333: connect: connection refused"
checking filer 192.168.1.124:8888 to volume server 192.168.1.126:8080 … ok round trip 0.456ms clock delta -27.200ms
checking filer 192.168.1.124:8888 to volume server 192.168.1.127:8080 … ok round trip 0.443ms clock delta -28.862ms
checking filer 192.168.1.125:8888 to volume server 192.168.1.126:8080 … ok round trip 1.655ms clock delta -33.870ms
checking filer 192.168.1.125:8888 to volume server 192.168.1.127:8080 … ok round trip 1.697ms clock delta -35.513ms
checking volume server 192.168.1.126:8080 to 192.168.1.127:8080 … ok round trip 1.567ms clock delta -1.052ms
checking volume server 192.168.1.127:8080 to 192.168.1.126:8080 … ok round trip 1.367ms clock delta 2.172ms
checking filer 192.168.1.124:8888 to 192.168.1.124:8888 … ok round trip 0.207ms clock delta 0.044ms
checking filer 192.168.1.124:8888 to 192.168.1.125:8888 … ok round trip 0.450ms clock delta 7.323ms
checking filer 192.168.1.125:8888 to 192.168.1.124:8888 … ok round trip 0.375ms clock delta -7.190ms
checking filer 192.168.1.125:8888 to 192.168.1.125:8888 … ok round trip 0.224ms clock delta 0.033ms
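The ping errors toward localhost:9333 above are expected when weed shell is started without arguments: it then asks the other components to ping the master as "localhost:9333", which only resolves on the master node itself. Starting the shell with the real master addresses should make those checks pass; a sketch using the -master flag:

weed shell -master=192.168.1.121:9333,192.168.1.122:9333,192.168.1.123:9333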
The full list of commands can be viewed like this:

> help
Type: "help <command>" for help on <command>. Most commands support "<command> -h" also for options.
	cluster.check # check current cluster network connectivity
	cluster.ps # check current cluster process status
	cluster.raft.add # add a server to the raft cluster
	cluster.raft.ps # check current raft cluster status
	cluster.raft.remove # remove a server from the raft cluster
	collection.delete # delete specified collection
	collection.list # list all collections
	ec.balance # balance all ec shards among all racks and volume servers
	ec.decode # decode a erasure coded volume into a normal volume
	ec.encode # apply erasure coding to a volume
	ec.rebuild # find and rebuild missing ec shards among volume servers
	fs.cat # stream the file content on to the screen
	fs.cd # change directory to a directory /path/to/dir
	fs.configure # configure and apply storage options for each location
	fs.du # show disk usage
	fs.log.purge # purge filer logs
	fs.ls # list all files under a directory
	fs.mergeVolumes # re-locate chunks into target volumes and try to clear lighter volumes.
	fs.meta.cat # print out the meta data content for a file or directory
	fs.meta.changeVolumeId # change volume id in existing metadata.
	fs.meta.load # load saved filer meta data to restore the directory and file structure
	fs.meta.notify # recursively send directory and file meta data to notification message queue
	fs.meta.save # save all directory and file meta data to a local file for metadata backup.
	fs.mkdir # create a directory
	fs.mv # move or rename a file or a folder
	fs.pwd # print out current directory
	fs.rm # remove file and directory entries
	fs.tree # recursively list all files under a directory
	fs.verify # recursively verify all files under a directory
	lock # lock in order to exclusively manage the cluster
	mount.configure # configure the mount on current server
	mq.balance # balance topic partitions
	mq.topic.configure # configure a topic with a given name
	mq.topic.describe # describe a topic
	mq.topic.list # print out all topics
	remote.cache # cache the file content for mounted directories or files
	remote.configure # remote storage configuration
	remote.meta.sync # synchronize the local file meta data with the remote file metadata
	remote.mount # mount remote storage and pull its metadata
	remote.mount.buckets # mount all buckets in remote storage and pull its metadata
	remote.uncache # keep the metadata but remote cache the file content for mounted directories or files
	remote.unmount # unmount remote storage
	s3.bucket.create # create a bucket with a given name
	s3.bucket.delete # delete a bucket by a given name
	s3.bucket.list # list all buckets
	s3.bucket.quota # set/remove/enable/disable quota for a bucket
	s3.bucket.quota.enforce # check quota for all buckets, make the bucket read only if over the limit
	s3.circuitBreaker # configure and apply s3 circuit breaker options for each bucket
	s3.clean.uploads # clean up stale multipart uploads
	s3.configure # configure and apply s3 options for each bucket
	unlock # unlock the cluster-wide lock
	volume.balance # balance all volumes among volume servers
	volume.check.disk # check all replicated volumes to find and fix inconsistencies. It is optional and resource intensive.
	volume.configure.replication # change volume replication value
	volume.copy # copy a volume from one volume server to another volume server
	volume.delete # delete a live volume from one volume server
	volume.deleteEmpty # delete empty volumes from all volume servers
	volume.fix.replication # add or remove replicas to volumes that are missing replicas or over-replicated
	volume.fsck # check all volumes to find entries not used by the filer
	volume.grow # grow volumes
	volume.list # list all volumes
	volume.mark # Mark volume writable or readonly from one volume server
	volume.mount # mount a volume from one volume server
	volume.move # move a live volume from one volume server to another volume server
	volume.tier.download # download the dat file of a volume from a remote tier
	volume.tier.move # change a volume from one disk type to another
	volume.tier.upload # upload the dat file of a volume to a remote tier
	volume.unmount # unmount a volume from one volume server
	volume.vacuum # compact volumes if deleted entries are more than the limit
	volume.vacuum.disable # disable vacuuming request from Master, however volume.vacuum still works.
	volume.vacuum.enable # enable vacuuming request from Master
	volumeServer.evacuate # move out all data on a volume server
	volumeServer.leave # stop a volume server from sending heartbeats to the master
>
HAProxy load balancer
On 192.168.1.125 (Filer 2):
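HAProxy was not installed in the steps above, so install it first and put the configuration below into /etc/haproxy/haproxy.cfg (the standard path for the Debian package):

apt update && apt install -y haproxy
nano /etc/haproxy/haproxy.cfg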
global
    log /dev/log local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    timeout connect 5s
    timeout client 50s
    timeout server 50s

frontend filer_frontend
    bind *:8889
    default_backend filer_backend

backend filer_backend
    balance roundrobin
    # enable health checks at the TCP level
    option tcp-check
    server filer1 192.168.1.124:8888 check
    server filer2 192.168.1.125:8888 check
systemctl enable haproxy
systemctl restart haproxy
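A quick way to verify the balancer is to go through port 8889 and make sure requests reach one of the filers; a sketch (the /test/ path is just an example):

curl -I "http://192.168.1.125:8889/"
curl -F file=@/etc/hostname "http://192.168.1.125:8889/test/"
curl "http://192.168.1.125:8889/test/hostname"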