
Install Rook Ceph
Rook Ceph is an orchestrator that provides a containerized solution for Ceph Storage, with a specialized Kubernetes Operator to automate the management of the cluster. It is an alternative to the bare metal Ceph Storage. See https://rook.io/docs/rook/latest-release/Getting-Started/intro/ for more details.
Before configuring the deployment model and services, verify that there is no ceph-store storage backend configured on the system:
~(keystone_admin)$ system storage-backend-list
Create a storage backend for Rook Ceph, choose your deployment model (controller, dedicated, open), and the desired services (block or ecblock, filesystem, object):
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
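The example above relies on the default services. To select services explicitly, pass --services with a comma-separated list, as in the dedicated-model example later in this guide. For example:
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --services block,filesystem --confirmed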
Create a host-fs ceph for each host that will house a Rook Ceph monitor (preferably an ODD number of hosts):
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
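For example, on a Standard system you might place monitors on three hosts (the hostnames and sizes below are illustrative):
~(keystone_admin)$ system host-fs-add controller-0 ceph=20
~(keystone_admin)$ system host-fs-add controller-1 ceph=20
~(keystone_admin)$ system host-fs-add compute-0 ceph=20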
For DX platforms, adding a floating monitor is recommended. To add a floating monitor, lock the inactive controller (controller-1 in this example, with controller-0 as the active controller):
~(keystone_admin)$ system host-lock controller-1
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
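Once the floating monitor filesystem has been added, the locked controller can be unlocked again:
~(keystone_admin)$ system host-unlock controller-1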
Configure OSDs.
Check the UUIDs of the disks on the desired host that will house the OSDs:
~(keystone_admin)$ system host-disk-list <hostname>
Note
The OSD placement should follow the placement rules of the chosen deployment model.
Add the desired disks to the system as OSDs (preferably an EVEN number of OSDs):
~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
For more details on deployment models and services, see deployment-models-for-rook-ceph-b855bd0108cf.
After the environment is correctly configured according to the chosen deployment model, Rook Ceph will install automatically.
A few minutes after the application is applied, check the health of the cluster using any Ceph command, for example ceph status:
~(keystone_admin)$ ceph -s
e.g. (STD with 3 mon and 12 OSDs):
~(keystone_admin)$ ceph -s
cluster:
id: 5c8eb4ff-ba21-40f4-91ed-68effc47a08b
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 2d)
mgr: c(active, since 5d), standbys: a, b
mds: 1/1 daemons up, 1 hot standby
osd: 12 osds: 12 up (since 5d), 12 in (since 5d)
data:
volumes: 1/1 healthy
pools: 4 pools, 81 pgs
objects: 133 objects, 353 MiB
usage: 3.8 GiB used, 5.7 TiB / 5.7 TiB avail
pgs: 81 active+clean
Check that the cluster contains all the desired elements. All pods should be Running or Completed for the cluster to be considered healthy. You can see the Rook Ceph pods with:
~(keystone_admin)$ kubectl get pod -n rook-ceph
e.g. (SX with 1 mon and 2 OSDs):
~(keystone_admin)$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-67bd9fcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-76847477bf-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6984b58b79-fzhk6 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-5vmg9 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz 2/2 Running 0 11m
rook-ceph-osd-prepare-controller-0-s4bzz 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
Additional Features and Procedures
Add New OSDs on a Running Cluster
To add new OSDs to the cluster, add the new OSDs to the platform and re-apply the application.
~(keystone_admin)$ system host-stor-add <host> <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph
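After the re-apply completes, you can verify that the new OSDs have joined the cluster, for example:
~(keystone_admin)$ ceph osd tree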
Add New Monitor on a Running Cluster
To add a new monitor to the cluster, add the host-fs ceph to the desired host and re-apply the application.
~(keystone_admin)$ system host-fs-add <host> ceph=<size>
~(keystone_admin)$ system application-apply rook-ceph
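You can then verify that the new monitor has joined the quorum, for example:
~(keystone_admin)$ ceph mon stat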
Enable Ceph Dashboard
To enable the Ceph Dashboard, a Helm override must be provided before the application is applied. The password must be provided encoded in base64.
Create the override file.
$ openssl base64 -e <<< "my_dashboard_passwd"
bXlfZGFzaGJvYXJkX3Bhc3N3ZAo=
$ cat << EOF >> dashboard-override.yaml
cephClusterSpec:
  dashboard:
    enabled: true
    password: "bXlfZGFzaGJvYXJkX3Bhc3N3ZAo="
EOF
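Apply the override and re-apply the application. The following is a minimal sketch that assumes the dashboard settings belong to the rook-ceph-cluster chart in the rook-ceph namespace; verify the chart name on your system with system helm-override-list rook-ceph:
$ system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --values dashboard-override.yaml
$ system application-apply rook-ceph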
Check Rook Ceph Pods
You can check the pods of the storage cluster running the following command:
kubectl get pod -n rook-ceph
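Ceph commands can also be run from the Rook toolbox pod deployed with the application, assuming the toolbox Deployment is named rook-ceph-tools as suggested by the pod listings above:
$ kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status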
Installation on Simplex with controller model, 1 monitor, installing manually, services: block and cephfs
In this configuration, you can add monitors and OSDs on the Simplex node.
On a system with no bare metal Ceph storage backend on it, add a ceph-rook storage backend. The services used are block (RBD) and cephfs, the default option (no need to specify them with arguments).
$ system storage-backend-add ceph-rook --deployment controller --confirmed
Add the host-fs ceph on the controller. Here, the host-fs ceph is configured with 10 GB:
$ system host-fs-add controller-0 ceph=10
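Optionally, confirm that the filesystem was created:
$ system host-fs-list controller-0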
To add OSDs, get the UUID of each disk to feed the host-stor-add command:
$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBfb16ffca-28261189 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB92c5f4e7-c1884d99 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc    | 2080       | HDD         | 9.765    | 0.0           | Undetermined | VB4390bf35-c0758bd4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
Add all the desired disks as OSDs:
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| ihost_uuid       | 57a7a41e-7805-406d-b204-2736adc8391d             |
| idisk_uuid       | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:19:41.335302+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 283359b5-d06f-4e73-a58f-e15f7ea41abd
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 1                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part2 |
| journal_node     | /dev/cdb2                                        |
| uuid             | 13baee21-daad-4266-bfdd-b549837d8b88             |
| ihost_uuid       | 51d26b14-412d-4bf8-b2b0-2fba69026459             |
| idisk_uuid       | 283359b5-d06f-4e73-a58f-e15f7ea41abd             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:18:28.107688+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
Check the progress of the app. With a valid configuration of host-fs and OSDs, the app will apply automatically:
$ system application-show rook-ceph
#or
$ system application-list
After the app is applied, the pod list of the rook-ceph namespace should look like this:
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-c4g6s 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-rfgr4984-t653f 2/2 Running 0 11m
rook-ceph-osd-prepare-controller-0-8ge4z 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
Installation on Duplex with controller model, 3 monitors, installing manually, services: block and cephfs
In this configuration, you can add monitors and OSDs on the Duplex node.
On a system with no bare metal Ceph storage backend on it, add a ceph-rook storage backend. The services used are block (RBD) and cephfs, the default option (no need to specify them with arguments).
$ system storage-backend-add ceph-rook --deployment controller --confirmed
Add the controller-fs ceph-float, configured with 10 GB:
$ system controllerfs-add ceph-float=<size>
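Optionally, confirm the new controller filesystem:
$ system controllerfs-list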
Add the host-fs ceph on each controller. Here, the host-fs ceph is configured with 10 GB:
$ system host-fs-add controller-0 ceph=10
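Repeat for the other controller:
$ system host-fs-add controller-1 ceph=10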
To add OSDs, get the UUID of each disk to feed the host-stor-add command:
$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBfb16ffca-28261189 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB92c5f4e7-c1884d99 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc    | 2080       | HDD         | 9.765    | 0.0           | Undetermined | VB4390bf35-c0758bd4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
$ system host-disk-list controller-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB86b2b09b-32be8509 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VBf454c46a-62d4613b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc    | 2080       | HDD         | 9.765    | 0.0           | Undetermined | VB5d1b89fd-3003aa5e | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
Add all the desired disks as OSDs:
# system host-stor-add controller-0 #UUID
$ system host-stor-add controller-0 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| ihost_uuid       | 57a7a41e-7805-406d-b204-2736adc8391d             |
| idisk_uuid       | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:19:41.335302+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
# system host-stor-add controller-1 #UUID
$ system host-stor-add controller-1 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 1                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 13baee21-daad-4266-bfdd-b549837d8b88             |
| ihost_uuid       | 51d26b14-412d-4bf8-b2b0-2fba69026459             |
| idisk_uuid       | 1e36945e-e0fb-4a72-9f96-290f9bf57523             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:18:28.107688+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
Check the progress of the app. With a valid configuration of monitors and OSDs, the app will apply automatically:
$ system application-show rook-ceph
#or
$ system application-list
After the app is applied, the pod list of the rook-ceph namespace should look like this:
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-64z6c 2/2 Running 0 34m
csi-cephfsplugin-dhsqp 2/2 Running 2 (17m ago) 34m
csi-cephfsplugin-gch9g 2/2 Running 0 34m
csi-cephfsplugin-pkzg2 2/2 Running 0 34m
csi-cephfsplugin-provisioner-5467c6c4f-r2lp6 5/5 Running 0 22m
csi-rbdplugin-2vmzf 2/2 Running 2 (17m ago) 34m
csi-rbdplugin-6j69b 2/2 Running 0 34m
csi-rbdplugin-6j8jj 2/2 Running 0 34m
csi-rbdplugin-hwbl7 2/2 Running 0 34m
csi-rbdplugin-provisioner-fd84899c-wwbrz 5/5 Running 0 22m
mon-float-post-install-sw8qb 0/1 Completed 0 6m5s
mon-float-pre-install-nfj5b 0/1 Completed 0 6m40s
rook-ceph-crashcollector-controller-0-6f47c4c9f5-hbbnt 1/1 Running 0 33m
rook-ceph-crashcollector-controller-1-76585f8db8-cb4jl 1/1 Running 0 11m
rook-ceph-exporter-controller-0-c979d9977-kt7tx 1/1 Running 0 33m
rook-ceph-exporter-controller-1-86bc859c4-q4mxd 1/1 Running 0 11m
rook-ceph-mds-kube-cephfs-a-55978b78b9-dcbtf 2/2 Running 0 22m
rook-ceph-mds-kube-cephfs-b-7b8bf4549f-thr7g 2/2 Running 2 (12m ago) 33m
rook-ceph-mgr-a-649cf9c487-vfs65 3/3 Running 0 17m
rook-ceph-mgr-b-d54c5d7cb-qwtnm 3/3 Running 0 33m
rook-ceph-mon-a-5cc7d56767-64dbd 2/2 Running 0 6m30s
rook-ceph-mon-b-6cf5b79f7f-skrtd 2/2 Running 0 6m31s
rook-ceph-mon-float-85c4cbb7f9-k7xwj 2/2 Running 0 6m27s
rook-ceph-operator-69b5674578-lmmdl 1/1 Running 0 22m
rook-ceph-osd-0-847f6f7dd9-6xlln 2/2 Running 0 16m
rook-ceph-osd-1-7cc87df4c4-jlpk9 2/2 Running 0 33m
rook-ceph-osd-prepare-controller-0-4rcd6 0/1 Completed 0 22m
rook-ceph-tools-84659bcd67-r8qbp 1/1 Running 0 22m
stx-ceph-manager-689997b4f4-hk6gh 1/1 Running 0 22m
Installation on Standard with dedicated model, 5 monitors, services: ecblock and cephfs
In this configuration, you can add monitors on 5 hosts and, to fit this deployment in the dedicated model, OSDs will be added on workers only. Compute-1 and compute-2 were chosen to house the cluster OSDs.
On a system with no bare metal Ceph storage backend on it, add a ceph-rook storage backend. To fit in the dedicated model, the OSDs must be placed on dedicated workers only. We will use ecblock instead of block, plus cephfs:
$ system storage-backend-add ceph-rook --deployment dedicated --confirmed --services ecblock,filesystem
Add all the host-fs ceph on the nodes that will house mon, mgr, and mds. In this particular case, 5 hosts will have the host-fs ceph configured:
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
$ system host-fs-add compute-0 ceph=20
$ system host-fs-add compute-1 ceph=20
$ system host-fs-add compute-2 ceph=20
To add OSDs, get the UUID of each disk to feed the host-stor-add command:
$ system host-disk-list compute-1
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| d7023797-68c9-4b3c-8adb-7fc4980e7c0a | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBfb16ffca-28261189 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 9bb0cb55-7eba-426e-a1d3-aba002c7eebc | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB92c5f4e7-c1884d99 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 283359b5-d06f-4e73-a58f-e15f7ea41abd | /dev/sdc    | 2080       | HDD         | 9.765    | 0.0           | Undetermined | VB4390bf35-c0758bd4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
$ system host-disk-list compute-2
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
| 48c0501e-1144-49b8-8579-00d82a3db14f | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB86b2b09b-32be8509 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
| 1e36945e-e0fb-4a72-9f96-290f9bf57523 | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VBf454c46a-62d4613b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| 090c9a7c-67e3-4d92-886c-646ff26418b6 | /dev/sdc    | 2080       | HDD         | 9.765    | 0.0           | Undetermined | VB5d1b89fd-3003aa5e | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
Add all the desired disks as OSDs; here, for the sake of simplicity, only one OSD each on compute-1 and compute-2 will be added:
# system host-stor-add compute-1 #UUID
$ system host-stor-add compute-1 9bb0cb55-7eba-426e-a1d3-aba002c7eebc
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 0                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 0fb88b8b-a134-4754-988a-382c10123fbb             |
| ihost_uuid       | 57a7a41e-7805-406d-b204-2736adc8391d             |
| idisk_uuid       | 9bb0cb55-7eba-426e-a1d3-aba002c7eebc             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:19:41.335302+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
# system host-stor-add compute-2 #UUID
$ system host-stor-add compute-2 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| Property         | Value                                            |
+------------------+--------------------------------------------------+
| osdid            | 1                                                |
| function         | osd                                              |
| state            | configuring-with-app                             |
| journal_location | 13baee21-daad-4266-bfdd-b549837d8b88             |
| journal_size_gib | 1024                                             |
| journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node     | /dev/sdb2                                        |
| uuid             | 13baee21-daad-4266-bfdd-b549837d8b88             |
| ihost_uuid       | 51d26b14-412d-4bf8-b2b0-2fba69026459             |
| idisk_uuid       | 1e36945e-e0fb-4a72-9f96-290f9bf57523             |
| tier_uuid        | 23091432-bf36-4fc3-a314-72b70265e7b0             |
| tier_name        | storage                                          |
| created_at       | 2024-06-24T14:18:28.107688+00:00                 |
| updated_at       | None                                             |
+------------------+--------------------------------------------------+
Check the progress of the app. With a valid configuration of host-fs and OSDs, the app will apply automatically:
$ system application-show rook-ceph
#or
$ system application-list
After the app is applied, the pod list of the rook-ceph namespace should look like this:
$ kubectl get pod -n rook-ceph
NAME READY STATUS RESTARTS AGE
ceph-mgr-provision-2g9pz 0/1 Completed 0 11m
csi-cephfsplugin-4j7l6 2/2 Running 0 11m
csi-cephfsplugin-provisioner-6726cfcc8d-jckzq 5/5 Running 0 11m
csi-rbdplugin-dzdb8 2/2 Running 0 11m
csi-rbdplugin-provisioner-5698784bb8-4t7xw 5/5 Running 0 11m
rook-ceph-crashcollector-controller-0-c496bf9bc-6bc4m 1/1 Running 0 11m
rook-ceph-exporter-controller-0-857698d7cc-9dqn4 1/1 Running 0 10m
rook-ceph-mds-kube-cephfs-a-49c4747797-2snzp 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-b-6fc4b58b08-fzhk6 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-c-12f4b58b1e-fzhk6 2/2 Running 0 11m
rook-ceph-mds-kube-cephfs-d-a6s4d6a8w4-4d64g 2/2 Running 0 11m
rook-ceph-mgr-a-5b86cb5c74-bhp59 2/2 Running 0 11m
rook-ceph-mgr-b-wd12af64t4-dw62i 2/2 Running 0 11m
rook-ceph-mgr-c-s684gs86g4-62srg 2/2 Running 0 11m
rook-ceph-mgr-d-68r4864f64-8a4a6 2/2 Running 0 11m
rook-ceph-mgr-e-as5d4we6f4-6aef4 2/2 Running 0 11m
rook-ceph-mon-a-6976b847f4-c4g6s 2/2 Running 0 11m
rook-ceph-mon-b-464fc6e8a3-fd864 2/2 Running 0 11m
rook-ceph-mon-c-468fc68e4c-6w8sa 2/2 Running 0 11m
rook-ceph-mon-d-8fc5686c4d-5v1w6 2/2 Running 0 11m
rook-ceph-mon-e-21f3c12e3a-6s7qq 2/2 Running 0 11m
rook-ceph-operator-c66b98d94-87t8s 1/1 Running 0 12m
rook-ceph-osd-0-f56c65f6-kccfn 2/2 Running 0 11m
rook-ceph-osd-1-7ff8bc8bc7-7tqhz 2/2 Running 0 11m
rook-ceph-osd-prepare-compute-1-8ge4z 0/1 Completed 0 11m
rook-ceph-osd-prepare-compute-2-s32sz 0/1 Completed 0 11m
rook-ceph-provision-zp4d5 0/1 Completed 0 5m23s
rook-ceph-tools-785644c966-6zxzs 1/1 Running 0 11m
stx-ceph-manager-64d8db7fc4-tgll8 1/1 Running 0 11m
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s