From 3bd59f9595b3165825b8a373a798d3a8c9f87fdf Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Elisamara=20Aoki=20Gon=C3=A7alves?= Date: Tue, 25 Mar 2025 20:48:05 +0000 Subject: [PATCH] Update docs to mention that Openstack DOES NOT support ceph-rook (r10,ds R10) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Remove mentions of rook-ceph from the Openstack Install Guide. Add warning about rook-ceph not being supported in openstack. Fix merge conflict. Change-Id: I4c120927e1b8e387d38878a5b682e052c9d2b9f3 Signed-off-by: Elisamara Aoki Gonçalves --- .../aio_duplex_install_kubernetes.rst | 251 +++++++------- .../aio_simplex_install_kubernetes.rst | 241 +++++++------- .../controller_storage_install_kubernetes.rst | 308 +++++++++--------- 3 files changed, 419 insertions(+), 381 deletions(-) diff --git a/doc/source/deploy_install_guides/release/bare_metal/aio_duplex_install_kubernetes.rst b/doc/source/deploy_install_guides/release/bare_metal/aio_duplex_install_kubernetes.rst index 9a9079373..cab9c6ec5 100644 --- a/doc/source/deploy_install_guides/release/bare_metal/aio_duplex_install_kubernetes.rst +++ b/doc/source/deploy_install_guides/release/bare_metal/aio_duplex_install_kubernetes.rst @@ -934,18 +934,20 @@ A persistent storage backend is required if your application requires |PVCs|. The StarlingX OpenStack application **requires** |PVCs|. -.. note:: +.. only:: starlingx or platform - Each deployment model enforces a different structure for the Rook Ceph - cluster and its integration with the platform. + There are two options for persistent storage backend: the host-based Ceph + solution and the Rook container-based Ceph solution. -There are two options for persistent storage backend: the host-based Ceph -solution and the Rook container-based Ceph solution. + .. note:: -.. note:: + Host-based Ceph will be deprecated and removed in an upcoming release. + Adoption of Rook-Ceph is recommended for new deployments. - Host-based Ceph will be deprecated and removed in an upcoming release. - Adoption of Rook-Ceph is recommended for new deployments. +.. warning:: + + Currently |prod-os| does not support rook-ceph. If you plan on using + |prod-os|, choose host-based Ceph. For host-based Ceph: @@ -969,32 +971,39 @@ For host-based Ceph: # List OSD storage devices ~(keystone_admin)$ system host-stor-list controller-0 -For Rook-Ceph: +.. only:: starlingx or platform -#. Add Storage-Backend with Deployment Model. + For Rook-Ceph: - .. code-block:: none + .. 
note::

-      ~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller
-      ~(keystone_admin)$ system storage-backend-list
-      +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
-      | uuid                                 | name            | backend   | state                | task     | services         | capabilities                                         |
-      +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
-      | 45e3fedf-c386-4b8b-8405-882038dd7d13 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: controller  replication: 2         |
-      |                                      |                 |           |                      |          |                  | min_replication: 1                                   |
-      +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
+      Each deployment model enforces a different structure for the Rook Ceph
+      cluster and its integration with the platform.

-#. Set up a ``contorllerfs ceph-float`` filesystem.
+   #. Add Storage-Backend with Deployment Model.

-   .. code-block:: none
+      .. code-block:: none

-      ~(keystone_admin)$ system controllerfs-add ceph-float=20
+         ~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller
+         ~(keystone_admin)$ system storage-backend-list
+         +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
+         | uuid                                 | name            | backend   | state                | task     | services         | capabilities                                         |
+         +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
+         | 45e3fedf-c386-4b8b-8405-882038dd7d13 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: controller  replication: 2         |
+         |                                      |                 |           |                      |          |                  | min_replication: 1                                   |
+         +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+

-#. Set up a ``host-fs ceph`` filesystem on controller-0.
+      #. Set up a ``controllerfs ceph-float`` filesystem.

-   .. code-block:: none
+         .. code-block:: none

-      ~(keystone_admin)$ system host-fs-add controller-0 ceph=20
+            ~(keystone_admin)$ system controllerfs-add ceph-float=20
+
+      #. Set up a ``host-fs ceph`` filesystem on controller-0.
+
+         .. code-block:: none
+
+            ~(keystone_admin)$ system host-fs-add controller-0 ceph=20


-------------------

@@ -1459,13 +1468,15 @@ For host-based Ceph:

      # List OSD storage devices
      ~(keystone_admin)$ system host-stor-list controller-1

-For Rook-Ceph:
+.. only:: starlingx or platform

-#. Set up a ``host-fs ceph`` filesystem on controller-1.
+   For Rook-Ceph:

-   .. code-block:: none
+   #. Set up a ``host-fs ceph`` filesystem on controller-1.

-      ~(keystone_admin)$ system host-fs-add controller-1 ceph=20
+      .. code-block:: none
+
+         ~(keystone_admin)$ system host-fs-add controller-1 ceph=20

-------------------
Unlock controller-1
-------------------

@@ -1481,117 +1492,119 @@
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
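+
+To confirm that controller-1 has come back into service before continuing,
+you can watch the host states with the standard ``system`` CLI used
+throughout this guide; for example:
+
+.. code-block:: none
+
+   # controller-1 is ready when it reports unlocked/enabled/available
+   ~(keystone_admin)$ system host-list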
-------------------------------------------------------------------- -If configuring Rook Ceph Storage Backend, configure the environment -------------------------------------------------------------------- +.. only:: starlingx or platform -#. Check if the rook-ceph app is uploaded. + ------------------------------------------------------------------- + If configuring Rook Ceph Storage Backend, configure the environment + ------------------------------------------------------------------- - .. code-block:: none + #. Check if the rook-ceph app is uploaded. - ~(keystone_admin)$ source /etc/platform/openrc - ~(keystone_admin)$ system application-list - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - | application | version | manifest name | manifest file | status | progress | - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - | cert-manager | 24.09-76 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | - | dell-storage | 24.09-25 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | deployment-manager | 24.09-13 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | - | nginx-ingress-controller | 24.09-57 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed | - | oidc-auth-apps | 24.09-53 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | platform-integ-apps | 24.09-138 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | rook-ceph | 24.09-12 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + .. code-block:: none -#. List all the disks. + ~(keystone_admin)$ source /etc/platform/openrc + ~(keystone_admin)$ system application-list + +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + | application | version | manifest name | manifest file | status | progress | + +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + | cert-manager | 24.09-76 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | + | dell-storage | 24.09-25 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + | deployment-manager | 24.09-13 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | + | nginx-ingress-controller | 24.09-57 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed | + | oidc-auth-apps | 24.09-53 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + | platform-integ-apps | 24.09-138 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + | rook-ceph | 24.09-12 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - .. code-block:: none + #. List all the disks. 
- ~(keystone_admin)$ system host-disk-list controller-0 - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | 7ce699f0-12dd-4416-ae43-00d3877450f7 | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB0e18230e-6a8780e1 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | - | bfb83b6f-61e2-4f9f-a87d-ecae938b7e78 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB144f1510-14f089fd | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | - | 937cfabc-8447-4dbd-8ca3-062a46953023 | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB95057d1c-4ee605c2 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + .. code-block:: none - (keystone_admin)]$ system host-disk-list controller-1 - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | 52c8e1b5-0551-4748-a7a0-27b9c028cf9d | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB9b565509-a2edaa2e | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | - | 93020ce0-249e-4db3-b8c3-6c7e8f32713b | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VBa08ccbda-90190faa | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | - | dc0ec403-67f8-40bf-ada0-6fcae3ed76da | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB16244caf-ab36d36c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + ~(keystone_admin)$ system host-disk-list controller-0 + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | 7ce699f0-12dd-4416-ae43-00d3877450f7 | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB0e18230e-6a8780e1 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | + | bfb83b6f-61e2-4f9f-a87d-ecae938b7e78 | /dev/sdb | 2064 | HDD | 9.765 | 9.765 | Undetermined | VB144f1510-14f089fd | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | + | 937cfabc-8447-4dbd-8ca3-062a46953023 | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB95057d1c-4ee605c2 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | + 
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

-#. Choose empty disks and provide hostname and uuid to finish |OSD|
-   configuration:

+         ~(keystone_admin)$ system host-disk-list controller-1
+         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
+         | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
+         | 52c8e1b5-0551-4748-a7a0-27b9c028cf9d | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB9b565509-a2edaa2e | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
+         | 93020ce0-249e-4db3-b8c3-6c7e8f32713b | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VBa08ccbda-90190faa | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+         | dc0ec403-67f8-40bf-ada0-6fcae3ed76da | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VB16244caf-ab36d36c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

-   .. code-block:: none
+      #. Choose empty disks and provide the hostname and UUID to finish the
+         |OSD| configuration:

-      ~(keystone_admin)$ system host-stor-add controller-0 osd bfb83b6f-61e2-4f9f-a87d-ecae938b7e78
-      ~(keystone_admin)$ system host-stor-add controller-1 osd 93020ce0-249e-4db3-b8c3-6c7e8f32713b
+         .. code-block:: none

-#. Wait for |OSDs| pod to be ready.
+            ~(keystone_admin)$ system host-stor-add controller-0 osd bfb83b6f-61e2-4f9f-a87d-ecae938b7e78
+            ~(keystone_admin)$ system host-stor-add controller-1 osd 93020ce0-249e-4db3-b8c3-6c7e8f32713b

+      #. Wait for the |OSD| pods to be ready.
- $ kubectl get pods -n rook-ceph - NAME READY STATUS RESTARTS AGE - ceph-mgr-provision-w55rh 0/1 Completed 0 10m - csi-cephfsplugin-8j7xz 2/2 Running 1 (11m ago) 12m - csi-cephfsplugin-lmmg2 2/2 Running 0 12m - csi-cephfsplugin-provisioner-5467c6c4f-mktqg 5/5 Running 0 12m - csi-rbdplugin-8m8kd 2/2 Running 1 (11m ago) 12m - csi-rbdplugin-provisioner-fd84899c-kpv4q 5/5 Running 0 12m - csi-rbdplugin-z92sk 2/2 Running 0 12m - mon-float-post-install-sw8qb 0/1 Completed 0 6m5s - mon-float-pre-install-nfj5b 0/1 Completed 0 6m40s - rook-ceph-crashcollector-controller-0-589f5f774-sp6zf 1/1 Running 0 7m49s - rook-ceph-crashcollector-controller-1-68d66b9bff-zwgp9 1/1 Running 0 7m36s - rook-ceph-exporter-controller-0-5fd477bb8-jgsdk 1/1 Running 0 7m44s - rook-ceph-exporter-controller-1-6f5d8695b9-ndksh 1/1 Running 0 7m32s - rook-ceph-mds-kube-cephfs-a-5f584f4bc-tbk8q 2/2 Running 0 7m49s - rook-ceph-mgr-a-6845774cb5-lgjjd 3/3 Running 0 9m1s - rook-ceph-mgr-b-7fccfdf64d-4pcmc 3/3 Running 0 9m1s - rook-ceph-mon-a-69fd4895c7-2lfz4 2/2 Running 0 11m - rook-ceph-mon-b-7fd8cbb997-f84ng 2/2 Running 0 11m - rook-ceph-mon-float-85c4cbb7f9-k7xwj 2/2 Running 0 6m27s - rook-ceph-operator-69b5674578-z456r 1/1 Running 0 13m - rook-ceph-osd-0-5f59b5bb7b-mkwrg 2/2 Running 0 8m17s - rook-ceph-osd-prepare-controller-0-rhjgx 0/1 Completed 0 8m38s - rook-ceph-provision-5glpc 0/1 Completed 0 6m17s - rook-ceph-tools-7dc9678ccb-nmwwc 1/1 Running 0 12m - stx-ceph-manager-664f8585d8-5lt8c 1/1 Running 0 10m + .. code-block:: none -#. Check ceph cluster health. + $ kubectl get pods -n rook-ceph + NAME READY STATUS RESTARTS AGE + ceph-mgr-provision-w55rh 0/1 Completed 0 10m + csi-cephfsplugin-8j7xz 2/2 Running 1 (11m ago) 12m + csi-cephfsplugin-lmmg2 2/2 Running 0 12m + csi-cephfsplugin-provisioner-5467c6c4f-mktqg 5/5 Running 0 12m + csi-rbdplugin-8m8kd 2/2 Running 1 (11m ago) 12m + csi-rbdplugin-provisioner-fd84899c-kpv4q 5/5 Running 0 12m + csi-rbdplugin-z92sk 2/2 Running 0 12m + mon-float-post-install-sw8qb 0/1 Completed 0 6m5s + mon-float-pre-install-nfj5b 0/1 Completed 0 6m40s + rook-ceph-crashcollector-controller-0-589f5f774-sp6zf 1/1 Running 0 7m49s + rook-ceph-crashcollector-controller-1-68d66b9bff-zwgp9 1/1 Running 0 7m36s + rook-ceph-exporter-controller-0-5fd477bb8-jgsdk 1/1 Running 0 7m44s + rook-ceph-exporter-controller-1-6f5d8695b9-ndksh 1/1 Running 0 7m32s + rook-ceph-mds-kube-cephfs-a-5f584f4bc-tbk8q 2/2 Running 0 7m49s + rook-ceph-mgr-a-6845774cb5-lgjjd 3/3 Running 0 9m1s + rook-ceph-mgr-b-7fccfdf64d-4pcmc 3/3 Running 0 9m1s + rook-ceph-mon-a-69fd4895c7-2lfz4 2/2 Running 0 11m + rook-ceph-mon-b-7fd8cbb997-f84ng 2/2 Running 0 11m + rook-ceph-mon-float-85c4cbb7f9-k7xwj 2/2 Running 0 6m27s + rook-ceph-operator-69b5674578-z456r 1/1 Running 0 13m + rook-ceph-osd-0-5f59b5bb7b-mkwrg 2/2 Running 0 8m17s + rook-ceph-osd-prepare-controller-0-rhjgx 0/1 Completed 0 8m38s + rook-ceph-provision-5glpc 0/1 Completed 0 6m17s + rook-ceph-tools-7dc9678ccb-nmwwc 1/1 Running 0 12m + stx-ceph-manager-664f8585d8-5lt8c 1/1 Running 0 10m - .. code-block:: none + #. Check ceph cluster health. - $ ceph -s - cluster: - id: c18dfe3a-9b72-46e4-bb6e-6984f131598f - health: HEALTH_OK + .. 
code-block:: none - services: - mon: 2 daemons, quorum a,b (age 9m) - mgr: a(active, since 6m), standbys: b - mds: 1/1 daemons up, 1 hot standby - osd: 2 osds: 2 up (since 7m), 2 in (since 7m) + $ ceph -s + cluster: + id: c18dfe3a-9b72-46e4-bb6e-6984f131598f + health: HEALTH_OK - data: - volumes: 1/1 healthy - pools: 4 pools, 113 pgs - objects: 25 objects, 594 KiB - usage: 72 MiB used, 19 GiB / 20 GiB avail - pgs: 113 active+clean + services: + mon: 2 daemons, quorum a,b (age 9m) + mgr: a(active, since 6m), standbys: b + mds: 1/1 daemons up, 1 hot standby + osd: 2 osds: 2 up (since 7m), 2 in (since 7m) - io: - client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr + data: + volumes: 1/1 healthy + pools: 4 pools, 113 pgs + objects: 25 objects, 594 KiB + usage: 72 MiB used, 19 GiB / 20 GiB avail + pgs: 113 active+clean + + io: + client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr -.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest + .. include:: /_includes/bootstrapping-and-deploying-starlingx.rest .. _extend-dx-with-workers: diff --git a/doc/source/deploy_install_guides/release/bare_metal/aio_simplex_install_kubernetes.rst b/doc/source/deploy_install_guides/release/bare_metal/aio_simplex_install_kubernetes.rst index 6690b85e3..a48ce6a93 100644 --- a/doc/source/deploy_install_guides/release/bare_metal/aio_simplex_install_kubernetes.rst +++ b/doc/source/deploy_install_guides/release/bare_metal/aio_simplex_install_kubernetes.rst @@ -842,9 +842,9 @@ Optionally Configure PCI-SRIOV Interfaces :end-before: end-config-controller-0-OS-k8s-sriov-sx -************************************************************ -Optional - Initialize a Ceph-rook Persistent Storage Backend -************************************************************ +******************************************************* +Optional - Initialize a Ceph Persistent Storage Backend +******************************************************* A persistent storage backend is required if your application requires |PVCs|. @@ -855,18 +855,20 @@ A persistent storage backend is required if your application requires The StarlingX OpenStack application **requires** |PVCs|. -.. note:: +.. only:: starlingx or platform - Each deployment model enforces a different structure for the Rook Ceph - cluster and its integration with the platform. + There are two options for persistent storage backend: the host-based Ceph + solution and the Rook container-based Ceph solution. -There are two options for persistent storage backend: the host-based Ceph -solution and the Rook container-based Ceph solution. + .. note:: -.. note:: + Host-based Ceph will be deprecated and removed in an upcoming release. + Adoption of Rook-Ceph is recommended for new deployments. - Host-based Ceph will be deprecated and removed in an upcoming release. - Adoption of Rook-Ceph is recommended for new deployments. +.. warning:: + + Currently |prod-os| does not support rook-ceph. If you plan on using + |prod-os|, choose host-based Ceph. For host-based Ceph: @@ -890,77 +892,84 @@ For host-based Ceph: # List OSD storage devices ~(keystone_admin)$ system host-stor-list controller-0 -For Rook-Ceph: +.. only:: starlingx or platform -#. Check if the rook-ceph app is uploaded. + For Rook-Ceph: - .. code-block:: none + .. 
note:: - $ source /etc/platform/openrc - $ system application-list - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - | application | version | manifest name | manifest file | status | progress | - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - | cert-manager | 24.09-76 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | - | dell-storage | 24.09-25 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | deployment-manager | 24.09-13 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | - | nginx-ingress-controller | 24.09-57 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed | - | oidc-auth-apps | 24.09-53 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | platform-integ-apps | 24.09-138 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | rook-ceph | 24.09-12 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + Each deployment model enforces a different structure for the Rook Ceph + cluster and its integration with the platform. -#. Add Storage-Backend with Deployment Model. + #. Check if the rook-ceph app is uploaded. - There are three deployment models: Controller, Dedicated, and Open. + .. code-block:: none - For the simplex and duplex environments you can use the Controller and Open - configuration. + $ source /etc/platform/openrc + $ system application-list + +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + | application | version | manifest name | manifest file | status | progress | + +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + | cert-manager | 24.09-76 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | + | dell-storage | 24.09-25 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + | deployment-manager | 24.09-13 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | + | nginx-ingress-controller | 24.09-57 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed | + | oidc-auth-apps | 24.09-53 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + | platform-integ-apps | 24.09-138 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + | rook-ceph | 24.09-12 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed | + +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - Controller (default) - |OSDs| must only be added to host with controller personality set. + #. Add Storage-Backend with Deployment Model. - Replication factor is limited to a maximum of 2. + There are three deployment models: Controller, Dedicated, and Open. - Dedicated - |OSDs| must be added only to hosts with the worker personality. + For the simplex and duplex environments you can use the Controller and Open + configuration. - The replication factor is limited to a maximum of 3. 
+ Controller (default) + |OSDs| must only be added to host with controller personality set. - This model aligns with existing Bare-metal Ceph use of dedicated storage - hosts in groups of 2 or 3. + Replication factor is limited to a maximum of 2. - Open - |OSDs| can be added to any host without limitations. + Dedicated + |OSDs| must be added only to hosts with the worker personality. - Replication factor has no limitations. + The replication factor is limited to a maximum of 3. - Application Strategies for deployment model controller. + This model aligns with existing Bare-metal Ceph use of dedicated storage + hosts in groups of 2 or 3. - Simplex - |OSDs|: Added to controller nodes. + Open + |OSDs| can be added to any host without limitations. - Replication Factor: Default 1, maximum 2. + Replication factor has no limitations. - MON, MGR, MDS: Configured based on the number of hosts where the - ``host-fs ceph`` is available. + Application Strategies for deployment model controller. - .. code-block:: none + Simplex + |OSDs|: Added to controller nodes. - $ system storage-backend-add ceph-rook --deployment controller --confirmed - $ system storage-backend-list - +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+ - | uuid | name | backend | state | task | services | capabilities | - +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+ - | a2452e47-4b2b-4a3a-a8f0-fb749d92d9cd | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block, | deployment_model: controller replication: | - | | | | | | filesystem | 1 min_replication: 1 | - +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+ + Replication Factor: Default 1, maximum 2. -#. Set up a ``host-fs ceph`` filesystem. + MON, MGR, MDS: Configured based on the number of hosts where the + ``host-fs ceph`` is available. - .. code-block:: none + .. code-block:: none - $ system host-fs-add controller-0 ceph=20 + $ system storage-backend-add ceph-rook --deployment controller --confirmed + $ system storage-backend-list + +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+ + | uuid | name | backend | state | task | services | capabilities | + +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+ + | a2452e47-4b2b-4a3a-a8f0-fb749d92d9cd | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block, | deployment_model: controller replication: | + | | | | | | filesystem | 1 min_replication: 1 | + +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+ + + #. Set up a ``host-fs ceph`` filesystem. + + .. code-block:: none + + $ system host-fs-add controller-0 ceph=20 .. incl-config-controller-0-openstack-specific-aio-simplex-end: @@ -980,76 +989,78 @@ Controller-0 will reboot in order to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine. -For Rook-Ceph: +.. only:: starlingx or platform -#. List all the disks. 
+   For Rook-Ceph:

-   .. code-block:: none
+   #. List all the disks.

-      $ system host-disk-list controller-0
-      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
-      | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
-      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
-      | 17408af3-e211-4e2b-8cf1-d2b6687476d5 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBba52ec56-f68a9f2d | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
-      | cee99187-dac4-4a7b-8e58-f2d5bd48dcaf | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VBf96fa322-597194da | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
-      | 0c6435af-805a-4a62-ad8e-403bf916f5cf | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VBeefed5ad-b4815f0d | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
-      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
+      .. code-block:: none

-#. Choose empty disks and provide hostname and uuid to finish |OSD|
-   configuration:
+         $ system host-disk-list controller-0
+         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
+         | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
+         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
+         | 17408af3-e211-4e2b-8cf1-d2b6687476d5 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBba52ec56-f68a9f2d | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
+         | cee99187-dac4-4a7b-8e58-f2d5bd48dcaf | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VBf96fa322-597194da | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
+         | 0c6435af-805a-4a62-ad8e-403bf916f5cf | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VBeefed5ad-b4815f0d | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
+         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

-   .. code-block:: none
+      #. Choose empty disks and provide the hostname and UUID to finish the
+         |OSD| configuration:

-      $ system host-stor-add controller-0 osd cee99187-dac4-4a7b-8e58-f2d5bd48dcaf
+         .. code-block:: none

-#. Wait for |OSDs| pod to be ready.
+            $ system host-stor-add controller-0 osd cee99187-dac4-4a7b-8e58-f2d5bd48dcaf

+      #. Wait for the |OSD| pods to be ready.
- $ kubectl get pods -n rook-ceph - NAME READY STATUS RESTARTS AGE - ceph-mgr-provision-78xjk 0/1 Completed 0 4m31s - csi-cephfsplugin-572jc 2/2 Running 0 5m32s - csi-cephfsplugin-provisioner-5467c6c4f-t8x8d 5/5 Running 0 5m28s - csi-rbdplugin-2npb6 2/2 Running 0 5m32s - csi-rbdplugin-provisioner-fd84899c-k8wcw 5/5 Running 0 5m32s - rook-ceph-crashcollector-controller-0-589f5f774-d8sjz 1/1 Running 0 3m24s - rook-ceph-exporter-controller-0-5fd477bb8-c7nxh 1/1 Running 0 3m21s - rook-ceph-mds-kube-cephfs-a-cc647757-6p9j5 2/2 Running 0 3m25s - rook-ceph-mds-kube-cephfs-b-5b5845ff59-xprbb 2/2 Running 0 3m19s - rook-ceph-mgr-a-746fc4dd54-t8bcw 2/2 Running 0 4m40s - rook-ceph-mon-a-b6c95db97-f5fqq 2/2 Running 0 4m56s - rook-ceph-operator-69b5674578-27bn4 1/1 Running 0 6m26s - rook-ceph-osd-0-7f5cd957b8-ppb99 2/2 Running 0 3m52s - rook-ceph-osd-prepare-controller-0-vzq2d 0/1 Completed 0 4m18s - rook-ceph-provision-zcs89 0/1 Completed 0 101s - rook-ceph-tools-7dc9678ccb-v2gps 1/1 Running 0 6m2s - stx-ceph-manager-664f8585d8-wzr4v 1/1 Running 0 4m31s + .. code-block:: none -#. Check ceph cluster health. + $ kubectl get pods -n rook-ceph + NAME READY STATUS RESTARTS AGE + ceph-mgr-provision-78xjk 0/1 Completed 0 4m31s + csi-cephfsplugin-572jc 2/2 Running 0 5m32s + csi-cephfsplugin-provisioner-5467c6c4f-t8x8d 5/5 Running 0 5m28s + csi-rbdplugin-2npb6 2/2 Running 0 5m32s + csi-rbdplugin-provisioner-fd84899c-k8wcw 5/5 Running 0 5m32s + rook-ceph-crashcollector-controller-0-589f5f774-d8sjz 1/1 Running 0 3m24s + rook-ceph-exporter-controller-0-5fd477bb8-c7nxh 1/1 Running 0 3m21s + rook-ceph-mds-kube-cephfs-a-cc647757-6p9j5 2/2 Running 0 3m25s + rook-ceph-mds-kube-cephfs-b-5b5845ff59-xprbb 2/2 Running 0 3m19s + rook-ceph-mgr-a-746fc4dd54-t8bcw 2/2 Running 0 4m40s + rook-ceph-mon-a-b6c95db97-f5fqq 2/2 Running 0 4m56s + rook-ceph-operator-69b5674578-27bn4 1/1 Running 0 6m26s + rook-ceph-osd-0-7f5cd957b8-ppb99 2/2 Running 0 3m52s + rook-ceph-osd-prepare-controller-0-vzq2d 0/1 Completed 0 4m18s + rook-ceph-provision-zcs89 0/1 Completed 0 101s + rook-ceph-tools-7dc9678ccb-v2gps 1/1 Running 0 6m2s + stx-ceph-manager-664f8585d8-wzr4v 1/1 Running 0 4m31s - .. code-block:: none + #. Check ceph cluster health. - $ ceph -s - cluster: - id: 75c8f017-e7b8-4120-a9c1-06f38e1d1aa3 - health: HEALTH_OK + .. code-block:: none - services: - mon: 1 daemons, quorum a (age 32m) - mgr: a(active, since 30m) - mds: 1/1 daemons up, 1 hot standby - osd: 1 osds: 1 up (since 30m), 1 in (since 31m) + $ ceph -s + cluster: + id: 75c8f017-e7b8-4120-a9c1-06f38e1d1aa3 + health: HEALTH_OK - data: - volumes: 1/1 healthy - pools: 4 pools, 113 pgs - objects: 22 objects, 595 KiB - usage: 27 MiB used, 9.7 GiB / 9.8 GiB avail - pgs: 113 active+clean + services: + mon: 1 daemons, quorum a (age 32m) + mgr: a(active, since 30m) + mds: 1/1 daemons up, 1 hot standby + osd: 1 osds: 1 up (since 30m), 1 in (since 31m) - io: - client: 852 B/s rd, 1 op/s rd, 0 op/s wr + data: + volumes: 1/1 healthy + pools: 4 pools, 113 pgs + objects: 22 objects, 595 KiB + usage: 27 MiB used, 9.7 GiB / 9.8 GiB avail + pgs: 113 active+clean + + io: + client: 852 B/s rd, 1 op/s rd, 0 op/s wr .. 
incl-unlock-controller-0-aio-simplex-end: diff --git a/doc/source/deploy_install_guides/release/bare_metal/controller_storage_install_kubernetes.rst b/doc/source/deploy_install_guides/release/bare_metal/controller_storage_install_kubernetes.rst index b6e905b13..184dfcfa0 100644 --- a/doc/source/deploy_install_guides/release/bare_metal/controller_storage_install_kubernetes.rst +++ b/doc/source/deploy_install_guides/release/bare_metal/controller_storage_install_kubernetes.rst @@ -758,6 +758,11 @@ host machine. If configuring host based Ceph Storage Backend, Add Ceph OSDs to controllers ---------------------------------------------------------------------------- +.. warning:: + + Currently |prod-os| does not support rook-ceph. If you plan on using + |prod-os|, choose host-based Ceph. + .. only:: starlingx .. tabs:: @@ -829,196 +834,205 @@ Complete system configuration by reviewing procedures in: - |index-sysconf-kub-78f0e1e9ca5a| - |index-admintasks-kub-ebc55fefc368| +.. only:: starlingx or platform -******************************************************************* -If configuring Rook Ceph Storage Backend, configure the environment -******************************************************************* + ******************************************************************* + If configuring Rook Ceph Storage Backend, configure the environment + ******************************************************************* -.. note:: + .. warning:: - Each deployment model enforces a different structure for the Rook Ceph - cluster and its integration with the platform. + Currently |prod-os| does not support rook-ceph. If you plan on using + |prod-os|, choose host-based Ceph. -#. Check if the rook-ceph app is uploaded. + .. note:: - .. code-block:: none + Each deployment model enforces a different structure for the Rook Ceph + cluster and its integration with the platform. - $ source /etc/platform/openrc - $ system application-list - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - | application | version | manifest name | manifest file | status | progress | - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ - | cert-manager | 24.09-76 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | - | dell-storage | 24.09-25 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | deployment-manager | 24.09-13 | deployment-manager-fluxcd-manifests | fluxcd-manifests | applied | completed | - | nginx-ingress-controller | 24.09-57 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed | - | oidc-auth-apps | 24.09-53 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | platform-integ-apps | 24.09-138 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - | rook-ceph | 24.09-12 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed | - +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+ + #. Check if the rook-ceph app is uploaded. -#. Add Storage-Backend with Deployment Model. + .. code-block:: none - There are three deployment models: Controller, Dedicated, and Open. 
+         $ source /etc/platform/openrc
+         $ system application-list
+         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
+         | application              | version   | manifest name                             | manifest file    | status   | progress  |
+         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
+         | cert-manager             | 24.09-76  | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
+         | dell-storage             | 24.09-25  | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
+         | deployment-manager       | 24.09-13  | deployment-manager-fluxcd-manifests       | fluxcd-manifests | applied  | completed |
+         | nginx-ingress-controller | 24.09-57  | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
+         | oidc-auth-apps           | 24.09-53  | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
+         | platform-integ-apps      | 24.09-138 | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | uploaded | completed |
+         | rook-ceph                | 24.09-12  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |
+         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+

+   #. Add Storage-Backend with Deployment Model.

+      There are three deployment models: Controller, Dedicated, and Open.

-   For the simplex and duplex environments you can use the Controller and Open
-   configuration.
+      For the simplex and duplex environments you can use the Controller and Open
+      configuration.
+
+      Controller (default)
+          |OSDs| must only be added to hosts with the controller personality
+          set.

-   Controller (default)
-       |OSDs| must only be added to host with controller personality set.
+          Replication factor is limited to a maximum of 2.

-   Replication factor is limited to a maximum of 2.
+          This model aligns with the existing Bare-metal Ceph assignment of
+          OSDs to controllers.

-   Dedicated
-       |OSDs| must be added only to hosts with the worker personality.
+      Dedicated
+          |OSDs| must be added only to hosts with the worker personality.

-   The replication factor is limited to a maximum of 3.
+          The replication factor is limited to a maximum of 3.

-   This model aligns with existing Bare-metal Ceph use of dedicated storage
-   hosts in groups of 2 or 3.
+          This model aligns with existing Bare-metal Ceph use of dedicated storage
+          hosts in groups of 2 or 3.

-   Open
-       |OSDs| can be added to any host without limitations.
+      Open
+          |OSDs| can be added to any host without limitations.

-   Replication factor has no limitations.
+          Replication factor has no limitations.

-   Application Strategies for deployment model controller.
+      Application strategies for the controller deployment model:

-   Duplex, Duplex+ or Standard
-       |OSDs|: Added to controller nodes.
+      Duplex, Duplex+ or Standard
+          |OSDs|: Added to controller nodes.
- $ system storage-backend-add ceph-rook --deployment open --confirmed - $ system storage-backend-list - +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+ - | uuid | name | backend | state | task | services | capabilities | - +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+ - | 0dfef1f0-a5a4-4b20-a013-ef76e92bcd42 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: open replication: 2 | - | | | | | | | min_replication: 1 | - +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+ + Replication Factor: Default 1, maximum 'Any'. -#. Set up a ``host-fs ceph`` filesystem. + .. code-block:: none - .. code-block:: none + $ system storage-backend-add ceph-rook --deployment open --confirmed + $ system storage-backend-list + +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+ + | uuid | name | backend | state | task | services | capabilities | + +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+ + | 0dfef1f0-a5a4-4b20-a013-ef76e92bcd42 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: open replication: 2 | + | | | | | | | min_replication: 1 | + +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+ - $ system host-fs-add controller-0 ceph=20 - $ system host-fs-add controller-1 ceph=20 - $ system host-fs-add compute-0 ceph=20 + #. Set up a ``host-fs ceph`` filesystem. -#. List all the disks. + .. code-block:: none - .. 
code-block:: none + $ system host-fs-add controller-0 ceph=20 + $ system host-fs-add controller-1 ceph=20 + $ system host-fs-add compute-0 ceph=20 - $ system host-disk-list controller-0 - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | 7f2b9ff5-b6ee-4eaf-a7eb-cecd3ba438fd | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB3e6c5449-c7224b07 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | - | fdaf3f71-a2df-4b40-9e70-335900f953a3 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB323207f8-b6b9d531 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | - | ced60373-0dbc-4bc7-9d03-657c1f92164a | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB49833b9d-a22a2455 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - $ system host-disk-list controller-1 - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | 119533a5-bc66-47e0-a448-f0561871989e | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VBb1b06a09-6137c63a | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | - | 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB5fcf59a9-7c8a531b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | - | 7351013f-8280-4ff3-88bd-76e88f14fa2f | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB0d1ce946-d0a172c4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - $ system host-disk-list compute-0 - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - | 14245695-46df-43e8-b54b-9fb3c22ac359 | /dev/sda | 2048 | HDD | 292. 
| 0.0 | Undetermined | VB8ac41a93-82275093 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | - | 765d8dff-e584-4064-9c95-6ea3aa25473c | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB569d6dab-9ae3e6af | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | - | c9b4ed65-da32-4770-b901-60b56fd68c35 | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VBf88762a8-9aa3315c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | - +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + #. List all the disks. -#. Choose empty disks and provide hostname and uuid to finish |OSD| - configuration: + .. code-block:: none - .. code-block:: none + $ system host-disk-list controller-0 + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | 7f2b9ff5-b6ee-4eaf-a7eb-cecd3ba438fd | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VB3e6c5449-c7224b07 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | + | fdaf3f71-a2df-4b40-9e70-335900f953a3 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB323207f8-b6b9d531 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | + | ced60373-0dbc-4bc7-9d03-657c1f92164a | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB49833b9d-a22a2455 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + $ system host-disk-list controller-1 + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | 119533a5-bc66-47e0-a448-f0561871989e | /dev/sda | 2048 | HDD | 292.968 | 0.0 | Undetermined | VBb1b06a09-6137c63a | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | + | 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251 | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB5fcf59a9-7c8a531b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | + | 7351013f-8280-4ff3-88bd-76e88f14fa2f | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VB0d1ce946-d0a172c4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + $ system host-disk-list compute-0 + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path | + 
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ + | 14245695-46df-43e8-b54b-9fb3c22ac359 | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined | VB8ac41a93-82275093 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 | + | 765d8dff-e584-4064-9c95-6ea3aa25473c | /dev/sdb | 2064 | HDD | 9.765 | 0.0 | Undetermined | VB569d6dab-9ae3e6af | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 | + | c9b4ed65-da32-4770-b901-60b56fd68c35 | /dev/sdc | 2080 | HDD | 9.765 | 9.761 | Undetermined | VBf88762a8-9aa3315c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 | + +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+ - $ system host-stor-add controller-0 osd fdaf3f71-a2df-4b40-9e70-335900f953a3 - $ system host-stor-add controller-1 osd 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251 - $ system host-stor-add compute-0 osd c9b4ed65-da32-4770-b901-60b56fd68c35 -#. Apply the rook-ceph application. + #. Choose empty disks and provide hostname and uuid to finish |OSD| + configuration: - .. code-block:: none + .. code-block:: none - $ system application-apply rook-ceph + $ system host-stor-add controller-0 osd fdaf3f71-a2df-4b40-9e70-335900f953a3 + $ system host-stor-add controller-1 osd 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251 + $ system host-stor-add compute-0 osd c9b4ed65-da32-4770-b901-60b56fd68c35 + #. Apply the rook-ceph application. -#. Wait for |OSDs| pod to be ready. + .. code-block:: none - .. code-block:: none + $ system application-apply rook-ceph - $ kubectl get pods -n rook-ceph - NAME READY STATUS RESTARTS AGE - ceph-mgr-provision-nh6dl 0/1 Completed 0 18h - csi-cephfsplugin-2nnwf 2/2 Running 10 (3h9m ago) 18h - csi-cephfsplugin-flbll 2/2 Running 14 (3h42m ago) 18h - csi-cephfsplugin-provisioner-5467c6c4f-98fxk 5/5 Running 5 (4h7m ago) 18h - csi-cephfsplugin-zzskz 2/2 Running 17 (168m ago) 18h - csi-rbdplugin-42ldl 2/2 Running 17 (168m ago) 18h - csi-rbdplugin-8xzxz 2/2 Running 14 (3h42m ago) 18h - csi-rbdplugin-b6dvk 2/2 Running 10 (3h9m ago) 18h - csi-rbdplugin-provisioner-fd84899c-6795x 5/5 Running 5 (4h7m ago) 18h - rook-ceph-crashcollector-compute-0-59f554f6fc-5s5cz 1/1 Running 0 4m19s - rook-ceph-crashcollector-controller-0-589f5f774-b2297 1/1 Running 0 3h2m - rook-ceph-crashcollector-controller-1-68d66b9bff-njrhg 1/1 Running 1 (4h7m ago) 18h - rook-ceph-exporter-compute-0-569b65cf6c-xhfjk 1/1 Running 0 4m14s - rook-ceph-exporter-controller-0-5fd477bb8-rzkqd 1/1 Running 0 3h2m - rook-ceph-exporter-controller-1-6f5d8695b9-772rb 1/1 Running 1 (4h7m ago) 18h - rook-ceph-mds-kube-cephfs-a-654c56d89d-mdklw 2/2 Running 11 (166m ago) 18h - rook-ceph-mds-kube-cephfs-b-6c498f5db4-5hbcj 2/2 Running 2 (166m ago) 3h2m - rook-ceph-mgr-a-5d6664f544-rgfpn 3/3 Running 9 (3h42m ago) 18h - rook-ceph-mgr-b-5c4cb984b9-cl4qq 3/3 Running 0 168m - rook-ceph-mgr-c-7d89b6cddb-j9hxp 3/3 Running 0 3h9m - rook-ceph-mon-a-6ffbf95cdf-cvw8r 2/2 Running 0 3h9m - rook-ceph-mon-b-5558b5ddc7-h7nhz 2/2 Running 2 (4h7m ago) 18h - rook-ceph-mon-c-6db9c888cb-mfxfh 2/2 Running 0 167m - rook-ceph-operator-69b5674578-k6k4j 1/1 Running 0 8m10s - rook-ceph-osd-0-dd94574ff-dvrrs 2/2 Running 2 (4h7m ago) 18h - rook-ceph-osd-1-5d7f598f8f-88t2j 2/2 Running 0 3h9m - rook-ceph-osd-2-6776d44476-sqnlj 2/2 Running 0 4m20s - rook-ceph-osd-prepare-compute-0-ls2xw 0/1 Completed 0 5m16s - 
rook-ceph-osd-prepare-controller-0-jk6bz 0/1 Completed 0 5m27s - rook-ceph-osd-prepare-controller-1-d845s 0/1 Completed 0 5m21s - rook-ceph-provision-vtvc4 0/1 Completed 0 17h - rook-ceph-tools-7dc9678ccb-srnd8 1/1 Running 1 (4h7m ago) 18h - stx-ceph-manager-664f8585d8-csl7p 1/1 Running 1 (4h7m ago) 18h + #. Wait for |OSDs| pod to be ready. -#. Check ceph cluster health. + .. code-block:: none - .. code-block:: none + $ kubectl get pods -n rook-ceph + NAME READY STATUS RESTARTS AGE + ceph-mgr-provision-nh6dl 0/1 Completed 0 18h + csi-cephfsplugin-2nnwf 2/2 Running 10 (3h9m ago) 18h + csi-cephfsplugin-flbll 2/2 Running 14 (3h42m ago) 18h + csi-cephfsplugin-provisioner-5467c6c4f-98fxk 5/5 Running 5 (4h7m ago) 18h + csi-cephfsplugin-zzskz 2/2 Running 17 (168m ago) 18h + csi-rbdplugin-42ldl 2/2 Running 17 (168m ago) 18h + csi-rbdplugin-8xzxz 2/2 Running 14 (3h42m ago) 18h + csi-rbdplugin-b6dvk 2/2 Running 10 (3h9m ago) 18h + csi-rbdplugin-provisioner-fd84899c-6795x 5/5 Running 5 (4h7m ago) 18h + rook-ceph-crashcollector-compute-0-59f554f6fc-5s5cz 1/1 Running 0 4m19s + rook-ceph-crashcollector-controller-0-589f5f774-b2297 1/1 Running 0 3h2m + rook-ceph-crashcollector-controller-1-68d66b9bff-njrhg 1/1 Running 1 (4h7m ago) 18h + rook-ceph-exporter-compute-0-569b65cf6c-xhfjk 1/1 Running 0 4m14s + rook-ceph-exporter-controller-0-5fd477bb8-rzkqd 1/1 Running 0 3h2m + rook-ceph-exporter-controller-1-6f5d8695b9-772rb 1/1 Running 1 (4h7m ago) 18h + rook-ceph-mds-kube-cephfs-a-654c56d89d-mdklw 2/2 Running 11 (166m ago) 18h + rook-ceph-mds-kube-cephfs-b-6c498f5db4-5hbcj 2/2 Running 2 (166m ago) 3h2m + rook-ceph-mgr-a-5d6664f544-rgfpn 3/3 Running 9 (3h42m ago) 18h + rook-ceph-mgr-b-5c4cb984b9-cl4qq 3/3 Running 0 168m + rook-ceph-mgr-c-7d89b6cddb-j9hxp 3/3 Running 0 3h9m + rook-ceph-mon-a-6ffbf95cdf-cvw8r 2/2 Running 0 3h9m + rook-ceph-mon-b-5558b5ddc7-h7nhz 2/2 Running 2 (4h7m ago) 18h + rook-ceph-mon-c-6db9c888cb-mfxfh 2/2 Running 0 167m + rook-ceph-operator-69b5674578-k6k4j 1/1 Running 0 8m10s + rook-ceph-osd-0-dd94574ff-dvrrs 2/2 Running 2 (4h7m ago) 18h + rook-ceph-osd-1-5d7f598f8f-88t2j 2/2 Running 0 3h9m + rook-ceph-osd-2-6776d44476-sqnlj 2/2 Running 0 4m20s + rook-ceph-osd-prepare-compute-0-ls2xw 0/1 Completed 0 5m16s + rook-ceph-osd-prepare-controller-0-jk6bz 0/1 Completed 0 5m27s + rook-ceph-osd-prepare-controller-1-d845s 0/1 Completed 0 5m21s + rook-ceph-provision-vtvc4 0/1 Completed 0 17h + rook-ceph-tools-7dc9678ccb-srnd8 1/1 Running 1 (4h7m ago) 18h + stx-ceph-manager-664f8585d8-csl7p 1/1 Running 1 (4h7m ago) 18h - $ ceph -s - cluster: - id: 5b579aca-617f-4f2a-b059-73e7071111dc - health: HEALTH_OK + #. Check ceph cluster health. - services: - mon: 3 daemons, quorum a,b,c (age 2h) - mgr: a(active, since 2h), standbys: c, b - mds: 1/1 daemons up, 1 hot standby - osd: 3 osds: 3 up (since 82s), 3 in (since 2m) + .. 
code-block:: none - data: - volumes: 1/1 healthy - pools: 4 pools, 113 pgs - objects: 26 objects, 648 KiB - usage: 129 MiB used, 29 GiB / 29 GiB avail - pgs: 110 active+clean - 2 active+clean+scrubbing+deep - 1 active+clean+scrubbing + $ ceph -s + cluster: + id: 5b579aca-617f-4f2a-b059-73e7071111dc + health: HEALTH_OK - io: - client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr + services: + mon: 3 daemons, quorum a,b,c (age 2h) + mgr: a(active, since 2h), standbys: c, b + mds: 1/1 daemons up, 1 hot standby + osd: 3 osds: 3 up (since 82s), 3 in (since 2m) + + data: + volumes: 1/1 healthy + pools: 4 pools, 113 pgs + objects: 26 objects, 648 KiB + usage: 129 MiB used, 29 GiB / 29 GiB avail + pgs: 110 active+clean + 2 active+clean+scrubbing+deep + 1 active+clean+scrubbing + + io: + client: 1.2 KiB/s rd, 2 op/s rd, 0 op/s wr .. end-content \ No newline at end of file
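
Whichever backend is configured, |PVC| provisioning can be smoke-tested once
the application is applied by creating a test claim against one of the
storage classes the backend creates. Class names vary by backend and release,
so list them first; ``general`` below is only a placeholder for whatever
``kubectl get storageclass`` reports.

.. code-block:: none

   # List the storage classes created by the backend
   $ kubectl get storageclass

   # Create a 1Gi test claim (replace 'general' with a listed class)
   $ cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: pvc-test
   spec:
     accessModes:
     - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: general
   EOF

   # The claim should reach 'Bound' once the backend is healthy
   $ kubectl get pvc pvc-test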