
.. WARNING: Add no lines of text between the label immediately following
.. and the title.

.. _deployment-models-for-rook-ceph-b855bd0108cf:

============================================
Deployment Models and Services for Rook Ceph
============================================

The deployment model is the topology strategy that defines the storage backend
capabilities of the deployment. It dictates how the storage solution is laid
out by defining rules for the placement of the storage cluster elements.

Available Deployment Models
---------------------------

Each deployment model works with different deployment strategies and rules to
fit different needs. Choose one of the following models according to the
demands of your cluster:

Controller Model (default)
    - The |OSDs| must be added only on hosts with the controller personality.
    - The replication factor can be configured up to 3.
    - Can be changed to the Open model.

Dedicated Model
    - The |OSDs| must be added only on hosts with the worker personality.
    - The replication factor can be configured up to 3.
    - Can be changed to the Open model.

Open Model
    - The |OSD| placement does not have any limitation.
    - The replication factor does not have any limitation.
    - Can be changed to the Controller or Dedicated model if the placement
      requirements are satisfied.

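The deployment model is selected when the Rook Ceph storage backend is added.
The following is a minimal sketch only; the backend name and the
``--deployment`` option are assumptions that should be verified against the
command help of your release:

.. code-block:: none

    # Add the Rook Ceph backend with the controller deployment model
    # (option names are assumptions; check "system help storage-backend-add")
    ~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed

    # List the configured storage backends
    ~(keystone_admin)$ system storage-backend-list
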
Replication Factor
------------------

The replication factor is the number of copies of each piece of data that are
spread across the cluster to provide redundancy.

You can change the replication factor of an existing Rook Ceph storage backend
with the following command:

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=<size>
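
For example, to set a replication factor of 3 and confirm the change (the
values are illustrative; pool names depend on the enabled services):

.. code-block:: none

    # Set the replication factor to 3
    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=3

    # Confirm the backend configuration
    ~(keystone_admin)$ system storage-backend-show ceph-rook-store

    # Optionally, confirm the pool replica size from the Ceph side
    ceph osd pool ls detail
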

The possible replication factors per deployment model and platform are:

Simplex Controller Model:
    - Default: 1
    - Max: 3

Simplex Open Model:
    - Default: 1
    - Max: Any

Duplex Controller Model:
    - Default: 2
    - Max: 3

Duplex Open Model:
    - Default: 1
    - Max: Any

Duplex+ or Standard Controller Model:
    - Default: 2
    - Max: 3

Duplex+ or Standard Dedicated Model:
    - Default: 2
    - Max: 3

Duplex+ or Standard Open Model:
    - Default: 2
    - Max: Any

Minimum Replication Factor
**************************

The minimum replication factor is the least number of copies of each piece of
data that must be spread across the cluster to provide redundancy.

You can assign any number smaller than the replication factor to this
parameter. The default value is the replication factor minus 1.

You can change the minimum replication of an existing Rook Ceph storage backend
with the following command:

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=<size>
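
For example, with a replication factor of 3 a minimum replication of 2 can be
configured (the values are illustrative):

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=3
    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=2
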
Monitor Count
*************

Monitors (mons) are allocated on all hosts that have a ``host-fs ceph`` with
the monitor function enabled.

When the host has no |OSD| registered on the platform, you should add a
``host-fs ceph`` on every node intended to house a monitor with the following
command:

.. code-block:: none

    ~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>

When there are |OSDs| registered on a host, you should add the monitor function
to the existing ``host-fs``:

.. code-block:: none

    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd,monitor
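
You can confirm the filesystems and their functions on a host, for example on a
hypothetical host named ``worker-0``:

.. code-block:: none

    # List the host filesystems and their configured functions
    ~(keystone_admin)$ system host-fs-list worker-0
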
Possible Monitor Count on Deployment Models for Platforms
*********************************************************

Simplex:
    - Min: 1
    - Max: 1

Duplex:
    - Min: 1
    - Recommended: 3 (using floating monitor)
    - Max: 3 (using floating monitor)

Duplex+ or Standard:
    - Min: 1
    - Recommended: 3
    - Max: Any

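You can confirm how many monitors are running and in quorum from the Ceph side,
for example:

.. code-block:: none

    # Show the monitor count and quorum membership
    ceph mon stat
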
Floating Monitor (only in Duplex)
*********************************

A floating monitor is possible and recommended on Duplex platforms. The monitor
roams and is always allocated on the active controller.

To add the floating monitor:

.. note::

    You must lock the inactive controller before adding ``controllerfs
    ceph-float`` to the platform.

.. code-block:: none

    # with controller-0 as the active controller
    ~(keystone_admin)$ system host-lock controller-1
    ~(keystone_admin)$ system controllerfs-add ceph-float=<size>
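
After the ``controllerfs`` is added, unlock the controller and reapply the
application so the change is propagated, following the commands already used in
this guide:

.. code-block:: none

    ~(keystone_admin)$ system host-unlock controller-1

    # Reapply the application to propagate the change
    ~(keystone_admin)$ system application-apply rook-ceph

    # Confirm the controller filesystem
    ~(keystone_admin)$ system controllerfs-list
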
Host-fs and controller-fs
-------------------------

To properly set up the environment for Rook Ceph, some filesystems are needed.

.. note::

    All changes to ``host-fs`` and ``controller-fs`` require a reapply of the
    application to properly propagate the modifications to the Rook Ceph
    cluster.

Functions
*********

The functions parameter contains the Ceph cluster functions of a given host. A
``host-fs`` can have the monitor and osd functions; a ``controller-fs`` can
only have the monitor function.

To modify the functions of a ``host-fs``, the complete list of desired
functions must be provided.

Examples:

.. code-block:: none

    # only monitor
    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=monitor

    # only osd
    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd

    # no function
    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=

    # only monitor
    ~(keystone_admin)$ system controllerfs-modify ceph-float --functions=monitor

    # no function
    ~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
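
After changing functions, you can confirm the configuration and reapply the
application so the changes are propagated (``worker-0`` is a hypothetical
hostname):

.. code-block:: none

    # Confirm the configured functions
    ~(keystone_admin)$ system host-fs-list worker-0
    ~(keystone_admin)$ system controllerfs-list

    # Reapply the application to propagate the changes
    ~(keystone_admin)$ system application-apply rook-ceph
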
Services
--------

Services are the storage types (or classes) that provide storage to each pod
through a mount or a storage space allocation.

Available Services
******************

There are four possible services compatible with Rook Ceph. You can combine
them following these rules:

``block`` (default)
    - Cannot be deployed together with ``ecblock``.
    - Enables the block service in Rook, using Ceph RBD.

``ecblock``
    - Cannot be deployed together with ``block``.
    - Enables the ecblock service in Rook, using Ceph RBD.

``filesystem`` (default)
    - Enables the Ceph filesystem service, using CephFS.

``object``
    - Enables the Ceph object store (RGW).
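
After the application is applied, you can confirm which services are enabled
from the Ceph side; for example, the filesystem service appears as a CephFS
filesystem and each service is backed by its own pools:

.. code-block:: none

    # Overall cluster status, including mon, osd, mds and rgw services
    ceph status

    # List CephFS filesystems (filesystem service)
    ceph fs ls

    # List the pools backing the enabled services
    ceph osd pool ls
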
Services Parameterization for the Open Model
********************************************

In the Open deployment model, no specific configurations are enforced. Users
are responsible for customizing settings based on their specific needs. To
update configurations, a Helm override is required.

When applying a helm-override update, list-type values are completely replaced,
not incrementally updated. For example, modifying ``cephFileSystems`` (or
``cephBlockPools``, ``cephECBlockPools``, ``cephObjectStores``) via Helm
override will overwrite the entire entry.

Here is an **example** of how to change a parameter, using ``failureDomain``,
for **Cephfs** and **RBD**:

.. tabs::

    .. group-tab:: Cephfs

        .. code-block:: none

            # Get the current crush rule information
            ceph osd pool get kube-cephfs-data crush_rule

            # Get the current default values
            helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^cephFileSystems:/,/^[[:alnum:]_-]*:/p;' | sed '$d' > cephfs_overrides.yaml

            # Update the failure domain
            sed -i 's/failureDomain: osd/failureDomain: host/g' cephfs_overrides.yaml

            # Get the current user override values ("combined overrides" is what will be deployed)
            system helm-override-show rook-ceph rook-ceph-cluster rook-ceph

            # Set the new overrides
            system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --reuse-values --values cephfs_overrides.yaml

            # Get the updated user override values
            system helm-override-show rook-ceph rook-ceph-cluster rook-ceph

            # Apply the application
            system application-apply rook-ceph

            # Confirm the current crush rule information
            ceph osd pool get kube-cephfs-data crush_rule

    .. group-tab:: RBD

        .. code-block:: none

            # Retrieve the current values and extract the cephBlockPools section
            helm get values -n rook-ceph rook-ceph-cluster -o yaml | sed -n '/^cephBlockPools:/,/^[[:alnum:]_-]*:/p;' | sed '$d' > rbd_overrides.yaml

            # Modify the failureDomain parameter from osd to host in the rbd_overrides.yaml file
            sed -i 's/failureDomain: osd/failureDomain: host/g' rbd_overrides.yaml

            # Set the updated configuration
            system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --reuse-values --values rbd_overrides.yaml

            # Apply the application
            system application-apply rook-ceph
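
As with the Cephfs example, you can confirm the result on the Ceph side. The
block pool name depends on your configuration, so list the pools first:

.. code-block:: none

    # List the pools, then confirm the crush rule of the RBD pool
    ceph osd pool ls
    ceph osd pool get <rbd-pool-name> crush_rule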