
.. WARNING: Add no lines of text between the label immediately following
   and the title.

.. _deployment-models-for-rook-ceph-b855bd0108cf:

============================================
Deployment Models and Services for Rook Ceph
============================================

The deployment model is the topology strategy that defines the storage backend
capabilities of the deployment. It determines what the storage solution looks
like by defining rules for the placement of the storage cluster elements.

Available Deployment Models
---------------------------

Each deployment model applies different deployment strategies and rules to fit
different needs. Choose one of the following models according to the demands of
your cluster:

Controller Model (default)
    - |OSDs| must be added only on hosts with the controller personality.
    - The replication factor can be configured up to 3.
    - Can be swapped to the Open Model.

Dedicated Model
    - |OSDs| must be added only on hosts with the worker personality.
    - The replication factor can be configured up to 3.
    - Can be swapped to the Open Model.

Open Model
    - |OSD| placement has no restrictions.
    - The replication factor has no restrictions.
    - Can be swapped to the Controller or Dedicated Model if the placement
      requirements are satisfied.

Replication Factor
------------------

The replication factor is the number of copies of each piece of data kept
across the cluster to provide redundancy.

You can change the replication factor of an existing Rook Ceph storage backend
with the following command:

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=<size>

Possible replication factors on deployment models for each platform type:

Simplex Controller Model:
    - Default: 1
    - Max: 3

Simplex Open Model:
    - Default: 1
    - Max: Any

Duplex Controller Model:
    - Default: 2
    - Max: 3

Duplex Open Model:
    - Default: 1
    - Max: Any

Duplex+ or Standard Controller Model:
    - Default: 2
    - Max: 3

Duplex+ or Standard Dedicated Model:
    - Default: 2
    - Max: 3

Duplex+ or Standard Open Model:
    - Default: 2
    - Max: Any
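
For example, on a Duplex platform using the Controller model, you could raise
the replication factor to its maximum:

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store replication=3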

Minimum Replication Factor
**************************

The minimum replication factor is the least number of copies of each piece of
data that must be spread across the cluster to provide redundancy.

You can assign any number smaller than the replication factor to this
parameter. The default value is replication - 1.

You can change the minimum replication of an existing Rook Ceph storage backend
with the following command:

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=<size>
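
To confirm the configured replication values, you can inspect the backend
(assuming it is named ``ceph-rook-store``, as in the commands above):

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-show ceph-rook-store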

Monitor Count
*************

Monitors (mons) are allocated on all hosts that have a ``host-fs ceph`` with
the monitor capability enabled.

When a host has no |OSD| registered on the platform, add a ``host-fs ceph`` on
every node intended to house a monitor with the command:

.. code-block:: none

    ~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>

When there are |OSDs| registered on a host, add the monitor function to the
existing ``host-fs``:

.. code-block:: none

    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd,monitor

Possible Monitor Count on Deployment Models for Platforms
*********************************************************

Simplex:
    - Min: 1
    - Max: 1

Duplex:
    - Min: 1
    - Recommended: 3 (using a floating monitor)
    - Max: 3 (using a floating monitor)

Duplex+ or Standard:
    - Min: 1
    - Recommended: 3
    - Max: Any

Floating Monitor (only in Duplex)
*********************************

A floating monitor is possible, and recommended, on Duplex platforms. The
monitor roams and is always allocated on the active controller.

To add the floating monitor:

.. note::

    You should lock the inactive controller before adding ``controllerfs
    ceph-float`` to the platform.

.. code-block:: none

    # with controller-0 as the active controller
    ~(keystone_admin)$ system host-lock controller-1
    ~(keystone_admin)$ system controllerfs-add ceph-float=<size>
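
After the ``ceph-float`` filesystem is added, unlock the controller again and
verify that the new controller filesystem is present:

.. code-block:: none

    ~(keystone_admin)$ system host-unlock controller-1
    ~(keystone_admin)$ system controllerfs-list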

Host-fs and controller-fs
-------------------------

To properly set up the environment for Rook Ceph, some filesystems are needed.

.. note::

    All changes to ``host-fs`` and ``controller-fs`` require a reapply of the
    application to properly propagate the modifications to the Rook Ceph
    cluster.
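
For example, assuming the application is named ``rook-ceph``, a reapply looks
like:

.. code-block:: none

    ~(keystone_admin)$ system application-apply rook-ceph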

Functions
*********

The functions parameter contains the Ceph cluster function of a given host. A
``host-fs`` can have the monitor and osd functions, while a ``controller-fs``
can only have the monitor function.

To modify the functions of a ``host-fs``, the complete list of desired
functions must be provided.

Examples:

.. code-block:: none

    # only monitor
    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=monitor

    # only osd
    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd

    # no function
    ~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=

    # only monitor
    ~(keystone_admin)$ system controllerfs-modify ceph-float --functions=monitor

    # no function
    ~(keystone_admin)$ system controllerfs-modify ceph-float --functions=

Services
--------

Services are the storage types (or classes) that provide storage to each pod
through a mount or an allocation of storage space.

Available Services
******************

There are four services compatible with Rook Ceph. You can combine them
according to the following rules:

``block`` (default)
    - Cannot be deployed together with ``ecblock``.
    - Enables the block service in Rook, using Ceph RBD.

``ecblock``
    - Cannot be deployed together with ``block``.
    - Enables the ecblock service in Rook, using Ceph RBD.

``filesystem`` (default)
    - Enables the Ceph filesystem service, using CephFS.

``object``
    - Enables the Ceph object store (RGW).
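
As an illustrative sketch, the enabled services could be changed on an existing
backend (the ``services`` parameter name is assumed here and may vary between
releases):

.. code-block:: none

    ~(keystone_admin)$ system storage-backend-modify ceph-rook-store services=block,filesystem,object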