.. gzi1565204095452
.. _standard-configuration-with-dedicated-storage:

=============================================
Standard Configuration with Dedicated Storage
=============================================

Deployment of |prod| with dedicated storage nodes provides the highest capacity
\(single region\), performance, and scalability.

.. image:: ../deploy_install_guides/r5_release/figures/starlingx-deployment-options-dedicated-storage.png
    :width: 800

.. note::
    Physical L2 switches are not shown in the deployment diagram in subsequent
    chapters. Only the L2 networks they realize are shown.

See :ref:`Common Components <common-components>` for a description of common
components of this deployment configuration.

The differentiating physical feature of this model is that the controller,
storage, and worker functionalities are deployed on separate physical hosts,
allowing controller nodes, storage nodes, and worker nodes to scale
independently of each other.
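
For illustration only, the separate host personalities in this model can be
seen when listing hosts with the :command:`system` CLI. The host names, counts,
and states below are hypothetical and will differ on a real deployment.

.. code-block:: none

    ~(keystone_admin)$ system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
    | 3  | storage-0    | storage     | unlocked       | enabled     | available    |
    | 4  | storage-1    | storage     | unlocked       | enabled     | available    |
    | 5  | worker-0     | worker      | unlocked       | enabled     | available    |
    | 6  | worker-1     | worker      | unlocked       | enabled     | available    |
    +----+--------------+-------------+----------------+-------------+--------------+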

The controller nodes provide the master function for the system. Two controller
nodes are required to provide redundancy. The controller nodes' server and
peripheral resources such as CPU cores/speed, memory, storage, and network
interfaces can be scaled to meet requirements.

Storage nodes provide a large-scale Ceph cluster as the storage backend for
Kubernetes |PVCs|. They are deployed in replication groups of either two or
three hosts for redundancy. For a system configured with two storage hosts per
replication group, a maximum of eight storage hosts \(four replication groups\)
is supported. For a system with three storage hosts per replication group, up
to nine storage hosts \(three replication groups\) are supported.

The system provides redundancy and scalability through the number of Ceph
|OSDs| installed in a storage node group, with more |OSDs| providing more
capacity and better storage performance. The scalability and performance of the
storage function are affected by the |OSD| size and speed, optional |SSD| or
|NVMe| Ceph journals, CPU cores and speeds, memory, disk controllers, and
networking. |OSDs| can be grouped into storage tiers according to their
performance characteristics.
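
The following sketch shows how |OSDs| might be added to a storage host and how
storage tiers can be listed with the :command:`system` CLI. It is illustrative
only; the host name and the disk UUID placeholder are hypothetical and must be
taken from your own system. See the storage documentation for the authoritative
procedure.

.. code-block:: none

    # List the physical disks on a storage host to identify a free disk.
    ~(keystone_admin)$ system host-disk-list storage-0

    # Add the disk as a Ceph OSD; <disk-uuid> is a placeholder for a UUID
    # reported by the previous command.
    ~(keystone_admin)$ system host-stor-add storage-0 osd <disk-uuid>

    # List the storage tiers configured for the Ceph cluster.
    ~(keystone_admin)$ system storage-tier-list ceph_cluster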

.. note::
    A storage backend is not configured by default. You can use either
    internal Ceph or an external NetApp Trident backend.

.. xreflink For more information,
   see the :ref:`|stor-doc| <storage-configuration-storage-resources>` guide.
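
For context, a minimal sketch of enabling the internal Ceph backend with the
:command:`system` CLI is shown below. This is not the complete procedure;
consult the storage documentation before configuring a backend.

.. code-block:: none

    # Enable the internal Ceph storage backend (sketch only).
    ~(keystone_admin)$ system storage-backend-add ceph --confirmed

    # Confirm the backend state.
    ~(keystone_admin)$ system storage-backend-list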

On worker nodes, the primary disk is used for system requirements and for
container local ephemeral storage.