.. rde1565203741901
.. _deployment-and-configuration-options-standard-configuration-with-controller-storage:

==============================================
Standard Configuration with Controller Storage
==============================================

|prod| supports a small-scale deployment option that uses a small Ceph
cluster, deployed on the controller nodes rather than on dedicated storage
nodes, as a backend for Kubernetes |PVCs|.

.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-controller-storage.png
   :width: 800

See :ref:`Common Components <common-components>` for a description of the
common components of this deployment configuration.

This deployment configuration consists of a two-node HA controller+storage
cluster managing up to 200 worker nodes. The limit on the size of the worker
node pool is due to the performance and latency characteristics of the small
integrated Ceph cluster on the controller+storage nodes.
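
For reference, the :command:`system host-list` command shows the node
inventory for a deployment of this type; the hostnames and counts below are
illustrative only.

.. code-block:: none

   $ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
   | 3  | worker-0     | worker      | unlocked       | enabled     | available    |
   | 4  | worker-1     | worker      | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+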

This configuration optionally uses dedicated physical disks configured on each
controller+storage node as Ceph |OSDs|. The typical solution uses one primary
disk for platform system purposes, with subsequent disks used as Ceph |OSDs|.
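
As a sketch of how these disks are assigned, Ceph |OSDs| are added with the
:command:`system` CLI, as in the installation guides; the disk UUID below is a
placeholder.

.. code-block:: none

   # List the physical disks on controller-0 and choose an unused disk.
   $ system host-disk-list controller-0

   # Assign the chosen disk, by UUID, as a Ceph OSD.
   # Repeat for each additional disk and for controller-1.
   $ system host-stor-add controller-0 osd <disk-uuid>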

Optionally, instead of using an internal Ceph cluster across the controllers,
you can configure an external NetApp Trident storage backend.
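
Configuring Trident on |prod| is beyond the scope of this section. As a
generic illustration only, Trident backends can be declared with Trident's
``TridentBackendConfig`` resource; every value below is a placeholder.

.. code-block:: none

   $ kubectl apply -f - <<EOF
   apiVersion: trident.netapp.io/v1
   kind: TridentBackendConfig
   metadata:
     name: backend-ontap-nas
     namespace: trident
   spec:
     version: 1
     storageDriverName: ontap-nas
     managementLIF: 10.10.10.1        # placeholder ONTAP management address
     svm: svm_nfs                     # placeholder SVM name
     credentials:
       name: backend-ontap-nas-secret # Secret holding ONTAP credentials
   EOF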

On worker nodes, the primary disk is used for system requirements and for
container local ephemeral storage.

HA controller services run across the two controller+storage nodes in either
Active/Active or Active/Standby mode. The two-node Ceph cluster on the
controller+storage nodes provides HA storage through |OSD| replication between
the two nodes.
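
A quick way to confirm that the two-node Ceph cluster is healthy and that both
|OSDs| are up is the standard Ceph status command; the output below is a
trimmed, illustrative sample.

.. code-block:: none

   $ ceph -s
     cluster:
       health: HEALTH_OK
     services:
       osd: 2 osds: 2 up, 2 in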

In the event of an overall controller+storage node failure, all controller HA
services become active on the remaining healthy controller+storage node, and
the above-mentioned nodal Ceph replication protects the Kubernetes |PVCs|.
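
As a minimal sketch of one such claim, assuming the Ceph-backed storage class
is named ``general`` (confirm the name on your system with
:command:`kubectl get storageclass`):

.. code-block:: none

   $ kubectl apply -f - <<EOF
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: demo-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
     storageClassName: general   # assumed Ceph-backed class name
   EOF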

In the event of an overall worker node failure, hosted application containers
on the failed worker node are recovered on the remaining healthy worker nodes.