
Standard Configuration with Controller Storage
This configuration supports a small-scale deployment option, using a small Ceph cluster deployed on the controller nodes as the storage back-end for Kubernetes instead of dedicated storage nodes.
Note
Physical L2 switches are not shown in the deployment diagram in subsequent chapters. Only the L2 networks they support are shown.
See Common Components <common-components> for a description of the common components of this deployment configuration.
This deployment configuration consists of a two-node HA controller+storage cluster managing up to 200 worker nodes. The limit on the size of the worker node pool is due to the performance and latency characteristics of the small integrated Ceph cluster on the controller+storage nodes.
This configuration uses dedicated physical disks configured on each controller+storage node as Ceph OSDs. The primary disk is used by the platform for system purposes and subsequent disks are used for Ceph OSDs.
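For illustration only, the following minimal sketch shows how a secondary disk might be assigned as a Ceph OSD through the StarlingX system CLI, wrapped in Python. The host name controller-0, the placeholder disk UUID, and the exact CLI arguments are assumptions and may differ by release; consult the installation guides for the authoritative procedure::

    import subprocess

    def system_cli(*args):
        """Run a StarlingX 'system' CLI command and return its text output."""
        result = subprocess.run(["system", *args], check=True,
                                capture_output=True, text=True)
        return result.stdout

    # List the physical disks on a controller+storage node. The primary disk
    # is reserved for platform/system use; additional disks can be assigned
    # to Ceph.
    print(system_cli("host-disk-list", "controller-0"))

    # Placeholder: copy the UUID of a secondary disk from the listing above.
    disk_uuid = "<uuid-of-secondary-disk>"

    # Assign the secondary disk as a Ceph OSD on controller-0; the same step
    # would be repeated on controller-1.
    system_cli("host-stor-add", "controller-0", "osd", disk_uuid)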
On worker nodes, the primary disk is used for system requirements and for container-local ephemeral storage.
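As a hedged illustration of container-local ephemeral storage, the sketch below builds a pod specification with the Kubernetes Python client in which an emptyDir volume is backed by the worker node's local disk. The pod name, image, and size limit are placeholders, not values from this document::

    from kubernetes import client

    # Pod specification with an emptyDir volume; emptyDir is allocated from
    # the worker node's local (primary) disk and its contents are lost if
    # the pod is rescheduled to another worker node.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="scratch-demo"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/app:latest",  # placeholder image
                    volume_mounts=[
                        client.V1VolumeMount(name="scratch", mount_path="/scratch"),
                    ],
                ),
            ],
            volumes=[
                client.V1Volume(
                    name="scratch",
                    empty_dir=client.V1EmptyDirVolumeSource(size_limit="1Gi"),
                ),
            ],
        ),
    )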
HA controller services run across the two controller+storage nodes in either Active/Active or Active/Standby mode. The two-node Ceph cluster on the controller+storage nodes provides HA storage through replication between the two nodes.
In the event of an overall controller+storage node failure, all controller HA services become active on the remaining healthy controller+storage node, and the above-mentioned nodal Ceph replication protects the Kubernetes persistent volume storage.
In the event of an overall worker node failure, hosted application containers on the failed worker node are recovered on the remaining healthy worker nodes.
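The sketch below shows one hypothetical way to verify recovery after a worker node failure using the Kubernetes Python client: it reports node readiness and shows which nodes the pods of an assumed demo-app namespace have been rescheduled to. The namespace and the kubeconfig location are assumptions, not part of this configuration::

    from kubernetes import client, config

    # Assumes admin access and a kubeconfig in the default location.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Report which nodes are currently Ready.
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}: Ready={ready}")

    # Show where the pods of the assumed 'demo-app' namespace are running;
    # after a worker node failure they should reappear on the surviving
    # worker nodes once Kubernetes reschedules them.
    for pod in v1.list_namespaced_pod("demo-app").items:
        print(f"{pod.metadata.name} -> {pod.spec.node_name} ({pod.status.phase})")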