diff --git a/doc/source/_includes/performance-configurations-rook-ceph-9e719a652b02.rest b/doc/source/_includes/performance-configurations-rook-ceph-9e719a652b02.rest
new file mode 100644
index 000000000..fd597dd6b
--- /dev/null
+++ b/doc/source/_includes/performance-configurations-rook-ceph-9e719a652b02.rest
@@ -0,0 +1,2 @@
+.. details-cpu-memory-start
+.. details-cpu-memory-end
\ No newline at end of file
diff --git a/doc/source/introduction/terms.rst b/doc/source/introduction/terms.rst
index b174694f0..2bd76ec36 100644
--- a/doc/source/introduction/terms.rst
+++ b/doc/source/introduction/terms.rst
@@ -38,6 +38,14 @@ commonly used in the |org| community and in this documentation.
 |CSM| Observability
     An OpenTelemetry agent that collects array-level metrics for Dell storage.
 
+|CRUSH|
+    The |CRUSH| algorithm computes storage locations to determine how to
+    store and retrieve data. |CRUSH| allows Ceph clients to communicate with
+    |OSDs| directly rather than through a centralized server or broker. By
+    using an algorithmically determined method of storing and retrieving
+    data, Ceph avoids a single point of failure, a performance bottleneck,
+    and a physical limit to its scalability.
+
 Data Network(s)
     Networks attached to pci-passthrough and/or sriov interfaces that are made
     available to hosted containers or hosted |VMs| for pci-passthrough and/or |SRIOV|
diff --git a/doc/source/shared/abbrevs.txt b/doc/source/shared/abbrevs.txt
index 0f4311743..47c9e2c3e 100755
--- a/doc/source/shared/abbrevs.txt
+++ b/doc/source/shared/abbrevs.txt
@@ -37,6 +37,7 @@
 .. |CRs| replace:: :abbr:`CRs (Custom Resources)`
 .. |CRD| replace:: :abbr:`CRD (Custom Resource Definition)`
 .. |CRDs| replace:: :abbr:`CRDs (Custom Resource Definitions)`
+.. |CRUSH| replace:: :abbr:`CRUSH (Controlled Replication Under Scalable Hashing)`
 .. |CSI| replace:: :abbr:`CSI (Container Storage Interface)`
 .. |CSIs| replace:: :abbr:`CSIs (Container Storage Interfaces)`
 .. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
@@ -113,6 +114,7 @@
 .. |MAC| replace:: :abbr:`MAC (Media Access Control)`
 .. |MDS| replace:: :abbr:`MDS (MetaData Server for cephfs)`
 .. |MEC| replace:: :abbr:`MEC (Multi-access Edge Computing)`
+.. |MGR| replace:: :abbr:`MGR (Ceph Manager)`
 .. |MLD| replace:: :abbr:`MLD (Multicast Listener Discovery)`
 .. |ML| replace:: :abbr:`ML (Machine Learning)`
 .. |MNFA| replace:: :abbr:`MNFA (Multi-Node Failure Avoidance)`
@@ -175,6 +177,7 @@
 .. |PW| replace:: :abbr:`PW (Per Worker)`
 .. |QAT| replace:: :abbr:`QAT (QuickAssist Technology)`
 .. |QoS| replace:: :abbr:`QoS (Quality of Service)`
+.. |RADOS| replace:: :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`
 .. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
 .. |RAN| replace:: :abbr:`RAN (Radio Access Network)`
 .. |RAPL| replace:: :abbr:`RAPL (Running Average Power Limit)`
diff --git a/doc/source/storage/kubernetes/performance-configurations-rook-ceph-9e719a652b02.rst b/doc/source/storage/kubernetes/performance-configurations-rook-ceph-9e719a652b02.rst
index 73c397d64..5ecd8659a 100644
--- a/doc/source/storage/kubernetes/performance-configurations-rook-ceph-9e719a652b02.rst
+++ b/doc/source/storage/kubernetes/performance-configurations-rook-ceph-9e719a652b02.rst
@@ -12,20 +12,29 @@ configuration adjustments to ensure optimal performance.
 Rook introduces additional management overhead compared to a traditional
 bare-metal Ceph setup and needs more infrastructure resources.
 
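+For example, you can gauge this overhead on a running system by checking the
+actual CPU and memory consumption of the Rook-Ceph pods. The command below is
+an illustrative sketch; it assumes the default ``rook-ceph`` namespace and
+that the Kubernetes metrics-server is available:
+
+::
+
+   ~(keystone_admin)$ kubectl -n rook-ceph top pods
+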
-Consequently, increasing the number of platform cores will improve I/O performance for
-|OSD|, monitor and |MDS| pods.
-
-Increasing the number of |OSDs| in the cluster can also improve performance, reducing
-the load on individual disks and enhancing throughput.
-
-When we talk about memory, it's important to emphasize that Ceph's default for
-the |OSD| is 4GB, and we do not recommend decreasing it below 4GB. However, the
-system could work with only 2GB.
+For more information on Ceph hardware recommendations, see the documentation at
+`Hardware Recommendations
+<https://docs.ceph.com/en/latest/start/hardware-recommendations/>`__.
 
 Another factor to consider is the size of the data blocks. Reading and writing
 small block files can degrade Ceph's performance, especially during
 high-frequency operations.
 
+.. only:: partner
+
+   .. include:: /_includes/performance-configurations-rook-ceph-9e719a652b02.rest
+      :start-after: details-cpu-memory-start
+      :end-before: details-cpu-memory-end
+
 Pod resource limit tuning
 -------------------------
 
@@ -69,6 +69,16 @@ Finally, apply the Rook-Ceph application:
 
    ~(keystone_admin)$ system application-apply rook-ceph
 
+.. _bluestore-tunable-parameters:
 
 Bluestore tunable parameters
 ----------------------------
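+
+As a quick runtime check before tuning, you can query the current value of a
+Bluestore tunable such as ``osd_memory_target`` through the Ceph toolbox. This
+is an illustrative sketch; it assumes the Rook toolbox deployment
+(``rook-ceph-tools``) is present in the default ``rook-ceph`` namespace:
+
+::
+
+   ~(keystone_admin)$ kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph config get osd osd_memory_target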