Merge "Memory dimensioning guidance (r10,dsR10 minor)"

Zuul 2025-05-05 16:19:08 +00:00 committed by Gerrit Code Review
commit 3d39d4db58
4 changed files with 23 additions and 9 deletions

View File

@ -0,0 +1,2 @@
.. details-cpu-memory-start
.. details-cpu-memory-end

View File

@ -38,6 +38,14 @@ commonly used in the |org| community and in this documentation.
|CSM| Observability
   An OpenTelemetry agent that collects array-level metrics for Dell storage.

|CRUSH|
   The |CRUSH| algorithm computes storage locations in order to determine how
   to store and retrieve data. |CRUSH| allows Ceph clients to communicate with
   |OSDs| directly rather than through a centralized server or broker. By
   using an algorithmically-determined method of storing and retrieving data,
   Ceph avoids a single point of failure, a performance bottleneck, and a
   physical limit to its scalability.

Data Network(s)
   Networks attached to pci-passthrough and/or sriov interfaces that are made
   available to hosted containers or hosted |VMs| for pci-passthrough and/or |SRIOV|
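As an editorial illustration of the |CRUSH| entry added above (not part of this change), the placement that |CRUSH| computes can be inspected from any Ceph client with the ``ceph osd map`` command; the pool and object names below are placeholders only:

.. code-block:: none

   # Ask CRUSH to compute the placement group and acting OSD set for an object.
   # "kube-rbd" and "example-object" are illustrative names, not required values.
   $ ceph osd map kube-rbd example-object

Because the mapping is derived algorithmically from the cluster map, every client arrives at the same placement without consulting a central broker.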

View File

@ -37,6 +37,7 @@
.. |CRs| replace:: :abbr:`CRs (Custom Resources)`
.. |CRD| replace:: :abbr:`CRD (Custom Resource Definition)`
.. |CRDs| replace:: :abbr:`CRDs (Custom Resource Definitions)`
.. |CRUSH| replace:: :abbr:`CRUSH (Controlled Replication Under Scalable Hashing)`
.. |CSI| replace:: :abbr:`CSI (Container Storage Interface)`
.. |CSIs| replace:: :abbr:`CSIs (Container Storage Interfaces)`
.. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
@ -113,6 +114,7 @@
.. |MAC| replace:: :abbr:`MAC (Media Access Control)`
.. |MDS| replace:: :abbr:`MDS (MetaData Server for cephfs)`
.. |MEC| replace:: :abbr:`MEC (Multi-access Edge Computing)`
.. |MGR| replace:: :abbr:`MGR (Ceph Manager)`
.. |MLD| replace:: :abbr:`MLD (Multicast Listener Discovery)`
.. |ML| replace:: :abbr:`ML (Machine Learning)`
.. |MNFA| replace:: :abbr:`MNFA (Multi-Node Failure Avoidance)`
@ -175,6 +177,7 @@
.. |PW| replace:: :abbr:`PW (Per Worker)`
.. |QAT| replace:: :abbr:`QAT (QuickAssist Technology)`
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RADOS| replace:: :abbr:`RADOS (Reliable Autonomous Distributed Object Store)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RAN| replace:: :abbr:`RAN (Radio Access Network)`
.. |RAPL| replace:: :abbr:`RAPL (Running Average Power Limit)`

View File

@ -12,20 +12,20 @@ configuration adjustments to ensure optimal performance. Rook introduces
additional management overhead compared to a traditional bare-metal Ceph setup
and needs more infrastructure resources.
Consequently, increasing the number of platform cores will improve I/O performance for
|OSD|, monitor and |MDS| pods.

For more information on Ceph hardware recommendations, see the documentation at
`Hardware Recommendations
<https://docs.ceph.com/en/reef/start/hardware-recommendations>`__.
Increasing the number of |OSDs| in the cluster can also improve performance, reducing
the load on individual disks and enhancing throughput.
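As an editorial illustration (not part of this change), the load and capacity distribution across individual |OSDs| can be reviewed before and after adding |OSDs| with the standard Ceph CLI, assuming access to a host or pod where the ``ceph`` client is available and authorized:

.. code-block:: none

   # Show utilization, PG count and variance per OSD; heavily loaded individual
   # disks suggest that adding OSDs may reduce per-disk load.
   $ ceph osd df tree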
Regarding memory, it is important to emphasize that Ceph's default memory target
per |OSD| is 4GB, and we do not recommend decreasing it below 4GB. However, the
system can work with as little as 2GB.
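As a hedged sketch (not part of this change), the per-|OSD| memory target can be checked and, if necessary, adjusted at runtime with the standard Ceph CLI. This assumes access to an authorized ``ceph`` client; the value shown is the 4GB default expressed in bytes:

.. code-block:: none

   # Show the currently configured per-OSD memory target (in bytes).
   $ ceph config get osd osd_memory_target

   # Keep the recommended default of 4GB (4294967296 bytes); lower values are
   # possible but not recommended, as noted above.
   $ ceph config set osd osd_memory_target 4294967296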
Another factor to consider is the size of the data blocks. Reading and writing
small block files can degrade Ceph's performance, especially during
high-frequency operations.
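To make the block-size effect concrete, a quick comparison can be run with ``rados bench`` (an illustrative sketch, not part of this change; the pool name and durations are placeholders, and benchmarks should not be run against production pools):

.. code-block:: none

   # Write with a small 4KB block size, then with the default 4MB block size,
   # and compare the reported throughput. "testpool" is a placeholder pool name.
   $ rados bench -p testpool 30 write -b 4096 --no-cleanup
   $ rados bench -p testpool 30 write -b 4194304 --no-cleanup

   # Remove the benchmark objects afterwards.
   $ rados -p testpool cleanup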
.. only:: partner

   .. include:: /_includes/performance-configurations-rook-ceph-9e719a652b02.rest
      :start-after: details-cpu-memory-start
      :end-before: details-cpu-memory-end
Pod resource limit tuning
-------------------------
@ -69,6 +69,7 @@ Finally, apply the Rook-Ceph application:
~(keystone_admin)$ system application-apply rook-ceph
.. _bluestore-tunable-parameters:
Bluestore tunable parameters
----------------------------