Memory dimensioning guidance (r10,dsR10 minor)

Improve the initial text to make it clearer about resource usage in
Performance Configurations on Rook Ceph. Add acronyms. Add glossary entry
for CRUSH.

Change-Id: I675bee30ac4ae7187af328f590503533bee4cdba
Signed-off-by: Elisamara Aoki Gonçalves <elisamaraaoki.goncalves@windriver.com>

parent 6a773b1cba
commit 10978f94be

@@ -0,0 +1,2 @@
.. details-cpu-memory-start
.. details-cpu-memory-end

@@ -38,6 +38,14 @@ commonly used in the |org| community and in this documentation.

|CSM| Observability
   An OpenTelemetry agent that collects array-level metrics for Dell storage.

|CRUSH|
   The |CRUSH| algorithm computes storage locations in order to determine how
   to store and retrieve data. |CRUSH| allows Ceph clients to communicate with
   |OSDs| directly rather than through a centralized server or broker. By
   using an algorithmically-determined method of storing and retrieving data,
   Ceph avoids a single point of failure, a performance bottleneck, and a
   physical limit to its scalability.
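
For illustration, assuming a running Ceph cluster and a hypothetical pool
named ``kube-rbd``, the placement that |CRUSH| computes for an object can be
inspected directly from the CLI, with no broker involved:

.. code-block:: none

   # Ask Ceph which placement group and OSDs CRUSH maps this object to;
   # the mapping is computed algorithmically, nothing is written.
   ~(keystone_admin)$ ceph osd map kube-rbd some-object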

Data Network(s)
   Networks attached to pci-passthrough and/or sriov interfaces that are made
   available to hosted containers or hosted |VMs| for pci-passthrough and/or
   |SRIOV|

@@ -37,6 +37,7 @@

.. |CRs| replace:: :abbr:`CRs (Custom Resources)`
.. |CRD| replace:: :abbr:`CRD (Custom Resource Definition)`
.. |CRDs| replace:: :abbr:`CRDs (Custom Resource Definitions)`
.. |CRUSH| replace:: :abbr:`CRUSH (Controlled Replication Under Scalable Hashing)`
.. |CSI| replace:: :abbr:`CSI (Container Storage Interface)`
.. |CSIs| replace:: :abbr:`CSIs (Container Storage Interfaces)`
.. |CSK| replace:: :abbr:`CSK (Code Signing Key)`

@@ -113,6 +114,7 @@

.. |MAC| replace:: :abbr:`MAC (Media Access Control)`
.. |MDS| replace:: :abbr:`MDS (MetaData Server for cephfs)`
.. |MEC| replace:: :abbr:`MEC (Multi-access Edge Computing)`
.. |MGR| replace:: :abbr:`MGR (Ceph Manager)`
.. |MLD| replace:: :abbr:`MLD (Multicast Listener Discovery)`
.. |ML| replace:: :abbr:`ML (Machine Learning)`
.. |MNFA| replace:: :abbr:`MNFA (Multi-Node Failure Avoidance)`

@@ -175,6 +177,7 @@

.. |PW| replace:: :abbr:`PW (Per Worker)`
.. |QAT| replace:: :abbr:`QAT (QuickAssist Technology)`
.. |QoS| replace:: :abbr:`QoS (Quality of Service)`
.. |RADOS| replace:: :abbr:`RADOS (Reliable Autonomous Distributed Object Store)`
.. |RAID| replace:: :abbr:`RAID (Redundant Array of Inexpensive Disks)`
.. |RAN| replace:: :abbr:`RAN (Radio Access Network)`
.. |RAPL| replace:: :abbr:`RAPL (Running Average Power Limit)`

@@ -12,20 +12,20 @@ configuration adjustments to ensure optimal performance. Rook introduces
additional management overhead compared to a traditional bare-metal Ceph setup
and needs more infrastructure resources.

Consequently, increasing the number of platform cores improves I/O performance
for the |OSD|, monitor, and |MDS| pods.
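
As a sketch of how platform core counts might be increased on a host (the
host name and core count below are illustrative assumptions):

.. code-block:: none

   # Lock the host before changing CPU assignments
   ~(keystone_admin)$ system host-lock controller-0

   # Assign 6 cores on processor 0 to platform use (illustrative value)
   ~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0

   # Unlock the host to apply the change
   ~(keystone_admin)$ system host-unlock controller-0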

Increasing the number of |OSDs| in the cluster can also improve performance,
reducing the load on individual disks and enhancing throughput.

Regarding memory, it is important to emphasize that Ceph's default memory
target for each |OSD| is 4GB, and we do not recommend decreasing it below
4GB; however, the system can operate with as little as 2GB. For more
information on Ceph hardware recommendations, see `Hardware Recommendations
<https://docs.ceph.com/en/reef/start/hardware-recommendations>`__.
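
As a minimal sketch, assuming the Ceph CLI is reachable (for example, from
the Rook Ceph toolbox pod), the per-|OSD| memory target can be inspected and
raised; the 8GB value below is an illustrative assumption:

.. code-block:: none

   # Show the current per-OSD memory target, in bytes (Ceph default: 4GB)
   ~(keystone_admin)$ ceph config get osd osd_memory_target

   # Raise the target to 8GB for all OSDs (illustrative value)
   ~(keystone_admin)$ ceph config set osd osd_memory_target 8589934592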

Another factor to consider is the size of the data blocks. Reading and
writing small block files can degrade Ceph's performance, especially during
high-frequency operations.
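
To observe this effect empirically, the standard ``rados bench`` tool can
compare small and large block sizes; the pool name ``testpool`` below is a
hypothetical placeholder:

.. code-block:: none

   # Write for 10 seconds using small 4KB objects
   ~(keystone_admin)$ rados bench -p testpool 10 write -b 4096 --no-cleanup

   # Compare against the default 4MB object size
   ~(keystone_admin)$ rados bench -p testpool 10 write --no-cleanup

   # Remove the benchmark objects afterwards
   ~(keystone_admin)$ rados -p testpool cleanup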

.. only:: partner

   .. include:: /_includes/performance-configurations-rook-ceph-9e719a652b02.rest
      :start-after: details-cpu-memory-start
      :end-before: details-cpu-memory-end

Pod resource limit tuning
-------------------------

@@ -69,6 +69,7 @@ Finally, apply the Rook-Ceph application:
.. code-block:: none

   ~(keystone_admin)$ system application-apply rook-ceph
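
To confirm that the apply completed, the application status can be checked
(a minimal sketch using the same system CLI):

.. code-block:: none

   # Status should progress to "applied"
   ~(keystone_admin)$ system application-show rook-ceph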

.. _bluestore-tunable-parameters:

Bluestore tunable parameters
----------------------------