From 556d5de9d391eaee13e5d5ed12481f802909e1ea Mon Sep 17 00:00:00 2001
From: Julia Kreger <juliaashleykreger@gmail.com>
Date: Tue, 19 Apr 2022 13:33:13 -0700
Subject: [PATCH] Increase default disk_erasure_concurrency

When we added support for concurrent disk erasure, we kept the
default concurrency at 1 so as not to risk any change in operator
behavior, at the cost of forgoing faster erasure times.

That being said, the setting has been in place for some time and
we have received no reports of issues, so we are raising the
default to 4, which should still be quite safe from a concurrency
standpoint for the disk controllers found in most systems.

Change-Id: I6326422d60ec024a739ca596f46552bbd91b0419
---
 ironic/conf/deploy.py                                  |  2 +-
 ...imum-disk-erasure-concurrency-6d132bd84e3df4cf.yaml | 10 ++++++++++
 2 files changed, 11 insertions(+), 1 deletion(-)
 create mode 100644 releasenotes/notes/maximum-disk-erasure-concurrency-6d132bd84e3df4cf.yaml

diff --git a/ironic/conf/deploy.py b/ironic/conf/deploy.py
index 7a7fb37d7e..99b78ebfa0 100644
--- a/ironic/conf/deploy.py
+++ b/ironic/conf/deploy.py
@@ -108,7 +108,7 @@ opts = [
                        'state. If True, shred will be invoked and cleaning '
                        'will continue.')),
     cfg.IntOpt('disk_erasure_concurrency',
-               default=1,
+               default=4,
                min=1,
                mutable=True,
                help=_('Defines the target pool size used by Ironic Python '
diff --git a/releasenotes/notes/maximum-disk-erasure-concurrency-6d132bd84e3df4cf.yaml b/releasenotes/notes/maximum-disk-erasure-concurrency-6d132bd84e3df4cf.yaml
new file mode 100644
index 0000000000..f094215933
--- /dev/null
+++ b/releasenotes/notes/maximum-disk-erasure-concurrency-6d132bd84e3df4cf.yaml
@@ -0,0 +1,10 @@
+---
+other:
+  - |
+    The default for the maximum disk erasure concurrency setting,
+    ``[deploy]disk_erasure_concurrency``, has been increased to 4.
+    Previously, it was kept at 1 in order to maintain continuity of
+    experience, but operators have not reported any issues with increased
+    concurrency, and as such we feel comfortable enabling concurrent
+    disk erasure/cleaning by default upstream. This setting applies to
+    the ``erase_devices`` clean step.
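
For illustration, a minimal sketch of how an operator could override the new
default in ironic.conf; the [deploy] section, option name, minimum of 1, and
mutability come from the patch above, while the value 8 is only a hypothetical
example that would need to be validated against the deployment's disk
controllers:

    [deploy]
    # Raise the disk erasure target pool size beyond the new default of 4.
    # The option is mutable, so it can be reloaded without restarting the
    # service; the minimum allowed value is 1.
    disk_erasure_concurrency = 8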