dc_vault restore playbook

Bug fix

Added information regarding dc_vault in:
  restoring-starlingx-system-data-and-storage
  running-ansible-backup-playbook-locally...

Acted on Mary's comments: https://review.opendev.org/c/starlingx/docs/+/790296

Signed-off-by: Adil <mohamed.adilassakkali@windriver.com>
Change-Id: I1b878919ad4068b0f1af35f605628f3146cdb219

commit 3d075a0f50 (parent c3e43eb4dc)
restoring-starlingx-system-data-and-storage:

@@ -17,7 +17,7 @@ This procedure takes a snapshot of the etcd database at the time of backup,
stores it in the system data backup, and then uses it to initialize the
Kubernetes cluster during a restore. Kubernetes configuration will be
restored and pods that are started from repositories accessible from the
-internet or from external repositories will start immediately. StarlingX
+internet or from external repositories will start immediately. |prod|
specific applications must be re-applied once a storage cluster is configured.

.. warning::
@@ -92,7 +92,7 @@ conditions are in place:
$ source /etc/platform/openrc
~(keystone_admin)]$ dcmanager subcloud unmanage <subcloud-name>

-where <subcloud-name> is the name of the subcloud to be unmanaged.
+where ``<subcloud-name>`` is the name of the subcloud to be unmanaged.

.. rubric:: |proc|

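For illustration only (not part of the commit), the unmanage step above might be run against a hypothetical subcloud named subcloud1, with the resulting management state checked afterwards:

.. code-block:: none

    ~(keystone_admin)]$ dcmanager subcloud unmanage subcloud1
    ~(keystone_admin)]$ dcmanager subcloud list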
@@ -126,11 +126,11 @@ conditions are in place:
<system-backup-running-ansible-restore-playbook-remotely>`.

.. note::
-The backup files contains the system data and updates.
+The backup files contain the system data and updates.

#. If the backup file contains patches, Ansible Restore playbook
restore\_platform.yml will apply the patches and prompt you to reboot the
-system, you will need to re-run Ansible Restore playbook
+system, you will need to re-run Ansible Restore playbook.

The current software version on the controller is compared against the
version available in the backup file. If the backed-up version includes
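The step above refers to re-running the platform restore playbook after a patch-triggered reboot. As a hedged sketch only, such a re-run would typically reuse the same backup archive and extra-vars as the first run; the directory, archive name, and variable set below are illustrative assumptions, not values from the commit:

.. code-block:: none

    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin backup_filename=localhost_platform_backup_2020_07_15_21_24_22.tgz ansible_become_pass=<sysadmin password>"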
@@ -156,6 +156,8 @@ conditions are in place:

This must be done before unlocking controller-0.

#. Unlock Controller-0.

.. code-block:: none

~(keystone_admin)]$ system host-unlock controller-0
@@ -163,6 +165,17 @@ conditions are in place:
After you unlock controller-0, storage nodes become available and Ceph
becomes operational.

+#. If the system is a Distributed Cloud system controller, restore the **dc-vault**
+using the restore\_dc\_vault.yml playbook. Perform this step after unlocking
+controller-0:
+
+.. code-block:: none
+
+$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_dc_vault.yml -e "initial_backup_dir=/home/sysadmin backup_filename=localhost_dc_vault_backup_2020_07_15_21_24_22.tgz ansible_become_pass=St0rlingX*"
+
+.. note::
+The dc-vault backup archive is created by the backup.yml playbook.
+
#. Authenticate the system as Keystone user **admin**.

Source the **admin** user environment as follows:
@@ -315,7 +328,7 @@ conditions are in place:

.. code-block:: none

-~(keystone_admin)]$ kubectl get pods -n kube-system | grep -e calico -e coredns
+~(keystone_admin)]$ kubectl get pods -n kube-system | grep -e calico -e coredns
calico-kube-controllers-5cd4695574-d7zwt 1/1 Running
calico-node-6km72 1/1 Running
calico-node-c7xnd 1/1 Running
@@ -328,8 +341,7 @@ conditions are in place:

~(keystone_admin)]$ system restore-complete

-#. Alarms 750.006 alarms disappear one at a time, as the apps are auto
-applied.
+#. Alarms 750.006 alarms disappear one at a time, as the apps are auto applied.

.. rubric:: |postreq|

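Again as an illustration only (commands assumed from the standard platform CLI, not taken from the commit), the 750.006 alarms can be watched as they clear, and application status confirmed once they are gone:

.. code-block:: none

    ~(keystone_admin)]$ fm alarm-list | grep 750.006
    ~(keystone_admin)]$ system application-list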
@@ -347,7 +359,7 @@ applied.
$ source /etc/platform/openrc
~(keystone_admin)]$ dcmanager subcloud manage <subcloud-name>

-where <subcloud-name> is the name of the subcloud to be managed.
+where ``<subcloud-name>`` is the name of the subcloud to be managed.


.. comments in steps seem to throw numbering off.
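Mirroring the earlier unmanage example, and again purely illustrative, returning the hypothetical subcloud1 to the managed state would be:

.. code-block:: none

    ~(keystone_admin)]$ dcmanager subcloud manage subcloud1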
running-ansible-backup-playbook-locally:

@@ -13,7 +13,7 @@ Use the following command to run the Ansible Backup playbook and back up the

.. code-block:: none

-~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<sysadmin password> admin_password=<sysadmin password>" -e "backup_user_local_registry=true"
+~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<sysadmin password> admin_password=<sysadmin password>" -e "backup_user_local_registry=true"

The <admin\_password> and <ansible\_become\_pass\> need to be set correctly
using the ``-e`` option on the command line, or an override file, or in the
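Because the surrounding text offers an override file as an alternative to passing the passwords with ``-e``, here is a minimal sketch of that option; the file name backup-overrides.yml and its location are assumptions, not something the docs prescribe:

.. code-block:: none

    ~(keystone_admin)]$ cat > backup-overrides.yml <<EOF
    admin_password: <sysadmin password>
    ansible_become_pass: <sysadmin password>
    backup_user_local_registry: true
    EOF
    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e @backup-overrides.yml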
@@ -29,6 +29,8 @@ The output files will be named:

- inventory\_hostname\_docker\_local\_registry\_backup\_timestamp.tgz

+- inventory\_hostname\_dc\_vault\_backup\_timestamp.tgz
+
The variables prefix can be overridden using the ``-e`` option on the command
line or by using an override file.

@@ -40,6 +42,8 @@ line or by using an override file.

- docker\_local\_registry\_backup\_filename\_prefix

+- dc\_vault\_backup\_filename\_prefix
+
The generated backup tar files will be displayed in the following format,
for example:

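For example (illustrative values only, not part of the commit), the new dc_vault prefix could be overridden on the command line alongside the usual credentials:

.. code-block:: none

    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<sysadmin password> admin_password=<sysadmin password>" -e "dc_vault_backup_filename_prefix=site1_dc_vault"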
@@ -51,6 +55,8 @@ for example:

- localhost\_openstack\_backup\_2020\_07\_15\_21\_24\_22.tgz

+- localhost\_dc\_vault\_backup\_2020\_07\_15\_21\_24\_22.tgz
+
These files are located by default in the /opt/backups directory on
controller-0, and contains the complete system backup.
