
.. WARNING: Add no lines of text between the label immediately following
   and the title.

.. _vm-snapshot-and-restore-21158b60cd56:

=======================
VM Snapshot and Restore
=======================

A |VM| snapshot captures a running |VM| together with its existing
configuration, so that the |VM| can later be restored to that point in time.

Snapshot a VM
-------------

Snapshotting a |VM| is supported for both online and offline |VMs|.

When snapshotting a running |VM|, the controller checks for the QEMU guest
agent in the |VM|. If the agent is present, the controller freezes the |VM|
filesystems before taking the snapshot and unfreezes them afterwards. For a
more consistent snapshot, it is recommended to take online snapshots with the
guest agent installed; if the agent is not present, a best-effort snapshot is
taken.
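If you are unsure whether the guest agent is connected, its status is reported
as a condition on the virtual machine instance (VMI). A minimal check, assuming
an example |VM| named ``pvc-test-vm`` (substitute your own |VM| name):

```shell
# Query the AgentConnected condition on the VMI; "True" means the QEMU
# guest agent is connected, so filesystem freeze/thaw can be used during
# the snapshot. "pvc-test-vm" is an example name.
kubectl get vmi pvc-test-vm \
  -o jsonpath='{.status.conditions[?(@.type=="AgentConnected")].status}'
```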

.. rubric:: |proc|

.. note::

   The |CRDs| and the snapshot-controller pod must be present on the system
   before the Volume Snapshot Class can be created. The |CRDs| and the
   snapshot-controller are created by default when the bootstrap playbook is
   run during installation.

#. Create a ``VolumeSnapshotClass`` for ``cephfs`` and ``rbd``:
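To confirm that the prerequisites from the note above are in place, you can
check for the snapshot |CRDs| and the snapshot-controller pod before
proceeding. A quick check might look like this (the pod's namespace can vary
by deployment, so the example searches all namespaces):

```shell
# The external-snapshotter CRDs should all be present.
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
    volumesnapshots.snapshot.storage.k8s.io \
    volumesnapshotcontents.snapshot.storage.k8s.io

# Locate the snapshot-controller pod; it should be in the Running state.
kubectl get pods -A | grep snapshot-controller
```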

   .. tabs::

      .. group-tab:: Ceph

         Set the ``snapshotClass.create`` field to true for
         ``cephfs-provisioner``.

         .. code-block:: none

            ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=True
            +----------------+--------------------+
            | Property       | Value              |
            +----------------+--------------------+
            | name           | cephfs-provisioner |
            | namespace      | kube-system        |
            | user_overrides | snapshotClass:     |
            |                |   create: true     |
            |                |                    |
            +----------------+--------------------+

         Set the ``snapshotClass.create`` field to true for
         ``rbd-provisioner``.

         .. code-block:: none

            ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=True
            +----------------+-----------------+
            | Property       | Value           |
            +----------------+-----------------+
            | name           | rbd-provisioner |
            | namespace      | kube-system     |
            | user_overrides | snapshotClass:  |
            |                |   create: true  |
            |                |                 |
            +----------------+-----------------+

         Run the :command:`application-apply` command to apply the overrides.

         .. code-block:: none

            ~(keystone_admin)$ system application-apply platform-integ-apps
            +---------------+--------------------------------------+
            | Property      | Value                                |
            +---------------+--------------------------------------+
            | active        | True                                 |
            | app_version   | 1.0-65                               |
            | created_at    | 2024-01-08T18:15:07.178753+00:00     |
            | manifest_file | fluxcd-manifests                     |
            | manifest_name | platform-integ-apps-fluxcd-manifests |
            | name          | platform-integ-apps                  |
            | progress      | None                                 |
            | status        | applying                             |
            | updated_at    | 2024-01-08T18:39:10.251660+00:00     |
            +---------------+--------------------------------------+

         After a few seconds, confirm the creation of the Volume Snapshot
         Classes.

         .. code-block:: none

            ~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
            NAME              DRIVER                DELETIONPOLICY   AGE
            cephfs-snapshot   cephfs.csi.ceph.com   Delete           40s
            rbd-snapshot      rbd.csi.ceph.com      Delete           40s

      .. group-tab:: Rook Ceph

         Create a ``VolumeSnapshotClass`` for the ``cephfs`` provisioner.

         .. code-block:: none

            cat <<EOF>cephfs-snapshotclass.yaml
            ---
            apiVersion: snapshot.storage.k8s.io/v1
            kind: VolumeSnapshotClass
            metadata:
              name: cephfs-snapshot
            driver: rook-ceph.cephfs.csi.ceph.com
            parameters:
              clusterID: rook-ceph
              csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
              csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
            deletionPolicy: Delete
            EOF

            kubectl apply -f cephfs-snapshotclass.yaml

         Create a ``VolumeSnapshotClass`` for the ``rbd`` provisioner.

         .. code-block:: none

            cat <<EOF>rbd-snapshotclass.yaml
            ---
            apiVersion: snapshot.storage.k8s.io/v1
            kind: VolumeSnapshotClass
            metadata:
              name: rbd-snapshot
            driver: rook-ceph.rbd.csi.ceph.com
            parameters:
              clusterID: rook-ceph
              csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
              csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
            deletionPolicy: Delete
            EOF

            kubectl apply -f rbd-snapshotclass.yaml

         After a few seconds, confirm the creation of the Volume Snapshot
         Classes.

         .. code-block:: none

            ~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
            NAME              DRIVER                          DELETIONPOLICY   AGE
            cephfs-snapshot   rook-ceph.cephfs.csi.ceph.com   Delete           109m
            rbd-snapshot      rook-ceph.rbd.csi.ceph.com      Delete           109m

#. Create a snapshot manifest of the running |VM| using the example yaml
   below:

   .. code-block:: none

      cat <<EOF>cirros-snapshot.yaml
      apiVersion: snapshot.kubevirt.io/v1alpha1
      kind: VirtualMachineSnapshot
      metadata:
        name: snap-cirros
      spec:
        source:
          apiGroup: kubevirt.io
          kind: VirtualMachine
          name: pvc-test-vm
        failureDeadline: 3m
      EOF

   .. note::

      Make sure to replace the ``name`` field under ``source`` with the name
      of the |VM| to snapshot, as shown in the output of
      :command:`kubectl get vm`.

#. Apply the snapshot manifest and verify that the snapshot is successfully
   created.

   .. code-block:: none

      kubectl apply -f cirros-snapshot.yaml

      [sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineSnapshot
      NAME          SOURCEKIND       SOURCENAME    PHASE       READYTOUSE   CREATIONTIME   ERROR
      snap-cirros   VirtualMachine   pvc-test-vm   Succeeded   true         28m
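Behind the scenes, the |VM| snapshot is backed by a KubeVirt
``VirtualMachineSnapshotContent`` object and, for each persistent volume, a
CSI ``VolumeSnapshot``. Inspecting these can help when a snapshot does not
reach the ``Succeeded`` phase; for example (``snap-cirros`` is the example
snapshot name used above):

```shell
# The VirtualMachineSnapshotContent holds the references to the actual
# snapshot data.
kubectl get virtualmachinesnapshotcontent

# Each persistent volume of the VM gets a corresponding CSI VolumeSnapshot.
kubectl get volumesnapshot

# Detailed status and events for a failed or stuck snapshot.
kubectl describe virtualmachinesnapshot snap-cirros
```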

Example manifest to restore the snapshot:

.. code-block:: none

   cat <<EOF>cirros-restore.yaml
   apiVersion: snapshot.kubevirt.io/v1alpha1
   kind: VirtualMachineRestore
   metadata:
     name: restore-cirros
   spec:
     target:
       apiGroup: kubevirt.io
       kind: VirtualMachine
       name: pvc-test-vm
     virtualMachineSnapshotName: snap-cirros
   EOF

   kubectl apply -f cirros-restore.yaml

Verify the snapshot restore:

.. code-block:: none

   [sysadmin@controller-0 kubevirt-GA-testing(keystone_admin)]$ kubectl get VirtualMachineRestore
   NAME             TARGETKIND       TARGETNAME    COMPLETE   RESTORETIME   ERROR
   restore-cirros   VirtualMachine   pvc-test-vm   true       34m
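After the restore completes, you can confirm that the target |VM| is back in
the expected state and start it if it is not running. A sketch, assuming the
example |VM| name ``pvc-test-vm`` and that the ``virtctl`` client is
installed:

```shell
# Check the VM status after the restore.
kubectl get vm pvc-test-vm

# Start the VM if it was stopped for the restore (requires virtctl).
virtctl start pvc-test-vm

# Watch the VMI come up.
kubectl get vmi pvc-test-vm -w
```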