Merge "Need the Procedure to enable snapshot functionality for KubeVirt + rook-ceph combination- ADDTIONAL UPDATES (R10, dsr10,dsr10 MINOR)" into r/stx.10.0
This commit is contained in:
commit
f93d4454dd
@ -24,21 +24,12 @@ present, a best effort snapshot will be taken.
.. rubric:: |proc|
To enable snapshot functionality, the system requires the snapshot |CRD| and
the snapshot controller to be created on the system. Follow the steps below:
#. Run the following commands to install the snapshot |CRD| and the snapshot controller on Kubernetes:

- ``kubectl apply -f`` https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
- ``kubectl apply -f`` https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
- ``kubectl apply -f`` https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
- ``kubectl apply -f`` https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
- ``kubectl apply -f`` https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
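
   After applying these manifests, you can optionally confirm that the snapshot |CRDs| and the snapshot-controller pod are present before continuing. A quick check (the pod name and namespace shown on your system may differ from the upstream defaults):

   .. code-block:: none

      # list the three snapshot CRDs installed above
      kubectl get crd volumesnapshots.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io
      # look for the running snapshot-controller pod in any namespace
      kubectl get pods -A | grep snapshot-controller
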
.. note::
The |CRDs| and a running snapshot-controller pod must be present on the
system before the Volume Snapshot Class can be created. The |CRDs| and the
snapshot-controller are created by default during installation when the
bootstrap playbook is run.

#. Create ``VolumeSnapshotClass`` for ``cephfs`` and ``rbd``:
@ -46,44 +37,76 @@ snapshot controller to be created on system. Follow the steps below:
.. group-tab:: Ceph
Set the ``snapshotClass.create`` field to ``true`` for ``cephfs-provisioner``.

.. code-block:: none

~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=True
+----------------+--------------------+
| Property       | Value              |
+----------------+--------------------+
| name           | cephfs-provisioner |
| namespace      | kube-system        |
| user_overrides | snapshotClass:     |
|                |   create: true     |
|                |                    |
+----------------+--------------------+

Set the ``snapshotClass.create`` field to ``true`` for ``rbd-provisioner``.

.. code-block:: none

~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=True
+----------------+-----------------+
| Property       | Value           |
+----------------+-----------------+
| name           | rbd-provisioner |
| namespace      | kube-system     |
| user_overrides | snapshotClass:  |
|                |   create: true  |
|                |                 |
+----------------+-----------------+

Run the :command:`application-apply` command to apply the overrides.
.. code-block:: none

~(keystone_admin)$ system application-apply platform-integ-apps
+---------------+--------------------------------------+
| Property      | Value                                |
+---------------+--------------------------------------+
| active        | True                                 |
| app_version   | 1.0-65                               |
| created_at    | 2024-01-08T18:15:07.178753+00:00     |
| manifest_file | fluxcd-manifests                     |
| manifest_name | platform-integ-apps-fluxcd-manifests |
| name          | platform-integ-apps                  |
| progress      | None                                 |
| status        | applying                             |
| updated_at    | 2024-01-08T18:39:10.251660+00:00     |
+---------------+--------------------------------------+
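
The apply takes a short while to complete. If you want to wait for it to finish before checking for the Volume Snapshot Classes, one simple approach (a sketch only; any polling method works) is:

.. code-block:: none

   # re-run application-list every 5 seconds until the status shows applied
   ~(keystone_admin)$ watch -n 5 "system application-list | grep platform-integ-apps"
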
After a few seconds, confirm the creation of the Volume Snapshot Class.
.. code-block:: none

~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME              DRIVER                DELETIONPOLICY   AGE
cephfs-snapshot   cephfs.csi.ceph.com   Delete           40s
rbd-snapshot      rbd.csi.ceph.com      Delete           40s
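
Once the Volume Snapshot Classes exist, they are consumed by referencing them from a ``VolumeSnapshot`` object. The example below is a minimal sketch only and is not part of this procedure; the PVC name ``test-pvc`` and the file name ``test-pvc-snapshot.yaml`` are placeholders for an existing PVC and a file of your choice.

.. code-block:: none

   cat <<EOF>test-pvc-snapshot.yaml
   ---
   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshot
   metadata:
     name: test-pvc-snapshot
   spec:
     # one of the classes created above
     volumeSnapshotClassName: rbd-snapshot
     source:
       # replace with the name of an existing PVC
       persistentVolumeClaimName: test-pvc
   EOF
   kubectl apply -f test-pvc-snapshot.yaml
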
.. group-tab:: Rook Ceph
Create a ``VolumeSnapshotClass`` for the ``cephfs`` provisioner.

.. code-block:: none

cat <<EOF>cephfs-snapshotclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: cephfs-snapshot
driver: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph
@ -91,23 +114,36 @@ snapshot controller to be created on system. Follow the steps below:
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
EOF
kubectl apply -f cephfs-snapshotclass.yaml

Create a ``VolumeSnapshotClass`` for the ``rbd`` provisioner.

.. code-block:: none

cat <<EOF>rbd-snapshotclass.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: rbd-snapshot
driver: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
EOF
kubectl apply -f rbd-snapshotclass.yaml

After a few seconds, confirm the creation of the Volume Snapshot Class.
.. code-block:: none

~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
NAME              DRIVER                          DELETIONPOLICY   AGE
cephfs-snapshot   rook-ceph.cephfs.csi.ceph.com   Delete           109m
rbd-snapshot      rook-ceph.rbd.csi.ceph.com      Delete           109m

Get the cluster ID from the output of :command:`kubectl describe sc` for the ``cephfs`` and ``rbd`` storage classes.
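
One way to read the cluster ID directly, assuming the storage classes are named ``cephfs`` and ``rbd`` as in the examples above (a sketch only):

.. code-block:: none

   # print the clusterID parameter of each storage class
   kubectl get sc cephfs -o jsonpath='{.parameters.clusterID}{"\n"}'
   kubectl get sc rbd -o jsonpath='{.parameters.clusterID}{"\n"}'
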
#. Create a snapshot manifest for the running |VM| using the example YAML below:

@ -126,6 +162,11 @@ snapshot controller to be created on system. Follow the steps below:
failureDeadline: 3m
EOF
.. note::
Make sure to replace the NAME field with the name of the |VM| to snapshot,
as shown in the output of :command:`kubectl get vm`.

#. Apply the snapshot manifest and verify that the snapshot is successfully created.
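
   A minimal sketch of this step, assuming the snapshot manifest from the previous step was saved as ``cirros-snapshot.yaml`` (the file name is only an example):

   .. code-block:: none

      kubectl apply -f cirros-snapshot.yaml
      # the snapshot is ready to restore once it reports readyToUse as true
      kubectl get virtualmachinesnapshot
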
@ -140,7 +181,7 @@ Example manifest to restore the snapshot:
.. code-block:: none

cat <<EOF>cirros-restore.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata: