
Configure Ceph File System for Internal Ceph Storage Backend

CephFS (Ceph File System) is a highly available, multi-use, performant file store for a variety of applications, built on top of Ceph's Distributed Object Store (RADOS).

CephFS provides the following functionality:

  • Enabled by default (along with existing Ceph RBD)
  • Highly available, multi-use, performant file storage
  • Scalability using a separate RADOS pool for the file's metadata
  • Metadata using Metadata Servers (MDS) that provide high availability and scalability
  • Deployed in HA configurations for all deployment options
  • Integrates cephfs-provisioner supporting Kubernetes StorageClass
  • Enables configuration of:
    • PersistentVolumeClaim (PVC) using StorageClass and ReadWriteMany access mode
    • Two or more application pods mounting the PVC and reading/writing data to it (an example with a second pod appears at the end of the procedure below)

CephFS is configured automatically when a Ceph backend is enabled and provides a Kubernetes StorageClass. Once enabled, every node in the cluster that serves as a Ceph monitor will also be configured as a CephFS Metadata Server (MDS). Creation of the CephFS pools, filesystem initialization, and creation of the Kubernetes resources are done by the platform-integ-apps application, using the cephfs-provisioner Helm chart.
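
As a quick check, you can confirm that platform-integ-apps has been applied before expecting the CephFS pools, filesystem, and StorageClass to exist. The following is a minimal sketch, assuming the platform's system CLI is available on the active controller:

$ system application-list
$ system application-show platform-integ-apps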

When applied, platform-integ-apps creates two Ceph pools for each storage backend configured, one for CephFS data and a second pool for metadata:

  • CephFS data pool: The pool name for the default storage backend is kube-cephfs-data

  • Metadata pool: The pool name for the default storage backend is kube-cephfs-metadata

    When a new storage backend is created, a new CephFS data pool will be created with the name kube-cephfs-data-<storage_backend_name>, and the metadata pool will be created with the name kube-cephfs-metadata-<storage_backend_name>. The default filesystem name is kube-cephfs.

    When a new storage backend is created, a new filesystem will be created with the name kube-cephfs-<storage_backend_name>.

For example, if the user adds a storage backend named 'test', cephfs-provisioner will create the following pools:

  • kube-cephfs-data-test
  • kube-cephfs-metadata-test

Also, the platform-integ-apps application will create a filesystem named kube-cephfs-test.

If you list all the pools in a cluster with the 'test' storage backend configured, you should see the four CephFS pools created by cephfs-provisioner via platform-integ-apps, in addition to the two RBD pools. Use the following command to list the pools:

$ ceph osd lspools
kube-rbd
kube-rbd-test
kube-cephfs-data
kube-cephfs-data-test
kube-cephfs-metadata
kube-cephfs-metadata-test

Use the following command to list Ceph File Systems:

$ ceph fs ls
name: kube-cephfs, metadata pool: kube-cephfs-metadata, data pools: [kube-cephfs-data ]
name: kube-cephfs-silver, metadata pool: kube-cephfs-metadata-silver, data pools: [kube-cephfs-data-silver ]

cephfs-provisioner creates a StorageClass in the Kubernetes cluster for each storage backend present.

These StorageClass resources should be used to create PersistentVolumeClaim resources in order to allow pods to use CephFS. The default StorageClass resource is named cephfs, and an additional resource named <storage_backend_name>-cephfs is created for each additional storage backend.

For example, when listing StorageClass resources in a cluster that is configured with a storage backend named 'test', the following storage classes are created:

$ kubectl get sc
NAME              PROVISIONER     RECLAIM.. VOLUME..  ALLOWVOLUME.. AGE
cephfs            ceph.com/cephfs Delete    Immediate false         65m
general (default) ceph.com/rbd    Delete    Immediate false         66m
test-cephfs       ceph.com/cephfs Delete    Immediate false         65m
test-general      ceph.com/rbd    Delete    Immediate false         66m
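
To consume one of these classes, a PersistentVolumeClaim names it in storageClassName. The following is a minimal sketch of a claim targeting the 'test' backend; the claim name claim-test is illustrative, and the full create-and-mount procedure is shown under Persistent Volume Claim (PVC) below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # illustrative name; any unique name in the namespace works
  name: claim-test
  namespace: kube-system
spec:
  # selects the CephFS StorageClass of the 'test' storage backend
  storageClassName: test-cephfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi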

All Kubernetes resources (pods, StorageClasses, PersistentVolumeClaims, configmaps, etc.) used by the provisioner are created in the kube-system namespace.
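
For example, to inspect the provisioner's resources you can query the kube-system namespace directly; the 'cephfs' filter below is only an illustrative pattern, not an exact resource name:

$ kubectl get pods -n kube-system | grep cephfs
$ kubectl get configmaps -n kube-system | grep cephfs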

Note

Multiple Ceph file systems are not enabled by default in the cluster. You can enable them manually, for example, using the command: ceph fs flag set enable_multiple true --yes-i-really-mean-it.
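
For example (a sketch only; run against the cluster's Ceph monitor and verify the result with ceph fs ls):

$ ceph fs flag set enable_multiple true --yes-i-really-mean-it
$ ceph fs ls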

Persistent Volume Claim (PVC)

If you need to create a Persistent Volume Claim, you can create it using kubectl. For example:

  1. Create a file named my_pvc.yaml, and add the following content:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: claim1
      namespace: kube-system
    spec:
      storageClassName: cephfs
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
  2. Apply the definition to create the PVC, using the following command:

    $ kubectl apply -f my_pvc.yaml
  3. After the PVC is created, use the following command to see the PVC bound to the existing StorageClass.

    $ kubectl get pvc -n kube-system
    
    NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    claim1   Bound    pvc-..   1Gi        RWX            cephfs
  4. The volume is automatically provisioned by the StorageClass, and a PV (Persistent Volume) is created. Use the following command to list the PVs.

    $ kubectl get pv -n kube-system
    
    NAME      CAPACITY   ACCESS..   RECLAIM..   STATUS   CLAIM                 STORAGE..   REASON   AGE
    pvc-5..   1Gi        RWX        Delete      Bound    kube-system/claim1    cephfs               26s
  5. Create pods to use the PVC. Create a file named my_pod.yaml:

    kind: Pod
    apiVersion: v1
    metadata:
      name: test-pod
      namespace: kube-system
    spec:
      containers:
      - name: test-pod
        image: gcr.io/google_containers/busybox:1.24
        command:
          - "/bin/sh"
        args:
          - "-c"
          - "touch /mnt/SUCCESS && exit 0 || exit 1"
        volumeMounts:
          - name: pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: claim1 
  6. Create the pod from the my_pod.yaml file, using the following command:

    $ kubectl apply -f my_pod.yaml
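
  7. Optionally, confirm the write succeeded by checking that the pod ran to completion, and, because the claim uses the ReadWriteMany access mode, mount the same claim from a second pod to read the file. The second pod below is a sketch only; the name test-pod-2 and its command are illustrative.

    $ kubectl get pod test-pod -n kube-system

    Create a file named my_pod2.yaml with content such as:

    kind: Pod
    apiVersion: v1
    metadata:
      # illustrative name for a second reader pod
      name: test-pod-2
      namespace: kube-system
    spec:
      containers:
      - name: test-pod-2
        image: gcr.io/google_containers/busybox:1.24
        command:
          - "/bin/sh"
        args:
          - "-c"
          # lists the shared mount and reads the file written by test-pod
          - "ls /mnt && cat /mnt/SUCCESS"
        volumeMounts:
          - name: pvc
            mountPath: "/mnt"
      restartPolicy: "Never"
      volumes:
        - name: pvc
          persistentVolumeClaim:
            # same claim mounted by test-pod above
            claimName: claim1

    $ kubectl apply -f my_pod2.yaml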

For more information on Persistent Volume Support, see About Persistent Volume Support <about-persistent-volume-support> and Creating Persistent Volume Claims <kubernetes-user-tutorials-creating-persistent-volume-claims>.