
In a previous blog post, we showed how the Kubernetes-native K10 data management platform can be used to back up and recover stateful applications that are deployed on Kubernetes using Rook-Ceph storage.

 

Since then, Rook v1.4 has been released and this recent release comes with Ceph CSI v3.0. The Ceph CSI release has a large number of new features but, most importantly, it includes Beta API support for persistent volume snapshots. This API upgrade (from the previously experimental Alpha) is needed to run Ceph effectively with modern Kubernetes releases (v1.17+) and recent versions of Red Hat OpenShift (v4.5+).

In this post, we will look at how to install Rook Ceph using either physical devices or Kubernetes storage classes that can provide block volumes, deploy a database (Redis in this example) on Rook Ceph, and install the K10 platform to back up the database. Detailed instructions can be found here.

Setting up a Ceph (RBD) Cluster

To set up a Ceph cluster with Rook v1.4 and Ceph CSI v3.0, we have to deploy the rook-operator and then a CephCluster resource.

 

First, let’s clone the Rook GitHub repo and change the directory to rook/cluster/examples/kubernetes/ceph:

 

$ git clone -b v1.4.0 https://github.com/rook/rook.git

Cloning into 'rook'...
...
Note: switching to '801c593439679aaf964984df0969695ec44f640d'.

$ cd rook/cluster/examples/kubernetes/ceph

 

Now, let's create all the common resources, including the default Rook deployment namespace and the required Custom Resource Definitions (CRDs):

 

$ kubectl create -f common.yaml
namespace/rook-ceph created
...
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created

 

Once all the common resources have been created, run the command below to deploy the Rook operator:

 

$ kubectl create -f operator.yaml 
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created

 

Note: For OpenShift deployments, the command below needs to be run instead to create the operator. Refer to the Operator section in this document for more details.

 

$ kubectl create -f operator-openshift.yaml
securitycontextconstraints.security.openshift.io/rook-ceph created
securitycontextconstraints.security.openshift.io/rook-ceph-csi created

 

Next, let’s verify that the rook-operator has been deployed successfully in the rook-ceph namespace:

 

$ kubectl get pods -n rook-ceph -w
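The pod name hash below is illustrative and will differ in your cluster, but the operator pod should reach the Running state:

NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-757546f8c7-abcde   1/1     Running   0          2m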

 

After the operator is successfully installed, let’s create the CephCluster. Ceph installed via Rook can be backed either by physical devices that are mounted on the Kubernetes hosts or by an existing storage provider (via PVCs). Depending on your preference and Kubernetes setup, create the cluster using either cluster-test.yaml (host devices) or cluster-on-pvc.yaml (existing storage provider).

 

If you are using physical host-mounted devices with Ceph, make sure that the hosts have the lvm2 package installed. This is required by the most recent Ceph releases.
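For example, lvm2 can be installed with the host's package manager on every node that will contribute devices:

$ sudo apt-get install -y lvm2    # Ubuntu/Debian hosts
$ sudo yum install -y lvm2        # CentOS/RHEL hosts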

 

If you are using disks provided by an existing storage provider, change the cluster-on-pvc.yaml configuration file to include the storage class that should be used.
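The relevant part of cluster-on-pvc.yaml looks roughly like the excerpt below (illustrative only; gp2 stands in for whatever block-capable StorageClass your provider exposes, and the count and size values are just examples):

  storage:
    storageClassDeviceSets:
    - name: set1
      count: 3
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          storageClassName: gp2
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
          volumeMode: Block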

 

We recommend the existing storage provider approach for easier testing on your existing Kubernetes cluster and will use it below. Note that Ceph CSI v3.0 will be used automatically since we are installing Rook v1.4 in this post.

 

$ kubectl create -f cluster-on-pvc.yaml 
cephcluster.ceph.rook.io/rook-ceph created

 

Ensure that the Ceph cluster is installed and working as expected by running the commands below:

 

$ kubectl create -f toolbox.yaml
$ kubectl --namespace=rook-ceph exec -it $(kubectl --namespace=rook-ceph \
    get pod -l "app=rook-ceph-tools" \
    -o jsonpath='{.items[0].metadata.name}') ceph status
  cluster:
    id:     <cluster id>
    health: HEALTH_OK
...

 

Now that the rook-operator and CephCluster have been created, we can create the StorageClass and VolumeSnapshotClass that define how the newly created storage provisioner should be used to provision application storage, and what snapshot configuration to apply when volume snapshots are invoked.

 

$ cd csi/rbd
$ kubectl create -f storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

$ kubectl create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-rbdplugin-snapclass created

 

An important change in Ceph CSI v3.0 is the promotion of Ceph RBD snapshot support from Alpha to Beta (not to be confused with the Kubernetes Snapshot API support in Rook Ceph also moving from Alpha to Beta). This change means that Ceph RBD snapshots (CephFS snapshots are still in the works) can be considered stable and supported without breaking changes moving forward.

 

Similarly, Ceph CSI v3.0 also supports the stable Kubernetes Snapshot Beta API, and this can be confirmed by describing the VolumeSnapshotClass that was just created and looking for the v1beta1 API version specification:

 

$ kubectl describe volumesnapshotclasses.snapshot.storage.k8s.io csi-rbdplugin-snapclass
Name:             csi-rbdplugin-snapclass
...
API Version:      snapshot.storage.k8s.io/v1beta1

 

Finally, let’s make the Rook Ceph RBD StorageClass rook-ceph-block the default storage class for Redis and K10, replacing whatever default your cluster may already have:

 

$ kubectl patch storageclass <existing-default-storageclass> \
    -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
$ kubectl patch storageclass rook-ceph-block \
    -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

 

Installing K10 and Integrating it with Rook Ceph Storage

 

We can use Helm to install K10 with the commands below (complete documentation):
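If the Kasten Helm repository has not been added to your Helm client yet, add it first:

$ helm repo add kasten https://charts.kasten.io/
$ helm repo update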

 

$ kubectl create namespace kasten-io
$ helm install k10 --namespace=kasten-io kasten/k10
NAME: k10
LAST DEPLOYED: Mon Aug 10 13:55:55 2020
NAMESPACE: kasten-io
...

 

Now, let’s annotate the newly created VolumeSnapshotClass so that K10 uses it when creating a snapshot of the Rook Ceph volumes.

 

$ kubectl annotate volumesnapshotclasses.snapshot.storage.k8s.io csi-rbdplugin-snapclass k10.kasten.io/is-snapshot-class=true
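Describing the VolumeSnapshotClass again should now show the annotation (only the relevant line is reproduced here):

$ kubectl describe volumesnapshotclasses.snapshot.storage.k8s.io csi-rbdplugin-snapclass | grep k10.kasten.io
Annotations:  k10.kasten.io/is-snapshot-class: true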

 

Using K10 to Backup and Restore Redis

Now that K10 is up and running, let's first install Redis and then use K10 to back it up.

 

$ kubectl create namespace redis
$ helm install redis stable/redis --namespace redis
NAME: redis
LAST DEPLOYED: Mon Aug 10 14:03:55 2020
...

 

To make sure the volume for Redis was provisioned through the rook-ceph-block StorageClass, list the PersistentVolumeClaims in the redis namespace:

 

$ kubectl get pvc -n redis
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
redis-data-redis-master-0   Bound    pvc-c619efb2-8deb-4474-bcc9-5ba74f63b422   8Gi        RWO            rook-ceph-block   2m12s
redis-data-redis-slave-0    Bound    pvc-73d7e7f2-71fc-44c4-9622-2597deed1429   8Gi        RWO            rook-ceph-block   2m12s
redis-data-redis-slave-1    Bound    pvc-f504fec9-74de-4052-9a54-c8d822093084   8Gi        RWO            rook-ceph-block   81s

 

After Redis is installed, the K10 dashboard can be accessed via the following port-forward command:

 

$ kubectl --namespace kasten-io port-forward service/gateway 8000

 

The K10 dashboard will now be accessible in your browser via http://localhost:8000, and the Redis application can be seen by clicking on the Applications card on the main dashboard.

 


 

Click on the snapshot icon for the Redis application to take a snapshot of the application and its data.

 


 

Once confirmed, you will see a notification of the snapshot action that was just invoked.

 


You can then view the action status (it should complete quickly) from the main dashboard, with more information, including artifacts, available via a side panel that is exposed by clicking on the completed action.
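If you prefer the command line, you can also confirm that CSI VolumeSnapshots were created for the Redis PVCs (the snapshot names are generated by K10 and will differ in your cluster):

$ kubectl get volumesnapshots -n redis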

 


 

In most major public clouds, these snapshots are durable and can be treated as a backup. However, if you are using a storage system like Ceph, where the lifetime of a volume snapshot is tied to the underlying volume, you should configure K10 to store backups in an object storage system instead.
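As a rough sketch only (take the exact fields from the K10 location profile documentation for your version; the bucket, region, and secret names below are placeholders), an S3-compatible object storage location can be declared as a Profile custom resource:

apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-profile
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      name: my-bucket
      objectStoreType: S3
      region: us-east-1
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret
        namespace: kasten-io

Location profiles can also be created interactively from the Settings page of the K10 dashboard.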

 

Once the backup is completed, we can go back to the applications card to check the restore point for the snapshot that we just created.

 


 

Clicking on the restore button will normally show all the restore points for Redis but, since we just created a snapshot, we will only see one restore point card.

 


 

You can click on that restore point card to restore the backup that was created, and you will, once again, get the notification to monitor the restore job progress from the K10 dashboard.

 


 

Soon after the job is created, the restore will complete successfully and the restored volume artifacts will become visible on the dashboard.

 


Conclusion

This article has shown how K10 can be easily integrated with the latest versions of Rook and Ceph CSI. We are very excited to see Ceph CSI mature into a production-grade storage system for Kubernetes users. We highly encourage you to give K10 a spin: try Kasten K10 with our free Kubernetes trial.

Finally, we would love to hear from you about how K10 could be useful in your Kubernetes environment. Find us on Twitter, drop us an email, or leave us a message!
