
Thank you for joining me for my very first article on the Kasten K10 Blog! In this article I’ll demonstrate how to use the Kasten K10 Data Management Platform to back up and restore a MySQL database on a Kubernetes cluster. We will perform the backup, and then the restore, using Scality RING.

Scality RING is software that turns any x86 server into web-scale storage. Scality claims that RING can scale capacity indefinitely and guarantee 100% availability while reducing costs compared to legacy systems.

In this article, we’ll use Google Kubernetes Engine (GKE) to deploy applications and Kasten K10 to perform the backup and restore. You can use a different public cloud provider; just skip the GKE-specific service account creation and start at the kubectl create namespace command. Follow the documentation here for installing on any cloud provider.

These instructions are divided into four steps:

  1. Installing Kasten K10 with Helm chart on your GKE cluster
  2. Using Scality RING location profile 
  3. Installing MySQL
  4. Demonstration of Backup and Restore

Tools used in this tutorial:

  • kubectl - Kubernetes client
  • gcloud - Google Cloud client (may differ for other cloud providers)
  • Helm v3

Step 1: Installing Kasten K10 on your GKE Cluster

There are two ways to install Kasten K10 on a Kubernetes cluster: one is to install directly from the Google Cloud Marketplace, and the other is to use Helm, as demonstrated below.

To install Kasten K10 on GKE, you first have to create a service account with the Compute Storage Admin role (roles/compute.storageAdmin) so the cluster can take backups with the required permissions.

First, set the default project in which your Kubernetes cluster has been created.

$ gcloud config set project <project-id>

Then run the following command to create the service account and attach the IAM roles to the service account, which will then be used in the Kubernetes cluster.

$ myproject=$(gcloud config get-value core/project)
$ gcloud iam service-accounts create k10-test-sa --display-name "K10 Service Account"
$ k10saemail=$(gcloud iam service-accounts list --filter "k10-test-sa" --format="value(email)")
$ gcloud iam service-accounts keys create --iam-account=${k10saemail} k10-sa-key.json
$ gcloud projects add-iam-policy-binding ${myproject} --member serviceAccount:${k10saemail} --role roles/compute.storageAdmin
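The original post does not include this, but as a quick sanity check you can list the roles bound to the new service account (this assumes the `myproject` and `k10saemail` variables from above are still set in your shell):

```shell
# List every role bound to the k10-test-sa service account in this project.
# Expect roles/compute.storageAdmin to appear in the output.
gcloud projects get-iam-policy ${myproject} \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${k10saemail}" \
  --format="value(bindings.role)"
```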

Now you can install Kasten K10 with a key from the service account we just created. We’ll use the base64 tool to encode the k10-sa-key.json file generated above and install K10 with the newly created credentials.

$ sa_key=$(base64 -w0 k10-sa-key.json)

$ kubectl create namespace kasten-io
$ helm repo add kasten https://charts.kasten.io/
$ helm install k10 kasten/k10 --namespace=kasten-io --set secrets.googleApiKey=$sa_key

Helm install will create multiple deployments and services, and you can validate the install by running the following command:

$ kubectl get pods -n kasten-io --watch
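If you prefer a command that blocks until the install settles rather than watching the pod list, `kubectl wait` can poll for readiness (the 10-minute timeout here is an arbitrary choice, not from the original post; adjust it for your cluster):

```shell
# Block until every pod in kasten-io reports the Ready condition,
# or fail after 10 minutes.
kubectl wait --namespace kasten-io --for=condition=Ready pods --all --timeout=600s
```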

Once all of the pods and containers are running, you can port-forward the gateway service to access the K10 dashboard from your browser.

You can access the K10 dashboard at http://127.0.0.1:8080/k10/#/ after running the following command:

$ kubectl --namespace kasten-io port-forward service/gateway 8080:8000




Step 2: Using Scality RING Location Profile 

Once Kasten K10 is deployed successfully, we have to create a Scality RING location profile, which will be used for backup and restore. On the dashboard, click Settings, then under Locations click New Profile.


For profile creation, you will need to provide the following values:

  • Profile Name
    • You can provide any name of your choice, let’s use Scality
  • Cloud storage provider
    • For Scality RING we are using S3 compatible
  • S3 access key
    • The access key used to manage the buckets that you created on Scality RING
  • S3 secret
    • The secret used to manage the buckets that you created on Scality RING
  • Endpoint
    • The endpoint of the S3 service API



For the bucket name, you can use any name with the prefix “kasten-” to use with Kasten K10. We have used the name kasten-test.
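If you want to create and verify the bucket from the command line first, any S3-compatible client works against the RING endpoint. Here is a sketch with the AWS CLI, where the endpoint URL and credentials are placeholders for your own environment (this step is not in the original walkthrough):

```shell
# Placeholder credentials and endpoint -- substitute your RING values.
export AWS_ACCESS_KEY_ID=<your-ring-access-key>
export AWS_SECRET_ACCESS_KEY=<your-ring-secret-key>

# Create the kasten-test bucket on the RING S3 endpoint, then list
# buckets to confirm it exists.
aws s3 mb s3://kasten-test --endpoint-url https://<your-ring-s3-endpoint>
aws s3 ls --endpoint-url https://<your-ring-s3-endpoint>
```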



Hit the Create button; if all details are correct, the location profile will be created successfully.


Step 3: Installing MySQL

We will start by creating a namespace named mysql.

$ kubectl create namespace mysql
namespace/mysql created

Then we will add the Bitnami charts repository to the Helm repo list by running the following command:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

Now we install MySQL in the mysql namespace:

$ helm install --namespace mysql mysql-release bitnami/mysql
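The Bitnami chart generates a root password and stores it in a Secret. The secret name and key below match the chart’s conventions at the time of writing, but check the chart’s NOTES output if yours differ (this check is an addition, not part of the original post):

```shell
# Read the generated root password out of the chart's Secret.
ROOT_PASSWORD=$(kubectl get secret --namespace mysql mysql-release \
  -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

# Open a client session against the in-cluster service to confirm
# MySQL is reachable.
kubectl run mysql-client --rm -it --namespace mysql --restart=Never \
  --image docker.io/bitnami/mysql -- \
  mysql -h mysql-release.mysql.svc.cluster.local -uroot -p"$ROOT_PASSWORD"
```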


To validate the installation:

$ kubectl get all -n mysql


NAME                  READY   STATUS    RESTARTS   AGE
pod/mysql-release-0   1/1     Running   0          51s

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/mysql-release            ClusterIP   10.x.x.x      <none>        3306/TCP   56s
service/mysql-release-headless   ClusterIP   None          <none>        3306/TCP   56s

NAME                             READY   AGE
statefulset.apps/mysql-release   1/1     63s


Kasten K10 will automatically discover the application and its associated resources. You can verify this on the dashboard.



Now we just need to create the Kanister blueprint, so the backup and restore will be performed using Scality RING. There are two steps: create the Kanister blueprint, then attach it to the mysql release.

Create Kanister Blueprint

Create a file mysql-blueprint.yaml containing the MySQL Kanister blueprint, then apply it:



$ kubectl --namespace kasten-io apply -f mysql-blueprint.yaml


Now, annotate the mysql release so it will use the Scality RING Kanister profile for backup.

$ kubectl --namespace mysql annotate sts mysql-release
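The annotation key and value were trimmed from the command above; assuming the blueprint you applied is named mysql-blueprint in its metadata, the full command typically looks like this:

```shell
# Point K10's Kanister integration at the blueprint by name.
kubectl --namespace mysql annotate statefulset mysql-release \
  kanister.kasten.io/blueprint=mysql-blueprint
```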


We are now ready for backup and restore.
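Optionally, seed some throwaway data before taking the backup so you can verify the restore afterwards. The database and table names here are arbitrary examples, not part of the original walkthrough:

```shell
# Fetch the root password (secret name/key follow the Bitnami chart's
# conventions) and insert a row we can look for after the restore.
ROOT_PASSWORD=$(kubectl get secret --namespace mysql mysql-release \
  -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
kubectl exec --namespace mysql mysql-release-0 -- \
  mysql -uroot -p"$ROOT_PASSWORD" -e \
  "CREATE DATABASE IF NOT EXISTS demo; CREATE TABLE IF NOT EXISTS demo.t (id INT); INSERT INTO demo.t VALUES (1);"
```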


Step 4: Demonstration of Backup and Restore


You can take a backup of MySQL by going to the dashboard and clicking the Snapshot button on the MySQL card.



You have the option to back up all resources, back up only certain resources, or handle exceptions. For simplicity, we have used the default settings. Click Snapshot Application.



The progress of the backup is shown on the dashboard as a blue bar, and you can see the Kanister artifact there. The backup is completed using Scality RING S3 storage.



The process of restoring an application using K10 is simple: just click the Restore button on the application card.



We only have one restore point right now, but over time you will have the option to restore your application to a specific point in time. Click on the restore point.



You can define a post-restore hook and configure the restore point at this stage, or proceed with the default configuration. Click the Restore button.



You can see the restore progress on the dashboard.


We have now completed our Backup and Restore using Kasten K10 and Scality RING.


This article demonstrated how Kasten K10 works and how we can use the Kanister framework for logical backup and restore of MySQL. The simplicity of the Kasten GUI shows how quick it is to add external S3 storage. We used Scality RING with its S3 server API, and then used this profile as the data storage for our application. Scality RING turns any x86 server into web-scale storage and guarantees 100% availability.

By using Kasten K10 for data management, you can see how easy it is to back up and restore data in a Kubernetes environment. There is no need to be a Kubernetes expert, or to be forced to use the CLI, to do your backups and restores. I encourage you to give K10 a try for free (no sign-up needed) and let us know how we can help. We look forward to hearing from you!

Download Free Kasten K10









