
By Christophe Jauffret and Michael Courcy

This article offers a step-by-step guide on how to configure and use Kasten K10 for data management of cloud-native applications running on Karbon Kubernetes clusters.

As Kubernetes users look for ways to rapidly configure and deploy Kubernetes clusters as they progress in their cloud native journeys, Nutanix and Kasten by Veeam are working together to make this possible. Used together, Nutanix Karbon and Kasten K10 simplify and streamline data protection in Kubernetes with an integrated, cloud-native solution.

An integrated component of Nutanix AOS, Karbon enables IT operators to deliver and manage end-to-end, production-ready Kubernetes environments with push-button simplicity, all while preserving a native user experience. Every Karbon Kubernetes cluster is deployed with Nutanix’s full-featured CSI driver, which natively integrates with Nutanix’s storage solutions, Volumes and Files, to deliver persistent storage for stateful containerized applications. S3-compatible storage is also easy to set up using Nutanix Objects.

Nutanix Objects is a software-defined object storage solution that non-disruptively scales out while lowering overall storage costs. It’s designed with an S3-compatible REST API interface to handle large amounts of unstructured data, all from a single namespace. Objects is deployed and managed as part of the Nutanix Enterprise Cloud Platform, eliminating the need for additional storage silos.

Nutanix Objects is an ideal target for Kasten K10 backup export, as it provides long-term retention and archiving, as well as cross-region replication.  As such, Kasten K10 is a perfect solution for managing the protection and the mobility of cloud-native applications on Karbon.


Purpose-built for Kubernetes, Kasten K10 is a data management software platform that runs on your cluster in its own namespace and protects your fleet of Kubernetes deployments with multi-cluster support. Kasten K10 provides secure multi-tenancy with fine-grained, role-based access control.

Kasten K10 offers: 

  • Pre-qualified integrations with leading data sources including Relational and NoSQL data services
  • Support for all major cloud-based managed Kubernetes offerings and all leading on-prem distributions
  • Support for storage via Container-Storage Interface (CSI), as well as direct storage integrations for efficiency

Kasten is now a Nutanix Ready certified Partner.

What We’re Going To Do 

Here’s a summary of the steps we’ll walk you through to demonstrate how to integrate Karbon with Kasten K10:

  1. Verify snapshot capabilities of Karbon 
  2. Run the Kasten K10 preflight check
  3. Install Kasten K10 and a sample application (MySQL)
  4. Test the snapshot and recovery of an application  
  5. Create an external object store with Nutanix Objects
  6. Configure Kasten K10 to use this object store
  7. Export snapshots to this object store and test recovery again

Karbon Snapshot Capability

Before installing Kasten K10, we’re going to verify Karbon’s snapshot capabilities because:

  1. Kasten supports snapshotting at the storage layer, providing an efficient method for data protection.
  2. A snapshot is crash-consistent (the state of each file is captured at the same point in time).
  3. Snapshots are local, which makes restoring an application quicker.

For the purposes of this article, we need to verify that the native Nutanix CSI snapshot feature is enabled by default. To do so, you will need admin access to your cluster, as well as a working kubeconfig/kubectl environment.

Verify the Nutanix CSI Snapshot Resources

Execute the following script, which checks the cluster and, if the snapshot resources are not present and the configuration is supported, installs them:


DRIVER=$(kubectl get sc -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io\/is-default-class=="true")].provisioner}')

VERSION=$(kubectl -n ntnx-system get statefulset csi-provisioner-ntnx-plugin -o jsonpath='{.spec.template.spec.containers[?(@.name=="ntnx-csi-plugin")].image}' | cut -f 2 -d 'v')

SNAP=$(kubectl -n ntnx-system get statefulset snapshot-controller -o jsonpath='{.metadata.name}' 2> /dev/null)

echo -e "You are using CSI driver v$VERSION with driver name $DRIVER\n"

if [[ $SNAP = "snapshot-controller" ]]; then
  echo "Your Karbon cluster already supports the snapshot capability"
elif [[ $DRIVER = "" && ( $VERSION = "2.3.1" || $VERSION = "2.2.0" ) ]]; then
  kubectl apply -f$VERSION/snapshot-crd-$VERSION.yaml
  kubectl apply -f$VERSION/karbon-fix-snapshot-$VERSION.yaml
elif [[ $DRIVER = "com.nutanix.csi" && ( $VERSION = "2.3.1" || $VERSION = "2.2.0" ) ]]; then
  kubectl apply -f$VERSION/snapshot-crd-$VERSION.yaml
  kubectl apply -f$VERSION/karbon-fix-snapshot-$VERSION-rev.yaml
else
  echo "**************************************************************************"
  echo "* Untested configuration. Upgrade your Karbon cluster or contact support *"
  echo "**************************************************************************"
fi
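As an aside, the `cut -f 2 -d 'v'` used above to extract the driver version simply splits the image reference on the first `v` and keeps the second field. A quick illustration (the image reference below is made up for the example):

```shell
# Hypothetical CSI plugin image reference, shaped like the one the script
# reads from the csi-provisioner-ntnx-plugin StatefulSet.
IMAGE="karbon/ntnx-csi:v2.3.1"

# Split on 'v' and keep the second field: everything after the first 'v'.
VERSION=$(echo "$IMAGE" | cut -f 2 -d 'v')

echo "$VERSION"   # prints 2.3.1
```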


Create a Volume Snapshot Class

Get the CSI secret name of the default storageclass to create your default snapshotclass:


SECRET=$(kubectl get sc -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io\/is-default-class=="true")].parameters.csi\.storage\.k8s\.io\/provisioner-secret-name}')
DRIVER=$(kubectl get sc -o=jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io\/is-default-class=="true")].provisioner}')

cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: default-snapshotclass
driver: $DRIVER
parameters:
  storageType: NutanixVolumes
  csi.storage.k8s.io/snapshotter-secret-name: $SECRET
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
deletionPolicy: Delete
EOF

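With the class in place, an on-demand snapshot of any PVC can be requested by creating a VolumeSnapshot object that references it. A minimal sketch (the snapshot name and the PVC name `data-mysql-0` are illustrative, and the `apiVersion` assumes the v1beta1 snapshot CRDs that ship with CSI 2.x):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: manual-snapshot                       # illustrative name
  namespace: mysql
spec:
  volumeSnapshotClassName: default-snapshotclass
  source:
    persistentVolumeClaimName: data-mysql-0   # illustrative PVC name
```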

Kasten K10 Installation

Run the Pre-Flight Script

To verify that everything is set up correctly, run the following pre-flight script:

helm repo add kasten --force-update && helm repo update
kubectl create ns kasten-io
kubectl annotate volumesnapshotclass default-snapshotclass \"true"

curl -s | bash

Namespace option not provided, using default namespace
Checking for tools
--> Found kubectl
--> Found helm
Checking if the Kasten Helm repo is present
--> The Kasten Helm repo was found
Checking for required Helm Tiller version (>= v2.16.0)
--> No Tiller needed with Helm v3.5.2
K10Primer image
--> Using Image ( to run test
Checking access to the Kubernetes context karbon-context
--> Able to access the default Kubernetes namespace

Running K10Primer Job in cluster with command-
./k10tools primer
serviceaccount/k10-primer created
clusterrolebinding.rbac.authorization.k8s.io/k10-primer created
job.batch/k10primer created
Waiting for pod k10primer-ph8wk to be ready - ContainerCreating
Waiting for pod k10primer-ph8wk to be ready -
Pod Ready!

Kubernetes Version Check:
Valid kubernetes version (v1.18.15) - OK

RBAC Check:
Kubernetes RBAC is enabled - OK

Aggregated Layer Check:
The Kubernetes Aggregated Layer is enabled - OK

CSI Capabilities Check:
Using CSI GroupVersion - OK

Validating Provisioners:
Is a CSI Provisioner - OK
Storage Classes:
Valid Storage Class - OK
Volume Snapshot Classes:
Has annotation set to true - OK
Has deletionPolicy 'Delete' - OK

Validate Generic Volume Snapshot:
Pod Created successfully - OK
GVS Backup command executed successfully - OK
Pod deleted successfully - OK

serviceaccount "k10-primer" deleted
clusterrolebinding.rbac.authorization.k8s.io "k10-primer" deleted
job.batch "k10primer" deleted

Install Kasten K10 Via Helm Chart

We’re going to install Kasten K10 with minimal options. In this tutorial, we focus mainly on creating policies for protecting namespaces. We won’t be covering authentication, authorization, or how to expose the Kasten K10 dashboard.

helm install k10 kasten/k10 --namespace=kasten-io

NAME: k10
LAST DEPLOYED: Thu Feb 18 02:06:30 2021
NAMESPACE: kasten-io
STATUS: deployed
Thank you for installing Kasten’s K10 Data Management Platform!

Documentation can be found at

How to access the K10 Dashboard:

The K10 dashboard is not exposed externally. To establish a connection to it use the following `kubectl` command:

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: ``


Next, check that all pods are up and running in the kasten-io namespace:

kubectl get pods -n kasten-io

aggregatedapis-svc-7f756b8699-mgv6n 1/1 Running 0 23h
auth-svc-85ddb9b698-snxx6 1/1 Running 0 23h
catalog-svc-765dbf56d5-dqkg8 2/2 Running 0 23h
config-svc-598ff54d6d-kbxc7 1/1 Running 0 23h
crypto-svc-6994ddf946-qxjb4 1/1 Running 0 23h
dashboardbff-svc-6ccbc7c44d-ll6kv 1/1 Running 1 23h
executor-svc-5f7996dbfd-jjq2g 2/2 Running 0 23h
executor-svc-5f7996dbfd-mxdbp 2/2 Running 0 23h
executor-svc-5f7996dbfd-t8md4 2/2 Running 0 23h
frontend-svc-544b546fd4-lvvtd 1/1 Running 0 23h
gateway-6574867bf5-n9kqk 1/1 Running 0 23h
jobs-svc-6d5fb9b7d4-tz6sh 1/1 Running 0 23h
kanister-svc-6df4fc7656-67sln 1/1 Running 0 23h
logging-svc-56d6f69487-wk958 1/1 Running 0 23h
metering-svc-cbd4c89c6-5l5vn 1/1 Running 0 23h
prometheus-server-d5d545569-xrxn2 2/2 Running 0 23h
state-svc-796fd5c664-gzhsc 1/1 Running 0 23h

Test the Snapshot, Backup, and Recovery of an Application 

Install the Test Application

Here, we test Kasten K10’s backup and recovery capabilities on a MySQL application running on a Karbon cluster:

kubectl create namespace mysql
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  namespace: mysql
  labels:
    app: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: ultrasecurepassword
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
EOF



We’ll then populate the database with some data once the mysql pod is ready:

kubectl exec -ti mysql-0 -n mysql -- bash

mysql --user=root --password=ultrasecurepassword

CREATE DATABASE test;
USE test;
CREATE TABLE pets (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);
INSERT INTO pets VALUES ('Puffball','Diane','hamster','f','1999-03-30',NULL);
SELECT * FROM pets;
+----------+-------+---------+------+------------+-------+
| name     | owner | species | sex  | birth      | death |
+----------+-------+---------+------+------------+-------+
| Puffball | Diane | hamster | f    | 1999-03-30 | NULL  |
+----------+-------+---------+------+------------+-------+
1 row in set (0.00 sec)

Snapshot the Application 

From this point onward, operations will be performed on the dashboard, which we’ll need to make available via port-forwarding:

kubectl --namespace kasten-io port-forward service/gateway 8080:8000

The Kasten K10 dashboard will be available at: ``

Accept the end user license agreement.

In a production context, the dashboard would be exposed through an Ingress Controller with the correct authentication subsystem configured.
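As a sketch of what that might look like (the hostname, ingress class, and any authentication annotations are placeholders that depend on your environment; the `gateway` service and port 8000 come from the port-forward command above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k10-dashboard              # illustrative name
  namespace: kasten-io
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller
  rules:
  - host: k10.example.com          # placeholder hostname
    http:
      paths:
      - path: /k10/
        pathType: Prefix
        backend:
          service:
            name: gateway          # the K10 dashboard gateway service
            port:
              number: 8000
```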

We’re now going to snapshot the application, which involves capturing the complete state of the application without exporting it to an external storage.

Go to Applications –> MySQL –> Create a Policy


  • Name: mysql-backup
  • Action: Snapshot
  • Action Frequency: Daily
  • Sub-hourly Frequencies: <default>
  • Snapshot Retention: <default>
  • Select Applications: <default>
  • Resources: <default>


Select ‘Create Policy’:


Click the ‘run once’ button.


Now, go to the main dashboard and scroll down under ‘Actions’ to view progress:


After the circle turns solid green, click on the job to open a details pane and view all artifacts that were captured:


Back up the application

Backing up the application means exporting the snapshot to an external object store. Before doing so, we need to carve out that resource. 

Create an External Object Store with Nutanix Objects

We’ll use Nutanix Objects to enable object storage services on a remote cluster in order to receive the Kasten Backup export. 

We assume that an Object Store is already deployed on a different Nutanix cluster than the one where the Karbon Kubernetes Cluster is deployed.

We’ll then use Prism Central to connect the Object Store, create a new Bucket, and add a user.

In Prism Central, go to Services -> Objects and click ‘Access Keys’.



Next, we’ll add a new user. Click ‘Add People’:


Select ‘Add people not in a directory service’ and fill in the ‘Email’ and ‘Name’ fields. Then click ‘Next’.


Generate keys and download them:


This process downloads a text file containing your access and secret key.


Next, return to ‘Object Stores’ and click on the S3 Object Store that you created earlier:


Now, create a new bucket:


Select the bucket you just created and go to ‘User Access’:


Edit user access to give read and write permissions to the user you created earlier:



We’re now done with Nutanix Objects. We’re almost there!

Setting Up the Location Profile on the Kasten K10 Dashboard


In the Kasten UI, go to Settings -> Location Profile -> New Profile -> S3 compatible 


Name the profile ‘object-storage’ and then fill out the rest of the information requested.

The S3 Access Key and S3 Secret are taken from the key file you downloaded in the previous step.

Note that ‘Endpoint’ corresponds to the URL of your object store from the Nutanix UI.


‘Bucket Name’ should correspond to the name of the bucket you previously created.


Now, click ‘Save Profile’.


Change the Policy to Add an Export Profile

Find the appropriate policy under Dashboard –> Policies, then click ‘Edit’.



Next, check ‘Enable Backups via Snapshot Exports’ and select the appropriate profile in the drop-down menu:


Click ‘Edit Policy’ and then click ‘Run’ once again.

Now, navigate to the main dashboard and scroll down under ‘Actions’ in order to view progress:


The export will execute after the backup process completes. 

When the green progress circle is complete, click on the job to open a ‘details’ pane to view all of the artifacts that were captured.

Restoring the Application

If you go back to the restore point view, you can see that we now have two restore points: a local restore point and an exported restore point.


The exported restore point can be used even if the namespace is deleted, or even if the cluster itself is deleted. The restore point can also be restored to another cluster where Kasten is installed.

Now, let’s delete the mysql namespace:

kubectl delete ns mysql

Use the exported restore point and click on ‘Restore’ without changing anything.



Let’s now check that our data is back:

kubectl exec -ti mysql-0 -n mysql -- bash

mysql --user=root --password=ultrasecurepassword

USE test;
SELECT * FROM pets;
+----------+-------+---------+------+------------+-------+
| name     | owner | species | sex  | birth      | death |
+----------+-------+---------+------+------------+-------+
| Puffball | Diane | hamster | f    | 1999-03-30 | NULL  |
+----------+-------+---------+------+------------+-------+
1 row in set (0.00 sec)

As expected, all data is intact and the restoration was successful!


Stateful workloads are no longer a major issue for Kubernetes administrators. The combination of Nutanix Karbon for management of production-grade Kubernetes clusters, Kasten K10 by Veeam for backup and recovery of data, and Nutanix Objects as an S3-compatible object storage repository is a complete solution for deploying stateful cloud-native applications with the level of resiliency and reliability you would expect from traditional on-premises infrastructure.

Now, try it out for yourself.

We encourage you to give Kasten K10 a try for free (no sign-up needed) and let us know how we can help. We look forward to hearing from you!

Download Free Kasten K10









