
This post covers how to use Kasten K10 to back up and restore a MongoDB database running on Google Kubernetes Engine (GKE). GKE is the fully managed Kubernetes service on Google Cloud Platform.

In this post, we walk through three ways to take a backup using Kasten K10:

  • Crash-consistent volume snapshots
  • Logical backups
  • Application-consistent backups (quiesce)


We assume you have already set up your Google Cloud Platform account and completed the steps to spin up a GKE cluster. You can create a cluster through the guided UI, which takes about four minutes to start.

The instructions in this post are organized into three sections:

  1. Installing Kasten K10 on your GKE cluster 
  2. Installing MongoDB
  3. The workflow of three backup strategies

The tools used in this blog post are:

  • kubectl - Kubernetes client
  • gcloud - Google Cloud client
  • Helm v3

Step 1: Installing Kasten K10 on your GKE Cluster 

There are two ways to install Kasten K10 on a GKE cluster: one is to install it from the Google Cloud Marketplace, and the other is to use Helm, as described below.

To install using Helm, see the Kasten K10 documentation, and make sure the prerequisite tools listed above are available before starting this tutorial. We use the Kasten K10 Helm chart to install K10 on a Kubernetes cluster with Helm v3. Note that the repository must be added before it can be updated:

$ helm repo add kasten https://charts.kasten.io/
$ helm repo update
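As an optional sanity check, you can confirm the chart is visible after adding the repo (this assumes helm v3 is on your PATH):

```shell
# List the K10 chart versions available from the newly added repo
helm search repo kasten/k10
```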

 

Next, we create the kasten-io namespace, where the K10 application will be deployed:

$ kubectl create namespace kasten-io

 

Now, we install K10 using the command below:

$ helm install k10 kasten/k10 -n kasten-io

 

The Helm install creates multiple deployments and services. You can validate the install with the following command:

$ kubectl get pods -n kasten-io --watch

 

Once the pods are in the Running state, you can port-forward the gateway service to access the K10 dashboard from a browser.

You can access the K10 dashboard at http://127.0.0.1:8080/k10/#/ after running the following command:

$ kubectl --namespace kasten-io port-forward service/gateway 8080:8000
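To verify the tunnel is working before opening a browser, a quick check along these lines can help (running the port-forward in the background here is just for illustration; an HTTP 200 indicates the dashboard is reachable):

```shell
# Start the port-forward in the background, give it a moment, then probe the dashboard
kubectl --namespace kasten-io port-forward service/gateway 8080:8000 &
sleep 3
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/k10/
```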

 

(Screenshot: the K10 dashboard)

Step 2: Installing MongoDB

We start by creating a mongodb namespace with the following command:

$ kubectl create namespace mongodb

namespace/mongodb created

 

We then add the Bitnami repo, which hosts the MongoDB chart, with the following command:

$ helm repo add bitnami https://charts.bitnami.com/bitnami

 

Now we install MongoDB in the mongodb namespace:

$ helm install mongodb bitnami/mongodb -n mongodb

 

This will provision a PersistentVolume, a PersistentVolumeClaim, a Deployment, a ReplicaSet, and a Pod.

To validate the installation:

$ kubectl get all -n mongodb

 

Result:

NAME                          READY   STATUS    RESTARTS   AGE
pod/mongodb-588846f67-hwpxt   1/1     Running   0          2m5s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/mongodb   ClusterIP   10.8.6.42    <none>        27017/TCP   2m6s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb   1/1     1            1           2m7s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-588846f67   1         1         1       2m7s

 

K10 automatically discovers the MongoDB instance, and you will see the data and associated resources for it.
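So that the backups below have some data to protect, you may want to insert a sample document first. A sketch, assuming the secret name and key produced by the Bitnami chart's install notes at the time of writing (adjust if your chart version differs):

```shell
# Retrieve the root password created by the Bitnami chart
# (secret name "mongodb" and key "mongodb-root-password" follow the chart's notes)
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace mongodb mongodb \
  -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)

# Insert a sample document from a throwaway client pod
kubectl run mongodb-client --rm -it --restart=Never --namespace mongodb \
  --image bitnami/mongodb -- \
  mongo --host mongodb -u root -p "$MONGODB_ROOT_PASSWORD" \
  --eval 'db.getSiblingDB("test").items.insertOne({name: "hello-k10"})'
```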


We will now walk through three types of backup with Kasten K10:

  • Crash-consistent volume snapshots
  • Logical backups
  • Application-consistent backups (quiesce)

Crash-Consistent Backup

By default, K10 takes volume snapshots to back up applications. This approach relies on the crash consistency provided by the data service, in this case MongoDB. K10 supports different backup consistency levels, including logical database-level backups, and all backups can be exported to external target storage systems such as object stores. This post shows the various backup and restore workflows. Because crash-consistent volume snapshots are the default backup mechanism, no additional configuration is required.

On the MongoDB instance card, click “Snapshot” to take a manual backup.

We can see the progress in the dashboard.

Location Profiles

Please follow the instructions for creating a location profile. This profile represents the object store used for backups and is required for the backup and restore workflows in the remaining sections of this tutorial.
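The location profile needs an object store to point at. If you do not already have one, a Google Cloud Storage bucket can be created with gsutil (the bucket name and region below are examples; bucket names must be globally unique):

```shell
# Create a GCS bucket to use as the backup target for the location profile
gsutil mb -l us-central1 gs://my-k10-backup-bucket
```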

Logical Backup

We start by creating a mongodb-logical namespace and installing MongoDB with the replica set enabled:

$ kubectl create namespace mongodb-logical
$ helm install mongo-logical bitnami/mongodb --namespace mongodb-logical --set replicaSet.enabled=true

 

We create two Blueprints because only a backup from the primary MongoDB pod is required. We therefore create a no-op Blueprint and annotate the secondary MongoDB StatefulSet to use it, which prevents volume snapshots of the secondary's volumes.

To create the first Blueprint resource, use the following command:

kubectl --namespace kasten-io apply -f https://raw.githubusercontent.com/kanisterio/kanister/0.31.0/examples/stable/mongodb/mongo-blueprint.yaml

 

The second Blueprint can be created from the following content. First, we create a file named mongo_noop.yaml:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mongo-noop
actions:
  backup:
    type: StatefulSet
    phases:
    - func: KubeExec
      name: lockMongo
      args:
        namespace: ""
        pod: ""
        container: mongodb-secondary
        command:
        - echo
        - NOOP

 

Then apply it using the following command:

kubectl --namespace=kasten-io create -f mongo_noop.yaml
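At this point both Blueprints should exist; you can confirm with the Kanister CRD's full resource name:

```shell
# Both Blueprints should now be listed in the kasten-io namespace
kubectl get blueprints.cr.kanister.io --namespace kasten-io
```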

 

Now that we have completed creating Blueprints, we have to annotate the StatefulSets to guide K10 to use the Blueprint while performing operations on this MongoDB instance:

kubectl annotate statefulset mongo-logical-mongodb-primary kanister.kasten.io/blueprint='mongodb-blueprint' --namespace=mongodb-logical
kubectl annotate statefulset mongo-logical-mongodb-secondary kanister.kasten.io/blueprint='mongo-noop' --namespace=mongodb-logical
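To confirm the annotations took effect, you can print each StatefulSet alongside the Blueprint it references (dots inside the annotation key must be escaped in JSONPath):

```shell
# Print each StatefulSet with the blueprint annotation it carries
kubectl get statefulset --namespace mongodb-logical \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.kanister\.kasten\.io/blueprint}{"\n"}{end}'
```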


Now you can use Kasten K10 to back up and restore the application.

Application-Consistent Backup (quiesce)

To get an application-consistent MongoDB backup, MongoDB is first quiesced, and then a snapshot is taken of the underlying volume belonging only to the primary replica. Like the crash-consistent approach, this one also uses volume snapshots, but here we first execute a command to quiesce the database. We start by installing the MongoDB chart (using the same Bitnami repo as before) with the following commands:

kubectl create namespace mongodb-app-consistent
helm install mongodb-app-consistent bitnami/mongodb --namespace=mongodb-app-consistent --set replicaSet.enabled=true

 

Next, we create a file named mongo_hooks.yaml:

apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mongo-hooks
actions:
  backupPrehook:
    type: StatefulSet
    phases:
    - func: KubeExec
      name: lockMongo
      objects:
        mongoDbSecret:
          kind: Secret
          name: ''
          namespace: ''
      args:
        namespace: ""
        pod: ""
        container: mongodb-primary
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          export MONGODB_ROOT_PASSWORD=''
          mongo --authenticationDatabase admin -u root -p "${MONGODB_ROOT_PASSWORD}" --eval="db.fsyncLock()"
  backupPosthook:
    type: StatefulSet
    phases:
    - func: KubeExec
      name: unlockMongo
      objects:
        mongoDbSecret:
          kind: Secret
          name: ''
          namespace: ''
      args:
        namespace: ""
        pod: ""
        container: mongodb-primary
        command:
        - bash
        - -o
        - errexit
        - -o
        - pipefail
        - -c
        - |
          export MONGODB_ROOT_PASSWORD=''
          mongo --authenticationDatabase admin -u root -p "${MONGODB_ROOT_PASSWORD}" --eval="db.fsyncUnlock()"
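To see what these hooks do, you can exercise the same quiesce and unquiesce commands by hand against the primary pod. The pod name below is a guess based on the release name (check with kubectl get pods), and this assumes the Bitnami container exposes MONGODB_ROOT_PASSWORD in its environment:

```shell
# Lock writes and flush to disk, then immediately release the lock again
kubectl exec --namespace mongodb-app-consistent mongodb-app-consistent-primary-0 -- \
  bash -c 'mongo --authenticationDatabase admin -u root -p "$MONGODB_ROOT_PASSWORD" \
  --eval "db.fsyncLock(); db.fsyncUnlock()"'
```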

 

Then apply it with the following command:

kubectl --namespace=kasten-io create -f mongo_hooks.yaml

 

Lastly, we add the following annotations to the MongoDB StatefulSets to instruct K10 to use the hooks above when performing operations on this MongoDB database instance (the mongo-noop Blueprint created in the previous section is reused for the secondary):

kubectl annotate statefulset mongodb-app-consistent-primary kanister.kasten.io/blueprint='mongo-hooks' --namespace=mongodb-app-consistent
kubectl annotate statefulset mongodb-app-consistent-secondary kanister.kasten.io/blueprint='mongo-noop' --namespace=mongodb-app-consistent


Now you can use Kasten K10 to back up and restore the application.

Conclusion

This post showed three different backup approaches using Kasten K10. You can always use the default crash-consistent mechanism by simply clicking the Snapshot button and restore with the Restore button. You can also create logical database backups by creating two Blueprints and annotating the StatefulSets to configure their use during a backup. The third strategy is an application-consistent backup, in which we created the hook Blueprint and annotated the StatefulSets to instruct K10 to use those hooks.

We encourage you to give Kasten K10 a try with a Free Kubernetes Trial, and let us know how we can help. We look forward to hearing from you!

