
Cross-Cluster Application Migration and Disaster Recovery for AWS EKS Using Kasten K10
by Michael Cade

Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service provided by AWS, which means you don't have to create or maintain the cluster control plane or its underlying infrastructure. EKS runs the control plane across multiple Availability Zones to ensure high availability and automatically replaces unhealthy instances, and it integrates with other AWS services to provide scalability and security for your applications.

In a previous blog post, PostgreSQL Backup and Restore on AWS Elastic Kubernetes Service using Kasten K10, we showed how you can back up and restore a PostgreSQL instance in an EKS cluster. While that workflow makes routine backup and restore easy with Kasten K10, for more serious failure scenarios, such as accidental snapshot deletion or larger-scale outages, Kasten K10 also provides disaster recovery and application migration.

In this blog post, we use two EKS clusters set up in different regions, both with Kasten K10 installed. One cluster has PostgreSQL installed, and we will use it as the source cluster. We will set up a pipeline that takes data and configuration backups of PostgreSQL and stores them in an AWS S3 bucket. The destination cluster will then use those backups to restore the application, either manually or on a schedule.

To set up the EKS clusters, follow the AWS instructions for spinning up a cluster from the console. To install Kasten K10 in your cluster, please refer to our previous post, which also walks you through installing PostgreSQL in your EKS cluster.
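
For reference, a minimal Kasten K10 installation with Helm looks like the sketch below. The chart, namespace, and dashboard service names match Kasten's documentation at the time of writing; adjust them to your environment.

```
# Add the Kasten Helm repository and install K10 into its own namespace.
helm repo add kasten https://charts.kasten.io
helm repo update

kubectl create namespace kasten-io
helm install k10 kasten/k10 --namespace kasten-io

# Access the dashboard locally; K10's gateway service listens on port 8000.
kubectl --namespace kasten-io port-forward service/gateway 8080:8000
# Then browse to http://127.0.0.1:8080/k10/#/
```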

In this post, we assume you have PostgreSQL and Kasten K10 deployed in your EKS clusters. To demonstrate the disaster recovery and mobility capabilities, we will walk through three steps:

  1. Configure Kasten K10 access to an AWS S3 bucket
  2. Create a policy to back up the data and application configuration from the source cluster using the export option
  3. Create a policy to import and restore these backups on the destination cluster

Configure Access to AWS S3 Bucket

In order for our clusters to use an S3 bucket for disaster recovery or application migration, we need to create a location profile on each cluster. To create a profile, we specify the AWS access key and AWS secret key used to access the service, along with the region and the name of the S3 bucket used for migrating data. This region does not need to match the region of either cluster. Here, kasten10 is the name of a bucket we created in the us-east-1 region.

[Screenshot: the kasten10 bucket in the AWS S3 console]

[Screenshot: creating the S3 location profile in the Kasten K10 dashboard]
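
The same profile can also be created declaratively on each cluster. The sketch below is a rough equivalent of the dashboard steps: the bucket, secret, and profile names are illustrative, and the Profile fields reflect the K10 API as documented at the time of writing, so verify them against the current documentation before using this.

```
# Create the bucket if it doesn't exist yet (bucket names are globally unique).
aws s3api create-bucket --bucket kasten10 --region us-east-1

# Store the AWS credentials K10 will use in a Secret (replace the placeholders).
kubectl create secret generic k10-s3-secret \
  --namespace kasten-io \
  --type secrets.kanister.io/aws \
  --from-literal=aws_access_key_id=<ACCESS_KEY> \
  --from-literal=aws_secret_access_key=<SECRET_KEY>

# Create a location profile pointing at the bucket.
cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: s3-migration-profile
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      name: kasten10          # bucket name
      region: us-east-1
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10-s3-secret
        namespace: kasten-io
EOF
```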

Create a Policy to Export or Back Up From the Source Cluster

Our source cluster has a PostgreSQL instance running and already protected by a policy, as we saw in the previous blog post. We will use this cluster to export our PostgreSQL application data and spec to another cluster.

[Screenshot: the PostgreSQL application on the source cluster's Kasten K10 dashboard]
Now we will create a policy that takes an application snapshot on a schedule and also exports the data to the S3 bucket. All of the metadata and configuration for the PostgreSQL instance is stored in the AWS S3 bucket alongside the application's data, and it is compressed, encrypted, and deduplicated.
[Screenshot: creating the backup and export policy on the source cluster]

Click on Export, and the export will start in the background; you can follow its progress in the dashboard. Once the backup policy is created, it starts exporting the application to the S3 bucket, and we are given a migration token. This token includes unique encryption keys and information about where the exported backup is stored, and it will be validated on the target cluster to ensure that the location profiles in both clusters are compatible. Don't worry if you forget to copy the string; you can always retrieve it later from the policy page.

[Screenshot: the export configuration and migration token on the source cluster]

Once the policy has been created, you have an overview of it and can retrieve the migration token from this page. As we saw in the previous post, policies can run on a schedule, or they can be triggered manually by clicking the Run Once button.
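
For those who prefer to work declaratively, the sketch below shows roughly what such a backup-and-export policy looks like as a K10 Policy resource. The names, schedule, and retention values are illustrative, and the field names reflect the K10 Policy API as documented at the time of writing, so check them against the current documentation.

```
# Back up the postgresql namespace hourly and export each backup to the
# location profile created earlier. Apply this on the source cluster.
cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: postgresql-backup-export
  namespace: kasten-io
spec:
  frequency: '@hourly'
  retention:
    hourly: 24
    daily: 7
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: postgresql   # namespace running PostgreSQL
  actions:
  - action: backup
  - action: export
    exportParameters:
      frequency: '@hourly'
      profile:
        name: s3-migration-profile             # location profile created above
        namespace: kasten-io
      exportData:
        enabled: true                          # export the volume data, not just metadata
EOF
```
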
[Screenshot: the export policy overview on the source cluster]

Once the policy triggers, you will see its progress in the dashboard. Everything we need to recover the instance is now stored in the AWS S3 bucket. Even if the source cluster is removed entirely, we can still restore the PostgreSQL data and settings with just the migration token and access to the AWS S3 bucket.
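
If you want to trigger the policy outside of its schedule without using the dashboard, K10 also exposes a RunAction resource. The sketch below assumes the policy name used above; the field names are taken from the K10 actions API as documented at the time of writing.

```
# The API equivalent of clicking Run Once on the policy.
cat <<EOF | kubectl create -f -
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RunAction
metadata:
  generateName: run-postgresql-backup-export-
spec:
  subject:
    kind: Policy
    name: postgresql-backup-export
    namespace: kasten-io
EOF
```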

[Screenshot: the completed export on the source cluster's dashboard]

Create a Policy to Import or Restore on the Destination Cluster

[Screenshot: the Kasten K10 dashboard on the destination cluster]

Now, on the destination cluster, we have no data present in the PostgreSQL namespace. We will use this cluster for disaster recovery and import the data on a schedule. The location profile grants access to the AWS S3 bucket used to import the data, and the import policy also uses the migration token from the policy on the source cluster. On each policy run, Kasten K10 checks whether any new backups have been uploaded to the S3 bucket. The migration token lets the destination cluster's import policy decrypt the configuration and data from the backup; Kasten K10 automatically validates that the token is compatible with the selected profile.

[Screenshot: creating the import policy on the destination cluster, using the migration token and location profile]
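
A declarative sketch of such an import policy is shown below. The receiveString is the migration token copied from the source cluster; the names are illustrative, and the field names follow the K10 Policy API as documented at the time of writing, so verify them against the current documentation.

```
# Import the exported backups from S3 every hour. Apply this on the destination cluster.
cat <<EOF | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: postgresql-import
  namespace: kasten-io
spec:
  frequency: '@hourly'
  actions:
  - action: import
    importParameters:
      profile:
        name: s3-migration-profile        # profile pointing at the same bucket
        namespace: kasten-io
      receiveString: "<migration token copied from the source cluster>"
  # Optional: restore automatically after each import; omit this action to keep
  # the restore points and restore manually later.
  - action: restore
    restoreParameters: {}
EOF
```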

With this policy in place, the cluster automatically imports the data every hour. Restoring on import isn't required, as the restore point will be available in the target cluster to restore manually at a later time. With our policy created, we can click Run Once to immediately import the latest backup from the S3 bucket.

[Screenshot: the completed import on the destination cluster's dashboard]

Once the import completes, all the metadata is visible as a restore point in Kasten K10's catalog.

[Screenshot: the imported restore point in the Kasten K10 catalog]

Now we can use this imported restore point to restore the backup into any namespace on the target cluster. Once we initiate the restore, we can follow its progress in the dashboard.
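
The restore can also be driven through K10's actions API. Treat the sketch below as an assumption rather than a reference: the restore point name is hypothetical, and the RestoreAction fields are my best recollection of the API, so list the actual restore points first and double-check the field names against the K10 documentation.

```
# List the restore points imported into the application's namespace.
kubectl get restorepoints.apps.kio.kasten.io --namespace postgresql

# Restore a chosen restore point into the postgresql namespace.
cat <<EOF | kubectl create -f -
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RestoreAction
metadata:
  generateName: restore-postgresql-
  namespace: postgresql
spec:
  subject:
    kind: RestorePoint
    name: <restore point name from the previous command>
    namespace: postgresql
  targetNamespace: postgresql
EOF
```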

[Screenshot: the restore in progress on the destination cluster]

Once the job is completed, the data and configuration are available in the namespace. We can confirm that all of the configuration was restored by looking at the components in the PostgreSQL application card, and we can validate the data by connecting to the postgres instance.
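
For example, here is one quick way to check the restored data, assuming a Bitnami-style PostgreSQL deployment in the postgresql namespace; the pod, secret, and key names below are assumptions and may differ in your install.

```
# Fetch the postgres password from the chart's secret (the key name varies by chart version).
export POSTGRES_PASSWORD=$(kubectl get secret postgresql --namespace postgresql \
  -o jsonpath='{.data.postgresql-password}' | base64 --decode)

# Connect to the restored instance and list the databases.
kubectl exec -it postgresql-0 --namespace postgresql -- \
  env PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -c '\l'
```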

[Screenshot: the restored PostgreSQL application on the destination cluster]

Conclusion

We have now set up a data mobility pipeline between the two EKS clusters. Any change made in the first cluster will eventually be reflected in the second cluster at the scheduled time. Since our data is exported to an AWS S3 bucket, the complete loss or unavailability of the first cluster does not impact the destination cluster; we still have the option to restore there.

Now, try it out for yourself.

We encourage you to give Kasten K10 a try for FREE, and let us know how we can help. You don't have to speak to anyone, and you can get going with a single Helm command. Free Kasten K10 also supports marketplace integrations with major Kubernetes distributions, making installation a simple click of a button.

We look forward to hearing from you!
