To demonstrate this functionality, this post will use two AKS clusters, set up in different regions. Each cluster will have K10 installed. One of the clusters will have a PostgreSQL instance running, and we’ll use this as our source. We’ll set up a pipeline that takes data and configuration backups of PostgreSQL and stores them in Azure Blob storage. We’ll then configure the destination cluster to automatically restore those backups on a schedule.
In this post we assume you have PostgreSQL and K10 deployed in your AKS cluster. To demonstrate the disaster recovery and mobility capability, we will walk through how to:
Configure K10 to access Azure Blob storage to export backups
Create a policy to backup the data and application configuration from the source cluster
Create a policy to import and restore these backups on the destination cluster
1. Configure Access to Azure Blob storage
To configure access to the object storage location where we will store the application backups, we need to create a pair of location profiles. Location profiles are used by K10 to mediate access to storage locations. To create a profile, we’ll need to specify credentials, i.e. an Azure account name and access key. We will also need to add an Azure Blob storage container name and region. If the container does not exist, K10 will automatically create it in the chosen region. This region does not need to match the region of either cluster. A profile should be created on each cluster, and the container name and region must match on both.
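The credentials the profile needs can be gathered with the Azure CLI. This is a minimal sketch; the storage account, container, and region names below are placeholders for your own environment:

```shell
# Placeholder names -- substitute your own storage account, container, and region.
AZURE_STORAGE_ACCOUNT=k10backups
CONTAINER=k10-pg-exports
REGION=eastus

# Retrieve the storage account access key that the K10 location profile needs.
ACCOUNT_KEY=$(az storage account keys list \
  --account-name "$AZURE_STORAGE_ACCOUNT" \
  --query '[0].value' -o tsv)

# Optionally create the container up front; K10 will also create it on demand.
az storage container create \
  --name "$CONTAINER" \
  --account-name "$AZURE_STORAGE_ACCOUNT" \
  --account-key "$ACCOUNT_KEY"
```

The account name and `ACCOUNT_KEY` value are what you enter when creating the location profile on each cluster.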
2. Create a Policy to Backup from the Source Cluster
As is the case for local backups, we’ll create a policy to take application snapshots on a schedule. Because we want to restore this application in another cluster, we’ll also choose to export the data to object storage. All of the metadata and configuration for the application, in this case for the PostgreSQL instance, will be stored in Azure Blob storage alongside the application’s data. This data is compressed, encrypted, and deduplicated.
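Policies are usually created through the K10 dashboard, but they are backed by a Kubernetes custom resource and can also be applied directly. The sketch below assumes PostgreSQL runs in a `postgresql` namespace and that the location profile is named `azure-blob-profile`; the exact schema can vary between K10 versions, so treat the field names as illustrative and check the K10 API reference:

```shell
# Sketch of a backup-and-export policy as a K10 Policy custom resource.
# Names (postgresql namespace, azure-blob-profile) are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: postgresql-backup-export
  namespace: kasten-io
spec:
  frequency: "@hourly"
  retention:
    hourly: 24
    daily: 7
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: "@hourly"
        profile:
          name: azure-blob-profile
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: postgresql
EOF
```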
Once we create the policy to back up the application to object storage, we’ll get access to a migration token. This token includes unique encryption keys and information on where the exported backup is stored. This information will be validated on the target cluster to ensure that the location profiles on both clusters are compatible. Don’t worry if you forget to copy the token; you can always retrieve it later from the policy page.
Once created, you can return to the policies page to see a high-level overview of the policy, or retrieve the migration token. The policy will trigger actions based on the specified schedule, but you can also manually trigger the policy by clicking the button labeled "run once."
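The "run once" button can also be reproduced from the command line by creating a RunAction resource that points at the policy. This is a hedged sketch; the RunAction schema below follows the K10 actions API as I understand it, and the policy name matches the placeholder used above:

```shell
# Manually trigger a policy run (equivalent to the dashboard's "run once").
# The policy name is a placeholder; generateName requires `kubectl create`.
kubectl create -f - <<'EOF'
apiVersion: actions.kio.kasten.io/v1alpha1
kind: RunAction
metadata:
  generateName: run-postgresql-backup-
spec:
  subject:
    apiVersion: config.kio.kasten.io/v1alpha1
    kind: Policy
    name: postgresql-backup-export
    namespace: kasten-io
EOF
```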
Actions triggered by policies will show up on the main dashboard in the Jobs section. Here we can now see that the policy triggered a backup and export of our PostgreSQL instance. Everything we need to recover the instance is now stored in Azure Blob storage. Even if the source cluster is deleted completely, we would be able to restore PostgreSQL's data and configuration with only the migration token and access to the Azure Blob container.
3. Create a Policy to Restore on the Destination Cluster
On the destination cluster, we’ll create an import policy to import and restore our PostgreSQL instance. On each policy run, K10 will check whether any new backups have been uploaded to the object storage. In addition to a profile, which grants API access to the container in Azure Blob storage, the import policy also uses the migration token from the policy on the source cluster. This token lets the target cluster’s import policy decrypt the configuration and data from the backup. K10 automatically validates that the token is compatible with the selected profile.
In this case, we chose to automatically restore the backup, but this isn't required. Either way, a RestorePoint will be available in the target cluster and can be used to restore manually at a later time.
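As with the backup policy, the import policy can be expressed as a K10 Policy resource on the destination cluster. The sketch below is illustrative: the profile name mirrors the placeholder used earlier, the migration token is left as an obvious stand-in, and the exact field names (including how restore-after-import is expressed) may differ between K10 versions:

```shell
# Sketch of an import-and-restore policy on the destination cluster.
# Replace the receiveString value with the migration token from the source cluster.
kubectl apply -f - <<'EOF'
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: postgresql-import-restore
  namespace: kasten-io
spec:
  frequency: "@hourly"
  actions:
    - action: import
      importParameters:
        profile:
          name: azure-blob-profile
          namespace: kasten-io
        receiveString: "<migration token from the source cluster>"
    # Omit the restore action to import RestorePoints without restoring them.
    - action: restore
      restoreParameters: {}
EOF
```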
We can watch the main dashboard for the import and restore jobs to complete.
Once the restore job finishes successfully, we can check to make sure PostgreSQL is up and running. We can open a tunnel to the PostgreSQL service with kubectl port-forward, get the password from the Kubernetes secret, and check the instance's status using a local psql client.
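The check described above might look like the following. The service and secret names assume a Bitnami-style PostgreSQL chart released as `postgres` into a `postgresql` namespace; adjust them (and the secret's key name) to match your deployment:

```shell
# Tunnel local port 5432 to the restored PostgreSQL service (placeholder names).
kubectl -n postgresql port-forward svc/postgres-postgresql 5432:5432 &

# Read the password from the Kubernetes secret; the key name may differ per chart.
export PGPASSWORD=$(kubectl -n postgresql get secret postgres-postgresql \
  -o jsonpath='{.data.postgres-password}' | base64 -d)

# Confirm the restored instance responds to queries with a local psql client.
psql -h 127.0.0.1 -U postgres -c 'SELECT version();'
```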
We’ve now successfully set up a data mobility pipeline. Any data or configuration changes made to the PostgreSQL instance on the first cluster will eventually be reflected in the destination cluster’s instance. In addition, because everything is stored entirely in Azure Blob storage in any region, the first cluster may be completely lost or unavailable without impacting the destination cluster.
Tom graduated with an M.S.E. from the University of Michigan in 2013. His first job was on the server team at Maginatics, a cloud-based file system company which was acquired by EMC late in 2014. After the acquisition, he joined Dropbox, where he focused on improving the efficiency of Dropbox’s metadata storage, disaster recovery, and fault tolerance. In 2017, Tom joined the founding team at Kasten. Kasten is focused on Kubernetes data management, and was acquired by Veeam in 2020. He is now managing teams spanning multiple projects, all with the aim of simplifying data management in cloud-native environments.