
Data Mobility: PostgreSQL on AKS using K10

In our previous post, "PostgreSQL Backup and Restore on Microsoft Azure using Kasten K10" (part 1), we showed how to back up and restore a PostgreSQL instance running in an Azure Kubernetes Service (AKS) cluster. The workflow presented there is not sufficient to protect your application from serious failure scenarios such as accidental snapshot deletion or larger-scale outages. To protect against such scenarios, we can leverage Kasten K10's application mobility and disaster recovery capabilities.

To demonstrate this functionality, this post will use two AKS clusters set up in different regions, each with K10 installed. One cluster will run a PostgreSQL instance, and we'll use it as our source. We'll set up a pipeline that takes data and configuration backups of PostgreSQL and stores them in Azure Blob storage, and we'll configure the destination cluster to automatically restore those backups on a schedule.

For instructions on setting up an AKS cluster, see Microsoft's documentation on creating a cluster with the Azure portal. For instructions on installing K10 in AKS, see the previous blog post, which also covers installing a PostgreSQL instance in your cluster.

In this post we assume you have PostgreSQL and K10 deployed in your AKS cluster. To demonstrate the disaster recovery and mobility capability, we will walk through how to:

  1. Configure K10 to access Azure Blob storage to export backups
  2. Create a policy to back up the data and application configuration from the source cluster
  3. Create a policy to import and restore these backups on the destination cluster

1. Configure Access to Azure Blob storage

To configure access to the object storage location where we will store the application backups, we need to create a pair of location profiles. Location profiles are used by K10 to mediate access to storage locations. To create a profile, we'll need to specify credentials, i.e., an Azure storage account name and access key, as well as an Azure Blob storage container name and region. If the container does not exist, K10 will automatically create it in the chosen region. This region does not need to match the region of either cluster. A profile should be created on each of the clusters, and the container name and region should match.
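If you prefer the command line, the Azure CLI can create the storage account and surface the access key that the location profile needs. This is only a sketch: the resource group, account, and location names below are hypothetical placeholders, and since K10 creates the container for you, you only need the account name and key.

```shell
# Hypothetical names -- substitute your own resource group and account.
AZURE_RG=k10-demo
AZURE_SA=k10backupsdemo

# Create a storage account for K10's exported backups.
az storage account create --name "$AZURE_SA" --resource-group "$AZURE_RG" \
  --location eastus --sku Standard_LRS

# Print the access key to paste into the K10 location profile form.
az storage account keys list --account-name "$AZURE_SA" \
  --resource-group "$AZURE_RG" --query "[0].value" --output tsv
```

The same account name and key are used when creating the matching profile on the destination cluster.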


2. Create a Policy to Back Up from the Source Cluster

As is the case for local backups, we'll create a policy to take application snapshots on a schedule. Because we want to restore this application in another cluster, we'll also choose to export the data to object storage. All of the metadata and configuration for the application, in this case for the PostgreSQL instance, will be stored in Azure Blob storage alongside the application's data. This data is compressed, encrypted, and deduplicated.
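K10 policies can also be managed declaratively as Policy custom resources rather than through the dashboard. The sketch below shows roughly what such a backup-and-export policy can look like; the exact fields depend on your K10 version, and `azure-blob-profile` is a placeholder for the location profile created in the previous step.

```yaml
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: postgresql-backup-export
  namespace: kasten-io
spec:
  comment: Hourly backup of PostgreSQL, exported to Azure Blob storage
  frequency: '@hourly'
  retention:
    hourly: 24
    daily: 7
  selector:
    matchLabels:
      # Target the namespace where PostgreSQL is installed.
      k10.kasten.io/appNamespace: postgresql
  actions:
    - action: backup
    - action: export
      exportParameters:
        frequency: '@hourly'
        profile:
          # Placeholder: the location profile pointing at Azure Blob storage.
          name: azure-blob-profile
          namespace: kasten-io
        exportData:
          enabled: true
```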


Once we create the policy to backup the application to object storage, we'll get access to a migration token. This token includes unique encryption keys and information on where the exported backup is stored. This information will be validated on the target cluster to ensure that the location profiles on both clusters are compatible. Don't worry if you forget to copy the string; you can always retrieve it later from the policy page.

Once created, you can return to the policies page to see a high-level overview of the policy, or retrieve the migration token. The policy will trigger actions based on the specified schedule, but you can also trigger it manually by clicking the button labeled "run once."


Actions triggered by policies show up on the main dashboard in the Jobs section. Here we can see that the policy triggered a backup and export of our PostgreSQL instance. Everything we need to recover the instance is now stored in Azure Blob storage. Even if the source cluster is deleted completely, we would be able to restore PostgreSQL's data and configuration with only the migration string and access to the Azure Blob container.
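The same jobs can also be inspected from the command line. K10 surfaces each action as a custom resource through an aggregated API; the commands below assume K10 is installed in its default `kasten-io` namespace, and the resource names may vary across K10 versions.

```shell
# List the backup and export actions K10 has run, across all namespaces.
kubectl get backupactions.actions.kio.kasten.io --all-namespaces
kubectl get exportactions.actions.kio.kasten.io --all-namespaces

# Describe a specific action to see its status and any failure details.
kubectl describe backupactions.actions.kio.kasten.io -n postgresql
```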


3. Create a Policy to Restore on the Destination Cluster

On the destination cluster, we'll create an import policy to import and restore our PostgreSQL instance. On each policy run, K10 will check whether any new backups have been uploaded to the object storage. In addition to a profile, which grants API access to the container in Azure Blob storage, the import policy also uses the migration token from the policy on the source cluster. This token lets the target cluster's import policy decrypt the configuration and data from the backup. K10 automatically validates that the token is compatible with the selected profile.
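As with the export policy, the import policy can be expressed as a Policy custom resource on the destination cluster. This is a sketch only: field names may differ by K10 version, `azure-blob-profile` is the placeholder name of the matching location profile on this cluster, and the migration token placeholder must stay as-is until you paste in the real string from the source cluster's policy page.

```yaml
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: postgresql-import-restore
  namespace: kasten-io
spec:
  comment: Import PostgreSQL backups from the source cluster and restore them
  frequency: '@hourly'
  actions:
    - action: import
      importParameters:
        profile:
          # Placeholder: must reference the same container and region as the source.
          name: azure-blob-profile
          namespace: kasten-io
        # Paste the migration token copied from the source cluster's policy.
        receiveString: <migration token from the source cluster>
    # Optional: restore automatically after each successful import.
    - action: restore
      restoreParameters: {}
```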


In this case, I chose to automatically restore the backup, but that isn't required. Either way, a RestorePoint will be available in the target cluster and can be used to restore manually at a later time.

We can watch the main dashboard for the import and restore jobs to complete.


Once the restore job finishes successfully, we can check that PostgreSQL is up and running. We can open a tunnel to the PostgreSQL service with kubectl port-forward, get the password from the Kubernetes secret, and check the instance's status using a local psql client.

kubectl port-forward --namespace postgresql svc/postgres-postgresql 5432:5432 &
export PGPASSWORD=$(kubectl get secret --namespace postgresql postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
psql --host 127.0.0.1 -U postgres -d postgres -p 5432


We’ve now successfully set up a data mobility pipeline. Any data or configuration changes made to the PostgreSQL instance on the first cluster will eventually be reflected in the destination cluster’s instance. In addition, because everything is stored entirely in Azure Blob storage in any region, the first cluster may be completely lost or unavailable without impacting the destination cluster. 

We would love to hear from you to understand how K10 could be useful in your Kubernetes environment. Follow and contact us on Twitter, LinkedIn, or drop us an email.
