
In part one of our blog post series on AKS and Storage, we took a look at how storage services provided by Azure for AKS differ in terms of features, and walked through an example of how to use the native API to provision storage directly from Kubernetes. In this part two, we’ll examine how various storage services differ in terms of performance, and explain how to benchmark and validate storage performance using Kubestr.

Standard versus Premium Options

Storage services differ considerably in the configuration and protocols they support, and in what is natively integrated with AKS. Performance also differs and is often tied to the size of the volume. Below, I illustrate different sizes for each service.

The tables below showcase the different tier options (i.e. Standard and Premium). Since Blob storage is still in preview, I have not included it as part of the comparison. 

NOTE: Pricing based on Azure Price Calculator — West Europe

| Service | Managed Disks - Standard HDD | Azure Files - Standard | Azure NetApp Files (minimum 4 TB regardless of volume size) |
| --- | --- | --- | --- |
| Performance 100 GB | (128 GB minimum) 60 MB/s - 500 IOPS | 100 GB (66 MB/s - 500 IOPS) | 100 GB (1.56 MB/s) |
| Cost | $5.89 | $6 | $604 |
| Performance 250 GB | (256 GB minimum) 60 MB/s - 500 IOPS | 250 GB (75 MB/s - 750 IOPS) | 250 GB (3.91 MB/s) |
| Cost | $21 | $15 | $604 |
| Performance 1000 GB | (1024 GB minimum) 60 MB/s - 500 IOPS | 1000 GB (122 MB/s - 1424 IOPS) | 1000 GB (15 MB/s) |
| Cost | $41 | $60 | $604 |

 

| Service | Managed Disks - Premium | Azure Files - Premium | Azure NetApp Files - Premium (minimum 4 TB regardless of volume size) |
| --- | --- | --- | --- |
| Performance 100 GB | (128 GB minimum) 100 MB/s - 500 IOPS | 100 GB (66 MB/s - 500 IOPS) | 100 GB (6.25 MB/s) |
| Cost | $21 | $19 | $1205 |
| Performance 250 GB | (256 GB) 125 MB/s - 1100 IOPS | 250 GB (75 MB/s - 656 IOPS plus bursting) | 250 GB (15.63 MB/s) |
| Cost | $41 | $48 | $1205 |
| Performance 1000 GB | (1024 GB) 200 MB/s - 5000 IOPS | 1000 GB (122 MB/s - 1424 IOPS) | 1000 GB (62.50 MB/s) |
| Cost | $148 | $192 | $1205 |

 

Azure NetApp Files is at a disadvantage here because of the requirement to start with a minimum of 4 TB. If we size the comparison at 8 TB instead, NetApp delivers the highest throughput:

Azure NetApp Files Premium = 512 MB/s ($2410 per month)
Azure Managed Disks Premium = 500 MB/s ($1040 per month)
Azure Files Premium = 360 MB/s ($1572 per month)

NOTE: If you use Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. As of now, Azure NetApp Files can only support 1,000 IP addresses within the same VNET (plus any additional peered VNETs). 

Benchmarking and Validating Storage Using Kubestr

Once the storage classes and providers are in place, a best practice is to validate the configuration and benchmark the storage. This ensures that we get the right performance before placing services on top of the Kubernetes cluster. A good solution to use for this is Kubestr. 

Kubestr is a collection of tools to discover, validate and evaluate Kubernetes storage options. Kubestr runs directly from Azure Cloud Shell, since it uses the Kubernetes API to communicate with the underlying cluster.
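Because Kubestr talks to whatever cluster your current kubeconfig context points at, make sure that context targets the right AKS cluster before running it. The resource group and cluster names below are placeholders:

# Fetch credentials for the target AKS cluster (names are placeholders)
az aks get-credentials --resource-group my-resource-group --name my-aks-cluster

# Confirm which context kubectl (and therefore Kubestr) will use
kubectl config current-context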

Download one of the latest releases from the kastenhq/kubestr releases page on GitHub: https://github.com/kastenhq/kubestr/releases

Here’s an example:

wget https://github.com/kastenhq/kubestr/releases/download/v0.4.17/kubestr-v0.4.17-linux-amd64.tar.gz

tar -xf kubestr-v0.4.17-linux-amd64.tar.gz

chmod +x kubestr

./kubestr


If used correctly, Kubestr will present you with the following screen listing the different storage classes that are available for the cluster: 

[Screenshot: Kubestr output listing the storage classes available in the cluster]
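If you just want the raw list of storage classes and their provisioners without Kubestr, kubectl shows the same information:

# List the storage classes and the provisioner behind each one
kubectl get storageclass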

You can use this information to benchmark the azurefile storage class, using the command:

./kubestr fio -s azurefile


This will create a new persistent volume using the built-in provisioner, and run a series of fio (flexible I/O tester) jobs against the storage layer to determine the storage bandwidth and IOPS. By default, it creates a volume of about 100 GB using the native Azure Files storage class, which should deliver close to 500 IOPS and 66 MB/s of throughput:

[Screenshot: Kubestr fio results for the azurefile storage class]

Since the read and write jobs run in parallel, and the Azure Files IOPS cap applies to read and write operations combined, the results are in line with expectations. We can also simulate larger volumes by defining a custom volume size:

./kubestr fio -s azurefile -z 250Gi


Once the benchmark is completed, Kubestr will automatically delete the volume, using the built-in provisioner.
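Recent Kubestr releases also let you supply your own fio job file instead of the default profile (the -f/--fiofile flag; check ./kubestr fio --help for your version). The job file below is a hypothetical 70/30 random read/write mix, not one shipped with Kubestr:

# Write a custom fio job definition (illustrative values)
cat <<'EOF' > random-rw.fio
[global]
ioengine=libaio
runtime=60
time_based

[rand-rw]
rw=randrw
rwmixread=70
bs=8k
iodepth=16
size=1G
EOF

# Run the custom job against the azurefile storage class
./kubestr fio -s azurefile -f random-rw.fio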

We can repeat the benchmark against Azure Files Premium, which offers higher levels of bandwidth and IOPS:

./kubestr fio -s azurefile-premium -z 250Gi


[Screenshot: Kubestr fio results for the azurefile-premium storage class]


Remember, Kubestr can be used to validate CSI providers, to ensure that the CSI driver is configured correctly, and that the provisioner can perform operations such as creating or deleting volumes and snapshots. 

The Azure Disk CSI driver supports creating snapshots of persistent volumes. However, you must first create a VolumeSnapshotClass for the driver, which you can do by deploying the following YAML file:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/snapshot/storageclass-azuredisk-snapshot.yaml
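If you prefer to see exactly what you are applying, you can create an equivalent VolumeSnapshotClass inline. This is a sketch based on the Azure Disk CSI driver's snapshot support, not a verbatim copy of the referenced file, so compare it with the version that matches your driver:

# Create a VolumeSnapshotClass for the Azure Disk CSI driver
# (the name matches the -v argument used with kubestr csicheck below)
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-azuredisk-vsc
driver: disk.csi.azure.com
deletionPolicy: Delete
EOF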

 

If we now rerun Kubestr, it will detect the new capability for Azure Disks: 

[Screenshot: Kubestr output showing the new snapshot capability detected for Azure Disks]

With the following command, we can use Kubestr to validate the CSI driver and the creation of a volume snapshot:

./kubestr csicheck -s managed-csi-premium -v csi-azuredisk-vsc


[Screenshot: Kubestr csicheck output confirming successful volume and snapshot operations]
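Once csicheck passes, taking a snapshot of one of your own volumes only needs a small VolumeSnapshot manifest. The PVC name below is hypothetical; point it at a claim that actually exists in your cluster and is backed by Azure Disk:

# Request a snapshot of an existing PVC using the snapshot class validated above
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc
  source:
    persistentVolumeClaimName: demo-pvc
EOF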

What Kind of Storage Service Should You Use for AKS?

Several storage services work well with AKS. In some cases, you may require persistent data and a storage service that resides outside of your AKS cluster. In most cases, I recommend going with Azure Files, as it provides native integration through CSI drivers, supports multiple protocols, and offers bursting capabilities. It also works well with the native backup service in Azure.

Finally, since there are a lot of changes happening to Kubernetes and the various storage integrations, I recommend proper storage benchmarking and validating the different drivers before deploying workloads on the clusters. Doing so will help ensure consistent performance and reduce the risk of your applications misbehaving. 

Learn more about Kubestr.

 
