Kasten K10 Multi-Cluster Demo
Alex Vorbau:
Hi, I'm Alex. I'm an engineer here at Kasten. And I'm going to show you around the new K10 Multi-Cluster feature. Now, if you've used K10 before, this will look familiar to you. This is our dashboard for K10 running in a cluster. And in this cluster, we've discovered eight applications. We don't have any policies yet protecting those applications, so it says zero. And of course we don't have any data resulting from data protection. And no jobs have run. So this is a pretty fresh, new cluster.
But what is new on this screen is this breadcrumb item here for navigation that says Clusters. And the name of this particular cluster is “primary.” It could be anything, but just for simplicity, we have this named primary and secondary, which I'll show you in a second.
So if I were to go to the clusters page, this is new. This is a new single pane of glass, multi-cluster dashboard that shows a summary of all the information across your clusters. We only have two clusters set up here, and what you'll notice is that one of these clusters is labeled as primary. It also happens to be called primary. But when you configure a multi-cluster environment, you'll designate one of the clusters as primary. And so this is where the UI for the multi-cluster dashboard is. So I'm actually just pointing this UI at my primary cluster, but I can still browse through and look at all of the clusters through this UI.
So, like I said before, these are pretty fresh clusters. We don't have any policies yet, and no compliance yet because we haven't set up policies. So we have eight unmanaged applications here, three unmanaged applications on that one, and 11 total across the two clusters. I'll show you the global resources in a little bit, but we don't have any data being stored yet.
So let's go ahead and just dive into one of these clusters and take a look and see what kind of applications we have to protect here. We have a variety of different apps, but I see that MySQL is on here. So we'll try to protect that.
Now let's go take a look at the other cluster. I can jump straight there using this navigation feature. And here I see we have MySQL also, and I'm waiting for the details to be discovered. Okay, we have two volumes, and it looks like they're eight gigs each.
So let's go ahead and create a policy to be distributed to all of our clusters in one action. So this is a summary of our global resources. And what that means is we can define location profiles, which define your cloud settings at Amazon or Google or wherever, and you can define your policies centrally. And then we have this notion of a distribution, which I'll show you. But we define policies and profiles, and then we distribute them to our clusters.
So let's go ahead and start here. We'll create a location profile and we'll just call it my-location.... oops, spell it right... my-location-profile, and we'll set it up with Amazon. I'll use my password manager here to grab the credentials. And paste this in here, the key. And I'll grab the secret, paste that in. I'll select a region. And I've got a test bucket that I called kasten-io-alexv. And we'll save the profile.
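For reference, a location profile like this one is stored in the cluster as a K10 `Profile` custom resource. Here's a rough sketch of what the generated resource can look like; the exact field names, the assumed region, and the credential-secret wiring may differ by K10 version, so check the YAML on your own install:

```yaml
# Sketch of a K10 location Profile; field names are assumptions,
# verify against the YAML your own K10 install generates.
apiVersion: config.kio.kasten.io/v1alpha1
kind: Profile
metadata:
  name: my-location-profile
  namespace: kasten-io
spec:
  type: Location
  locationSpec:
    type: ObjectStore
    objectStore:
      objectStoreType: S3
      name: kasten-io-alexv     # the S3 bucket from the demo
      region: us-west-2         # assumed region
    credential:
      secretType: AwsAccessKey
      secret:
        apiVersion: v1
        kind: Secret
        name: k10secret-my-location-profile   # assumed secret name
        namespace: kasten-io
```

The AWS key and secret themselves live in the referenced Kubernetes Secret rather than in the profile resource.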
Now, if you're familiar with Kasten's UI, you'll notice that this is different. We're basically trying to make it really clear that this is a global profile, that it hasn't actually been added to any clusters yet. It's being held here in the global UI. And then we'll distribute this in a moment. So let's go ahead and create a policy that uses this location profile. And a lot of this will look familiar if you've used K10 before. So we're going to call this protect-mysql. And we want to snapshot daily. And we'll go ahead and export to... It's already pre-configured or preselected my-location-profile there. We'll go ahead and export those to cloud storage for backup, to make them backups. And we're going to say mysql.
Now, what the K10 UI is doing here is it's actually querying all of the applications across all the clusters and then showing a deduplicated list of them all. So I'm selecting mysql. And we don't need to do any of this additional advanced stuff here, so I'll go ahead and create the policy. Again, we have this extra bar at the top that just says that this has not been added to a distribution yet. This policy won't take effect until it's actually been moved to a cluster. And we can see that we're using this location profile.
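Under the hood, that policy is also a K10 custom resource. A sketch of what it can look like, assuming a daily snapshot plus export through my-location-profile (field names may vary slightly between K10 versions):

```yaml
# Sketch of the resulting K10 Policy (assumed field names).
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: protect-mysql
  namespace: kasten-io
spec:
  comment: Daily snapshot and export for mysql
  frequency: '@daily'
  actions:
    - action: backup
    - action: export          # send the snapshot to cloud storage
      exportParameters:
        frequency: '@daily'
        profile:
          name: my-location-profile
          namespace: kasten-io
        exportData:
          enabled: true
  selector:
    matchLabels:
      k10.kasten.io/appNamespace: mysql   # the selected application
```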
Now, we have this convenient button here that lets you go straight to Distributions with one click and pre-fills the form. But instead of doing that, I'll just go ahead and do it manually so you can see the process. So now we're going to create a distribution to distribute these resources out to the clusters, and we're going to call it protect-all-the-mysqls.
And now we've got to decide which clusters we want to send our resources to. So we're going to go ahead and specifically name them, the primary and secondary clusters, but you could also use labels. If you have dozens or hundreds of clusters, you may label your clusters based on what type of cluster they are, maybe production or staging or test or dev. Or maybe the labels are based on organization, marketing or development and that sort of thing. It depends on your situation, but you can do it based on whatever labels you set up on the clusters themselves. And that way, if new clusters are added with that label, they'll automatically be picked up.
So I'm going to add my policy that I just created, protect-mysql. And it's going to let me know, "Hey, this policy depends on this location profile." You probably want to add that, too. So we'll go ahead and add that as well. So now we've defined what we want to distribute and where we want to send these resources, and we're going to add the distribution and let it get to work.
So what it's going to do next is start synchronizing these resources out to those clusters, and it looks like it's already done. As is typical in our K10 UI, you can actually look at the YAML of the resource; the distribution, just like our policies and profiles, is a Kubernetes custom resource. And so you can see the actual resource there in YAML.
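As a rough illustration, a distribution that bundles the policy and profile for the two named clusters might look something like this. I'm sketching the schema here, so treat the API group/version and field names as assumptions and inspect the real YAML shown in the dashboard:

```yaml
# Illustrative sketch only: the group/version and field names are
# assumptions; check the actual resource YAML in the K10 dashboard.
apiVersion: dist.kio.kasten.io/v1alpha1
kind: Distribution
metadata:
  name: protect-all-the-mysqls
  namespace: kasten-io-mc
spec:
  # Target clusters by name; a label selector could instead match
  # something like environment=production across many clusters.
  clusterSelector:
    matchExpressions:
      - key: mc.kasten.io/name
        operator: In
        values:
          - primary
          - secondary
  resourceSelector:
    names:
      - kind: Policy
        name: protect-mysql
      - kind: Profile
        name: my-location-profile
```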
So now when we go back here, we can see that these items have been distributed out and we can go look at the clusters and see what's going on here. So we can see that this policy has arrived on this cluster and it's labeled as Global, and we can't edit or delete it because it's a global policy. And if we come back here, the reason it's labeled as Non-Compliant is because it hasn't run yet. So we have a policy protecting it, it hasn't run yet, no jobs have been scheduled, so it's being shown as Non-Compliant.
And we'll switch over to the other cluster and see what's going on there. Okay, we also have a policy here. It's the same protect-mysql policy. And I'm going to go ahead and run this. And we'll also take a look at the location profiles here. We can see the location profile showed up as well, just like we expected. And I'll go back over to the primary. Now, normally I wouldn't have to manually start these; they would just pick up on schedule. But since we're doing a demo here, I'll go ahead and start them off.
Let me go back to the primary. It's where I meant to go. And we will run this as well. And then we'll start to see these applications come into compliance. And you can see that this one is now showing as compliant because the job has been scheduled and it's running and it's doing the backup and it'll export when it's finished.
Once those jobs have completed, you'll start to see the summary changing here. We're showing these applications as compliant. We have one policy on each of these clusters. And this will update in a moment here, Prometheus just hasn't updated in the background yet.
So that's how it works. We tried to make it easier for customers who have lots of clusters. You can imagine how tedious it would be to do this manually across dozens or hundreds of clusters, and this should make it a lot easier. Hope you enjoy using it.