In Part 1 of this blog post series, we explored the possibility of Kubernetes becoming the new leading enterprise application platform. Part 2 examined one of the key reasons the platform is gaining traction with enterprises: the desire to move stateless applications off traditional infrastructure and into Kubernetes. In this third and final post, we’ll look at three key challenges for managing stateful workloads in Kubernetes.
So you’ve decided to adopt cloud native design and architecture. Or perhaps the C-suite is asking you to deliver a cloud operating model as a service. In addition to the cultural changes required to introduce this new way of doing things, there are significant challenges related to infrastructure, operations and data management.
During a recent webinar, Adam Bergh, who heads up Cloud Native Technical Partnerships at Kasten by Veeam, and Ajay Dholakia, Principal Engineer and Senior Solution Architect at Lenovo, discussed each set of challenges in depth. Here are the key takeaways:
- Day Zero – Infrastructure: When you set out to design the infrastructure to support a cloud native model, there are several variables to consider. Will you deploy it on-premises or in a public cloud – or will you adopt a hybrid approach? According to Dholakia, there are pros and cons to each. “The question is, where does the data reside?” he said. “There will be challenges both in terms of latency and security, but also potentially regulatory restrictions.” Monitoring, security, storage and logging are also key considerations. “Getting ready for DevOps and cloud native methodologies requires continuous monitoring of the entire system stack, all the way up to the application – not just the hardware or the software infrastructure,” Dholakia added.
Ensuring you have the right skill set is also a top concern. “When you adopt containerization, you’re moving from a monolithic approach to a microservices-based approach, working with ISV partners and deploying services in a setting in which tools such as Kubernetes will be used for management and orchestration,” Dholakia said. “Do you have the requisite skill set? These are some of the early questions and concerns we must address leading up to Day Zero.”
- Day 1 – Culture: Once you’ve built out your infrastructure and you’ve chosen an application or two to containerize, you begin pushing code into production. The business is now relying on the applications you’re running and updating in Kubernetes. Here, compliance and security become critical.
“What we hear from customers is that they struggle to find a balance between taking the time they need to get things done and delivering at the velocity of the business need,” Bergh said. “They’re getting used to the fact that if a problem occurs, the code is quickly brought back into the DevOps cycle, and synergy between operations and developers is essential.”
Bergh added that getting organizations to function in a fundamentally new way can be challenging. “You’ve implemented a lot of new technologies, such as infrastructure-as-code,” he said. “Developers need to have some control over the operations of the infrastructure, which may be completely foreign to traditional enterprise IT.”
- Day 2 – Operations: Typically, the next step is to tackle the operational challenges of going cloud native: delivering guaranteed SLAs and security for your business-critical applications. Operations management and data governance are the focus at this stage, as you look for ways to ensure seamless data backup, recovery and mobility, without slowing down any teams. “Getting to production is a big challenge for a lot of organizations,” Bergh said. “It's rare that organizations implement a single Kubernetes cluster or even a single distribution. That could be using multiple clouds, for example. How do they empower the dev teams to move applications seamlessly into production?”
According to Bergh, most organizations still don’t have a full DR strategy in place, despite the necessity. “These things need to be considered from the get-go, and you need to be looking at the best-of-breed solutions up front during the Day Zero stage, to make sure that you’re ready for production and don't get hung up on Day 1,” he said. “Smart organizations that are implementing cloud native are thinking about production-ready Kubernetes upfront, and they're partnering with organizations that understand production-ready Kubernetes.”
Bergh reminded the audience that ransomware is a constant threat and it’s not going away. The skills gap, he said, leaves holes in security, as teams ramp up with Kubernetes and cloud native applications. “Just because we containerized our applications and put them in this shiny new Kubernetes cluster, that doesn’t mean the data is protected,” he said. “You still need to be thinking about ransomware. It’s a real issue, from Day Zero to Day 1 and Day 2.”
Kasten and Lenovo are partnering to help solve these challenges, and enable organizations to reap all the benefits of a cloud native model via Kubernetes. Together they enable:
- Proven Kubernetes solutions that work at scale
- Improved ROI through flexible Kubernetes distribution choices, and server and storage infrastructure with proven integration
- Reliable protection and fast recovery of Kubernetes apps and data with Kasten K10 and Lenovo infrastructure
Listen to the on-demand webinar to learn more about best-in-class cloud native solutions from Kasten by Veeam and Lenovo. Or, try Kasten K10 for free today!