At Kasten, our mission is to dramatically simplify operational management of stateful cloud-native applications. Kasten's K10, our enterprise-grade data management platform for Kubernetes backup and DR, delivers on this mission by helping our customers protect their cloud-native applications against accidental or malicious data loss.
As part of this mission to protect applications, we take security very seriously. In today's multi-tenant Kubernetes clusters, security is critical. This is why K10 supports multiple ways of authenticating a user, so that fine-grained role-based access control (RBAC) can be layered on top. While we will cover our RBAC support in a later post, this article lists some of the authentication methods available in K10, with a focus on Red Hat OpenShift's OAuth proxy.
Methods of Authentication
Basic authentication
This mode allows users to authenticate with a username and password. It is a quick way to set up a basic form of authentication with K10 for product evaluations or proofs of concept (POCs), but it is not recommended for production use.
Token-based authentication
In this mode, the user is presented with a login screen where they can enter their Kubernetes/OpenShift Bearer token to gain access to K10’s dashboard.
OpenID Connect (OIDC) based authentication
In this mode, K10 interacts with an OIDC provider such as Okta, Google, or Keycloak, so that users can access K10's dashboard with their existing credentials from that provider.
Red Hat OpenShift’s OAuth Proxy
A number of our customers use K10 for Kubernetes backup, DR, and application mobility with the OpenShift Kubernetes distribution from Red Hat.
In this ecosystem, it is extremely common to use OpenShift's OAuth proxy to authenticate users accessing applications deployed in the cluster, with the cluster itself configured to use an OpenID Connect provider such as Keycloak.
To cleanly support this workflow for OpenShift customers, we recently added support for accessing the K10 dashboard by authenticating using the OpenShift OAuth proxy.
Screenshots of the authentication flow involving the OAuth proxy
When the user navigates to the K10 dashboard, the request reaches the proxy. The proxy presents a login screen to the user.
After clicking the login button, the user is forwarded to the OpenShift login screen, which offers the kube:admin option and, if one has been configured in the cluster, an OIDC option.
After clicking on the OIDC option, Okta in this example, the OIDC provider's login screen is shown.
When authentication with the OIDC provider succeeds, the user is redirected to the K10 dashboard.
Deploying K10 with OAuth proxy
The full instructions for deploying K10 with the OAuth proxy are available in K10's documentation.
The following resources have to be deployed in the same namespace as K10 in order to set up the OAuth proxy.
Create a ServiceAccount that will be used by the OAuth proxy deployment
Create a ConfigMap annotated with the inject-cabundle OpenShift annotation. The annotation results in the injection of OpenShift's root CA into the ConfigMap. The name of this ConfigMap is used in the configuration of the OAuth proxy.
Deploy a Service for the OAuth proxy. This needs to be annotated with the serving-cert-secret-name annotation, which results in OpenShift generating a TLS private key and certificate that the OAuth proxy uses for secure connections. The name of the Secret used with the annotation must match the name used in the OAuth proxy Deployment.
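Taken together, the first three resources might look like the following sketch. The names (oauth-proxy-sa, oauth-proxy-ca, oauth-proxy-svc, oauth-proxy-tls) and the namespace are illustrative assumptions, not names prescribed by K10:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oauth-proxy-sa          # illustrative name, used by the proxy Deployment
  namespace: kasten-io          # the namespace K10 is installed in
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: oauth-proxy-ca          # illustrative name, referenced in the proxy configuration
  namespace: kasten-io
  annotations:
    # OpenShift injects the cluster's service CA bundle into this ConfigMap
    service.beta.openshift.io/inject-cabundle: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: oauth-proxy-svc
  namespace: kasten-io
  annotations:
    # OpenShift generates a TLS key/cert pair and stores it in this Secret;
    # the name must match the Secret mounted by the proxy Deployment
    service.beta.openshift.io/serving-cert-secret-name: oauth-proxy-tls
spec:
  selector:
    app: oauth-proxy            # must match the proxy Deployment's pod labels
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: https
      port: 8083
      targetPort: 8083
```

The port numbers 8080 and 8083 follow the forwarding described later in this article.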
Next, a Deployment for the OAuth proxy needs to be created. It is recommended that a separate OpenShift OAuth client be registered for this purpose. The name of that client and its Secret are used with the --client-id and --client-secret container arguments, respectively, in the Deployment spec; both values come from the OAuth client registration.
When an OpenShift ServiceAccount was used as the OAuth client, it was observed that the token generated by the proxy did not have sufficient scopes to operate K10. It is therefore not recommended to deploy the proxy using an OpenShift ServiceAccount as the OAuth client.
It is also important to configure the --pass-access-token option with the proxy so that it includes the OpenShift token in the X-Forwarded-Access-Token header when forwarding a request to K10.
The --scope configuration must have the user:full scope to ensure that the token generated by the proxy has sufficient scopes for operating K10.
The --upstream configuration must point to the K10 gateway Service.
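Pulling the flags from the steps above together, the proxy's Deployment might be sketched as follows. The container image, the upstream URL and port, and the client name are assumptions for illustration; use the values appropriate for your cluster and K10 install:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth-proxy
  namespace: kasten-io
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth-proxy
  template:
    metadata:
      labels:
        app: oauth-proxy
    spec:
      serviceAccountName: oauth-proxy-sa
      containers:
        - name: oauth-proxy
          # image is illustrative; pick the oauth-proxy image matching your cluster
          image: quay.io/openshift/origin-oauth-proxy:latest
          args:
            - --http-address=:8080
            - --https-address=:8083
            - --upstream=http://gateway.kasten-io.svc:8000  # K10 gateway Service (port assumed)
            - --client-id=k10-oauth-client                  # the registered OAuth client
            - --client-secret=<client-secret>               # from the OAuth client registration
            - --scope=user:full                             # required for operating K10
            - --pass-access-token                           # sets X-Forwarded-Access-Token
            - --cookie-secret=<random-value>
            - --tls-cert=/etc/tls/private/tls.crt
            - --tls-key=/etc/tls/private/tls.key
          volumeMounts:
            - mountPath: /etc/tls/private
              name: proxy-tls
      volumes:
        - name: proxy-tls
          secret:
            secretName: oauth-proxy-tls  # matches the serving-cert-secret-name annotation
```

The <client-secret> and <random-value> placeholders stand in for secrets you generate; they are deliberately left unfilled here.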
Traffic meant for K10 must be forwarded to the OAuth proxy for authentication before it reaches K10. Ensure that ingress traffic on port 80 is forwarded to port 8080, and traffic on port 443 to port 8083, of the oauth-proxy-svc Service.
Here is one example of how to forward traffic to the proxy. In this example K10 was deployed with an external gateway Service. The gateway Service's ports were modified to forward traffic like so:
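One possible shape of that modification is sketched below, assuming the external gateway Service is repointed at the proxy pods; the Service name and selector labels are assumptions and may differ in your install:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway-ext        # K10's external gateway Service; name may vary per install
  namespace: kasten-io
spec:
  type: LoadBalancer
  selector:
    app: oauth-proxy       # assumption: traffic is sent to the proxy pods, not directly to K10
  ports:
    - name: http
      port: 80
      targetPort: 8080     # the OAuth proxy's HTTP listener
    - name: https
      port: 443
      targetPort: 8083     # the OAuth proxy's HTTPS listener
```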
K10's Authentication Service executes a Kubernetes Token Review using the OpenShift token to verify that the token has been authenticated. If authenticated, the user is redirected to K10's dashboard.
K10 also performs a Kubernetes Subject Access Review to verify that the token is authorized to operate K10. Depending on the scopes of the token, the user may have varying levels of access to the dashboard.
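Conceptually, these two checks resemble the standard Kubernetes review objects sketched below. The token placeholder and the specific resource attributes are illustrative; the exact permissions K10 checks are not enumerated here:

```yaml
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: <token from the X-Forwarded-Access-Token header>
---
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: <username returned by the TokenReview>
  resourceAttributes:
    group: apps
    resource: deployments   # illustrative; the actual resources K10 verifies may differ
    verb: get
    namespace: kasten-io
```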
Users who are already familiar with Red Hat's OAuth proxy for authenticating access to applications in their clusters should find this article directly applicable. We hope you will be as delighted as our OpenShift customers, who now have K10 fully integrated into their OpenShift environment for their cloud-native data protection needs.
For OpenShift users who are new to the OpenShift OAuth proxy, we highly recommend learning more about the project using the information here, and then leveraging this article when deploying K10 with the OpenShift OAuth proxy.
Onkar Bhat is a member of the technical staff at Kasten and has been working on solving problems pertaining to data protection and disaster recovery in cloud native environments. His focus has been in the areas of authentication and authorization for multi-tenant and self-service data protection in Kubernetes. He previously worked as a Technical Lead in the SDN controller team at Big Switch Networks, which was acquired by Arista Networks in 2020. Prior to Big Switch, Onkar worked at NetApp on the SnapMirror team, backup and disaster recovery for on-prem storage, and the Altavault product, a cloud backup appliance. He has also worked on the Catalyst 6K team in the area of deep packet inspection at Cisco Systems. Onkar received his MS from Carnegie Mellon University.