Securing Kubernetes access using OIDC and Keycloak
Yesterday I finally implemented proper SSO for my Kubernetes clusters and since I noticed some bad patterns in other tutorials along the way, I decided to write my own. Hoping to make different mistakes.
Be aware that if you cannot modify your kube-apiserver configuration, or you run certain Kubernetes distributions, you won't be able to use this.
Those of you running OpenShift or OKD can just run `oc login --web` and you're done, provided you already configured SSO for your cluster.
Assumptions
Since the topic is big, there are some assumptions made:
- You already run Keycloak somewhere
- You already run your Kubernetes cluster
- You know some basic terms about OIDC
- The `oidc-login` plugin for `kubectl` is already installed
There are also some details that are a matter of personal taste, which you can do differently:
- I’ll use client roles as groups
- I’ll use a step-up flow with ACR
Known issues:
Why?
In the worst case, your `kubeconfig` currently looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s.example.com:6443
  name: k8s
contexts:
- context:
    cluster: k8s
    namespace: default
    user: admin@k8s
  name: admin@k8s
current-context: admin@k8s
kind: Config
preferences: {}
users:
- name: admin@k8s
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
You can get the above output using `kubectl config view --minify`.
The important bits are `client-certificate-data` and `client-key-data`: these are TLS client certificates and they are usually long-lived.
If these are compromised, the only way to revoke them is to re-key your entire cluster by changing the cluster CA, because Kubernetes doesn't understand CRLs. This is a lot of work, and if you run this in an organisation, it is something you would need to do every time a member who had access to the cluster leaves.
A better way to handle this is to use OIDC tokens. There are many reasons for that, but some are: centralised management, easy revocation and very limited lifetimes.
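If you want to see how long-lived your current credentials actually are, you can pull the client certificate out of your kubeconfig and inspect it. A quick sketch, assuming the `admin@k8s` user name from the example above:

# Extract the client certificate of the admin@k8s user and print its subject and expiry date
kubectl config view --raw -o jsonpath='{.users[?(@.name=="admin@k8s")].user.client-certificate-data}' \
  | base64 -d \
  | openssl x509 -noout -subject -enddate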
Setting up Keycloak client
The first thing to do is to create a Keycloak public client. Since we run `kubectl oidc-login` on people's machines, we don't want to use a confidential client that would require a shared client secret.
- Generate a UUID as client name (or use one of your choosing) and select a useful display name
- In the capability configuration, disable direct access grants; you don't want anyone but your IdP to handle usernames and passwords
- (Optionally) Enable "OAuth 2.0 Device Authorization Grant" if you want to use a `kubeconfig` on headless devices with no browser or from inside a container; be aware that this is incompatible with step-up authentication
- Enter `http://localhost:8000` and `http://localhost:18000` as Valid redirect URIs
- Go to the Advanced settings tab of your client
- Configure the signature algorithm for your ID and Access tokens to `RS256`
- Adjust the Access and Client Session lifespans on the client to your preference. This controls how often users have to re-authenticate due to expired tokens. It is independent from your step-up authentication timeout
- (optional) Provide ACR mapping for step-up authentication and configure your step-up browser flow
- Go to the "Roles" tab and create a client role, e.g. `cluster-admin`
- Go to the "Client scopes" tab and enter the client's dedicated scope
- Switch to "Scope" and disable "Full scope allowed" before switching back to the "Mappers" tab
- Create a new "User Client Role" mapper
- Set the Token Claim Name to `groups` and the Client ID to your current client
- Create a new "Audience" mapper
- Use "audience" as the name, select your client ID as the "Included Client Audience" and enable the checkbox at "Add to ID token"
- (optional) After adding the client role to your user, you can check the "Evaluation" tab for the exact structure of the various tokens
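For reference, with the mappers above the relevant part of the decoded ID token ends up looking roughly like the sketch below. This is illustrative and heavily trimmed: the `sub` is your Keycloak user ID, and the `acr` value depends on your step-up flow.

{
  "aud": "6b23faf3-2311-4f70-b496-85365e3430a3",
  "sub": "5c24ad91-…",
  "acr": "reauth",
  "groups": [
    "cluster-admin"
  ]
}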
Keycloak export
{
"clientId": "6b23faf3-2311-4f70-b496-85365e3430a3",
"name": "k8s",
"description": "",
"rootUrl": "",
"adminUrl": "",
"baseUrl": "",
"surrogateAuthRequired": false,
"enabled": true,
"alwaysDisplayInConsole": false,
"clientAuthenticatorType": "client-secret",
"redirectUris": [
"http://localhost:18000",
"http://localhost:8000"
],
"webOrigins": [
"http://localhost:18000",
"http://localhost:8000"
],
"notBefore": 0,
"bearerOnly": false,
"consentRequired": false,
"standardFlowEnabled": true,
"implicitFlowEnabled": false,
"directAccessGrantsEnabled": false,
"serviceAccountsEnabled": false,
"publicClient": true,
"frontchannelLogout": true,
"protocol": "openid-connect",
"attributes": {
"realm_client": "false",
"oidc.ciba.grant.enabled": "false",
"backchannel.logout.session.required": "true",
"oauth2.device.authorization.grant.enabled": "false",
"backchannel.logout.revoke.offline.tokens": "false",
"login_theme": "",
"display.on.consent.screen": "false",
"consent.screen.text": "",
"frontchannel.logout.url": "",
"backchannel.logout.url": "",
"logoUri": "",
"policyUri": "",
"tosUri": "",
"access.token.signed.response.alg": "RS256",
"id.token.signed.response.alg": "RS256",
"id.token.encrypted.response.alg": "",
"id.token.encrypted.response.enc": "",
"user.info.response.signature.alg": "",
"user.info.encrypted.response.alg": "",
"user.info.encrypted.response.enc": "",
"request.object.signature.alg": "",
"request.object.encryption.alg": "",
"request.object.encryption.enc": "",
"request.object.required": "",
"authorization.signed.response.alg": "",
"authorization.encrypted.response.alg": "",
"authorization.encrypted.response.enc": "",
"exclude.session.state.from.auth.response": "",
"exclude.issuer.from.auth.response": "",
"use.refresh.tokens": "true",
"client_credentials.use_refresh_token": "false",
"token.response.type.bearer.lower-case": "false",
"access.token.lifespan": "",
"client.session.idle.timeout": "",
"client.session.max.lifespan": "",
"client.offline.session.idle.timeout": "",
"client.offline.session.max.lifespan": "",
"tls.client.certificate.bound.access.tokens": "false",
"pkce.code.challenge.method": "",
"require.pushed.authorization.requests": "false",
"client.use.lightweight.access.token.enabled": "false",
"client.introspection.response.allow.jwt.claim.enabled": "false",
},
"fullScopeAllowed": false,
"nodeReRegistrationTimeout": -1,
"protocolMappers": [
{
"name": "groups",
"protocol": "openid-connect",
"protocolMapper": "oidc-usermodel-client-role-mapper",
"consentRequired": false,
"config": {
"introspection.token.claim": "true",
"multivalued": "true",
"userinfo.token.claim": "true",
"id.token.claim": "true",
"lightweight.claim": "false",
"access.token.claim": "true",
"claim.name": "groups",
"jsonType.label": "String",
"usermodel.clientRoleMapping.clientId": "6b23faf3-2311-4f70-b496-85365e3430a3"
}
},
{
"name": "audience",
"protocol": "openid-connect",
"protocolMapper": "oidc-audience-mapper",
"consentRequired": false,
"config": {
"included.client.audience": "6b23faf3-2311-4f70-b496-85365e3430a3",
"id.token.claim": "true",
"lightweight.claim": "false",
"access.token.claim": "true",
"introspection.token.claim": "true"
}
}
],
"defaultClientScopes": [
"acr",
"basic"
],
"optionalClientScopes": [],
"access": {
"view": true,
"configure": true,
"manage": true
},
"authorizationServicesEnabled": false
}
Setting up kubectl
With the IdP side done, we can move on to the client, in this case `kubectl` with `oidc-login`. This can easily be done by running the following command:
kubectl oidc-login setup --oidc-issuer-url=https://keycloak.example.com/realms/example-realm --oidc-client-id=6b23faf3-2311-4f70-b496-85365e3430a3 --oidc-auth-request-extra-params="acr_values=reauth"
Replace `6b23faf3-2311-4f70-b496-85365e3430a3` with your client ID, `keycloak.example.com` with your Keycloak hostname and `example-realm` with your Keycloak realm name. You can drop `--oidc-auth-request-extra-params="acr_values=reauth"` if you skipped the step-up authentication parts.
This will test the authentication from `kubectl` against Keycloak and should end with a message saying that the login was successful.
It'll also show you an example of your token and some instructions on how to grant your specific user `cluster-admin` rights by creating a `ClusterRoleBinding`.
You will also see a `kubectl config` command that sets up the `oidc` user, but if you use step-up authentication, you might need to add the `--oidc-auth-request-extra-params="acr_values=reauth"` parameter. You should also add `--oidc-use-pkce`, since it's recommended for public clients:
kubectl config set-credentials oidc --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-command=kubectl --exec-arg=oidc-login --exec-arg=get-token --exec-arg=--oidc-issuer-url=https://keycloak.example.com/realms/example-realm --exec-arg=--oidc-client-id=6b23faf3-2311-4f70-b496-85365e3430a3 --exec-arg=--oidc-auth-request-extra-params="acr_values=reauth" --exec-arg=--oidc-use-pkce
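With the `oidc` user in place, you can point a context at it. A small sketch, assuming the cluster entry is called `k8s` as in the examples here; the context name `k8s-oidc` is arbitrary:

# Create a context that combines the existing cluster entry with the new oidc user
kubectl config set-context k8s-oidc --cluster=k8s --user=oidc
# Optionally make it the default context
kubectl config use-context k8s-oidc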
Advanced example `kubeconfig`
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s.example.com:6443
  name: k8s
contexts:
- context:
    cluster: k8s
    user: oidc
  name: k8s
current-context: k8s
kind: Config
preferences: {}
users:
- name: oidc
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - get-token
      - --oidc-issuer-url=https://keycloak.example.com/realms/example-realm
      - --oidc-client-id=6b23faf3-2311-4f70-b496-85365e3430a3
      - --oidc-auth-request-extra-params="acr_values=reauth"
      - --oidc-use-pkce
      command: kubectl-oidc_login
      env: null
      installHint: |
        # Krew (macOS, Linux, Windows and ARM)
        kubectl krew install oidc-login
        # Homebrew (macOS and Linux)
        brew install int128/kubelogin/kubelogin
        # Chocolatey (Windows)
        choco install kubelogin
      interactiveMode: IfAvailable
      provideClusterInfo: false
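Before touching the API server you can sanity-check the exec plugin on its own: it should open the browser, complete the login and print an ExecCredential JSON containing the token. A sketch using the same parameters as above:

# Run the credential plugin by hand and inspect the resulting ExecCredential
kubectl oidc-login get-token \
  --oidc-issuer-url=https://keycloak.example.com/realms/example-realm \
  --oidc-client-id=6b23faf3-2311-4f70-b496-85365e3430a3 \
  --oidc-use-pkce \
  --oidc-auth-request-extra-params="acr_values=reauth"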
Configuring the kube-apiserver
This will depend on your Kubernetes distribution, but I'll provide the `kube-apiserver` parameters to set, and you can figure out on your own how to configure them in your distribution.
--oidc-issuer-url=https://keycloak.example.com/realms/example-realm
--oidc-client-id=6b23faf3-2311-4f70-b496-85365e3430a3
--oidc-groups-claim=groups
--oidc-groups-prefix='idp-groups:'
--oidc-username-prefix='idp:'
--oidc-required-claim='acr=reauth'
Replace the value of `--oidc-issuer-url` with your issuer URL from above. Also set the client ID correctly, and if you care about step-up authentication, make sure you add the `--oidc-required-claim` parameter. If you didn't set it up, leave it out.
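As an illustration, on a kubeadm-based cluster these flags could live in the ClusterConfiguration's `apiServer.extraArgs`. This is only a sketch; other distributions have their own mechanisms, so check their documentation:

# kubeadm ClusterConfiguration snippet (apiVersion v1beta3); how you roll this out depends on your setup
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://keycloak.example.com/realms/example-realm"
    oidc-client-id: "6b23faf3-2311-4f70-b496-85365e3430a3"
    oidc-groups-claim: "groups"
    oidc-groups-prefix: "idp-groups:"
    oidc-username-prefix: "idp:"
    oidc-required-claim: "acr=reauth"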
The `--oidc-username-prefix` and, even more important, `--oidc-groups-prefix` prevent IdP usernames and groups from impersonating existing groups or service accounts in Kubernetes. This ensures that no one can grant themselves `system:masters` even if they compromise your IdP and bypass all your admission controls: a groups claim containing `system:masters` would arrive in Kubernetes as `idp-groups:system:masters`, which carries no special privileges.
There are still ways to do that if you grant someone cluster-admin, but at least it requires extra effort.
Once this configuration is rolled out, using the `oidc` user should just work:
kubectl auth whoami
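With the prefixes above, the reported identity should look roughly like the sketch below. Note that without `--oidc-username-claim` the username defaults to the `sub` claim, which in Keycloak is a UUID; the values shown here are placeholders and the exact table layout depends on your kubectl version.

ATTRIBUTE   VALUE
Username    idp:5c24ad91-…
Groups      [idp-groups:cluster-admin system:authenticated]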
Granting cluster-admin
Since a client role `cluster-admin` was created during client creation and then mapped into the `groups` claim, we can map it as a Kubernetes group and authorise it to be cluster-admin:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: idp-groups:cluster-admin
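Apply this with your existing certificate-based admin user, since the OIDC user has no permissions yet. A sketch, assuming you saved the manifest as oidc-cluster-admin.yaml and your admin user is named `admin@k8s` as in the first kubeconfig:

# Create the binding using the still-working certificate credentials
kubectl apply -f oidc-cluster-admin.yaml --user admin@k8s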
Now, running `kubectl get nodes --user oidc` should actually display a list of nodes.
Closing thoughts
Setting up `kubectl` to use SSO is not that complicated; the majority of the work is configuring the IdP correctly. Since there is no client secret involved, the `kubeconfig` is free of any secrets and can easily be shared with other people.
A lot of `oidc-login` examples, including the official one, use a client secret. Since `kubectl` is a client running on someone's machine, I personally think it shouldn't have a client secret, following OIDC best practice for public clients.
This setup is quite convenient to use, given the security properties it provides. Depending on where your Keycloak instance(s) run, you still want to have an emergency access account that can be used independently of your IdP for cluster recovery, but this setup reduces the exposure of sensitive credentials drastically.
I'm looking forward to seeing the remaining issues with this solution fixed, while enjoying using it in the meantime.