
Cluster management

Manage your Kubernetes clusters via Greenhouse.

Greenhouse enables organizations to register their Kubernetes clusters within the platform, providing a centralized interface for managing and monitoring these clusters.
Once registered, users can perform tasks related to cluster management, such as deploying applications, scaling resources, and configuring access control, all within the Greenhouse platform.

This section provides guides for the management of Kubernetes clusters within Greenhouse.

1 - Cluster onboarding

Onboard an existing Kubernetes cluster to Greenhouse.

Content Overview

This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse organization.
If you don’t have an organization yet please reach out to the Greenhouse administrators.

While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.

NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.

Preparation

Download the latest greenhousectl binary from here.

Onboarding a cluster to Greenhouse requires you to authenticate against two different Kubernetes clusters via their respective kubeconfig files:

  • greenhouse: The cluster your Greenhouse installation is running on. You need organization-admin or cluster-admin privileges.
  • bootstrap: The cluster you want to onboard. You need system:masters privileges.

For consistency we will refer to those two clusters by their names from now on.

You need to have the kubeconfig files for both the greenhouse and the bootstrap cluster at hand. The kubeconfig file for the greenhouse cluster can be downloaded via the Greenhouse dashboard:

Organization > Clusters > Access Greenhouse cluster.

Onboard

For accessing the bootstrap cluster, greenhousectl expects your default Kubernetes kubeconfig file and context to point to the bootstrap cluster. This can be achieved by passing the --kubeconfig flag or by setting the KUBECONFIG environment variable.
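
For example, instead of passing the flag you can point your shell at the bootstrap kubeconfig via the environment (the path is a placeholder):

export KUBECONFIG=<path/to/bootstrap-kubeconfig-file>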

The location of the kubeconfig file to the greenhouse cluster is passed via the --greenhouse-kubeconfig flag.

greenhousectl cluster bootstrap --kubeconfig=<path/to/bootstrap-kubeconfig-file> --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>

Since Greenhouse generates URLs which contain the cluster name, we highly recommend choosing a short cluster name. For Gardener clusters in particular, setting a short name is mandatory, because Gardener generates very long cluster names, e.g. garden-greenhouse--monitoring-external.

A typical output when you run the command looks like

2024-02-01T09:34:55.522+0100	INFO	setup	Loaded kubeconfig	{"context": "default", "host": "https://api.greenhouse-qa.eu-nl-1.cloud.sap"}
2024-02-01T09:34:55.523+0100	INFO	setup	Loaded client kubeconfig	{"host": "https://api.monitoring.greenhouse.shoot.canary.k8s-hana.ondemand.com"}
2024-02-01T09:34:56.579+0100	INFO	setup	Bootstraping cluster	{"clusterName": "monitoring", "orgName": "ccloud"}
2024-02-01T09:34:56.639+0100	INFO	setup	created namespace	{"name": "ccloud"}
2024-02-01T09:34:56.696+0100	INFO	setup	created serviceAccount	{"name": "greenhouse"}
2024-02-01T09:34:56.810+0100	INFO	setup	created clusterRoleBinding	{"name": "greenhouse"}
2024-02-01T09:34:57.189+0100	INFO	setup	created clusterSecret	{"name": "monitoring"}
2024-02-01T09:34:58.309+0100	INFO	setup	Bootstraping cluster finished	{"clusterName": "monitoring", "orgName": "ccloud"}

After onboarding

  1. List all clusters in your Greenhouse organization:
   kubectl --namespace=<greenhouse-organization-name> get clusters
  2. Show the details of a cluster:
   kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o yaml

Example:

apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
  creationTimestamp: "2024-02-07T10:23:23Z"
  finalizers:
  - greenhouse.sap/cleanup
  generation: 1
  name: monitoring
  namespace: ccloud
  resourceVersion: "282792586"
  uid: 0db6e464-ec36-459e-8a05-4ad668b57f42
spec:
  accessMode: direct
  maxTokenValidity: 72h
status:
  bearerTokenExpirationTimestamp: "2024-02-09T06:28:57Z"
  kubernetesVersion: v1.27.8
  statusConditions:
    conditions:
    - lastTransitionTime: "2024-02-09T06:28:57Z"
      status: "True"
      type: Ready

When the status.kubernetesVersion field shows the correct version of the Kubernetes cluster, the cluster was successfully bootstrapped in Greenhouse. In that case, status.statusConditions will contain a condition with type=Ready and status="True".

In the remote cluster, a new namespace is created and contains some resources managed by Greenhouse. The namespace has the same name as your organization in Greenhouse.
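
You can verify this from the bootstrap cluster, for example:

kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> get namespace <greenhouse-organization-name>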

Troubleshooting

If the bootstrapping failed, you can find details about why it failed in the Cluster.status.statusConditions. More precisely, there will be a condition of type=KubeConfigValid which might have hints in the message field. This is also displayed in the UI on the Cluster details view. Rerunning the onboarding command with an updated kubeconfig file will fix these issues.
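
For example, the following reads just that condition (field paths taken from the Cluster example above):

kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o jsonpath='{.status.statusConditions.conditions[?(@.type=="KubeConfigValid")].message}'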

2 - Remote Cluster Connectivity with OIDC

Onboard an existing Kubernetes cluster to Greenhouse, with OIDC configuration.

Content Overview

This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse Organization with OIDC configuration. If you don’t have a Greenhouse Organization yet, please reach out to the Greenhouse administrators.

While all members of an Organization can see existing Clusters, their management requires org-admin or cluster-admin privileges.

NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.

OIDC Overview

Starting from Kubernetes v1.21, the Service Account Issuer Discovery feature turns the Kubernetes API server into an OIDC identity provider. Service account tokens issued to pods can then be verified by services outside the Kubernetes cluster, establishing an authentication pathway between a pod inside the cluster and external services, including those on Azure, AWS, etc.

Starting from Kubernetes v1.30, Structured Authentication Configuration moved to beta and the feature gate is enabled by default. This feature allows configuring multiple OIDC issuers and passing them as a configuration file to the Kubernetes API server.

More information on Structured Authentication Configuration can be found at https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration

With the combination of Service Account Issuer Discovery and Structured Authentication Configuration, Cluster to Cluster trust can be established.

A remote cluster can add the Greenhouse cluster’s Service Account Issuer as an OIDC issuer in its Structured Authentication Configuration. This allows the Greenhouse cluster to authenticate against said remote cluster, using an in-cluster service account token.
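
If you have access to the Greenhouse cluster, you can look up its Service Account Issuer URL via the discovery endpoint (reading the raw path requires appropriate RBAC; jq is only used for readability):

$ kubectl get --raw /.well-known/openid-configuration | jq -r .issuer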

The OIDC Remote Cluster Connectivity is illustrated below -

sequenceDiagram
    autonumber
    participant User as User / Automation
    participant RC as Remote Cluster
    participant AC as Admin Cluster (Greenhouse)
    User->>RC: Creates Structured Auth with Admin-Cluster Service Account Issuer URL
    User->>RC: Applies ClusterRoleBinding for Cluster-Admin<br>(Pattern: prefix:system:serviceaccount:org-name:cluster-name)
    User->>AC: Applies Kubernetes Secret with OIDC parameters<br>(Namespace: Organization's Namespace)
    AC-->>AC: Bootstrap controller creates ServiceAccount<br>(Sets OIDC Secret as owner on SA)
    AC-->>AC: Bootstrap controller requests Token from ServiceAccount
    AC-->>AC: Bootstrap controller writes/updates KubeConfig in OIDC Secret<br>(Key: greenhouseKubeconfig)
    AC-->>AC: Bootstrap controller creates Cluster CR<br>(Sets Cluster as owner on OIDC Secret)
    AC-->>AC: Cluster controller fetches KubeConfig from Secret
    AC->>RC: Cluster controller requests Kubernetes Version & Node Status
    RC-->>AC: 🔍 Introspects Incoming Token<br>(Introspection towards Admin-Cluster Service Account Issuer URL)
    RC-->>RC: 🔒 Verifies Authorization via RBAC
    RC->>AC: ✅ Responds with Requested Resources or ❌ Authentication/Authorization Failure
    AC-->>AC: ⏰ Periodic rotation of Kubeconfig in OIDC Secret<br>(key: greenhouseKubeconfig)

Preparation

The Greenhouse cluster should expose the /.well-known/openid-configuration over an unauthenticated endpoint to allow remote clusters to fetch the OIDC configuration.

Some cloud providers or managed Kubernetes services might not expose the Service Account Issuer Discovery as an unauthenticated endpoint. In such cases, you can serve this configuration from a different endpoint and set this as the discoveryURL in structured authentication configuration.

Check out https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration for more information.
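
To verify that your Greenhouse cluster serves this configuration without authentication, a plain HTTPS request should succeed (the host below is a placeholder):

$ curl -s https://<greenhouse-api-server>/.well-known/openid-configuration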

Configure the OIDC issuer in the Structured Authentication Configuration of the remote cluster.

Example Structured Authentication Configuration file

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://<greenhouse-service-account-issuer>
    audiences:
    - greenhouse # audience should be greenhouse
  claimMappings:
    username:
      claim: 'sub' # claim to be used as username
      prefix: 'greenhouse:' # prefix to be added to the username to prevent impersonation (can be any string of your choice)
# additional trusted issuers
# - issuer:
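
How this file reaches the API server depends on how the remote cluster is managed. On a self-managed (e.g. kubeadm-style) control plane, it is referenced via the --authentication-config flag; roughly like this, where the file path is only an example:

kube-apiserver ... --authentication-config=/etc/kubernetes/authentication-config.yaml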

Add RBAC rules to the remote cluster, authorizing Greenhouse to manage Kubernetes resources.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: greenhouse-<cluster-name>-oidc-access
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: greenhouse:system:serviceaccount:<your-organization-namespace>:<cluster-name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin

The subject kind User name must follow the pattern of <prefix>:system:serviceaccount:<your-organization-namespace>:<cluster-name>.

<prefix> is the prefix used in the Structured Authentication Configuration file for the username claim mapping.

For convenience, the prefix is set to greenhouse: in the example Structured Authentication Configuration, but it can be any string identifier of your choice.

If you use '-' in the prefix, for example identifier-, then the subject name should be identifier-system:serviceaccount:<your-organization-namespace>:<cluster-name>.
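
To sanity-check the ClusterRoleBinding on the remote cluster, you can impersonate the mapped user, assuming your own account has impersonation privileges (names below are placeholders):

$ kubectl auth can-i get nodes --as="greenhouse:system:serviceaccount:<your-organization-namespace>:<cluster-name>"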

Onboard

You can now onboard the remote Cluster to your Greenhouse Organization by applying a Secret in the following format:

apiVersion: v1
kind: Secret
metadata:
  annotations:
    "oidc.greenhouse.sap/api-server-url": "https://<remote-cluster-api-server-url>"
  name: <cluster-name> # ensure the name provided here is the same as the <cluster-name> in the ClusterRoleBinding
  namespace: <organization-namespace>
data:
  ca.crt: <double-encoded-ca.crt> # remote cluster CA certificate base64 encoded
type: greenhouse.sap/oidc # secret type

Mandatory fields:

  • the annotation oidc.greenhouse.sap/api-server-url must have a valid URL pointing to the remote cluster’s API server
  • the ca.crt field must contain the remote cluster’s CA certificate
  • the type of the Secret must be greenhouse.sap/oidc
  • the name of the secret must equal the <cluster-name> used in the ClusterRoleBinding Subject

ca.crt is the certificate-authority-data from the kubeconfig file of the remote cluster.

The certificate-authority-data can be extracted from the ConfigMap kube-root-ca.crt. This ConfigMap is present in every Namespace.

If the certificate is extracted from kube-root-ca.crt then it should be base64 encoded twice before adding it to the secret.

Example:

$ kubectl get configmap kube-root-ca.crt -n kube-system -o jsonpath='{.data.ca\.crt}' | base64 | base64

If the certificate is extracted from the KubeConfig file then the certificate is already base64 encoded, so the encoding is needed only once.
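
For example, the following reads the field from your current kubeconfig and applies the one additional encoding; the index 0 is an assumption, adjust it to the cluster entry in your file:

$ kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64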

Apply the Secret to the Organization Namespace to onboard the remote cluster.

$ kubectl apply -f <oidc-secret-file>.yaml
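
Afterwards, you can check that the bootstrap controller picked up the Secret and created the corresponding Cluster resource:

$ kubectl get cluster <cluster-name> --namespace=<organization-namespace>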

Troubleshooting

If the bootstrapping failed, you can find details about why it failed in the Cluster.status.statusConditions. More precisely, there will be conditions of type=KubeConfigValid and type=Ready which contain more information in the message field. This is also displayed in the UI on the Cluster details view.

If there is any error message regarding RBAC, check the ClusterRoleBinding and ensure the subject name is correct.

If there is an authentication error, you might see a message similar to the server has asked for the client to provide credentials. In such cases, verify the Structured Authentication Configuration and ensure the issuer and audiences are correct.

The API Server logs in the remote cluster will provide more information on the authentication errors.
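
How to access these logs depends on the cluster setup; on a kubeadm-style cluster, where the API server runs as a static pod, something like the following typically works:

$ kubectl logs --namespace=kube-system -l component=kube-apiserver --tail=100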

3 - Cluster offboarding

Offboarding an existing Kubernetes cluster in Greenhouse.

Content Overview

This guide describes how to offboard an existing Kubernetes cluster from your Greenhouse organization.

While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.

NOTE: The UI is currently in development. For now this guide describes the offboarding workflow via command line.

Pre-requisites

Offboarding a Cluster in Greenhouse requires authenticating to the greenhouse cluster via kubeconfig file:

  • greenhouse: The cluster your Greenhouse installation is running on.
  • organization-admin or cluster-admin privileges are needed for deleting a Cluster resource.

Schedule Deletion

By default, Cluster resource deletion is blocked by a ValidatingWebhookConfiguration in Greenhouse. This is done to prevent accidental deletion of cluster resources.

List the clusters in your Greenhouse organization:

kubectl --namespace=<greenhouse-organization-name> get clusters

A typical output when you run the command looks like

NAME          AGE    ACCESSMODE   READY
mycluster-1   15d    direct       True
mycluster-2   35d    direct       True
mycluster-3   108d   direct       True

Delete a Cluster resource by annotating it with greenhouse.sap/delete-cluster: "true".

Example:

kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster=true --namespace=my-org

Once the Cluster resource is annotated, the Cluster will be scheduled for deletion in 48 hours (UTC time). This is reflected in the Cluster resource annotations and in the status conditions.
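
For example, the schedule can be read directly from the annotation (the dots in the annotation key need to be escaped in the jsonpath expression):

kubectl get cluster mycluster-1 --namespace=my-org -o jsonpath='{.metadata.annotations.greenhouse\.sap/deletion-schedule}'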

View the deletion schedule by inspecting the Cluster resource:

kubectl get cluster mycluster-1 --namespace=my-org -o yaml

A typical output when you run the command looks like

apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
  annotations:
    greenhouse.sap/delete-cluster: "true"
    greenhouse.sap/deletion-schedule: "2025-01-17 11:16:40"
  finalizers:
  - greenhouse.sap/cleanup
  name: mycluster-1
  namespace: my-org
spec:
  accessMode: direct
  kubeConfig:
    maxTokenValidity: 72
status:
  ...
  statusConditions:
    conditions:
    ...
    - lastTransitionTime: "2025-01-15T11:16:40Z"
      message: deletion scheduled at 2025-01-17 11:16:40
      reason: ScheduledDeletion
      status: "False"
      type: Delete

In order to cancel the deletion, you can remove the greenhouse.sap/delete-cluster annotation:

kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster- --namespace=my-org

The - at the end of the annotation name is used to remove the annotation.

Impact

When a Cluster resource is scheduled for deletion, all Plugin resources associated with the Cluster resource will skip the reconciliation process.

When the deletion schedule is reached, the Cluster resource and all associated Plugin resources will be deleted.

Immediate Deletion

In order to delete a Cluster resource immediately -

  1. annotate the Cluster resource with greenhouse.sap/delete-cluster. (see Schedule Deletion)
  2. update the greenhouse.sap/deletion-schedule annotation to the current date and time.

You can also annotate the Cluster resource with greenhouse.sap/delete-cluster and greenhouse.sap/deletion-schedule at the same time and set the current date and time for deletion.

The date and time should be given in YYYY-MM-DD HH:MM:SS format (golang's time.DateTime layout) and in the UTC timezone.
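
For example, both annotations can be set in one command using the current UTC time; --overwrite covers the case where greenhouse.sap/delete-cluster was already set earlier:

kubectl annotate cluster mycluster-1 --namespace=my-org --overwrite greenhouse.sap/delete-cluster=true "greenhouse.sap/deletion-schedule=$(date -u '+%Y-%m-%d %H:%M:%S')"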

Troubleshooting

If the cluster deletion has failed, you can troubleshoot the issue by inspecting -

  1. the Cluster resource status conditions, specifically the KubeConfigValid condition (see the example below).
  2. the status conditions of the Plugin resources associated with the Cluster resource. There will be a clear indication of the issue in the HelmReconcileFailed condition.
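
A convenient starting point is kubectl describe, which renders the status conditions of a custom resource in readable form:

kubectl describe cluster mycluster-1 --namespace=my-org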