Greenhouse enables organizations to register their Kubernetes clusters within the platform, providing a centralized interface for managing and monitoring these clusters. Once registered, users can perform tasks related to cluster management, such as deploying applications, scaling resources, and configuring access control, all within the Greenhouse platform.
This section provides guides for the management of Kubernetes clusters within Greenhouse.
1 - Cluster onboarding
Onboard an existing Kubernetes cluster to Greenhouse.
This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse organization. If you don’t have an organization yet, please reach out to the Greenhouse administrators.
NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.
Preparation
Download the latest greenhousectl binary from here.
Onboarding a Cluster to Greenhouse will require you to authenticate to two different Kubernetes clusters via respective kubeconfig files:
greenhouse: The cluster your Greenhouse installation is running on. You need organization-admin or cluster-admin privileges.
bootstrap: The cluster you want to onboard. You need system:masters privileges.
For consistency we will refer to those two clusters by their names from now on.
You need to have the kubeconfig files for both the greenhouse and the bootstrap cluster at hand. The kubeconfig file for the greenhouse cluster can be downloaded via the Greenhouse dashboard.
For accessing the bootstrap cluster, the greenhousectl will expect your default Kubernetes kubeconfig file and context to be set to bootstrap. This can be achieved by passing the --kubeconfig flag or by setting the KUBECONFIG env var.
The location of the kubeconfig file to the greenhouse cluster is passed via the --greenhouse-kubeconfig flag.
Since Greenhouse generates URLs that contain the cluster name, we highly recommend choosing a short cluster name.
For Gardener clusters in particular, setting a short name is mandatory, because Gardener cluster names are very long, e.g. garden-greenhouse--monitoring-external.
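With both kubeconfig files at hand, run the bootstrap command of greenhousectl. The invocation below is a sketch: the --kubeconfig and --greenhouse-kubeconfig flags are described above, while the subcommand name and the flags for the organization and cluster name are assumptions and may differ in your greenhousectl version (check greenhousectl --help).
greenhousectl cluster bootstrap \
  --kubeconfig=<path/to/bootstrap-kubeconfig-file> \
  --greenhouse-kubeconfig=<path/to/greenhouse-kubeconfig-file> \
  --org=<greenhouse-organization-name> \
  --cluster-name=<short-cluster-name>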
A typical output when you run the command looks like
2024-02-01T09:34:55.522+0100 INFO setup Loaded kubeconfig {"context": "default", "host": "https://api.greenhouse-qa.eu-nl-1.cloud.sap"}
2024-02-01T09:34:55.523+0100 INFO setup Loaded client kubeconfig {"host": "https://api.monitoring.greenhouse.shoot.canary.k8s-hana.ondemand.com"}
2024-02-01T09:34:56.579+0100 INFO setup Bootstraping cluster {"clusterName": "monitoring", "orgName": "ccloud"}
2024-02-01T09:34:56.639+0100 INFO setup created namespace {"name": "ccloud"}
2024-02-01T09:34:56.696+0100 INFO setup created serviceAccount {"name": "greenhouse"}
2024-02-01T09:34:56.810+0100 INFO setup created clusterRoleBinding {"name": "greenhouse"}
2024-02-01T09:34:57.189+0100 INFO setup created clusterSecret {"name": "monitoring"}
2024-02-01T09:34:58.309+0100 INFO setup Bootstraping cluster finished {"clusterName": "monitoring", "orgName": "ccloud"}
After onboarding
List all clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
Show the details of a cluster:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o yaml
When the status.kubernetesVersion field shows the correct version of the Kubernetes cluster, the cluster was successfully bootstrapped in Greenhouse.
Then status.statusConditions will contain a Condition with type=Ready and status="True".
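A trimmed status section of a successfully bootstrapped cluster might look roughly like the following sketch; the values are illustrative and the exact nesting of the condition list may differ between Greenhouse versions.
status:
  kubernetesVersion: v1.27.8
  statusConditions:
    conditions:
      - type: KubeConfigValid
        status: "True"
      - type: Ready
        status: "True"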
In the remote cluster, a new namespace is created and contains some resources managed by Greenhouse.
The namespace has the same name as your organization in Greenhouse.
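You can verify this on the bootstrap cluster, for example:
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> get namespace <greenhouse-organization-name>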
Troubleshooting
If the bootstrapping failed, you can find details about why it failed in the Cluster.status.statusConditions. More precisely, there will be a condition of type=KubeConfigValid which might have hints in the message field. This is also displayed in the UI on the Cluster details view.
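To inspect the condition list from the command line, you can print it directly; the field path below follows the Cluster.status.statusConditions naming used above and may need adjusting for your Greenhouse version:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o jsonpath='{.status.statusConditions}'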
Rerunning the onboarding command with an updated kubeconfig file will fix these issues.
2 - Remote Cluster Connectivity with OIDC
Onboard an existing Kubernetes cluster to Greenhouse, with OIDC configuration.
This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse Organization with OIDC
configuration.
If you don’t have a Greenhouse Organization yet, please reach out to the Greenhouse administrators.
NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.
OIDC Overview
Starting from Kubernetes v1.21, the Service Account Issuer Discovery feature
turns the Kubernetes API server into an OIDC identity provider. The API server issues service account tokens
to pods that can be verified by services outside the Kubernetes cluster, establishing an authentication pathway
between a pod within the cluster and external services, including those on Azure, AWS, etc.
Starting from Kubernetes v1.30, Structured Authentication Configuration
moved to beta and the feature gate is enabled by default. This feature allows configuring multiple OIDC issuers and
passing them as a configuration file to the Kubernetes API server.
With the combination of Service Account Issuer Discovery and Structured Authentication Configuration, cluster-to-cluster
trust can be established.
A remote cluster can add the Greenhouse cluster’s Service Account Issuer as an
OIDC issuer in its Structured Authentication Configuration. This allows the Greenhouse cluster to authenticate
against said remote cluster, using an in-cluster service account token.
The OIDC Remote Cluster Connectivity is illustrated below -
sequenceDiagram
autonumber
participant User as User / Automation
participant RC as Remote Cluster
participant AC as Admin Cluster (Greenhouse)
User->>RC: Creates Structured Auth with Admin-Cluster Service Account Issuer URL
User->>RC: Applies ClusterRoleBinding for Cluster-Admin (Pattern: `prefix:system:serviceaccount:org-name:cluster-name`)
User->>AC: Applies Kubernetes Secret with OIDC parameters (Namespace: Organization's Namespace)
AC-->>AC: `Bootstrap controller creates ServiceAccount (Sets OIDC Secret as owner on SA)`
AC-->>AC: Bootstrap controller requests Token from ServiceAccount
AC-->>AC: Bootstrap controller writes/updates KubeConfig in OIDC Secret (Key: greenhouseKubeconfig)
AC-->>AC: Bootstrap controller creates Cluster CR (Sets Cluster as owner on OIDC Secret)
AC-->>AC: Cluster controller fetches KubeConfig from Secret
AC->>RC: Cluster controller requests Kubernetes Version & Node Status
RC-->>AC: 🔍 Introspects Incoming Token (Introspection towards Admin-Cluster Service Account Issuer URL)
RC-->>RC: 🔒 Verifies Authorization via RBAC
RC->>AC: ✅ Responds with Requested Resources or ❌ Authentication/Authorization Failure
AC-->>AC: ⏰ Periodic rotation of Kubeconfig in OIDC Secret (key: greenhouseKubeconfig)
Preparation
The Greenhouse cluster should expose the /.well-known/openid-configuration over an unauthenticated endpoint to allow
remote clusters to fetch the OIDC configuration.
Some cloud providers or managed Kubernetes services might not expose the Service Account Issuer Discovery as an unauthenticated endpoint. In such
cases, you can serve this configuration from a different endpoint and set this as the discoveryURL
in structured authentication configuration.
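To check that the discovery document is actually reachable without credentials, you can query it anonymously from outside the Greenhouse cluster, for example as follows; the API server URL placeholder is yours to fill in, and -k skips TLS verification and should be dropped if the CA is trusted.
curl -k https://<greenhouse-api-server-url>/.well-known/openid-configuration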
Configure the OIDC issuer in the Structured Authentication Configuration of the remote cluster.
Example Structured Authentication Configuration file
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://<greenhouse-service-account-issuer>
      audiences:
        - greenhouse # audience should be greenhouse
    claimMappings:
      username:
        claim: 'sub' # claim to be used as username
        prefix: 'greenhouse:' # prefix to be added to the username to prevent impersonation (can be any string of your choice)
  # additional trusted issuers
  # - issuer:
Add RBAC rules to the remote cluster, authorizing Greenhouse to manage Kubernetes resources.
The subject kind User name must follow the pattern of
<prefix>:system:serviceaccount:<your-organization-namespace>:<cluster-name>.
<prefix> is the prefix used in the Structured Authentication Configuration file for the username claim mapping.
For convenience, the `prefix` is set to `greenhouse:` in the example `Structured Authentication Configuration`,
but it can be any string identifier of your choice.
If you use '-' in the prefix, for example `identifier-`, then the subject name should be `identifier-system:serviceaccount:<your-organization-namespace>:<cluster-name>`.
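A minimal ClusterRoleBinding sketch is shown below; it assumes the `greenhouse:` prefix from the example above, binds the built-in cluster-admin ClusterRole as shown in the connectivity diagram, and uses a hypothetical binding name. Bind a more restrictive ClusterRole if your setup allows it.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: greenhouse-cluster-admin # hypothetical name, choose your own
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: greenhouse:system:serviceaccount:<your-organization-namespace>:<cluster-name>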
Onboard
You can now onboard the remote Cluster to your Greenhouse Organization by applying a Secret in the following format:
apiVersion: v1
kind: Secret
metadata:
  annotations:
    "oidc.greenhouse.sap/api-server-url": "https://<remote-cluster-api-server-url>"
  name: <cluster-name> # ensure the name provided here is the same as the <cluster-name> in the ClusterRoleBinding
  namespace: <organization-namespace>
data:
  ca.crt: <double-encoded-ca.crt> # remote cluster CA certificate base64 encoded
type: greenhouse.sap/oidc # secret type
Mandatory fields:
the annotation oidc.greenhouse.sap/api-server-url must have a valid URL pointing to the remote cluster’s API server
the ca.crt field must contain the remote cluster’s CA certificate
the type of the Secret must be greenhouse.sap/oidc
the name of the secret must equal the <cluster-name> used in the ClusterRoleBinding Subject
ca.crt is the certificate-authority-data from the kubeconfig file of the remote cluster.
The certificate-authority-data can be extracted from the ConfigMap kube-root-ca.crt. This ConfigMap is present in every Namespace.
If the certificate is extracted from kube-root-ca.crt then it should be base64 encoded twice before adding it to the
secret.
If the certificate is extracted from the KubeConfig file then the certificate is already base64 encoded, so the
encoding is needed only once.
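The following commands sketch both options, assuming your current kube-context points at the remote cluster; base64 -w0 is GNU coreutils syntax (omit -w0 on macOS), and clusters[0] assumes the remote cluster is the only entry in the kubeconfig.
# Option 1: PEM certificate from the kube-root-ca.crt ConfigMap, encoded twice
kubectl get configmap kube-root-ca.crt --namespace=default -o jsonpath='{.data.ca\.crt}' | base64 -w0 | base64 -w0
# Option 2: certificate-authority-data from the kubeconfig is already encoded once, so encode it once more
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -w0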
Apply the Secret to the Organization Namespace to onboard the remote cluster.
$ kubectl apply -f <oidc-secret-file>.yaml
Troubleshooting
If the bootstrapping failed, you can find details about why it failed in the Cluster.status.statusConditions. More precisely,
there will be conditions of type=KubeConfigValid and type=Ready which contain more information in their message fields.
This is also displayed in the UI on the Cluster details view.
If there is any error message regarding RBAC, check the ClusterRoleBinding and ensure the subject name is correct.
If there is an authentication error, you might see a message similar to
the server has asked for the client to provide credentials;
in such cases, verify the Structured Authentication Configuration and ensure the issuer and audiences are correct.
The API Server logs in the remote cluster will provide more information on the authentication errors.
3 - Cluster offboarding
Offboarding an existing Kubernetes cluster in Greenhouse.
NOTE: The UI is currently in development. For now this guide describes the offboarding workflow via command line.
Pre-requisites
Offboarding a Cluster in Greenhouse requires authenticating to the greenhouse cluster via a kubeconfig file:
greenhouse: The cluster your Greenhouse installation is running on.
organization-admin or cluster-admin privileges are needed to delete a Cluster resource.
Schedule Deletion
By default, Cluster resource deletion is blocked by a ValidatingWebhookConfiguration in Greenhouse.
This is done to prevent accidental deletion of cluster resources.
List the clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
A typical output when you run the command looks like
NAME AGE ACCESSMODE READY
mycluster-1 15d direct True
mycluster-2 35d direct True
mycluster-3 108d direct True
Delete a Cluster resource by annotating it with greenhouse.sap/delete-cluster: "true".
Once the Cluster resource is annotated, the Cluster will be scheduled for deletion in 48 hours (UTC time).
This is reflected in the Cluster resource annotations and in the status conditions.
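For example, to schedule mycluster-1 in the my-org namespace for deletion:
kubectl annotate cluster mycluster-1 --namespace=my-org greenhouse.sap/delete-cluster="true"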
View the deletion schedule by inspecting the Cluster resource:
kubectl get cluster mycluster-1 --namespace=my-org -o yaml
A typical output will show the greenhouse.sap/delete-cluster and greenhouse.sap/deletion-schedule annotations in the Cluster resource metadata, together with the corresponding status conditions.
To unschedule the deletion, remove the greenhouse.sap/delete-cluster annotation; the - at the end of the annotation name is used to remove the annotation.
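A sketch of the removal command, using the same cluster and namespace as above:
kubectl annotate cluster mycluster-1 --namespace=my-org greenhouse.sap/delete-cluster-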
Impact
When a Cluster resource is scheduled for deletion, all Plugin resources associated with the Cluster resource will skip the reconciliation process.
When the deletion schedule is reached, the Cluster resource will be deleted and all associated Plugin resources will be deleted as well.
Immediate Deletion
In order to delete a Cluster resource immediately -
annotate the Cluster resource with greenhouse.sap/delete-cluster. (see Schedule Deletion)
update the greenhouse.sap/deletion-schedule annotation to the current date and time.
You can also annotate the Cluster resource with greenhouse.sap/delete-cluster and greenhouse.sap/deletion-schedule at the same time and set the current date and time for deletion.
The time and date should be in YYYY-MM-DD HH:MM:SS format or golang’s time.DateTime format.
The time should be in UTC timezone.
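For example, to delete mycluster-1 immediately, set both annotations to the current UTC date and time; the timestamp below is illustrative, and --overwrite updates the annotations if they are already present.
kubectl annotate cluster mycluster-1 --namespace=my-org --overwrite \
  greenhouse.sap/delete-cluster="true" \
  greenhouse.sap/deletion-schedule="2025-01-15 10:00:00"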
Troubleshooting
If the cluster deletion has failed, you can troubleshoot the issue by inspecting -
Cluster resource status conditions, specifically the KubeConfigValid condition.
status conditions of the Plugin resources associated with the Cluster resource. There will be a clear indication of the issue in the HelmReconcileFailed condition.