Cluster management
Manage your Kubernetes clusters via Greenhouse.
Greenhouse enables organizations to register their Kubernetes clusters within the platform, providing a centralized interface for managing and monitoring these clusters.
Once registered, users can perform tasks related to cluster management, such as deploying applications, scaling resources, and configuring access control, all within the Greenhouse platform.
This section provides guides for the management of Kubernetes clusters within Greenhouse.
1 - Cluster onboarding
Onboard an existing Kubernetes cluster to Greenhouse.
Content Overview
This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse organization.
If you don’t have an organization yet please reach out to the Greenhouse administrators.
While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.
NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.
Preparation
Download the latest greenhousectl binary from here.
Onboarding a Cluster to Greenhouse will require you to authenticate to two different Kubernetes clusters via respective kubeconfig files:
- greenhouse: The cluster your Greenhouse installation is running on. You need organization-admin or cluster-admin privileges.
- bootstrap: The cluster you want to onboard. You need system:masters privileges.
For consistency we will refer to those two clusters by their names from now on.
You need to have the kubeconfig files for both the greenhouse and the bootstrap cluster at hand. The kubeconfig file for the greenhouse cluster can be downloaded via the Greenhouse dashboard:
Organization > Clusters > Access Greenhouse cluster.
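Before proceeding, you can check that both kubeconfig files grant the required access. This is a minimal sketch with placeholder paths and names, not part of the onboarding command itself:
kubectl --kubeconfig=<path/to/greenhouse-kubeconfig-file> --namespace=<greenhouse-organization-name> get clusters
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> auth can-i '*' '*' --all-namespaces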
Onboard
For accessing the bootstrap cluster, greenhousectl expects your default Kubernetes kubeconfig file and context to be set to bootstrap. This can be achieved by passing the --kubeconfig flag or by setting the KUBECONFIG env var.
The location of the kubeconfig file for the greenhouse cluster is passed via the --greenhouse-kubeconfig flag.
greenhousectl cluster bootstrap --kubeconfig=<path/to/bootstrap-kubeconfig-file> --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>
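Alternatively, the bootstrap kubeconfig can be supplied via the environment instead of the --kubeconfig flag; a minimal sketch with placeholder paths:
export KUBECONFIG=<path/to/bootstrap-kubeconfig-file>
greenhousectl cluster bootstrap --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>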
Since Greenhouse generates URLs which contain the cluster name, we highly recommend choosing a short cluster name.
In particular for Gardener clusters, setting a short name is mandatory, because Gardener has very long cluster names, e.g. garden-greenhouse--monitoring-external.
A typical output when you run the command looks like:
2024-02-01T09:34:55.522+0100 INFO setup Loaded kubeconfig {"context": "default", "host": "https://api.greenhouse-qa.eu-nl-1.cloud.sap"}
2024-02-01T09:34:55.523+0100 INFO setup Loaded client kubeconfig {"host": "https://api.monitoring.greenhouse.shoot.canary.k8s-hana.ondemand.com"}
2024-02-01T09:34:56.579+0100 INFO setup Bootstraping cluster {"clusterName": "monitoring", "orgName": "ccloud"}
2024-02-01T09:34:56.639+0100 INFO setup created namespace {"name": "ccloud"}
2024-02-01T09:34:56.696+0100 INFO setup created serviceAccount {"name": "greenhouse"}
2024-02-01T09:34:56.810+0100 INFO setup created clusterRoleBinding {"name": "greenhouse"}
2024-02-01T09:34:57.189+0100 INFO setup created clusterSecret {"name": "monitoring"}
2024-02-01T09:34:58.309+0100 INFO setup Bootstraping cluster finished {"clusterName": "monitoring", "orgName": "ccloud"}
After onboarding
- List all clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
- Show the details of a cluster:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o yaml
Example:
apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
  creationTimestamp: "2024-02-07T10:23:23Z"
  finalizers:
  - greenhouse.sap/cleanup
  generation: 1
  name: monitoring
  namespace: ccloud
  resourceVersion: "282792586"
  uid: 0db6e464-ec36-459e-8a05-4ad668b57f42
spec:
  accessMode: direct
  maxTokenValidity: 72h
status:
  bearerTokenExpirationTimestamp: "2024-02-09T06:28:57Z"
  kubernetesVersion: v1.27.8
  statusConditions:
    conditions:
    - lastTransitionTime: "2024-02-09T06:28:57Z"
      status: "True"
      type: Ready
When the status.kubernetesVersion field shows the correct version of the Kubernetes cluster, the cluster was successfully bootstrapped in Greenhouse.
Then status.statusConditions.conditions will contain a Condition with type=Ready and status="True".
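As a quick check, assuming the field paths shown in the example above, both values can be read directly via jsonpath:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o jsonpath='{.status.kubernetesVersion}'
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o jsonpath='{.status.statusConditions.conditions[?(@.type=="Ready")].status}'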
In the remote cluster, a new namespace is created and contains some resources managed by Greenhouse.
The namespace has the same name as your organization in Greenhouse.
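To verify this on the onboarded cluster, a minimal sketch (the namespace and service account names follow the bootstrap log output above):
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> get namespace <greenhouse-organization-name>
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> --namespace=<greenhouse-organization-name> get serviceaccount greenhouse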
Troubleshooting
If the bootstrapping failed, you can find details about why it failed in the Cluster.statusConditions. More precisely, there will be a condition of type=KubeConfigValid which might have hints in the message field. This is also displayed in the UI on the Cluster details view.
Rerunning the onboarding command with an updated kubeConfig file will fix these issues.
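To surface just that message from the command line, assuming the condition layout shown in the example above:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o jsonpath='{.status.statusConditions.conditions[?(@.type=="KubeConfigValid")].message}'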
2 - Cluster offboarding
Offboarding an existing Kubernetes cluster in Greenhouse.
Content Overview
This guide describes how to offboard an existing Kubernetes cluster in your Greenhouse organization.
While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.
NOTE: The UI is currently in development. For now this guide describes the offboarding workflow via command line.
Pre-requisites
Offboarding a Cluster in Greenhouse requires authenticating to the greenhouse cluster via a kubeconfig file:
- greenhouse: The cluster your Greenhouse installation is running on. organization-admin or cluster-admin privileges are needed for deleting a Cluster resource.
Schedule Deletion
By default, Cluster resource deletion is blocked by a ValidatingWebhookConfiguration in Greenhouse.
This is done to prevent accidental deletion of cluster resources.
List the clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
A typical output when you run the command looks like:
NAME AGE ACCESSMODE READY
mycluster-1 15d direct True
mycluster-2 35d direct True
mycluster-3 108d direct True
Delete a Cluster resource by annotating it with greenhouse.sap/delete-cluster: "true".
Example:
kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster=true --namespace=my-org
Once the Cluster resource is annotated, the Cluster will be scheduled for deletion in 48 hours (UTC time).
This is reflected in the Cluster resource annotations and in the status conditions.
View the deletion schedule by inspecting the Cluster resource:
kubectl get cluster mycluster-1 --namespace=my-org -o yaml
A typical output when you run the command looks like:
apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
  annotations:
    greenhouse.sap/delete-cluster: "true"
    greenhouse.sap/deletion-schedule: "2025-01-17 11:16:40"
  finalizers:
  - greenhouse.sap/cleanup
  name: mycluster-1
  namespace: my-org
spec:
  accessMode: direct
  kubeConfig:
    maxTokenValidity: 72
status:
  ...
  statusConditions:
    conditions:
    ...
    - lastTransitionTime: "2025-01-15T11:16:40Z"
      message: deletion scheduled at 2025-01-17 11:16:40
      reason: ScheduledDeletion
      status: "False"
      type: Delete
In order to cancel the deletion, you can remove the greenhouse.sap/delete-cluster annotation:
kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster- --namespace=my-org
The - at the end of the annotation name is used to remove the annotation.
Impact
When a Cluster resource is scheduled for deletion, all Plugin resources associated with the Cluster resource will skip the reconciliation process.
When the deletion schedule is reached, the Cluster resource will be deleted and all associated Plugin resources will be deleted as well.
In order to delete a Cluster resource immediately:
- annotate the Cluster resource with greenhouse.sap/delete-cluster (see Schedule Deletion)
- update the greenhouse.sap/deletion-schedule annotation to the current date and time.
You can also annotate the Cluster resource with greenhouse.sap/delete-cluster and greenhouse.sap/deletion-schedule at the same time and set the current date and time for deletion.
The time and date should be in YYYY-MM-DD HH:MM:SS format or golang’s time.DateTime format.
The time should be in the UTC timezone.
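For example, assuming the current UTC time is 2025-01-15 11:30:00 (substitute your own), both annotations can be set in one command:
kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster=true greenhouse.sap/deletion-schedule="2025-01-15 11:30:00" --namespace=my-org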
Troubleshooting
If the cluster deletion has failed, you can troubleshoot the issue by inspecting:
- the Cluster resource status conditions, specifically the KubeConfigValid condition.
- the status conditions of the Plugin resources associated with the Cluster resource. There will be a clear indication of the issue in the HelmReconcileFailed condition.
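A minimal sketch for inspecting those conditions, assuming the Plugin resources live in the organization namespace alongside the Cluster resource:
kubectl --namespace=my-org get cluster mycluster-1 -o jsonpath='{.status.statusConditions.conditions[?(@.type=="KubeConfigValid")].message}'
kubectl --namespace=my-org get plugins
kubectl --namespace=my-org get plugin <plugin-name> -o yaml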