Cluster management
Manage your Kubernetes clusters via Greenhouse.
Greenhouse enables organizations to register their Kubernetes clusters within the platform, providing a centralized interface for managing and monitoring these clusters.
Once registered, users can perform tasks related to cluster management, such as deploying applications, scaling resources, and configuring access control, all within the Greenhouse platform.
This section provides guides for the management of Kubernetes clusters within Greenhouse.
1 - Cluster onboarding
Onboard an existing Kubernetes cluster to Greenhouse.
This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse organization.
If you don't have an organization yet, please reach out to the Greenhouse administrators.
While all members of an organization can see existing clusters, managing them requires org-admin or cluster-admin privileges.
NOTE: The UI is currently in development. For now, this guide describes the onboarding workflow via the command line.
Preparation
Download the latest greenhousectl binary from here.
Onboarding a cluster to Greenhouse requires you to authenticate against two different Kubernetes clusters via their respective kubeconfig files:
- greenhouse: The cluster your Greenhouse installation is running on. You need organization-admin or cluster-admin privileges.
- bootstrap: The cluster you want to onboard. You need system:masters privileges.
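To verify that your bootstrap kubeconfig carries sufficient privileges before you start, a generic kubectl check (plain Kubernetes tooling, not specific to Greenhouse) can help:
# Should print "yes" if the kubeconfig grants unrestricted (system:masters-level) access
kubectl auth can-i '*' '*' --all-namespaces --kubeconfig=<path/to/bootstrap-kubeconfig-file>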
For consistency we will refer to those two clusters by their names from now on.
You need to have the kubeconfig files for both the greenhouse and the bootstrap cluster at hand. The kubeconfig file for the greenhouse cluster can be downloaded via the Greenhouse dashboard:
Organization > Clusters > Access Greenhouse cluster.
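Before running the bootstrap command, it can be worth double-checking which cluster each file actually points to, for example:
# Print the current context of each kubeconfig (plain kubectl, nothing Greenhouse-specific)
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> config current-context
kubectl --kubeconfig=<path/to/greenhouse-kubeconfig-file> config current-context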
Onboard
For accessing the bootstrap cluster, greenhousectl expects your default Kubernetes kubeconfig file and context to be set to the bootstrap cluster. This can be achieved by passing the --kubeconfig flag or by setting the KUBECONFIG environment variable.
The location of the kubeconfig file for the greenhouse cluster is passed via the --greenhouse-kubeconfig flag.
greenhousectl cluster bootstrap --kubeconfig=<path/to/bootstrap-kubeconfig-file> --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>
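Equivalently, as noted above, the bootstrap kubeconfig can be supplied via the KUBECONFIG environment variable instead of the --kubeconfig flag:
# Same invocation, with the bootstrap kubeconfig provided via the environment
export KUBECONFIG=<path/to/bootstrap-kubeconfig-file>
greenhousectl cluster bootstrap --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>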
Since Greenhouse generates URLs which contain the cluster name, we highly recommend choosing a short cluster name.
For Gardener clusters in particular, setting a short name is mandatory, because Gardener uses very long cluster names, e.g. garden-greenhouse--monitoring-external.
A typical output of the command looks like this:
2024-02-01T09:34:55.522+0100 INFO setup Loaded kubeconfig {"context": "default", "host": "https://api.greenhouse-qa.eu-nl-1.cloud.sap"}
2024-02-01T09:34:55.523+0100 INFO setup Loaded client kubeconfig {"host": "https://api.monitoring.greenhouse.shoot.canary.k8s-hana.ondemand.com"}
2024-02-01T09:34:56.579+0100 INFO setup Bootstraping cluster {"clusterName": "monitoring", "orgName": "ccloud"}
2024-02-01T09:34:56.639+0100 INFO setup created namespace {"name": "ccloud"}
2024-02-01T09:34:56.696+0100 INFO setup created serviceAccount {"name": "greenhouse"}
2024-02-01T09:34:56.810+0100 INFO setup created clusterRoleBinding {"name": "greenhouse"}
2024-02-01T09:34:57.189+0100 INFO setup created clusterSecret {"name": "monitoring"}
2024-02-01T09:34:58.309+0100 INFO setup Bootstraping cluster finished {"clusterName": "monitoring", "orgName": "ccloud"}
After onboarding
- List all clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
- Show the details of a cluster:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o yaml
Example:
apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
  creationTimestamp: "2024-02-07T10:23:23Z"
  finalizers:
  - greenhouse.sap/cleanup
  generation: 1
  name: monitoring
  namespace: ccloud
  resourceVersion: "282792586"
  uid: 0db6e464-ec36-459e-8a05-4ad668b57f42
spec:
  accessMode: direct
  maxTokenValidity: 72h
status:
  bearerTokenExpirationTimestamp: "2024-02-09T06:28:57Z"
  kubernetesVersion: v1.27.8
  statusConditions:
    conditions:
    - lastTransitionTime: "2024-02-09T06:28:57Z"
      status: "True"
      type: Ready
When the status.kubernetesVersion field shows the correct version of the Kubernetes cluster, the cluster was successfully bootstrapped in Greenhouse. The status.statusConditions.conditions list will then contain a Condition with type=Ready and status="True".
In the remote cluster, a new namespace is created and contains some resources managed by Greenhouse.
The namespace has the same name as your organization in Greenhouse.
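A quick way to confirm this on the bootstrap cluster is to look for the namespace and the service account shown in the log output above (illustrative, assuming the resource names from that output):
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> get namespace <greenhouse-organization-name>
kubectl --kubeconfig=<path/to/bootstrap-kubeconfig-file> get serviceaccount greenhouse --namespace=<greenhouse-organization-name>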
Troubleshooting
If the bootstrapping failed, you can find details about why it failed in Cluster.statusConditions. More precisely, there will be a condition of type=KubeConfigValid which might have hints in the message field. This is also displayed in the UI on the Cluster details view.
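To read that message directly from the command line, a jsonpath query along these lines can be used (a sketch, based on the condition layout shown in the example above):
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o jsonpath='{.status.statusConditions.conditions[?(@.type=="KubeConfigValid")].message}'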
Rerunning the onboarding command with an updated kubeconfig file will fix these issues.