User guides
1 - Organization management
This section provides guides for the management of your organization in Greenhouse.
1.1 - Creating an organization
Before you begin
This guide describes how to create an organization in Greenhouse.
Creating an organization
An organization within the Greenhouse cloud operations platform is a separate unit with its own configuration, teams, and resources. Organizations can represent different teams, departments, or projects within an enterprise, and they operate independently within the Greenhouse platform. They allow for the isolation and management of resources and configurations specific to their needs. Each Organization receives its own Kubernetes Namespace within the Greenhouse cluster.
While Greenhouse is built on the idea of a self-service, API- and automation-driven platform, the workflow to onboard an organization to Greenhouse involves reaching out to the Greenhouse administrators. This ensures all prerequisites are met, the organization is configured correctly and the administrators of the Organization understand the platform capabilities.
❗ Please note that the name of an organization is immutable.
Steps
IdP Group
An IdP Group is required to configure the administrators of the organization. Onboarding without an IdP Group is not possible, as this would leave the organization without any administrators having access. Please include the name of the IdP Group in the message to the Greenhouse team when signing up.
Identity Provider
The authentication for the users belonging to your organization is based on the OpenID Connect (OIDC) standard.
Please include the parameters for your OIDC provider in the message to the Greenhouse team when signing up.
Greenhouse organization
A Greenhouse administrator applies the following configuration to the central Greenhouse cluster.
Bear in mind that the name of the organization is immutable and will be part of all URLs.
apiVersion: v1
kind: Namespace
metadata:
  name: my-organization
---
apiVersion: v1
kind: Secret
metadata:
  name: oidc-config
  namespace: my-organization
type: Opaque
data:
  clientID: ...
  clientSecret: ...
---
apiVersion: greenhouse.sap/v1alpha1
kind: Organization
metadata:
  name: my-organization
spec:
  authentication:
    oidc:
      clientIDReference:
        key: clientID
        name: oidc-config
      clientSecretReference:
        key: clientSecret
        name: oidc-config
      issuer: https://...
    scim:
      baseURL: URL to the SCIM server.
      basicAuthUser:
        secret:
          name: Name of the secret in the same namespace.
          key: Key in the secret holding the user value.
      basicAuthPw:
        secret:
          name: Name of the secret in the same namespace.
          key: Key in the secret holding the password value.
  description: My new organization
  displayName: Short name of the organization
  mappedOrgAdminIdPGroup: Name of the group in the IDP that should be mapped to the organization admin role.
Setting up Team members synchronization with Greenhouse
Team member synchronization with Greenhouse requires access to a SCIM API.
For the members to be reflected in a Team’s status, the created Organization needs to be configured with the URL and credentials of the SCIM API. The SCIM API is used to get the members for teams in the organization based on the IdP groups set for the teams.
The IdP group for the organization admin team must be set in the mappedOrgAdminIdPGroup field of the Organization configuration. It is required for the synchronization to work. IdP groups for the remaining teams in the organization should be set in their respective configurations, in the Team’s mappedIdPGroup field.
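For illustration, a Team referencing an IdP group could look like this minimal sketch (my-team and my-organization are placeholder names; the fields follow the Team example in the Team creation guide below):
apiVersion: greenhouse.sap/v1alpha1
kind: Team
metadata:
  name: my-team
  namespace: my-organization
spec:
  description: Example team synchronized from the IdP
  mappedIdPGroup: <IdP group name>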
2 - Cluster management
Greenhouse enables organizations to register their Kubernetes clusters within the platform, providing a centralized interface for managing and monitoring these clusters.
Once registered, users can perform tasks related to cluster management, such as deploying applications, scaling resources, and configuring access control, all within the Greenhouse platform.
This section provides guides for the management of Kubernetes clusters within Greenhouse.
2.1 - Cluster onboarding
Content Overview
This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse organization.
If you don’t have an organization yet please reach out to the Greenhouse administrators.
While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.
NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.
Preparation
Download the latest greenhousectl binary from here.
Onboarding a Cluster to Greenhouse will require you to authenticate to two different Kubernetes clusters via respective kubeconfig files:
- greenhouse: The cluster your Greenhouse installation is running on. You need organization-admin or cluster-admin privileges.
- bootstrap: The cluster you want to onboard. You need system:masters privileges.
For consistency we will refer to those two clusters by their names from now on.
You need to have the kubeconfig files for both the greenhouse and the bootstrap cluster at hand. The kubeconfig file for the greenhouse cluster can be downloaded via the Greenhouse dashboard:
Organization > Clusters > Access Greenhouse cluster.
Onboard
For accessing the bootstrap cluster, the greenhousectl will expect your default Kubernetes kubeconfig file and context to be set to bootstrap. This can be achieved by passing the --kubeconfig flag or by setting the KUBECONFIG env var.
The location of the kubeconfig file to the greenhouse cluster is passed via the --greenhouse-kubeconfig flag.
greenhousectl cluster bootstrap --kubeconfig=<path/to/bootstrap-kubeconfig-file> --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>
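Alternatively, assuming the KUBECONFIG environment variable already points to the bootstrap kubeconfig, the --kubeconfig flag can be omitted:
export KUBECONFIG=<path/to/bootstrap-kubeconfig-file>
greenhousectl cluster bootstrap --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>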
Since Greenhouse generates URLs which contain the cluster name, we highly recommend choosing a short cluster name.
For Gardener clusters in particular, setting a short name is mandatory, because Gardener generates very long cluster names, e.g. garden-greenhouse--monitoring-external.
A typical output when you run the command looks like
2024-02-01T09:34:55.522+0100 INFO setup Loaded kubeconfig {"context": "default", "host": "https://api.greenhouse.tld"}
2024-02-01T09:34:55.523+0100 INFO setup Loaded client kubeconfig {"host": "https://api.remote.tld"}
2024-02-01T09:34:56.579+0100 INFO setup Bootstraping cluster {"clusterName": "monitoring", "orgName": "ccloud"}
2024-02-01T09:34:56.639+0100 INFO setup created namespace {"name": "ccloud"}
2024-02-01T09:34:56.696+0100 INFO setup created serviceAccount {"name": "greenhouse"}
2024-02-01T09:34:56.810+0100 INFO setup created clusterRoleBinding {"name": "greenhouse"}
2024-02-01T09:34:57.189+0100 INFO setup created clusterSecret {"name": "monitoring"}
2024-02-01T09:34:58.309+0100 INFO setup Bootstraping cluster finished {"clusterName": "monitoring", "orgName": "ccloud"}
After onboarding
- List all clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
- Show the details of a cluster:
kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o yaml
Example:
apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
creationTimestamp: "2024-02-07T10:23:23Z"
finalizers:
- greenhouse.sap/cleanup
generation: 1
name: monitoring
namespace: ccloud
resourceVersion: "282792586"
uid: 0db6e464-ec36-459e-8a05-4ad668b57f42
spec:
accessMode: direct
maxTokenValidity: 72h
status:
bearerTokenExpirationTimestamp: "2024-02-09T06:28:57Z"
kubernetesVersion: v1.27.8
statusConditions:
conditions:
- lastTransitionTime: "2024-02-09T06:28:57Z"
status: "True"
type: Ready
When the status.kubernetesVersion field shows the correct version of the Kubernetes cluster, the cluster was successfully bootstrapped in Greenhouse.
The status.statusConditions will then contain a Condition with type=Ready and status="True".
In the remote cluster, a new namespace is created and contains some resources managed by Greenhouse. The namespace has the same name as your organization in Greenhouse.
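To verify, you can list that namespace in the bootstrap cluster (assuming your kubectl context points to it):
kubectl get namespace <greenhouse-organization-name>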
Troubleshooting
If bootstrapping fails, you can inspect the Cluster.statusConditions for more details. The type=KubeConfigValid condition may contain hints in the message field. Additional insights can be found in the type=PermissionsVerified and type=ManagedResourcesDeployed conditions, which indicate whether the ServiceAccount has valid permissions and whether the required resources were successfully deployed. These conditions are also visible in the UI on the Cluster details view.
Rerunning the onboarding command with an updated kubeconfig file will fix these issues.
2.2 - Remote Cluster Connectivity with OIDC
Content Overview
This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse Organization with OIDC configuration. If you don’t have a Greenhouse Organization yet, please reach out to the Greenhouse administrators.
While all members of an Organization can see existing Clusters, their management requires org-admin or
cluster-admin privileges.
NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.
OIDC Overview
Starting from Kubernetes v1.21, the Service Account Issuer Discovery feature turns the Kubernetes API server into an OIDC identity provider. This setup facilitates the issuance of service account tokens to pods that are recognizable by services outside the Kubernetes cluster, thereby establishing an authentication pathway between a pod within the cluster and external services, including those on Azure, AWS, etc.
Starting from Kubernetes v1.30, Structured Authentication Configuration moved to beta and the feature gate is enabled by default. This feature allows configuring multiple OIDC issuers and passing them as a configuration file to the Kubernetes API server.
More information on Structured Authentication Configuration can be found at https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration
With the combination of Service Account Issuer Discovery and Structured Authentication Configuration, Cluster to Cluster trust can be established.
A remote cluster can add the Greenhouse cluster’s Service Account Issuer as an
OIDC issuer in its Structured Authentication Configuration. This allows the Greenhouse cluster to authenticate
against said remote cluster, using an in-cluster service account token.
The OIDC remote cluster connectivity is illustrated in the docs as a sequence diagram between the user, the Greenhouse (admin) cluster and the remote cluster. In summary (the username follows the pattern `prefix:system:serviceaccount:org-name:cluster-name`):
- The user applies a Kubernetes Secret with the OIDC parameters to the Organization’s Namespace in the Greenhouse cluster.
- The bootstrap controller creates a ServiceAccount (with the OIDC Secret set as its owner) and requests a token for it.
- The bootstrap controller writes/updates the kubeconfig in the OIDC Secret (key: greenhouseKubeconfig) and creates the Cluster CR (setting the Cluster as owner of the OIDC Secret).
- The cluster controller fetches the kubeconfig from the Secret and requests the Kubernetes version and node status from the remote cluster.
- The remote cluster introspects the incoming token against the Greenhouse cluster’s Service Account Issuer URL, verifies authorization via RBAC, and responds with the requested resources or an authentication/authorization failure.
- The kubeconfig in the OIDC Secret (key: greenhouseKubeconfig) is rotated periodically.
Preparation
The Greenhouse cluster should expose the /.well-known/openid-configuration over an unauthenticated endpoint to allow
remote clusters to fetch the OIDC configuration.
Some cloud providers or managed Kubernetes services might not expose the Service Account Issuer Discovery as an unauthenticated endpoint. In such cases, you can serve this configuration from a different endpoint and set this as the discoveryURL in structured authentication configuration.
Check out https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-authentication-configuration for more information.
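For illustration, such an issuer entry could look like the following sketch; the discoveryURL value is an assumption for your environment (see the Kubernetes documentation linked above for the exact semantics):
jwt:
- issuer:
    url: https://<greenhouse-service-account-issuer>
    # the discovery document is served from a separate, unauthenticated endpoint
    discoveryURL: https://<alternative-endpoint>/.well-known/openid-configuration
    audiences:
    - greenhouse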
Configure the OIDC issuer in the Structured Authentication Configuration of the remote cluster.
Example Structured Authentication Configuration file
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
- issuer:
url: https://<greenhouse-service-account-issuer>
audiences:
- greenhouse # audience should be greenhouse
claimMappings:
username:
claim: 'sub' # claim to be used as username
prefix: 'greenhouse:' # prefix to be added to the username to prevent impersonation (can be any string of your choice)
# additional trusted issuers
# - issuer:
Add RBAC rules to the remote cluster, authorizing Greenhouse to manage Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: greenhouse-<cluster-name>-oidc-access
subjects:
- kind: User
apiGroup: rbac.authorization.k8s.io
name: greenhouse:system:serviceaccount:<your-organization-namespace>:<cluster-name>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
The subject kind User name must follow the pattern of
<prefix>:system:serviceaccount:<your-organization-namespace>:<cluster-name>.
<prefix> is the prefix used in the Structured Authentication Configuration file for the username claim mapping.
For convenience purposes, the `prefix` is set to `greenhouse:` in the example `Structured Authentication Configuration`
but it can be any string identifier of your choice.
If you use a '-' in the prefix, for example `identifier-`, then the subject name should be `identifier-system:serviceaccount:<your-organization-namespace>:<cluster-name>`.
Onboard
You can now onboard the remote Cluster to your Greenhouse Organization by applying a Secret in the following format:
apiVersion: v1
kind: Secret
metadata:
annotations:
"oidc.greenhouse.sap/api-server-url": "https://<remote-cluster-api-server-url>"
name: <cluster-name> # ensure the name provided here is the same as the <cluster-name> in the ClusterRoleBinding
namespace: <organization-namespace>
data:
ca.crt: <double-encoded-ca.crt> # remote cluster CA certificate base64 encoded
type: greenhouse.sap/oidc # secret type
Mandatory fields:
- the annotation oidc.greenhouse.sap/api-server-url must have a valid URL pointing to the remote cluster’s API server
- the ca.crt field must contain the remote cluster’s CA certificate
- the type of the Secret must be greenhouse.sap/oidc
- the name of the Secret must equal the <cluster-name> used in the ClusterRoleBinding subject
ca.crt is the certificate-authority-data from the kubeconfig file of the remote cluster.
The certificate-authority-data can be extracted from the ConfigMap kube-root-ca.crt. This ConfigMap is present in every Namespace.
If the certificate is extracted from kube-root-ca.crt then it should be base64 encoded twice before adding it to the
secret.
example:
$ kubectl get configmap kube-root-ca.crt -n kube-system -o jsonpath='{.data.ca\.crt}' | base64 | base64
If the certificate is extracted from the KubeConfig file then the certificate is already base64 encoded, so the encoding is needed only once.
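For example, assuming your current kubectl context points to the remote cluster, the already-encoded value can be read from the kubeconfig and encoded once more:
$ kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64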
Apply the Secret to the Organization Namespace to onboard the remote cluster.
$ kubectl apply -f <oidc-secret-file>.yaml
Troubleshooting
If the bootstrapping failed, you can find details about why it failed in the Cluster.status.statusConditions.
More precisely, there will be conditions of type=KubeConfigValid, type=PermissionsVerified, type=ManagedResourcesDeployed, and type=Ready, each of which may provide helpful context in the message field.
These conditions are also displayed in the UI on the Cluster details view.
If there is any error message regarding RBAC, check the ClusterRoleBinding and ensure the subject name is correct.
If there is an authentication error, you might see a message similar to
the server has asked for the client to provide credentials.
In such cases, verify the Structured Authentication Configuration and ensure the issuer and audiences are correct.
The API Server logs in the remote cluster will provide more information on the authentication errors.
2.3 - Cluster offboarding
Content Overview
This guide describes how to offboard an existing Kubernetes cluster from your Greenhouse organization.
While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.
NOTE: The UI is currently in development. For now this guide describes the offboarding workflow via command line.
Pre-requisites
Offboarding a Cluster in Greenhouse requires authenticating to the greenhouse cluster via kubeconfig file:
- greenhouse: The cluster your Greenhouse installation is running on. organization-admin or cluster-admin privileges are needed for deleting a Cluster resource.
Schedule Deletion
By default, Cluster resource deletion is blocked by a ValidatingWebhookConfiguration in Greenhouse.
This is done to prevent accidental deletion of cluster resources.
List the clusters in your Greenhouse organization:
kubectl --namespace=<greenhouse-organization-name> get clusters
A typical output when you run the command looks like
NAME AGE ACCESSMODE READY
mycluster-1 15d direct True
mycluster-2 35d direct True
mycluster-3 108d direct True
Delete a Cluster resource by annotating it with greenhouse.sap/delete-cluster: "true".
Example:
kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster=true --namespace=my-org
Once the Cluster resource is annotated, the Cluster will be scheduled for deletion in 48 hours (UTC time).
This is reflected in the Cluster resource annotations and in the status conditions.
View the deletion schedule by inspecting the Cluster resource:
kubectl get cluster mycluster-1 --namespace=my-org -o yaml
A typical output when you run the command looks like
apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
annotations:
greenhouse.sap/delete-cluster: "true"
greenhouse.sap/deletion-schedule: "2025-01-17 11:16:40"
finalizers:
- greenhouse.sap/cleanup
name: mycluster-1
namespace: my-org
spec:
accessMode: direct
kubeConfig:
maxTokenValidity: 72
status:
...
statusConditions:
conditions:
...
- lastTransitionTime: "2025-01-15T11:16:40Z"
message: deletion scheduled at 2025-01-17 11:16:40
reason: ScheduledDeletion
status: "False"
type: Delete
In order to cancel the deletion, you can remove the greenhouse.sap/delete-cluster annotation:
kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster- --namespace=my-org
The - at the end of the annotation name is used to remove the annotation.
Impact
When a Cluster resource is scheduled for deletion, all Plugin resources associated with the Cluster resource will skip the reconciliation process.
When the deletion schedule is reached, the Cluster resource will be deleted, and all associated Plugin resources will be deleted as well.
Immediate Deletion
In order to delete a Cluster resource immediately:
- annotate the Cluster resource with greenhouse.sap/delete-cluster (see Schedule Deletion)
- update the greenhouse.sap/deletion-schedule annotation to the current date and time
You can also annotate the Cluster resource with greenhouse.sap/delete-cluster and greenhouse.sap/deletion-schedule at the same time and set the current date and time for deletion.
The time and date should be in YYYY-MM-DD HH:MM:SS format, or Go’s time.DateTime format. The time should be in the UTC timezone.
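For example, to remove mycluster-1 from the earlier examples right away, both annotations could be set together (replace the timestamp with the current UTC date and time):
kubectl annotate cluster mycluster-1 greenhouse.sap/delete-cluster=true "greenhouse.sap/deletion-schedule=2025-01-15 11:16:40" --namespace=my-org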
Troubleshooting
If the cluster deletion has failed, you can troubleshoot the issue by inspecting:
- the Cluster resource status conditions, specifically the KubeConfigValid condition.
- the status conditions of the Plugin resources associated with the Cluster resource. There will be a clear indication of the issue in the HelmReconcileFailed condition.
3 - Plugin management
Plugins extend the capabilities of the Greenhouse cloud operations platform, adding specific features or functionalities to tailor and enhance the platform for specific organizational needs.
These plugins are integral to Greenhouse’s extensibility, allowing users to customize their cloud operations environment and address unique requirements while operating within the Greenhouse ecosystem.
This section provides guides for the management of plugins for Kubernetes clusters within Greenhouse.
3.1 - Testing a Plugin
Overview

Plugin Testing Requirements
All Plugins contributed to the Plugin-Extensions repository should include comprehensive Helm Chart tests using the bats/bats-detik testing framework. This ensures our Plugins are robust and deployable, and catches potential issues early in the development cycle.
What is bats/bats-detik?
The bats/bats-detik framework simplifies end-to-end (e2e) Testing in Kubernetes. It combines the Bash Automated Testing System (bats) with Kubernetes-specific assertions (detik). This allows you to write test cases using natural language-like syntax, making your tests easier to read and maintain.
Implementing Tests
- Create a /tests folder inside your Plugin’s Helm Chart templates folder to store your test resources.
- ConfigMap definition:
  - Create a test-<plugin-name>-config.yaml file in the templates/tests directory to define a ConfigMap that will hold your test script.
  - This ConfigMap contains the test script run.sh that will be executed by the test Pod to run your tests.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-test
namespace: {{ .Release.Namespace }}
labels:
type: integration-test
data:
run.sh: |-
#!/usr/bin/env bats
load "/usr/lib/bats/bats-detik/utils"
load "/usr/lib/bats/bats-detik/detik"
DETIK_CLIENT_NAME="kubectl"
@test "Verify successful deployment and running status of the {{ .Release.Name }}-operator pod" {
verify "there is 1 deployment named '{{ .Release.Name }}-operator'"
verify "there is 1 service named '{{ .Release.Name }}-operator'"
try "at most 2 times every 5s to get pods named '{{ .Release.Name }}-operator' and verify that '.status.phase' is 'running'"
}
@test "Verify successful creation and bound status of {{ .Release.Name }} persistent volume claims" {
try "at most 3 times every 5s to get persistentvolumeclaims named '{{ .Release.Name }}.*' and verify that '.status.phase' is 'Bound'"
}
@test "Verify successful creation and available replicas of {{ .Release.Name }} Prometheus resource" {
try "at most 3 times every 5s to get prometheuses named '{{ .Release.Name }}' and verify that '.status.availableReplicas' is more than '0'"
}
@test "Verify creation of required custom resource definitions (CRDs) for {{ .Release.Name }}" {
verify "there is 1 customresourcedefinition named 'prometheuses'"
verify "there is 1 customresourcedefinition named 'podmonitors'"
}
{{- end -}}
Note: You can use this guide for reference when writing your test assertions.
- Test Pod Definition:
  - Create a test-<plugin-name>.yaml file in the templates/tests directory to define a Pod that will run your tests.
  - This test Pod will mount the ConfigMap created in the previous step and will execute the test script run.sh.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: Pod
metadata:
name: {{ .Release.Name }}-test
namespace: {{ .Release.Namespace }}
labels:
type: integration-test
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
spec:
serviceAccountName: {{ .Release.Name }}-test
containers:
- name: bats-test
image: "{{ .Values.testFramework.image.registry}}/{{ .Values.testFramework.image.repository}}:{{ .Values.testFramework.image.tag }}"
imagePullPolicy: {{ .Values.testFramework.image.pullPolicy }}
command: ["bats", "-t", "/tests/run.sh"]
volumeMounts:
- name: tests
mountPath: /tests
readOnly: true
volumes:
- name: tests
configMap:
name: {{ .Release.Name }}-test
restartPolicy: Never
{{- end -}}
- RBAC Permissions:
  - Create the necessary RBAC resources in the templates/tests folder with a dedicated ServiceAccount and role authorisations so that the test Pod can cover the test cases.
  - You can use test-permissions.yaml from kube-monitoring as a reference to configure RBAC permissions for your test Pod.
- Configure the Test Framework in the Plugin’s values.yaml:
  - Add the following configuration to your Plugin’s values.yaml file:
testFramework:
enabled: true
image:
registry: ghcr.io
repository: cloudoperators/greenhouse-extensions-integration-test
tag: main
imagePullPolicy: IfNotPresent
- Running the Tests:
Important: Once you have completed all the steps above, you are ready to run the tests. However, before running the tests, ensure that you perform a fresh Helm installation or upgrade of your Plugin’s Helm release against your test Kubernetes cluster (for example, Minikube or Kind) by executing the following command:
# For a new installation
helm install <Release name> <chart-path>
# For an upgrade
helm upgrade <Release name> <chart-path>
- After the Helm installation or upgrade is successful, run the tests against the same test Kubernetes Cluster by executing the following command.
helm test <Release name>
Plugin Testing with dependencies during Pull Requests
Overview
Some Plugins require other Plugins to be installed in the Cluster for their tests to run successfully. To support this, each Plugin can declare required dependencies using a test-dependencies.yaml file.
[!NOTE]
The test-dependencies.yaml file is required if other Plugins need to be installed in the Kind Cluster created by the GitHub Actions workflow before running tests during a Pull Request for the Plugin.
How It Works
- Each Plugin can optionally include a test-dependencies.yaml file in the Plugin’s root directory (e.g., Thanos/test-dependencies.yaml).
- This file defines both the dependencies (other Plugins) that should be installed before testing begins and custom values for these dependencies.
Example test-dependencies.yaml
dependencies:
- kube-monitoring
values:
kubeMonitoring:
prometheus:
enabled: true
serviceMonitor:
enabled: false
prometheusSpec:
thanos:
objectStorageConfig:
secret:
type: FILESYSTEM
config:
directory: "/test"
prefix: ""
In this example, the Plugin:
- Declares kube-monitoring as a dependency that must be installed first
- Provides custom values for this dependent Plugin, specifically configuring Prometheus settings
Dependency Structure
The test-dependencies.yaml file supports:
- dependencies: A list of Plugin names that should be installed before testing the current Plugin.
- values: Custom configuration values to be applied when installing dependencies
Automation during Pull Requests
The GitHub Actions workflow automatically:
- Detects Plugins that are changed in the Pull Request.
- Parses the test-dependencies.yaml for each changed Plugin, if present.
- Installs the listed dependencies in order.
- Proceeds with Helm Chart linting and testing.
Testing Values Configuration
Parent Plugin Configuration
- A Plugin may optionally provide a <plugin-name>/ci/test-values.yaml file.
- The GitHub Actions workflow will use this values file for testing the Plugin if it exists.
- This allows you to customize values specifically for CI testing, without modifying the default values.yaml.
Dependent Plugin Configuration
- Values for dependent Plugins should be specified in the values section of your Plugin’s test-dependencies.yaml file.
- This allows you to customize the configuration of dependent Plugins when they are installed for testing.
- The values specified in the test-dependencies.yaml file will override the default values of the dependent Plugins.
Example File Structure:
alert/
├── charts/
├── ci/
│   └── test-values.yaml
└── test-dependencies.yaml
Contribution Checklist
Before submitting a Pull Request:
- Ensure your Plugin’s Helm Chart includes a /tests directory.
- Verify the presence of test-<plugin-name>.yaml, test-<plugin-name>-config.yaml, and test-permissions.yaml files.
- Test your Plugin thoroughly using helm test <release-name> and confirm that all tests pass against a test Kubernetes Cluster.
- Include a brief description of the tests in your Pull Request.
- Make sure that your Plugin’s Chart Directory and the Plugin’s Upstream Chart Repository are added to this Greenhouse-Extensions Helm Test Config File. This will ensure that your Plugin’s tests are automatically run in the GitHub Actions workflow when you submit a Pull Request for this Plugin.
- Note that the dependencies of your Plugin’s Helm Chart might also have their own tests. If so, ensure that the tests of the dependencies are also passing.
- If your Plugin relies on other Plugins for testing, please follow the Plugin Testing with Dependencies section for declaring those dependencies.
Important Notes
- Test Coverage: Aim for comprehensive test coverage to ensure your Plugin’s reliability.
3.2 - Plugin deployment
Before you begin
This guide describes how to configure and deploy a Greenhouse plugin.
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
name: kube-monitoring-martin
namespace: <organization namespace> # same namespace in remote cluster for resources
spec:
clusterName: <name of the remote cluster>
disabled: false
displayName: <any human readable name>
pluginDefinition: <pluginDefinition name>
releaseNamespace: <namespace> # namespace in remote cluster where the plugin is deployed
releaseName: <helm release name> # name of the helm release that will be created
optionValues:
- name: <from the plugin options>
value: <from the plugin options>
- ...
Exposed services and ingresses
Plugins deploying Helm Charts into remote clusters support exposing their services and ingresses in two ways:
Service exposure via service-proxy
Services can be exposed through Greenhouse’s central service-proxy by adding this annotation:
annotations:
greenhouse.sap/expose: "true"
For services with multiple ports, you can specify which port to expose:
annotations:
greenhouse.sap/expose: "true"
greenhouse.sap/exposed-named-port: "https" # optional, defaults to first port
Direct ingress exposure
Ingresses can be exposed directly using their external URLs:
annotations:
greenhouse.sap/expose: "true"
greenhouse.sap/exposed-host: "api.example.com" # optional, for multi-host ingresses
Both types of exposures appear in the Plugin’s status.exposedServices with different types: service or ingress.
Deploying a Plugin
Create the Plugin resource via the command:
kubectl --namespace=<organization name> create -f plugin.yaml
After deployment
- Check with kubectl --namespace=<organization name> get plugin that the Plugin has been properly created. When all components of the plugin are successfully created, the plugin should show the state configured.
- Check in the remote cluster that all plugin resources are created in the organization namespace.
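For example, assuming your kubectl context points to the remote cluster, you could list the resources in the release namespace configured in the Plugin spec:
kubectl --namespace=<releaseNamespace> get all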
URLs for exposed services and ingresses
After deploying the plugin to a remote cluster, the ExposedServices section in Plugin’s status provides an overview of the exposed resources. It maps URLs to both services and ingresses found in the manifest.
Service-proxy URLs (for services)
- Services exposed through service-proxy use the pattern: https://$cluster--$hash.$organization.$basedomain
- The $hash is computed from service--$namespace
Direct ingress URLs (for ingresses)
- Ingresses are exposed using their actual hostnames: https://api.example.com or http://internal.service.com
- The protocol (http/https) is automatically detected from the ingress TLS configuration
- The host is taken from the greenhouse.sap/exposed-host annotation or defaults to the first host rule
Both types are listed together in status.exposedServices with their respective types for easy identification.
3.3 - Managing Plugins for multiple clusters
Managing Plugins for multiple clusters
This guide describes how to configure and deploy a Greenhouse Plugin with the same configuration into multiple clusters.
The PluginPreset resource is used to create and deploy Plugins with an identical configuration into multiple clusters. The list of clusters the Plugins will be deployed to is determined by a LabelSelector.
As a result, whenever a cluster that matches the clusterSelector is onboarded or offboarded, the controller for PluginPresets takes care of the Plugin lifecycle. This means creating or deleting the Plugin for the respective cluster.
The same validation applies to the PluginPreset as to the Plugin. This includes immutable PluginDefinition and ReleaseNamespace fields, as well as the validation of the OptionValues against the PluginDefinition.
In case the PluginPreset is updated all of the Plugin instances that are managed by the PluginPreset will be updated as well. Each Plugin instance that is created from a PluginPreset has a label greenhouse.sap/pluginpreset: <PluginPreset name>. Also the name of the Plugin follows the scheme <PluginPreset name>-<cluster name>.
Changes that are done directly on a Plugin which was created from a PluginPreset will be overwritten immediately by the PluginPreset Controller. All changes must be performed on the PluginPreset itself. If a Plugin already existed with the same name as the PluginPreset would create, this Plugin will be ignored in following reconciliations.
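For example, the Plugins managed by a PluginPreset could be listed via the label mentioned above:
kubectl --namespace=<organization namespace> get plugins -l greenhouse.sap/pluginpreset=<PluginPreset name>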
A PluginPreset with the annotation greenhouse.sap/prevent-deletion may not be deleted. This is to prevent the accidental deletion of a PluginPreset, including the managed Plugins and their deployed Helm releases. Only after removing the annotation is it possible to delete a PluginPreset.
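To allow deletion again, the annotation can be removed, analogous to removing the cluster deletion annotation above:
kubectl --namespace=<organization namespace> annotate pluginpreset <PluginPreset name> greenhouse.sap/prevent-deletion-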
Example PluginPreset
apiVersion: greenhouse.sap/v1alpha1
kind: PluginPreset
metadata:
name: kube-monitoring-preset
namespace: <organization namespace>
spec:
plugin: # this embeds the PluginSpec
displayName: <any human readable name>
pluginDefinition: <PluginDefinition name> # k get plugindefinition
releaseNamespace: <namespace> # namespace where the plugin is deployed to on the remote cluster. Will be created if not exists
optionValues:
- name: <from the PluginDefinition options>
value: <from the PluginDefinition options>
- ..
clusterSelector: # LabelSelector for the clusters the Plugin should be deployed to
matchLabels:
<label-key>: <label-value>
clusterOptionOverrides: # allows you to override specific options in a given cluster
- clusterName: <cluster name where we want to override values>
overrides:
- name: <option name to override>
value: <new value>
- ..
- ..
3.4 - Plugin Catalog
Before you begin
This guide describes how to explore the catalog of Greenhouse PluginDefinitions.
While all members of an organization can see the Plugin catalog, enabling, disabling and configuring PluginDefinitions for an organization requires organization admin privileges.
Exploring the PluginDefinition catalog
The PluginDefinition resource describes the backend and frontend components as well as mandatory configuration options of a Greenhouse extension.
While the PluginDefinition catalog is managed by the Greenhouse administrators and the respective domain experts, administrators of an organization can configure and tailor Plugins to their specific requirements.
NOTE: The UI also provides a preliminary catalog of Plugins under Organization > Plugin > Add Plugin.
Run the following command to see all available PluginDefinitions.
$ kubectl get plugindefinition
NAME                      VERSION   DESCRIPTION                                                                                                   AGE
cert-manager              1.1.0     Automated certificate management in Kubernetes                                                                182d
digicert-issuer           1.2.0     Extensions to the cert-manager for DigiCert support                                                           182d
disco                     1.0.0     Automated DNS management using the Designate Ingress CNAME operator (DISCO)                                   179d
doop                      1.0.0     Holistic overview on Gatekeeper policies and violations                                                       177d
external-dns              1.0.0     The kubernetes-sigs/external-dns plugin.                                                                      186d
heureka                   1.0.0     Plugin for Heureka, the patch management system.                                                              177d
ingress-nginx             1.1.0     Ingress NGINX controller                                                                                      187d
kube-monitoring           1.0.1     Kubernetes native deployment and management of Prometheus, Alertmanager and related monitoring components.   51d
prometheus-alertmanager   1.0.0     Prometheus alertmanager                                                                                       60d
supernova                 1.0.0     Supernova, the holistic alert management UI                                                                   187d
teams2slack               1.1.0     Manage Slack handles and channels based on Greenhouse teams and their members                                115d
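To inspect a specific PluginDefinition, including its configuration options, you can retrieve the full resource:
$ kubectl get plugindefinition <name> -o yaml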
4 - Team management
A team is a group of users with shared responsibilities for managing and operating cloud resources within a Greenhouse organization.
These teams enable efficient collaboration, access control, and task assignment, allowing organizations to effectively organize their users and streamline cloud operations within the Greenhouse platform.
This section provides guides for the management of teams within an organization.
4.1 - Role-based access control on remote clusters
Greenhouse Team RBAC user guide
Role-Based Access Control (RBAC) in Greenhouse allows Organization administrators to manage the access of Teams on Clusters. TeamRole and TeamRoleBindings are used to manage the RBAC on remote Clusters. These two Custom Resource Definitions allow for fine-grained control over the permissions of each Team within each Cluster and Namespace.
Contents
Before you begin
This guide is intended for users who want to manage Role-Based Access Control (RBAC) for Teams on remote clusters managed by Greenhouse. It assumes you have a basic understanding of Kubernetes RBAC concepts and the Greenhouse platform.
Permissions
- Create/Update TeamRoles and TeamRoleBindings in the Organization namespace.
- View Teams and Clusters in the Organization namespace
By default the necessary authorizations are provided via the role:<organization>:admin RoleBinding that is granted to members of the Organization’s Admin Team.
You can check the permissions inside the Organization namespace by running the following command:
kubectl auth can-i --list --namespace=<organization-namespace>
Software
- kubectl: The Kubernetes command-line tool which allows you to manage Kubernetes cluster resources.
Overview
- TeamRole: Defines a set of permissions that can be assigned to teams and individual users.
- TeamRoleBinding: Assigns a TeamRole to a specific Team and/or a list of users for Clusters and (optionally) Namespaces.
Defining TeamRoles
TeamRoles define the actions a Team can perform on a Kubernetes cluster. For each Organization a set of TeamRoles is seeded. The syntax of the TeamRole’s .spec follows the Kubernetes RBAC API.
Example
This TeamRole named pod-read grants read access to Pods.
apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
name: pod-read
spec:
rules:
- apiGroups:
- ""
resources:
- "pods"
verbs:
- "get"
- "list"
Seeded default TeamRoles
Greenhouse provides a set of default TeamRoles that are seeded to all clusters:
| TeamRole | Description | APIGroups | Resources | Verbs |
|---|---|---|---|---|
| cluster-admin | Full privileges | * | * | * |
| cluster-viewer | get, list and watch all resources | * | * | get, list, watch |
| cluster-developer | Aggregated role. Greenhouse aggregates the application-developer and the cluster-viewer. Further TeamRoles can be aggregated. | | | |
| application-developer | Set of permissions on pods, deployments and statefulsets necessary to develop applications on k8s | apps | deployments, statefulsets | patch |
| | | "" | pods, pods/portforward, pods/eviction, pods/proxy, pods/log, pods/status | get, list, watch, create, update, patch, delete |
| node-maintainer | get and patch nodes | "" | nodes | get, patch |
| namespace-creator | All permissions on namespaces | "" | namespaces | * |
Defining TeamRoleBindings
TeamRoleBindings link a Team, a TeamRole, one or more Clusters and optionally one or more Namespaces together. Once the TeamRoleBinding is created, the Team will have the permissions defined in the TeamRole within the specified Clusters and Namespaces. This allows for fine-grained control over the permissions of each Team within each Cluster. The TeamRoleBinding Controller within Greenhouse deploys RBAC resources to the targeted Clusters. The referenced TeamRole is created as a rbacv1.ClusterRole. In case the TeamRoleBinding references a Namespace, it is considered to be namespace-scoped. Hence, the controller will create a rbacv1.RoleBinding which links the Team with the rbacv1.ClusterRole. In case no Namespace is referenced, the Controller will create a cluster-scoped rbacv1.ClusterRoleBinding instead.
Assigning TeamRoles to Teams on a single Cluster
Roles are assigned to Teams through the TeamRoleBinding configuration, which links Teams to their respective roles within specific clusters.
This TeamRoleBinding assigns the pod-read TeamRole to the Team named my-team in the Cluster named my-cluster.
Example: team-rolebindings.yaml
apiVersion: greenhouse.sap/v1alpha2
kind: TeamRoleBinding
metadata:
name: my-team-read-access
spec:
teamRef: my-team
roleRef: pod-read
clusterSelector:
clusterName: my-cluster
Assigning TeamRoles to Teams on multiple Clusters
A LabelSelector can be used to assign a TeamRoleBinding to multiple Clusters.
This TeamRoleBinding assigns the pod-read TeamRole to the Team named my-team in all Clusters that have the label environment: production set.
apiVersion: greenhouse.sap/v1alpha2
kind: TeamRoleBinding
metadata:
name: production-cluster-admins
spec:
teamRef: my-team
roleRef: pod-read
clusterSelector:
labelSelector:
matchLabels:
environment: production
Aggregating TeamRoles
It is possible with Kubernetes RBAC to aggregate rbacv1.ClusterRoles. This is also supported for TeamRoles. All labels specified in a TeamRole’s .spec.labels will be set on the rbacv1.ClusterRole created on the target cluster. This makes it possible to aggregate multiple rbacv1.ClusterRole resources by using a rbacv1.AggregationRule. This can be specified on a TeamRole by setting .spec.aggregationRule.
More details on the concept of Aggregated ClusterRoles can be found in the Kubernetes documentation: Aggregated ClusterRoles
[!NOTE] A TeamRole is only created on a cluster if it is referenced by a TeamRoleBinding. If a TeamRole is not referenced by a TeamRoleBinding it will not be created on any target cluster. A TeamRoleBinding referencing a TeamRole with an aggregationRule will only provide the correct access, if there is at least one TeamRoleBinding referencing a TeamRole with the corresponding label deployed to the same cluster.
The following example shows how an AggregationRule can be used with TeamRoles and TeamRoleBindings.
This TeamRole specifies .spec.Labels. The labels will be applied to the resulting ClusterRole on the target cluster.
apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
name: pod-read
spec:
labels:
aggregate: "true"
rules:
- apiGroups:
- ""
resources:
- "pods"
verbs:
- "get"
- "list"
This TeamRoleBinding assigns the pod-read TeamRole to the Team named my-team in all Clusters with the label environment: production.
apiVersion: greenhouse.sap/v1alpha2
kind: TeamRoleBinding
metadata:
name: production-pod-read
spec:
teamRef: my-team
roleRef: pod-read
clusterSelector:
labelSelector:
matchLabels:
environment: production
Access granted by TeamRoleBinding can also be restricted to specified Namespaces. This can be achieved by specifying the .spec.namespaces field in the TeamRoleBinding.
Setting dedicated Namespaces results in RoleBindings being created in the specified Namespaces. The Team will then only have access to the Pods in the specified Namespaces. The TeamRoleBinding controller will only create a non-existing Namespace if the field .spec.createNamespaces is set to true on the TeamRoleBinding. If this field is not set, the TeamRoleBinding controller will not create the Namespace or the RBAC resources.
Deleting a TeamRoleBinding will only result in the deletion of the RBAC resources but will never result in the deletion of the Namespace.
apiVersion: greenhouse.sap/v1alpha2
kind: TeamRoleBinding
metadata:
name: production-pod-read
spec:
teamRef: my-team
roleRef: pod-read
clusterSelector:
labelSelector:
matchLabels:
environment: production
namespaces:
- kube-system
# createNamespaces: true # optional, if set the TeamRoleBinding will create the namespaces if they do not exist
This TeamRole has a .spec.aggregationRule set. This aggregationRule will be added to the ClusterRole created on the target clusters. With the aggregationRule set it will aggregate the ClusterRoles created by the TeamRoles with the label aggregate: "true". The Team will have the permissions of both TeamRoles and will be able to get, list, update and patch Pods.
apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
name: aggregated-role
spec:
aggregationRule:
clusterRoleSelectors:
- matchLabels:
"aggregate": "true"
apiVersion: greenhouse.sap/v1alpha2
kind: TeamRoleBinding
metadata:
name: aggregated-rolebinding
spec:
teamRef: operators
roleRef: aggregated-role
clusterSelector:
labelSelector:
matchLabels:
environment: production
Updating TeamRoleBindings
Updating the roleRef of a ClusterRoleBinding or RoleBinding is not allowed; changing it requires recreating the TeamRoleBinding resource. See the ClusterRoleBinding docs for more information. This makes it possible to grant permission to update the subjects while avoiding that the granted privileges are changed. Furthermore, changing the TeamRole can change the extent of a binding significantly; therefore it needs to be recreated.
After the TeamRoleBinding has been created, it can be updated with some limitations. Similarly to RoleBindings, the .spec.roleRef and .spec.teamRef can not be changed.
The TeamRoleBinding’s .spec.namespaces can be amended to include more namespaces. However, the scope of the TeamRoleBinding cannot be changed. If a TeamRoleBinding has been created with .spec.namespaces specified, it is namespace-scoped, and cannot be changed to cluster-scoped by removing the .spec.namespaces. The reverse is true for a cluster-scoped TeamRoleBinding, where it is not possible to add .spec.namespaces once created.
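Since .spec.roleRef and .spec.teamRef cannot be changed, changing them means deleting and recreating the TeamRoleBinding, for example (a minimal sketch, assuming the updated manifest is kept in a file and the resource is addressable as teamrolebinding):
# delete the existing binding, then re-apply the manifest with the changed roleRef or teamRef
kubectl --namespace=<organization-namespace> delete teamrolebinding <name>
kubectl --namespace=<organization-namespace> apply -f <teamrolebinding-file>.yaml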
4.2 - Team creation
Before you begin
This guide describes how to create a team in your Greenhouse organization.
While all members of an organization can see existing teams, their management requires organization admin privileges.
Creating a team
The team resource is used to structure members of your organization and assign fine-grained access and permission levels.
Each Team must be backed by a group in the identity provider (IdP) of the Organization.
- The IdP group should be set in the mappedIdPGroup field in the Team configuration.
- This, along with the SCIM API configured in the Organization, allows for synchronization of Team members with Greenhouse.
NOTE: The UI is currently in development. For now this guide describes the team creation workflow via command line.
- Create the Team resource via the command line. It should look similar to this example:
cat <<EOF | kubectl apply -f -
apiVersion: greenhouse.sap/v1alpha1
kind: Team
metadata:
  name: <name>
spec:
  description: My new team
  mappedIdPGroup: <IdP group name>
EOF