Plugin management
Manage plugins for your Kubernetes clusters via Greenhouse.
Plugins extend the capabilities of the Greenhouse cloud operations platform, adding features and functionality to tailor and enhance the platform for specific organizational needs.
These plugins are integral to Greenhouse's extensibility, allowing users to customize their cloud operations environment and address unique requirements while operating within the Greenhouse ecosystem.
This section provides guides for the management of plugins for Kubernetes clusters within Greenhouse.
1 - Local Plugin Development
Develop a new Greenhouse Plugin against a local development environment.
Introduction
Let’s illustrate how to leverage Greenhouse Plugins to deploy a Helm Chart into a remote cluster within the local development environment.
This guide will walk you through the process of spinning up the local development environment, creating a new Greenhouse PluginDefinition and deploying it to a local kind cluster.
At the end of the guide you will have spun up the local development environment, onboarded a Cluster, created a PluginDefinition and deployed it as a Plugin to the onboarded Cluster.
[!NOTE]
This guide assumes you already have a working Helm chart and will not cover how to create a Helm Chart from scratch. For more information on how to create a Helm Chart, please refer to the Helm documentation.
Requirements
To follow this guide you need the tools used throughout it, most notably the greenhousectl CLI, kind, kubectl and helm, in addition to the local development environment described below.
Starting the local development environment
Follow the Local Development documentation to spin up the local Greenhouse development environment.
This will provide you with a running local Greenhouse instance, filled with some example Greenhouse resources, and the Greenhouse UI running on http://localhost:3000.
Onboarding a Cluster
In this step we will create and onboard a new Cluster to the local Greenhouse instance. The local cluster will be created utilizing kind.
In order to onboard a kind cluster, follow the onboarding a cluster section of the dev-env README.
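The kind cluster itself can be created up front; a minimal sketch, assuming the cluster name remote-cluster that is used later in this guide (the actual onboarding steps are described in the dev-env README):

kind create cluster --name remote-cluster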
After onboarding the cluster you should see the new Cluster in the Greenhouse UI.
Prepare Helm Chart
For this example we will use the bitnami nginx Helm Chart.
The packaged chart can be downloaded with:
helm pull oci://registry-1.docker.io/bitnamicharts/nginx --destination ./
After unpacking the *.tgz file, there is a folder named nginx containing the Helm Chart.
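For example, the archive can be unpacked with tar (the exact file name depends on the pulled chart version):

tar -xzf nginx-*.tgz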
Generating a PluginDefinition from a Helm Chart
Using the files of the Helm Chart we will create a new Greenhouse PluginDefinition using the greenhousectl CLI.
greenhousectl plugin generate ./nginx ./nginx-plugin
This will create a new folder nginx-plugin containing the PluginDefinition in a nested structure.
Modifying the PluginDefinition
The generated folder contains a plugindefinition.yaml file which defines the PluginDefinition. However, a few steps are still required to make it work.
Specify the Helm Chart repository
After generating the PluginDefinition, the .spec.helmChart.repository field in the plugindefinition.yaml contains a TODO comment. This field should be set to the repository where the Helm Chart is stored. For the bitnami nginx Helm Chart this would be oci://registry-1.docker.io/bitnamicharts.
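For the chart used in this guide, the relevant part of the plugindefinition.yaml would then look roughly like this (a sketch; the version field reflects the chart version pulled above):

helmChart:
  name: nginx
  repository: oci://registry-1.docker.io/bitnamicharts
  version: 17.3.2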
Specify the UI application
A PluginDefinition may specify a UI application that will be integrated into the Greenhouse UI. This tutorial does not cover how to create a UI application. Therefore the section .spec.uiApplication in the plugindefinition.yaml should be removed.
[!NOTE]
The UI section of the dev-env README provides a brief introduction to developing a frontend application for Greenhouse.
Modify the Options
The PluginDefinition contains a section .spec.options which defines options that can be set when deploying the Plugin to a Cluster. These options have been generated based on the Helm Chart's values.yaml file. You can modify the options to fit your needs.
In general the options are defined as follows:
options:
  - default: true
    value: abcd123
    description: automountServiceAccountToken
    name: automountServiceAccountToken
    required: false
    type: ""
- default specifies if the option should provide a default value. If this is set to true, the value specified will be used as the default value. The Plugin can still provide a different value for this option.
- description provides a description for the option.
- name specifies the Helm Chart value name, as it is used within the Chart's template files.
- required specifies if the option is required. This will be used by the Greenhouse controllers to determine if a Plugin is valid.
- type specifies the type of the option. This can be any of [string, secret, bool, int, list, map] and will be used by the Greenhouse controllers to validate the provided value.
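For illustration, a fully specified boolean option could look like the following sketch (the description and value shown here are assumptions, not taken from the chart):

options:
  - name: automountServiceAccountToken
    description: Mount the service account token into the nginx pods
    type: bool
    required: false
    default: true
    value: false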
For this tutorial we will remove all options.
Deploying a Plugin to the Kind Cluster
After modifying the PluginDefinition we can deploy it to the local Greenhouse cluster and create a Plugin that will deploy nginx to the onboarded cluster.
kubectl --kubeconfig=./envtest/kubeconfig apply -f ./nginx-plugin/nginx/17.3.2/plugindefinition.yaml
plugindefinition.greenhouse.sap/nginx-17.3.2 created
The Plugin can be configured using the Greenhouse UI running on http://localhost:3000.
Follow these steps to deploy a Plugin for the created PluginDefinition into the onboarded kind cluster:
- Navigate in the Greenhouse UI to Organization > Plugins.
- Click on the Add Plugin button.
- Select the nginx-17.3.2 PluginDefinition.
- Click on the Configure Plugin button.
- Select the cluster in the drop-down.
- Click on the Create Plugin button.
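Alternatively, the Plugin can also be created with kubectl instead of the UI. A minimal sketch, assuming the cluster was onboarded under the name remote-cluster and the organization namespace is test-org (the Plugin name and namespace are assumptions based on this guide's setup):

apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: nginx-remote-cluster
  namespace: test-org
spec:
  clusterName: remote-cluster
  pluginDefinition: nginx-17.3.2
  displayName: nginx

Applying this manifest with the same kubeconfig used for the PluginDefinition above (kubectl --kubeconfig=./envtest/kubeconfig apply -f <file>) has the same effect as the UI steps.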
After the Plugin has been created the Plugin Overview page will show the status of the plugin.
The deployment can also be verified in the onboarded cluster by checking the pods in the test-org namespace of the kind cluster.
kind export kubeconfig --name remote-cluster
Set kubectl context to "kind-remote-cluster"
kubectl get pods -n test-org
NAME READY STATUS RESTARTS AGE
nginx-remote-cluster-758bf47c77-pz72l 1/1 Running 0 2m11s
Development Tips
Local Helm Charts
Instead of uploading the Helm Chart to a chart repository, it is possible to load it from the filesystem of the Greenhouse container.
This can be especially useful if you are developing your own chart for a PluginDefinition, as it speeds up the testing loop.
The Docker compose setup mounts the dev-env/helm-charts directory and watches for any changes. This means you can point to this local chart in your plugindefinition.yaml as such:
helmChart:
name: helm-charts/{filename}.tgz
repository:
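For example, the nginx chart from this guide could be packaged straight into that directory (a sketch, assuming the command is run from the directory containing the unpacked chart):

helm package ./nginx --destination ./dev-env/helm-charts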
2 - Testing a Plugin
Guidelines for testing plugins contributed to the Greenhouse project.
Plugin Testing Requirements
All plugins contributed to the plugin-extensions repository should include comprehensive Helm Chart tests using the bats/bats-detik testing framework. This ensures our plugins are robust and deployable, and that potential issues are caught early in the development cycle.
What is bats/bats-detik?
The bats/bats-detik framework simplifies end-to-end (e2e) testing in Kubernetes. It combines the Bash Automated Testing System (bats) with Kubernetes-specific assertions (detik). This allows you to write test cases using natural language-like syntax, making your tests easier to read and maintain.
Implementing Tests
Create a /tests folder inside your Plugin's Helm Chart templates folder to store your test resources.
ConfigMap definition:
- Create a test-<plugin-name>-config.yaml file in the templates/tests directory to define a ConfigMap that will hold your test script.
- This ConfigMap contains the test script run.sh that will be executed by the test Pod to run your tests.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-weight": "-5" # Installed and upgraded before the test pod
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
data:
  run.sh: |-
    #!/usr/bin/env bats

    load "/usr/lib/bats/bats-detik/utils"
    load "/usr/lib/bats/bats-detik/detik"

    DETIK_CLIENT_NAME="kubectl"

    @test "Verify successful deployment and running status of the {{ .Release.Name }}-operator pod" {
      verify "there is 1 deployment named '{{ .Release.Name }}-operator'"
      verify "there is 1 service named '{{ .Release.Name }}-operator'"
      try "at most 2 times every 5s to get pods named '{{ .Release.Name }}-operator' and verify that '.status.phase' is 'running'"
    }

    @test "Verify successful creation and bound status of {{ .Release.Name }} persistent volume claims" {
      try "at most 3 times every 5s to get persistentvolumeclaims named '{{ .Release.Name }}.*' and verify that '.status.phase' is 'Bound'"
    }

    @test "Verify successful creation and available replicas of {{ .Release.Name }} Prometheus resource" {
      try "at most 3 times every 5s to get prometheuses named '{{ .Release.Name }}' and verify that '.status.availableReplicas' is more than '0'"
    }

    @test "Verify creation of required custom resource definitions (CRDs) for {{ .Release.Name }}" {
      verify "there is 1 customresourcedefinition named 'prometheuses'"
      verify "there is 1 customresourcedefinition named 'podmonitors'"
    }
{{- end -}}
Note: You can use this guide for reference when writing your test assertions.
Test Pod Definition:
- Create a test-<plugin-name>.yaml file in the templates/tests directory to define a Pod that will run your tests.
- This test Pod will mount the ConfigMap created in the previous step and will execute the test script run.sh.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
spec:
  serviceAccountName: {{ .Release.Name }}-test
  containers:
    - name: bats-test
      image: "{{ .Values.testFramework.image.registry }}/{{ .Values.testFramework.image.repository }}:{{ .Values.testFramework.image.tag }}"
      imagePullPolicy: {{ .Values.testFramework.image.pullPolicy }}
      command: ["bats", "-t", "/tests/run.sh"]
      volumeMounts:
        - name: tests
          mountPath: /tests
          readOnly: true
  volumes:
    - name: tests
      configMap:
        name: {{ .Release.Name }}-test
  restartPolicy: Never
{{- end -}}
RBAC Permissions:
- Create the necessary RBAC resources in the templates/tests folder with a dedicated ServiceAccount and role authorisations so that the test Pod can cover the test cases, as sketched below.
- You can use test-permissions.yaml from the kube-monitoring plugin as a reference to configure RBAC permissions for your test Pod.
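A minimal sketch of such a test-permissions.yaml, guarded by the same testFramework.enabled flag (the rules below only mirror the namespaced resources asserted in the example test script; cluster-scoped checks such as CRDs or prometheuses would additionally require a ClusterRole and ClusterRoleBinding):

{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "persistentvolumeclaims"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
subjects:
  - kind: ServiceAccount
    name: {{ .Release.Name }}-test
    namespace: {{ .Release.Namespace }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ .Release.Name }}-test
{{- end -}}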
Configure the Test Framework in the Plugin's values.yaml:
- Add the following configuration to your Plugin's values.yaml file:
testFramework:
  enabled: true
  image:
    registry: ghcr.io
    repository: cloudoperators/greenhouse-extensions-integration-test
    tag: main
  imagePullPolicy: IfNotPresent
Running the Tests:
Important: Once you have completed all the steps above, you are ready to run the tests. However, before running the tests, ensure that you perform a fresh Helm installation or upgrade of your Plugin’s Helm release against your test Kubernetes cluster (for example, Minikube or Kind) by executing the following command:
# For a new installation
helm install <Release name> <chart-path>
# For an upgrade
helm upgrade <Release name> <chart-path>
- After the Helm installation or upgrade is successful, run the tests against the same test Kubernetes cluster by executing the following command.
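helm test <release-name>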
Contribution Checklist
Before submitting a pull request:
- Ensure your Plugin's Helm Chart includes a /tests directory.
- Verify the presence of test-<plugin-name>.yaml, test-<plugin-name>-config.yaml, and test-permissions.yaml files.
- Test your Plugin thoroughly using helm test <release-name> and confirm that all tests pass against a test Kubernetes cluster.
- Include a brief description of the tests in your pull request.
- Make sure that your Plugin’s Chart Directory and the Plugin’s Upstream Chart Repository are added to this greenhouse-extensions helm test config file. This will ensure that your Plugin’s tests are automatically run in the GitHub Actions workflow when you submit a pull request for this Plugin.
- Note that the dependencies of your Plugin’s helm chart might also have their own tests. If so, ensure that the tests of the dependencies are also passing.
Important Notes
- Test Coverage: Aim for comprehensive test coverage to ensure your Plugin’s reliability.
- Test Isolation: Design tests that don’t interfere with other plugins or production environments.
3 - Plugin deployment
Deploy a Greenhouse plugin to an existing Kubernetes cluster.
Before you begin
This guide describes how to configure and deploy a Greenhouse plugin.
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: kube-monitoring-martin
  namespace: <organization namespace> # same namespace in remote cluster for resources
spec:
  clusterName: <name of the remote cluster> # kubectl get cluster
  disabled: false
  displayName: <any human readable name>
  pluginDefinition: <PluginDefinition name> # kubectl get plugindefinition
  optionValues:
    - name: <from the plugin options>
      value: <from the plugin options>
    - ...
Exposed services
Plugins deploying Helm Charts into remote clusters support exposed services.
By adding the following label to a service in the Helm chart, it will become accessible from the central Greenhouse system via a service proxy:
greenhouse.sap/expose: "true"
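For example, a Service template in the chart could carry the label like this (a sketch; the service name, selector and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-exposed-service
  labels:
    greenhouse.sap/expose: "true"
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080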
Deploying a Plugin
Create the Plugin resource via the command:
kubectl --namespace=<organization name> create -f plugin.yaml
After deployment
Check with kubectl --namespace=<organization name> get plugin whether the Plugin has been properly created. When all components of the plugin are successfully created, the plugin should show the state configured.
Check in the remote cluster that all plugin resources are created in the organization namespace.
URLs for exposed services
After deploying the plugin to a remote cluster, the ExposedServices section in the Plugin's status provides an overview of the Plugin's services that are centrally exposed. It maps the exposed URL to the service found in the manifest.
- The URLs for exposed services are created in the following pattern: https://$cluster--$hash.$organisation.$basedomain. The $hash is computed from service--$namespace.
- When deploying a plugin to the central cluster, the exposed services won't have their URLs defined, which will be reflected in the Plugin's Status.
4 - Managing Plugins for multiple clusters
Deploy a Greenhouse Plugin with the same configuration into multiple clusters.
Managing Plugins for multiple clusters
This guide describes how to configure and deploy a Greenhouse Plugin with the same configuration into multiple clusters.
The PluginPreset resource is used to create and deploy Plugins with an identical configuration into multiple clusters. The list of clusters the Plugins will be deployed to is determined by a LabelSelector.
As a result, whenever a cluster that matches the ClusterSelector is onboarded or offboarded, the controller for the PluginPresets will take care of the Plugin lifecycle. This means creating or deleting the Plugin for the respective cluster.
The same validation applies to the PluginPreset as to the Plugin. This includes immutable PluginDefinition and ReleaseNamespace fields, as well as the validation of the OptionValues against the PluginDefinition.
In case the PluginPreset is updated, all of the Plugin instances that are managed by the PluginPreset will be updated as well. Each Plugin instance that is created from a PluginPreset has a label greenhouse.sap/pluginpreset: <PluginPreset name>. Also, the name of the Plugin follows the scheme <PluginPreset name>-<cluster name>.
Changes that are done directly on a Plugin which was created from a PluginPreset will be overwritten immediately by the PluginPreset Controller. All changes must be performed on the PluginPreset itself.
If a Plugin already exists with the same name as the one the PluginPreset would create, this Plugin will be ignored in subsequent reconciliations.
Example PluginPreset
apiVersion: greenhouse.sap/v1alpha1
kind: PluginPreset
metadata:
  name: kube-monitoring-preset
  namespace: <organization namespace>
spec:
  plugin: # this embeds the PluginSpec
    displayName: <any human readable name>
    pluginDefinition: <PluginDefinition name> # kubectl get plugindefinition
    releaseNamespace: <namespace> # namespace where the plugin is deployed to on the remote cluster. Will be created if it does not exist
    optionValues:
      - name: <from the PluginDefinition options>
        value: <from the PluginDefinition options>
      - ..
  clusterSelector: # LabelSelector for the clusters the Plugin should be deployed to
    matchLabels:
      <label-key>: <label-value>
  clusterOptionOverrides: # allows you to override specific options in a given cluster
    - clusterName: <cluster name where we want to override values>
      overrides:
        - name: <option name to override>
          value: <new value>
        - ..
    - ..
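A cluster starts matching such a PluginPreset as soon as it carries the selected label. With the placeholders from the example above, this could be done with:

kubectl --namespace=<organization namespace> label cluster <cluster name> <label-key>=<label-value>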
5 - Plugin Catalog
Explore the catalog of Greenhouse PluginDefinitions
Before you begin
This guide describes how to explore the catalog of Greenhouse PluginDefinitions.
While all members of an organization can see the Plugin catalog, enabling, disabling and configuring PluginDefinitions for an organization requires organization admin privileges.
Exploring the PluginDefinition catalog
The PluginDefinition resource describes the backend and frontend components as well as mandatory configuration options of a Greenhouse extension.
While the PluginDefinition catalog is managed by the Greenhouse administrators and the respective domain experts, administrators of an organization can configure and tailor Plugins to their specific requirements.
NOTE: The UI also provides a preliminary catalog of Plugins under Organization > Plugins > Add Plugin.
Run the following command to see all available PluginDefinitions.
$ kubectl get plugindefinition
NAME VERSION DESCRIPTION AGE
cert-manager 1.1.0 Automated certificate management in Kubernetes 182d
digicert-issuer 1.2.0 Extensions to the cert-manager for DigiCert support 182d
disco 1.0.0 Automated DNS management using the Designate Ingress CNAME operator (DISCO) 179d
doop 1.0.0 Holistic overview on Gatekeeper policies and violations 177d
external-dns 1.0.0 The kubernetes-sigs/external-dns plugin. 186d
heureka 1.0.0 Plugin for Heureka, the patch management system. 177d
ingress-nginx 1.1.0 Ingress NGINX controller 187d
kube-monitoring 1.0.1 Kubernetes native deployment and management of Prometheus, Alertmanager and related monitoring components. 51d
prometheus-alertmanager 1.0.0 Prometheus alertmanager 60d
supernova 1.0.0 Supernova, the holistic alert management UI 187d
teams2slack 1.1.0 Manage Slack handles and channels based on Greenhouse teams and their members 115d
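To see which options a specific PluginDefinition exposes before configuring a Plugin, its spec can be inspected directly, for example:

kubectl get plugindefinition cert-manager -o yaml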