Local development setup
What is Greenhouse?
Greenhouse is a Kubernetes operator built with Kubebuilder and a UI on top of the k8s API.
It expands the Kubernetes API via CustomResourceDefinitions. The different aspects of the CRDs are reconciled by several controllers. It also acts as an admission webhook.
The Greenhouse Dashboard is a UI acting on the k8s apiserver of the cluster Greenhouse is running in. The UI itself is a Juno application containing several micro frontends.
Greenhouse provides a couple of make-based CLI commands to run a local Greenhouse instance.
- Setting up the development environment
- Run local Greenhouse
- Developing Greenhouse core functionality:
- Greenhouse Dashboard
- Greenhouse Extensions
- Additional information
This handy CLI tool will help you set up your development environment in no time.
Prerequisites
Usage
Build `greenhousectl` from source by running the following command:

```shell
make cli
```
> [!NOTE]
> The CLI binary will be available in the `bin` folder.
Setting up the development environment
There are multiple local development environment setups available for the Greenhouse project. You can choose the one that fits your needs.
All commands will spin up KinD clusters and set up the necessary components.
If you have a ~/.kube/config file then KinD will automatically merge the kubeconfig of the created cluster(s).
Use `kubectl config use-context kind-greenhouse-admin` to switch to the greenhouse admin cluster context.
Use `kubectl config use-context kind-greenhouse-remote` to switch to the greenhouse remote cluster context.
If you do not have the contexts of the created cluster(s) in ~/.kube/config file then you can extract it from the
operating system’s tmp folder, where the CLI will write kubeconfig of the created KinD clusters.
> [!NOTE]
> linux / macOS: on unix-like systems you can find the `kubeconfig` at `$TMPDIR/greenhouse/<clusterName>.kubeconfig`
> windows: on windows many tmp folders exist, so the CLI will write the `kubeconfig` to the first non-empty value from `%TMP%`, `%TEMP%`, `%USERPROFILE%`. The path where the `kubeconfig` is written will be displayed in the terminal after the command is executed by the CLI.
Use `kubectl --kubeconfig=<path to admin / remote kubeconfig>` to interact with the local greenhouse clusters.
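As a concrete sketch (assuming a cluster named `greenhouse-admin` and a unix-like OS; substitute the `<clusterName>.kubeconfig` path the CLI printed for your setup):

```shell
# Hypothetical example: talk to the admin cluster via the extracted kubeconfig.
# The file name greenhouse-admin.kubeconfig is an assumption; use the path the
# CLI displayed after cluster creation.
kubectl --kubeconfig="$TMPDIR/greenhouse/greenhouse-admin.kubeconfig" get nodes
```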
Run Greenhouse Locally
```shell
make setup
```
- This will install the operator, the dashboard, cors-proxy and a sample organization with an onboarded remote cluster
- Port-forward the `cors-proxy` with `kubectl port-forward svc/greenhouse-cors-proxy 9090:80 -n greenhouse &`
- Port-forward the `dashboard` with `kubectl port-forward svc/greenhouse-dashboard 5001:80 -n greenhouse &`
- Access the local `demo` organization on the Greenhouse dashboard at localhost:5001
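As a quick smoke test (a sketch, not part of the official setup), you can check that the dashboard answers on the forwarded port once both port-forwards are running:

```shell
# Prints the HTTP status code returned by the port-forwarded dashboard service.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5001
```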
Develop Controllers locally and run the webhook server in-cluster
```shell
make setup-controller-dev
```
> [!NOTE]
> Set the environment variable `CONTROLLERS_ONLY=true` in your debugger configuration. If no environment variable is set, the webhook server will error out due to the missing certs.
Develop Admission Webhook server locally
```shell
make setup-webhook-dev
```
> [!NOTE]
> Set the environment variable `WEBHOOK_ONLY=true` in your debugger configuration if you only want to run the webhook server.
Develop Controllers and Admission Webhook server locally
```shell
WITH_CONTROLLERS=false DEV_MODE=true make setup-manager
```
This will modify the `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` to use the
`host.docker.internal` (macOS / windows) or ipv4 (linux) address for the webhook server and write the
webhook certs to `/tmp/k8s-webhook-server/serving-certs`.
Now you can run the webhook server and the controllers locally.
Since both need to be run locally, no `CONTROLLERS_ONLY` or `WEBHOOK_ONLY` environment variables are needed in your
debugger configuration.
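To confirm that the dev setup actually rewrote the webhook endpoints, you can inspect the `clientConfig` of the configurations; this is an illustrative sketch using jq:

```shell
# Print each validating webhook's name and its clientConfig URL.
# Webhooks still using an in-cluster service (not rewritten) print "service-based".
kubectl get validatingwebhookconfigurations -o json | \
  jq -r '.items[].webhooks[] | "\(.name) -> \(.clientConfig.url // "service-based")"'
```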
> [!NOTE]
> The dev setup will modify the webhook configurations to have a 30s timeout for webhook requests, but when breakpoints are used to debug webhook requests, it can still result in timeouts. In such cases, modify the CR with a dummy annotation to re-trigger the webhook request and reconciliation.
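For example, to re-trigger the webhook and reconciliation for a Plugin named `perses` in the `demo` namespace (resource name, namespace, and annotation key below are illustrative), a throwaway annotation with a changing value works:

```shell
# The annotation key "dev.greenhouse/retrigger" is arbitrary; any non-conflicting
# key works. A timestamp value ensures the object changes on every invocation.
kubectl annotate plugin perses -n demo dev.greenhouse/retrigger="$(date +%s)" --overwrite
```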
Running Greenhouse Dashboard in-cluster
```shell
make setup-dashboard
```
> [!NOTE]
> You will need to port-forward the cors-proxy service and the dashboard service to access the dashboard.
> Information on how to access the dashboard is displayed after the command is executed.
Run Greenhouse Core for UI development
The Greenhouse UI consists of a Juno application hosting several micro frontends (MFEs). To develop the UI you will need a local Greenhouse cluster api-server as the backend for your local UI:
- Start up the environment as in Run local Greenhouse
- The Greenhouse UI expects an `appProps.json` with the necessary parameters to run
- This `appProps.json` `ConfigMap` is created in the `greenhouse` namespace by the local installation to configure the in-cluster dashboard
- You can
  - either create and use your own `appProps.json` file when running the UI locally
  - or retrieve the generated `appProps.json` in-cluster by executing `kubectl get cm greenhouse-dashboard-app-props -n greenhouse -o=json | jq -r '.data["appProps.json"]'`
- After port-forwarding the `cors-proxy` service, it should be used as the `apiEndpoint` in `appProps.json`
- Start the dashboard locally (more information on how to run the dashboard locally can be found in the Juno Repository)
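A small sketch of the last two steps: take an `appProps.json` and point its `apiEndpoint` at the port-forwarded cors-proxy. The sample content below is a placeholder; in the real flow, pipe the output of the kubectl/jq retrieval command above into the file instead:

```shell
# Stand-in for the ConfigMap content (placeholder values, not real endpoints).
cat > appProps.json <<'EOF'
{"apiEndpoint": "https://greenhouse.example.invalid", "environment": "demo"}
EOF

# Rewrite apiEndpoint to the locally forwarded cors-proxy (port 9090, as above).
jq '.apiEndpoint = "http://localhost:9090"' appProps.json > appProps.local.json
cat appProps.local.json
```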
Test Plugin / Greenhouse Extension charts locally (Deprecated)
> [!NOTE]
> This setup is deprecated and will be removed in the future. Please refer to Test Greenhouse Extensions with local OCI registry.
```shell
PLUGIN_DIR=<absolute-path-to-charts-dir> make setup
```
- This will install a full running setup of operator, dashboard, sample organization with an onboarded remote cluster
- Additionally, it will mount the plugin charts directory onto the `node` of the `KinD` cluster
- The operator deployment has a hostPath volume mount to the plugin charts directory from the `node` of the `KinD` cluster
To test your local chart (now mounted into the KinD cluster) with a `plugindefinition.yaml`, you need to adjust `.spec.helmChart.name` to use the local chart.
With the provided mounting mechanism it will always live in `local/plugins/` within the KinD cluster.
Modify `spec.helmChart.name` to point to the local file path of the chart that needs to be tested.
Example Scenario:
You have cloned the Greenhouse Extensions repository,
and you want to test cert-manager plugin chart locally.
```yaml
apiVersion: greenhouse.sap/v1alpha1
kind: PluginDefinition
metadata:
  name: cert-manager
spec:
  description: Automated TLS certificate management
  displayName: Certificate manager
  docMarkDownUrl: >-
    https://raw.githubusercontent.com/cloudoperators/greenhouse-extensions/main/cert-manager/README.md
  helmChart:
    name: 'local/plugins/<path-to-cert-manager-chart-folder>'
    repository: '' # <- has to be empty
    version: '' # <- has to be empty
...
```
Apply the `plugindefinition.yaml` to the admin cluster:

```shell
kubectl --kubeconfig=<your-kind-config> apply -f plugindefinition.yaml
```
Test Greenhouse Extensions with local OCI registry
Greenhouse controllers use FluxCD under the hood to deploy Plugins (Helm charts). In order to test your local Helm chart, you can push it to a local registry that is provided during the setup.
```shell
make setup
```
- This will install a full running setup of operator, dashboard, sample organization with an onboarded remote cluster
- This will also install Flux and a local OCI registry.
Clone the Greenhouse Extensions repository
```shell
git clone https://github.com/cloudoperators/greenhouse-extensions
cd greenhouse-extensions
```
Prepare Environment Variables
```shell
export PKG=$(helm package $PWD/perses/charts -d ./bin | awk '{print $NF}')
export REGISTRY_CA=$PWD/bin/ca.crt
export OCI=oci://127.0.0.1:5000/cloudoperators/greenhouse-extensions/charts
```
> [!NOTE]
> The `bin` folder is ignored by git, so it is safe to temporarily store the packaged chart and the registry CA there.
Extract Registry CA
```shell
kubectl --context=kind-greenhouse-admin get secret local-registry-tls-certs \
  -n flux-system -o jsonpath='{.data.ca\.crt}' | base64 -d > "$REGISTRY_CA"
```
Port-Forward Registry Service
```shell
kubectl --context=kind-greenhouse-admin port-forward svc/registry -n flux-system 5000:5000 &
```
> [!NOTE]
> Use `&` to run the command in the background, so you can continue using the terminal with the environment variables set.
Push Package to Local Registry
```shell
helm push $PKG $OCI --ca-file "$REGISTRY_CA" --plain-http=false
```
Apply Perses PluginDefinition
Before applying the PluginDefinition, change the registry in `.spec.helmChart.repository` from `ghcr.io` to `registry.flux-system.svc.cluster.local:5000`.
The command below replaces `ghcr.io` with the local registry address in the `.spec.helmChart.repository` field of the `plugindefinition.yaml` and saves the modified file to `bin/perses.yaml`:
```shell
yq eval '.spec.helmChart.repository |= sub("ghcr.io", "registry.flux-system.svc.cluster.local:5000")' perses/plugindefinition.yaml > ./bin/$(yq eval '.metadata.name' perses/plugindefinition.yaml).yaml
```
Apply the PluginDefinition to the admin cluster:

```shell
kubectl --context=kind-greenhouse-admin apply -f bin/perses.yaml -n demo
```
> [!NOTE]
> `demo` is the organization namespace in the Greenhouse local setup.
Verify PluginDefinition is Ready
```shell
kubectl --context=kind-greenhouse-admin get plugindefinition -n demo perses
```

```
NAME     VERSION   CATALOG   READY   AGE
perses   0.10.4              True    52m
```
Test the Plugin
Now you can proceed with applying the below Plugin to test your changes.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: perses
  namespace: demo
spec:
  clusterName: kind-greenhouse-remote # demo onboarded kind cluster
  displayName: perses test
  optionValues:
    - name: perses.sidecar.enabled
      value: true
  pluginDefinitionRef:
    name: perses
    kind: PluginDefinition
  releaseNamespace: kube-monitoring
EOF
```
Watch the Plugin deployment in real time:

```shell
kubectl --context=kind-greenhouse-admin get plugin perses -n demo -w
```

```
NAME     DISPLAY NAME   PLUGIN DEFINITION   CLUSTER                  RELEASE NAME   RELEASE NAMESPACE   READY   VERSION   AGE
perses   perses test    perses              kind-greenhouse-remote   perses         kube-monitoring     False             2s
perses   perses test    perses              kind-greenhouse-remote   perses         kube-monitoring     False             14s
perses   perses test    perses              kind-greenhouse-remote   perses         kube-monitoring     True    0.10.4    14s
```
Verify the Perses Helm chart is successfully deployed in the `kind-greenhouse-remote` cluster by checking the `perses` release in the `kube-monitoring` namespace:

```shell
helm status perses -n kube-monitoring --kube-context kind-greenhouse-remote
```

```
NAME: perses
LAST DEPLOYED: Thu Feb 12 13:45:46 2026
NAMESPACE: kube-monitoring
STATUS: deployed
REVISION: 1
```
Additional information
When setting up your development environment, certain resources are modified for development convenience.
- The Greenhouse controllers and webhook server deployments use the same image to run. The logic is separated by environment variables.
- The `greenhouse-controller-manager` deployment has the environment variable `CONTROLLERS_ONLY`
  - `CONTROLLERS_ONLY=true` will only run the controllers
  - changing the value to `false` will also run the webhook server, which will error out due to missing certs
- The `greenhouse-webhook` deployment has the environment variable `WEBHOOK_ONLY`
  - `WEBHOOK_ONLY=true` will only run the webhook server
  - changing the value to `false` will skip the webhook server. When greenhouse `CustomResources` are applied, the Kubernetes Validating and Mutating Webhook phase will error out due to webhook endpoints not being available
- If DevMode is enabled for webhooks then, depending on the OS, the webhook manifests are altered by removing `clientConfig.service` and replacing it with `clientConfig.url`, allowing you to debug the code locally.
  - `linux` - the ipv4 addr from the `docker0` interface is used, e.g. `https://172.17.0.2:9443/<path>`
  - `macOS` - `host.docker.internal` is used, e.g. `https://host.docker.internal:9443/<path>`
  - `windows` - ideally `host.docker.internal` should work, otherwise please reach out with a contribution <3
- Webhook certs are generated by `cert-manager` in-cluster, and they are extracted and saved to `/tmp/k8s-webhook-server/serving-certs`
- The `kubeconfig` of the created cluster(s) are saved to `/tmp/greenhouse/<clusterName>.kubeconfig`