1 - Local development setup
How to run a local Greenhouse setup for development
What is Greenhouse?
Greenhouse is a Kubernetes operator built with Kubebuilder and a UI on top of the k8s API.
It expands the Kubernetes API via CustomResourceDefinitions. The different aspects of the CRDs are reconciled by several controllers. It also acts as an admission webhook.
The Greenhouse Dashboard is a UI acting on the k8s apiserver of the cluster Greenhouse is running in. The UI itself is a Juno application containing several micro frontends.
Greenhouse provides a couple of CLI commands, based on `make`, to run a local Greenhouse instance.
This handy CLI tool will help you set up your development environment in no time.
Prerequisites
Usage
Build `greenhousectl` from source by running the following command: `make cli`

[!NOTE]
The CLI binary will be available in the `bin` folder
Setting up the development environment
There are multiple local development environment setups available for the Greenhouse project. You can choose the one that fits your needs.
All commands will spin up KinD clusters and set up the necessary components.

If you have a `~/.kube/config` file, then KinD will automatically merge the `kubeconfig` of the created cluster(s).

- Use `kubectl config use-context kind-greenhouse-admin` to switch to the greenhouse `admin` cluster context.
- Use `kubectl config use-context kind-greenhouse-remote` to switch to the greenhouse `remote` cluster context.

If you do not have the contexts of the created cluster(s) in your `~/.kube/config` file, you can extract them from the operating system's `tmp` folder, where the CLI writes the `kubeconfig` of the created KinD clusters.
[!NOTE]
linux / macOS: on unix-like systems you can find the `kubeconfig` at `$TMPDIR/greenhouse/<clusterName>.kubeconfig`
windows: on windows many tmp folders exist, so the CLI writes the `kubeconfig` to the first non-empty value among `%TMP%`, `%TEMP%`, `%USERPROFILE%`

The path where the `kubeconfig` is written will be displayed in the terminal after the command is executed.
Use `kubectl --kubeconfig=<path to admin / remote kubeconfig>` to interact with the local greenhouse clusters.
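On a unix-like system, the two pieces above can be combined as follows. This is a sketch: `greenhouse-admin` is the cluster name created by the setup, and the path layout is the one described in the note (see the note for the Windows candidate folders):

```shell
# Build the path where the CLI writes the kubeconfig on unix-like systems
# and print a ready-to-run kubectl invocation against the admin cluster.
cluster="greenhouse-admin"
kubeconfig="${TMPDIR:-/tmp}/greenhouse/${cluster}.kubeconfig"
echo "kubectl --kubeconfig=${kubeconfig} get pods -A"
```

Drop the `echo` (or copy the printed line) to run the command against the local cluster.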
Run Greenhouse Locally
- This will install the operator, the dashboard, the cors-proxy and a sample organization with an onboarded remote cluster
- Port-forward the `cors-proxy` with `kubectl port-forward svc/greenhouse-cors-proxy 9090:80 -n greenhouse &`
- Port-forward the `dashboard` with `kubectl port-forward svc/greenhouse-dashboard 5001:80 -n greenhouse &`
- Access the local `demo` organization on the Greenhouse dashboard at localhost:5001
Develop Controllers locally and run the webhook server in-cluster
make setup-controller-dev
[!NOTE]
Set the environment variable `CONTROLLERS_ONLY=true` in your debugger configuration.
If no environment variable is set, the webhook server will error out due to the missing certs.
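In VS Code, for example, this could look like the following launch configuration. This is a sketch: the configuration name is arbitrary, and the program path assumes the `cmd/greenhouse` entrypoint mentioned later in this document:

```json
{
  "name": "greenhouse controllers (local)",
  "type": "go",
  "request": "launch",
  "mode": "debug",
  "program": "${workspaceFolder}/cmd/greenhouse",
  "env": { "CONTROLLERS_ONLY": "true" }
}
```

The same pattern applies to `WEBHOOK_ONLY=true` when debugging only the webhook server.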
Develop Admission Webhook server locally
[!NOTE]
Set the environment variable `WEBHOOK_ONLY=true` in your debugger configuration if you only want to run the webhook server.
Develop Controllers and Admission Webhook server locally
WITH_CONTROLLERS=false DEV_MODE=true make setup-manager
This will modify the `ValidatingWebhookConfiguration` and `MutatingWebhookConfiguration` to use the `host.docker.internal` (macOS / windows) or `ipv4` (linux) address for the webhook server and write the webhook certs to `/tmp/k8s-webhook-server/serving-certs`.

Now you can run the webhook server and the controllers locally. Since both need to run locally, no `CONTROLLERS_ONLY` or `WEBHOOK_ONLY` environment variables are needed in your debugger configuration.
[!NOTE]
The dev setup will modify the webhook configurations to have a 30s timeout for webhook requests, but when breakpoints are used to debug webhook requests, this can still result in timeouts.
In such cases, modify the CR with a dummy annotation to re-trigger the webhook request and reconciliation.
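For example, a dummy annotation can be bumped with a timestamp so each run produces a new value. This is a hedged sketch: the resource (`plugin demo`) and the annotation key are placeholders, not a Greenhouse convention:

```shell
# Hypothetical: build a command that bumps a throwaway annotation so the
# webhook request and reconciliation fire again. Resource and key are
# placeholders; a fresh timestamp guarantees the object actually changes.
bump_key="dev.greenhouse.sap/bump"
bump_val="$(date +%s)"
echo "kubectl annotate plugin demo ${bump_key}=${bump_val} --overwrite"
```

Drop the `echo` to run the command against your admin cluster context.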
Running Greenhouse Dashboard in-cluster
[!NOTE]
You will need to port-forward the cors-proxy service and the dashboard service to access the dashboard
Information on how to access the dashboard is displayed after the command is executed
Run Greenhouse Core for UI development
The Greenhouse UI consists of a Juno application hosting several micro frontends (MFEs). To develop the UI you will need a local Greenhouse cluster api-server as backend for your local UI:
- Start up the environment as in Run local Greenhouse
- The Greenhouse UI expects an `appProps.json` with the necessary parameters to run
  - This `appProps.json` `ConfigMap` is created in the `greenhouse` namespace by the local installation to configure the in-cluster dashboard.
- You can
  - either create and use your own `appProps.json` file when running the UI locally
  - or retrieve the generated `appProps.json` in-cluster by executing `kubectl get cm greenhouse-dashboard-app-props -n greenhouse -o=json | jq -r '.data["appProps.json"]'`
- After port-forwarding the `cors-proxy` service, it should be used as the `apiEndpoint` in `appProps.json`
- Start the dashboard locally (more information on how to run the dashboard locally can be found in the Juno Repository)
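A minimal `appProps.json` sketch for local UI development, assuming the cors-proxy has been port-forwarded to port 9090 as described above; any other keys your Juno setup needs should be taken from the generated ConfigMap rather than guessed:

```json
{
  "apiEndpoint": "http://localhost:9090"
}
```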
Test Plugin / Greenhouse Extension charts locally
PLUGIN_DIR=<absolute-path-to-charts-dir> make setup
- This will install a fully running setup of the operator, the dashboard and a sample organization with an onboarded remote cluster
- Additionally, it will mount the plugin charts directory onto the `node` of the KinD cluster
- The operator deployment has a hostPath volume mount to the plugin charts directory from the `node` of the KinD cluster
To test your local chart (now mounted into the KinD cluster) with a `plugindefinition.yaml`, you need to adjust `.spec.helmChart.name` to use the local chart. With the provided mounting mechanism, the chart will always live in `local/plugins/` within the KinD cluster. Modify `spec.helmChart.name` to point to the local file path of the chart that needs to be tested.
Example Scenario:
You have cloned the Greenhouse Extensions repository, and you want to test the `cert-manager` plugin chart locally.
```yaml
apiVersion: greenhouse.sap/v1alpha1
kind: PluginDefinition
metadata:
  name: cert-manager
spec:
  description: Automated TLS certificate management
  displayName: Certificate manager
  docMarkDownUrl: >-
    https://raw.githubusercontent.com/cloudoperators/greenhouse-extensions/main/cert-manager/README.md
  helmChart:
    name: 'local/plugins/<path-to-cert-manager-chart-folder>'
    repository: '' # <- has to be empty
    version: '' # <- has to be empty
...
```
Apply the `plugindefinition.yaml` to the `admin` cluster:
kubectl --kubeconfig=<your-kind-config> apply -f plugindefinition.yaml
When setting up your development environment, certain resources are modified for development convenience.
- The Greenhouse controllers and webhook server deployments use the same image to run. The logic is separated by
environment variables.
- The `greenhouse-controller-manager` deployment has the environment variable `CONTROLLERS_ONLY`
  - `CONTROLLERS_ONLY=true` will only run the controllers
  - changing the value to `false` will also run the webhook server, which will error out due to missing certs
- The `greenhouse-webhook` deployment has the environment variable `WEBHOOK_ONLY`
  - `WEBHOOK_ONLY=true` will only run the webhook server
  - changing the value to `false` will skip the webhook server. When greenhouse `CustomResources` are applied, the Kubernetes Validating and Mutating Webhook phase will error out due to the webhook endpoints not being available
- If `DevMode` is enabled for webhooks, then depending on the OS the webhook manifests are altered by removing `clientConfig.service` and replacing it with `clientConfig.url`, allowing you to debug the code locally.
  - linux: the ipv4 address from the `docker0` interface is used - e.g. `https://172.17.0.2:9443/<path>`
  - macOS: `host.docker.internal` is used - e.g. `https://host.docker.internal:9443/<path>`
  - windows: ideally `host.docker.internal` should work, otherwise please reach out with a contribution <3
- webhook certs are generated by `cert-manager` in-cluster, and they are extracted and saved to `/tmp/k8s-webhook-server/serving-certs`
- `kubeconfig` files of the created cluster(s) are saved to `/tmp/greenhouse/<clusterName>.kubeconfig`
greenhousectl dev setup
setup dev environment with a configuration file
greenhousectl dev setup [flags]
Examples
# Setup Greenhouse dev environment with a configuration file
greenhousectl dev setup -f dev-env/dev.config.yaml
- This will create an admin and a remote cluster
- Install CRDs, Webhook definitions, RBACs, Certs, etc... for Greenhouse into the admin cluster
- Depending on the devMode, it will install the webhook in-cluster or enable it for local development
Overriding certain values in dev.config.yaml:
- Override devMode for webhook development with d=true or devMode=true
- Override helm chart installation with c=true or crdOnly=true
e.g. greenhousectl dev setup -f dev-env/dev.config.yaml d=true
Options
-f, --config string configuration file path - e.g. -f dev-env/dev.config.yaml
-h, --help help for setup
greenhousectl dev setup dashboard
setup dashboard for local development with a configuration file
greenhousectl dev setup dashboard [flags]
Examples
# Setup Greenhouse dev environment with a configuration file
greenhousectl dev setup dashboard -f dev-env/ui.config.yaml
- Installs the Greenhouse dashboard and CORS proxy into the admin cluster
Options
-f, --config string configuration file path - e.g. -f dev-env/ui.config.yaml
-h, --help help for dashboard
Generating Docs
To generate the markdown documentation, run the following command:
2 - Contributing a Plugin
Contributing a Plugin to Greenhouse
What is a Plugin?
A Plugin is a key component that provides additional features and functionalities, and may add new tools or integrations to the Greenhouse project.
Plugins are developed decentrally by the domain experts.
A YAML specification outlines the components that are to be installed and describes mandatory and optional, instance-specific configuration values.
It can consist of two main parts:
Juno micro frontend
This integrates with the Greenhouse dashboard, allowing users to interact with the Plugin’s features seamlessly within the Greenhouse UI.
Backend component
It can include backend logic that supports the Plugin’s functionality.
Contribute
Additional ideas for plugins are very welcome!
The Greenhouse plugin catalog is defined in the Greenhouse extensions repository.
To get started, please file an issue and provide a concise description of the proposed plugin here.
A Greenhouse plugin consists of a Juno micro frontend that integrates with the Greenhouse UI and/or a backend component described via a Helm chart.
Contributing a plugin requires the technical skills to write Helm charts and proficiency in JavaScript.
Moreover, documentation needs to be developed to help users understand the plugin capabilities as well as how to incorporate it.
Additionally, the plugin needs to be maintained by at least one individual or a team to ensure ongoing functionality and usability within the Greenhouse ecosystem.
Development
Developing a plugin for the Greenhouse platform involves several steps, including defining the plugin, creating the necessary components, and integrating them into Greenhouse.
Here’s a high-level overview of how to develop a plugin for Greenhouse:
Define the Plugin:
- Clearly define the purpose and functionality of your plugin.
- What problem does it solve, and what features will it provide?
Plugin Definition (plugindefinition.yaml):
- Create a `plugindefinition.yaml` (API Reference) file in the root of your repository to specify the plugin's metadata and configuration options. This YAML file should include details like the plugin's description, version, and any configuration values required.
- Provide a list of `PluginOptions`, which are values that are consumed to configure the actual `Plugin` instance of your `PluginDefinition`.
Greenhouse always provides some global values that are injected into your `Plugin` upon deployment:

- `global.greenhouse.organizationName`: The `name` of your `Organization`
- `global.greenhouse.teamNames`: All available `Teams` in your `Organization`
- `global.greenhouse.clusterNames`: All available `Clusters` in your `Organization`
- `global.greenhouse.clusterName`: The `name` of the `Cluster` this `Plugin` instance is deployed to
- `global.greenhouse.baseDomain`: The base domain of your Greenhouse installation
- `global.greenhouse.ownedBy`: The owner (usually an owning `Team`) of this `Plugin` instance
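Within your Plugin's Helm chart, these globals are consumed like any other value. A hedged sketch of a template fragment (the label keys here are illustrative, not a Greenhouse convention):

```yaml
# Example chart template fragment using the injected globals
metadata:
  labels:
    organization: {{ .Values.global.greenhouse.organizationName }}
    owned-by: {{ .Values.global.greenhouse.ownedBy }}
```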
Plugin Components:
- Develop the plugin’s components, which may include both frontend and backend components.
- For the frontend, you can use Juno microfrontend components to integrate with the Greenhouse UI seamlessly.
- The backend component handles the logic and functionality of your plugin. This may involve interacting with external APIs, processing data, and more.
Testing & Validation:
- Test your plugin thoroughly to ensure it works as intended. Verify that both the frontend and backend components function correctly.
- Implement validation for your plugin’s configuration options. This helps prevent users from providing incorrect or incompatible values.
- Implement Helm Chart Tests for your plugin if it includes a Helm Chart. For more information on how to write Helm Chart Tests, please refer to this guide.
Documentation:
- Create comprehensive documentation for your plugin. This should include installation instructions, configuration details, and usage guidelines.
Integration with Greenhouse:
- Integrate your plugin with the Greenhouse platform by configuring it using the Greenhouse UI. This may involve specifying which organizations can use the plugin and setting up any required permissions.
Publishing:
- Publish your plugin to Greenhouse once it’s fully tested and ready for use. This makes it available for organizations to install and configure.
Support and Maintenance:
- Provide ongoing support for your plugin, including bug fixes and updates to accommodate changes in Greenhouse or external dependencies.
Community Involvement:
- Consider engaging with the Greenhouse community, if applicable, by seeking feedback, addressing issues, and collaborating with other developers.
3 - Greenhouse Controller Development
How to contribute a new controller to the Greenhouse project.
Bootstrap a new Controller
Before getting started please make sure you have read the contribution guidelines.
Greenhouse is built using Kubebuilder as the framework for Kubernetes controllers. To create a new controller, you can use the `kubebuilder` CLI tool.
This project was generated with Kubebuilder v4.
It's necessary to link `cmd/greenhouse/main.go` to `cmd/main.go` in order to run the Kubebuilder scaffolding commands.
ln $(pwd)/cmd/greenhouse/main.go $(pwd)/cmd/main.go
To create a new controller, run the following command:
kubebuilder create api --group greenhouse --version v1alpha1 --kind MyResource
Now that the files have been generated, they need to be copied to the correct location. The generated files are located in `api/greenhouse/v1alpha1` and `controller/greenhouse`. The correct locations for the files are `api/v1alpha1` and `pkg/controller/<kind>` respectively.
After moving the files, any imports need to be updated to point to the new locations. Also ensure that the entry for the resource in the `PROJECT` file points to the correct location.
The new Kind should be added to the list under `charts/manager/crds/kustomization.yaml`. The new Controller needs to be registered in the controller manager in `cmd/greenhouse/main.go`.
All other generated files can be deleted.
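The relocation described above can be rehearsed in a scratch directory. This is a sketch: the file names are illustrative, and `myresource` stands for your lowercased Kind; in a real checkout you would run the two `mv` commands (or `git mv`) from the repository root:

```shell
# Rehearse the file moves from the generated locations to the repository's
# conventions, using placeholder files in a throwaway directory.
cd "$(mktemp -d)"
mkdir -p api/greenhouse/v1alpha1 controller/greenhouse   # generated locations
mkdir -p api/v1alpha1 pkg/controller/myresource          # target locations
touch api/greenhouse/v1alpha1/myresource_types.go
touch controller/greenhouse/myresource_controller.go
mv api/greenhouse/v1alpha1/* api/v1alpha1/
mv controller/greenhouse/* pkg/controller/myresource/
ls api/v1alpha1 pkg/controller/myresource
```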
Now you can generate all manifests with `make generate-manifests` and start implementing your controller logic.
Implementing the Controller
Within Greenhouse, the controllers implement the `lifecycle.Reconciler` interface. This allows for consistency between the controllers and ensures finalizers, status updates and other common controller logic are implemented in a consistent way. For examples of how this is used, please refer to the existing controllers.
Testing the Controller
Unit/Integration tests for the controllers use Kubebuilder's envtest environment and are implemented using Ginkgo and Gomega. For examples of how to write tests, please refer to the existing tests. There are also some helper functions in the `internal/test` package that can be used to simplify the testing of controllers.

For e2e tests, please refer to the `test/e2e/README.md`.