Contribute

Contribute to the Greenhouse platform.

The Greenhouse core platform serves as a comprehensive cloud operations solution, providing centralized control and management for cloud infrastructure and applications.
Its extensibility is achieved through the development and integration of plugins, allowing organizations to adapt and enhance the platform to accommodate their specific operational needs, ultimately promoting efficiency and compliance across their cloud environments.

The Greenhouse team welcomes all contributions to the project.

1 - Local development setup

How to run a local Greenhouse setup for development

What is Greenhouse?

Greenhouse is a Kubernetes operator built with Kubebuilder, plus a UI on top of the k8s API.

It expands the Kubernetes API via CustomResourceDefinitions. The different aspects of the CRDs are reconciled by several controllers. It also acts as an admission webhook.

The Greenhouse Dashboard is a UI that acts on the k8s apiserver of the cluster Greenhouse is running in. It includes a dashboard and an Organization admin area consisting of several Juno micro frontends.

This guide provides the following:

  1. Local Setup

    1.1 Mock k8s Server

    1.2 Greenhouse Controller

    1.3 Greenhouse UI

    1.4 docker compose

    1.5 Bootstrap

  2. Run And Debug The Code

  3. Run The Tests

Local Setup

Quick start the local setup with docker compose

Note: For the time being, the images published in our registry are linux/amd64 only. If your default architecture differs, export this env var to make Docker use linux/amd64:

export DOCKER_DEFAULT_PLATFORM=linux/amd64

Env Var Overview

Env Var                                    Meaning
KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT    If set to true, the mock server will additionally log apiserver and etcd logs
DEV_ENV_CONTEXT                            Mocks permissions on the mock api server, see Mock k8s Server for details
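For example, to get the extra control plane output and mock an org-admin context before starting the setup (a sketch; it assumes the docker compose setup described below forwards these variables to the containers):

# enable verbose control plane output and pick a mocked context
export KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT=true
export DEV_ENV_CONTEXT=test-org-admin
docker compose up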

Mock k8s Server, a.k.a. envtest

The Greenhouse controller needs a Kubernetes API to run its reconciliation against. This k8s API needs to know about the Greenhouse CRDs to maintain the state of their respective resources. It also needs to know about any running admission/validation webhooks.

We provide a local mock k8s apiserver and etcd leveraging the envtest package of SIG controller-runtime. This comes with the CRDs and MutatingWebhookConfiguration installed and provides a little bit of utility. Find the docker image on our registry.

Additionally, it will bootstrap some users with different permissions in a test-org. The test-org resource does not yet exist on the apiserver, but can be bootstrapped from the test-org.yaml, which is done for you if you use the docker compose setup.

Running the image will:

  • spin up the apiserver and etcd
  • deploy CRDs and the webhook
  • create some users with respective contexts and certificates
  • finally proxy the apiserver via kubectl proxy to 127.0.0.1:8090.

The latter is done to avoid painful authentication to the local apiserver.

We can still showcase different permission levels on the apiserver by setting the context via the env var DEV_ENV_CONTEXT.

DEV_ENV_CONTEXT            Permissions
unset                      all, a.k.a. k8s cluster-admin
test-org-member            org-member as provided by the org controller
test-org-admin             org-admin as provided by the org controller
test-org-cluster-admin     cluster-admin as provided by the org controller
test-org-plugin-admin      plugin-admin as provided by the org controller
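A quick way to check what a mocked context is allowed to do is kubectl auth can-i against the proxied apiserver (a sketch; it assumes the proxy described above is reachable on 127.0.0.1:8090 and that plugins is the plural of the Plugin CRD):

# e.g. ask whether the currently mocked context may create plugins in the test-org namespace
kubectl --server http://127.0.0.1:8090 auth can-i create plugins -n test-org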

To access the running apiserver instance, some kubeconfig files and client certificates are created on the container in the /envtest folder.

The internal.kubeconfig file uses the different certificates and contexts to directly address the apiserver running on port 6884.

The kubeconfig file uses the proxied context without authentication running on port 8090. It is also scoped to the namespace test-org.

Choose the respective ports to expose on your localhost when running the image, or expose them all by running in host network mode.

We are reusing the autogenerated certificates of the dev-env for authenticating the webhook server on localhost. The files are stored in /webhook-certs on the container.

It is good practice to mount local volumes to these folders, running the image as such:

docker run --network host -e DEV_ENV_CONTEXT=<your-context> -v ./envtest:/envtest -v /tmp/k8s-webhook-server/serving-certs:/webhook-certs  ghcr.io/cloudoperators/greenhouse-dev-env:main
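With the volumes mounted as above, a quick smoke test from your host could look like this (a sketch; organizations is the plural of the Organization CRD installed by the mock server):

kubectl --kubeconfig ./envtest/kubeconfig get organizations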

Greenhouse Controller

Run your local go code from ./cmd/greenhouse with the minimal configuration necessary (this example points the controller to run against the local mock apiserver):

go run . --dns-domain localhost --kubeconfig ./envtest/kubeconfig

Make sure the webhook server certs are placed in /tmp/k8s-webhook-server/serving-certs

Or run our greenhouse image as such:

docker run --network host -e KUBECONFIG=/envtest/kubeconfig -v ./envtest:/envtest -v /tmp/k8s-webhook-server/serving-certs:/tmp/k8s-webhook-server/serving-certs ghcr.io/cloudoperators/greenhouse:main --dns-domain localhost

See all available flags here.

Greenhouse UI

Use the latest upstream

Either pull the image or start the docker compose setup to retrieve the latest juno-app-greenhouse release.
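For example, to just pull the prebuilt image (assuming the latest tag is published to the registry):

docker pull ghcr.io/cloudoperators/juno-app-greenhouse:latest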

Building the UI locally

The Greenhouse UI is located in the cloudoperators/juno repository.

  1. Clone the repository cloudoperators/juno

  2. Run docker buildx build:

    $ docker buildx build --platform=linux/amd64 -t ghcr.io/cloudoperators/juno-app-greenhouse:latest -f apps/greenhouse/docker/Dockerfile .  
    

    NOTE: Building the image is rather resource heavy on your machine. For reference, using colima:

    Able to build?   PROFILE   STATUS    ARCH      CPUS   MEMORY   DISK     RUNTIME   ADDRESS
    ❌               default   Running   aarch64   2      4GiB     100GiB   docker
    ✅               default   Running   aarch64   4      8GiB     100GiB   docker
  3. Start the UI Docker container

    $ docker run -p 3000:80 -v ./ui/appProps.json:/appProps.json ghcr.io/cloudoperators/juno-app-greenhouse:latest
    

    Note: We inject a props template prepared for the dev-env, expecting the k8s api to run on 127.0.0.1:8090, which is the default exposed by the mock api server image. Authentication will also be mocked. Have a look at the props template to point your local UI to other running Greenhouse instances.

  4. Start the UI with node (with support for live reloads). Follow the instructions in the cloudoperators/juno repository, here.

  5. Access the UI on localhost:3000.

Note: Running the code locally only watches and live-reloads the local code changes of the dashboard micro frontend (MFE). This is not true for the embedded MFEs; run those separately, with their respective props pointing to the mock k8s apiserver, for development.

docker compose

If you do not need or want to run your local code but want to run a set of Greenhouse images, we provide a setup with docker compose:

  1. Navigate to the dev-env dir, and start docker compose.

    cd ./dev-env
    docker compose up
    
  2. You might need to build the dev-ui image manually; in that case, follow the steps above.

  3. (Alternative) The network-host.docker-compose.yaml provides the same setup but starts all containers in host network mode instead.
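    For example (a sketch, assuming the file sits in ./dev-env next to the default compose file):

    cd ./dev-env
    docker compose -f network-host.docker-compose.yaml up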

Bootstrap

The docker compose setup bootstraps an Organization test-org to your cluster by default, which is the bare minimum to get the dev-env working.

By running:

docker compose run bootstrap kubectl apply -f /bootstrap/additional_resources

or by uncommenting the “additional resources” in the command of the bootstrap container in the docker compose file, the following resources will be created automatically:

Note: These resources are intended to showcase the UI and will produce a lot of “noise” on the Greenhouse controller.

Add any additional resources you need to the ./bootstrap folder.
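For example, a hypothetical my-team.yaml dropped into that folder could then be applied through the bootstrap container (assuming ./bootstrap is mounted to /bootstrap in the container, as the command above suggests; the filename is just an illustration):

docker compose run bootstrap kubectl apply -f /bootstrap/my-team.yaml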

Run And Debug The Code

Spin up the envtest container only, e.g. via:

docker compose up envtest

Reuse the certs created by envtest for serving the webhooks locally by copying them to the default location where kubebuilder expects webhook certs:

cp ./webhook-certs/* /tmp/k8s-webhook-server/serving-certs

Note: on macOS, use $TMPDIR instead of /tmp
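A minimal macOS variant of the copy step, assuming controller-runtime resolves its default cert directory via the system temp dir:

# copy the envtest-generated certs into the macOS temp dir
mkdir -p "$TMPDIR/k8s-webhook-server/serving-certs"
cp ./webhook-certs/* "$TMPDIR/k8s-webhook-server/serving-certs"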

Start your debugging process in your IDE, pointing it to the envtest kubeconfig at ./envtest/kubeconfig. Do not forget to pass the --dns-domain=localhost flag.
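If you prefer a CLI debugger over an IDE, a sketch using Delve (assuming dlv is installed) could look like this:

# run the controller under the debugger, passing the program flags after --
dlv debug ./cmd/greenhouse -- --dns-domain=localhost --kubeconfig=./envtest/kubeconfig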

Run The Tests

For running e2e tests see here.

Same as the local setup, our unit tests run against an envtest mock cluster. To install the setup-envtest tool, run

make envtest

which will install setup-envtest to your $(LOCALBIN), usually ./bin.

To run all tests from cli:

make test

To run tests independently, make sure the $(KUBEBUILDER_ASSETS) env var is set. This variable contains the path to the binaries used for starting up the mock control plane with the respective k8s version on your architecture.

Print the path by executing:

./bin/setup-envtest use <your-preferred-k8s-version> -p path
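Putting it together, running the tests of a single package could look like this (a sketch; the package path is just an example):

export KUBEBUILDER_ASSETS="$(./bin/setup-envtest use <your-preferred-k8s-version> -p path)"
go test ./pkg/controllers/...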

Env Vars Overview In Testing

Env Var                  Meaning
TEST_EXPORT_KUBECONFIG   If set to true, the kubeconfigs of the envtest control planes will be written to temporary files and their locations will be printed on screen. Useful for accessing the mock clusters when setting breakpoints in tests.
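For example, to inspect the mock cluster of a single test run (a sketch; the kubeconfig path is a placeholder taken from the test output):

TEST_EXPORT_KUBECONFIG=true go test ./pkg/controllers/...
kubectl --kubeconfig <printed-kubeconfig-path> get namespaces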

2 - Contributing a Plugin

Contributing a Plugin to Greenhouse

What is a Plugin?

A Plugin is a key component that provides additional features, functionalities and may add new tools or integrations to the Greenhouse project.
They are developed decentrally by domain experts.
A YAML specification outlines the components that are to be installed and describes mandatory and optional, instance-specific configuration values.

A Plugin can consist of two main parts:

  1. Juno micro frontend
    This integrates with the Greenhouse dashboard, allowing users to interact with the Plugin’s features seamlessly within the Greenhouse UI.

  2. Backend component
    It can include backend logic that supports the Plugin’s functionality.

Contribute

Additional ideas for plugins are very welcome!
The Greenhouse plugin catalog is defined in the Greenhouse extensions repository.
To get started, please file an issue here and provide a concise description of the proposed plugin.

A Greenhouse plugin consists of a juno micro frontend that integrates with the Greenhouse UI and/or a backend component described via Helm chart.
Contributing a plugin requires the technical skills to write Helm charts and proficiency in JavaScript.
Moreover, documentation needs to be developed to help users understand the plugin capabilities as well as how to incorporate it.
Additionally, the plugin needs to be maintained by at least one individual or a team to ensure ongoing functionality and usability within the Greenhouse ecosystem.

Development

Developing a plugin for the Greenhouse platform involves several steps, including defining the plugin, creating the necessary components, and integrating them into Greenhouse.
Here’s a high-level overview of how to develop a plugin for Greenhouse:

  1. Define the Plugin:

    • Clearly define the purpose and functionality of your plugin.
    • What problem does it solve, and what features will it provide?
  2. Plugin Configuration (plugin.yml):

    • Create a greenhouse.yml file in the root of your repository to specify the plugin’s metadata and configuration options. This YAML file should include details like the plugin’s description, version, and any configuration values required.
  3. Plugin Components:

    • Develop the plugin’s components, which may include both frontend and backend components.
    • For the frontend, you can use Juno microfrontend components to integrate with the Greenhouse UI seamlessly.
    • The backend component handles the logic and functionality of your plugin. This may involve interacting with external APIs, processing data, and more.
  4. Testing & Validation:

    • Test your plugin thoroughly to ensure it works as intended. Verify that both the frontend and backend components function correctly.
    • Implement validation for your plugin’s configuration options. This helps prevent users from providing incorrect or incompatible values.
    • Implement Helm Chart Tests for your plugin if it includes a Helm Chart. For more information on how to write Helm Chart Tests, please refer to this guide.
  5. Documentation:

    • Create comprehensive documentation for your plugin. This should include installation instructions, configuration details, and usage guidelines.
  6. Integration with Greenhouse:

    • Integrate your plugin with the Greenhouse platform by configuring it using the Greenhouse UI. This may involve specifying which organizations can use the plugin and setting up any required permissions.
  7. Publishing:

    • Publish your plugin to Greenhouse once it’s fully tested and ready for use. This makes it available for organizations to install and configure.
  8. Support and Maintenance:

    • Provide ongoing support for your plugin, including bug fixes and updates to accommodate changes in Greenhouse or external dependencies.
  9. Community Involvement:

    • Consider engaging with the Greenhouse community, if applicable, by seeking feedback, addressing issues, and collaborating with other developers.

3 - Greenhouse Controller Development

How to contribute a new controller to the Greenhouse project.

Bootstrap a new Controller

Before getting started please make sure you have read the contribution guidelines.

Greenhouse is built using Kubebuilder as the framework for Kubernetes controllers. To create a new controller, you can use the kubebuilder CLI tool.

This project was generated with Kubebuilder v3, which requires Kubebuilder CLI <= v3.15.1. Since this project does not follow the Kubebuilder v3 scaffolding structure, it is necessary to create a symlink to main.go:

ln -s ./cmd/greenhouse/main.go main.go

To create a new controller, run the following command:

kubebuilder create api --group greenhouse --version v1alpha1 --kind MyResource

Now that the files have been generated, they need to be copied to the correct location:

mv ./apis/greenhouse/v1alpha1/myresource_types.go ./pkg/apis/v1alpha1/

mv ./controllers/greenhouse/mynewkind_controller.go ./pkg/controllers/<kind>/mynewkind_controller.go

After having moved the files, a few follow-up steps are necessary:

  • Fix the imports in the mynewkind_controller.go file.
  • Ensure that the entry for the resource in the PROJECT file points to the correct location.
  • Add the new Kind to the list under charts/manager/crds/kustomization.yaml.
  • Register the new controller with the controller manager in cmd/greenhouse/main.go.
  • Delete all other generated files.

Now you can generate all manifests with make generate-manifests and start implementing your controller logic.

Implementing the Controller

Within Greenhouse, the controllers implement the lifecycle.Reconciler interface. This keeps the controllers consistent and ensures finalizers, status updates, and other common controller logic are implemented in a uniform way. For examples of how this is used, please refer to the existing controllers.

Testing the Controller

Unit/Integration tests for the controllers use Kubebuilder’s envtest environment and are implemented using Ginkgo and Gomega. For examples on how to write tests please refer to the existing tests. There are also some helper functions in the pkg/test package that can be used to simplify the testing of controllers.

For e2e tests, please refer to the test/e2e/README.md.