# Local development setup

## What is Greenhouse?
Greenhouse is a Kubernetes operator built with Kubebuilder, plus a UI on top of the k8s API.
It extends the Kubernetes API via CustomResourceDefinitions. The different aspects of these CRDs are reconciled by several controllers, and Greenhouse also acts as an admission webhook.
The Greenhouse Dashboard is a UI acting on the k8s apiserver of the cluster Greenhouse runs in. It includes a dashboard and an Organization admin view consisting of several Juno micro frontends.
This guide provides the following:
1. Local Setup
    1. Mock k8s Server
    2. Greenhouse Controller
    3. Greenhouse UI
    4. docker compose
    5. Bootstrap
## Local Setup
Quick start the local setup with docker compose
Note: For the time being, the images published in our registry are `linux/amd64` only. If your default architecture differs, export the following env var to make Docker use this architecture:

```bash
export DOCKER_DEFAULT_PLATFORM=linux/amd64
```
### Env Var Overview

| Env Var | Meaning |
| --- | --- |
| `KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT` | If set to `true`, the mock server will additionally log apiserver and etcd logs |
| `DEV_ENV_CONTEXT` | Mocks permissions on the mock apiserver, see Mock k8s Server for details |
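For example, a minimal sketch of exporting both variables before starting the docker compose setup (assuming the compose file forwards them to the envtest container):

```bash
# verbose control-plane logs and org-member permissions on the mock apiserver
export KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT=true
export DEV_ENV_CONTEXT=test-org-member

cd ./dev-env
docker compose up
```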
### Mock k8s Server, a.k.a. envtest
The Greenhouse controller needs a Kubernetes API to run its reconciliation against. This k8s API needs to know about the Greenhouse CRDs to maintain the state of their respective resources. It also needs to know about any running admission/validation webhooks.
We provide a local mock k8s apiserver and etcd leveraging the envtest package of SIG controller-runtime. It comes with the CRDs and the MutatingWebhookConfiguration installed and provides some additional utility. Find the docker image on our registry.
Additionally, it bootstraps some users with different permissions in a `test-org`. The `test-org` resource itself does not yet exist on the apiserver, but it can be bootstrapped from the test-org.yaml, which is done for you if you use the docker compose setup.
Running the image will:
- spin up the apiserver and etcd
- deploy CRDs and the webhook
- create some users with respective contexts and certificates
- finally proxy the apiserver via `kubectl proxy` to `127.0.0.1:8090`

The latter is done to avoid painful authentication against the local apiserver. We can still showcase different permission levels on the apiserver by setting a context via the env var `DEV_ENV_CONTEXT`.
| DEV_ENV_CONTEXT | Permissions |
| --- | --- |
| unset | all, a.k.a. k8s cluster-admin |
| test-org-member | org-member as provided by the org controller |
| test-org-admin | org-admin as provided by the org controller |
| test-org-cluster-admin | cluster-admin as provided by the org controller |
| test-org-plugin-admin | plugin-admin as provided by the org controller |
To access the running apiserver instance, some `kubeconfig` files and client certificates are created in the `/envtest` folder on the container.
The `internal.kubeconfig` file uses the different certificates and contexts to directly address the apiserver running on port `6884`.
The `kubeconfig` file uses the proxied context without authentication running on port `8090`. It is also scoped to the namespace `test-org`.
Choose the respective ports to be exposed on your localhost when running the image, or expose them all by running in host network mode.
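For illustration, a hedged sketch of talking to the mock apiserver with the generated files, assuming you mounted `/envtest` to `./envtest` and exposed the ports as described above (the resource names are illustrative):

```bash
# via the proxied, unauthenticated endpoint (port 8090, scoped to test-org)
kubectl --kubeconfig ./envtest/kubeconfig get teams

# via the direct endpoint (port 6884) with a specific permission level,
# assuming the contexts in internal.kubeconfig are named like the table above
kubectl --kubeconfig ./envtest/internal.kubeconfig --context test-org-admin get organizations
```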
We are reusing the autogenerated certificates of the dev-env for authenticating the webhook server on localhost. The files are stored in the `/webhook-certs` folder on the container.
It is good practice to mount local volumes to these folders, running the image as such:

```bash
docker run --network host -e DEV_ENV_CONTEXT=<your-context> \
  -v ./envtest:/envtest \
  -v /tmp/k8s-webhook-server/serving-certs:/webhook-certs \
  ghcr.io/cloudoperators/greenhouse-dev-env:main
```
### Greenhouse Controller
Run your local Go code from `./cmd/greenhouse` with the minimal configuration necessary (this example points the controller at the local mock apiserver):

```bash
go run . --dns-domain localhost --kubeconfig ./envtest/kubeconfig
```

Make sure the webhook server certs are placed in `/tmp/k8s-webhook-server/serving-certs`.
Or run our greenhouse image as such:

```bash
docker run --network host -e KUBECONFIG=/envtest/kubeconfig \
  -v ./envtest:/envtest \
  -v /tmp/k8s-webhook-server/serving-certs:/tmp/k8s-webhook-server/serving-certs \
  ghcr.io/cloudoperators/greenhouse:main --dns-domain localhost
```
See all available flags here.
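A quick way to inspect the flags locally, assuming the binary uses standard Go flag parsing as Kubebuilder scaffolding does:

```bash
# prints all supported flags and their defaults; the exact output format may differ
go run ./cmd/greenhouse --help
```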
### Greenhouse UI

#### Use the latest upstream

Either pull the image or start the docker compose setup to retrieve the latest juno-app-greenhouse release.

#### Building the UI locally
The Greenhouse UI is located in the cloudoperators/juno repository.

1. Clone the cloudoperators/juno repository.
2. Run docker buildx build:

```bash
docker buildx build --platform=linux/amd64 -t ghcr.io/cloudoperators/juno-app-greenhouse:latest -f apps/greenhouse/docker/Dockerfile .
```
NOTE: Building the image is rather resource heavy on your machine. For reference, using colima:

| Able to build? | PROFILE | STATUS | ARCH | CPUS | MEMORY | DISK | RUNTIME | ADDRESS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ❌ | default | Running | aarch64 | 2 | 4GiB | 100GiB | docker | |
| ✅ | default | Running | aarch64 | 4 | 8GiB | 100GiB | docker | |

#### Start the UI

Start the UI with Docker:
```bash
docker run -p 3000:80 -v ./ui/appProps.json:/appProps.json ghcr.io/cloudoperators/juno-app-greenhouse:latest
```
Note: We inject a props template prepared for the dev-env, expecting the k8s API to run on `127.0.0.1:8090`, which is the default exposed by the mock apiserver image. Authentication will also be mocked. Have a look at the props template to point your local UI at other running Greenhouse instances.

Alternatively, start the UI with `node` (with support for live reloads). Follow the instructions in the cloudoperators/juno repository, here.

Access the UI on localhost:3000.

Note: Running the code locally only watches and live reloads the local code (changes) of the dashboard micro frontend (MFE). This is not true for the embedded MFEs. Run those separately with respective props pointing to the mock k8s apiserver for development.
### docker compose
If you do not need or want to run your local code but just want to run a set of Greenhouse images, we provide a setup with docker compose:
Navigate to the dev-env dir and start `docker compose`:

```bash
cd ./dev-env
docker compose up
```

You might need to build the dev-ui image manually; in that case follow the steps above.

(Alternative) The network-host.docker-compose.yaml provides the same setup but starts all containers in host network mode instead.
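Once the stack is up, a quick sanity check (assuming the default port mappings described in this guide) might look like this:

```bash
# the proxied mock apiserver should answer without authentication
curl http://127.0.0.1:8090/api

# the UI should be served on port 3000
open http://localhost:3000   # or just open it in your browser
```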
### Bootstrap
By default, the docker compose setup bootstraps an Organization `test-org` to your cluster, which is the bare minimum to get the dev-env working.
By running:

```bash
docker compose run bootstrap kubectl apply -f /bootstrap/additional_resources
```

or by uncommenting the "additional resources" in the command of the bootstrap container in the docker-compose file, the following resources will be created automatically:
- test-team-1, test-team-2, test-team-3 within Organization `test-org`,
- respective dummy `teammemberships` for these teams,
- cluster-1, cluster-2, cluster-3 and self with different conditions and states,
- some dummy nodes for the clusters,
- some plugindefinitions with plugins across the clusters.
Note: These resources are intended to showcase the UI and produce a lot of “noise” on the Greenhouse controller.
Add any additional resources you need to the `./bootstrap` folder.
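For example, a hedged sketch of applying one of your own manifests through the bootstrap container; the file name `my-resource.yaml` is hypothetical, and the `/bootstrap` mount path follows the compose setup above:

```bash
# assuming ./bootstrap is mounted to /bootstrap in the bootstrap container
docker compose run bootstrap kubectl apply -f /bootstrap/my-resource.yaml
```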
## Run And Debug The Code
Spin up the `envtest` container only, e.g. via:

```bash
docker compose up envtest
```
Reuse the certs created by `envtest` for locally serving the webhooks by copying them to the default location kubebuilder expects webhook certs at:

```bash
cp ./webhook-certs/* /tmp/k8s-webhook-server/serving-certs
```
Note: use `$TMPDIR` on macOS instead of `/tmp`.

Start your debugging process in your IDE of choice, pointing it at the envtest kubeconfig at `./envtest/kubeconfig`. Do not forget to pass the `--dns-domain=localhost` flag.
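Outside of an IDE, a minimal sketch of the same setup with delve (assuming `dlv` is installed and the webhook certs are already in place) could look like this:

```bash
# debug the controller against the mock apiserver
dlv debug ./cmd/greenhouse -- --dns-domain=localhost --kubeconfig ./envtest/kubeconfig
```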
## Run The Tests
For running e2e tests, see here.

Same as the local setup, our unit tests run against an envtest mock cluster. To install the setup-envtest tool, run:

```bash
make envtest
```

which will install setup-envtest to your `$(LOCALBIN)`, usually `./bin`.
To run all tests from the cli:

```bash
make test
```
To run tests independently, make sure the `$(KUBEBUILDER_ASSETS)` env var is set. This variable contains the path to the binaries used for starting up the mock control plane with the respective k8s version on your architecture.
Print the path by executing:

```bash
./bin/setup-envtest use <your-preferred-k8s-version> -p path
```
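For example, a hedged way to wire this into running the tests directly with Go; the k8s version is illustrative:

```bash
# export the assets path and run the tests against the envtest control plane
export KUBEBUILDER_ASSETS="$(./bin/setup-envtest use 1.30 -p path)"
go test ./...
```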
### Env Vars Overview In Testing

| Env Var | Meaning |
| --- | --- |
| `TEST_EXPORT_KUBECONFIG` | If set to `true`, the kubeconfigs of the envtest control planes will be written to temporary files and their location will be printed on screen. Useful for accessing the mock clusters when setting breakpoints in tests. |
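For instance, a hedged sketch of combining this with a test run; the kubeconfig path is a placeholder for whatever the test output prints:

```bash
# export the kubeconfigs of the envtest control planes while the tests run
TEST_EXPORT_KUBECONFIG=true make test

# in another shell, inspect a mock cluster via the printed kubeconfig path
kubectl --kubeconfig /tmp/<printed-kubeconfig> get namespaces
```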