1 - Organization management

Manage your organization in Greenhouse.

This section provides guides for the management of your organization in Greenhouse.

1.1 - SAP ID Service

This section provides a step-by-step walkthrough for new users to request an SAP ID Service (IDS) tenant.

NOTE: This document is only available on the SAP-internal documentation page.

1.2 - Creating an organization

Creating an organization in Greenhouse

Before you begin

This guide describes how to create an organization in Greenhouse.

During phases 1 and 2 of the roadmap, Greenhouse is only open to selected early adopters.
Please reach out to the Greenhouse team via Slack or DL Greenhouse to register and create your organization.

Creating an organization

An organization within the Greenhouse cloud operations platform is a separate unit with its own configuration, teams, and resources tailored to its requirements.
These organizations can represent different teams, departments, or projects within an enterprise, and they operate independently within the Greenhouse platform. They allow for the isolation and management of resources and configurations specific to their needs.

While Greenhouse is built on the idea of a self-service, API- and automation-driven platform, the workflow to onboard an organization to Greenhouse currently involves reaching out to the Greenhouse administrators until the official go-live.
This ensures that all prerequisites are met, the organization is configured correctly, and the administrators understand the platform capabilities.

:exclamation: Please note that the name of an organization is immutable.

Steps

  1. CAM Profile
    A CAM profile is required to configure the administrators of the organization.
    Please include the name of the profile in the message to the Greenhouse team when signing up.

  2. SAP ID service
    The authentication for the users belonging to your organization is based on the OpenID Connect (OIDC) standard.
    For SAP, we recommend using an SAP ID Service (IDS) tenant.
    Please include the parameters for your tenant in the message to the Greenhouse team when signing up.

    If you don’t have a SAP ID Service tenant yet, please refer to the SAP ID Service section for more information.

  3. Greenhouse organization
    A Greenhouse administrator applies the following configuration to the central Greenhouse cluster.
    Bear in mind that the name of the organization is immutable and will be part of all URLs.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-organization
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: oidc-config
      namespace: my-organization
    type: Opaque
    data:
      clientID: ...
      clientSecret: ...
    ---
    apiVersion: greenhouse.sap/v1alpha1
    kind: Organization
    metadata:
      name: my-organization
    spec:
      authentication:
        oidc:
          clientIDReference:
            key: clientID
            name: oidc-config
          clientSecretReference:
            key: clientSecret
            name: oidc-config
          issuer: https://...
        scim:
          baseURL: URL to the SCIM server.
          basicAuthUser:
            secret:
              name: Name of the secret in the same namespace.
              key: Key in the secret holding the user value.
          basicAuthPw:
            secret:
              name: Name of the secret in the same namespace.
              key: Key in the secret holding the password value.
      description: My new organization
      displayName: Short name of the organization
      mappedOrgAdminIdPGroup: Name of the group in the IDP that should be mapped to the organization admin role.
    

Setting up Team Membership synchronization with Greenhouse

Team Membership synchronization with Greenhouse requires access to a SCIM API.

For the TeamMemberships to be created, the Organization needs to be configured with the URL and credentials of the SCIM API. The SCIM API is used to retrieve the members of the teams in the organization based on the IdP groups set for those teams.

The IdP group of the organization admin team must be set in the mappedOrgAdminIdPGroup field of the Organization configuration; it is required for the synchronization to work. The IdP groups of the remaining teams in the organization should be set in their respective Team configurations.
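
The basicAuthUser and basicAuthPw fields reference keys of a Secret in the organization namespace. A minimal sketch of such a Secret is shown below; the name and keys are illustrative and must match the references in the Organization’s scim section.

apiVersion: v1
kind: Secret
metadata:
  name: scim-basic-auth            # illustrative; must match basicAuthUser/basicAuthPw secret.name
  namespace: my-organization
type: Opaque
stringData:
  username: <SCIM technical user>  # referenced via basicAuthUser.secret.key
  password: <SCIM password>        # referenced via basicAuthPw.secret.key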

2 - Cluster management

Manage your Kubernetes clusters via Greenhouse.

Greenhouse enables organizations to register their Kubernetes clusters within the platform, providing a centralized interface for managing and monitoring these clusters.
Once registered, users can perform tasks related to cluster management, such as deploying applications, scaling resources, and configuring access control, all within the Greenhouse platform.

This section provides guides for the management of Kubernetes clusters within Greenhouse.

2.1 - Cluster onboarding

Onboard an existing Kubernetes cluster to Greenhouse.

Content Overview

This guide describes how to onboard an existing Kubernetes cluster to your Greenhouse organization.
If you don’t have an organization yet, please reach out to the Greenhouse administrators.

While all members of an organization can see existing clusters, their management requires org-admin or cluster-admin privileges.

NOTE: The UI is currently in development. For now this guide describes the onboarding workflow via command line.

Preparation

Download the latest greenhousectl binary from here.

Onboarding a Cluster to Greenhouse will require you to authenticate to two different Kubernetes clusters via respective kubeconfig files:

  • greenhouse: The cluster your Greenhouse installation is running on. You need organization-admin or cluster-admin privileges.
  • bootstrap: The cluster you want to onboard. You need system:masters privileges.

For consistency we will refer to those two clusters by their names from now on.

You need to have the kubeconfig files for both the greenhouse and the bootstrap cluster at hand. The kubeconfig file for the greenhouse cluster can be downloaded via the Greenhouse dashboard:

Organization > Clusters > Access Greenhouse cluster.

Onboard

For accessing the bootstrap cluster, greenhousectl expects your default Kubernetes kubeconfig file and context to point to the bootstrap cluster. This can be achieved by passing the --kubeconfig flag or by setting the KUBECONFIG environment variable.

The location of the kubeconfig file to the greenhouse cluster is passed via the --greenhouse-kubeconfig flag.

greenhousectl cluster bootstrap --kubeconfig=<path/to/bootstrap-kubeconfig-file> --greenhouse-kubeconfig <path/to/greenhouse-kubeconfig-file> --org <greenhouse-organization-name> --cluster-name <name>

Since Greenhouse generates URLs which contain the cluster name, we highly recommend choosing a short cluster name. In particular for Gardener clusters, setting a short name is mandatory, because Gardener has very long cluster names, e.g. garden-greenhouse--monitoring-external.

A typical output when you run the command looks like this:

2024-02-01T09:34:55.522+0100	INFO	setup	Loaded kubeconfig	{"context": "default", "host": "https://api.greenhouse-qa.eu-nl-1.cloud.sap"}
2024-02-01T09:34:55.523+0100	INFO	setup	Loaded client kubeconfig	{"host": "https://api.monitoring.greenhouse.shoot.canary.k8s-hana.ondemand.com"}
2024-02-01T09:34:56.579+0100	INFO	setup	Bootstraping cluster	{"clusterName": "monitoring", "orgName": "ccloud"}
2024-02-01T09:34:56.639+0100	INFO	setup	created namespace	{"name": "ccloud"}
2024-02-01T09:34:56.696+0100	INFO	setup	created serviceAccount	{"name": "greenhouse"}
2024-02-01T09:34:56.810+0100	INFO	setup	created clusterRoleBinding	{"name": "greenhouse"}
2024-02-01T09:34:57.189+0100	INFO	setup	created clusterSecret	{"name": "monitoring"}
2024-02-01T09:34:58.309+0100	INFO	setup	Bootstraping cluster finished	{"clusterName": "monitoring", "orgName": "ccloud"}

After onboarding

  1. List all clusters in your Greenhouse organization:
   kubectl --namespace=<greenhouse-organization-name> get clusters
  2. Show the details of a cluster:
   kubectl --namespace=<greenhouse-organization-name> get cluster <name> -o yaml

Example:

apiVersion: greenhouse.sap/v1alpha1
kind: Cluster
metadata:
  creationTimestamp: "2024-02-07T10:23:23Z"
  finalizers:
    - greenhouse.sap/cleanup
  generation: 1
  name: monitoring
  namespace: ccloud
  resourceVersion: "282792586"
  uid: 0db6e464-ec36-459e-8a05-4ad668b57f42
spec:
  accessMode: direct
  maxTokenValidity: 72h
status:
  bearerTokenExpirationTimestamp: "2024-02-09T06:28:57Z"
  kubernetesVersion: v1.27.8
  statusConditions:
    conditions:
      - lastTransitionTime: "2024-02-09T06:28:57Z"
        status: "True"
        type: Ready

When the status.kubernetesVersion field shows the correct version of the Kubernetes cluster, the cluster was successfully bootstrapped in Greenhouse. The status.statusConditions will then contain a Condition with type=Ready and status="True".
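
A quick way to check both fields on the command line, assuming the status layout from the example above:

kubectl --namespace=<greenhouse-organization-name> get cluster <name> \
  -o jsonpath='{.status.kubernetesVersion}{"\n"}{.status.statusConditions.conditions[?(@.type=="Ready")].status}{"\n"}'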

In the remote cluster, a new namespace is created and contains some resources managed by Greenhouse. The namespace has the same name as your organization in Greenhouse.

Troubleshooting

If the bootstrapping failed, you can find details about why it failed in the Cluster.statusConditions. More precisely, there will be a condition of type=KubeConfigValid which might have hints in the message field. This is also displayed in the UI on the Cluster details view. Rerunning the onboarding command with an updated kubeconfig file will fix these issues.
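
For example, the message of that condition can be printed directly; the field path follows the status structure shown above:

kubectl --namespace=<greenhouse-organization-name> get cluster <name> \
  -o jsonpath='{.status.statusConditions.conditions[?(@.type=="KubeConfigValid")].message}'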

3 - Plugin management

Manage plugins for your Kubernetes clusters via Greenhouse.

Plugins extend the capabilities of the Greenhouse cloud operations platform, adding specific features or functionalities to tailor and enhance the platform for specific organizational needs.
These plugins are integral to Greenhouse’s extensibility, allowing users to customize their cloud operations environment and address unique requirements while operating within the Greenhouse ecosystem.

This section provides guides for the management of plugins for Kubernetes clusters within Greenhouse.

3.1 - Local Plugin Development

Develop a new Greenhouse Plugin against a local development environment.

Introduction

Let’s illustrate how to leverage Greenhouse Plugins to deploy a Helm Chart into a remote cluster within the local development environment.

This guide will walk you through the process of spinning up the local development environment, creating a new Greenhouse PluginDefinition and deploying it to a local kind cluster.

At the end of the guide you will have spun up the local development environment, onboarded a Cluster, created a PluginDefinition and deployed it as a Plugin to the onboarded Cluster.

[!NOTE] This guide assumes you already have a working Helm chart and will not cover how to create a Helm Chart from scratch. For more information on how to create a Helm Chart, please refer to the Helm documentation.

Requirements

Starting the local development environment

Follow the Local Development documentation to spin up the local Greenhouse development environment.

This will provide you with a running local Greenhouse instance, filled with some example Greenhouse resources, and the Greenhouse UI running on http://localhost:3000.

Onboarding a Cluster

In this step we will create and onboard a new Cluster to the local Greenhouse instance. The local cluster will be created utilizing kind.

In order to onboard a kind cluster, follow the onboarding a cluster section of the dev-env README.

After onboarding the cluster you should see the new Cluster in the Greenhouse UI.

Prepare Helm Chart

For this example we will use the bitnami nginx Helm Chart. The packaged chart can be downloaded with:

helm pull oci://registry-1.docker.io/bitnamicharts/nginx --destination ./

After unpacking the *.tgz file there is a folder named nginx containing the Helm Chart.

Generating a PluginDefinition from a Helm Chart

Using the files of the Helm Chart we will create a new Greenhouse PluginDefinition using the greenhousectl CLI.

greenhousectl plugin generate ./nginx ./nginx-plugin

This will create a new folder nginx-plugin containing the PluginDefinition in a nested structure.
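
The generated layout looks roughly like this (based on the path used later in this guide; the exact structure depends on the chart name and version):

nginx-plugin/
└── nginx/
    └── 17.3.2/
        └── plugindefinition.yaml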

Modifying the PluginDefinition

The generated folder contains a plugindefinition.yaml file which defines the PluginDefinition. However, a few steps are still required to make it work.

Specify the Helm Chart repository

After generating the PluginDefinition the .spec.helmChart.repository field in the plugindefinition.yaml contains a TODO comment. This field should be set to the repository where the Helm Chart is stored. For the bitnami nginx Helm Chart this would be oci://registry-1.docker.io/bitnamicharts.
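
For the bitnami nginx chart used in this guide, the helmChart section might then look like the following sketch. The field layout follows the generated plugindefinition.yaml; the version shown is the chart version pulled above.

helmChart:
  name: nginx
  repository: oci://registry-1.docker.io/bitnamicharts
  version: 17.3.2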

Specify the UI application

A PluginDefinition may specify a UI application that will be integrated into the Greenhouse UI. This tutorial does not cover how to create a UI application. Therefore the section .spec.uiApplication in the plugindefinition.yaml should be removed.

[!NOTE] The UI section of the dev-env README provides a brief introduction to developing a frontend application for Greenhouse.

Modify the Options

The PluginDefinition contains a section .spec.options which defines options that can be set when deploying the Plugin to a Cluster. These options have been generated based on the Helm Chart values.yaml file. You can modify the options to fit your needs.

In general the options are defined as follows:

options:
  - default: true
    value: abcd123
    description: automountServiceAccountToken
    name: automountServiceAccountToken
    required: false
    type: ""

  • default specifies if the option should provide a default value. If this is set to true, the value specified will be used as the default value. The Plugin can still provide a different value for this option.
  • description provides a description for the option.
  • name specifies the Helm Chart value name, as it is used within the Chart’s template files.
  • required specifies if the option is required. This will be used by the Greenhouse Controllers to determine if a Plugin is valid.
  • type specifies the type of the option. This can be any of [string, secret, bool, int, list, map]. This will be used by the Greenhouse Controllers to validate the provided value.

For this tutorial we will remove all options.

Deploying a Plugin to the Kind Cluster

After modifying the PluginDefinition we can deploy it to the local Greenhouse cluster and create a Plugin that will deploy nginx to the onboarded cluster.

  kubectl --kubeconfig=./envtest/kubeconfig apply -f ./nginx-plugin/nginx/17.3.2/plugindefinition.yaml
  plugindefinition.greenhouse.sap/nginx-17.3.2 created

The Plugin can be configured using the Greenhouse UI running on http://localhost:3000. Follow these steps to deploy a Plugin for the created PluginDefinition into the onboarded kind cluster:

  1. Navigate on the Greenhouse UI to Organization>Plugins.
  2. Click on the Add Plugin button.
  3. Select the nginx-17.3.2 PluginDefinition.
  4. Click on the Configure Plugin button.
  5. Select the cluster in the drop-down.
  6. Click on the Create Plugin button.

After the Plugin has been created the Plugin Overview page will show the status of the plugin.

The deployment can also be verified in the onboarded cluster by checking the pods in the test-org namespace of the kind cluster.

kind export kubeconfig --name remote-cluster
Set kubectl context to "kind-remote-cluster"

k get pods -n test-org
NAME                                    READY   STATUS    RESTARTS   AGE
nginx-remote-cluster-758bf47c77-pz72l   1/1     Running   0          2m11s

Development Tips

Local Helm Charts

Instead of uploading the Helm Chart to a chart repository, it is possible to load it from the filesystem of the Greenhouse container. This can be especially useful if you are developing your own chart for a PluginDefinition, as it speeds up the testing loop. The Docker compose setup mounts the dev-env/helm-charts directory and watches for any changes. This means you can point to this local chart in your plugindefinition.yaml as such:

helmChart:
  name: helm-charts/{filename}.tgz
  repository:

3.2 - Testing a Plugin

Guidelines for testing plugins contributed to the Greenhouse project.

Plugin Testing Requirements

All plugins contributed to the plugin-extensions repository should include comprehensive Helm Chart Tests using the bats/bats-detik testing framework. This ensures our plugins are robust, deployable, and catch potential issues early in the development cycle.

What is bats/bats-detik?

The bats/bats-detik framework simplifies end-to-end (e2e) Testing in Kubernetes. It combines the Bash Automated Testing System (bats) with Kubernetes-specific assertions (detik). This allows you to write test cases using natural language-like syntax, making your tests easier to read and maintain.

Implementing Tests

  1. Create a /tests folder inside your Plugin’s Helm Chart templates folder to store your test resources.

  2. ConfigMap definition:

    • Create a test-<plugin-name>-config.yaml file in the templates/tests directory to define a ConfigMap that will hold your test script.
    • This ConfigMap contains the test script run.sh that will be executed by the test Pod to run your tests.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-weight": "-5" # Installed and upgraded before the test pod
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
data:
  run.sh: |-

    #!/usr/bin/env bats

    load "/usr/lib/bats/bats-detik/utils"
    load "/usr/lib/bats/bats-detik/detik"

    DETIK_CLIENT_NAME="kubectl"

    @test "Verify successful deployment and running status of the {{ .Release.Name }}-operator pod" {
        verify "there is 1 deployment named '{{ .Release.Name }}-operator'"
        verify "there is 1 service named '{{ .Release.Name }}-operator'"
        try "at most 2 times every 5s to get pods named '{{ .Release.Name }}-operator' and verify that '.status.phase' is 'running'"
    }

    @test "Verify successful creation and bound status of {{ .Release.Name }} persistent volume claims" {
        try "at most 3 times every 5s to get persistentvolumeclaims named '{{ .Release.Name }}.*' and verify that '.status.phase' is 'Bound'"
    }

    @test "Verify successful creation and available replicas of {{ .Release.Name }} Prometheus resource" {
        try "at most 3 times every 5s to get prometheuses named '{{ .Release.Name }}' and verify that '.status.availableReplicas' is more than '0'"
    }

    @test "Verify creation of required custom resource definitions (CRDs) for {{ .Release.Name }}" {
        verify "there is 1 customresourcedefinition named 'prometheuses'"
        verify "there is 1 customresourcedefinition named 'podmonitors'"
    }
{{- end -}}

Note: You can use this guide for reference when writing your test assertions.

  3. Test Pod Definition:

    • Create a test-<plugin-name>.yaml file in the templates/tests directory to define a Pod that will run your tests.
    • This test Pod will mount the ConfigMap created in the previous step and will execute the test script run.sh.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
spec:
  serviceAccountName: {{ .Release.Name }}-test
  containers:
    - name: bats-test
      image: "{{ .Values.testFramework.image.registry}}/{{ .Values.testFramework.image.repository}}:{{ .Values.testFramework.image.tag }}"
      imagePullPolicy: {{ .Values.testFramework.image.pullPolicy }}
      command: ["bats", "-t", "/tests/run.sh"]
      volumeMounts:
        - name: tests
          mountPath: /tests
          readOnly: true
  volumes:
    - name: tests
      configMap:
        name: {{ .Release.Name }}-test
  restartPolicy: Never
{{- end -}}
  4. RBAC Permissions:

    • Create the necessary RBAC resources in the templates/tests folder with a dedicated ServiceAccount and role authorisations so that the test Pod can cover the test cases.
    • You can use the test-permissions.yaml from kube-monitoring as a reference to configure RBAC permissions for your test Pod.
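    • A minimal sketch of such resources is shown below. It roughly covers the read access needed for the assertions in the example test script above and is only a starting point, not the kube-monitoring configuration.
{{- if .Values.testFramework.enabled -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Release.Name }}-test
  namespace: {{ .Release.Namespace }}
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-weight": "-5" # Installed and upgraded before the test pod
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Release.Name }}-test
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
rules:
  # Read access for the resources verified in the example test script.
  - apiGroups: [""]
    resources: ["pods", "services", "persistentvolumeclaims"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list"]
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["prometheuses"]
    verbs: ["get", "list"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-test
  labels:
    type: integration-test
  annotations:
    "helm.sh/hook": test
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Release.Name }}-test
subjects:
  - kind: ServiceAccount
    name: {{ .Release.Name }}-test
    namespace: {{ .Release.Namespace }}
{{- end -}}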
  5. Configure the Test Framework in the Plugin’s values.yaml:
    • Add the following configuration to your Plugin’s values.yaml file:
testFramework:
  enabled: true
  image:
    registry: ghcr.io
    repository: cloudoperators/greenhouse-extensions-integration-test
    tag: main
    pullPolicy: IfNotPresent
  6. Running the Tests:

Important: Once you have completed all the steps above, you are ready to run the tests. However, before running the tests, ensure that you perform a fresh Helm installation or upgrade of your Plugin’s Helm release against your test Kubernetes cluster (for example, Minikube or Kind) by executing the following command:

# For a new installation
helm install <Release name> <chart-path>

# For an upgrade
helm upgrade <Release name> <chart-path>
  • After the Helm installation or upgrade is successful, run the tests against the same test Kubernetes cluster by executing the following command.
helm test <Release name>

Contribution Checklist

Before submitting a pull request:

  • Ensure your Plugin’s Helm Chart includes a /tests directory.
  • Verify the presence of test-<plugin-name>.yaml, test-<plugin-name>-config.yaml, and test-permissions.yaml files.
  • Test your Plugin thoroughly using helm test <release-name> and confirm that all tests pass against a test Kubernetes cluster.
  • Include a brief description of the tests in your pull request.
  • Make sure that your Plugin’s Chart Directory and the Plugin’s Upstream Chart Repository are added to this greenhouse-extensions helm test config file. This will ensure that your Plugin’s tests are automatically run in the GitHub Actions workflow when you submit a pull request for this Plugin.
  • Note that the dependencies of your Plugin’s helm chart might also have their own tests. If so, ensure that the tests of the dependencies are also passing.

Important Notes

  • Test Coverage: Aim for comprehensive test coverage to ensure your Plugin’s reliability.
  • Test Isolation: Design tests that don’t interfere with other plugins or production environments.

3.3 - Plugin deployment

Deploy a Greenhouse plugin to an existing Kubernetes cluster.

Before you begin

This guide describes how to configure and deploy a Greenhouse plugin.

apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: kube-monitoring-martin
  namespace: <organization namespace> # same namespace in remote cluster for resources
spec:
  clusterName: <name of the remote cluster> # k get cluster
  disabled: false
  displayName: <any human readable name>
  pluginDefinition: <plugin name> # k get plugin
  optionValues:
    - name: <from the plugin options>
      value: <from the plugin options>
    - ...

Exposed services

Plugins deploying Helm Charts into remote clusters support exposed services.

By adding the following label to a service in the Helm Chart, it becomes accessible from the central Greenhouse system via a service proxy:

greenhouse.sap/expose: "true"
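
For example, a Service in the Plugin’s Helm Chart could carry this label as follows; the service name, selector and ports are purely illustrative:

apiVersion: v1
kind: Service
metadata:
  name: my-exposed-service
  labels:
    greenhouse.sap/expose: "true"
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080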

Deploying a Plugin

Create the Plugin resource via the command:

kubectl --namespace=<organization name> create -f plugin.yaml

After deployment

  1. Check with kubectl --namespace=<organization name> get plugin that the Plugin has been properly created. When all components of the Plugin are successfully created, the Plugin should show the state configured.

  2. Check in the remote cluster that all plugin resources are created in the organization namespace.

URLs for exposed services

After deploying the Plugin to a remote cluster, the exposedServices section in the Plugin’s status provides an overview of the Plugin’s services that are centrally exposed. It maps the exposed URL to the service found in the manifest.

  • The URLs for exposed services are created in the following pattern: https://$cluster--$hash.$organisation.$basedomain. The $hash is computed from service--$namespace.
  • When deploying a plugin to the central cluster, the exposed services won’t have their URLs defined, which will be reflected in the Plugin’s Status.

3.4 - Managing Plugins for multiple clusters

Deploy a Greenhouse Plugin with the same configuration into multiple clusters.

Managing Plugins for multiple clusters

This guide describes how to configure and deploy a Greenhouse Plugin with the same configuration into multiple clusters.

The PluginPreset resource is used to create and deploy Plugins with an identical configuration into multiple clusters. The list of clusters the Plugins will be deployed to is determined by a LabelSelector.

As a result, whenever a cluster that matches the ClusterSelector is onboarded or offboarded, the Controller for the PluginPresets will take care of the Plugin lifecycle. This means creating or deleting the Plugin for the respective cluster.

The same validation applies to the PluginPreset as to the Plugin. This includes immutable PluginDefinition and ReleaseNamespace fields, as well as the validation of the OptionValues against the PluginDefinition.

In case the PluginPreset is updated, all of the Plugin instances that are managed by the PluginPreset will be updated as well. Each Plugin instance that is created from a PluginPreset has a label greenhouse.sap/pluginpreset: <PluginPreset name>. Also, the name of the Plugin follows the scheme <PluginPreset name>-<cluster name>.

Changes that are done directly on a Plugin which was created from a PluginPreset will be overwritten immediately by the PluginPreset Controller. All changes must be performed on the PluginPreset itself. If a Plugin already exists with the same name as the one the PluginPreset would create, this Plugin will be ignored in subsequent reconciliations.

Example PluginPreset

apiVersion: greenhouse.sap/v1alpha1
kind: PluginPreset
metadata:
  name: kube-monitoring-preset
  namespace: <organization namespace>
spec:
  plugin: # this embeds the PluginSpec
    displayName: <any human readable name>
    pluginDefinition: <PluginDefinition name> # k get plugindefinition
    releaseNamespace: <namespace> # namespace where the plugin is deployed to on the remote cluster. Will be created if not exists
    optionValues:
      - name: <from the PluginDefinition options>
        value: <from the PluginDefinition options>
      - ..
  clusterSelector: # LabelSelector for the clusters the Plugin should be deployed to
    matchLabels:
      <label-key>: <label-value>
  clusterOptionOverrides: # allows you to override specific options in a given cluster
    - clusterName: <cluster name where we want to override values>
      overrides:
        - name: <option name to override>
          value: <new value>
        - ..
    - ..
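
Given the label scheme described above, the Plugins managed by this PluginPreset can be listed with a label selector, for example:

kubectl --namespace=<organization namespace> get plugins -l greenhouse.sap/pluginpreset=kube-monitoring-preset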

3.5 - Plugin Catalog

Explore the catalog of Greenhouse PluginDefinitions

Before you begin

This guide describes how to explore the catalog of Greenhouse PluginDefinitions.

While all members of an organization can see the Plugin catalog, enabling, disabling and configuring PluginDefinitions for an organization requires organization admin privileges.

Exploring the PluginDefinition catalog

The PluginDefinition resource describes the backend and frontend components as well as mandatory configuration options of a Greenhouse extension.
While the PluginDefinition catalog is managed by the Greenhouse administrators and the respective domain experts, administrators of an organization can configure and tailor Plugins to their specific requirements.

NOTE: The UI also provides a preliminary catalog of Plugins under Organization > Plugins > Add Plugin.
  1. Run the following command to see all available PluginDefinitions.

    $ kubectl get plugindefinition
    
    NAME                      VERSION   DESCRIPTION                                                                                                  AGE
    cert-manager              1.1.0     Automated certificate management in Kubernetes                                                               182d
    digicert-issuer           1.2.0     Extensions to the cert-manager for DigiCert support                                                          182d
    disco                     1.0.0     Automated DNS management using the Designate Ingress CNAME operator (DISCO)                                  179d
    doop                      1.0.0     Holistic overview on Gatekeeper policies and violations                                                      177d
    external-dns              1.0.0     The kubernetes-sigs/external-dns plugin.                                                                     186d
    heureka                   1.0.0     Plugin for Heureka, the patch management system.                                                             177d
    ingress-nginx             1.1.0     Ingress NGINX controller                                                                                     187d
    kube-monitoring           1.0.1     Kubernetes native deployment and management of Prometheus, Alertmanager and related monitoring components.   51d
    prometheus-alertmanager   1.0.0     Prometheus alertmanager                                                                                      60d
    supernova                 1.0.0     Supernova, the holistic alert management UI                                                                  187d
    teams2slack               1.1.0     Manage Slack handles and channels based on Greenhouse teams and their members                                115d
    

4 - Team management

Manage teams of your organization via Greenhouse.

A team is a group of users with shared responsibilities for managing and operating cloud resources within a Greenhouse organization.
These teams enable efficient collaboration, access control, and task assignment, allowing organizations to effectively organize their users and streamline cloud operations within the Greenhouse platform.

This section provides guides for the management of teams within an organization.

4.1 - Role-based access control

Creating and managing roles and permissions in Greenhouse.

Contents

Before you begin

This guide describes how to manage roles and permissions in Greenhouse with the help of TeamRoles and TeamRoleBindings.

While all members of an organization can see the permissions configured with TeamRoles & TeamRoleBindings, configuration of these requires OrganizationAdmin privileges.

Greenhouse Team RBAC user guide

Role-Based Access Control (RBAC) in Greenhouse allows organization administrators to regulate access to Kubernetes resources in onboarded Clusters based on the roles of individual users within an Organization. Within Greenhouse, the RBAC on remote Clusters is managed using TeamRole and TeamRoleBinding. These two Custom Resource Definitions allow for fine-grained control over the permissions of each Team within each Cluster and Namespace.

Overview

  • TeamRole: Defines a set of permissions that can be assigned to teams.
  • TeamRoleBinding: Assigns a TeamRole to a specific Team for certain Clusters and (optionally) Namespaces.

Defining TeamRoles

TeamRoles define what actions a team can perform within the Kubernetes cluster. Common roles, including the cluster-admin listed below, are pre-defined within each organization.

Example

This TeamRole named pod-read grants read access to Pods.

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
  name: pod-read
spec:
  rules:
    - apiGroups:
        - ""
      resources:
        - "pods"
      verbs:
        - "get"
        - "list"

Seeded default TeamRoles

Greenhouse provides a set of default TeamRoles that are seeded to all clusters:

| TeamRole | Description | APIGroups | Resources | Verbs |
| --- | --- | --- | --- | --- |
| cluster-admin | Full privileges | * | * | * |
| cluster-viewer | get, list and watch all resources | * | * | get, list, watch |
| cluster-developer | Aggregated role. Greenhouse aggregates the application-developer and the cluster-viewer. Further TeamRoles can be aggregated. | | | |
| application-developer | Set of permissions on pods, deployments and statefulsets necessary to develop applications on k8s | apps | deployments, statefulsets | patch |
| | | "" | pods, pods/portforward, pods/eviction, pods/proxy, pods/log, pods/status | get, list, watch, create, update, patch, delete |
| node-maintainer | get and patch nodes | "" | nodes | get, patch |
| namespace-creator | All permissions on namespaces | "" | namespaces | * |

Defining TeamRoleBindings

TeamRoleBindings define the permissions of a Greenhouse Team within Clusters by linking to a specific TeamRole. TeamRoleBindings have a simple specification that links a Team, a TeamRole, one or more Clusters and optionally one or more Namespaces together. Once the TeamRoleBinding is created, the Team will have the permissions defined in the TeamRole within the specified Clusters and Namespaces. This allows for fine-grained control over the permissions of each Team within each Cluster.

The TeamRoleBinding Controller within Greenhouse deploys rbacv1 resources to the targeted Clusters. The referenced TeamRole is created as a rbacv1.ClusterRole. In case the TeamRoleBinding references a Namespace, the Controller will create a rbacv1.RoleBinding which links the Team with the rbacv1.ClusterRole. In case no Namespace is referenced, the Controller will create a rbacv1.ClusterRoleBinding instead.
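
As a rough illustration of the namespaced case, the resources created on a target cluster could look like the following sketch. All names and the subject mapping are purely illustrative; the actual naming and subjects are determined by the Greenhouse controller.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <generated name for the TeamRole>          # rules are copied from the TeamRole
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <generated name for the TeamRoleBinding>
  namespace: <referenced namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <generated name for the TeamRole>
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: <group representing the Team members>    # assumption: the Team is mapped to a group subject
# Without a Namespace on the TeamRoleBinding, a ClusterRoleBinding with the same roleRef is created instead.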

Assigning TeamRoles to Teams on a single Cluster

Roles are assigned to teams through the TeamRoleBinding configuration, which links teams to their respective roles within specific clusters.

This TeamRoleBinding assigns the pod-read TeamRole to the Team named my-team in the Cluster named my-cluster.

Example: team-rolebindings.yaml

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRoleBinding
metadata:
  name: my-team-read-access
spec:
  teamRef: my-team
  roleRef: pod-read
  clusterName: my-cluster

Assigning TeamRoles to Teams on multiple Clusters

It is also possible to use a LabelSelector to assign TeamRoleBindings to multiple Clusters at once.

This TeamRoleBinding assigns the pod-read TeamRole to the Team named my-team in all Clusters with the label environment: production.

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRoleBinding
metadata:
  name: production-cluster-admins
spec:
  teamRef: my-team
  roleRef: pod-read
  clusterSelector:
    matchLabels:
      environment: production

Aggregating TeamRoles

It is possible with RBAC to aggregate rbacv1.ClusterRoles. This is also supported for TeamRoles. By specifying .spec.labels on a TeamRole, the resulting ClusterRole on the target cluster will have the same labels set. It is then possible to aggregate multiple ClusterRole resources by using a rbacv1.AggregationRule. This can be specified on a TeamRole by setting .spec.aggregationRule.

More details on the concept of Aggregated ClusterRoles can be found in the Kubernetes documentation: Aggregated ClusterRoles

[!NOTE] A TeamRole is only created on a cluster if it is referenced by a TeamRoleBinding. If a TeamRole is not referenced by a TeamRoleBinding it will not be created on any target cluster. A TeamRoleBinding referencing a TeamRole with an aggregationRule will only provide the correct access, if there is at least one TeamRoleBinding referencing a TeamRole with the corresponding label deployed to the same cluster.

The following example shows how an aggregationRule can be used with TeamRoles and TeamRoleBindings.

This TeamRole specifies .spec.labels. The labels will be applied to the resulting ClusterRole on the target cluster.

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
  name: pod-read
spec:
  labels:
    aggregate: "true"
  rules:
    - apiGroups:
        - ""
      resources:
        - "pods"
      verbs:
        - "get"
        - "list"

This TeamRoleBinding assigns the pod-read TeamRole to the Team named my-team in all Clusters with the label environment: production.

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRoleBinding
metadata:
  name: production-pod-read
spec:
  teamRef: my-team
  roleRef: pod-read
  clusterSelector:
    matchLabels:
      environment: production

This creates another TeamRole and TeamRoleBinding including the same labels as above.

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
  name: pod-edit
spec:
  labels:
    aggregate: "true"
  rules:
    - apiGroups:
        - ""
      resources:
        - "pod"
      verbs:
        - "update"
        - "patch"
---
apiVersion: greenhouse.sap/v1alpha1
kind: TeamRoleBinding
metadata:
  name: production-pod-edit
spec:
  teamRef: my-team
  roleRef: pod-edit
  clusterSelector:
    matchLabels:
      environment: production

This TeamRole has an aggregationRule set. This aggregationRule will be added to the ClusterRole created on the target clusters. With the aggregationRule set it will aggregate the ClusterRoles created by the TeamRoles with the label aggregate: "true". The team will have the permissions of both TeamRoles and will be able to get, list, update and patch Pods.

apiVersion: greenhouse.sap/v1alpha1
kind: TeamRole
metadata:
  name: aggregated-role
spec:
  aggregationRule:
    clusterRoleSelectors:
    - matchLabels:
        "aggregate": "true"
---
apiVersion: greenhouse.sap/v1alpha1
kind: TeamRoleBinding
metadata:
  name: aggregated-rolebinding
spec:
  teamRef: operators
  roleRef: aggregated-role
  clusterSelector:
    matchLabels:
      environment: production

4.2 - Team creation

Create a team within your organization

Before you begin

This guide describes how to create a team in your Greenhouse organization.

While all members of an organization can see existing teams, their management requires organization admin privileges.

Creating a team

The team resource is used to structure members of your organization and assign fine-grained access and permission levels.

Each Team must be backed by a group in the identity provider (IdP) of the Organization.

  • The IdP group should be set in the mappedIdPGroup field of the Team configuration.
  • This, along with the SCIM API configured in the Organization, allows for the synchronization of TeamMemberships with Greenhouse.
NOTE: The UI is currently in development. For now this guide describes the workflow via the command line.
  1. To create a new Team, apply the Team resource to your organization’s namespace.
    It should look similar to this example:
    cat <<EOF | kubectl apply -f -
    apiVersion: greenhouse.sap/v1alpha1
    kind: Team
    metadata:
      name: <name>
    spec:
      description: My new team
      mappedIdPGroup: <IdP group name>
    EOF
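
The created Team can then be listed like any other resource in your organization’s namespace, for example:

kubectl --namespace=<organization name> get teams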