Reference

Technical reference documentation for Greenhouse resources

This section contains reference documentation for Greenhouse.

1 - API

Technical reference documentation for Greenhouse API resources

Packages:

greenhouse.sap/v1alpha1

Resource Types:

    Authentication

    (Appears on: OrganizationSpec)

    Field | Description
    oidc
    OIDCConfig

    OIDCConfig configures the OIDC provider.

    scim
    SCIMConfig

    SCIMConfig configures the SCIM client.

    Cluster

    Cluster is the Schema for the clusters API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    ClusterSpec


    accessMode
    ClusterAccessMode

    AccessMode configures how the cluster is accessed from the Greenhouse operator.

    kubeConfig
    ClusterKubeConfig

    KubeConfig contains specific values for KubeConfig for the cluster.

    status
    ClusterStatus
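
    For orientation, a minimal Cluster manifest combining the fields above could look as follows. All names are placeholders, and the accessMode value shown ("direct") is an assumption; check the ClusterAccessMode values supported by your Greenhouse installation.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Cluster
    metadata:
      name: example-cluster
    spec:
      # Assumed access mode; verify the ClusterAccessMode values supported by your installation.
      accessMode: direct
      kubeConfig:
        # Maximum token validity in hours.
        maxTokenValidity: 72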

    ClusterAccessMode (string alias)

    (Appears on: ClusterSpec)

    ClusterAccessMode configures the access mode to the customer cluster.

    ClusterConditionType (string alias)

    ClusterConditionType is a valid condition of a cluster.

    ClusterKubeConfig

    (Appears on: ClusterSpec)

    ClusterKubeConfig configures kube config values.

    Field | Description
    maxTokenValidity
    int32

    MaxTokenValidity specifies the maximum duration for which a token remains valid in hours.

    ClusterKubeconfig

    ClusterKubeconfig is the Schema for the clusterkubeconfigs API. ObjectMeta.OwnerReferences is used to link the ClusterKubeconfig to the Cluster. ObjectMeta.Generation is used to detect changes in the ClusterKubeconfig and sync local kubeconfig files. ObjectMeta.Name is designed to be the same as the Cluster name.

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    ClusterKubeconfigSpec


    kubeconfig
    ClusterKubeconfigData
    status
    ClusterKubeconfigStatus

    ClusterKubeconfigAuthInfo

    (Appears on: ClusterKubeconfigAuthInfoItem)

    Field | Description
    auth-provider
    k8s.io/client-go/tools/clientcmd/api.AuthProviderConfig
    client-certificate-data
    []byte
    client-key-data
    []byte

    ClusterKubeconfigAuthInfoItem

    (Appears on: ClusterKubeconfigData)

    Field | Description
    name
    string
    user
    ClusterKubeconfigAuthInfo

    ClusterKubeconfigCluster

    (Appears on: ClusterKubeconfigClusterItem)

    Field | Description
    server
    string
    certificate-authority-data
    []byte

    ClusterKubeconfigClusterItem

    (Appears on: ClusterKubeconfigData)

    Field | Description
    name
    string
    cluster
    ClusterKubeconfigCluster

    ClusterKubeconfigContext

    (Appears on: ClusterKubeconfigContextItem)

    Field | Description
    cluster
    string
    user
    string
    namespace
    string

    ClusterKubeconfigContextItem

    (Appears on: ClusterKubeconfigData)

    Field | Description
    name
    string
    context
    ClusterKubeconfigContext

    ClusterKubeconfigData

    (Appears on: ClusterKubeconfigSpec)

    ClusterKubeconfigData stores the kubeconfig data ready for use with kubectl or other local tooling. It is a simplified version of clientcmdapi.Config: https://pkg.go.dev/k8s.io/client-go/tools/clientcmd/api#Config

    Field | Description
    kind
    string
    apiVersion
    string
    clusters
    []ClusterKubeconfigClusterItem
    users
    []ClusterKubeconfigAuthInfoItem
    contexts
    []ClusterKubeconfigContextItem
    current-context
    string
    preferences
    ClusterKubeconfigPreferences

    ClusterKubeconfigPreferences

    (Appears on: ClusterKubeconfigData)

    ClusterKubeconfigSpec

    (Appears on: ClusterKubeconfig)

    ClusterKubeconfigSpec stores the kubeconfig data for the cluster. The idea is to use the kubeconfig data locally with minimum effort (with local tools or plain kubectl): kubectl get cluster-kubeconfig $NAME -o yaml | yq -y .spec.kubeconfig

    Field | Description
    kubeconfig
    ClusterKubeconfigData

    ClusterKubeconfigStatus

    (Appears on: ClusterKubeconfig)

    Field | Description
    statusConditions
    StatusConditions

    ClusterOptionOverride

    (Appears on: PluginPresetSpec)

    ClusterOptionOverride defines which plugin option should be overridden in which cluster.

    Field | Description
    clusterName
    string
    overrides
    []PluginOptionValue

    ClusterSpec

    (Appears on: Cluster)

    ClusterSpec defines the desired state of the Cluster.

    Field | Description
    accessMode
    ClusterAccessMode

    AccessMode configures how the cluster is accessed from the Greenhouse operator.

    kubeConfig
    ClusterKubeConfig

    KubeConfig contains specific values for KubeConfig for the cluster.

    ClusterStatus

    (Appears on: Cluster)

    ClusterStatus defines the observed state of Cluster

    Field | Description
    kubernetesVersion
    string

    KubernetesVersion reflects the detected Kubernetes version of the cluster.

    bearerTokenExpirationTimestamp
    Kubernetes meta/v1.Time

    BearerTokenExpirationTimestamp reflects the expiration timestamp of the bearer token used to access the cluster.

    statusConditions
    StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Cluster.

    nodes
    map[string]./pkg/apis/greenhouse/v1alpha1.NodeStatus

    Nodes provides a map of cluster node names to node statuses

    Condition

    (Appears on: PropagationStatus, StatusConditions)

    Condition contains additional information on the state of a resource.

    Field | Description
    type
    ConditionType

    Type of the condition.

    status
    Kubernetes meta/v1.ConditionStatus

    Status of the condition.

    reason
    ConditionReason

    Reason is a one-word, CamelCase reason for the condition’s last transition.

    lastTransitionTime
    Kubernetes meta/v1.Time

    LastTransitionTime is the last time the condition transitioned from one status to another.

    message
    string

    Message is an optional human readable message indicating details about the last transition.

    ConditionReason (string alias)

    (Appears on: Condition)

    ConditionReason is a valid reason for a condition of a resource.

    ConditionType (string alias)

    (Appears on: Condition)

    ConditionType is a valid condition of a resource.

    HelmChartReference

    (Appears on: PluginDefinitionSpec, PluginStatus)

    HelmChartReference references a Helm Chart in a chart repository.

    Field | Description
    name
    string

    Name of the HelmChart chart.

    repository
    string

    Repository of the HelmChart chart.

    version
    string

    Version of the HelmChart chart.

    HelmReleaseStatus

    (Appears on: PluginStatus)

    HelmReleaseStatus reflects the status of a Helm release.

    Field | Description
    status
    string

    Status is the status of a HelmChart release.

    firstDeployed
    Kubernetes meta/v1.Time

    FirstDeployed is the timestamp of the first deployment of the release.

    lastDeployed
    Kubernetes meta/v1.Time

    LastDeployed is the timestamp of the last deployment of the release.

    NodeStatus

    (Appears on: ClusterStatus)

    Field | Description
    statusConditions
    StatusConditions

    We mirror the node conditions here for faster reference

    ready
    bool

    Fast track to the node ready condition.

    OIDCConfig

    (Appears on: Authentication)

    Field | Description
    issuer
    string

    Issuer is the URL of the identity service.

    redirectURI
    string

    RedirectURI is the redirect URI. If none is specified, the Greenhouse ID proxy will be used.

    clientIDReference
    SecretKeyReference

    ClientIDReference references the Kubernetes secret containing the client id.

    clientSecretReference
    SecretKeyReference

    ClientSecretReference references the Kubernetes secret containing the client secret.

    Organization

    Organization is the Schema for the organizations API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    OrganizationSpec


    displayName
    string

    DisplayName is an optional name for the organization to be displayed in the Greenhouse UI. Defaults to a normalized version of metadata.name.

    authentication
    Authentication

    Authentication configures the organization's authentication mechanism.

    description
    string

    Description provides additional details of the organization.

    mappedOrgAdminIdPGroup
    string

    MappedOrgAdminIDPGroup is the IDP group ID identifying org admins

    status
    OrganizationStatus
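
    A hedged sketch of an Organization tying the above fields together; the issuer, group IDs and secret names are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Organization
    metadata:
      name: example-org
    spec:
      displayName: Example Organization
      description: Example organization used for illustration.
      mappedOrgAdminIdPGroup: EXAMPLE_ORG_ADMINS # placeholder IdP group ID
      authentication:
        oidc:
          issuer: https://idp.example.com
          # redirectURI omitted; the Greenhouse ID proxy is used by default.
          clientIDReference:
            name: example-oidc-secret # placeholder secret
            key: clientID
          clientSecretReference:
            name: example-oidc-secret
            key: clientSecret
        scim:
          baseURL: https://scim.example.com
          basicAuthUser:
            secret:
              name: example-scim-secret # placeholder secret
              key: username
          basicAuthPw:
            secret:
              name: example-scim-secret
              key: password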

    OrganizationSpec

    (Appears on: Organization)

    OrganizationSpec defines the desired state of Organization

    Field | Description
    displayName
    string

    DisplayName is an optional name for the organization to be displayed in the Greenhouse UI. Defaults to a normalized version of metadata.name.

    authentication
    Authentication

    Authentication configures the organization's authentication mechanism.

    description
    string

    Description provides additional details of the organization.

    mappedOrgAdminIdPGroup
    string

    MappedOrgAdminIDPGroup is the IDP group ID identifying org admins

    OrganizationStatus

    (Appears on: Organization)

    OrganizationStatus defines the observed state of an Organization

    Field | Description
    statusConditions
    StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Organization.

    Plugin

    Plugin is the Schema for the plugins API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginSpec


    pluginDefinition
    string

    PluginDefinition is the name of the PluginDefinition this instance is for.

    displayName
    string

    DisplayName is an optional name for the Plugin to be displayed in the Greenhouse UI. This is especially helpful to distinguish multiple instances of a PluginDefinition in the same context. Defaults to a normalized version of metadata.name.

    disabled
    bool

    Disabled indicates that the plugin is administratively disabled.

    optionValues
    []PluginOptionValue

    Values are the values for a PluginDefinition instance.

    clusterName
    string

    ClusterName is the name of the cluster the plugin is deployed to. If not set, the plugin is deployed to the greenhouse cluster.

    releaseNamespace
    string

    ReleaseNamespace is the namespace in the remote cluster to which the backend is deployed. Defaults to the Greenhouse managed namespace if not set.

    status
    PluginStatus

    PluginDefinition

    PluginDefinition is the Schema for the PluginDefinitions API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginDefinitionSpec


    displayName
    string

    DisplayName provides a human-readable label for the pluginDefinition.

    description
    string

    Description provides additional details of the pluginDefinition.

    helmChart
    HelmChartReference

    HelmChart specifies where the Helm Chart for this pluginDefinition can be found.

    uiApplication
    UIApplicationReference

    UIApplication specifies a reference to a UI application

    options
    []PluginOption

    RequiredValues is a list of values required to create an instance of this PluginDefinition.

    version
    string

    Version of this pluginDefinition

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI. Defaults to alphabetical sorting if not provided or on conflict.

    icon
    string

    Icon specifies the icon to be used for this plugin in the Greenhouse UI. Icons can be either: - A string representing a juno icon in camel case from this list: https://github.com/sapcc/juno/blob/main/libs/juno-ui-components/src/components/Icon/Icon.component.js#L6-L52 - A publicly accessible image reference to a .png file. Will be displayed 100x100px

    docMarkDownUrl
    string

    DocMarkDownUrl specifies the URL to the markdown documentation file for this plugin. Source needs to allow all CORS origins.

    status
    PluginDefinitionStatus

    PluginDefinitionSpec

    (Appears on: PluginDefinition)

    PluginDefinitionSpec defines the desired state of PluginDefinitionSpec

    Field | Description
    displayName
    string

    DisplayName provides a human-readable label for the pluginDefinition.

    description
    string

    Description provides additional details of the pluginDefinition.

    helmChart
    HelmChartReference

    HelmChart specifies where the Helm Chart for this pluginDefinition can be found.

    uiApplication
    UIApplicationReference

    UIApplication specifies a reference to a UI application

    options
    []PluginOption

    RequiredValues is a list of values required to create an instance of this PluginDefinition.

    version
    string

    Version of this pluginDefinition

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI. Defaults to alphabetical sorting if not provided or on conflict.

    icon
    string

    Icon specifies the icon to be used for this plugin in the Greenhouse UI. Icons can be either: - A string representing a juno icon in camel case from this list: https://github.com/sapcc/juno/blob/main/libs/juno-ui-components/src/components/Icon/Icon.component.js#L6-L52 - A publicly accessible image reference to a .png file. Will be displayed 100x100px

    docMarkDownUrl
    string

    DocMarkDownUrl specifies the URL to the markdown documentation file for this plugin. Source needs to allow all CORS origins.

    PluginDefinitionStatus

    (Appears on: PluginDefinition)

    PluginDefinitionStatus defines the observed state of PluginDefinition

    PluginOption

    (Appears on: PluginDefinitionSpec)

    Field | Description
    name
    string

    Name/Key of the config option.

    default
    k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON
    (Optional)

    Default provides a default value for the option

    description
    string

    Description provides a human-readable text for the value as shown in the UI.

    displayName
    string

    DisplayName provides a human-readable label for the configuration option

    required
    bool

    Required indicates that this config option is required

    type
    PluginOptionType

    Type of this configuration option.

    regex
    string

    Regex specifies a match rule for validating configuration options.

    PluginOptionType (string alias)

    (Appears on: PluginOption)

    PluginOptionType specifies the type of PluginOption.

    PluginOptionValue

    (Appears on: ClusterOptionOverride, PluginSpec)

    PluginOptionValue is the value for a PluginOption.

    Field | Description
    name
    string

    Name of the values.

    value
    k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON

    Value is the actual value in plain text.

    valueFrom
    ValueFromSource

    ValueFrom references a potentially confidential value in another source.

    PluginPreset

    PluginPreset is the Schema for the PluginPresets API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginPresetSpec


    plugin
    PluginSpec

    PluginSpec is the spec of the plugin to be deployed by the PluginPreset.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the clusters the plugin bundle should be deployed to.

    clusterOptionOverrides
    []ClusterOptionOverride

    ClusterOptionOverrides define plugin option values to override by the PluginPreset

    status
    PluginPresetStatus
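
    As a sketch, a PluginPreset that rolls out a hypothetical PluginDefinition to all clusters carrying a given label and overrides one option for a single cluster could look like this; all names, labels and option keys are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: PluginPreset
    metadata:
      name: example-preset
    spec:
      plugin:
        pluginDefinition: example-plugin # placeholder PluginDefinition name
        releaseNamespace: example-namespace
        optionValues:
          - name: example.option # placeholder option key
            value: default-value
      clusterSelector:
        matchLabels:
          environment: example
      clusterOptionOverrides:
        - clusterName: example-cluster
          overrides:
            - name: example.option
              value: cluster-specific-value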

    PluginPresetSpec

    (Appears on: PluginPreset)

    PluginPresetSpec defines the desired state of PluginPreset

    Field | Description
    plugin
    PluginSpec

    PluginSpec is the spec of the plugin to be deployed by the PluginPreset.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the clusters the plugin bundle should be deployed to.

    clusterOptionOverrides
    []ClusterOptionOverride

    ClusterOptionOverrides define plugin option values to override by the PluginPreset

    PluginPresetStatus

    (Appears on: PluginPreset)

    PluginPresetStatus defines the observed state of PluginPreset

    Field | Description
    statusConditions
    StatusConditions

    StatusConditions contain the different conditions that constitute the status of the PluginPreset.

    PluginSpec

    (Appears on: Plugin, PluginPresetSpec)

    PluginSpec defines the desired state of Plugin

    Field | Description
    pluginDefinition
    string

    PluginDefinition is the name of the PluginDefinition this instance is for.

    displayName
    string

    DisplayName is an optional name for the Plugin to be displayed in the Greenhouse UI. This is especially helpful to distinguish multiple instances of a PluginDefinition in the same context. Defaults to a normalized version of metadata.name.

    disabled
    bool

    Disabled indicates that the plugin is administratively disabled.

    optionValues
    []PluginOptionValue

    Values are the values for a PluginDefinition instance.

    clusterName
    string

    ClusterName is the name of the cluster the plugin is deployed to. If not set, the plugin is deployed to the greenhouse cluster.

    releaseNamespace
    string

    ReleaseNamespace is the namespace in the remote cluster to which the backend is deployed. Defaults to the Greenhouse managed namespace if not set.

    PluginStatus

    (Appears on: Plugin)

    PluginStatus defines the observed state of Plugin

    Field | Description
    helmReleaseStatus
    HelmReleaseStatus

    HelmReleaseStatus reflects the status of the latest HelmChart release. This is only configured if the pluginDefinition is backed by HelmChart.

    version
    string

    Version contains the latest pluginDefinition version the config was last applied with successfully.

    helmChart
    HelmChartReference

    HelmChart contains a reference to the Helm chart used for the deployed pluginDefinition version.

    uiApplication
    UIApplicationReference

    UIApplication contains a reference to the frontend that is used for the deployed pluginDefinition version.

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI.

    description
    string

    Description provides additional details of the plugin.

    exposedServices
    map[string]./pkg/apis/greenhouse/v1alpha1.Service

    ExposedServices provides an overview of the Plugin's services that are centrally exposed. It maps the exposed URL to the service found in the manifest.

    statusConditions
    StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Plugin.

    PropagationStatus

    (Appears on: TeamRoleBindingStatus)

    PropagationStatus defines the observed state of the TeamRoleBinding’s associated rbacv1 resources on a Cluster

    Field | Description
    clusterName
    string

    ClusterName is the name of the cluster the rbacv1 resources are created on.

    condition
    Condition

    Condition is the overall Status of the rbacv1 resources created on the cluster

    SCIMConfig

    (Appears on: Authentication)

    Field | Description
    baseURL
    string

    URL to the SCIM server.

    basicAuthUser
    ValueFromSource

    User to be used for basic authentication.

    basicAuthPw
    ValueFromSource

    Password to be used for basic authentication.

    SecretKeyReference

    (Appears on: OIDCConfig, ValueFromSource)

    SecretKeyReference specifies the secret and key containing the value.

    Field | Description
    name
    string

    Name of the secret in the same namespace.

    key
    string

    Key in the secret to select the value from.

    Service

    (Appears on: PluginStatus)

    Service references a Kubernetes service of a Plugin.

    Field | Description
    namespace
    string

    Namespace is the namespace of the service in the target cluster.

    name
    string

    Name is the name of the service in the target cluster.

    port
    int32

    Port is the port of the service.

    protocol
    string

    Protocol is the protocol of the service.

    StatusConditions

    (Appears on: ClusterKubeconfigStatus, ClusterStatus, NodeStatus, OrganizationStatus, PluginPresetStatus, PluginStatus, TeamMembershipStatus, TeamRoleBindingStatus, TeamStatus)

    A StatusConditions contains a list of conditions. Only one condition of a given type may exist in the list.

    Field | Description
    conditions
    []Condition

    Team

    Team is the Schema for the teams API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamSpec


    description
    string

    Description provides additional details of the team.

    mappedIdPGroup
    string

    IdP group id matching team.

    joinUrl
    string

    URL to join the IdP group.

    status
    TeamStatus
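
    A minimal Team sketch based on the fields above; the group ID and URL are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Team
    metadata:
      name: example-team
    spec:
      description: Example team used for illustration.
      mappedIdPGroup: EXAMPLE_TEAM_GROUP # placeholder IdP group ID
      joinUrl: https://idp.example.com/groups/example-team # placeholder URL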

    TeamMembership

    TeamMembership is the Schema for the teammemberships API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamMembershipSpec


    members
    []User
    (Optional)

    Members list users that are part of a team.

    status
    TeamMembershipStatus
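
    For illustration, a TeamMembership listing two placeholder users could look like this; in practice the member list is typically kept in sync automatically (see lastSyncedTime in the status).

    apiVersion: greenhouse.sap/v1alpha1
    kind: TeamMembership
    metadata:
      name: example-team
    spec:
      members:
        - id: I000001 # placeholder user ID
          firstName: Jane
          lastName: Doe
          email: jane.doe@example.com
        - id: I000002
          firstName: John
          lastName: Doe
          email: john.doe@example.com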

    TeamMembershipSpec

    (Appears on: TeamMembership)

    TeamMembershipSpec defines the desired state of TeamMembership

    Field | Description
    members
    []User
    (Optional)

    Members list users that are part of a team.

    TeamMembershipStatus

    (Appears on: TeamMembership)

    TeamMembershipStatus defines the observed state of TeamMembership

    Field | Description
    lastSyncedTime
    Kubernetes meta/v1.Time
    (Optional)

    LastSyncedTime is the time when the membership was last synced.

    lastUpdateTime
    Kubernetes meta/v1.Time
    (Optional)

    LastChangedTime is the time when the membership was last actually changed.

    statusConditions
    StatusConditions

    StatusConditions contain the different conditions that constitute the status of the TeamMembership.

    TeamRole

    TeamRole is the Schema for the TeamRoles API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamRoleSpec


    rules
    []Kubernetes rbac/v1.PolicyRule

    Rules is a list of rbacv1.PolicyRules used on a managed RBAC (Cluster)Role

    aggregationRule
    Kubernetes rbac/v1.AggregationRule

    AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole on the remote cluster

    labels
    map[string]string

    Labels are applied to the ClusterRole created on the remote cluster. This allows using TeamRoles as part of AggregationRules by other TeamRoles

    status
    TeamRoleStatus
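
    A sketch of a TeamRole granting read access to pods; the aggregation label is a placeholder showing how other TeamRoles could select it.

    apiVersion: greenhouse.sap/v1alpha1
    kind: TeamRole
    metadata:
      name: example-pod-reader
    spec:
      rules:
        - apiGroups: [""]
          resources: ["pods"]
          verbs: ["get", "list", "watch"]
      labels:
        # Placeholder label that other TeamRoles could reference in an aggregationRule.
        aggregate.example.com/pod-reader: "true"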

    TeamRoleBinding

    TeamRoleBinding is the Schema for the rolebindings API

    Field | Description
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamRoleBindingSpec


    teamRoleRef
    string

    TeamRoleRef references a Greenhouse TeamRole by name

    teamRef
    string

    TeamRef references a Greenhouse Team by name

    clusterName
    string

    ClusterName is the name of the cluster the rbacv1 resources are created on.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the Clusters the TeamRoleBinding should be deployed to.

    namespaces
    []string

    Namespaces is the immutable list of namespaces in the Greenhouse Clusters to apply the RoleBinding to. If empty, a ClusterRoleBinding will be created on the remote cluster, otherwise a RoleBinding per namespace.

    status
    TeamRoleBindingStatus
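
    A sketch of a TeamRoleBinding that applies a TeamRole to a Team on all clusters matching a label; all names and labels are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: TeamRoleBinding
    metadata:
      name: example-team-pod-reader
    spec:
      teamRoleRef: example-pod-reader # name of a TeamRole
      teamRef: example-team # name of a Team
      clusterSelector:
        matchLabels:
          environment: example
      # With namespaces set, a RoleBinding is created per namespace;
      # an empty list results in a ClusterRoleBinding on the remote cluster.
      namespaces:
        - example-namespace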

    TeamRoleBindingSpec

    (Appears on: TeamRoleBinding)

    TeamRoleBindingSpec defines the desired state of a TeamRoleBinding

    Field | Description
    teamRoleRef
    string

    TeamRoleRef references a Greenhouse TeamRole by name

    teamRef
    string

    TeamRef references a Greenhouse Team by name

    clusterName
    string

    ClusterName is the name of the cluster the rbacv1 resources are created on.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the Clusters the TeamRoleBinding should be deployed to.

    namespaces
    []string

    Namespaces is the immutable list of namespaces in the Greenhouse Clusters to apply the RoleBinding to. If empty, a ClusterRoleBinding will be created on the remote cluster, otherwise a RoleBinding per namespace.

    TeamRoleBindingStatus

    (Appears on: TeamRoleBinding)

    TeamRoleBindingStatus defines the observed state of the TeamRoleBinding

    Field | Description
    statusConditions
    StatusConditions

    StatusConditions contain the different conditions that constitute the status of the TeamRoleBinding.

    clusters
    []PropagationStatus

    PropagationStatus is the list of clusters the TeamRoleBinding is applied to

    TeamRoleSpec

    (Appears on: TeamRole)

    TeamRoleSpec defines the desired state of a TeamRole

    Field | Description
    rules
    []Kubernetes rbac/v1.PolicyRule

    Rules is a list of rbacv1.PolicyRules used on a managed RBAC (Cluster)Role

    aggregationRule
    Kubernetes rbac/v1.AggregationRule

    AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole on the remote cluster

    labels
    map[string]string

    Labels are applied to the ClusterRole created on the remote cluster. This allows using TeamRoles as part of AggregationRules by other TeamRoles

    TeamRoleStatus

    (Appears on: TeamRole)

    TeamRoleStatus defines the observed state of a TeamRole

    TeamSpec

    (Appears on: Team)

    TeamSpec defines the desired state of Team

    Field | Description
    description
    string

    Description provides additional details of the team.

    mappedIdPGroup
    string

    IdP group id matching team.

    joinUrl
    string

    URL to join the IdP group.

    TeamStatus

    (Appears on: Team)

    TeamStatus defines the observed state of Team

    Field | Description
    statusConditions
    StatusConditions

    UIApplicationReference

    (Appears on: PluginDefinitionSpec, PluginStatus)

    UIApplicationReference references the UI pluginDefinition to use.

    Field | Description
    url
    string

    URL specifies the url to a built javascript asset. By default, assets are loaded from the Juno asset server using the provided name and version.

    name
    string

    Name of the UI application.

    version
    string

    Version of the frontend application.

    User

    (Appears on: TeamMembershipSpec)

    User specifies a human person.

    Field | Description
    id
    string

    ID is the unique identifier of the user.

    firstName
    string

    FirstName of the user.

    lastName
    string

    LastName of the user.

    email
    string

    Email of the user.

    ValueFromSource

    (Appears on: PluginOptionValue, SCIMConfig)

    ValueFromSource is a valid source for a value.

    Field | Description
    secret
    SecretKeyReference

    Secret references the secret containing the value.

    This page was automatically generated with gen-crd-api-reference-docs

    2 - Plugin Catalog

    Plugin Catalog overview

    This section provides an overview of the available PluginDefinitions in Greenhouse.

    2.1 - Alerts

    Learn more about the alerts plugin. Use it to activate Prometheus alert management for your Greenhouse organization.

    The main terminologies used in this document can be found in core-concepts.

    Overview

    This Plugin includes a preconfigured Prometheus Alertmanager, which is deployed and managed via the Prometheus Operator, and Supernova, an advanced user interface for Prometheus Alertmanager. Certificates are automatically generated to enable sending alerts from Prometheus to Alertmanager. These alerts can also be sent as Slack notifications using a provided set of notification templates.

    Components included in this Plugin:

    This Plugin is usually deployed along with the kube-monitoring Plugin and does not deploy the Prometheus Operator itself. However, if you intend to use it stand-alone, you need to explicitly enable the deployment of the Prometheus Operator in the plugin's configuration interface; otherwise it will not work.

    Alerts Plugin Architecture

    Disclaimer

    This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.

    The Plugin is a deeply configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates.

    It is intended as a platform that can be extended by following the guide.

    Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

    Quick start

    This guide provides a quick and straightforward way to use alerts as a Greenhouse Plugin on your Kubernetes cluster.

    Prerequisites

    • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
    • The kube-monitoring plugin (which brings in the Prometheus Operator), or, when running stand-alone, the deployment of the Prometheus Operator explicitly enabled in this plugin

    Step 1:

    You can install the alerts package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:

    1. Go to the Greenhouse dashboard and select the Alerts Plugin from the catalog. Specify the cluster and the required option values.
    2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

    Step 2:

    After the installation, you can access the Supernova UI by navigating to the Alerts tab in the Greenhouse dashboard.

    Step 3:

    Greenhouse regularly performs integration tests that are bundled with alerts. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.

    Configuration

    Prometheus Alertmanager options

    Name | Description | Value
    alerts.commonLabels | Labels to apply to all resources | {}
    alerts.alertmanager.enabled | Deploy Prometheus Alertmanager | true
    alerts.alertmanager.annotations | Annotations for Alertmanager | {}
    alerts.alertmanager.config | Alertmanager configuration directives | {}
    alerts.alertmanager.ingress.enabled | Deploy Alertmanager Ingress | false
    alerts.alertmanager.ingress.hosts | Must be provided if Ingress is enabled | []
    alerts.alertmanager.ingress.tls | Must be a valid TLS configuration for the Alertmanager Ingress. The Supernova UI passes the client certificate to retrieve alerts | {}
    alerts.alertmanager.ingress.ingressClassname | Specifies the ingress-controller | nginx
    alerts.alertmanager.servicemonitor.additionalLabels | kube-monitoring plugin: <plugin.name> to scrape Alertmanager metrics | {}
    alerts.alertmanager.alertmanagerConfig.slack.routes[].name | Name of the Slack route | ""
    alerts.alertmanager.alertmanagerConfig.slack.routes[].channel | Slack channel to post alerts to. Must be defined with slack.webhookURL | ""
    alerts.alertmanager.alertmanagerConfig.slack.routes[].webhookURL | Slack webhookURL to post alerts to. Must be defined with slack.channel | ""
    alerts.alertmanager.alertmanagerConfig.slack.routes[].matchers | List of matchers that the alert's labels should match (matchType, name, regex, value) | []
    alerts.alertmanager.alertmanagerConfig.webhook.routes[].name | Name of the webhook route | ""
    alerts.alertmanager.alertmanagerConfig.webhook.routes[].url | Webhook URL to post alerts to | ""
    alerts.alertmanager.alertmanagerConfig.webhook.routes[].matchers | List of matchers that the alert's labels should match (matchType, name, regex, value) | []
    alerts.defaultRules.create | Creates community Alertmanager alert rules | true
    alerts.defaultRules.labels | kube-monitoring plugin: <plugin.name> to evaluate Alertmanager rules | {}
    alerts.alertmanager.alertmanagerSpec.alertmanagerConfiguration | AlertmanagerConfig to be used as the top-level configuration | false

    Supernova options

    theme: Override the default theme. Possible values are "theme-light" or "theme-dark" (default)

    endpoint: Alertmanager API Endpoint URL /api/v2. Should be one of alerts.alertmanager.ingress.hosts

    silenceExcludedLabels: SilenceExcludedLabels are labels that are excluded by default when creating a silence. However, they can be added if necessary when utilizing the advanced options in the silence form. The labels must be an array of strings. Example: ["pod", "pod_name", "instance"]

    filterLabels: FilterLabels are the labels shown in the filter dropdown, enabling users to filter alerts based on specific criteria. The ‘Status’ label serves as a default filter, automatically computed from the alert status attribute, and will not be overwritten. The labels must be an array of strings. Example: ["app", "cluster", "cluster_type"]

    predefinedFilters: PredefinedFilters are filters applied in the UI to differentiate between contexts by matching alerts with regular expressions. They are loaded by default when the application starts. The format is a list of objects including name, displayName and matchers (containing key-value pairs to match against). Example:

    [
      {
        "name": "prod",
        "displayName": "Productive System",
        "matchers": {
          "region": "^prod-.*"
        }
      }
    ]
    

    silenceTemplates: SilenceTemplates are used in the Modal (schedule silence) to allow pre-defined silences to be used for scheduled maintenance windows. The format consists of a list of objects including description, editable_labels (array of strings specifying the labels that users can modify), fixed_labels (map containing fixed labels and their corresponding values), status, and title. Example:

    "silenceTemplates": [
        {
          "description": "Description of the silence template",
          "editable_labels": ["region"],
          "fixed_labels": {
            "name": "Marvin",
          },
          "status": "active",
          "title": "Silence"
        }
      ]
    

    Managing Alertmanager configuration


    By default, the Alertmanager instances will start with a minimal configuration which isn’t really useful since it doesn’t send any notification when receiving alerts.

    You have multiple options to provide the Alertmanager configuration:

    1. You can use alerts.alertmanager.config to define an Alertmanager configuration. Example below.
    config:
      global:
        resolve_timeout: 5m
      inhibit_rules:
        - source_matchers:
            - "severity = critical"
          target_matchers:
            - "severity =~ warning|info"
          equal:
            - "namespace"
            - "alertname"
        - source_matchers:
            - "severity = warning"
          target_matchers:
            - "severity = info"
          equal:
            - "namespace"
            - "alertname"
        - source_matchers:
            - "alertname = InfoInhibitor"
          target_matchers:
            - "severity = info"
          equal:
            - "namespace"
      route:
        group_by: ["namespace"]
        group_wait: 30s
        group_interval: 5m
        repeat_interval: 12h
        receiver: "null"
        routes:
          - receiver: "null"
            matchers:
              - alertname =~ "InfoInhibitor|Watchdog"
      receivers:
        - name: "null"
      templates:
        - "/etc/alertmanager/config/*.tmpl"
    
    2. You can discover AlertmanagerConfig objects. The spec.alertmanagerConfigSelector is always set to matchLabels: plugin: <name> to tell the operator which AlertmanagerConfig objects should be selected and merged with the main Alertmanager configuration. Note: The default strategy for an AlertmanagerConfig object to match alerts is OnNamespace.
    apiVersion: monitoring.coreos.com/v1alpha1
    kind: AlertmanagerConfig
    metadata:
      name: config-example
      labels:
        alertmanagerConfig: example
        pluginDefinition: alerts-example
    spec:
      route:
        groupBy: ["job"]
        groupWait: 30s
        groupInterval: 5m
        repeatInterval: 12h
        receiver: "webhook"
      receivers:
        - name: "webhook"
          webhookConfigs:
            - url: "http://example.com/"
    
    3. You can use alerts.alertmanager.alertmanagerSpec.alertmanagerConfiguration to reference an AlertmanagerConfig object in the same namespace, which defines the main Alertmanager configuration.
    # Example selecting a global AlertmanagerConfig
    alertmanagerConfiguration:
      name: global-alertmanager-configuration
    

    Examples

    Deploy alerts with Alertmanager

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: alerts
    spec:
      pluginDefinition: alerts
      disabled: false
      displayName: Alerts
      optionValues:
        - name: alerts.alertmanager.enabled
          value: true
        - name: alerts.alertmanager.ingress.enabled
          value: true
        - name: alerts.alertmanager.ingress.hosts
          value:
            - alertmanager.dns.example.com
        - name: alerts.alertmanager.ingress.tls
          value:
            - hosts:
                - alertmanager.dns.example.com
              secretName: tls-alertmanager-dns-example-com
        - name: alerts.alertmanagerConfig.slack.routes
          value:
            - channel: slack-warning-channel
              webhookURL: https://hooks.slack.com/services/some-id
              matchers:
                - name: severity
                  matchType: "="
                  value: "warning"
            - channel: slack-critical-channel
              webhookURL: https://hooks.slack.com/services/some-id
              matchers:
                - name: severity
                  matchType: "="
                  value: "critical"
        - name: alerts.alertmanagerConfig.webhook.routes
          value:
            - name: webhook-route
              url: https://some-webhook-url
              matchers:
                - name: alertname
                  matchType: "=~"
                  value: ".*"
        - name: alerts.alertmanager.serviceMonitor.additionalLabels
          value:
            plugin: kube-monitoring
        - name: alerts.defaultRules.create
          value: true
        - name: alerts.defaultRules.labels
          value:
            plugin: kube-monitoring
        - name: endpoint
          value: https://alertmanager.dns.example.com/api/v2
        - name: filterLabels
          value:
            - job
            - severity
            - status
        - name: silenceExcludedLabels
          value:
            - pod
            - pod_name
            - instance
    

    Deploy alerts without Alertmanager (Bring your own Alertmanager - Supernova UI only)

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: alerts
    spec:
      pluginDefinition: alerts
      disabled: false
      displayName: Alerts
      optionValues:
        - name: alerts.alertmanager.enabled
          value: false
        - name: alerts.alertmanager.ingress.enabled
          value: false
        - name: alerts.defaultRules.create
          value: false
        - name: endpoint
          value: https://alertmanager.dns.example.com/api/v2
        - name: filterLabels
          value:
            - job
            - severity
            - status
        - name: silenceExcludedLabels
          value:
            - pod
            - pod_name
            - instance
    

    2.2 - Cert-manager

    This Plugin provides the cert-manager to automate the management of TLS certificates.

    Configuration

    This section highlights configuration of selected Plugin features.
    All available configuration options are described in the plugin.yaml.

    Ingress shim

    An Ingress resource in Kubernetes configures external access to services in a Kubernetes cluster.
    Securing ingress resources with TLS certificates is a common use-case and the cert-manager can be configured to handle these via the ingress-shim component.
    It can be enabled by deploying an issuer in your organization and setting the following options on this plugin.

    Option | Type | Description
    cert-manager.ingressShim.defaultIssuerName | string | Name of the cert-manager issuer to use for TLS certificates
    cert-manager.ingressShim.defaultIssuerKind | string | Kind of the cert-manager issuer to use for TLS certificates
    cert-manager.ingressShim.defaultIssuerGroup | string | Group of the cert-manager issuer to use for TLS certificates
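
    As an illustration, the ingress-shim options can be set through a Plugin resource. The sketch below assumes a PluginDefinition named cert-manager and a ClusterIssuer named example-issuer already present in the organization; both names are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: cert-manager
    spec:
      pluginDefinition: cert-manager # assumed PluginDefinition name
      disabled: false
      optionValues:
        - name: cert-manager.ingressShim.defaultIssuerName
          value: example-issuer # placeholder issuer
        - name: cert-manager.ingressShim.defaultIssuerKind
          value: ClusterIssuer
        - name: cert-manager.ingressShim.defaultIssuerGroup
          value: cert-manager.io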

    2.3 - Decentralized Observer of Policies (Violations)

    This directory contains the Greenhouse plugin for the Decentralized Observer of Policies (DOOP).

    DOOP

    To perform automatic validations on Kubernetes objects, we run a deployment of OPA Gatekeeper in each cluster. This dashboard aggregates all policy violations reported by those Gatekeeper instances.

    2.4 - Designate Ingress CNAME operator (DISCO)

    This Plugin provides the Designate Ingress CNAME operator (DISCO) to automate management of DNS entries in OpenStack Designate for Ingress and Services in Kubernetes.

    2.5 - DigiCert issuer

    This Plugin provides the digicert-issuer, an external Issuer extending the cert-manager with the DigiCert cert-central API.

    2.6 - External DNS

    This Plugin provides the external DNS operator, which synchronizes exposed Kubernetes Services and Ingresses with DNS providers.

    2.7 - Github Guard

    The Github Guard Greenhouse Plugin manages Github teams, team memberships and repository & team assignments.

    Hierarchy of Custom Resources

    Custom Resources

    Github – an installation of Github App

    apiVersion: githubguard.sap/v1
    kind: Github
    metadata:
      name: com
    spec:
      webURL: https://github.com
      v3APIURL: https://api.github.com
      integrationID: 420328
      clientUserAgent: greenhouse-github-guard
      secret: github-com-secret
    

    GithubOrganization with Feature & Action Flags

    apiVersion: githubguard.sap/v1
    kind: GithubOrganization
    metadata:
      name: com--greenhouse-sandbox
      labels:
        githubguard.sap/addTeam: "true"
        githubguard.sap/removeTeam: "true"
        githubguard.sap/addOrganizationOwner: "true"
        githubguard.sap/removeOrganizationOwner: "true"
        githubguard.sap/addRepositoryTeam: "true"
        githubguard.sap/removeRepositoryTeam: "true"
        githubguard.sap/dryRun: "false"
    

    Default team & repository assignments:

    GithubTeamRepository for exception team & repository assignments

    GithubUsername for external username matching

    apiVersion: githubguard.sap/v1
    kind: GithubUsername
    metadata:
      annotations:
        last-check-timestamp: "1681614602"
      name: com-I313226
    spec:
      userID: greenhouse_onuryilmaz
      githubUsername: onuryilmaz
      github: com
    

    2.8 - Ingress NGINX

    This plugin contains the ingress NGINX controller.

    Example

    To instantiate the plugin create a Plugin like:

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: ingress-nginx
    spec:
      pluginDefinition: ingress-nginx-v4.4.0
      optionValues:
        - name: controller.service.loadBalancerIP
          value: 1.2.3.4
    

    2.9 - Kubernetes Monitoring

    Learn more about the kube-monitoring plugin. Use it to activate Kubernetes monitoring for your Greenhouse cluster.

    The main terminologies used in this document can be found in core-concepts.

    Overview

    Observability is often required for operation and automation of service offerings. To get the insights provided by an application and the container runtime environment, you need telemetry data in the form of metrics or logs sent to backends such as Prometheus or OpenSearch. With the kube-monitoring Plugin, you will be able to cover the metrics part of the observability stack.

    This Plugin includes a pre-configured package of components that help make getting started easy and efficient. At its core, an automated and managed Prometheus installation is provided using the prometheus-operator. This is complemented by Prometheus target configuration for the most common Kubernetes components providing metrics by default. In addition, Prometheus alerting rules and Plutono dashboards curated by Cloud operators are included to provide a comprehensive monitoring solution out of the box.

    kube-monitoring

    Components included in this Plugin:

    Disclaimer

    It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.

    The Plugin is a deeply configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates.

    It is intended as a platform that can be extended by following the guide.

    Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

    Quick start

    This guide provides a quick and straightforward way to use kube-monitoring as a Greenhouse Plugin on your Kubernetes cluster.

    Prerequisites

    • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.

    Step 1:

    You can install the kube-monitoring package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:

    1. Go to the Greenhouse dashboard and select the Kubernetes Monitoring plugin from the catalog. Specify the cluster and the required option values.
    2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

    Step 2:

    After installation, Greenhouse will provide a generated link to the Prometheus user interface. This is done via the annotation greenhouse.sap/expose: "true" at the Prometheus Service resource.

    Step 3:

    Greenhouse regularly performs integration tests that are bundled with kube-monitoring. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.

    Configuration

    Global options

    Name | Description | Value
    global.commonLabels | Labels to add to all resources. This can be used to add a support_group or service label to all resources and alerting rules. | true

    Prometheus-operator options

    Name | Description | Value
    kubeMonitoring.prometheusOperator.enabled | Manages Prometheus and Alertmanager components | true
    kubeMonitoring.prometheusOperator.alertmanagerInstanceNamespaces | Filter namespaces to look for prometheus-operator Alertmanager resources | []
    kubeMonitoring.prometheusOperator.alertmanagerConfigNamespaces | Filter namespaces to look for prometheus-operator AlertmanagerConfig resources | []
    kubeMonitoring.prometheusOperator.prometheusInstanceNamespaces | Filter namespaces to look for prometheus-operator Prometheus resources | []

    Kubernetes component scraper options

    Name | Description | Value
    kubeMonitoring.kubernetesServiceMonitors.enabled | Flag to disable all the kubernetes component scrapers | true
    kubeMonitoring.kubeApiServer.enabled | Component scraping the kube api server | true
    kubeMonitoring.kubelet.enabled | Component scraping the kubelet and kubelet-hosted cAdvisor | true
    kubeMonitoring.coreDns.enabled | Component scraping coreDns. Use either this or kubeDns | true
    kubeMonitoring.kubeEtcd.enabled | Component scraping etcd | true
    kubeMonitoring.kubeStateMetrics.enabled | Component scraping kube state metrics | true
    kubeMonitoring.nodeExporter.enabled | Deploy node exporter as a daemonset to all nodes | true
    kubeMonitoring.kubeControllerManager.enabled | Component scraping the kube controller manager | false
    kubeMonitoring.kubeScheduler.enabled | Component scraping kube scheduler | false
    kubeMonitoring.kubeProxy.enabled | Component scraping kube proxy | false
    kubeMonitoring.kubeDns.enabled | Component scraping kubeDns. Use either this or coreDns | false

    Prometheus options

    Name | Description | Value
    kubeMonitoring.prometheus.enabled | Deploy a Prometheus instance | true
    kubeMonitoring.prometheus.annotations | Annotations for Prometheus | {}
    kubeMonitoring.prometheus.tlsConfig.caCert | CA certificate to verify technical clients at Prometheus Ingress | Secret
    kubeMonitoring.prometheus.ingress.enabled | Deploy Prometheus Ingress | true
    kubeMonitoring.prometheus.ingress.hosts | Must be provided if Ingress is enabled. | []
    kubeMonitoring.prometheus.ingress.ingressClassname | Specifies the ingress-controller | nginx
    kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage | How large the persistent volume should be to house the prometheus database. Default 50Gi. | ""
    kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName | The storage class to use for the persistent volume. | ""
    kubeMonitoring.prometheus.prometheusSpec.scrapeInterval | Interval between consecutive scrapes. Defaults to 30s | ""
    kubeMonitoring.prometheus.prometheusSpec.scrapeTimeout | Number of seconds to wait for target to respond before erroring | ""
    kubeMonitoring.prometheus.prometheusSpec.evaluationInterval | Interval between consecutive evaluations | ""
    kubeMonitoring.prometheus.prometheusSpec.externalLabels | External labels to add to any time series or alerts when communicating with external systems like Alertmanager | {}
    kubeMonitoring.prometheus.prometheusSpec.ruleSelector | PrometheusRules to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
    kubeMonitoring.prometheus.prometheusSpec.serviceMonitorSelector | ServiceMonitors to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
    kubeMonitoring.prometheus.prometheusSpec.podMonitorSelector | PodMonitors to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
    kubeMonitoring.prometheus.prometheusSpec.probeSelector | Probes to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
    kubeMonitoring.prometheus.prometheusSpec.scrapeConfigSelector | scrapeConfigs to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
    kubeMonitoring.prometheus.prometheusSpec.retention | How long to retain metrics | ""
    kubeMonitoring.prometheus.prometheusSpec.logLevel | Log level to be configured for Prometheus | ""
    kubeMonitoring.prometheus.prometheusSpec.additionalScrapeConfigs | Next to ScrapeConfig CRD, you can use AdditionalScrapeConfigs, which allows specifying additional Prometheus scrape configurations | ""
    kubeMonitoring.prometheus.prometheusSpec.additionalArgs | Allows setting additional arguments for the Prometheus container | []

    Alertmanager options

    Name | Description | Value
    alerts.enabled | To send alerts to Alertmanager | false
    alerts.alertmanager.hosts | List of Alertmanager hosts Prometheus can send alerts to | []
    alerts.alertmanager.tlsConfig.cert | TLS certificate for communication with Alertmanager | Secret
    alerts.alertmanager.tlsConfig.key | TLS key for communication with Alertmanager | Secret

    Examples

    Deploy kube-monitoring into a remote cluster

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: kube-monitoring
    spec:
      pluginDefinition: kube-monitoring
      disabled: false
      optionValues:
        - name: kubeMonitoring.prometheus.prometheusSpec.retention
          value: 30d
        - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
          value: 100Gi
        - name: kubeMonitoring.prometheus.service.labels
          value:
            greenhouse.sap/expose: "true"
        - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
          value:
            cluster: example-cluster
            organization: example-org
            region: example-region
        - name: alerts.enabled
          value: true
        - name: alerts.alertmanagers.hosts
          value:
            - alertmanager.dns.example.com
        - name: alerts.alertmanagers.tlsConfig.cert
          valueFrom:
            secret:
              key: tls.crt
              name: tls-<org-name>-prometheus-auth
        - name: alerts.alertmanagers.tlsConfig.key
          valueFrom:
            secret:
              key: tls.key
              name: tls-<org-name>-prometheus-auth
    

    Deploy Prometheus only

    Example Plugin to deploy Prometheus with the kube-monitoring Plugin.

    NOTE: If you are using kube-monitoring for the first time in your cluster, it is necessary to set kubeMonitoring.prometheusOperator.enabled to true.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: example-prometheus-name
    spec:
      pluginDefinition: kube-monitoring
      disabled: false
      optionValues:
        - name: kubeMonitoring.defaultRules.create
          value: false
        - name: kubeMonitoring.kubernetesServiceMonitors.enabled
          value: false
        - name: kubeMonitoring.prometheusOperator.enabled
          value: false
        - name: kubeMonitoring.kubeStateMetrics.enabled
          value: false
        - name: kubeMonitoring.nodeExporter.enabled
          value: false
        - name: kubeMonitoring.prometheus.prometheusSpec.retention
          value: 30d
        - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
          value: 100Gi
        - name: kubeMonitoring.prometheus.service.labels
          value:
            greenhouse.sap/expose: "true"
        - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
          value:
            cluster: example-cluster
            organization: example-org
            region: example-region
        - name: alerts.enabled
          value: true
        - name: alerts.alertmanagers.hosts
          value:
            - alertmanager.dns.example.com
        - name: alerts.alertmanagers.tlsConfig.cert
          valueFrom:
            secret:
              key: tls.crt
              name: tls-<org-name>-prometheus-auth
        - name: alerts.alertmanagers.tlsConfig.key
          valueFrom:
            secret:
              key: tls.key
              name: tls-<org-name>-prometheus-auth
    

    Extension of the plugin

    kube-monitoring can be extended with your own Prometheus alerting rules and target configurations via the Custom Resource Definitions (CRDs) of the Prometheus operator. The user-defined resources carrying the desired configuration are picked up via label selectors.

    The CRD PrometheusRule enables the definition of alerting and recording rules that can be used by Prometheus or Thanos Rule instances. Alerts and recording rules are reconciled and dynamically loaded by the operator without having to restart Prometheus or Thanos Rule.

    The kube-monitoring Prometheus will automatically discover and load the rules that match the label plugin: <plugin-name>.

    Example:

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: example-prometheus-rule
      labels:
        plugin: <metadata.name> 
        ## e.g plugin: kube-monitoring
    spec:
     groups:
       - name: example-group
         rules:
         ...
    

    The CRDs PodMonitor, ServiceMonitor, Probe and ScrapeConfig allow the definition of a set of target endpoints to be scraped by Prometheus. The operator will automatically discover and load the configurations that match labels plugin: <plugin-name>.

    Example:

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: example-pod-monitor
      labels:
        plugin: <metadata.name> 
        ## e.g plugin: kube-monitoring
    spec:
      selector:
        matchLabels:
          app: example-app
      namespaceSelector:
        matchNames:
          - example-namespace
      podMetricsEndpoints:
        - port: http
      ...
    

    2.10 - Logshipper

    This Plugin is intended for shipping container and systemd logs to an Elasticsearch/OpenSearch cluster. It uses Fluent Bit to collect logs. The default configuration can be found under chart/templates/fluent-bit-configmap.yaml.

    Components included in this Plugin:

    Owner

    1. @ivogoman

    Parameters

    | Name | Description | Value |
    |------|-------------|-------|
    | fluent-bit.parser | Parser used for container logs (docker or cri) | "cri" |
    | fluent-bit.backend.opensearch.host | Host for the Elastic/OpenSearch HTTP Input | |
    | fluent-bit.backend.opensearch.port | Port for the Elastic/OpenSearch HTTP Input | |
    | fluent-bit.backend.opensearch.http_user | Username for the Elastic/OpenSearch HTTP Input | |
    | fluent-bit.backend.opensearch.http_password | Password for the Elastic/OpenSearch HTTP Input | |
    | fluent-bit.filter.additionalValues | List of key-value pairs to label logs | [] |
    | fluent-bit.customConfig.inputs | Multi-line string containing additional inputs | |
    | fluent-bit.customConfig.filters | Multi-line string containing additional filters | |
    | fluent-bit.customConfig.outputs | Multi-line string containing additional outputs | |

    Custom Configuration

    To add custom configuration to the Fluent Bit configuration, please check the Fluent Bit documentation here. The fluent-bit.customConfig.inputs, fluent-bit.customConfig.filters and fluent-bit.customConfig.outputs parameters can be used to add custom configuration to the default configuration. The configuration should be added as a multi-line string. Inputs are rendered after the default inputs; filters are rendered after the default filters and before the additional values are added; outputs are rendered after the default outputs. The additional values are added to all logs regardless of the source.

    Example Input configuration:

    fluent-bit:
      config:
        inputs: |
          [INPUT]
              Name             tail-audit
              Path             /var/log/containers/greenhouse-controller*.log
              Parser           {{ default "cri" ( index .Values "fluent-bit" "parser" ) }}
              Tag              audit.*
              Refresh_Interval 5
              Mem_Buf_Limit    50MB
              Skip_Long_Lines  Off
              Ignore_Older     1m
              DB               /var/log/fluent-bit-tail-audit.pos.db      
    

    Logs collected by the default configuration are prefixed with default_. If logs from additional inputs are to be sent through and processed by the same filters and outputs, the same prefix should be used as well.
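
    For illustration, here is a minimal sketch of an additional input whose tag carries the default_ prefix so it flows through the default filters and outputs. It assumes the fluent-bit.customConfig.inputs parameter from the table above (the earlier example uses config.inputs; use whichever key your chart version expects), and the path and tag are hypothetical.

    fluent-bit:
      customConfig:
        inputs: |
          [INPUT]
              Name   tail
              Path   /var/log/containers/my-app*.log
              Parser cri
              Tag    default_myapp.*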

    If additional secrets are required, the fluent-bit.env field can be used to add them to the environment of the fluent-bit container. The secrets are created by adding them to the fluent-bit.backend field.

    fluent-bit:
      backend:
        audit:
          http_user: top-secret-audit
          http_password: top-secret-audit
          host: "audit.test"
          tls:
            enabled: true
            verify: true
            debug: false
    

    2.11 - OpenTelemetry

    Learn more about the OpenTelemetry Plugin. Use it to enable the ingestion, collection and export of telemetry signals (logs and metrics) for your Greenhouse cluster.

    The main terminologies used in this document can be found in core-concepts.

    Overview

    OpenTelemetry is an observability framework and toolkit for creating and managing telemetry data such as metrics, logs and traces. Unlike other observability tools, OpenTelemetry is vendor and tool agnostic, meaning it can be used with a variety of observability backends, including open source tools such as OpenSearch and Prometheus.

    The focus of the plugin is to provide easy-to-use configurations for common use cases of receiving, processing and exporting telemetry data in Kubernetes. The storage and visualization of the same is intentionally left to other tools.

    Components included in this Plugin:

    Architecture

    TBD: Architecture picture

    Note

    It is our intention to add more configurations over time, and contributions of your own configurations are highly appreciated. If you discover bugs or want to add functionality to the plugin, feel free to create a pull request.

    Quick Start

    This guide provides a quick and straightforward way to use OpenTelemetry as a Greenhouse Plugin on your Kubernetes cluster.

    Prerequisites

    • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
    • For logs, an OpenSearch instance to store them. If you don’t have one, reach out to your observability team to get access to one.
    • For metrics, a Prometheus instance to store them. If you don’t have one, install a kube-monitoring Plugin first.

    Step 1:

    You can install the OpenTelemetry package in your cluster by installing it with Helm manually or let the Greenhouse platform lifecycle do it for you automatically. For the latter, you can either:

    1. Go to Greenhouse dashboard and select the OpenTelemetry plugin from the catalog. Specify the cluster and required option values.
    2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

    Step 2:

    The package will deploy the OpenTelemetry Operator which works as a manager for the collectors and auto-instrumentation of the workload. By default, the package will include a configuration for collecting metrics and logs. The log-collector is currently processing data from the preconfigured receivers:

    • Files via the Filelog Receiver
    • Kubernetes Events from the Kubernetes API server
    • Journald events from systemd journal
    • its own metrics

    You can disable the collection of logs by setting openTelemetry.logsCollector.enabled to false. The same is true for metrics: set openTelemetry.metricsCollector.enabled to false.

    Based on the backend selection, the telemetry data will be exported to the respective backend.
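
    As a rough sketch, a Plugin enabling both collectors could look like the following. The option names are taken from the Configuration table below; the PluginDefinition name, cluster and region values are assumptions, not a verified end-to-end setup.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: example-opentelemetry
    spec:
      pluginDefinition: opentelemetry # assumed PluginDefinition name, check your catalog
      disabled: false
      optionValues:
        - name: openTelemetry.logsCollector.enabled
          value: true
        - name: openTelemetry.metricsCollector.enabled
          value: true
        - name: openTelemetry.cluster
          value: example-cluster
        - name: openTelemetry.region
          value: example-region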

    Step 3:

    Greenhouse regularly performs integration tests that are bundled with OpenTelemetry. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.

    Configuration

    | Name | Description | Type | Required |
    |------|-------------|------|----------|
    | openTelemetry.logsCollector.enabled | Activates the standard configuration for logs | bool | false |
    | openTelemetry.metricsCollector.enabled | Activates the standard configuration for metrics | bool | false |
    | openTelemetry.openSearchLogs.username | Username for the OpenSearch endpoint | secret | false |
    | openTelemetry.openSearchLogs.password | Password for the OpenSearch endpoint | secret | false |
    | openTelemetry.openSearchLogs.endpoint | Endpoint URL for OpenSearch | secret | false |
    | openTelemetry.region | Region label for logging | string | false |
    | openTelemetry.cluster | Cluster label for logging | string | false |
    | openTelemetry.prometheus.additionalLabels | Label selector for Prometheus resources to be picked up by the operator | map | false |
    | prometheusRules.additionalRuleLabels | Additional labels for PrometheusRule alerts | map | false |
    | openTelemetry.prometheus.serviceMonitor.enabled | Activates the service-monitoring for the Logs Collector | bool | false |
    | openTelemetry.prometheus.podMonitor.enabled | Activates the pod-monitoring for the Logs Collector | bool | false |
    | opentelemetry-operator.admissionWebhooks.certManager.enabled | Activate to use the CertManager for generating self-signed certificates | bool | true |
    | opentelemetry-operator.admissionWebhooks.autoGenerateCert.enabled | Activate to use Helm to create self-signed certificates | bool | false |
    | opentelemetry-operator.admissionWebhooks.autoGenerateCert.recreate | Activate to recreate the cert after a defined period (certPeriodDays default is 365) | bool | false |
    | opentelemetry-operator.kubeRBACProxy.enabled | Activate to enable Kube-RBAC-Proxy for OpenTelemetry | bool | false |
    | opentelemetry-operator.manager.prometheusRule.defaultRules.enabled | Activate to enable default rules for monitoring the OpenTelemetry Manager | bool | false |
    | opentelemetry-operator.manager.prometheusRule.enabled | Activate to enable rules for monitoring the OpenTelemetry Manager | bool | false |

    Examples

    TBD

    2.12 - Plutono

    Learn more about the plutono Plugin. Use it to install the web dashboarding system Plutono to collect, correlate, and visualize Prometheus metrics for your Greenhouse cluster.

    The main terminologies used in this document can be found in core-concepts.

    Overview

    Observability is often required for the operation and automation of service offerings. Plutono provides you with tools to display Prometheus metrics on live dashboards with insightful charts and visualizations. In the Greenhouse context, this complements the kube-monitoring plugin, whose Prometheus is automatically recognized by Plutono as a data source. In addition, the Plugin provides a mechanism that automates the lifecycle of datasources and dashboards without having to restart Plutono.

    Plutono Architecture

    Disclaimer

    This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.

    Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

    Quick Start

    This guide provides a quick and straightforward way to use Plutono as a Greenhouse Plugin on your Kubernetes cluster.

    Prerequisites

    • A running and Greenhouse-managed Kubernetes cluster
    • kube-monitoring Plugin installed to have at least one Prometheus instance running in the cluster

    The plugin works by default with anonymous access enabled. If you use the standard configuration in the kube-monitoring plugin, the data source and some kubernetes-operations dashboards are already pre-installed.

    Step 1: Add your dashboards

    Dashboards are selected from ConfigMaps across namespaces. The plugin searches for ConfigMaps with the label plutono-dashboard: "true" and imports them into Plutono. The ConfigMap must contain a key like my-dashboard.json with the dashboard JSON content. Example

    A guide on how to create dashboards can be found here.
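
    A minimal sketch of such a ConfigMap, following the label and key convention described above (name and dashboard content are placeholders):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-dashboard
      labels:
        plutono-dashboard: "true"
    data:
      my-dashboard.json: |
        {
          "title": "Example Dashboard",
          "uid": "example-dashboard",
          "panels": []
        }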

    Step 2: Add your datasources

    Data sources are selected from Secrets across namespaces. The plugin searches for Secrets with the label plutono-datasource: "true" and imports them into Plutono. The Secrets should contain valid datasource configuration YAML. Example

    Configuration

    ParameterDescriptionDefault
    plutono.replicasNumber of nodes1
    plutono.deploymentStrategyDeployment strategy{ "type": "RollingUpdate" }
    plutono.livenessProbeLiveness Probe settings{ "httpGet": { "path": "/api/health", "port": 3000 } "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }
    plutono.readinessProbeReadiness Probe settings{ "httpGet": { "path": "/api/health", "port": 3000 } }
    plutono.securityContextDeployment securityContext{"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472}
    plutono.priorityClassNameName of Priority Class to assign podsnil
    plutono.image.registryImage registryghcr.io
    plutono.image.repositoryImage repositorycredativ/plutono
    plutono.image.tagOverrides the Plutono image tag whose default is the chart appVersion (Must be >= 5.0.0)``
    plutono.image.shaImage sha (optional)``
    plutono.image.pullPolicyImage pull policyIfNotPresent
    plutono.image.pullSecretsImage pull secrets (can be templated)[]
    plutono.service.enabledEnable plutono servicetrue
    plutono.service.ipFamiliesKubernetes service IP families[]
    plutono.service.ipFamilyPolicyKubernetes service IP family policy""
    plutono.service.typeKubernetes service typeClusterIP
    plutono.service.portKubernetes port where service is exposed80
    plutono.service.portNameName of the port on the serviceservice
    plutono.service.appProtocolAdds the appProtocol field to the service``
    plutono.service.targetPortInternal service is port3000
    plutono.service.nodePortKubernetes service nodePortnil
    plutono.service.annotationsService annotations (can be templated){}
    plutono.service.labelsCustom labels{}
    plutono.service.clusterIPinternal cluster service IPnil
    plutono.service.loadBalancerIPIP address to assign to load balancer (if supported)nil
    plutono.service.loadBalancerSourceRangeslist of IP CIDRs allowed access to lb (if supported)[]
    plutono.service.externalIPsservice external IP addresses[]
    plutono.service.externalTrafficPolicychange the default externalTrafficPolicynil
    plutono.headlessServiceCreate a headless servicefalse
    plutono.extraExposePortsAdditional service ports for sidecar containers[]
    plutono.hostAliasesadds rules to the pod’s /etc/hosts[]
    plutono.ingress.enabledEnables Ingressfalse
    plutono.ingress.annotationsIngress annotations (values are templated){}
    plutono.ingress.labelsCustom labels{}
    plutono.ingress.pathIngress accepted path/
    plutono.ingress.pathTypeIngress type of pathPrefix
    plutono.ingress.hostsIngress accepted hostnames["chart-example.local"]
    plutono.ingress.extraPathsIngress extra paths to prepend to every host configuration. Useful when configuring custom actions with AWS ALB Ingress Controller. Requires ingress.hosts to have one or more host entries.[]
    plutono.ingress.tlsIngress TLS configuration[]
    plutono.ingress.ingressClassNameIngress Class Name. MAY be required for Kubernetes versions >= 1.18""
    plutono.resourcesCPU/Memory resource requests/limits{}
    plutono.nodeSelectorNode labels for pod assignment{}
    plutono.tolerationsToleration labels for pod assignment[]
    plutono.affinityAffinity settings for pod assignment{}
    plutono.extraInitContainersInit containers to add to the plutono pod{}
    plutono.extraContainersSidecar containers to add to the plutono pod""
    plutono.extraContainerVolumesVolumes that can be mounted in sidecar containers[]
    plutono.extraLabelsCustom labels for all manifests{}
    plutono.schedulerNameName of the k8s scheduler (other than default)nil
    plutono.persistence.enabledUse persistent volume to store datafalse
    plutono.persistence.typeType of persistence (pvc or statefulset)pvc
    plutono.persistence.sizeSize of persistent volume claim10Gi
    plutono.persistence.existingClaimUse an existing PVC to persist data (can be templated)nil
    plutono.persistence.storageClassNameType of persistent volume claimnil
    plutono.persistence.accessModesPersistence access modes[ReadWriteOnce]
    plutono.persistence.annotationsPersistentVolumeClaim annotations{}
    plutono.persistence.finalizersPersistentVolumeClaim finalizers[ "kubernetes.io/pvc-protection" ]
    plutono.persistence.extraPvcLabelsExtra labels to apply to a PVC.{}
    plutono.persistence.subPathMount a sub dir of the persistent volume (can be templated)nil
    plutono.persistence.inMemory.enabledIf persistence is not enabled, whether to mount the local storage in-memory to improve performancefalse
    plutono.persistence.inMemory.sizeLimitSizeLimit for the in-memory local storagenil
    plutono.persistence.disableWarningHide NOTES warning, useful when persiting to a databasefalse
    plutono.initChownData.enabledIf false, don’t reset data ownership at startuptrue
    plutono.initChownData.image.registryinit-chown-data container image registrydocker.io
    plutono.initChownData.image.repositoryinit-chown-data container image repositorybusybox
    plutono.initChownData.image.taginit-chown-data container image tag1.31.1
    plutono.initChownData.image.shainit-chown-data container image sha (optional)""
    plutono.initChownData.image.pullPolicyinit-chown-data container image pull policyIfNotPresent
    plutono.initChownData.resourcesinit-chown-data pod resource requests & limits{}
    plutono.schedulerNameAlternate scheduler namenil
    plutono.envExtra environment variables passed to pods{}
    plutono.envValueFromEnvironment variables from alternate sources. See the API docs on EnvVarSource for format details. Can be templated{}
    plutono.envFromSecretName of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated""
    plutono.envFromSecretsList of Kubernetes secrets (must be manually created in the same namespace) containing values to be added to the environment. Can be templated[]
    plutono.envFromConfigMapsList of Kubernetes ConfigMaps (must be manually created in the same namespace) containing values to be added to the environment. Can be templated[]
    plutono.envRenderSecretSensible environment variables passed to pods and stored as secret. (passed through tpl){}
    plutono.enableServiceLinksInject Kubernetes services as environment variables.true
    plutono.extraSecretMountsAdditional plutono server secret mounts[]
    plutono.extraVolumeMountsAdditional plutono server volume mounts[]
    plutono.extraVolumesAdditional Plutono server volumes[]
    plutono.automountServiceAccountTokenMounted the service account token on the plutono pod. Mandatory, if sidecars are enabledtrue
    plutono.createConfigmapEnable creating the plutono configmaptrue
    plutono.extraConfigmapMountsAdditional plutono server configMap volume mounts (values are templated)[]
    plutono.extraEmptyDirMountsAdditional plutono server emptyDir volume mounts[]
    plutono.pluginsPlugins to be loaded along with Plutono[]
    plutono.datasourcesConfigure plutono datasources (passed through tpl){}
    plutono.alertingConfigure plutono alerting (passed through tpl){}
    plutono.notifiersConfigure plutono notifiers{}
    plutono.dashboardProvidersConfigure plutono dashboard providers{}
    plutono.dashboardsDashboards to import{}
    plutono.dashboardsConfigMapsConfigMaps reference that contains dashboards{}
    plutono.plutono.iniPlutono’s primary configuration{}
    global.imageRegistryGlobal image pull registry for all images.null
    global.imagePullSecretsGlobal image pull secrets (can be templated). Allows either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style).[]
    plutono.ldap.enabledEnable LDAP authenticationfalse
    plutono.ldap.existingSecretThe name of an existing secret containing the ldap.toml file, this must have the key ldap-toml.""
    plutono.ldap.configPlutono’s LDAP configuration""
    plutono.annotationsDeployment annotations{}
    plutono.labelsDeployment labels{}
    plutono.podAnnotationsPod annotations{}
    plutono.podLabelsPod labels{}
    plutono.podPortNameName of the plutono port on the podplutono
    plutono.lifecycleHooksLifecycle hooks for podStart and preStop Example{}
    plutono.sidecar.image.registrySidecar image registryquay.io
    plutono.sidecar.image.repositorySidecar image repositorykiwigrid/k8s-sidecar
    plutono.sidecar.image.tagSidecar image tag1.26.0
    plutono.sidecar.image.shaSidecar image sha (optional)""
    plutono.sidecar.imagePullPolicySidecar image pull policyIfNotPresent
    plutono.sidecar.resourcesSidecar resources{}
    plutono.sidecar.securityContextSidecar securityContext{}
    plutono.sidecar.enableUniqueFilenamesSets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable. If set to true the sidecar will create unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces.false
    plutono.sidecar.alerts.enabledEnables the cluster wide search for alerts and adds/updates/deletes them in plutonofalse
    plutono.sidecar.alerts.labelLabel that config maps with alerts should have to be addedplutono_alert
    plutono.sidecar.alerts.labelValueLabel value that config maps with alerts should have to be added""
    plutono.sidecar.alerts.searchNamespaceNamespaces list. If specified, the sidecar will search for alerts config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces.nil
    plutono.sidecar.alerts.watchMethodMethod to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.WATCH
    plutono.sidecar.alerts.resourceShould the sidecar looks into secrets, configmaps or both.both
    plutono.sidecar.alerts.reloadURLFull url of datasource configuration reload API endpoint, to invoke after a config-map change"http://localhost:3000/api/admin/provisioning/alerting/reload"
    plutono.sidecar.alerts.skipReloadEnabling this omits defining the REQ_URL and REQ_METHOD environment variablesfalse
    plutono.sidecar.alerts.initAlertsSet to true to deploy the alerts sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time.false
    plutono.sidecar.alerts.extraMountsAdditional alerts sidecar volume mounts.[]
    plutono.sidecar.dashboards.enabledEnables the cluster wide search for dashboards and adds/updates/deletes them in plutonofalse
    plutono.sidecar.dashboards.SCProviderEnables creation of sidecar providertrue
    plutono.sidecar.dashboards.provider.nameUnique name of the plutono providersidecarProvider
    plutono.sidecar.dashboards.provider.orgidId of the organisation, to which the dashboards should be added1
    plutono.sidecar.dashboards.provider.folderLogical folder in which plutono groups dashboards""
    plutono.sidecar.dashboards.provider.folderUidAllows you to specify the static UID for the logical folder above""
    plutono.sidecar.dashboards.provider.disableDeleteActivate to avoid the deletion of imported dashboardsfalse
    plutono.sidecar.dashboards.provider.allowUiUpdatesAllow updating provisioned dashboards from the UIfalse
    plutono.sidecar.dashboards.provider.typeProvider typefile
    plutono.sidecar.dashboards.provider.foldersFromFilesStructureAllow Plutono to replicate dashboard structure from filesystem.false
    plutono.sidecar.dashboards.watchMethodMethod to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.WATCH
    plutono.sidecar.skipTlsVerifySet to true to skip tls verification for kube api callsnil
    plutono.sidecar.dashboards.labelLabel that config maps with dashboards should have to be addedplutono_dashboard
    plutono.sidecar.dashboards.labelValueLabel value that config maps with dashboards should have to be added""
    plutono.sidecar.dashboards.folderFolder in the pod that should hold the collected dashboards (unless sidecar.dashboards.defaultFolderName is set). This path will be mounted./tmp/dashboards
    plutono.sidecar.dashboards.folderAnnotationThe annotation the sidecar will look for in configmaps to override the destination folder for filesnil
    plutono.sidecar.dashboards.defaultFolderNameThe default folder name, it will create a subfolder under the sidecar.dashboards.folder and put dashboards in there insteadnil
    plutono.sidecar.dashboards.searchNamespaceNamespaces list. If specified, the sidecar will search for dashboards config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces.nil
    plutono.sidecar.dashboards.scriptAbsolute path to shell script to execute after a configmap got reloaded.nil
    plutono.sidecar.dashboards.reloadURLFull url of dashboards configuration reload API endpoint, to invoke after a config-map change"http://localhost:3000/api/admin/provisioning/dashboards/reload"
    plutono.sidecar.dashboards.skipReloadEnabling this omits defining the REQ_USERNAME, REQ_PASSWORD, REQ_URL and REQ_METHOD environment variablesfalse
    plutono.sidecar.dashboards.resourceShould the sidecar looks into secrets, configmaps or both.both
    plutono.sidecar.dashboards.extraMountsAdditional dashboard sidecar volume mounts.[]
    plutono.sidecar.datasources.enabledEnables the cluster wide search for datasources and adds/updates/deletes them in plutonofalse
    plutono.sidecar.datasources.labelLabel that config maps with datasources should have to be addedplutono_datasource
    plutono.sidecar.datasources.labelValueLabel value that config maps with datasources should have to be added""
    plutono.sidecar.datasources.searchNamespaceNamespaces list. If specified, the sidecar will search for datasources config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces.nil
    plutono.sidecar.datasources.watchMethodMethod to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.WATCH
    plutono.sidecar.datasources.resourceShould the sidecar looks into secrets, configmaps or both.both
    plutono.sidecar.datasources.reloadURLFull url of datasource configuration reload API endpoint, to invoke after a config-map change"http://localhost:3000/api/admin/provisioning/datasources/reload"
    plutono.sidecar.datasources.skipReloadEnabling this omits defining the REQ_URL and REQ_METHOD environment variablesfalse
    plutono.sidecar.datasources.initDatasourcesSet to true to deploy the datasource sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any datasources defined at startup time.false
    plutono.sidecar.notifiers.enabledEnables the cluster wide search for notifiers and adds/updates/deletes them in plutonofalse
    plutono.sidecar.notifiers.labelLabel that config maps with notifiers should have to be addedplutono_notifier
    plutono.sidecar.notifiers.labelValueLabel value that config maps with notifiers should have to be added""
    plutono.sidecar.notifiers.searchNamespaceNamespaces list. If specified, the sidecar will search for notifiers config-maps (or secrets) inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces.nil
    plutono.sidecar.notifiers.watchMethodMethod to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.WATCH
    plutono.sidecar.notifiers.resourceShould the sidecar looks into secrets, configmaps or both.both
    plutono.sidecar.notifiers.reloadURLFull url of notifier configuration reload API endpoint, to invoke after a config-map change"http://localhost:3000/api/admin/provisioning/notifications/reload"
    plutono.sidecar.notifiers.skipReloadEnabling this omits defining the REQ_URL and REQ_METHOD environment variablesfalse
    plutono.sidecar.notifiers.initNotifiersSet to true to deploy the notifier sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any notifiers defined at startup time.false
    plutono.smtp.existingSecretThe name of an existing secret containing the SMTP credentials.""
    plutono.smtp.userKeyThe key in the existing SMTP secret containing the username."user"
    plutono.smtp.passwordKeyThe key in the existing SMTP secret containing the password."password"
    plutono.admin.existingSecretThe name of an existing secret containing the admin credentials (can be templated).""
    plutono.admin.userKeyThe key in the existing admin secret containing the username."admin-user"
    plutono.admin.passwordKeyThe key in the existing admin secret containing the password."admin-password"
    plutono.serviceAccount.automountServiceAccountTokenAutomount the service account token on all pods where is service account is usedfalse
    plutono.serviceAccount.annotationsServiceAccount annotations
    plutono.serviceAccount.createCreate service accounttrue
    plutono.serviceAccount.labelsServiceAccount labels{}
    plutono.serviceAccount.nameService account name to use, when empty will be set to created account if serviceAccount.create is set else to default``
    plutono.serviceAccount.nameTestService account name to use for test, when empty will be set to created account if serviceAccount.create is set else to defaultnil
    plutono.rbac.createCreate and use RBAC resourcestrue
    plutono.rbac.namespacedCreates Role and Rolebinding instead of the default ClusterRole and ClusteRoleBindings for the plutono instancefalse
    plutono.rbac.useExistingRoleSet to a rolename to use existing role - skipping role creating - but still doing serviceaccount and rolebinding to the rolename set here.nil
    plutono.rbac.pspEnabledCreate PodSecurityPolicy (with rbac.create, grant roles permissions as well)false
    plutono.rbac.pspUseAppArmorEnforce AppArmor in created PodSecurityPolicy (requires rbac.pspEnabled)false
    plutono.rbac.extraRoleRulesAdditional rules to add to the Role[]
    plutono.rbac.extraClusterRoleRulesAdditional rules to add to the ClusterRole[]
    plutono.commandDefine command to be executed by plutono container at startupnil
    plutono.argsDefine additional args if command is usednil
    plutono.testFramework.enabledWhether to create test-related resourcestrue
    plutono.testFramework.image.registrytest-framework image registry.docker.io
    plutono.testFramework.image.repositorytest-framework image repository.bats/bats
    plutono.testFramework.image.tagtest-framework image tag.v1.4.1
    plutono.testFramework.imagePullPolicytest-framework image pull policy.IfNotPresent
    plutono.testFramework.securityContexttest-framework securityContext{}
    plutono.downloadDashboards.envEnvironment variables to be passed to the download-dashboards container{}
    plutono.downloadDashboards.envFromSecretName of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated""
    plutono.downloadDashboards.resourcesResources of download-dashboards container{}
    plutono.downloadDashboardsImage.registryCurl docker image registrydocker.io
    plutono.downloadDashboardsImage.repositoryCurl docker image repositorycurlimages/curl
    plutono.downloadDashboardsImage.tagCurl docker image tag7.73.0
    plutono.downloadDashboardsImage.shaCurl docker image sha (optional)""
    plutono.downloadDashboardsImage.pullPolicyCurl docker image pull policyIfNotPresent
    plutono.namespaceOverrideOverride the deployment namespace"" (Release.Namespace)
    plutono.serviceMonitor.enabledUse servicemonitor from prometheus operatorfalse
    plutono.serviceMonitor.namespaceNamespace this servicemonitor is installed in
    plutono.serviceMonitor.intervalHow frequently Prometheus should scrape1m
    plutono.serviceMonitor.pathPath to scrape/metrics
    plutono.serviceMonitor.schemeScheme to use for metrics scrapinghttp
    plutono.serviceMonitor.tlsConfigTLS configuration block for the endpoint{}
    plutono.serviceMonitor.labelsLabels for the servicemonitor passed to Prometheus Operator{}
    plutono.serviceMonitor.scrapeTimeoutTimeout after which the scrape is ended30s
    plutono.serviceMonitor.relabelingsRelabelConfigs to apply to samples before scraping.[]
    plutono.serviceMonitor.metricRelabelingsMetricRelabelConfigs to apply to samples before ingestion.[]
    plutono.revisionHistoryLimitNumber of old ReplicaSets to retain10
    plutono.networkPolicy.enabledEnable creation of NetworkPolicy resources.false
    plutono.networkPolicy.allowExternalDon’t require client label for connectionstrue
    plutono.networkPolicy.explicitNamespacesSelectorA Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed{}
    plutono.networkPolicy.ingressEnable the creation of an ingress network policytrue
    plutono.networkPolicy.egress.enabledEnable the creation of an egress network policyfalse
    plutono.networkPolicy.egress.portsAn array of ports to allow for the egress[]
    plutono.enableKubeBackwardCompatibilityEnable backward compatibility of kubernetes where pod’s definition version below 1.13 doesn’t have the enableServiceLinks optionfalse

    Example of extraVolumeMounts and extraVolumes

    Configure additional volumes with extraVolumes and volume mounts with extraVolumeMounts.

    Example for extraVolumeMounts and corresponding extraVolumes:

    extraVolumeMounts:
      - name: plugins
        mountPath: /var/lib/plutono/plugins
        subPath: configs/plutono/plugins
        readOnly: false
      - name: dashboards
        mountPath: /var/lib/plutono/dashboards
        hostPath: /usr/shared/plutono/dashboards
        readOnly: false
    
    extraVolumes:
      - name: plugins
        existingClaim: existing-plutono-claim
      - name: dashboards
        hostPath: /usr/shared/plutono/dashboards
    

    Volumes default to emptyDir. Set to persistentVolumeClaim, hostPath, csi, or configMap for other types. For a persistentVolumeClaim, specify an existing claim name with existingClaim.

    Import dashboards

    There are a few methods to import dashboards to Plutono. Below are some examples and explanations as to how to use each method:

    dashboards:
      default:
        some-dashboard:
          json: |
            {
              "annotations":
    
              ...
              # Complete json file here
              ...
    
              "title": "Some Dashboard",
              "uid": "abcd1234",
              "version": 1
            }        
        custom-dashboard:
          # This is a path to a file inside the dashboards directory inside the chart directory
          file: dashboards/custom-dashboard.json
        prometheus-stats:
          # Ref: https://plutono.com/dashboards/2
          gnetId: 2
          revision: 2
          datasource: Prometheus
        loki-dashboard-quick-search:
          gnetId: 12019
          revision: 2
          datasource:
          - name: DS_PROMETHEUS
            value: Prometheus
        local-dashboard:
          url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
    

    Create a dashboard

    1. Click Dashboards in the main menu.

    2. Click New and select New Dashboard.

    3. Click Add new empty panel.

    4. Important: Add a datasource variable as they are provisioned in the cluster.

      • Go to Dashboard settings.
      • Click Variables.
      • Click Add variable.
      • General: Configure the variable with a proper Name as Type Datasource.
      • Data source options: Select the data source Type e.g. Prometheus.
      • Click Update.
      • Go back.
    5. Develop your panels.

      • On the Edit panel view, choose your desired Visualization.
      • Select the datasource variable you just created.
      • Write or construct a query in the query language of your data source.
      • Move and resize the panels as needed.
    6. Optionally add a tag to the dashboard to make grouping easier.

      • Go to Dashboard settings.
      • In the General section, add a Tag.
    7. Click Save. Note that the dashboard is saved in the browser’s local storage.

    8. Export the dashboard.

      • Go to Dashboard settings.
      • Click JSON Model.
      • Copy the JSON model.
      • Go to your Github repository and create a new JSON file in the dashboards directory.

    BASE64 dashboards

    Dashboards can be stored on a server that does not return JSON directly but instead returns a Base64-encoded file (e.g. Gerrit). A new parameter has been added to the url use case: if you specify a b64content value equal to true after the url entry, Base64 decoding is applied before the file is saved to disk. If this entry is not set or is equal to false, no decoding is applied to the file before saving it to disk.

    Gerrit use case

    The Gerrit API for downloading files has the following schema: https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content, where {project-name} and {file-id} usually contain ‘/’ in their values and so they MUST be replaced by %2F. So if project-name is user/repo, branch-id is master and file-id is dir1/dir2/dashboard, the url value is https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
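
    A minimal sketch of such an entry in the dashboards values, reusing the example Gerrit URL above (dashboard name is a placeholder):

    dashboards:
      default:
        gerrit-dashboard:
          url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
          b64content: true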

    Sidecar for dashboards

    If the parameter sidecar.dashboards.enabled is set, a sidecar container is deployed in the plutono pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with a label as defined in sidecar.dashboards.label. The files defined in those configmaps are written to a folder and accessed by plutono. Changes to the configmaps are monitored and the imported dashboards are deleted/updated.

    A recommendation is to use one ConfigMap per dashboard, as removing dashboards from a ConfigMap that contains several of them is currently not properly mirrored in Plutono.

    NOTE: Configure your data sources in your dashboards as variables to keep them portable across clusters.

    Example dashboard config:

    Folder structure:

    dashboards/
    ├── dashboard1.json
    ├── dashboard2.json
    templates/
    ├──dashboard-json-configmap.yaml
    

    Helm template to create a configmap for each dashboard:

    {{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
    ---
    apiVersion: v1
    kind: ConfigMap
    
    metadata:
      name: {{ printf "%s-%s" $.Release.Name $path | replace "/" "-" | trunc 63 }}
      labels:
        plutono-dashboard: "true"
    
    data:
    {{ printf "%s: |-" $path | replace "/" "-" | indent 2 }}
    {{ printf "%s" $bytes | indent 4 }}
    
    {{- end }}
    

    Sidecar for datasources

    If the parameter sidecar.datasources.enabled is set, an init container is deployed in the plutono pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and filters out the ones with a label as defined in sidecar.datasources.label. The files defined in those secrets are written to a folder and accessed by plutono on startup. Using these yaml files, the data sources in plutono can be imported.

    Should you aim for reloading datasources in Plutono each time the config is changed, set sidecar.datasources.skipReload: false and adjust sidecar.datasources.reloadURL to http://<svc-name>.<namespace>.svc.cluster.local/api/admin/provisioning/datasources/reload.
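
    As a rough sketch in Helm-values form (service name and namespace are placeholders; the exact key prefix depends on how you pass the values to the Plugin):

    plutono:
      sidecar:
        datasources:
          skipReload: false
          reloadURL: "http://plutono.example-namespace.svc.cluster.local/api/admin/provisioning/datasources/reload"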

    Secrets are recommended over ConfigMaps for this use case because datasources usually contain private data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those.

    Example datasource config:

    apiVersion: v1
    kind: Secret
    metadata:
      name: plutono-datasources
      labels:
        # default value for: sidecar.datasources.label
        plutono-datasource: "true"
    stringData:
      datasources.yaml: |-
        apiVersion: 1
        datasources:
          - name: my-prometheus 
            type: prometheus
            access: proxy
            orgId: 1
            url: my-url-domain:9090
            isDefault: false
            jsonData:
              httpMethod: 'POST'
            editable: false    
    

    NOTE: If your datasource configuration might include credentials, make sure not to use stringData but base64-encoded data instead.

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-datasource
      labels:
        plutono-datasource: "true"
    data:
      # The key must contain a unique name and the .yaml file type
      my-datasource.yaml: {{ include (print $.Template.BasePath "my-datasource.yaml") . | b64enc }}
    

    Example values to add a datasource adapted from Grafana:

    datasources:
     datasources.yaml:
      apiVersion: 1
      datasources:
          # <string, required> Sets the name you use to refer to
          # the data source in panels and queries.
        - name: my-prometheus 
          # <string, required> Sets the data source type.
          type: prometheus
          # <string, required> Sets the access mode, either
          # proxy or direct (Server or Browser in the UI).
          # Some data sources are incompatible with any setting
          # but proxy (Server).
          access: proxy
          # <int> Sets the organization id. Defaults to orgId 1.
          orgId: 1
          # <string> Sets a custom UID to reference this
          # data source in other parts of the configuration.
          # If not specified, Plutono generates one.
          uid:
          # <string> Sets the data source's URL, including the
          # port.
          url: my-url-domain:9090
          # <string> Sets the database user, if necessary.
          user:
          # <string> Sets the database name, if necessary.
          database:
          # <bool> Enables basic authorization.
          basicAuth:
          # <string> Sets the basic authorization username.
          basicAuthUser:
          # <bool> Enables credential headers.
          withCredentials:
          # <bool> Toggles whether the data source is pre-selected
          # for new panels. You can set only one default
          # data source per organization.
          isDefault: false
          # <map> Fields to convert to JSON and store in jsonData.
          jsonData:
            httpMethod: 'POST'
            # <bool> Enables TLS authentication using a client
            # certificate configured in secureJsonData.
            # tlsAuth: true
            # <bool> Enables TLS authentication using a CA
            # certificate.
            # tlsAuthWithCACert: true
          # <map> Fields to encrypt before storing in jsonData.
          secureJsonData:
            # <string> Defines the CA cert, client cert, and
            # client key for encrypted authentication.
            # tlsCACert: '...'
            # tlsClientCert: '...'
            # tlsClientKey: '...'
            # <string> Sets the database password, if necessary.
            # password:
            # <string> Sets the basic authorization password.
            # basicAuthPassword:
          # <int> Sets the version. Used to compare versions when
          # updating. Ignored when creating a new data source.
          version: 1
          # <bool> Allows users to edit data sources from the
          # Plutono UI.
          editable: false
    

    How to serve Plutono with a path prefix (/plutono)

    In order to serve Plutono with a prefix (e.g., http://example.com/plutono), add the following to your values.yaml.

    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: "nginx"
        nginx.ingress.kubernetes.io/rewrite-target: /$1
        nginx.ingress.kubernetes.io/use-regex: "true"
    
      path: /plutono/?(.*)
      hosts:
        - k8s.example.dev
    
    plutono.ini:
      server:
        root_url: http://localhost:3000/plutono # this host can be localhost
    

    How to securely reference secrets in plutono.ini

    This example uses Plutono file providers for secret values and the extraSecretMounts configuration flag (Additional plutono server secret mounts) to mount the secrets.

    In plutono.ini:

    plutono.ini:
      [auth.generic_oauth]
      enabled = true
      client_id = $__file{/etc/secrets/auth_generic_oauth/client_id}
      client_secret = $__file{/etc/secrets/auth_generic_oauth/client_secret}
    

    Existing secret, or created along with helm:

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: auth-generic-oauth-secret
    type: Opaque
    stringData:
      client_id: <value>
      client_secret: <value>
    

    Include in the extraSecretMounts configuration flag:

    - extraSecretMounts:
      - name: auth-generic-oauth-secret-mount
        secretName: auth-generic-oauth-secret
        defaultMode: 0440
        mountPath: /etc/secrets/auth_generic_oauth
        readOnly: true
    

    2.13 - Service exposure test

    This Plugin provides a simple exposed service for manual testing.

    By adding the following label to a service, it becomes accessible from the central Greenhouse system via a service proxy:

    greenhouse.sap/expose: "true"

    This plugin creates an nginx deployment with an exposed service for testing.

    Configuration

    Specific port

    By default, expose will always use the first port of the service. If you need another port, you have to specify it by name:

    greenhouse.sap/exposeNamedPort: YOURPORTNAME
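
    A minimal sketch of a Service carrying both settings, shown here as labels. Names, selector and ports are placeholders; only include the named-port entry if you actually need a port other than the first one.

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      labels:
        greenhouse.sap/expose: "true"
        greenhouse.sap/exposeNamedPort: https
    spec:
      selector:
        app: example-app
      ports:
        - name: http
          port: 80
          targetPort: 8080
        - name: https
          port: 443
          targetPort: 8443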

    2.14 - Teams2Slack

    Introduction

    This Plugin provides a Slack integration for a Greenhouse organization.
    It manages Slack entities like channels, groups, handles, etc. and their members based on the teams configured in your Greenhouse organization.

    Important: Please ensure that only one deployment of Teams2Slack runs against the same set of groups in Slack. Secondary instances should run in the provided dry-run mode. Otherwise you might notice inconsistencies if the TeamMembership objects of a cluster are unequal.

    Requirements

    • A Kubernetes Cluster to run against
    • The presence of the Greenhouse Teammemberships CRD and corresponding objects.

    Architecture

    architecture

    A TeamMembership contains the members of a team. Changes to an object will create an event in Kubernetes. This event will be consumed by the first controller. It creates a mirrored SlackGroup object that reflects the content of the TeamMembership object. This approach has the advantage that the deletion of a team can be securely detected with the utilization of finalizers. The second controller detects changes on SlackGroup objects. The users present in a team will be aligned to a Slack group.

    Configuration

    Deploy the Teams2Slack Plugin and its Secret, which look like the following structure (the structure only includes the mandatory fields):

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: teams2slack
      namespace: default
    spec:
      pluginDefinition: teams2slack
      disabled: false
      optionValues:
        - name: groupNamePrefix
          value: 
        - name: groupNameSuffix
          value: 
        - name: infoChannelID
          value:
        - name: token
          valueFrom:
            secret:
              key: SLACK_TOKEN
              name: teams2slack-secret
    ---
    apiVersion: v1
    kind: Secret
    
    metadata:
      name: teams2slack-secret
    type: Opaque
    data:
      SLACK_TOKEN: # base64-encoded Slack token
    

    The values that can or need to be provided have the following meaning:

    | Environment Variable | Meaning |
    |----------------------|---------|
    | groupNamePrefix (mandatory) | The prefix the created Slack group should have. Choose a prefix that matches your organization. |
    | groupNameSuffix (mandatory) | The suffix the created Slack group should have. Choose a suffix that matches your organization. |
    | infoChannelID (mandatory) | The channel ID created Slack groups should have. You can currently define one channel ID which will be applied to all created groups. Make sure to take the channel ID and not the channel name. |
    | token (mandatory) | The Slack token to authenticate against Slack. |
    | eventRequeueTimer (optional) | If a Slack API request fails due to a network error, or because data is currently being fetched, it will be requeued to the operator's work queue. Uses the Go duration format (1s = every second, 1m = every minute). |
    | loadDataBackoffTimer (optional) | Defines when a Slack API data call occurs. Uses the Go duration format. |
    | dryRun (optional) | Slack write operations are not executed if the value is set to true. Requires a valid SLACK_TOKEN; the other environment variables can be mocked. |

    2.15 - Thanos

    Learn more about the Thanos Plugin. Use it to enable extended metrics retention and querying across Prometheus servers and Greenhouse clusters.

    The main terminologies used in this document can be found in core-concepts.

    Overview

    Thanos is a set of components that can be used to extend the storage and retrieval of metrics in Prometheus. It allows you to store metrics in a remote object store and query them across multiple Prometheus servers and Greenhouse clusters. This Plugin is intended to provide a set of pre-configured Thanos components that enable a proven composition. At the core, a set of Thanos components is installed that adds long-term storage capability to a single kube-monitoring Plugin and makes both current and historical data available again via one Thanos Query component.

    Thanos Architecture

    The Thanos Sidecar is a component that is deployed as a container together with a Prometheus instance. This allows Thanos to optionally upload metrics to the object store and Thanos Query to access Prometheus data via a common, efficient StoreAPI.

    The Thanos Compact component applies the Prometheus 2.0 Storage Engine compaction process to data uploaded to the object store. The Compactor is also responsible for applying the configured retention and downsampling of the data.

    The Thanos Store also implements the StoreAPI and serves the historical data from an object store. It acts primarily as an API gateway and has no persistence itself.

    Thanos Query implements the Prometheus HTTP v1 API for querying data in a Thanos cluster via PromQL. In short, it collects the data needed to evaluate the query from the connected StoreAPIs, evaluates the query and returns the result.

    This plugin deploys the following Thanos components:

    Planned components:

    This Plugin does not deploy the following components:

    Disclaimer

    It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.

    Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

    Quick start

    This guide provides a quick and straightforward way to use Thanos as a Greenhouse Plugin on your Kubernetes cluster. The guide is meant to build the following setup.

    Prerequisites

    • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
    • Ready to use credentials for a compatible object store
    • kube-monitoring plugin installed. Thanos Sidecar on the Prometheus must be enabled by providing the required object store credentials.

    Step 1:

    Create a Kubernetes Secret with your object store credentials following the Object Store preparation section.

    Step 2:

    Enable the Thanos Sidecar on the Prometheus in the kube-monitoring plugin by providing the required object store credentials. Follow the kube-monitoring plugin enablement section.

    Step 3:

    Create a Thanos Query Plugin by following the Thanos Query section.

    Configuration

    Object Store preparation

    To run Thanos, you need object storage credentials. Get the credentials of your provider and add them to a Kubernetes Secret. The Thanos documentation provides a great overview on the different supported store types.

    Usually this looks somewhat like this:

    type: $STORAGE_TYPE
    config:
        user:
        password:
        domain:
        ...
    
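
    For example, for an S3-compatible store the file could look roughly like this; bucket, endpoint and credentials are placeholders, and the full set of fields per provider is described in the Thanos object storage documentation.

    type: S3
    config:
      bucket: example-thanos-bucket
      endpoint: s3.example.com
      access_key: EXAMPLE_ACCESS_KEY
      secret_key: EXAMPLE_SECRET_KEY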

    Once you have everything in a file, deploy it into your remote cluster in the namespace where Prometheus and Thanos will run.

    Important: $THANOS_PLUGIN_NAME is needed again later for the respective Thanos Plugin; the two names must not differ!

    kubectl create secret generic $THANOS_PLUGIN_NAME-metrics-objectstore --from-file=thanos.yaml=/path/to/your/file
    

    kube-monitoring plugin enablement

    Prometheus in kube-monitoring needs to be altered to have a sidecar and ship metrics to the new object store too. You have to provide the Secret you’ve just created to the (most likely already existing) kube-monitoring plugin. Add this:

    spec:
        optionValues:
          - name: kubeMonitoring.prometheus.prometheusSpec.thanos.objectStorageConfig.existingSecret.key
            value: thanos.yaml
          - name: kubeMonitoring.prometheus.prometheusSpec.thanos.objectStorageConfig.existingSecret.name
            value: $THANOS_PLUGIN_NAME-metrics-objectstore
    

    Values used here are described in the Prometheus Operator Spec.

    Thanos Query

    This is the real deal now: Define your Thanos Query by creating a plugin.

    NOTE1: $THANOS_PLUGIN_NAME needs to be consistent with your secret created earlier.

    NOTE2: The releaseNamespace needs to be the same as to where kube-monitoring resides. By default this is kube-monitoring.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Plugin
    metadata:
      name: $YOUR_CLUSTER_NAME
    spec:
      pluginDefinition: thanos
      disabled: false
      clusterName: $YOUR_CLUSTER_NAME 
      releaseNamespace: kube-monitoring
    

    [OPTIONAL] Handling your Prometheus and Thanos Stores.

    Default Prometheus and Thanos Endpoint

    Thanos Query automatically adds the Prometheus and Thanos endpoints. If you just have a single Prometheus with Thanos enabled, this will work out of the box. Details are in the next two chapters. See Standalone Query for your own configuration.

    Prometheus Endpoint

    Thanos Query checks for a service named prometheus-operated in the same namespace with the gRPC port 10901 available. The CLI option looks like this and is configured in the Plugin itself:

    --store=prometheus-operated:10901

    Thanos Endpoint

    Thanos Query also checks for a Thanos Store endpoint named like releaseName-store. The associated command-line flag for this parameter looks like:

    --store=thanos-kube-store:10901

    If you just have one occurrence of this Thanos plugin deployed, the default option will work and nothing else is needed.

    Standalone Query

    Standalone Query

    In case you want to achieve a setup like the above and have an overarching Thanos Query running against multiple Stores, you can set it to standalone and add your own store list. Set up your Plugin like this:

    spec:
      optionsValues:
      - name: thanos.query.standalone
        value: true
    

    This would enable you to either:

    • query multiple stores with a single Query

      spec:
        optionsValues:
        - name: thanos.query.stores
          value:
            - thanos-kube-1-store:10901 
            - thanos-kube-2-store:10901 
            - kube-monitoring-1-prometheus:10901 
            - kube-monitoring-2-prometheus:10901 
      
    • query multiple Thanos Queries with a single Query. Note that there is no -store suffix in this case.

      spec:
        optionsValues:
        - name: thanos.query.stores
          value:
            - thanos-kube-1:10901
            - thanos-kube-2:10901
      

    Operations

    Thanos Compactor

    If you deploy the plugin with the default values, Thanos compactor will be shipped too and use the same secret ($THANOS_PLUGIN_NAME-metrics-objectstore) to retrieve, compact and push back timeseries.

    Based on experience, a 100Gi PVC is used in order not to overload the ephemeral storage of the Kubernetes nodes. Depending on the configured retention and the amount of metrics, this may not be sufficient and larger volumes may be required. In any case, it is always safe to clear the volume of the compactor and increase it if necessary.

    The object storage costs are heavily impacted by how granular the time series are stored (see Downsampling). These are the pre-configured defaults; you can change them as needed:

    raw: 777600s (90d)
    5m: 777600s (90d)
    1h: 157680000s (5y)