Reference

Technical reference documentation for Greenhouse resources

This section contains reference documentation for Greenhouse.

1 - API

Technical reference documentation for Greenhouse API resources

Packages:

greenhouse.sap/v1alpha1

Resource Types:

    Authentication

    (Appears on: OrganizationSpec)

    FieldDescription
    oidc
    OIDCConfig

    OIDCConfig configures the OIDC provider.

    scim
    SCIMConfig

    SCIMConfig configures the SCIM client.

    Cluster

    Cluster is the Schema for the clusters API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    ClusterSpec


    accessMode
    ClusterAccessMode

    AccessMode configures how the cluster is accessed from the Greenhouse operator.

    kubeConfig
    ClusterKubeConfig

    KubeConfig contains specific values for KubeConfig for the cluster.

    status
    ClusterStatus
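
    For orientation, a minimal Cluster manifest using these fields could look like the sketch below; the names, namespace and the accessMode value are illustrative assumptions, not prescribed values.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Cluster
    metadata:
      name: my-remote-cluster
      namespace: my-organization
    spec:
      accessMode: direct        # ClusterAccessMode; "direct" is an assumed value
      kubeConfig:
        maxTokenValidity: 72    # maximum token validity in hours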

    ClusterAccessMode (string alias)

    (Appears on: ClusterSpec)

    ClusterAccessMode configures the access mode to the customer cluster.

    ClusterConditionType (string alias)

    ClusterConditionType is a valid condition of a cluster.

    ClusterKubeConfig

    (Appears on: ClusterSpec)

    ClusterKubeConfig configures kube config values.

    FieldDescription
    maxTokenValidity
    int32

    MaxTokenValidity specifies the maximum duration for which a token remains valid in hours.

    ClusterKubeconfig

    ClusterKubeconfig is the Schema for the clusterkubeconfigs API. ObjectMeta.OwnerReferences is used to link the ClusterKubeconfig to the Cluster. ObjectMeta.Generation is used to detect changes in the ClusterKubeconfig and sync local kubeconfig files. ObjectMeta.Name is designed to be the same as the Cluster name.

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    ClusterKubeconfigSpec


    kubeconfig
    ClusterKubeconfigData
    status
    ClusterKubeconfigStatus

    ClusterKubeconfigAuthInfo

    (Appears on: ClusterKubeconfigAuthInfoItem)

    FieldDescription
    auth-provider
    k8s.io/client-go/tools/clientcmd/api.AuthProviderConfig
    client-certificate-data
    []byte
    client-key-data
    []byte

    ClusterKubeconfigAuthInfoItem

    (Appears on: ClusterKubeconfigData)

    FieldDescription
    name
    string
    user
    ClusterKubeconfigAuthInfo

    ClusterKubeconfigCluster

    (Appears on: ClusterKubeconfigClusterItem)

    FieldDescription
    server
    string
    certificate-authority-data
    []byte

    ClusterKubeconfigClusterItem

    (Appears on: ClusterKubeconfigData)

    FieldDescription
    name
    string
    cluster
    ClusterKubeconfigCluster

    ClusterKubeconfigContext

    (Appears on: ClusterKubeconfigContextItem)

    FieldDescription
    cluster
    string
    user
    string
    namespace
    string

    ClusterKubeconfigContextItem

    (Appears on: ClusterKubeconfigData)

    FieldDescription
    name
    string
    context
    ClusterKubeconfigContext

    ClusterKubeconfigData

    (Appears on: ClusterKubeconfigSpec)

    ClusterKubeconfigData stores the kubeconfig data ready for use with kubectl or other local tooling. It is a simplified version of clientcmdapi.Config: https://pkg.go.dev/k8s.io/client-go/tools/clientcmd/api#Config

    FieldDescription
    kind
    string
    apiVersion
    string
    clusters
    []ClusterKubeconfigClusterItem
    users
    []ClusterKubeconfigAuthInfoItem
    contexts
    []ClusterKubeconfigContextItem
    current-context
    string
    preferences
    ClusterKubeconfigPreferences

    ClusterKubeconfigPreferences

    (Appears on: ClusterKubeconfigData)

    ClusterKubeconfigSpec

    (Appears on: ClusterKubeconfig)

    ClusterKubeconfigSpec stores the kubeconfig data for the cluster. The idea is to use the kubeconfig data locally with minimum effort (with local tools or plain kubectl): kubectl get cluster-kubeconfig $NAME -o yaml | yq -y .spec.kubeconfig

    FieldDescription
    kubeconfig
    ClusterKubeconfigData

    ClusterKubeconfigStatus

    (Appears on: ClusterKubeconfig)

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    ClusterOptionOverride

    (Appears on: PluginPresetSpec)

    ClusterOptionOverride defines which plugin option should be overridden in which cluster.

    FieldDescription
    clusterName
    string
    overrides
    []PluginOptionValue

    ClusterPluginDefinition

    ClusterPluginDefinition is the Schema for the clusterplugindefinitions API.

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginDefinitionSpec


    displayName
    string

    DisplayName provides a human-readable label for the pluginDefinition.

    description
    string

    Description provides additional details of the pluginDefinition.

    helmChart
    HelmChartReference

    HelmChart specifies where the Helm Chart for this pluginDefinition can be found.

    uiApplication
    UIApplicationReference

    UIApplication specifies a reference to a UI application

    options
    []PluginOption

    RequiredValues is a list of values required to create an instance of this PluginDefinition.

    version
    string

    Version of this pluginDefinition

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI. Defaults to alphabetical sorting if not provided or on conflict.

    icon
    string

    Icon specifies the icon to be used for this plugin in the Greenhouse UI. Icons can be either: - A string representing a juno icon in camel case from this list: https://github.com/sapcc/juno/blob/main/libs/juno-ui-components/src/components/Icon/Icon.component.js#L6-L52 - A publicly accessible image reference to a .png file. Will be displayed 100x100px

    docMarkDownUrl
    string

    DocMarkDownUrl specifies the URL to the markdown documentation file for this plugin. Source needs to allow all CORS origins.

    status
    ClusterPluginDefinitionStatus

    ClusterPluginDefinitionStatus

    (Appears on: ClusterPluginDefinition)

    ClusterPluginDefinitionStatus defines the observed state of ClusterPluginDefinition.

    ClusterSpec

    (Appears on: Cluster)

    ClusterSpec defines the desired state of the Cluster.

    FieldDescription
    accessMode
    ClusterAccessMode

    AccessMode configures how the cluster is accessed from the Greenhouse operator.

    kubeConfig
    ClusterKubeConfig

    KubeConfig contains specific values for KubeConfig for the cluster.

    ClusterStatus

    (Appears on: Cluster)

    ClusterStatus defines the observed state of Cluster

    FieldDescription
    kubernetesVersion
    string

    KubernetesVersion reflects the detected Kubernetes version of the cluster.

    bearerTokenExpirationTimestamp
    Kubernetes meta/v1.Time

    BearerTokenExpirationTimestamp reflects the expiration timestamp of the bearer token used to access the cluster.

    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Cluster.

    nodes
    map[string]NodeStatus

    Nodes provides a map of cluster node names to node statuses

    HelmChartReference

    (Appears on: PluginDefinitionSpec, PluginStatus)

    HelmChartReference references a Helm Chart in a chart repository.

    FieldDescription
    name
    string

    Name of the HelmChart chart.

    repository
    string

    Repository of the HelmChart chart.

    version
    string

    Version of the HelmChart chart.

    HelmReleaseStatus

    (Appears on: PluginStatus)

    HelmReleaseStatus reflects the status of a Helm release.

    FieldDescription
    status
    string

    Status is the status of a HelmChart release.

    firstDeployed
    Kubernetes meta/v1.Time

    FirstDeployed is the timestamp of the first deployment of the release.

    lastDeployed
    Kubernetes meta/v1.Time

    LastDeployed is the timestamp of the last deployment of the release.

    pluginOptionChecksum
    string

    PluginOptionChecksum is the checksum of plugin option values.

    diff
    string

    Diff contains the difference between the deployed helm chart and the helm chart in the last reconciliation

    ManagedPluginStatus

    (Appears on: PluginPresetStatus)

    ManagedPluginStatus defines the Ready condition of a managed Plugin identified by its name.

    FieldDescription
    pluginName
    string
    readyCondition
    Greenhouse meta/v1alpha1.Condition

    NodeStatus

    (Appears on: ClusterStatus)

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    We mirror the node conditions here for faster reference

    ready
    bool

    Fast track to the node ready condition.

    OIDCConfig

    (Appears on: Authentication)

    FieldDescription
    issuer
    string

    Issuer is the URL of the identity service.

    redirectURI
    string

    RedirectURI is the redirect URI to be used for the OIDC flow against the upstream IdP. If none is specified, the Greenhouse ID proxy will be used.

    clientIDReference
    SecretKeyReference

    ClientIDReference references the Kubernetes secret containing the client id.

    clientSecretReference
    SecretKeyReference

    ClientSecretReference references the Kubernetes secret containing the client secret.

    oauth2ClientRedirectURIs
    []string

    OAuth2ClientRedirectURIs are a registered set of redirect URIs. When redirecting from the idproxy to the client application, the URI requested to redirect to must be contained in this list.

    Organization

    Organization is the Schema for the organizations API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    OrganizationSpec


    displayName
    string

    DisplayName is an optional name for the organization to be displayed in the Greenhouse UI. Defaults to a normalized version of metadata.name.

    authentication
    Authentication

    Authentication configures the organization's authentication mechanism.

    description
    string

    Description provides additional details of the organization.

    mappedOrgAdminIdPGroup
    string

    MappedOrgAdminIDPGroup is the IDP group ID identifying org admins

    status
    OrganizationStatus

    OrganizationSpec

    (Appears on: Organization)

    OrganizationSpec defines the desired state of Organization

    FieldDescription
    displayName
    string

    DisplayName is an optional name for the organization to be displayed in the Greenhouse UI. Defaults to a normalized version of metadata.name.

    authentication
    Authentication

    Authentication configures the organization's authentication mechanism.

    description
    string

    Description provides additional details of the organization.

    mappedOrgAdminIdPGroup
    string

    MappedOrgAdminIDPGroup is the IDP group ID identifying org admins
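
    A hedged sketch of an Organization that combines these fields with the Authentication, OIDCConfig, SCIMConfig and SecretKeyReference types described above; all names, URLs and secret references are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Organization
    metadata:
      name: my-org
    spec:
      displayName: My Organization
      description: Example organization
      mappedOrgAdminIdPGroup: MY_ORG_ADMIN_IDP_GROUP
      authentication:
        oidc:
          issuer: https://idp.example.com
          clientIDReference:          # SecretKeyReference
            name: my-org-oidc
            key: clientID
          clientSecretReference:
            name: my-org-oidc
            key: clientSecret
        scim:
          baseURL: https://scim.example.com
          authType: basic             # assumed value for scim.AuthType
          basicAuthUser:
            secret:
              name: my-org-scim
              key: username
          basicAuthPw:
            secret:
              name: my-org-scim
              key: password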

    OrganizationStatus

    (Appears on: Organization)

    OrganizationStatus defines the observed state of an Organization

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Organization.

    Plugin

    Plugin is the Schema for the plugins API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginSpec


    pluginDefinition
    string

    PluginDefinition is the name of the PluginDefinition this instance is for.

    displayName
    string

    DisplayName is an optional name for the Plugin to be displayed in the Greenhouse UI. This is especially helpful to distinguish multiple instances of a PluginDefinition in the same context. Defaults to a normalized version of metadata.name.

    optionValues
    []PluginOptionValue

    Values are the values for a PluginDefinition instance.

    clusterName
    string

    ClusterName is the name of the cluster the plugin is deployed to. If not set, the plugin is deployed to the greenhouse cluster.

    releaseNamespace
    string

    ReleaseNamespace is the namespace in the remote cluster to which the backend is deployed. Defaults to the Greenhouse managed namespace if not set.

    releaseName
    string

    ReleaseName is the name of the helm release in the remote cluster to which the backend is deployed. If the Plugin was already deployed, the Plugin’s name is used as the release name. If this Plugin is newly created, the releaseName is defaulted to the PluginDefinition’s HelmChart name.

    status
    PluginStatus

    PluginDefinition

    PluginDefinition is the Schema for the PluginDefinitions API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginDefinitionSpec


    displayName
    string

    DisplayName provides a human-readable label for the pluginDefinition.

    description
    string

    Description provides additional details of the pluginDefinition.

    helmChart
    HelmChartReference

    HelmChart specifies where the Helm Chart for this pluginDefinition can be found.

    uiApplication
    UIApplicationReference

    UIApplication specifies a reference to a UI application

    options
    []PluginOption

    RequiredValues is a list of values required to create an instance of this PluginDefinition.

    version
    string

    Version of this pluginDefinition

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI. Defaults to alphabetical sorting if not provided or on conflict.

    icon
    string

    Icon specifies the icon to be used for this plugin in the Greenhouse UI. Icons can be either: - A string representing a juno icon in camel case from this list: https://github.com/sapcc/juno/blob/main/libs/juno-ui-components/src/components/Icon/Icon.component.js#L6-L52 - A publicly accessible image reference to a .png file. Will be displayed 100x100px

    docMarkDownUrl
    string

    DocMarkDownUrl specifies the URL to the markdown documentation file for this plugin. Source needs to allow all CORS origins.

    status
    PluginDefinitionStatus

    PluginDefinitionSpec

    (Appears on: ClusterPluginDefinition, PluginDefinition)

    PluginDefinitionSpec defines the desired state of PluginDefinitionSpec

    FieldDescription
    displayName
    string

    DisplayName provides a human-readable label for the pluginDefinition.

    description
    string

    Description provides additional details of the pluginDefinition.

    helmChart
    HelmChartReference

    HelmChart specifies where the Helm Chart for this pluginDefinition can be found.

    uiApplication
    UIApplicationReference

    UIApplication specifies a reference to a UI application

    options
    []PluginOption

    RequiredValues is a list of values required to create an instance of this PluginDefinition.

    version
    string

    Version of this pluginDefinition

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI. Defaults to alphabetical sorting if not provided or on conflict.

    icon
    string

    Icon specifies the icon to be used for this plugin in the Greenhouse UI. Icons can be either: - A string representing a juno icon in camel case from this list: https://github.com/sapcc/juno/blob/main/libs/juno-ui-components/src/components/Icon/Icon.component.js#L6-L52 - A publicly accessible image reference to a .png file. Will be displayed 100x100px

    docMarkDownUrl
    string

    DocMarkDownUrl specifies the URL to the markdown documentation file for this plugin. Source needs to allow all CORS origins.
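
    Putting the PluginDefinitionSpec fields together, a PluginDefinition could be sketched as follows; the chart coordinates and the single option are illustrative assumptions.

    apiVersion: greenhouse.sap/v1alpha1
    kind: PluginDefinition
    metadata:
      name: my-plugin
    spec:
      displayName: My Plugin
      description: Example plugin definition
      version: 1.0.0
      helmChart:                      # HelmChartReference
        name: my-chart
        repository: oci://registry.example.com/charts
        version: 1.2.3
      options:
        - name: replicaCount          # PluginOption
          displayName: Replica count
          description: Number of replicas to deploy
          type: int                   # assumed PluginOptionType value
          required: false
          default: 1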

    PluginDefinitionStatus

    (Appears on: PluginDefinition)

    PluginDefinitionStatus defines the observed state of PluginDefinition

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Plugin.

    PluginOption

    (Appears on: PluginDefinitionSpec)

    FieldDescription
    name
    string

    Name/Key of the config option.

    default
    Kubernetes apiextensions/v1.JSON
    (Optional)

    Default provides a default value for the option

    description
    string

    Description provides a human-readable text for the value as shown in the UI.

    displayName
    string

    DisplayName provides a human-readable label for the configuration option

    required
    bool

    Required indicates that this config option is required

    type
    PluginOptionType

    Type of this configuration option.

    regex
    string

    Regex specifies a match rule for validating configuration options.

    PluginOptionType (string alias)

    (Appears on: PluginOption)

    PluginOptionType specifies the type of PluginOption.

    PluginOptionValue

    (Appears on: ClusterOptionOverride, PluginSpec)

    PluginOptionValue is the value for a PluginOption.

    FieldDescription
    name
    string

    Name of the values.

    value
    Kubernetes apiextensions/v1.JSON

    Value is the actual value in plain text.

    valueFrom
    ValueFromSource

    ValueFrom references a potentially confidential value in another source.
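
    To illustrate both variants, a hedged fragment of a Plugin's optionValues is shown below; the option names and the referenced secret are assumptions.

    optionValues:
      - name: replicaCount
        value: 2                      # plain JSON value
      - name: apiToken
        valueFrom:                    # confidential value resolved from a secret
          secret:                     # SecretKeyReference
            name: my-plugin-secrets
            key: apiToken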

    PluginPreset

    PluginPreset is the Schema for the PluginPresets API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    PluginPresetSpec


    plugin
    PluginSpec

    PluginSpec is the spec of the plugin to be deployed by the PluginPreset.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the clusters the plugin bundle should be deployed to.

    clusterOptionOverrides
    []ClusterOptionOverride

    ClusterOptionOverrides define plugin option values that the PluginPreset overrides per cluster.

    status
    PluginPresetStatus

    PluginPresetSpec

    (Appears on: PluginPreset)

    PluginPresetSpec defines the desired state of PluginPreset

    FieldDescription
    plugin
    PluginSpec

    PluginSpec is the spec of the plugin to be deployed by the PluginPreset.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the clusters the plugin bundle should be deployed to.

    clusterOptionOverrides
    []ClusterOptionOverride

    ClusterOptionOverrides define plugin option values that the PluginPreset overrides per cluster.
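
    A sketch of a PluginPreset combining these fields; the plugin definition, the label selector and the override values are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: PluginPreset
    metadata:
      name: my-preset
      namespace: my-organization
    spec:
      plugin:                         # PluginSpec deployed to every matching cluster
        pluginDefinition: my-plugin
        releaseNamespace: my-namespace
        optionValues:
          - name: replicaCount
            value: 1
      clusterSelector:                # deploy to all clusters carrying this label
        matchLabels:
          environment: production
      clusterOptionOverrides:         # per-cluster overrides (ClusterOptionOverride)
        - clusterName: cluster-a
          overrides:
            - name: replicaCount
              value: 3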

    PluginPresetStatus

    (Appears on: PluginPreset)

    PluginPresetStatus defines the observed state of PluginPreset

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    StatusConditions contain the different conditions that constitute the status of the PluginPreset.

    pluginStatuses
    []ManagedPluginStatus

    PluginStatuses contains statuses of Plugins managed by the PluginPreset.

    availablePlugins
    int

    AvailablePlugins is the number of available Plugins managed by the PluginPreset.

    readyPlugins
    int

    ReadyPlugins is the number of ready Plugins managed by the PluginPreset.

    failedPlugins
    int

    FailedPlugins is the number of failed Plugins managed by the PluginPreset.

    PluginSpec

    (Appears on: Plugin, PluginPresetSpec)

    PluginSpec defines the desired state of Plugin

    FieldDescription
    pluginDefinition
    string

    PluginDefinition is the name of the PluginDefinition this instance is for.

    displayName
    string

    DisplayName is an optional name for the Plugin to be displayed in the Greenhouse UI. This is especially helpful to distinguish multiple instances of a PluginDefinition in the same context. Defaults to a normalized version of metadata.name.

    optionValues
    []PluginOptionValue

    Values are the values for a PluginDefinition instance.

    clusterName
    string

    ClusterName is the name of the cluster the plugin is deployed to. If not set, the plugin is deployed to the greenhouse cluster.

    releaseNamespace
    string

    ReleaseNamespace is the namespace in the remote cluster to which the backend is deployed. Defaults to the Greenhouse managed namespace if not set.

    releaseName
    string

    ReleaseName is the name of the helm release in the remote cluster to which the backend is deployed. If the Plugin was already deployed, the Plugin’s name is used as the release name. If this Plugin is newly created, the releaseName is defaulted to the PluginDefinition’s HelmChart name.

    PluginStatus

    (Appears on: Plugin)

    PluginStatus defines the observed state of Plugin

    FieldDescription
    helmReleaseStatus
    HelmReleaseStatus

    HelmReleaseStatus reflects the status of the latest HelmChart release. This is only configured if the pluginDefinition is backed by HelmChart.

    version
    string

    Version contains the latest pluginDefinition version the config was last applied with successfully.

    helmChart
    HelmChartReference

    HelmChart contains a reference to the helm chart used for the deployed pluginDefinition version.

    uiApplication
    UIApplicationReference

    UIApplication contains a reference to the frontend that is used for the deployed pluginDefinition version.

    weight
    int32

    Weight configures the order in which Plugins are shown in the Greenhouse UI.

    description
    string

    Description provides additional details of the plugin.

    exposedServices
    map[string]Service

    ExposedServices provides an overview of the Plugins services that are centrally exposed. It maps the exposed URL to the service found in the manifest.

    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    StatusConditions contain the different conditions that constitute the status of the Plugin.

    PropagationStatus

    (Appears on: TeamRoleBindingStatus)

    PropagationStatus defines the observed state of the TeamRoleBinding’s associated rbacv1 resources on a Cluster

    FieldDescription
    clusterName
    string

    ClusterName is the name of the cluster the rbacv1 resources are created on.

    condition
    Greenhouse meta/v1alpha1.Condition

    Condition is the overall Status of the rbacv1 resources created on the cluster

    SCIMConfig

    (Appears on: Authentication)

    FieldDescription
    baseURL
    string

    URL to the SCIM server.

    authType
    github.com/cloudoperators/greenhouse/internal/scim.AuthType

    AuthType defines the possible authentication type.

    basicAuthUser
    ValueFromSource

    User to be used for basic authentication.

    basicAuthPw
    ValueFromSource

    Password to be used for basic authentication.

    bearerToken
    ValueFromSource

    BearerToken to be used for bearer token authorization

    bearerPrefix
    string

    BearerPrefix defines the prefix to be used for the bearer token.

    bearerHeader
    string

    BearerHeader defines the header to be used for the bearer token.

    SecretKeyReference

    (Appears on: OIDCConfig, ValueFromSource)

    SecretKeyReference specifies the secret and key containing the value.

    FieldDescription
    name
    string

    Name of the secret in the same namespace.

    key
    string

    Key in the secret to select the value from.

    Service

    (Appears on: PluginStatus)

    Service references a Kubernetes service of a Plugin.

    FieldDescription
    namespace
    string

    Namespace is the namespace of the service in the target cluster.

    name
    string

    Name is the name of the service in the target cluster.

    port
    int32

    Port is the port of the service.

    protocol
    string

    Protocol is the protocol of the service.

    Team

    Team is the Schema for the teams API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamSpec


    description
    string

    Description provides additional details of the team.

    mappedIdPGroup
    string

    IdP group ID matching the team.

    joinUrl
    string

    URL to join the IdP group.

    status
    TeamStatus

    TeamRole

    TeamRole is the Schema for the TeamRoles API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamRoleSpec


    rules
    []Kubernetes rbac/v1.PolicyRule

    Rules is a list of rbacv1.PolicyRules used on a managed RBAC (Cluster)Role

    aggregationRule
    Kubernetes rbac/v1.AggregationRule

    AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole on the remote cluster

    labels
    map[string]string

    Labels are applied to the ClusterRole created on the remote cluster. This allows using TeamRoles as part of AggregationRules by other TeamRoles

    status
    TeamRoleStatus

    TeamRoleBinding

    TeamRoleBinding is the Schema for the rolebindings API

    FieldDescription
    metadata
    Kubernetes meta/v1.ObjectMeta
    Refer to the Kubernetes API documentation for the fields of the metadata field.
    spec
    TeamRoleBindingSpec


    teamRoleRef
    string

    TeamRoleRef references a Greenhouse TeamRole by name

    teamRef
    string

    TeamRef references a Greenhouse Team by name

    usernames
    []string

    Usernames defines the list of users to add to the (Cluster-)RoleBindings.

    clusterName
    string

    ClusterName is the name of the cluster the rbacv1 resources are created on.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the Clusters the TeamRoleBinding should be deployed to.

    namespaces
    []string

    Namespaces is a list of namespaces in the Greenhouse Clusters to apply the RoleBinding to. If empty, a ClusterRoleBinding will be created on the remote cluster, otherwise a RoleBinding per namespace.

    createNamespaces
    bool

    CreateNamespaces, when enabled, lets the controller create namespaces for RoleBindings if they do not exist.

    status
    TeamRoleBindingStatus

    TeamRoleBindingSpec

    (Appears on: TeamRoleBinding)

    TeamRoleBindingSpec defines the desired state of a TeamRoleBinding

    FieldDescription
    teamRoleRef
    string

    TeamRoleRef references a Greenhouse TeamRole by name

    teamRef
    string

    TeamRef references a Greenhouse Team by name

    usernames
    []string

    Usernames defines the list of users to add to the (Cluster-)RoleBindings.

    clusterName
    string

    ClusterName is the name of the cluster the rbacv1 resources are created on.

    clusterSelector
    Kubernetes meta/v1.LabelSelector

    ClusterSelector is a label selector to select the Clusters the TeamRoleBinding should be deployed to.

    namespaces
    []string

    Namespaces is a list of namespaces in the Greenhouse Clusters to apply the RoleBinding to. If empty, a ClusterRoleBinding will be created on the remote cluster, otherwise a RoleBinding per namespace.

    createNamespaces
    bool

    CreateNamespaces, when enabled, lets the controller create namespaces for RoleBindings if they do not exist.

    TeamRoleBindingStatus

    (Appears on: TeamRoleBinding)

    TeamRoleBindingStatus defines the observed state of the TeamRoleBinding

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions

    StatusConditions contain the different conditions that constitute the status of the TeamRoleBinding.

    clusters
    []PropagationStatus

    PropagationStatus is the list of clusters the TeamRoleBinding is applied to

    TeamRoleSpec

    (Appears on: TeamRole)

    TeamRoleSpec defines the desired state of a TeamRole

    FieldDescription
    rules
    []Kubernetes rbac/v1.PolicyRule

    Rules is a list of rbacv1.PolicyRules used on a managed RBAC (Cluster)Role

    aggregationRule
    Kubernetes rbac/v1.AggregationRule

    AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole on the remote cluster

    labels
    map[string]string

    Labels are applied to the ClusterRole created on the remote cluster. This allows using TeamRoles as part of AggregationRules by other TeamRoles
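
    For example, a TeamRole granting read access could be sketched as below; the rules and the label are illustrative assumptions.

    apiVersion: greenhouse.sap/v1alpha1
    kind: TeamRole
    metadata:
      name: cluster-viewer
      namespace: my-organization
    spec:
      rules:                          # rbacv1.PolicyRules for the managed (Cluster)Role
        - apiGroups: [""]
          resources: ["pods", "services", "configmaps"]
          verbs: ["get", "list", "watch"]
      labels:                         # applied to the ClusterRole on the remote cluster
        example.com/aggregate-to-viewer: "true"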

    TeamRoleStatus

    (Appears on: TeamRole)

    TeamRoleStatus defines the observed state of a TeamRole

    TeamSpec

    (Appears on: Team)

    TeamSpec defines the desired state of Team

    FieldDescription
    description
    string

    Description provides additional details of the team.

    mappedIdPGroup
    string

    IdP group ID matching the team.

    joinUrl
    string

    URL to join the IdP group.
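
    A minimal Team sketch using these fields; the group ID and URL are placeholders.

    apiVersion: greenhouse.sap/v1alpha1
    kind: Team
    metadata:
      name: my-team
      namespace: my-organization
    spec:
      description: Team operating the example service
      mappedIdPGroup: MY_IDP_GROUP_ID
      joinUrl: https://idp.example.com/groups/my-team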

    TeamStatus

    (Appears on: Team)

    TeamStatus defines the observed state of Team

    FieldDescription
    statusConditions
    Greenhouse meta/v1alpha1.StatusConditions
    members
    []User

    UIApplicationReference

    (Appears on: PluginDefinitionSpec, PluginStatus)

    UIApplicationReference references the UI pluginDefinition to use.

    FieldDescription
    url
    string

    URL specifies the url to a built javascript asset. By default, assets are loaded from the Juno asset server using the provided name and version.

    name
    string

    Name of the UI application.

    version
    string

    Version of the frontend application.

    User

    (Appears on: TeamStatus)

    User specifies a human person.

    FieldDescription
    id
    string

    ID is the unique identifier of the user.

    firstName
    string

    FirstName of the user.

    lastName
    string

    LastName of the user.

    email
    string

    Email of the user.

    ValueFromSource

    (Appears on: PluginOptionValue, SCIMConfig)

    ValueFromSource is a valid source for a value.

    FieldDescription
    secret
    SecretKeyReference

    Secret references the secret containing the value.

    greenhouse.sap/v1alpha2

    Resource Types:

      ClusterSelector

      (Appears on: TeamRoleBindingSpec)

      ClusterSelector specifies a selector for clusters by name or by label

      FieldDescription
      clusterName
      string

      Name of a single Cluster to select.

      labelSelector
      Kubernetes meta/v1.LabelSelector

      LabelSelector is a label query over a set of Clusters.

      PropagationStatus

      (Appears on: TeamRoleBindingStatus)

      PropagationStatus defines the observed state of the TeamRoleBinding’s associated rbacv1 resources on a Cluster

      FieldDescription
      clusterName
      string

      ClusterName is the name of the cluster the rbacv1 resources are created on.

      condition
      Greenhouse meta/v1alpha1.Condition

      Condition is the overall Status of the rbacv1 resources created on the cluster

      TeamRoleBinding

      TeamRoleBinding is the Schema for the rolebindings API

      FieldDescription
      metadata
      Kubernetes meta/v1.ObjectMeta
      Refer to the Kubernetes API documentation for the fields of the metadata field.
      spec
      TeamRoleBindingSpec


      teamRoleRef
      string

      TeamRoleRef references a Greenhouse TeamRole by name

      teamRef
      string

      TeamRef references a Greenhouse Team by name

      usernames
      []string

      Usernames defines the list of users to add to the (Cluster-)RoleBindings.

      clusterSelector
      ClusterSelector

      ClusterSelector is used to select a Cluster or Clusters the TeamRoleBinding should be deployed to.

      namespaces
      []string

      Namespaces is a list of namespaces in the Greenhouse Clusters to apply the RoleBinding to. If empty, a ClusterRoleBinding will be created on the remote cluster, otherwise a RoleBinding per namespace.

      createNamespaces
      bool

      CreateNamespaces, when enabled, lets the controller create namespaces for RoleBindings if they do not exist.

      status
      TeamRoleBindingStatus

      TeamRoleBindingSpec

      (Appears on: TeamRoleBinding)

      TeamRoleBindingSpec defines the desired state of a TeamRoleBinding

      FieldDescription
      teamRoleRef
      string

      TeamRoleRef references a Greenhouse TeamRole by name

      teamRef
      string

      TeamRef references a Greenhouse Team by name

      usernames
      []string

      Usernames defines the list of users to add to the (Cluster-)RoleBindings.

      clusterSelector
      ClusterSelector

      ClusterSelector is used to select a Cluster or Clusters the TeamRoleBinding should be deployed to.

      namespaces
      []string

      Namespaces is a list of namespaces in the Greenhouse Clusters to apply the RoleBinding to. If empty, a ClusterRoleBinding will be created on the remote cluster, otherwise a RoleBinding per namespace.

      createNamespaces
      bool

      CreateNamespaces, when enabled, lets the controller create namespaces for RoleBindings if they do not exist.
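
      A hedged sketch of a v1alpha2 TeamRoleBinding that selects clusters by label; the team, role and label values are placeholders.

      apiVersion: greenhouse.sap/v1alpha2
      kind: TeamRoleBinding
      metadata:
        name: my-team-viewer
        namespace: my-organization
      spec:
        teamRoleRef: cluster-viewer
        teamRef: my-team
        clusterSelector:
          labelSelector:              # alternatively, clusterName selects a single cluster
            matchLabels:
              environment: production
        namespaces:                   # omit to create a ClusterRoleBinding instead
          - my-namespace
        createNamespaces: true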

      TeamRoleBindingStatus

      (Appears on: TeamRoleBinding)

      TeamRoleBindingStatus defines the observed state of the TeamRoleBinding

      FieldDescription
      statusConditions
      Greenhouse meta/v1alpha1.StatusConditions

      StatusConditions contain the different conditions that constitute the status of the TeamRoleBinding.

      clusters
      []PropagationStatus

      PropagationStatus is the list of clusters the TeamRoleBinding is applied to

      This page was automatically generated with gen-crd-api-reference-docs

      2 - Plugin Catalog

      Plugin Catalog overview

      This section provides an overview of the available PluginDefinitions in Greenhouse.

      2.1 - Alerts

      Learn more about the alerts plugin. Use it to activate Prometheus alert management for your Greenhouse organisation.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      This Plugin includes a preconfigured Prometheus Alertmanager, which is deployed and managed via the Prometheus Operator, and Supernova, an advanced user interface for Prometheus Alertmanager. Certificates are automatically generated to enable sending alerts from Prometheus to Alertmanager. These alerts can also be sent as Slack notifications with a provided set of notification templates.

      Components included in this Plugin:

      This Plugin is usually deployed alongside the kube-monitoring Plugin and does not deploy the Prometheus Operator itself. However, if you intend to use it stand-alone, you need to explicitly enable the deployment of the Prometheus Operator, otherwise it will not work. This can be done in the configuration interface of the plugin.

      Alerts Plugin Architecture

      Disclaimer

      This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.

      The Plugin is a deeply configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates.

      It is intended as a platform that can be extended by following the guide.

      Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

      Quick start

      This guide provides a quick and straightforward way to use alerts as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
      • kube-monitoring plugin (which brings in the Prometheus Operator), OR, when running stand-alone, the deployment of the Prometheus Operator enabled in this plugin

      Step 1:

      You can install the alerts package in your cluster with Helm manually or let the Greenhouse platform manage it for you automatically. For the latter, you can either:

      1. Go to Greenhouse dashboard and select the Alerts Plugin from the catalog. Specify the cluster and required option values.
      2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

      Step 2:

      After the installation, you can access the Supernova UI by navigating to the Alerts tab in the Greenhouse dashboard.

      Step 3:

      Greenhouse regularly performs integration tests that are bundled with alerts. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.

      Configuration

      Prometheus Alertmanager options

      Name | Description | Value
      global.caCert | Additional caCert to add to the CA bundle | ""
      alerts.commonLabels | Labels to apply to all resources | {}
      alerts.defaultRules.create | Creates community Alertmanager alert rules. | true
      alerts.defaultRules.labels | kube-monitoring plugin: <plugin.name> to evaluate Alertmanager rules. | {}
      alerts.alertmanager.enabled | Deploy Prometheus Alertmanager | true
      alerts.alertmanager.annotations | Annotations for Alertmanager | {}
      alerts.alertmanager.config | Alertmanager configuration directives. | {}
      alerts.alertmanager.ingress.enabled | Deploy Alertmanager Ingress | false
      alerts.alertmanager.ingress.hosts | Must be provided if Ingress is enabled. | []
      alerts.alertmanager.ingress.tls | Must be a valid TLS configuration for Alertmanager Ingress. Supernova UI passes the client certificate to retrieve alerts. | {}
      alerts.alertmanager.ingress.ingressClassname | Specifies the ingress-controller | nginx
      alerts.alertmanager.servicemonitor.additionalLabels | kube-monitoring plugin: <plugin.name> to scrape Alertmanager metrics. | {}
      alerts.alertmanager.alertmanagerConfig.slack.routes[].name | Name of the Slack route. | ""
      alerts.alertmanager.alertmanagerConfig.slack.routes[].channel | Slack channel to post alerts to. Must be defined with slack.webhookURL. | ""
      alerts.alertmanager.alertmanagerConfig.slack.routes[].webhookURL | Slack webhookURL to post alerts to. Must be defined with slack.channel. | ""
      alerts.alertmanager.alertmanagerConfig.slack.routes[].matchers | List of matchers that the alert's labels should match (matchType, name, regex, value). | []
      alerts.alertmanager.alertmanagerConfig.webhook.routes[].name | Name of the webhook route. | ""
      alerts.alertmanager.alertmanagerConfig.webhook.routes[].url | Webhook URL to post alerts to. | ""
      alerts.alertmanager.alertmanagerConfig.webhook.routes[].matchers | List of matchers that the alert's labels should match (matchType, name, regex, value). | []
      alerts.alertmanager.alertmanagerSpec.alertmanagerConfiguration | AlertmanagerConfig to be used as the top-level configuration | false

      cert-manager options

      Name | Description | Value
      alerts.certManager.enabled | Creates jetstack/cert-manager resources to generate Issuer and Certificates for Prometheus authentication. | true
      alerts.certManager.rootCert.duration | Duration, how long the root certificate is valid. | "5y"
      alerts.certManager.admissionCert.duration | Duration, how long the admission certificate is valid. | "1y"
      alerts.certManager.issuerRef.name | Name of the existing Issuer to use. | ""

      Supernova options

      theme: Override the default theme. Possible values are "theme-light" or "theme-dark" (default)

      endpoint: Alertmanager API Endpoint URL /api/v2. Should be one of alerts.alertmanager.ingress.hosts

      silenceExcludedLabels: SilenceExcludedLabels are labels that are initially excluded by default when creating a silence. However, they can be added if necessary when utilizing the advanced options in the silence form. The labels must be an array of strings. Example: ["pod", "pod_name", "instance"]

      filterLabels: FilterLabels are the labels shown in the filter dropdown, enabling users to filter alerts based on specific criteria. The ‘Status’ label serves as a default filter, automatically computed from the alert status attribute and will be not overwritten. The labels must be an array of strings. Example: ["app", "cluster", "cluster_type"]

      predefinedFilters: PredefinedFilters are filters applied in the UI to differentiate between contexts by matching alerts with regular expressions. They are loaded by default when the application is loaded. The format is a list of objects including name, displayName and matchers (a map of label keys to matching regular expressions). Example:

      [
        {
          "name": "prod",
          "displayName": "Productive System",
          "matchers": {
            "region": "^prod-.*"
          }
        }
      ]
      

      silenceTemplates: SilenceTemplates are used in the Modal (schedule silence) to allow pre-defined silences to be used for scheduled maintenance windows. The format consists of a list of objects including description, editable_labels (array of strings specifying the labels that users can modify), fixed_labels (map containing fixed labels and their corresponding values), status, and title. Example:

      "silenceTemplates": [
          {
            "description": "Description of the silence template",
            "editable_labels": ["region"],
            "fixed_labels": {
              "name": "Marvin",
            },
            "status": "active",
            "title": "Silence"
          }
        ]
      

      Managing Alertmanager configuration

      By default, the Alertmanager instances will start with a minimal configuration which isn’t really useful since it doesn’t send any notification when receiving alerts.

      You have multiple options to provide the Alertmanager configuration:

      1. You can use alerts.alertmanager.config to define an Alertmanager configuration. Example below.
      config:
        global:
          resolve_timeout: 5m
        inhibit_rules:
          - source_matchers:
              - "severity = critical"
            target_matchers:
              - "severity =~ warning|info"
            equal:
              - "namespace"
              - "alertname"
          - source_matchers:
              - "severity = warning"
            target_matchers:
              - "severity = info"
            equal:
              - "namespace"
              - "alertname"
          - source_matchers:
              - "alertname = InfoInhibitor"
            target_matchers:
              - "severity = info"
            equal:
              - "namespace"
        route:
          group_by: ["namespace"]
          group_wait: 30s
          group_interval: 5m
          repeat_interval: 12h
          receiver: "null"
          routes:
            - receiver: "null"
              matchers:
                - alertname =~ "InfoInhibitor|Watchdog"
        receivers:
          - name: "null"
        templates:
          - "/etc/alertmanager/config/*.tmpl"
      
      2. You can discover AlertmanagerConfig objects. The spec.alertmanagerConfigSelector is always set to matchLabels: plugin: <name> to tell the operator which AlertmanagerConfig objects should be selected and merged with the main Alertmanager configuration. Note: The default strategy for an AlertmanagerConfig object to match alerts is OnNamespace.
      apiVersion: monitoring.coreos.com/v1alpha1
      kind: AlertmanagerConfig
      metadata:
        name: config-example
        labels:
          alertmanagerConfig: example
          pluginDefinition: alerts-example
      spec:
        route:
          groupBy: ["job"]
          groupWait: 30s
          groupInterval: 5m
          repeatInterval: 12h
          receiver: "webhook"
        receivers:
          - name: "webhook"
            webhookConfigs:
              - url: "http://example.com/"
      
      3. You can use alerts.alertmanager.alertmanagerSpec.alertmanagerConfiguration to reference an AlertmanagerConfig object in the same namespace, which defines the main Alertmanager configuration.
      # Example selecting a global AlertmanagerConfig
      alertmanagerConfiguration:
        name: global-alertmanager-configuration
      

      TLS Certificate Requirement

      Greenhouse-onboarded Prometheus installations need to communicate with the Alertmanager component to enable processing of alerts. If an Alertmanager Ingress is enabled, a TLS certificate has to be configured and trusted by Alertmanager to secure this communication. To enable automatic self-signed TLS certificate provisioning via cert-manager, set the alerts.certManager.enabled value to true.

      Note: A prerequisite of this feature is an installed jetstack/cert-manager, which can be provided via the Greenhouse cert-manager Plugin.
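
      As a hedged illustration, the corresponding Plugin option value for enabling this feature would be a fragment like the following; further certificate options (rootCert.duration, issuerRef.name, etc.) are listed in the cert-manager options table above.

      optionValues:
        - name: alerts.certManager.enabled
          value: true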

      Examples

      Deploy alerts with Alertmanager

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: alerts
      spec:
        pluginDefinition: alerts
        disabled: false
        displayName: Alerts
        optionValues:
          - name: alerts.alertmanager.enabled
            value: true
          - name: alerts.alertmanager.ingress.enabled
            value: true
          - name: alerts.alertmanager.ingress.hosts
            value:
              - alertmanager.dns.example.com
          - name: alerts.alertmanager.ingress.tls
            value:
              - hosts:
                  - alertmanager.dns.example.com
                secretName: tls-alertmanager-dns-example-com
          - name: alerts.alertmanagerConfig.slack.routes
            value:
              - channel: slack-warning-channel
                webhookURL: https://hooks.slack.com/services/some-id
                matchers:
                  - name: severity
                    matchType: "="
                    value: "warning"
              - channel: slack-critical-channel
                webhookURL: https://hooks.slack.com/services/some-id
                matchers:
                  - name: severity
                    matchType: "="
                    value: "critical"
          - name: alerts.alertmanagerConfig.webhook.routes
            value:
              - name: webhook-route
                url: https://some-webhook-url
                matchers:
                  - name: alertname
                    matchType: "=~"
                    value: ".*"
          - name: alerts.alertmanager.serviceMonitor.additionalLabels
            value:
              plugin: kube-monitoring
          - name: alerts.defaultRules.create
            value: true
          - name: alerts.defaultRules.labels
            value:
              plugin: kube-monitoring
          - name: endpoint
            value: https://alertmanager.dns.example.com/api/v2
          - name: filterLabels
            value:
              - job
              - severity
              - status
          - name: silenceExcludedLabels
            value:
              - pod
              - pod_name
              - instance
      

      Deploy alerts without Alertmanager (Bring your own Alertmanager - Supernova UI only)

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: alerts
      spec:
        pluginDefinition: alerts
        disabled: false
        displayName: Alerts
        optionValues:
          - name: alerts.alertmanager.enabled
            value: false
          - name: alerts.alertmanager.ingress.enabled
            value: false
          - name: alerts.defaultRules.create
            value: false
          - name: endpoint
            value: https://alertmanager.dns.example.com/api/v2
          - name: filterLabels
            value:
              - job
              - severity
              - status
          - name: silenceExcludedLabels
            value:
              - pod
              - pod_name
              - instance
      

      2.2 - Audit Logs Plugin

      Learn more about the Audit Logs Plugin. Use it to enable the ingestion, collection and export of telemetry signals (logs and metrics) for your Greenhouse cluster.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      OpenTelemetry is an observability framework and toolkit for creating and managing telemetry data such as metrics, logs and traces. Unlike other observability tools, OpenTelemetry is vendor and tool agnostic, meaning it can be used with a variety of observability backends, including open source tools such as OpenSearch and Prometheus.

      The focus of the Plugin is to provide easy-to-use configurations for common use cases of receiving, processing and exporting telemetry data in Kubernetes. The storage and visualization of the same is intentionally left to other tools.

      Components included in this Plugin:

      Architecture

      OpenTelemetry Architecture

      Note

      It is the intention to add more configuration over time and contributions of your very own configuration is highly appreciated. If you discover bugs or want to add functionality to the Plugin, feel free to create a pull request.

      Quick Start

      This guide provides a quick and straightforward way to use OpenTelemetry for Logs as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
      • For logs, an OpenSearch instance to store them. If you don’t have one, reach out to your observability team to get access to one.
      • We recommend having cert-manager running in the cluster before installing the Logs Plugin.
      • To gather metrics, you must have a Prometheus instance in the onboarded cluster for storage and for managing Prometheus specific CRDs. If you do not have an instance, install the kube-monitoring Plugin first.
      • The Audit Logs Plugin currently requires the OpenTelemetry Operator bundled in the Logs Plugin to be installed in the same cluster beforehand. This is a technical limitation of the Audit Logs Plugin and will be removed in future releases.

      Step 1:

      You can install the Logs package in your cluster with Helm manually or let the Greenhouse platform manage it for you automatically. For the latter, you can either:

      1. Go to Greenhouse dashboard and select the Logs Plugin from the catalog. Specify the cluster and required option values.
      2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

      Step 2:

      The package will deploy the OpenTelemetry collectors and auto-instrumentation of the workload. By default, the package will include a configuration for collecting metrics and logs. The log-collector currently processes data from the following preconfigured receivers:

      • Files via the Filelog Receiver
      • Kubernetes Events from the Kubernetes API server
      • Journald events from systemd journal
      • its own metrics

      Based on the backend selection, the telemetry data will be exported to the backend.

      Failover Connector

      The Logs Plugin comes with a Failover Connector for OpenSearch for two users. The connector will periodically try to establish a stable connection for the preferred user (failover_username_a) and, in case of a failed attempt, will try to establish a connection with the fallback user (failover_username_b). This feature can be used to secure the shipping of logs in case of expiring credentials or password rotation.
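
      A hedged fragment of the corresponding Plugin option values is shown below; the endpoint, index and secret references are placeholders, and the credentials are assumed to be supplied via valueFrom rather than in plain text. The matching password options follow the same pattern.

      optionValues:
        - name: auditLogs.openSearchLogs.endpoint
          value: https://opensearch.example.com:9200
        - name: auditLogs.openSearchLogs.index
          value: audit-logs-example
        - name: auditLogs.openSearchLogs.failover_username_a   # preferred user
          valueFrom:
            secret:
              name: opensearch-credentials
              key: username_a
        - name: auditLogs.openSearchLogs.failover_username_b   # fallback user
          valueFrom:
            secret:
              name: opensearch-credentials
              key: username_b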

      Values

      Key | Type | Default | Description
      auditLogs.cluster | string | nil | Cluster label for Logging
      auditLogs.collectorImage.repository | string | "ghcr.io/cloudoperators/opentelemetry-collector-contrib" | Overrides the default image repository for the OpenTelemetry Collector image.
      auditLogs.collectorImage.tag | string | "5b6e153" | Overrides the default image tag for the OpenTelemetry Collector image.
      auditLogs.customLabels | string | nil | Custom labels to apply to all OpenTelemetry related resources
      auditLogs.openSearchLogs.endpoint | string | nil | Endpoint URL for OpenSearch
      auditLogs.openSearchLogs.failover | object | {"enabled":true} | Activates the failover mechanism for shipping logs using the failover_username_b and failover_password_b credentials in case the credentials failover_username_a and failover_password_a have expired.
      auditLogs.openSearchLogs.failover_password_a | string | nil | Password for OpenSearch endpoint
      auditLogs.openSearchLogs.failover_password_b | string | nil | Second password (as a failover) for OpenSearch endpoint
      auditLogs.openSearchLogs.failover_username_a | string | nil | Username for OpenSearch endpoint
      auditLogs.openSearchLogs.failover_username_b | string | nil | Second username (as a failover) for OpenSearch endpoint
      auditLogs.openSearchLogs.index | string | nil | Name for the OpenSearch index
      auditLogs.prometheus.additionalLabels | object | {} | Label selectors for the Prometheus resources to be picked up by prometheus-operator.
      auditLogs.prometheus.podMonitor | object | {"enabled":false} | Activates the pod-monitoring for the Logs Collector.
      auditLogs.prometheus.rules | object | {"additionalRuleLabels":null,"annotations":{},"create":true,"disabled":[],"labels":{}} | Default rules for monitoring the OpenTelemetry components.
      auditLogs.prometheus.rules.additionalRuleLabels | string | nil | Additional labels for PrometheusRule alerts.
      auditLogs.prometheus.rules.annotations | object | {} | Annotations for PrometheusRules.
      auditLogs.prometheus.rules.create | bool | true | Enables PrometheusRule resources to be created.
      auditLogs.prometheus.rules.disabled | list | [] | PrometheusRules to disable.
      auditLogs.prometheus.rules.labels | object | {} | Labels for PrometheusRules.
      auditLogs.prometheus.serviceMonitor | object | {"enabled":false} | Activates the service-monitoring for the Logs Collector.
      auditLogs.region | string | nil | Region label for Logging
      commonLabels | string | nil | Common labels to apply to all resources

      Examples

      TBD

      2.3 - Cert-manager

      This Plugin provides the cert-manager to automate the management of TLS certificates.

      Configuration

      This section highlights configuration of selected Plugin features.
      All available configuration options are described in the plugin.yaml.

      Ingress shim

      An Ingress resource in Kubernetes configures external access to services in a Kubernetes cluster.
      Securing ingress resources with TLS certificates is a common use-case and the cert-manager can be configured to handle these via the ingress-shim component.
      It can be enabled by deploying an issuer in your organization and setting the following options on this plugin.

      Option | Type | Description
      cert-manager.ingressShim.defaultIssuerName | string | Name of the cert-manager issuer to use for TLS certificates
      cert-manager.ingressShim.defaultIssuerKind | string | Kind of the cert-manager issuer to use for TLS certificates
      cert-manager.ingressShim.defaultIssuerGroup | string | Group of the cert-manager issuer to use for TLS certificates
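
      For illustration, the corresponding Plugin option values might be set as in this hedged fragment; the issuer name is a placeholder, while ClusterIssuer and cert-manager.io are common cert-manager defaults.

      optionValues:
        - name: cert-manager.ingressShim.defaultIssuerName
          value: my-cluster-issuer
        - name: cert-manager.ingressShim.defaultIssuerKind
          value: ClusterIssuer
        - name: cert-manager.ingressShim.defaultIssuerGroup
          value: cert-manager.io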

      2.4 - Decentralized Observer of Policies (Violations)

      This directory contains the Greenhouse plugin for the Decentralized Observer of Policies (DOOP).

      DOOP

      To perform automatic validations on Kubernetes objects, we run a deployment of OPA Gatekeeper in each cluster. This dashboard aggregates all policy violations reported by those Gatekeeper instances.

      2.5 - Designate Ingress CNAME operator (DISCO)

      This Plugin provides the Designate Ingress CNAME operator (DISCO) to automate management of DNS entries in OpenStack Designate for Ingress and Services in Kubernetes.

      2.6 - DigiCert issuer

      This Plugin provides the digicert-issuer, an external Issuer extending the cert-manager with the DigiCert cert-central API.

      2.7 - External DNS

      This Plugin provides the external DNS operator, which synchronizes exposed Kubernetes Services and Ingresses with DNS providers.

      2.8 - Ingress NGINX

      This plugin contains the ingress NGINX controller.

      Example

      To instantiate the plugin create a Plugin like:

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: ingress-nginx
      spec:
        pluginDefinition: ingress-nginx-v4.4.0
        optionValues:
          - name: controller.service.loadBalancerIP
            value: 1.2.3.4
      

      2.9 - Kubernetes Monitoring

      Learn more about the kube-monitoring plugin. Use it to activate Kubernetes monitoring for your Greenhouse cluster.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      Observability is often required for operation and automation of service offerings. To get the insights provided by an application and the container runtime environment, you need telemetry data in the form of metrics or logs sent to backends such as Prometheus or OpenSearch. With the kube-monitoring Plugin, you will be able to cover the metrics part of the observability stack.

      This Plugin includes a pre-configured package of components that help make getting started easy and efficient. At its core, an automated and managed Prometheus installation is provided using the prometheus-operator. This is complemented by Prometheus target configuration for the most common Kubernetes components providing metrics by default. In addition, Prometheus alerting rules and Plutono dashboards curated by Cloud operators are included to provide a comprehensive monitoring solution out of the box.

      kube-monitoring

      Components included in this Plugin:

      Disclaimer

      It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.

      The Plugin is a deeply configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates.

      It is intended as a platform that can be extended by following the guide.

      Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

      Quick start

      This guide provides a quick and straightforward way to use kube-monitoring as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.

      Step 1:

      You can install the kube-monitoring package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:

      1. Go to the Greenhouse dashboard and select the Kubernetes Monitoring plugin from the catalog. Specify the cluster and required option values.
      2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

      Step 2:

      After installation, Greenhouse will provide a generated link to the Prometheus user interface. This is done via the annotation greenhouse.sap/expose: "true" on the Prometheus Service resource.

      Step 3:

      Greenhouse regularly performs integration tests that are bundled with kube-monitoring. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.

      Values

      absent-metrics-operator options

      Key | Type | Default | Description
      absentMetricsOperator.enabled | bool | false | Enable absent-metrics-operator

      Alertmanager options

      Key | Type | Default | Description
      alerts.alertmanagers.hosts | list | [] | List of Alertmanager hosts to send alerts to
      alerts.alertmanagers.tlsConfig.cert | string | "" | TLS certificate for communication with Alertmanager
      alerts.alertmanagers.tlsConfig.key | string | "" | TLS key for communication with Alertmanager
      alerts.enabled | bool | false | To send alerts to Alertmanager

      Blackbox exporter config

      Key | Type | Default | Description
      blackboxExporter.enabled | bool | false | To enable Blackbox Exporter (supported probers: grpc-prober)
      blackboxExporter.extraVolumes | list | - name: blackbox-exporter-tls secret: defaultMode: 420 secretName: <secretName> | TLS secret of the Thanos global instance to mount for probing, mandatory for using Blackbox exporter.

      Global options

      Key | Type | Default | Description
      global.commonLabels | object | {} | Labels to apply to all resources. This can be used to add a support_group or service label to all resources and alerting rules.

      Kubernetes component scraper options

      Key | Type | Default | Description
      kubeMonitoring.coreDns.enabled | bool | true | Component scraping coreDns. Use either this or kubeDns
      kubeMonitoring.kubeApiServer.enabled | bool | true | Component scraping the kube API server
      kubeMonitoring.kubeControllerManager.enabled | bool | false | Component scraping the kube controller manager
      kubeMonitoring.kubeDns.enabled | bool | false | Component scraping kubeDns. Use either this or coreDns
      kubeMonitoring.kubeEtcd.enabled | bool | true | Component scraping etcd
      kubeMonitoring.kubeProxy.enabled | bool | false | Component scraping kube proxy
      kubeMonitoring.kubeScheduler.enabled | bool | false | Component scraping kube scheduler
      kubeMonitoring.kubeStateMetrics.enabled | bool | true | Component scraping kube state metrics
      kubeMonitoring.kubelet.enabled | bool | true | Component scraping the kubelet and kubelet-hosted cAdvisor
      kubeMonitoring.kubernetesServiceMonitors.enabled | bool | true | Flag to disable all the Kubernetes component scrapers
      kubeMonitoring.nodeExporter.enabled | bool | true | Deploy node exporter as a daemonset to all nodes

      Prometheus options

      Key | Type | Default | Description
      kubeMonitoring.prometheus.annotations | object | {} | Annotations for Prometheus
      kubeMonitoring.prometheus.enabled | bool | true | Deploy a Prometheus instance
      kubeMonitoring.prometheus.ingress.enabled | bool | false | Deploy Prometheus Ingress
      kubeMonitoring.prometheus.ingress.hosts | list | [] | Must be provided if Ingress is enabled
      kubeMonitoring.prometheus.ingress.ingressClassname | string | "nginx" | Specifies the ingress-controller
      kubeMonitoring.prometheus.prometheusSpec.additionalArgs | list | [] | Allows setting additional arguments for the Prometheus container
      kubeMonitoring.prometheus.prometheusSpec.additionalScrapeConfigs | string | "" | Next to ScrapeConfig CRD, you can use AdditionalScrapeConfigs, which allows specifying additional Prometheus scrape configurations
      kubeMonitoring.prometheus.prometheusSpec.evaluationInterval | string | "" | Interval between consecutive evaluations
      kubeMonitoring.prometheus.prometheusSpec.externalLabels | object | {} | External labels to add to any time series or alerts when communicating with external systems like Alertmanager
      kubeMonitoring.prometheus.prometheusSpec.logLevel | string | "" | Log level to be configured for Prometheus
      kubeMonitoring.prometheus.prometheusSpec.podMonitorSelector.matchLabels | object | { plugin: <metadata.name> } | PodMonitors to be selected for target discovery.
      kubeMonitoring.prometheus.prometheusSpec.probeSelector.matchLabels | object | { plugin: <metadata.name> } | Probes to be selected for target discovery.
      kubeMonitoring.prometheus.prometheusSpec.retention | string | "" | How long to retain metrics
      kubeMonitoring.prometheus.prometheusSpec.ruleSelector.matchLabels | object | { plugin: <metadata.name> } | PrometheusRules to be selected for target discovery. If {}, select all PrometheusRules
      kubeMonitoring.prometheus.prometheusSpec.scrapeConfigSelector.matchLabels | object | { plugin: <metadata.name> } | scrapeConfigs to be selected for target discovery.
      kubeMonitoring.prometheus.prometheusSpec.scrapeInterval | string | "" | Interval between consecutive scrapes. Defaults to 30s
      kubeMonitoring.prometheus.prometheusSpec.scrapeTimeout | string | "" | Number of seconds to wait for target to respond before erroring
      kubeMonitoring.prometheus.prometheusSpec.serviceMonitorSelector.matchLabels | object | { plugin: <metadata.name> } | ServiceMonitors to be selected for target discovery. If {}, select all ServiceMonitors
      kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources | object | {"requests":{"storage":"50Gi"}} | How large the persistent volume should be to house the Prometheus database. Default 50Gi.
      kubeMonitoring.prometheus.tlsConfig.caCert | string | "Secret" | CA certificate to verify technical clients at Prometheus Ingress

      Prometheus-operator options

      Key | Type | Default | Description
      kubeMonitoring.prometheusOperator.alertmanagerConfigNamespaces | list | [] | Filter namespaces to look for prometheus-operator AlertmanagerConfig resources
      kubeMonitoring.prometheusOperator.alertmanagerInstanceNamespaces | list | [] | Filter namespaces to look for prometheus-operator Alertmanager resources
      kubeMonitoring.prometheusOperator.enabled | bool | true | Manages Prometheus and Alertmanager components
      kubeMonitoring.prometheusOperator.prometheusInstanceNamespaces | list | [] | Filter namespaces to look for prometheus-operator Prometheus resources

      Absent-metrics-operator

      The kube-monitoring Plugin can optionally deploy and configure the absent-metrics-operator to help detect missing or absent metrics in your Prometheus setup. This operator automatically generates alerts when expected metrics are not present, improving observability and alerting coverage.
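
      A minimal sketch of the corresponding option on the kube-monitoring Plugin (only the relevant optionValues entry is shown):

      optionValues:
        - name: absentMetricsOperator.enabled
          value: true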

      Service Discovery

      The kube-monitoring Plugin provides a PodMonitor to automatically discover the Prometheus metrics of the Kubernetes Pods in any Namespace. The PodMonitor is configured to detect the metrics endpoint of the Pods if the following annotations are set:

      metadata:
        annotations:
          greenhouse/scrape: "true"
          greenhouse/target: <kube-monitoring plugin name>
      

      Note: The annotations need to be added manually to have the pod scraped, and the port name needs to match.
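
      For illustration, a hypothetical Pod exposing metrics on a named port could look like the following. The pod name, image, and port name are assumptions; greenhouse/target must reference the name of your kube-monitoring plugin instance.

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-app
        annotations:
          greenhouse/scrape: "true"
          greenhouse/target: kube-monitoring
      spec:
        containers:
          - name: example-app
            image: example.org/example-app:latest
            ports:
              - name: metrics
                containerPort: 8080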

      Examples

      Deploy kube-monitoring into a remote cluster

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: kube-monitoring
      spec:
        pluginDefinition: kube-monitoring
        disabled: false
        optionValues:
          - name: kubeMonitoring.prometheus.prometheusSpec.retention
            value: 30d
          - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
            value: 100Gi
          - name: kubeMonitoring.prometheus.service.labels
            value:
              greenhouse.sap/expose: "true"
          - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
            value:
              cluster: example-cluster
              organization: example-org
              region: example-region
          - name: alerts.enabled
            value: true
          - name: alerts.alertmanagers.hosts
            value:
              - alertmanager.dns.example.com
          - name: alerts.alertmanagers.tlsConfig.cert
            valueFrom:
              secret:
                key: tls.crt
                name: tls-<org-name>-prometheus-auth
          - name: alerts.alertmanagers.tlsConfig.key
            valueFrom:
              secret:
                key: tls.key
                name: tls-<org-name>-prometheus-auth
      

      Deploy Prometheus only

      Example Plugin to deploy Prometheus with the kube-monitoring Plugin.

      NOTE: If you are using kube-monitoring for the first time in your cluster, it is necessary to set kubeMonitoring.prometheusOperator.enabled to true.

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: example-prometheus-name
      spec:
        pluginDefinition: kube-monitoring
        disabled: false
        optionValues:
          - name: kubeMonitoring.defaultRules.create
            value: false
          - name: kubeMonitoring.kubernetesServiceMonitors.enabled
            value: false
          - name: kubeMonitoring.prometheusOperator.enabled
            value: false
          - name: kubeMonitoring.kubeStateMetrics.enabled
            value: false
          - name: kubeMonitoring.nodeExporter.enabled
            value: false
          - name: kubeMonitoring.prometheus.prometheusSpec.retention
            value: 30d
          - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
            value: 100Gi
          - name: kubeMonitoring.prometheus.service.labels
            value:
              greenhouse.sap/expose: "true"
          - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
            value:
              cluster: example-cluster
              organization: example-org
              region: example-region
          - name: alerts.enabled
            value: true
          - name: alerts.alertmanagers.hosts
            value:
              - alertmanager.dns.example.com
          - name: alerts.alertmanagers.tlsConfig.cert
            valueFrom:
              secret:
                key: tls.crt
                name: tls-<org-name>-prometheus-auth
          - name: alerts.alertmanagers.tlsConfig.key
            valueFrom:
              secret:
                key: tls.key
                name: tls-<org-name>-prometheus-auth
      

      Extension of the plugin

      kube-monitoring can be extended with your own Prometheus alerting rules and target configurations via the Custom Resource Definitions (CRDs) of the Prometheus operator. User-defined resources are picked up and incorporated into the desired configuration via label selectors.

      The CRD PrometheusRule enables the definition of alerting and recording rules that can be used by Prometheus or Thanos Rule instances. Alerts and recording rules are reconciled and dynamically loaded by the operator without having to restart Prometheus or Thanos Rule.

      kube-monitoring Prometheus will automatically discover and load the rules that match labels plugin: <plugin-name>.

      Example:

      apiVersion: monitoring.coreos.com/v1
      kind: PrometheusRule
      metadata:
        name: example-prometheus-rule
        labels:
          plugin: <metadata.name>
          ## e.g. plugin: kube-monitoring
      spec:
        groups:
          - name: example-group
            rules:
            ...
      

      The CRDs PodMonitor, ServiceMonitor, Probe and ScrapeConfig allow the definition of a set of target endpoints to be scraped by Prometheus. The operator will automatically discover and load the configurations that match labels plugin: <plugin-name>.

      Example:

      apiVersion: monitoring.coreos.com/v1
      kind: PodMonitor
      metadata:
        name: example-pod-monitor
        labels:
          plugin: <metadata.name>
          ## e.g. plugin: kube-monitoring
      spec:
        selector:
          matchLabels:
            app: example-app
        namespaceSelector:
          matchNames:
            - example-namespace
        podMetricsEndpoints:
          - port: http
        ...
      

      2.10 - Logs Plugin

      Learn more about the Logs Plugin. Use it to enable the ingestion, collection and export of telemetry signals (logs and metrics) for your Greenhouse cluster.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      OpenTelemetry is an observability framework and toolkit for creating and managing telemetry data such as metrics, logs and traces. Unlike other observability tools, OpenTelemetry is vendor and tool agnostic, meaning it can be used with a variety of observability backends, including open source tools such as OpenSearch and Prometheus.

      The focus of the Plugin is to provide easy-to-use configurations for common use cases of receiving, processing and exporting telemetry data in Kubernetes. The storage and visualization of the same is intentionally left to other tools.

      Components included in this Plugin:

      Architecture

      OpenTelemetry Architecture

      Note

      More configuration options will be added over time, and contributions of your own configurations are highly appreciated. If you discover bugs or want to add functionality to the Plugin, feel free to create a pull request.

      Quick Start

      This guide provides a quick and straightforward way to use OpenTelemetry for Logs as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
      • For logs, an OpenSearch instance to store them in. If you don’t have one, reach out to your observability team to get access to one.
      • We recommend running cert-manager in the cluster before installing the Logs Plugin.
      • To gather metrics, you must have a Prometheus instance in the onboarded cluster for storage and for managing Prometheus-specific CRDs. If you do not have an instance, install the kube-monitoring Plugin first.

      Step 1:

      You can install the Logs package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:

      1. Go to the Greenhouse dashboard and select the Logs Plugin from the catalog. Specify the cluster and required option values.
      2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

      Step 2:

      The package will deploy the OpenTelemetry Operator, which acts as a manager for the collectors and for auto-instrumentation of the workload. By default, the package includes a configuration for collecting metrics and logs. The log collector currently processes data from the following preconfigured receivers:

      • Files via the Filelog Receiver
      • Kubernetes Events from the Kubernetes API server
      • Journald events from systemd journal
      • Its own metrics

      You can disable the collection of logs by setting openTelemetry.logsCollector.enabled to false. The same is true for disabling the collection of metrics by setting openTelemetry.metricsCollector.enabled to false. The logsCollector comes with a standard set of log-processing steps, such as adding cluster information and common labels for Journald events. In addition, we provide default pipelines for common log types. Currently, the following log types have default configurations that can be enabled (this requires openTelemetry.logsCollector.enabled to be set to true):

      1. KVM: openTelemetry.logsCollector.kvmConfig: Logs from Kernel-based Virtual Machines (KVMs) providing insights into virtualization activities, resource usage, and system performance
      2. Ceph: openTelemetry.logsCollector.cephConfig: Logs from Ceph storage systems, capturing information about cluster operations, performance metrics, and health status

      These default configurations provide common labels and Grok parsing for logs emitted through the respective services.
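
      A minimal sketch of enabling these pipelines via Plugin option values (the Plugin name and the PluginDefinition name logs are placeholders):

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: logs
      spec:
        pluginDefinition: logs
        disabled: false
        optionValues:
          - name: openTelemetry.logsCollector.enabled
            value: true
          - name: openTelemetry.logsCollector.kvmConfig.enabled
            value: true
          - name: openTelemetry.logsCollector.cephConfig.enabled
            value: true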

      Based on the backend selection, the telemetry data will be exported to the corresponding backend.

      Step 3:

      Greenhouse regularly performs integration tests that are bundled with the Logs Plugin. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the Plugin status and also in the Greenhouse dashboard.

      Failover Connector

      The Logs Plugin comes with a Failover Connector for OpenSearch for two users. The connector periodically tries to establish a stable connection for the preferred user (failover_username_a); if that attempt fails, it tries to establish a connection with the fallback user (failover_username_b). This feature can be used to secure the shipping of logs in case of expiring credentials or password rotation.
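
      A sketch of wiring both sets of credentials into the Plugin via optionValues; the secret name and keys are placeholders and need to match the secret you created in your organization namespace:

      optionValues:
        - name: openTelemetry.openSearchLogs.failover_username_a
          valueFrom:
            secret:
              key: username_a
              name: opensearch-logs-credentials
        - name: openTelemetry.openSearchLogs.failover_password_a
          valueFrom:
            secret:
              key: password_a
              name: opensearch-logs-credentials
        - name: openTelemetry.openSearchLogs.failover_username_b
          valueFrom:
            secret:
              key: username_b
              name: opensearch-logs-credentials
        - name: openTelemetry.openSearchLogs.failover_password_b
          valueFrom:
            secret:
              key: password_b
              name: opensearch-logs-credentials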

      Values

      Key | Type | Default | Description
      commonLabels | object | {} | common labels to apply to all resources.
      openTelemetry.cluster | string | nil | Cluster label for Logging
      openTelemetry.customLabels | object | {} | custom Labels applied to servicemonitor, secrets and collectors
      openTelemetry.logsCollector.cephConfig | object | {"enabled":false} | Activates the configuration for Ceph logs (requires logsCollector to be enabled).
      openTelemetry.logsCollector.enabled | bool | true | Activates the standard configuration for Logs.
      openTelemetry.logsCollector.failover | object | {"enabled":true} | Activates the failover mechanism for shipping logs using the failover_username_b and failover_password_b credentials in case the credentials failover_username_a and failover_password_a have expired.
      openTelemetry.logsCollector.kvmConfig | object | {"enabled":false} | Activates the configuration for KVM logs (requires logsCollector to be enabled).
      openTelemetry.metricsCollector | object | {"enabled":false} | Activates the standard configuration for metrics.
      openTelemetry.openSearchLogs.endpoint | string | nil | Endpoint URL for OpenSearch
      openTelemetry.openSearchLogs.failover_password_a | string | nil | Password for OpenSearch endpoint
      openTelemetry.openSearchLogs.failover_password_b | string | nil | Second Password (as a failover) for OpenSearch endpoint
      openTelemetry.openSearchLogs.failover_username_a | string | nil | Username for OpenSearch endpoint
      openTelemetry.openSearchLogs.failover_username_b | string | nil | Second Username (as a failover) for OpenSearch endpoint
      openTelemetry.openSearchLogs.index | string | nil | Name for OpenSearch index
      openTelemetry.prometheus.additionalLabels | object | {} | Label selectors for the Prometheus resources to be picked up by prometheus-operator.
      openTelemetry.prometheus.podMonitor | object | {"enabled":true} | Activates the pod-monitoring for the Logs Collector.
      openTelemetry.prometheus.rules | object | {"additionalRuleLabels":null,"annotations":{},"create":true,"disabled":[],"labels":{}} | Default rules for monitoring the opentelemetry components.
      openTelemetry.prometheus.rules.additionalRuleLabels | string | nil | Additional labels for PrometheusRule alerts.
      openTelemetry.prometheus.rules.annotations | object | {} | Annotations for PrometheusRules.
      openTelemetry.prometheus.rules.create | bool | true | Enables PrometheusRule resources to be created.
      openTelemetry.prometheus.rules.disabled | list | [] | PrometheusRules to disable.
      openTelemetry.prometheus.rules.labels | object | {} | Labels for PrometheusRules.
      openTelemetry.prometheus.serviceMonitor | object | {"enabled":true} | Activates the service-monitoring for the Logs Collector.
      openTelemetry.region | string | nil | Region label for Logging
      opentelemetry-operator.admissionWebhooks.autoGenerateCert | object | {"recreate":false} | Activate to use Helm to create self-signed certificates.
      opentelemetry-operator.admissionWebhooks.autoGenerateCert.recreate | bool | false | Activate to recreate the cert after a defined period (certPeriodDays default is 365).
      opentelemetry-operator.admissionWebhooks.certManager | object | {"enabled":false} | Activate to use the CertManager for generating self-signed certificates.
      opentelemetry-operator.admissionWebhooks.failurePolicy | string | "Ignore" | Defines if the admission webhooks should Ignore errors or Fail on errors when communicating with the API server.
      opentelemetry-operator.crds.create | bool | false | The required CRDs used by this dependency are version-controlled in this repository under ./crds. If you want to use the upstream CRDs, set this variable to true.
      opentelemetry-operator.kubeRBACProxy | object | {"enabled":false} | the kubeRBACProxy can be enabled to allow the operator to perform RBAC authorization against the Kubernetes API.
      opentelemetry-operator.manager.collectorImage.repository | string | "ghcr.io/cloudoperators/opentelemetry-collector-contrib" | overrides the default image repository for the OpenTelemetry Collector image.
      opentelemetry-operator.manager.collectorImage.tag | string | "5a4f148" | overrides the default image tag for the OpenTelemetry Collector image.
      opentelemetry-operator.manager.image.repository | string | "ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator" | overrides the default image repository for the OpenTelemetry Operator image.
      opentelemetry-operator.manager.image.tag | string | "v0.127.0" | overrides the default image tag for the OpenTelemetry Operator image.
      opentelemetry-operator.manager.serviceMonitor | object | {"enabled":true} | Enable serviceMonitor for Prometheus metrics scrape
      opentelemetry-operator.nameOverride | string | "operator" | Provide a name in place of the default name opentelemetry-operator.
      testFramework.enabled | bool | true | Activates the Helm chart testing framework.
      testFramework.image.registry | string | "ghcr.io" | Defines the image registry for the test framework.
      testFramework.image.repository | string | "cloudoperators/greenhouse-extensions-integration-test" | Defines the image repository for the test framework.
      testFramework.image.tag | string | "main" | Defines the image tag for the test framework.
      testFramework.imagePullPolicy | string | "IfNotPresent" | Defines the image pull policy for the test framework.

      Examples

      TBD

      2.11 - Logshipper

      This Plugin is intended for shipping container and systemd logs to an Elasticsearch/OpenSearch cluster. It uses fluent-bit to collect logs. The default configuration can be found under chart/templates/fluent-bit-configmap.yaml.

      Components included in this Plugin:

      Owner

      1. @ivogoman

      Parameters

      Name | Description | Value
      fluent-bit.parser | Parser used for container logs. [docker|cri] labels | "cri"
      fluent-bit.backend.opensearch.host | Host for the Elastic/OpenSearch HTTP Input
      fluent-bit.backend.opensearch.port | Port for the Elastic/OpenSearch HTTP Input
      fluent-bit.backend.opensearch.http_user | Username for the Elastic/OpenSearch HTTP Input
      fluent-bit.backend.opensearch.http_password | Password for the Elastic/OpenSearch HTTP Input
      fluent-bit.filter.additionalValues | list of Key-Value pairs to label logs | []
      fluent-bit.customConfig.inputs | multi-line string containing additional inputs
      fluent-bit.customConfig.filters | multi-line string containing additional filters
      fluent-bit.customConfig.outputs | multi-line string containing additional outputs

      Custom Configuration

      To add custom configuration to the fluent-bit configuration, please check the fluent-bit documentation here. The fluent-bit.customConfig.inputs, fluent-bit.customConfig.filters and fluent-bit.customConfig.outputs parameters can be used to add custom configuration to the default configuration. The configuration should be added as a multi-line string. Inputs are rendered after the default inputs; filters are rendered after the default filters and before the additional values are added. Outputs are rendered after the default outputs. The additional values are added to all logs regardless of the source.

      Example Input configuration:

      fluent-bit:
        config:
          inputs: |
            [INPUT]
                Name             tail-audit
                Path             /var/log/containers/greenhouse-controller*.log
                Parser           {{ default "cri" ( index .Values "fluent-bit" "parser" ) }}
                Tag              audit.*
                Refresh_Interval 5
                Mem_Buf_Limit    50MB
                Skip_Long_Lines  Off
                Ignore_Older     1m
                DB               /var/log/fluent-bit-tail-audit.pos.db
      

      Logs collected by the default configuration are prefixed with default_. In case logs from additional inputs are to be sent and processed by the same filters and outputs, the prefix should be used as well.

      In case additional secrets are required, the fluent-bit.env field can be used to add them to the environment of the fluent-bit container. The secrets should be created by adding them to the fluent-bit.backend field.

      fluent-bit:
        backend:
          audit:
            http_user: top-secret-audit
            http_password: top-secret-audit
            host: "audit.test"
            tls:
              enabled: true
              verify: true
              debug: false
      
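
      If a credential should additionally be exposed to the fluent-bit container as an environment variable, a hypothetical fluent-bit.env entry could look like this. The variable name, secret name and key are assumptions; adjust them to the secrets created from your fluent-bit.backend configuration.

      fluent-bit:
        env:
          - name: AUDIT_HTTP_PASSWORD
            valueFrom:
              secretKeyRef:
                name: logshipper-audit-credentials
                key: http_password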

      2.12 - OpenSearch

      OpenSearch Plugin

      The OpenSearch plugin sets up an OpenSearch environment using the OpenSearch Operator, automating deployment, provisioning, management, and orchestration of OpenSearch clusters and dashboards. It functions as the backend for logs gathered by collectors such as OpenTelemetry collectors, enabling storage and visualization of logs for Greenhouse-onboarded Kubernetes clusters.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      OpenSearch is a distributed search and analytics engine designed for real-time log and event data analysis. The OpenSearch Operator simplifies the management of OpenSearch clusters by providing declarative APIs for configuration and scaling.

      Components included in this Plugin:

      • OpenSearch Operator
      • OpenSearch Cluster Management
      • OpenSearch Dashboards Deployment
      • OpenSearch Index Management
      • OpenSearch Security Configuration

      Architecture

      OpenSearch Architecture

      The OpenSearch Operator automates the management of OpenSearch clusters within a Kubernetes environment. The architecture consists of:

      • OpenSearchCluster CRD: Defines the structure and configuration of OpenSearch clusters, including node roles, scaling policies, and version management.
      • OpenSearchDashboards CRD: Manages OpenSearch Dashboards deployments, ensuring high availability and automatic upgrades.
      • OpenSearchISMPolicy CRD: Implements index lifecycle management, defining policies for retention, rollover, and deletion.
      • OpenSearchIndexTemplate CRD: Enables the definition of index mappings, settings, and template structures.
      • Security Configuration via OpenSearchRole and OpenSearchUser: Manages authentication and authorization for OpenSearch users and roles.

      Note

      More configurations will be added over time, and contributions of custom configurations are highly appreciated. If you discover bugs or want to add functionality to the plugin, feel free to create a pull request.

      Quick Start

      This guide provides a quick and straightforward way to use OpenSearch as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
      • The OpenSearch Operator installed via Helm or Kubernetes manifests.
      • An OpenTelemetry or similar log ingestion pipeline configured to send logs to OpenSearch.

      Installation

      Install via Greenhouse

      1. Navigate to the Greenhouse Dashboard.
      2. Select the OpenSearch plugin from the catalog.
      3. Specify the target cluster and configuration options.
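
      Alternatively, a Plugin resource can be created directly in your Greenhouse central cluster. A minimal sketch, assuming the PluginDefinition is named opensearch (the Plugin name and option values are placeholders):

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: opensearch
      spec:
        pluginDefinition: opensearch
        disabled: false
        optionValues:
          - name: cluster.cluster.general.version
            value: "2.19.1"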

      Values

      KeyTypeDefaultDescription
      certManager.defaults.durations.castring"8760h"Validity period for CA certificates (1 year)
      certManager.defaults.durations.leafstring"4800h"Validity period for leaf certificates (200 days to comply with CA/B Forum baseline requirements)
      certManager.defaults.privateKey.algorithmstring"RSA"Algorithm used for generating private keys
      certManager.defaults.privateKey.encodingstring"PKCS8"Encoding format for private keys (PKCS8 recommended)
      certManager.defaults.privateKey.sizeint2048Key size in bits for RSA keys
      certManager.defaults.usageslist["digital signature","key encipherment","server auth","client auth"]List of extended key usages for certificates
      certManager.enablebooltrueEnable cert-manager integration for issuing TLS certificates
      certManager.httpDnsNameslist["opensearch-client.tld"]Override HTTP DNS names for OpenSearch client endpoints
      certManager.issuer.caobject{"name":"opensearch-ca-issuer"}Name of the CA Issuer to be used for internal certs
      certManager.issuer.digicertobject{"group":"certmanager.cloud.sap","kind":"DigicertIssuer","name":"digicert-issuer"}API group for the DigicertIssuer custom resource
      certManager.issuer.selfSignedobject{"name":"opensearch-issuer"}Name of the self-signed issuer used to sign the internal CA certificate
      cluster.actionGroupslist[]List of OpensearchActionGroup. Check values.yaml file for examples.
      cluster.cluster.annotationsobject{}OpenSearchCluster annotations
      cluster.cluster.bootstrap.additionalConfigobject{}bootstrap additional configuration, key-value pairs that will be added to the opensearch.yml configuration
      cluster.cluster.bootstrap.affinityobject{}bootstrap pod affinity rules
      cluster.cluster.bootstrap.jvmstring""bootstrap pod jvm options. If jvm is not provided then the java heap size will be set to half of resources.requests.memory which is the recommended value for data nodes. If jvm is not provided and resources.requests.memory does not exist then value will be -Xmx512M -Xms512M
      cluster.cluster.bootstrap.nodeSelectorobject{}bootstrap pod node selectors
      cluster.cluster.bootstrap.resourcesobject{}bootstrap pod cpu and memory resources
      cluster.cluster.bootstrap.tolerationslist[]bootstrap pod tolerations
      cluster.cluster.client.service.annotationsobject{}Annotations to add to the service, e.g. disco.
      cluster.cluster.client.service.enabledbooltrueEnable or disable the external client service.
      cluster.cluster.client.service.externalIPslist[]List of external IPs to expose the service on.
      cluster.cluster.client.service.loadBalancerSourceRangeslist[]List of allowed IP ranges for external access when service type is LoadBalancer.
      cluster.cluster.client.service.portslist[{"name":"http","port":9200,"protocol":"TCP","targetPort":9200}]Ports to expose for the client service.
      cluster.cluster.client.service.typestring"ClusterIP"Kubernetes service type. Defaults to ClusterIP, but should be set to LoadBalancer to expose OpenSearch client nodes externally.
      cluster.cluster.confMgmt.smartScalerbooltrueEnable nodes to be safely removed from the cluster
      cluster.cluster.dashboards.additionalConfigobject{}Additional properties for opensearch_dashboards.yaml
      cluster.cluster.dashboards.affinityobject{}dashboards pod affinity rules
      cluster.cluster.dashboards.annotationsobject{}dashboards annotations
      cluster.cluster.dashboards.basePathstring""dashboards Base Path for Opensearch Clusters running behind a reverse proxy
      cluster.cluster.dashboards.enablebooltrueEnable dashboards deployment
      cluster.cluster.dashboards.envlist[]dashboards pod env variables
      cluster.cluster.dashboards.imagestring"docker.io/opensearchproject/opensearch-dashboards"dashboards image
      cluster.cluster.dashboards.imagePullPolicystring"IfNotPresent"dashboards image pull policy
      cluster.cluster.dashboards.imagePullSecretslist[]dashboards image pull secrets
      cluster.cluster.dashboards.labelsobject{}dashboards labels
      cluster.cluster.dashboards.nodeSelectorobject{}dashboards pod node selectors
      cluster.cluster.dashboards.opensearchCredentialsSecretobject{}Secret that contains fields username and password for dashboards to use to login to opensearch, must only be supplied if a custom securityconfig is provided
      cluster.cluster.dashboards.pluginsListlist[]List of dashboards plugins to install
      cluster.cluster.dashboards.podSecurityContextobject{}dashboards pod security context configuration
      cluster.cluster.dashboards.replicasint1number of dashboards replicas
      cluster.cluster.dashboards.resourcesobject{}dashboards pod cpu and memory resources
      cluster.cluster.dashboards.securityContextobject{}dashboards security context configuration
      cluster.cluster.dashboards.service.loadBalancerSourceRangeslist[]source ranges for a loadbalancer
      cluster.cluster.dashboards.service.typestring"ClusterIP"dashboards service type
      cluster.cluster.dashboards.tls.caSecretobject{"name":"opensearch-ca-cert"}Secret that contains the ca certificate as ca.crt. If this and generate=true is set the existing CA cert from that secret is used to generate the node certs. In this case must contain ca.crt and ca.key fields
      cluster.cluster.dashboards.tls.enableboolfalseEnable HTTPS for dashboards
      cluster.cluster.dashboards.tls.generateboolfalsegenerate certificate, if false secret must be provided
      cluster.cluster.dashboards.tls.secretobject{"name":"opensearch-http-cert"}Optional, name of a TLS secret that contains ca.crt, tls.key and tls.crt data. If ca.crt is in a different secret provide it via the caSecret field
      cluster.cluster.dashboards.tolerationslist[]dashboards pod tolerations
      cluster.cluster.dashboards.versionstring"2.19.1"dashboards version
      cluster.cluster.general.additionalConfigobject{}Extra items to add to the opensearch.yml
      cluster.cluster.general.additionalVolumeslist[]Additional volumes to mount to all pods in the cluster. Supported volume types configMap, emptyDir, secret (with default Kubernetes configuration schema)
      cluster.cluster.general.drainDataNodesbooltrueControls whether to drain data nodes on rolling restart operations
      cluster.cluster.general.httpPortint9200Opensearch service http port
      cluster.cluster.general.imagestring"docker.io/opensearchproject/opensearch"Opensearch image
      cluster.cluster.general.imagePullPolicystring"IfNotPresent"Default image pull policy
      cluster.cluster.general.keystorelist[]Populate opensearch keystore before startup
      cluster.cluster.general.monitoring.additionalRuleLabelsobject{}PrometheusRule labels
      cluster.cluster.general.monitoring.enablebooltrueEnable cluster monitoring
      cluster.cluster.general.monitoring.labelsobject{}ServiceMonitor labels
      cluster.cluster.general.monitoring.monitoringUserSecretstring""Secret with ‘username’ and ‘password’ keys for monitoring user. You could also use OpenSearchUser CRD instead of setting it.
      cluster.cluster.general.monitoring.pluginUrlstring"https://github.com/Virtimo/prometheus-exporter-plugin-for-opensearch/releases/download/v2.19.1/prometheus-exporter-2.19.1.0.zip"Custom URL for the monitoring plugin
      cluster.cluster.general.monitoring.scrapeIntervalstring"30s"How often to scrape metrics
      cluster.cluster.general.monitoring.tlsConfigobject{"insecureSkipVerify":true}Override the tlsConfig of the generated ServiceMonitor
      cluster.cluster.general.pluginsListlist[]List of Opensearch plugins to install
      cluster.cluster.general.podSecurityContextobject{}Opensearch pod security context configuration
      cluster.cluster.general.securityContextobject{}Opensearch securityContext
      cluster.cluster.general.serviceAccountstring""Opensearch serviceAccount name. If Service Account doesn’t exist it could be created by setting serviceAccount.create and serviceAccount.name
      cluster.cluster.general.serviceNamestring""Opensearch service name
      cluster.cluster.general.setVMMaxMapCountbooltrueEnable setVMMaxMapCount. OpenSearch requires the Linux kernel vm.max_map_count option to be set to at least 262144
      cluster.cluster.general.snapshotRepositorieslist[]Opensearch snapshot repositories configuration
      cluster.cluster.general.vendorstring"Opensearch"
      cluster.cluster.general.versionstring"2.19.1"Opensearch version
      cluster.cluster.ingress.dashboards.annotationsobject{}dashboards ingress annotations
      cluster.cluster.ingress.dashboards.classNamestring""Ingress class name
      cluster.cluster.ingress.dashboards.enabledboolfalseEnable ingress for dashboards service
      cluster.cluster.ingress.dashboards.hostslist[]Ingress hostnames
      cluster.cluster.ingress.dashboards.tlslist[]Ingress tls configuration
      cluster.cluster.ingress.opensearch.annotationsobject{}Opensearch ingress annotations
      cluster.cluster.ingress.opensearch.classNamestring""Opensearch Ingress class name
      cluster.cluster.ingress.opensearch.enabledboolfalseEnable ingress for Opensearch service
      cluster.cluster.ingress.opensearch.hostslist[]Opensearch Ingress hostnames
      cluster.cluster.ingress.opensearch.tlslist[]Opensearch tls configuration
      cluster.cluster.initHelper.imagePullPolicystring"IfNotPresent"initHelper image pull policy
      cluster.cluster.initHelper.imagePullSecretslist[]initHelper image pull secret
      cluster.cluster.initHelper.resourcesobject{}initHelper pod cpu and memory resources
      cluster.cluster.initHelper.versionstring"1.36"initHelper version
      cluster.cluster.labelsobject{}OpenSearchCluster labels
      cluster.cluster.namestring""OpenSearchCluster name, by default release name is used
      cluster.cluster.nodePoolslist[{"component":"main","diskSize":"30Gi","replicas":3,"resources":{"limits":{"cpu":1,"memory":"2Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"roles":["cluster_manager"]},{"component":"data","diskSize":"30Gi","replicas":3,"resources":{"limits":{"cpu":2,"memory":"4Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"roles":["data"]},{"component":"client","diskSize":"30Gi","replicas":1,"resources":{"limits":{"cpu":1,"memory":"2Gi"},"requests":{"cpu":"500m","memory":"1Gi"}},"roles":["client"]}]Opensearch nodes configuration
      cluster.cluster.security.config.adminCredentialsSecretobject{"name":"admin-credentials"}Secret that contains fields username and password to be used by the operator to access the opensearch cluster for node draining. Must be set if custom securityconfig is provided.
      cluster.cluster.security.config.adminSecretobject{"name":"opensearch-admin-cert"}TLS Secret that contains a client certificate (tls.key, tls.crt, ca.crt) with admin rights in the opensearch cluster. Must be set if transport certificates are provided by user and not generated
      cluster.cluster.security.config.securityConfigSecretobject{"name":"opensearch-security-config"}Secret that contains the different yml files of the opensearch-security config (config.yml, internal_users.yml, etc)
      cluster.cluster.security.tls.http.caSecretobject{"name":"opensearch-http-cert"}Optional, secret that contains the ca certificate as ca.crt. If this and generate=true is set the existing CA cert from that secret is used to generate the node certs. In this case must contain ca.crt and ca.key fields
      cluster.cluster.security.tls.http.generateboolfalseIf set to true the operator will generate a CA and certificates for the cluster to use, if false - secrets with existing certificates must be supplied
      cluster.cluster.security.tls.http.secretobject{"name":"opensearch-http-cert"}Optional, name of a TLS secret that contains ca.crt, tls.key and tls.crt data. If ca.crt is in a different secret provide it via the caSecret field
      cluster.cluster.security.tls.transport.adminDnlist["CN=admin"]DNs of certificates that should have admin access, mainly used for securityconfig updates via securityadmin.sh, only used when existing certificates are provided
      cluster.cluster.security.tls.transport.caSecretobject{"name":"opensearch-ca-cert"}Optional, secret that contains the ca certificate as ca.crt. If this and generate=true is set the existing CA cert from that secret is used to generate the node certs. In this case must contain ca.crt and ca.key fields
      cluster.cluster.security.tls.transport.generateboolfalseIf set to true the operator will generate a CA and certificates for the cluster to use, if false secrets with existing certificates must be supplied
      cluster.cluster.security.tls.transport.nodesDnlist["CN=opensearch-transport"]Allowed Certificate DNs for nodes, only used when existing certificates are provided
      cluster.cluster.security.tls.transport.perNodeboolfalseSeparate certificate per node
      cluster.cluster.security.tls.transport.secretobject{"name":"opensearch-transport-cert"}Optional, name of a TLS secret that contains ca.crt, tls.key and tls.crt data. If ca.crt is in a different secret provide it via the caSecret field
      cluster.componentTemplateslist[{"_meta":{"description":"Enable full dynamic mapping for all attributes.* keys"},"allowAutoCreate":true,"name":"logs-attributes-dynamic","templateSpec":{"mappings":{"properties":{"attributes":{"dynamic":true,"type":"object"}}}},"version":1}]List of OpensearchComponentTemplate. Check values.yaml file for examples.
      cluster.fullnameOverridestring""
      cluster.indexTemplateslist[]List of OpensearchIndexTemplate. Check values.yaml file for examples.
      cluster.indexTemplatesWorkAroundlist[{"composedOf":["logs-attributes-dynamic"],"dataStream":{"timestamp_field":{"name":"@timestamp"}},"indexPatterns":["logs*"],"name":"logs-index-template","priority":100,"templateSpec":{"mappings":{"properties":{"@timestamp":{"type":"date"},"message":{"type":"text"}}},"settings":{"index":{"number_of_replicas":1,"number_of_shards":1,"refresh_interval":"1s"}}}}]List of OpensearchIndexTemplate. Check values.yaml file for examples.
      cluster.ismPolicieslist[{"defaultState":"hot","description":"Policy to rollover logs after 7d, 30GB or 50M docs and delete after 30d","ismTemplate":{"indexPatterns":["logs*"],"priority":100},"name":"logs-rollover-policy","states":[{"actions":[{"rollover":{"minDocCount":50000000,"minIndexAge":"7d","minSize":"30gb"}}],"name":"hot","transitions":[{"conditions":{"minIndexAge":"30d"},"stateName":"delete"}]},{"actions":[{"delete":{}}],"name":"delete","transitions":[]}]}]List of OpenSearchISMPolicy. Check values.yaml file for examples.
      cluster.nameOverridestring""
      cluster.roleslist[{"clusterPermissions":["cluster_monitor","cluster_composite_ops","cluster:admin/ingest/pipeline/put","cluster:admin/ingest/pipeline/get","indices:admin/template/get","cluster_manage_index_templates"],"indexPermissions":[{"allowedActions":["indices:admin/template/get","indices:admin/template/put","indices:admin/mapping/put","indices:admin/create","indices:data/write/bulk*","indices:data/write/index","indices:data/read*","indices:monitor*","indices_all"],"indexPatterns":["logs*"]}],"name":"logs-write-role"},{"clusterPermissions":["read","cluster:monitor/nodes/stats","cluster:admin/opensearch/ql/datasources/read","cluster:monitor/task/get","cluster:admin/opendistro/reports/definition/create","cluster:admin/opendistro/reports/definition/update","cluster:admin/opendistro/reports/definition/on_demand","cluster:admin/opendistro/reports/definition/delete","cluster:admin/opendistro/reports/definition/get","cluster:admin/opendistro/reports/definition/list","cluster:admin/opendistro/reports/instance/list","cluster:admin/opendistro/reports/instance/get","cluster:admin/opendistro/reports/menu/download","cluster:admin/opensearch/ppl"],"indexPermissions":[{"allowedActions":["search","read","get","indices:monitor/settings/get","indices:admin/create","indices:admin/mappings/get","indices:data/read/search*","indices:admin/get"],"indexPatterns":["*"]}],"name":"logs-read-role"},{"clusterPermissions":["*"],"indexPermissions":[{"allowedActions":["*"],"indexPatterns":["*"]}],"name":"admin-role"}]List of OpensearchRole. Check values.yaml file for examples.
      cluster.serviceAccount.annotationsobject{}Service Account annotations
      cluster.serviceAccount.createboolfalseCreate Service Account
      cluster.serviceAccount.namestring""Service Account name. Set general.serviceAccount to use this Service Account for the Opensearch cluster
      cluster.tenantslist[]List of additional tenants. Check values.yaml file for examples.
      cluster.userslist[{"backendRoles":[],"name":"logs","secretKey":"password","secretName":"logs-credentials"},{"backendRoles":[],"name":"logs2","secretKey":"password","secretName":"logs2-credentials"}]List of OpenSearch user configurations. Each user references a secret (defined in usersCredentials) for authentication.
      cluster.usersCredentialsobject{"admin":{"hash":"","password":"","username":""},"logs":{"password":"","username":""},"logs2":{"password":"","username":""}}List of OpenSearch user credentials. These credentials are used for authenticating users with OpenSearch.
      cluster.usersRoleBindinglist[{"name":"logs-write","roles":["logs-write-role"],"users":["logs","logs2"]},{"backendRoles":[],"name":"logs-read","roles":["logs-read-role"]},{"backendRoles":[],"name":"admin","roles":["admin-role"]}]Allows to link any number of users, backend roles and roles with a OpensearchUserRoleBinding. Each user in the binding will be granted each role Check values.yaml file for examples.
      operator.fullnameOverridestring"opensearch-operator"
      operator.installCRDsboolfalse
      operator.kubeRbacProxy.enablebooltrue
      operator.kubeRbacProxy.image.repositorystring"quay.io/brancz/kube-rbac-proxy"
      operator.kubeRbacProxy.image.tagstring"v0.19.1"
      operator.kubeRbacProxy.livenessProbe.failureThresholdint3
      operator.kubeRbacProxy.livenessProbe.httpGet.pathstring"/healthz"
      operator.kubeRbacProxy.livenessProbe.httpGet.portint10443
      operator.kubeRbacProxy.livenessProbe.httpGet.schemestring"HTTPS"
      operator.kubeRbacProxy.livenessProbe.initialDelaySecondsint10
      operator.kubeRbacProxy.livenessProbe.periodSecondsint15
      operator.kubeRbacProxy.livenessProbe.successThresholdint1
      operator.kubeRbacProxy.livenessProbe.timeoutSecondsint3
      operator.kubeRbacProxy.readinessProbe.failureThresholdint3
      operator.kubeRbacProxy.readinessProbe.httpGet.pathstring"/healthz"
      operator.kubeRbacProxy.readinessProbe.httpGet.portint10443
      operator.kubeRbacProxy.readinessProbe.httpGet.schemestring"HTTPS"
      operator.kubeRbacProxy.readinessProbe.initialDelaySecondsint10
      operator.kubeRbacProxy.readinessProbe.periodSecondsint15
      operator.kubeRbacProxy.readinessProbe.successThresholdint1
      operator.kubeRbacProxy.readinessProbe.timeoutSecondsint3
      operator.kubeRbacProxy.resources.limits.cpustring"50m"
      operator.kubeRbacProxy.resources.limits.memorystring"50Mi"
      operator.kubeRbacProxy.resources.requests.cpustring"25m"
      operator.kubeRbacProxy.resources.requests.memorystring"25Mi"
      operator.kubeRbacProxy.securityContext.allowPrivilegeEscalationboolfalse
      operator.kubeRbacProxy.securityContext.capabilities.drop[0]string"ALL"
      operator.kubeRbacProxy.securityContext.readOnlyRootFilesystembooltrue
      operator.manager.dnsBasestring"cluster.local"
      operator.manager.extraEnvlist[]
      operator.manager.image.pullPolicystring"Always"
      operator.manager.image.repositorystring"opensearchproject/opensearch-operator"
      operator.manager.image.tagstring""
      operator.manager.imagePullSecretslist[]
      operator.manager.livenessProbe.failureThresholdint3
      operator.manager.livenessProbe.httpGet.pathstring"/healthz"
      operator.manager.livenessProbe.httpGet.portint8081
      operator.manager.livenessProbe.initialDelaySecondsint10
      operator.manager.livenessProbe.periodSecondsint15
      operator.manager.livenessProbe.successThresholdint1
      operator.manager.livenessProbe.timeoutSecondsint3
      operator.manager.loglevelstring"debug"
      operator.manager.parallelRecoveryEnabledbooltrue
      operator.manager.pprofEndpointsEnabledboolfalse
      operator.manager.readinessProbe.failureThresholdint3
      operator.manager.readinessProbe.httpGet.pathstring"/readyz"
      operator.manager.readinessProbe.httpGet.portint8081
      operator.manager.readinessProbe.initialDelaySecondsint10
      operator.manager.readinessProbe.periodSecondsint15
      operator.manager.readinessProbe.successThresholdint1
      operator.manager.readinessProbe.timeoutSecondsint3
      operator.manager.resources.limits.cpustring"200m"
      operator.manager.resources.limits.memorystring"500Mi"
      operator.manager.resources.requests.cpustring"100m"
      operator.manager.resources.requests.memorystring"350Mi"
      operator.manager.securityContext.allowPrivilegeEscalationboolfalse
      operator.manager.watchNamespacestringnil
      operator.nameOverridestring""
      operator.namespacestring""
      operator.nodeSelectorobject{}
      operator.podAnnotationsobject{}
      operator.podLabelsobject{}
      operator.priorityClassNamestring""
      operator.securityContext.runAsNonRootbooltrue
      operator.serviceAccount.createbooltrue
      operator.serviceAccount.namestring"opensearch-operator-controller-manager"
      operator.tolerationslist[]
      operator.useRoleBindingsboolfalse

      Usage

      Once deployed, OpenSearch can be accessed via OpenSearch Dashboards.

      kubectl port-forward svc/opensearch-dashboards 5601:5601
      

      Visit http://localhost:5601 in your browser and log in using the configured credentials.

      Conclusion

      This guide ensures that OpenSearch is fully integrated into the Greenhouse ecosystem, providing scalable log management and visualization. Additional custom configurations can be introduced to meet specific operational needs.

      For troubleshooting and further details, check out the OpenSearch documentation.

      2.13 - Perses

      Learn more about the Perses Plugin. Use it to visualize Prometheus/Thanos metrics for your Greenhouse remote cluster.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      Observability is often required for the operation and automation of service offerings. Perses is a CNCF project that aims to become an open standard for dashboards and visualization. It provides you with tools to display Prometheus metrics on live dashboards with insightful charts and visualizations. In the Greenhouse context, it complements the kube-monitoring plugin, which is automatically recognized by Perses as a data source. In addition, the Plugin provides a mechanism that automates the lifecycle of datasources and dashboards without having to restart Perses.

      Perses Architecture

      Disclaimer

      This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.

      Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

      Quick Start

      This guide provides a quick and straightforward way to use Perses as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-managed Kubernetes remote cluster
      • kube-monitoring Plugin will integrate into Perses automatically with its own datasource
      • thanos Plugin can be enabled alongside kube-monitoring. Perses will then have both datasources (thanos, kube-monitoring) and will default to thanos to provide access to long-term metrics

      The plugin works with anonymous access enabled by default. It comes with some default dashboards, and datasources will be discovered automatically by the plugin.

      Step 1: Add your dashboards and datasources

      Dashboards are selected from ConfigMaps across namespaces. The plugin searches for ConfigMaps with the label perses.dev/resource: "true" and imports them into Perses. The ConfigMap must contain a key like my-dashboard.json with the dashboard JSON content. Please refer to this section for more information.

      A guide on how to create custom dashboards on the UI can be found here.
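
      For illustration, a minimal ConfigMap skeleton carrying a dashboard could look like this. The ConfigMap name, namespace and dashboard JSON are placeholders; the data value should be the JSON of a dashboard, for example one exported from the Perses UI.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-dashboard
        namespace: example-namespace
        labels:
          perses.dev/resource: "true"
      data:
        # Replace the value below with the JSON of a dashboard exported from Perses
        my-dashboard.json: |
          {}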

      Values

      KeyTypeDefaultDescription
      global.commonLabelsobject{}Labels to add to all resources. This can be used to add a support_group or service label to all resources and alerting rules.
      greenhouse.alertLabelsobjectalertLabels: | support_group: "default" meta: ""Labels to add to the PrometheusRules alerts.
      greenhouse.defaultDashboards.enabledbooltrueBy setting this to true, You will get Perses Self-monitoring dashboards
      perses.additionalLabelsobject{}
      perses.annotationsobject{}Statefulset Annotations
      perses.config.annotationsobject{}Annotations for config
      perses.config.api_prefixstring"/perses"
      perses.config.databaseobject{"file":{"extension":"json","folder":"/perses"}}Database config based on data base type
      perses.config.database.fileobject{"extension":"json","folder":"/perses"}file system configs
      perses.config.frontend.important_dashboardslist[]
      perses.config.frontend.informationstring"# Welcome to Perses!\n\n**Perses is now the default visualization plugin** for Greenhouse platform and will replace Plutono for the visualization of Prometheus and Thanos metrics.\n\n## Documentation\n\n- [Perses Official Documentation](https://perses.dev/)\n- [Perses Greenhouse Plugin Guide](https://cloudoperators.github.io/greenhouse/docs/reference/catalog/perses/)\n- [Create a Custom Dashboard](https://cloudoperators.github.io/greenhouse/docs/reference/catalog/perses/#create-a-custom-dashboard)"Information contains markdown content to be displayed on the Perses home page.
      perses.config.provisioningobject{"folders":["/etc/perses/provisioning"]}provisioning config
      perses.config.schemasobject{"datasources_path":"/etc/perses/cue/schemas/datasources","interval":"5m","panels_path":"/etc/perses/cue/schemas/panels","queries_path":"/etc/perses/cue/schemas/queries","variables_path":"/etc/perses/cue/schemas/variables"}Schemas paths
      perses.config.security.cookieobject{"same_site":"lax","secure":false}cookie config
      perses.config.security.enable_authboolfalseEnable Authentication
      perses.config.security.readonlyboolfalseConfigure Perses instance as readonly
      perses.fullnameOverridestring""Override fully qualified app name
      perses.imageobject{"name":"persesdev/perses","pullPolicy":"IfNotPresent","version":""}Image of Perses
      perses.image.namestring"persesdev/perses"Perses image repository and name
      perses.image.pullPolicystring"IfNotPresent"Default image pull policy
      perses.image.versionstring""Overrides the image tag whose default is the chart appVersion.
      perses.ingressobject{"annotations":{},"enabled":false,"hosts":[{"host":"perses.local","paths":[{"path":"/","pathType":"Prefix"}]}],"ingressClassName":"","tls":[]}Configure the ingress resource that allows you to access Perses Frontend ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
      perses.ingress.annotationsobject{}Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. For a full list of possible ingress annotations, please see ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
      perses.ingress.enabledboolfalseEnable ingress controller resource
      perses.ingress.hostslist[{"host":"perses.local","paths":[{"path":"/","pathType":"Prefix"}]}]Default host for the ingress resource
perses.ingress.ingressClassNamestring""IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster. ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
      perses.ingress.tlslist[]Ingress TLS configuration
      perses.livenessProbeobject{"enabled":true,"failureThreshold":5,"initialDelaySeconds":10,"periodSeconds":60,"successThreshold":1,"timeoutSeconds":5}Liveness probe configuration Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
perses.logLevelstring"info"Log level for Perses. Can be set to one of the available options: “panic”, “error”, “warning”, “info”, “debug”, “trace”
      perses.nameOverridestring""Override name of the chart used in Kubernetes object names.
      perses.persistenceobject{"accessModes":["ReadWriteOnce"],"annotations":{},"enabled":false,"labels":{},"securityContext":{"fsGroup":2000},"size":"8Gi"}Persistence parameters
      perses.persistence.accessModeslist["ReadWriteOnce"]PVC Access Modes for data volume
      perses.persistence.annotationsobject{}Annotations for the PVC
perses.persistence.enabledboolfalseIf disabled, it will use an emptyDir volume
      perses.persistence.labelsobject{}Labels for the PVC
      perses.persistence.securityContextobject{"fsGroup":2000}Security context for the PVC when persistence is enabled
      perses.persistence.sizestring"8Gi"PVC Storage Request for data volume
      perses.readinessProbeobject{"enabled":true,"failureThreshold":5,"initialDelaySeconds":5,"periodSeconds":10,"successThreshold":1,"timeoutSeconds":5}Readiness probe configuration Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
      perses.replicasint1Number of pod replicas.
      perses.resourcesobject{}Resource limits & requests. Update according to your own use case as these values might be too low for a typical deployment. ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
      perses.serviceobject{"annotations":{},"labels":{"greenhouse.sap/expose":"true"},"port":8080,"portName":"http","targetPort":8080,"type":"ClusterIP"}Expose the Perses service to be accessed from outside the cluster (LoadBalancer service). or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
      perses.service.annotationsobject{}Annotations to add to the service
perses.service.labelsobject{"greenhouse.sap/expose":"true"}Labels to add to the service
      perses.service.portint8080Service Port
      perses.service.portNamestring"http"Service Port Name
      perses.service.targetPortint8080Perses running port
      perses.service.typestring"ClusterIP"Service Type
      perses.serviceAccountobject{"annotations":{},"create":true,"name":""}Service account for Perses to use.
      perses.serviceAccount.annotationsobject{}Annotations to add to the service account
      perses.serviceAccount.createbooltrueSpecifies whether a service account should be created
      perses.serviceAccount.namestring""The name of the service account to use. If not set and create is true, a name is generated using the fullname template
      perses.serviceMonitor.intervalstring"30s"Interval for the serviceMonitor
      perses.serviceMonitor.labelsobject{}Labels to add to the ServiceMonitor so that Prometheus can discover it. These labels should match the ‘serviceMonitorSelector.matchLabels’ and ruleSelector.matchLabels defined in your Prometheus CR.
      perses.serviceMonitor.selector.matchLabelsobject{}Selector used by the ServiceMonitor to find which Perses service to scrape metrics from. These matchLabels should match the labels on your Perses service.
      perses.serviceMonitor.selfMonitorboolfalseCreate a serviceMonitor for Perses
      perses.sidecarobject{"allNamespaces":true,"enabled":true,"label":"perses.dev/resource","labelValue":"true"}Sidecar configuration that watches for ConfigMaps with the specified label/labelValue and loads them into Perses provisioning
      perses.sidecar.allNamespacesbooltruecheck for configmaps from all namespaces. When set to false, it will only check for configmaps in the same namespace as the Perses instance
      perses.sidecar.enabledbooltrueEnable the sidecar container for ConfigMap provisioning
      perses.sidecar.labelstring"perses.dev/resource"Label key to watch for ConfigMaps containing Perses resources
      perses.sidecar.labelValuestring"true"Label value to watch for ConfigMaps containing Perses resources
      perses.volumeMountslist[]Additional VolumeMounts on the output StatefulSet definition.
      perses.volumeslist[]Additional volumes on the output StatefulSet definition.

      Create a custom dashboard

      1. Add a new Project by clicking on ADD PROJECT in the top right corner. Give it a name and click Add.
      2. Add a new dashboard by clicking on ADD DASHBOARD. Give it a name and click Add.
3. Now you can add variables and panels to your dashboard.
      4. You can group your panels by adding the panels to a Panel Group.
      5. Move and resize the panels as needed.
      6. Watch this gif to learn more.
      7. You do not need to add the kube-monitoring datasource manually. It will be automatically discovered by Perses.
      8. Click Save after you have made changes.
      9. Export the dashboard.
        • Click on the {} icon in the top right corner of the dashboard.
        • Copy the entire JSON model.
        • See the next section for detailed instructions on how and where to paste the copied dashboard JSON model.

      Dashboard-as-Code

      Perses offers the possibility to define dashboards as code (DaC) instead of going through manipulations on the UI. But why would you want to do this? Basically Dashboard-as-Code (DaC) is something that becomes useful at scale, when you have many dashboards to maintain, to keep aligned on certain parts, etc. If you are interested in this, you can check the Perses documentation for more information.

      Add Dashboards as ConfigMaps

      By default, a sidecar container is deployed in the Perses pod. This container watches all configmaps in the cluster and filters out the ones with a label perses.dev/resource: "true". The files defined in those configmaps are written to a folder and this folder is accessed by Perses. Changes to the configmaps are continuously monitored and are reflected in Perses within 10 minutes.

      A recommendation is to use one configmap per dashboard. This way, you can easily manage the dashboards in your git repository.

      Folder structure:

      dashboards/
      ├── dashboard1.json
      ├── dashboard2.json
      ├── prometheusdatasource1.json
      ├── prometheusdatasource2.json
      templates/
      ├──dashboard-json-configmap.yaml
      

      Helm template to create a configmap for each dashboard:

      {{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
      ---
      apiVersion: v1
      kind: ConfigMap
      
      metadata:
        name: {{ printf "%s-%s" $.Release.Name $path | replace "/" "-" | trunc 63 }}
        labels:
          perses.dev/resource: "true"
      
      data:
      {{ printf "%s: |-" $path | replace "/" "-" | indent 2 }}
      {{ printf "%s" $bytes | indent 4 }}
      
      {{- end }}
      

      2.14 - Plutono

      Learn more about the plutono Plugin. Use it to install the web dashboarding system Plutono to collect, correlate, and visualize Prometheus metrics for your Greenhouse cluster.

      The main terminologies used in this document can be found in core-concepts.

      Overview

Observability is often required for the operation and automation of service offerings. Plutono provides you with tools to display Prometheus metrics on live dashboards with insightful charts and visualizations. In the Greenhouse context, this complements the kube-monitoring plugin, which is automatically picked up by Plutono as a data source. In addition, the Plugin provides a mechanism that automates the lifecycle of datasources and dashboards without having to restart Plutono.

      Plutono Architecture

      Disclaimer

      This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.

      Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

      Quick Start

This guide provides a quick and straightforward way to use Plutono as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-managed Kubernetes cluster
      • kube-monitoring Plugin installed to have at least one Prometheus instance running in the cluster

      The plugin works by default with anonymous access enabled. If you use the standard configuration in the kube-monitoring plugin, the data source and some kubernetes-operations dashboards are already pre-installed.

      Step 1: Add your dashboards

      Dashboards are selected from ConfigMaps across namespaces. The plugin searches for ConfigMaps with the label plutono-dashboard: "true" and imports them into Plutono. The ConfigMap must contain a key like my-dashboard.json with the dashboard JSON content. Example
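
A minimal sketch of such a ConfigMap (name, key, and dashboard content are placeholders; see also the Helm template in the "Sidecar for dashboards" section below for generating these from files):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-plutono-dashboard
  labels:
    # label the plugin searches for
    plutono-dashboard: "true"
data:
  # the key must end in .json; the value is the dashboard JSON model
  my-dashboard.json: |
    { ...dashboard JSON model... }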

      A guide on how to create dashboards can be found here.

      Step 2: Add your datasources

Data sources are selected from Secrets across namespaces. The plugin searches for Secrets with the label plutono-datasource: "true" and imports them into Plutono. The Secrets should contain valid datasource configuration YAML. Example
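
A minimal sketch (the name and URL are placeholders; a fully commented datasource example is shown in the "Sidecar for datasources" section below):

apiVersion: v1
kind: Secret
metadata:
  name: my-prometheus-datasource
  labels:
    plutono-datasource: "true"
stringData:
  my-datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: my-prometheus
        type: prometheus
        access: proxy
        url: http://prometheus.example.svc:9090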

      Values

      KeyTypeDefaultDescription
      global.imagePullSecretslist[]To help compatibility with other charts which use global.imagePullSecrets. Allow either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). Can be templated. global: imagePullSecrets: - name: pullSecret1 - name: pullSecret2 or global: imagePullSecrets: - pullSecret1 - pullSecret2
      global.imageRegistrystringnilOverrides the Docker registry globally for all images
      plutono.“plutono.ini”.“auth.anonymous”.enabledbooltrue
      plutono.“plutono.ini”.“auth.anonymous”.org_rolestring"Admin"
      plutono.“plutono.ini”.auth.disable_login_formbooltrue
      plutono.“plutono.ini”.log.modestring"console"
      plutono.“plutono.ini”.paths.datastring"/var/lib/plutono/"
      plutono.“plutono.ini”.paths.logsstring"/var/log/plutono"
      plutono.“plutono.ini”.paths.pluginsstring"/var/lib/plutono/plugins"
      plutono.“plutono.ini”.paths.provisioningstring"/etc/plutono/provisioning"
      plutono.admin.existingSecretstring""
      plutono.admin.passwordKeystring"admin-password"
      plutono.admin.userKeystring"admin-user"
      plutono.adminPasswordstring"strongpassword"
      plutono.adminUserstring"admin"
      plutono.affinityobject{}Affinity for pod assignment (evaluated as template) ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
      plutono.alertingobject{}
      plutono.assertNoLeakedSecretsbooltrue
      plutono.automountServiceAccountTokenbooltrueShould the service account be auto mounted on the pod
      plutono.autoscalingobject{"behavior":{},"enabled":false,"maxReplicas":5,"minReplicas":1,"targetCPU":"60","targetMemory":""}Create HorizontalPodAutoscaler object for deployment type
      plutono.containerSecurityContext.allowPrivilegeEscalationboolfalse
      plutono.containerSecurityContext.capabilities.drop[0]string"ALL"
      plutono.containerSecurityContext.seccompProfile.typestring"RuntimeDefault"
      plutono.createConfigmapbooltrueEnable creating the plutono configmap
      plutono.dashboardProvidersobject{}
      plutono.dashboardsobject{}
      plutono.dashboardsConfigMapsobject{}
      plutono.datasourcesobject{}
      plutono.deploymentStrategyobject{"type":"RollingUpdate"}See kubectl explain deployment.spec.strategy for more # ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
      plutono.dnsConfigobject{}
      plutono.dnsPolicystringnildns configuration for pod
      plutono.downloadDashboards.envobject{}
      plutono.downloadDashboards.envFromSecretstring""
      plutono.downloadDashboards.envValueFromobject{}
      plutono.downloadDashboards.resourcesobject{}
      plutono.downloadDashboards.securityContext.allowPrivilegeEscalationboolfalse
      plutono.downloadDashboards.securityContext.capabilities.drop[0]string"ALL"
      plutono.downloadDashboards.securityContext.seccompProfile.typestring"RuntimeDefault"
      plutono.downloadDashboardsImage.pullPolicystring"IfNotPresent"
      plutono.downloadDashboardsImage.registrystring"docker.io"The Docker registry
      plutono.downloadDashboardsImage.repositorystring"curlimages/curl"
      plutono.downloadDashboardsImage.shastring""
      plutono.downloadDashboardsImage.tagstring"8.14.1"
      plutono.enableKubeBackwardCompatibilityboolfalseEnable backward compatibility of kubernetes where version below 1.13 doesn’t have the enableServiceLinks option
      plutono.enableServiceLinksbooltrue
      plutono.envobject{}
plutono.envFromConfigMapslist[]The names of configmaps in the same kubernetes namespace which contain values to be added to the environment. Each entry should contain a name key, and can optionally specify whether the configmap must be defined with an optional key. Name is templated. ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmapenvsource-v1-core
      plutono.envFromSecretstring""The name of a secret in the same kubernetes namespace which contain values to be added to the environment This can be useful for auth tokens, etc. Value is templated.
      plutono.envFromSecretslist[]The names of secrets in the same kubernetes namespace which contain values to be added to the environment Each entry should contain a name key, and can optionally specify whether the secret must be defined with an optional key. Name is templated.
      plutono.envRenderSecretobject{}Sensible environment variables that will be rendered as new secret object This can be useful for auth tokens, etc. If the secret values contains “{{”, they’ll need to be properly escaped so that they are not interpreted by Helm ref: https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function
      plutono.envValueFromobject{}
      plutono.extraConfigmapMountslist[]Values are templated.
      plutono.extraContainerVolumeslist[]Volumes that can be used in init containers that will not be mounted to deployment pods
plutono.extraContainersstring""Enable and specify containers in extraContainers. This is meant to allow adding an authentication proxy to a plutono pod
      plutono.extraEmptyDirMountslist[]
      plutono.extraExposePortslist[]
      plutono.extraInitContainerslist[]Additional init containers (evaluated as template) ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
      plutono.extraLabelsobject{"plugin":"plutono"}Apply extra labels to common labels.
      plutono.extraObjectslist[]Create a dynamic manifests via values:
      plutono.extraSecretMountslist[]The additional plutono server secret mounts Defines additional mounts with secrets. Secrets must be manually created in the namespace.
      plutono.extraVolumeMountslist[]The additional plutono server volume mounts Defines additional volume mounts.
      plutono.extraVolumeslist[]
      plutono.gossipPortNamestring"gossip"
      plutono.headlessServiceboolfalseCreate a headless service for the deployment
      plutono.hostAliaseslist[]overrides pod.spec.hostAliases in the plutono deployment’s pods
      plutono.image.pullPolicystring"IfNotPresent"
      plutono.image.pullSecretslist[]Optionally specify an array of imagePullSecrets. # Secrets must be manually created in the namespace. # ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ # Can be templated. #
      plutono.image.registrystring"ghcr.io"
      plutono.image.repositorystring"credativ/plutono"
      plutono.image.shastring""
      plutono.image.tagstring"v7.5.39"Overrides the Plutono image tag whose default is the chart appVersion
      plutono.ingress.annotationsobject{}
      plutono.ingress.enabledboolfalse
      plutono.ingress.extraPathslist[]Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
      plutono.ingress.hosts[0]string"chart-example.local"
      plutono.ingress.labelsobject{}
      plutono.ingress.pathstring"/"
plutono.ingress.pathTypestring"Prefix"pathType is only for k8s >= 1.18
      plutono.ingress.tlslist[]
      plutono.lifecycleHooksobject{}
      plutono.livenessProbe.failureThresholdint10
      plutono.livenessProbe.httpGet.pathstring"/api/health"
      plutono.livenessProbe.httpGet.portint3000
      plutono.livenessProbe.initialDelaySecondsint60
      plutono.livenessProbe.timeoutSecondsint30
      plutono.namespaceOverridestring""
      plutono.networkPolicy.allowExternalbooltrue@param networkPolicy.ingress When true enables the creation # an ingress network policy #
      plutono.networkPolicy.egress.blockDNSResolutionboolfalse@param networkPolicy.egress.blockDNSResolution When enabled, DNS resolution will be blocked # for all pods in the plutono namespace.
      plutono.networkPolicy.egress.enabledboolfalse@param networkPolicy.egress.enabled When enabled, an egress network policy will be # created allowing plutono to connect to external data sources from kubernetes cluster.
      plutono.networkPolicy.egress.portslist[]@param networkPolicy.egress.ports Add individual ports to be allowed by the egress
      plutono.networkPolicy.egress.tolist[]
      plutono.networkPolicy.enabledboolfalse@param networkPolicy.enabled Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. #
      plutono.networkPolicy.explicitNamespacesSelectorobject{}@param networkPolicy.explicitNamespacesSelector A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed # If explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy’s namespace # and that match other criteria, the ones that have the good label, can reach the plutono. # But sometimes, we want the plutono to be accessible to clients from other namespaces, in this case, we can use this # LabelSelector to select these namespaces, note that the networkPolicy’s namespace should also be explicitly added. # # Example: # explicitNamespacesSelector: # matchLabels: # role: frontend # matchExpressions: # - {key: role, operator: In, values: [frontend]} #
      plutono.networkPolicy.ingressbooltrue@param networkPolicy.allowExternal Don’t require client label for connections # The Policy model to apply. When set to false, only pods with the correct # client label will have network access to plutono port defined. # When true, plutono will accept connections from any source # (with the correct destination port). #
      plutono.nodeSelectorobject{}Node labels for pod assignment ref: https://kubernetes.io/docs/user-guide/node-selection/
      plutono.persistenceobject{"accessModes":["ReadWriteOnce"],"disableWarning":false,"enabled":false,"extraPvcLabels":{},"finalizers":["kubernetes.io/pvc-protection"],"inMemory":{"enabled":false},"lookupVolumeName":true,"size":"10Gi","type":"pvc"}Enable persistence using Persistent Volume Claims ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
      plutono.persistence.extraPvcLabelsobject{}Extra labels to apply to a PVC.
      plutono.persistence.inMemoryobject{"enabled":false}If persistence is not enabled, this allows to mount the # local storage in-memory to improve performance #
      plutono.persistence.lookupVolumeNamebooltrueIf ’lookupVolumeName’ is set to true, Helm will attempt to retrieve the current value of ‘spec.volumeName’ and incorporate it into the template.
      plutono.pluginslist[]
      plutono.podDisruptionBudgetobject{}See kubectl explain poddisruptionbudget.spec for more # ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
      plutono.podPortNamestring"plutono"
      plutono.rbac.createbooltrue
      plutono.rbac.extraClusterRoleRuleslist[]
      plutono.rbac.extraRoleRuleslist[]
      plutono.rbac.namespacedboolfalse
      plutono.rbac.pspEnabledboolfalseUse an existing ClusterRole/Role (depending on rbac.namespaced false/true) useExistingRole: name-of-some-role useExistingClusterRole: name-of-some-clusterRole
      plutono.rbac.pspUseAppArmorboolfalse
      plutono.readinessProbe.httpGet.pathstring"/api/health"
      plutono.readinessProbe.httpGet.portint3000
      plutono.replicasint1
      plutono.resourcesobject{}
      plutono.revisionHistoryLimitint10
      plutono.securityContext.fsGroupint472
      plutono.securityContext.runAsGroupint472
      plutono.securityContext.runAsNonRootbooltrue
      plutono.securityContext.runAsUserint472
      plutono.serviceobject{"annotations":{},"appProtocol":"","enabled":true,"ipFamilies":[],"ipFamilyPolicy":"","labels":{"greenhouse.sap/expose":"true"},"loadBalancerClass":"","loadBalancerIP":"","loadBalancerSourceRanges":[],"port":80,"portName":"service","targetPort":3000,"type":"ClusterIP"}Expose the plutono service to be accessed from outside the cluster (LoadBalancer service). # or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it. # ref: http://kubernetes.io/docs/user-guide/services/ #
      plutono.service.annotationsobject{}Service annotations. Can be templated.
      plutono.service.appProtocolstring""Adds the appProtocol field to the service. This allows to work with istio protocol selection. Ex: “http” or “tcp”
      plutono.service.ipFamilieslist[]Sets the families that should be supported and the order in which they should be applied to ClusterIP as well. Can be IPv4 and/or IPv6.
      plutono.service.ipFamilyPolicystring""Set the ip family policy to configure dual-stack see Configure dual-stack
      plutono.serviceAccount.automountServiceAccountTokenboolfalse
      plutono.serviceAccount.createbooltrue
      plutono.serviceAccount.labelsobject{}
      plutono.serviceAccount.namestringnil
      plutono.serviceAccount.nameTeststringnil
      plutono.serviceMonitor.enabledboolfalseIf true, a ServiceMonitor CR is created for a prometheus operator # https://github.com/coreos/prometheus-operator #
      plutono.serviceMonitor.intervalstring"30s"
      plutono.serviceMonitor.labelsobject{}namespace: monitoring (defaults to use the namespace this chart is deployed to)
      plutono.serviceMonitor.metricRelabelingslist[]
      plutono.serviceMonitor.pathstring"/metrics"
      plutono.serviceMonitor.relabelingslist[]
      plutono.serviceMonitor.schemestring"http"
      plutono.serviceMonitor.scrapeTimeoutstring"30s"
      plutono.serviceMonitor.targetLabelslist[]
      plutono.serviceMonitor.tlsConfigobject{}
plutono.sidecarobject{}Sidecars that collect the configmaps with the specified label and store the included files into the respective folders. Requires at least Plutono 5 to work and cannot be used together with the parameters dashboardProviders, datasources and dashboards
      plutono.sidecar.alerts.envobject{}Additional environment variables for the alerts sidecar
      plutono.sidecar.alerts.labelstring"plutono_alert"label that the configmaps with alert are marked with
      plutono.sidecar.alerts.labelValuestring""value of label that the configmaps with alert are set to
      plutono.sidecar.alerts.resourcestring"both"search in configmap, secret or both
      plutono.sidecar.alerts.searchNamespacestringnilIf specified, the sidecar will search for alert config-maps inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces
      plutono.sidecar.alerts.watchMethodstring"WATCH"Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
      plutono.sidecar.dashboards.defaultFolderNamestringnilThe default folder name, it will create a subfolder under the folder and put dashboards in there instead
      plutono.sidecar.dashboards.extraMountslist[]Additional dashboard sidecar volume mounts
      plutono.sidecar.dashboards.folderstring"/tmp/dashboards"folder in the pod that should hold the collected dashboards (unless defaultFolderName is set)
plutono.sidecar.dashboards.folderAnnotationstringnilIf specified, the sidecar will look for annotation with this name to create folder and put graph here. You can use this parameter together with provider.foldersFromFilesStructure to annotate configmaps and create folder structure.
      plutono.sidecar.dashboards.providerobject{"allowUiUpdates":false,"disableDelete":false,"folder":"","folderUid":"","foldersFromFilesStructure":false,"name":"sidecarProvider","orgid":1,"type":"file"}watchServerTimeout: request to the server, asking it to cleanly close the connection after that. defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S watchServerTimeout: 3600 watchClientTimeout: is a client-side timeout, configuring your local socket. If you have a network outage dropping all packets with no RST/FIN, this is how long your client waits before realizing & dropping the connection. defaults to 66sec (sic!) watchClientTimeout: 60 provider configuration that lets plutono manage the dashboards
      plutono.sidecar.dashboards.provider.allowUiUpdatesboolfalseallow updating provisioned dashboards from the UI
      plutono.sidecar.dashboards.provider.disableDeleteboolfalsedisableDelete to activate a import-only behaviour
      plutono.sidecar.dashboards.provider.folderstring""folder in which the dashboards should be imported in plutono
      plutono.sidecar.dashboards.provider.folderUidstring""folder UID. will be automatically generated if not specified
      plutono.sidecar.dashboards.provider.foldersFromFilesStructureboolfalseallow Plutono to replicate dashboard structure from filesystem
      plutono.sidecar.dashboards.provider.namestring"sidecarProvider"name of the provider, should be unique
      plutono.sidecar.dashboards.provider.orgidint1orgid as configured in plutono
      plutono.sidecar.dashboards.provider.typestring"file"type of the provider
      plutono.sidecar.dashboards.reloadURLstring"http://localhost:3000/api/admin/provisioning/dashboards/reload"Endpoint to send request to reload alerts
      plutono.sidecar.dashboards.searchNamespacestring"ALL"Namespaces list. If specified, the sidecar will search for config-maps/secrets inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces.
      plutono.sidecar.dashboards.sizeLimitobject{}Sets the size limit of the dashboard sidecar emptyDir volume
plutono.sidecar.datasources.envobject{}Additional environment variables for the datasources sidecar
      plutono.sidecar.datasources.initDatasourcesboolfalseThis is needed if skipReload is true, to load any datasources defined at startup time. Deploy the datasources sidecar as an initContainer.
      plutono.sidecar.datasources.reloadURLstring"http://localhost:3000/api/admin/provisioning/datasources/reload"Endpoint to send request to reload datasources
      plutono.sidecar.datasources.resourcestring"both"search in configmap, secret or both
      plutono.sidecar.datasources.scriptstringnilAbsolute path to shell script to execute after a datasource got reloaded
      plutono.sidecar.datasources.searchNamespacestring"ALL"If specified, the sidecar will search for datasource config-maps inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces
      plutono.sidecar.datasources.watchMethodstring"WATCH"Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
      plutono.sidecar.image.registrystring"quay.io"The Docker registry
      plutono.testFramework.enabledbooltrue
      plutono.testFramework.image.registrystring"ghcr.io"
      plutono.testFramework.image.repositorystring"cloudoperators/greenhouse-extensions-integration-test"
      plutono.testFramework.image.tagstring"main"
      plutono.testFramework.imagePullPolicystring"IfNotPresent"
      plutono.testFramework.resourcesobject{}
      plutono.testFramework.securityContextobject{}
      plutono.tolerationslist[]Tolerations for pod assignment ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
      plutono.topologySpreadConstraintslist[]Topology Spread Constraints ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
      plutono.useStatefulSetboolfalse

      Example of extraVolumeMounts and extraVolumes

      Configure additional volumes with extraVolumes and volume mounts with extraVolumeMounts.

      Example for extraVolumeMounts and corresponding extraVolumes:

      extraVolumeMounts:
        - name: plugins
          mountPath: /var/lib/plutono/plugins
          subPath: configs/plutono/plugins
          readOnly: false
        - name: dashboards
          mountPath: /var/lib/plutono/dashboards
          hostPath: /usr/shared/plutono/dashboards
          readOnly: false
      
      extraVolumes:
        - name: plugins
          existingClaim: existing-plutono-claim
        - name: dashboards
          hostPath: /usr/shared/plutono/dashboards
      

      Volumes default to emptyDir. Set to persistentVolumeClaim, hostPath, csi, or configMap for other types. For a persistentVolumeClaim, specify an existing claim name with existingClaim.

      Import dashboards

      There are a few methods to import dashboards to Plutono. Below are some examples and explanations as to how to use each method:

      dashboards:
        default:
          some-dashboard:
            json: |
              {
                "annotations":
      
                ...
                # Complete json file here
                ...
      
                "title": "Some Dashboard",
                "uid": "abcd1234",
                "version": 1
              }
          custom-dashboard:
            # This is a path to a file inside the dashboards directory inside the chart directory
            file: dashboards/custom-dashboard.json
          prometheus-stats:
            # Ref: https://plutono.com/dashboards/2
            gnetId: 2
            revision: 2
            datasource: Prometheus
          loki-dashboard-quick-search:
            gnetId: 12019
            revision: 2
            datasource:
            - name: DS_PROMETHEUS
              value: Prometheus
          local-dashboard:
            url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
      

      Create a dashboard

      1. Click Dashboards in the main menu.

      2. Click New and select New Dashboard.

      3. Click Add new empty panel.

      4. Important: Add a datasource variable as they are provisioned in the cluster.

        • Go to Dashboard settings.
        • Click Variables.
        • Click Add variable.
        • General: Configure the variable with a proper Name as Type Datasource.
        • Data source options: Select the data source Type e.g. Prometheus.
        • Click Update.
        • Go back.
      5. Develop your panels.

        • On the Edit panel view, choose your desired Visualization.
        • Select the datasource variable you just created.
        • Write or construct a query in the query language of your data source.
        • Move and resize the panels as needed.
      6. Optionally add a tag to the dashboard to make grouping easier.

        • Go to Dashboard settings.
        • In the General section, add a Tag.
      7. Click Save. Note that the dashboard is saved in the browser’s local storage.

      8. Export the dashboard.

        • Go to Dashboard settings.
        • Click JSON Model.
        • Copy the JSON model.
        • Go to your Github repository and create a new JSON file in the dashboards directory.

      BASE64 dashboards

Dashboards can be stored on a server that does not return JSON directly but instead returns a Base64-encoded file (e.g. Gerrit). A new parameter has been added to the url use case: if you specify a b64content value equal to true after the url entry, Base64 decoding is applied before saving the file to disk. If this entry is not set or equals false, no decoding is applied to the file before saving it to disk.

      Gerrit use case

The Gerrit API for downloading files has the following schema: https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content, where {project-name} and {file-id} usually contain ‘/’ in their values, so these MUST be replaced by %2F. For example, if project-name is user/repo, branch-id is master and file-id is dir1/dir2/dashboard, the url value is https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
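
For example, a dashboards entry using this URL together with the b64content flag could look like the following (server and path are placeholders):

dashboards:
  default:
    gerrit-dashboard:
      url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
      # decode the Base64 response before saving the file to disk
      b64content: true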

      Sidecar for dashboards

      If the parameter sidecar.dashboards.enabled is set, a sidecar container is deployed in the plutono pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with a label as defined in sidecar.dashboards.label. The files defined in those configmaps are written to a folder and accessed by plutono. Changes to the configmaps are monitored and the imported dashboards are deleted/updated.

A recommendation is to use one configmap per dashboard, as removing one of several dashboards bundled in a single configmap is currently not properly mirrored in plutono.

      NOTE: Configure your data sources in your dashboards as variables to keep them portable across clusters.
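
For instance, a datasource variable in the dashboard JSON looks roughly like this (Grafana-7-style schema, which Plutono is derived from; the variable name is a placeholder); panels then reference ${DS_PROMETHEUS} instead of a hard-coded datasource name:

"templating": {
  "list": [
    {
      "name": "DS_PROMETHEUS",
      "label": "datasource",
      "type": "datasource",
      "query": "prometheus"
    }
  ]
}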

      Example dashboard config:

      Folder structure:

      dashboards/
      ├── dashboard1.json
      ├── dashboard2.json
      templates/
      ├──dashboard-json-configmap.yaml
      

      Helm template to create a configmap for each dashboard:

      {{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
      ---
      apiVersion: v1
      kind: ConfigMap
      
      metadata:
        name: {{ printf "%s-%s" $.Release.Name $path | replace "/" "-" | trunc 63 }}
        labels:
          plutono-dashboard: "true"
      
      data:
      {{ printf "%s: |-" $path | replace "/" "-" | indent 2 }}
      {{ printf "%s" $bytes | indent 4 }}
      
      {{- end }}
      

      Sidecar for datasources

      If the parameter sidecar.datasources.enabled is set, an init container is deployed in the plutono pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and filters out the ones with a label as defined in sidecar.datasources.label. The files defined in those secrets are written to a folder and accessed by plutono on startup. Using these yaml files, the data sources in plutono can be imported.

      Should you aim for reloading datasources in Plutono each time the config is changed, set sidecar.datasources.skipReload: false and adjust sidecar.datasources.reloadURL to http://<svc-name>.<namespace>.svc.cluster.local/api/admin/provisioning/datasources/reload.
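
A sketch of the corresponding chart values, assuming the plutono.* value prefix used throughout this Plugin:

plutono:
  sidecar:
    datasources:
      # reload datasources on change instead of only loading them at startup
      skipReload: false
      reloadURL: http://<svc-name>.<namespace>.svc.cluster.local/api/admin/provisioning/datasources/reload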

Secrets are recommended over configmaps for this use case because datasources usually contain private data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those.

      Example datasource config:

      apiVersion: v1
      kind: Secret
      metadata:
        name: plutono-datasources
        labels:
          # default value for: sidecar.datasources.label
          plutono-datasource: "true"
      stringData:
        datasources.yaml: |-
          apiVersion: 1
          datasources:
            - name: my-prometheus
              type: prometheus
              access: proxy
              orgId: 1
              url: my-url-domain:9090
              isDefault: false
              jsonData:
                httpMethod: 'POST'
              editable: false
      

NOTE: If you might include credentials in your datasource configuration, make sure not to use stringData but Base64-encoded data instead.

      apiVersion: v1
      kind: Secret
      metadata:
        name: my-datasource
        labels:
          plutono-datasource: "true"
      data:
        # The key must contain a unique name and the .yaml file type
        my-datasource.yaml: {{ include (print $.Template.BasePath "my-datasource.yaml") . | b64enc }}
      

      Example values to add a datasource adapted from Grafana:

      datasources:
       datasources.yaml:
        apiVersion: 1
        datasources:
            # <string, required> Sets the name you use to refer to
            # the data source in panels and queries.
          - name: my-prometheus
            # <string, required> Sets the data source type.
            type: prometheus
            # <string, required> Sets the access mode, either
            # proxy or direct (Server or Browser in the UI).
            # Some data sources are incompatible with any setting
            # but proxy (Server).
            access: proxy
            # <int> Sets the organization id. Defaults to orgId 1.
            orgId: 1
            # <string> Sets a custom UID to reference this
            # data source in other parts of the configuration.
            # If not specified, Plutono generates one.
            uid:
            # <string> Sets the data source's URL, including the
            # port.
            url: my-url-domain:9090
            # <string> Sets the database user, if necessary.
            user:
            # <string> Sets the database name, if necessary.
            database:
            # <bool> Enables basic authorization.
            basicAuth:
            # <string> Sets the basic authorization username.
            basicAuthUser:
            # <bool> Enables credential headers.
            withCredentials:
            # <bool> Toggles whether the data source is pre-selected
            # for new panels. You can set only one default
            # data source per organization.
            isDefault: false
            # <map> Fields to convert to JSON and store in jsonData.
            jsonData:
              httpMethod: 'POST'
              # <bool> Enables TLS authentication using a client
              # certificate configured in secureJsonData.
              # tlsAuth: true
              # <bool> Enables TLS authentication using a CA
              # certificate.
              # tlsAuthWithCACert: true
            # <map> Fields to encrypt before storing in jsonData.
            secureJsonData:
              # <string> Defines the CA cert, client cert, and
              # client key for encrypted authentication.
              # tlsCACert: '...'
              # tlsClientCert: '...'
              # tlsClientKey: '...'
              # <string> Sets the database password, if necessary.
              # password:
              # <string> Sets the basic authorization password.
              # basicAuthPassword:
            # <int> Sets the version. Used to compare versions when
            # updating. Ignored when creating a new data source.
            version: 1
            # <bool> Allows users to edit data sources from the
            # Plutono UI.
            editable: false
      

      How to serve Plutono with a path prefix (/plutono)

      In order to serve Plutono with a prefix (e.g., http://example.com/plutono), add the following to your values.yaml.

      ingress:
        enabled: true
        annotations:
          kubernetes.io/ingress.class: "nginx"
          nginx.ingress.kubernetes.io/rewrite-target: /$1
          nginx.ingress.kubernetes.io/use-regex: "true"
      
        path: /plutono/?(.*)
        hosts:
          - k8s.example.dev
      
      plutono.ini:
        server:
          root_url: http://localhost:3000/plutono # this host can be localhost
      

      How to securely reference secrets in plutono.ini

      This example uses Plutono file providers for secret values and the extraSecretMounts configuration flag (Additional plutono server secret mounts) to mount the secrets.

      In plutono.ini:

      plutono.ini:
        [auth.generic_oauth]
        enabled = true
        client_id = $__file{/etc/secrets/auth_generic_oauth/client_id}
        client_secret = $__file{/etc/secrets/auth_generic_oauth/client_secret}
      

      Existing secret, or created along with helm:

      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: auth-generic-oauth-secret
      type: Opaque
      stringData:
        client_id: <value>
        client_secret: <value>
      
      • Include in the extraSecretMounts configuration flag:
      - extraSecretMounts:
        - name: auth-generic-oauth-secret-mount
          secretName: auth-generic-oauth-secret
          defaultMode: 0440
          mountPath: /etc/secrets/auth_generic_oauth
          readOnly: true
      

      2.15 - Prometheus

      Learn more about the prometheus plugin. Use it to deploy a single Prometheus for your Greenhouse cluster.

      The main terminologies used in this document can be found in core-concepts.

      Overview

      Observability is often required for operation and automation of service offerings. To get the insights provided by an application and the container runtime environment, you need telemetry data in the form of metrics or logs sent to backends such as Prometheus or OpenSearch. With the prometheus Plugin, you will be able to cover the metrics part of the observability stack.

This Plugin includes a pre-configured package of Prometheus that helps make getting started easy and efficient. At its core, an automated and managed Prometheus installation is provided using the prometheus-operator.

      Components included in this Plugin:

      Disclaimer

      It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.

The Plugin is a configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates. The intention is to deliver a pre-configured package that works out of the box and can be extended by following the guide.

Also worth mentioning: we reuse the existing kube-monitoring Greenhouse plugin Helm chart, which already preconfigures Prometheus, simply by disabling the Kubernetes component scrapers and exporters.

      Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

      Quick start

      This guide provides a quick and straightforward way to deploy prometheus as a Greenhouse Plugin on your Kubernetes cluster.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.

• Installed prometheus-operator and its custom resource definitions (CRDs). As a foundation we recommend installing the kube-monitoring plugin first in your cluster to provide the prometheus-operator and its CRDs. There are two paths to do it:

        1. Go to Greenhouse dashboard and select the Prometheus plugin from the catalog. Specify the cluster and required option values.
        2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.

      Step 1:

      If you want to run the prometheus plugin without installing kube-monitoring in the first place, then you need to switch kubeMonitoring.prometheusOperator.enabled and kubeMonitoring.crds.enabled to true.
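
Expressed as Plugin option values, this is a minimal sketch (set alongside whatever other options you need):

optionValues:
  - name: kubeMonitoring.prometheusOperator.enabled
    value: true
  - name: kubeMonitoring.crds.enabled
    value: true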

      Step 2:

After installation, Greenhouse will provide a generated link to the Prometheus user interface. This is done via the label greenhouse.sap/expose: "true" on the Prometheus Service resource.

      Step 3:

      Greenhouse regularly performs integration tests that are bundled with prometheus. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.

      Configuration

      Global options

Name | Description | Value
global.commonLabels | Labels to add to all resources. This can be used to add a support_group or service label to all resources and alerting rules. | true

      Prometheus-operator options

Name | Description | Value
kubeMonitoring.prometheusOperator.enabled | Manages Prometheus and Alertmanager components | true
kubeMonitoring.prometheusOperator.alertmanagerInstanceNamespaces | Filter namespaces to look for prometheus-operator Alertmanager resources | []
kubeMonitoring.prometheusOperator.alertmanagerConfigNamespaces | Filter namespaces to look for prometheus-operator AlertmanagerConfig resources | []
kubeMonitoring.prometheusOperator.prometheusInstanceNamespaces | Filter namespaces to look for prometheus-operator Prometheus resources | []

      Prometheus options

Name | Description | Value
kubeMonitoring.prometheus.enabled | Deploy a Prometheus instance | true
kubeMonitoring.prometheus.annotations | Annotations for Prometheus | {}
kubeMonitoring.prometheus.tlsConfig.caCert | CA certificate to verify technical clients at Prometheus Ingress | Secret
kubeMonitoring.prometheus.ingress.enabled | Deploy Prometheus Ingress | true
kubeMonitoring.prometheus.ingress.hosts | Must be provided if Ingress is enabled. | []
kubeMonitoring.prometheus.ingress.ingressClassname | Specifies the ingress-controller | nginx
kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage | How large the persistent volume should be to house the prometheus database. Default 50Gi. | ""
kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName | The storage class to use for the persistent volume. | ""
kubeMonitoring.prometheus.prometheusSpec.scrapeInterval | Interval between consecutive scrapes. Defaults to 30s | ""
kubeMonitoring.prometheus.prometheusSpec.scrapeTimeout | Number of seconds to wait for target to respond before erroring | ""
kubeMonitoring.prometheus.prometheusSpec.evaluationInterval | Interval between consecutive evaluations | ""
kubeMonitoring.prometheus.prometheusSpec.externalLabels | External labels to add to any time series or alerts when communicating with external systems like Alertmanager | {}
kubeMonitoring.prometheus.prometheusSpec.ruleSelector | PrometheusRules to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
kubeMonitoring.prometheus.prometheusSpec.serviceMonitorSelector | ServiceMonitors to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
kubeMonitoring.prometheus.prometheusSpec.podMonitorSelector | PodMonitors to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
kubeMonitoring.prometheus.prometheusSpec.probeSelector | Probes to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
kubeMonitoring.prometheus.prometheusSpec.scrapeConfigSelector | scrapeConfigs to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {}
kubeMonitoring.prometheus.prometheusSpec.retention | How long to retain metrics | ""
kubeMonitoring.prometheus.prometheusSpec.logLevel | Log level to be configured for Prometheus | ""
kubeMonitoring.prometheus.prometheusSpec.additionalScrapeConfigs | Next to ScrapeConfig CRD, you can use AdditionalScrapeConfigs, which allows specifying additional Prometheus scrape configurations | ""
kubeMonitoring.prometheus.prometheusSpec.additionalArgs | Allows setting additional arguments for the Prometheus container | []

      Alertmanager options

Name | Description | Value
alerts.enabled | To send alerts to Alertmanager | false
alerts.alertmanager.hosts | List of Alertmanager hosts Prometheus can send alerts to | []
alerts.alertmanager.tlsConfig.cert | TLS certificate for communication with Alertmanager | Secret
alerts.alertmanager.tlsConfig.key | TLS key for communication with Alertmanager | Secret

      Service Discovery

      The prometheus Plugin provides a PodMonitor to automatically discover the Prometheus metrics of the Kubernetes Pods in any Namespace. The PodMonitor is configured to detect the metrics endpoint of the Pods if the following annotations are set:

metadata:
  annotations:
    greenhouse/scrape: "true"
    greenhouse/target: <prometheus plugin name>
      

Note: The annotations need to be added manually to have the pod scraped, and the port name needs to match.

      Examples

Deploy the prometheus Plugin into a remote cluster

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: prometheus
      spec:
        pluginDefinition: prometheus
        disabled: false
        optionValues:
          - name: kubeMonitoring.prometheus.prometheusSpec.retention
            value: 30d
          - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
            value: 100Gi
          - name: kubeMonitoring.prometheus.service.labels
            value:
              greenhouse.sap/expose: "true"
          - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
            value:
              cluster: example-cluster
              organization: example-org
              region: example-region
          - name: alerts.enabled
            value: true
          - name: alerts.alertmanagers.hosts
            value:
              - alertmanager.dns.example.com
          - name: alerts.alertmanagers.tlsConfig.cert
            valueFrom:
              secret:
                key: tls.crt
                name: tls-prometheus-<org-name>
          - name: alerts.alertmanagers.tlsConfig.key
            valueFrom:
              secret:
                key: tls.key
                name: tls-prometheus-<org-name>
      

      Extension of the plugin

prometheus can be extended with your own alerting rules and target configurations via the Custom Resource Definitions (CRDs) of the prometheus-operator. The user-defined resources carrying the desired configuration are picked up via label selectors.

      The CRD PrometheusRule enables the definition of alerting and recording rules that can be used by Prometheus or Thanos Rule instances. Alerts and recording rules are reconciled and dynamically loaded by the operator without having to restart Prometheus or Thanos Rule.

      prometheus will automatically discover and load the rules that match labels plugin: <plugin-name>.

      Example:

      apiVersion: monitoring.coreos.com/v1
      kind: PrometheusRule
      metadata:
        name: example-prometheus-rule
        labels:
          plugin: <metadata.name> 
          ## e.g plugin: prometheus-network
      spec:
       groups:
         - name: example-group
           rules:
           ...
      

      The CRDs PodMonitor, ServiceMonitor, Probe and ScrapeConfig allow the definition of a set of target endpoints to be scraped by prometheus. The operator will automatically discover and load the configurations that match labels plugin: <plugin-name>.

      Example:

      apiVersion: monitoring.coreos.com/v1
      kind: PodMonitor
      metadata:
        name: example-pod-monitor
        labels:
          plugin: <metadata.name> 
          ## e.g plugin: prometheus-network
      spec:
        selector:
          matchLabels:
            app: example-app
        namespaceSelector:
          matchNames:
            - example-namespace
        podMetricsEndpoints:
          - port: http
        ...
      

      2.16 - Repo Guard

      Repo Guard Greenhouse Plugin manages Github teams, team memberships and repository & team assignments.

      Hierarchy of Custom Resources

      Custom Resources

      Github – an installation of Github App

      apiVersion: githubguard.sap/v1
      kind: Github
      metadata:
        name: com
      spec:
        webURL: https://github.com
        v3APIURL: https://api.github.com
        integrationID: 123456
        clientUserAgent: greenhouse-repo-guard
        secret: github-com-secret
      

      GithubOrganization with Feature & Action Flags

      apiVersion: githubguard.sap/v1
      kind: GithubOrganization
      metadata:
        name: com--greenhouse-sandbox
        labels:
          githubguard.sap/addTeam: "true"
          githubguard.sap/removeTeam: "true"
          githubguard.sap/addOrganizationOwner: "true"
          githubguard.sap/removeOrganizationOwner: "true"
          githubguard.sap/addRepositoryTeam: "true"
          githubguard.sap/removeRepositoryTeam: "true"
          githubguard.sap/dryRun: "false"
      

      Default team & repository assignments:

      GithubTeamRepository for exception team & repository assignments

      apiVersion: githubguard.sap/v1
      kind: GithubAccountLink
metadata:
  name: com-123456
      spec:
        userID: 123456
        githubID: 2042059
        github: com
      

      2.17 - Service exposure test

This Plugin just provides a simple exposed service for manual testing.

      By adding the following label to a service it will become accessible from the central greenhouse system via a service proxy:

      greenhouse.sap/expose: "true"

This plugin creates an nginx deployment with an exposed service for testing.

      Configuration

      Specific port

By default, expose always uses the first port. If you need another port, you have to specify it by name:

      greenhouse.sap/exposeNamedPort: YOURPORTNAME
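
A minimal Service sketch combining both settings (names and ports are placeholders; the sketch assumes exposeNamedPort is set as a label alongside the expose label):

apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  labels:
    # expose this service via the central Greenhouse service proxy
    greenhouse.sap/expose: "true"
    # optional: pick the port named "http" instead of the first port
    greenhouse.sap/exposeNamedPort: http
spec:
  selector:
    app: nginx-test
  ports:
    - name: http
      port: 80
      targetPort: 80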

      2.18 - Teams2Slack

      Introduction

      This Plugin provides a Slack integration for a Greenhouse organization.
It manages Slack entities like channels, groups, handles, etc. and their members based on the teams configured in your Greenhouse organization.

Important: Please ensure that only one deployment of Teams2Slack runs against the same set of groups in Slack. Secondary instances should run in the provided dry-run mode. Otherwise, you might notice inconsistencies if the Teammembership objects of the clusters are unequal.

      Requirements

      • A Kubernetes Cluster to run against
      • The presence of the Greenhouse Teammemberships CRD and corresponding objects.

      Architecture


      A Teammembership object contains the members of a team. Changes to an object create an event in Kubernetes. This event is consumed by the first controller, which creates a mirrored SlackGroup object that reflects the content of the Teammembership object. This approach has the advantage that the deletion of a team can be securely detected with the utilization of finalizers. The second controller detects changes on SlackGroup objects. The users present in a team are then aligned to a Slack group.

      Configuration

      Deploy the Teams2Slack Plugin with a configuration like the following (only the mandatory fields are included):

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: teams2slack
        namespace: default
      spec:
        pluginDefinition: teams2slack
        disabled: false
        optionValues:
          - name: groupNamePrefix
            value: 
          - name: groupNameSuffix
            value: 
          - name: infoChannelID
            value:
          - name: token
            valueFrom:
              secret:
                key: SLACK_TOKEN
                name: teams2slack-secret
      ---
      apiVersion: v1
      kind: Secret
      metadata:
        name: teams2slack-secret
      type: Opaque
      data:
        SLACK_TOKEN: # base64-encoded Slack token
      

      The values that can or need to be provided have the following meaning:

      Environment Variable | Meaning
      groupNamePrefix (mandatory) | The prefix the created Slack group should have. Choose a prefix that matches your organization.
      groupNameSuffix (mandatory) | The suffix the created Slack group should have. Choose a suffix that matches your organization.
      infoChannelID (mandatory) | The channel ID the created Slack groups should have. You can currently define one channel ID, which will be applied to all created groups. Make sure to use the channel ID and not the channel name.
      token (mandatory) | The Slack token used to authenticate against Slack.
      eventRequeueTimer (optional) | If a Slack API request fails due to a network error, or because data is currently being fetched, it is requeued to the operator's work queue. Uses the Go duration format (1s = every second, 1m = every minute).
      loadDataBackoffTimer (optional) | Defines when a Slack API data call occurs. Uses the Go duration format.
      dryRun (optional) | Slack write operations are not executed if set to true. Requires a valid SLACK_TOKEN; the other environment variables can be mocked.
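
      As a sketch, and assuming the optional settings are passed as optionValues with the same names as in the table above, enabling dry-run mode with a custom requeue interval could look like this:

      spec:
        optionValues:
          - name: eventRequeueTimer   # assumed option name, see table above
            value: 1m
          - name: dryRun              # assumed option name, see table above
            value: true
      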

      2.19 - Thanos

      Learn more about the Thanos Plugin. Use it to enable extended metrics retention and querying across Prometheus servers and Greenhouse clusters.

      The main terminology used in this document can be found in core-concepts.

      Overview

      Thanos is a set of components that can be used to extend the storage and retrieval of metrics in Prometheus. It allows you to store metrics in a remote object store and query them across multiple Prometheus servers and Greenhouse clusters. This Plugin is intended to provide a set of pre-configured Thanos components that enable a proven composition. At the core, a set of Thanos components is installed that adds long-term storage capability to a single kube-monitoring Plugin and makes both current and historical data available again via one Thanos Query component.

      Thanos Architecture

      The Thanos Sidecar is a component that is deployed as a container together with a Prometheus instance. This allows Thanos to optionally upload metrics to the object store and Thanos Query to access Prometheus data via a common, efficient StoreAPI.

      The Thanos Compact component applies the Prometheus 2.0 Storage Engine compaction process to data uploaded to the object store. The Compactor is also responsible for applying the configured retention and downsampling of the data.

      The Thanos Store also implements the StoreAPI and serves the historical data from an object store. It acts primarily as an API gateway and has no persistence itself.

      Thanos Query implements the Prometheus HTTP v1 API for querying data in a Thanos cluster via PromQL. In short, it collects the data needed to evaluate the query from the connected StoreAPIs, evaluates the query and returns the result.

      This plugin deploys the following Thanos components:

      Planned components:

      This Plugin does not deploy the following components:

      Disclaimer

      It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.

      Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.

      Quick start

      This guide provides a quick and straightforward way to use Thanos as a Greenhouse Plugin on your Kubernetes cluster. The guide is meant to build the following setup.

      Prerequisites

      • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
      • Ready to use credentials for a compatible object store
      • kube-monitoring plugin installed. Thanos Sidecar on the Prometheus must be enabled by providing the required object store credentials.

      Step 1:

      Create a Kubernetes Secret with your object store credentials following the Object Store preparation section.

      Step 2:

      Enable the Thanos Sidecar on the Prometheus in the kube-monitoring plugin by providing the required object store credentials. Follow the kube-monitoring plugin enablement section.

      Step 3:

      Create a Thanos Query Plugin by following the Thanos Query section.

      Configuration

      Object Store preparation

      To run Thanos, you need object storage credentials. Get the credentials of your provider and add them to a Kubernetes Secret. The Thanos documentation provides a great overview on the different supported store types.

      Usually this looks something like this:

      type: $STORAGE_TYPE
      config:
          user:
          password:
          domain:
          ...
      
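
      For example, for an S3-compatible object store the file could look roughly like this (bucket, endpoint and credentials are placeholders; check the Thanos object storage documentation for the exact fields supported by your provider):

      type: S3
      config:
        bucket: my-thanos-bucket     # placeholder
        endpoint: s3.example.com     # placeholder
        access_key: <ACCESS_KEY>     # placeholder
        secret_key: <SECRET_KEY>     # placeholder
      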

      If you've got everything in a file, deploy it in your remote cluster in the namespace where Prometheus and Thanos will be.

      Important: $THANOS_PLUGIN_NAME is needed again later for the respective Thanos Plugin; the names must not differ!

      kubectl create secret generic $THANOS_PLUGIN_NAME-metrics-objectstore --from-file=thanos.yaml=/path/to/your/file
      

      kube-monitoring plugin enablement

      Prometheus in kube-monitoring needs to be altered to have a sidecar and ship metrics to the new object store too. You have to provide the Secret you’ve just created to the (most likely already existing) kube-monitoring plugin. Add this:

      spec:
          optionValues:
            - name: kubeMonitoring.prometheus.prometheusSpec.thanos.objectStorageConfig.existingSecret.key
              value: thanos.yaml
            - name: kubeMonitoring.prometheus.prometheusSpec.thanos.objectStorageConfig.existingSecret.name
              value: $THANOS_PLUGIN_NAME-metrics-objectstore
      

      Values used here are described in the Prometheus Operator Spec.

      Thanos Query

      This is the real deal now: Define your Thanos Query by creating a plugin.

      NOTE1: $THANOS_PLUGIN_NAME needs to be consistent with your secret created earlier.

      NOTE2: The releaseNamespace needs to be the same as the namespace where kube-monitoring resides. By default this is kube-monitoring.

      apiVersion: greenhouse.sap/v1alpha1
      kind: Plugin
      metadata:
        name: $YOUR_CLUSTER_NAME
      spec:
        pluginDefinition: thanos
        disabled: false
        clusterName: $YOUR_CLUSTER_NAME
        releaseNamespace: kube-monitoring
      

      Thanos Ruler

      Thanos Ruler evaluates Prometheus rules against the chosen query API. This allows evaluation of rules using metrics from different Prometheus instances.


      To enable Thanos Ruler component creation (Thanos Ruler is disabled by default) you have to set:

      spec:
        optionsValues:
        - name: thanos.ruler.enabled
          value: true
      

      Configuration

      Alertmanager

      For Thanos Ruler to communicate with Alertmanager, we need to enable the appropriate configuration and provide the names of the secret and key containing the necessary SSO key and certificate to the Plugin.

      Example of Plugin setup with Thanos Ruler using Alertmanager

      spec:
        optionsValues:
        - name: thanos.ruler.enabled
          value: true
        - name: thanos.ruler.alertmanagers.enabled
          value: true
        - name: thanos.ruler.alertmanagers.authentication.ssoCert
          valueFrom:
            secret:
              key: $KEY_NAME
              name: $SECRET_NAME
        - name: thanos.ruler.alertmanagers.authentication.ssoKey
          valueFrom:
            secret:
              key: $KEY_NAME
              name: $SECRET_NAME
      

      [OPTIONAL] Handling your Prometheus and Thanos Stores.

      Default Prometheus and Thanos Endpoint

      Thanos Query automatically adds the Prometheus and Thanos endpoints. If you just have a single Prometheus with Thanos enabled, this works out of the box. Details are in the next two chapters. See Standalone Query for your own configuration.

      Prometheus Endpoint

      Thanos Query checks for a service named prometheus-operated in the same namespace with GRPC port 10901 available. The CLI option looks like this and is configured in the Plugin itself:

      --store=prometheus-operated:10901

      Thanos Endpoint

      Thanos Query also checks for a Thanos endpoint named like releaseName-store. The associated command line flag for this parameter looks like:

      --store=thanos-kube-store:10901

      If you have just one occurrence of this Thanos plugin deployed, the default option works and needs no further configuration.

      Standalone Query


      In case you want to achieve a setup like the one above and have an overarching Thanos Query running against multiple Stores, you can set it to standalone and add your own store list. Set up your Plugin like this:

      spec:
        optionsValues:
        - name: thanos.query.standalone
          value: true
      

      This would enable you to either:

      • query multiple stores with a single Query

        spec:
          optionsValues:
          - name: thanos.query.stores
            value:
              - thanos-kube-1-store:10901
              - thanos-kube-2-store:10901
              - kube-monitoring-1-prometheus:10901
              - kube-monitoring-2-prometheus:10901
        
      • query multiple Thanos Queries with a single Query. Note that there is no -store suffix in this case.

        spec:
          optionsValues:
          - name: thanos.query.stores
            value:
              - thanos-kube-1:10901
              - thanos-kube-2:10901
        

      Query GRPC Ingress

      To expose the Thanos Query GRPC endpoint externally, you can configure an ingress resource. This is useful for enabling external tools or other clusters to query the Thanos Query component. Example configuration for enabling GRPC ingress:

      grpc:
        enabled: true
        hosts:
          - host: thanos.local
            paths:
              - path: /
                pathType: ImplementationSpecific
      
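
      Expressed as Plugin option values, a sketch based on the thanos.query.ingress.grpc.* keys from the Values table below could look like this:

      - name: thanos.query.ingress.grpc.enabled
        value: true
      - name: thanos.query.ingress.grpc.hosts
        value:
          - host: thanos.local
            paths:
              - path: /
                pathType: ImplementationSpecific
      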

      TLS Ingress

      To enable TLS for the Thanos Query GRPC endpoint, you can configure a TLS secret. This is useful for securing the communication between external clients and the Thanos Query component. Example configuration for enabling TLS ingress:

      tls:
        - secretName: ingress-cert
          hosts: [thanos.local]
      

      Thanos Global Query

      In the case of a multi-cluster setup, you may want your Thanos Query to be able to query all Thanos components in all clusters. This is possible by leveraging GRPC Ingress and TLS Ingress. If your remote clusters are reachable via a common domain, you can add the endpoints of the remote clusters to the stores list in the Thanos Query configuration. This allows the Thanos Query to query all Thanos components across all clusters.

      spec:
        optionsValues:
        - name: thanos.query.stores
          value:
            - thanos.local-1:443
            - thanos.local-2:443
            - thanos.local-3:443
      

      Pay attention to port numbers. The default port for GRPC is 443.

      Disable Individual Thanos Components

      It is possible to disable certain Thanos components for your deployment. To do so, add the necessary configuration to your Plugin (currently it is not possible to disable the Query component):

      - name: thanos.store.enabled
        value: false
      - name: thanos.compactor.enabled
        value: false
      
      Thanos Component | Enabled by default | Deactivatable | Flag
      Query            | True               | False         | n/a
      Store            | True               | True          | thanos.store.enabled
      Compactor        | True               | True          | thanos.compactor.enabled
      Ruler            | False              | True          | thanos.ruler.enabled

      Operations

      Thanos Compactor

      If you deploy the plugin with the default values, Thanos compactor will be shipped too and use the same secret ($THANOS_PLUGIN_NAME-metrics-objectstore) to retrieve, compact and push back timeseries.

      Based on experience, a 100Gi PVC is used in order not to overload the ephemeral storage of the Kubernetes nodes. Depending on the configured retention and the amount of metrics, this may not be sufficient and larger volumes may be required. In any case, it is always safe to clear the volume of the compactor and increase it if necessary.

      The object storage costs are heavily impacted by how granularly the timeseries are stored (see Downsampling). These are the pre-configured defaults; you can change them as needed (a sketch for overriding them follows below):

      raw: 7776000s (90d)
      5m: 7776000s (90d)
      1h: 157680000s (5y)
      
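
      A sketch for overriding these defaults via the corresponding thanos.compactor.* option values (see the Values table below); the volume size is included as an example of enlarging the Compactor PVC:

      - name: thanos.compactor.retentionResolutionRaw
        value: 7776000s      # 90 days of raw-resolution data
      - name: thanos.compactor.retentionResolution5m
        value: 7776000s      # 90 days of 5m-downsampled data
      - name: thanos.compactor.retentionResolution1h
        value: 157680000s    # 5 years of 1h-downsampled data
      - name: thanos.compactor.volume.size
        value: 200Gi         # example: larger Compactor PVC
      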

      Thanos ServiceMonitor

      ServiceMonitor configures Prometheus to scrape metrics from all the deployed Thanos components.

      To enable the creation of a ServiceMonitor we can use the Thanos Plugin configuration.

      NOTE: You have to provide the serviceMonitorSelector matchLabels of your Prometheus instance. In the Greenhouse context this should look like plugin: $PROMETHEUS_PLUGIN_NAME.

      spec:
        optionsValues:
        - name: thanos.serviceMonitor.selfMonitor
          value: true
        - name: thanos.serviceMonitor.labels
          value:
            plugin: $PROMETHEUS_PLUGIN_NAME
      

      Creating Datasources for Perses

      When deploying Thanos, a Perses datasource is automatically created by default, allowing Perses to fetch data for its visualizations and making it the global default datasource for the selected Perses instance.

      The Perses datasource is created as a configmap, which allows Perses to connect to the Thanos Query API and retrieve metrics. This integration is essential for enabling dashboards and visualizations in Perses.

      Example configuration:

      spec:
        optionsValues:
          - name: thanos.query.persesDatasource.create
            value: true
          - name: thanos.query.persesDatasource.selector
            value:
              perses.dev/resource: "true"
      

      You can further customize the datasource resource using the selector field if you want to target specific Perses instances.

      Note:

      • The Perses datasource is always created as the global default for Perses.
      • The datasource configmap is required for Perses to fetch data for its visualizations.

      For more details, see the thanos.query.persesDatasource options in the Values table below.

      Blackbox-exporter Integration

      If Blackbox-exporter is enabled and store endpoints are provided, this Thanos deployment will automatically create a ServiceMonitor to probe the specified Thanos GRPC endpoints. Additionally, a PrometheusRule is created to alert in case of failing probes. This allows you to monitor the availability and responsiveness of your Thanos Store components using Blackbox probes and receive alerts if any endpoints become unreachable.
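
      A sketch of the relevant option values (the store endpoints are placeholders):

      - name: blackboxExporter.enabled
        value: true
      - name: thanos.query.stores
        value:
          - thanos.local-1:443   # placeholder GRPC endpoints to probe
          - thanos.local-2:443
      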

      Values

      Key | Type | Default | Description
      blackboxExporter.enabled | bool | false | Enable creation of Blackbox exporter resources for probing Thanos stores. It will create ServiceMonitor and PrometheusRule CRs to probe the store endpoints provided to the Helm release (thanos.query.stores). Make sure the Blackbox exporter is enabled in the kube-monitoring plugin and that it uses the same TLS secret as the Thanos instance.
      global.commonLabels | object | the chart will add some internal labels automatically | Labels to apply to all resources
      global.imageRegistry | string | nil | Overrides the registry globally for all images
      thanos.compactor.additionalArgs | list | [] | Additional arguments for Thanos Compactor
      thanos.compactor.annotations | object | {} | Annotations to add to the Thanos Compactor resources
      thanos.compactor.compact.cleanupInterval | string | 1800s | Set Thanos Compactor compact.cleanup-interval
      thanos.compactor.compact.concurrency | string | 1 | Set Thanos Compactor compact.concurrency
      thanos.compactor.compact.waitInterval | string | 900s | Set Thanos Compactor wait-interval
      thanos.compactor.consistencyDelay | string | 1800s | Set Thanos Compactor consistency-delay
      thanos.compactor.containerLabels | object | {} | Labels to add to the Thanos Compactor container
      thanos.compactor.deploymentLabels | object | {} | Labels to add to the Thanos Compactor deployment
      thanos.compactor.enabled | bool | true | Enable Thanos Compactor component
      thanos.compactor.httpGracePeriod | string | 120s | Set Thanos Compactor http-grace-period
      thanos.compactor.logLevel | string | info | Thanos Compactor log level
      thanos.compactor.retentionResolution1h | string | 157680000s | Set Thanos Compactor retention.resolution-1h
      thanos.compactor.retentionResolution5m | string | 7776000s | Set Thanos Compactor retention.resolution-5m
      thanos.compactor.retentionResolutionRaw | string | 7776000s | Set Thanos Compactor retention.resolution-raw
      thanos.compactor.serviceLabels | object | {} | Labels to add to the Thanos Compactor service
      thanos.compactor.volume.labels | list | [] | Labels to add to the Thanos Compactor PVC resource
      thanos.compactor.volume.size | string | 100Gi | Set Thanos Compactor PersistentVolumeClaim size in Gi
      thanos.grpcAddress | string | 0.0.0.0:10901 | GRPC address used across the stack
      thanos.httpAddress | string | 0.0.0.0:10902 | HTTP address used across the stack
      thanos.image.pullPolicy | string | "IfNotPresent" | Thanos image pull policy
      thanos.image.repository | string | "quay.io/thanos/thanos" | Thanos image repository
      thanos.image.tag | string | "v0.38.0" | Thanos image tag
      thanos.query.additionalArgs | list | [] | Additional arguments for Thanos Query
      thanos.query.annotations | object | {} | Annotations to add to the Thanos Query resources
      thanos.query.autoDownsampling | bool | true |
      thanos.query.containerLabels | object | {} | Labels to add to the Thanos Query container
      thanos.query.deploymentLabels | object | {} | Labels to add to the Thanos Query deployment
      thanos.query.ingress.annotations | object | {} | Additional annotations for the Ingress resource. To enable certificate autogeneration, place your cert-manager annotations here. For a full list of possible ingress annotations, see: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
      thanos.query.ingress.enabled | bool | false | Enable ingress controller resource
      thanos.query.ingress.grpc.annotations | object | {} | Additional annotations for the Ingress resource (GRPC). To enable certificate autogeneration, place your cert-manager annotations here. For a full list of possible ingress annotations, see: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
      thanos.query.ingress.grpc.enabled | bool | false | Enable ingress controller resource (GRPC)
      thanos.query.ingress.grpc.hosts | list | [{"host":"thanos.local","paths":[{"path":"/","pathType":"Prefix"}]}] | Default host for the ingress resource (GRPC)
      thanos.query.ingress.grpc.ingressClassName | string | "" | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) (GRPC). Required if you have more than one IngressClass marked as the default for your cluster. ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
      thanos.query.ingress.grpc.tls | list | [] | Ingress TLS configuration (GRPC)
      thanos.query.ingress.hosts | list | [{"host":"thanos.local","paths":[{"path":"/","pathType":"Prefix"}]}] | Default host for the ingress resource
      thanos.query.ingress.ingressClassName | string | "" | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+). Required if you have more than one IngressClass marked as the default for your cluster. ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
      thanos.query.ingress.tls | list | [] | Ingress TLS configuration
      thanos.query.logLevel | string | info | Thanos Query log level
      thanos.query.persesDatasource.create | bool | true | Creates a Perses datasource for Thanos Query
      thanos.query.persesDatasource.selector | object | {} | Label selectors for the Perses sidecar to detect this datasource
      thanos.query.plutonoDatasource.create | bool | false | Creates a Perses datasource for standalone Thanos Query
      thanos.query.plutonoDatasource.selector | object | {} | Label selectors for the Plutono sidecar to detect this datasource
      thanos.query.replicaLabel | string | nil |
      thanos.query.replicas | string | nil | Number of Thanos Query replicas to deploy
      thanos.query.serviceLabels | object | {} | Labels to add to the Thanos Query service
      thanos.query.standalone | bool | false |
      thanos.query.stores | list | [] |
      thanos.query.tls.data | object | {} |
      thanos.query.tls.secretName | string | "" |
      thanos.query.web.externalPrefix | string | nil |
      thanos.query.web.routePrefix | string | nil |
      thanos.ruler.alertmanagers | object | nil | Configures the list of Alertmanager endpoints to send alerts to. The configuration format is defined at https://thanos.io/tip/components/rule.md/#alertmanager
      thanos.ruler.alertmanagers.authentication.enabled | bool | true | Enable Alertmanager authentication for Thanos Ruler
      thanos.ruler.alertmanagers.authentication.ssoCert | string | nil | SSO cert for Alertmanager authentication
      thanos.ruler.alertmanagers.authentication.ssoKey | string | nil | SSO key for Alertmanager authentication
      thanos.ruler.alertmanagers.enabled | bool | true | Enable Thanos Ruler Alertmanager config
      thanos.ruler.alertmanagers.hosts | string | nil | List of host endpoints to send alerts to
      thanos.ruler.annotations | object | {} | Annotations to add to the Thanos Ruler resources
      thanos.ruler.enabled | bool | false | Enable Thanos Ruler components
      thanos.ruler.externalPrefix | string | "/ruler" | Set Thanos Ruler external prefix
      thanos.ruler.labels | object | {} | Labels to add to the Thanos Ruler deployment
      thanos.ruler.matchLabel | string | nil | TODO
      thanos.ruler.serviceLabels | object | {} | Labels to add to the Thanos Ruler service
      thanos.serviceMonitor.alertLabels | string | support_group: "default" meta: "" | Labels to add to the PrometheusRules alerts
      thanos.serviceMonitor.dashboards | bool | true | Create configmaps containing Perses dashboards
      thanos.serviceMonitor.labels | object | {} | Labels to add to the ServiceMonitor/PrometheusRules. Make sure the label matches your Prometheus serviceMonitorSelector/ruleSelector configs; by default Greenhouse kube-monitoring follows the label pattern plugin: "{{ $.Release.Name }}"
      thanos.serviceMonitor.selfMonitor | bool | false | Create a ServiceMonitor and PrometheusRules for Thanos components. Disabled by default since the label is required for the Prometheus serviceMonitorSelector/ruleSelector.
      thanos.store.additionalArgs | list | [] | Additional arguments for Thanos Store
      thanos.store.annotations | object | {} | Annotations to add to the Thanos Store resources
      thanos.store.chunkPoolSize | string | 4GB | Set Thanos Store chunk-pool-size
      thanos.store.containerLabels | object | {} | Labels to add to the Thanos Store container
      thanos.store.deploymentLabels | object | {} | Labels to add to the Thanos Store deployment
      thanos.store.enabled | bool | true | Enable Thanos Store component
      thanos.store.indexCacheSize | string | 1GB | Set Thanos Store index-cache-size
      thanos.store.logLevel | string | info | Thanos Store log level
      thanos.store.serviceLabels | object | {} | Labels to add to the Thanos Store service