This section provides an overview of the available PluginDefinitions in Greenhouse.
Plugin Catalog
- 1: Alerts
- 2: Cert-manager
- 3: Decentralized Observer of Policies (Violations)
- 4: Designate Ingress CNAME operator (DISCO)
- 5: DigiCert issuer
- 6: External DNS
- 7: Github Guard
- 8: Ingress NGINX
- 9: Kubernetes Monitoring
- 10: Logshipper
- 11: OpenTelemetry
- 12: Plutono
- 13: Service exposure test
- 14: Teams2Slack
- 15: Thanos
1 - Alerts
Learn more about the alerts plugin. Use it to activate Prometheus alert management for your Greenhouse organisation.
The main terminologies used in this document can be found in core-concepts.
Overview
This Plugin includes a preconfigured Prometheus Alertmanager, which is deployed and managed via the Prometheus Operator, and Supernova, an advanced user interface for Prometheus Alertmanager. Certificates are automatically generated to enable sending alerts from Prometheus to Alertmanager. These alerts can also be sent as Slack notifications with a provided set of notification templates.
Components included in this Plugin:
This Plugin is usually deployed alongside the kube-monitoring Plugin and does not deploy the Prometheus Operator itself. However, if you intend to use it stand-alone, you need to explicitly enable the deployment of the Prometheus Operator in the plugin's configuration interface, otherwise the Plugin will not work.
Disclaimer
This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.
The Plugin is a deeply configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates.
It is intended as a platform that can be extended by following the guide.
Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.
Quick start
This guide provides a quick and straightforward way to use alerts as a Greenhouse Plugin on your Kubernetes cluster.
Prerequisites
- A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
- kube-monitoring plugin (which brings in the Prometheus Operator), or, for stand-alone use, the deployment of the Prometheus Operator explicitly enabled in this plugin
Step 1:
You can install the alerts package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:
- Go to Greenhouse dashboard and select the Alerts Plugin from the catalog. Specify the cluster and required option values.
- Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.
Step 2:
After the installation, you can access the Supernova UI by navigating to the Alerts tab in the Greenhouse dashboard.
Step 3:
Greenhouse regularly performs integration tests that are bundled with alerts. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.
Configuration
Prometheus Alertmanager options
Name | Description | Value |
---|---|---|
alerts.commonLabels | Labels to apply to all resources | {} |
alerts.alertmanager.enabled | Deploy Prometheus Alertmanager | true |
alerts.alertmanager.annotations | Annotations for Alertmanager | {} |
alerts.alertmanager.config | Alertmanager configuration directives. | {} |
alerts.alertmanager.ingress.enabled | Deploy Alertmanager Ingress | false |
alerts.alertmanager.ingress.hosts | Must be provided if Ingress is enabled. | [] |
alerts.alertmanager.ingress.tls | Must be a valid TLS configuration for Alertmanager Ingress. Supernova UI passes the client certificate to retrieve alerts. | {} |
alerts.alertmanager.ingress.ingressClassname | Specifies the ingress-controller | nginx |
alerts.alertmanager.servicemonitor.additionalLabels | kube-monitoring plugin: <plugin.name> to scrape Alertmanager metrics. | {} |
alerts.alertmanager.alertmanagerConfig.slack.routes[].name | Name of the Slack route. | "" |
alerts.alertmanager.alertmanagerConfig.slack.routes[].channel | Slack channel to post alerts to. Must be defined with slack.webhookURL . | "" |
alerts.alertmanager.alertmanagerConfig.slack.routes[].webhookURL | Slack webhookURL to post alerts to. Must be defined with slack.channel . | "" |
alerts.alertmanager.alertmanagerConfig.slack.routes[].matchers | List of matchers that the alert’s label should match. matchType , name , regex , value | [] |
alerts.alertmanager.alertmanagerConfig.webhook.routes[].name | Name of the webhook route. | "" |
alerts.alertmanager.alertmanagerConfig.webhook.routes[].url | Webhook url to post alerts to. | "" |
alerts.alertmanager.alertmanagerConfig.webhook.routes[].matchers | List of matchers that the alert’s label should match. matchType , name , regex , value | [] |
alerts.defaultRules.create | Creates community Alertmanager alert rules. | true |
alerts.defaultRules.labels | kube-monitoring plugin: <plugin.name> to evaluate Alertmanager rules. | {} |
alerts.alertmanager.alertmanagerSpec.alertmanagerConfiguration | AlertmanagerConfig to be used as the top-level configuration | false |
Supernova options
theme: Override the default theme. Possible values are "theme-light" or "theme-dark" (default).
endpoint: Alertmanager API endpoint URL /api/v2. Should be one of alerts.alertmanager.ingress.hosts.
silenceExcludedLabels: Labels that are excluded by default when creating a silence. They can still be added if necessary via the advanced options in the silence form. The labels must be an array of strings. Example: ["pod", "pod_name", "instance"]
filterLabels: Labels shown in the filter dropdown, enabling users to filter alerts based on specific criteria. The 'Status' label serves as a default filter; it is automatically computed from the alert status attribute and will not be overwritten. The labels must be an array of strings. Example: ["app", "cluster", "cluster_type"]
predefinedFilters: Filters applied in the UI to differentiate between contexts by matching alerts with regular expressions. They are loaded by default when the application starts. The format is a list of objects including name, displayName and matchers (a map of label names to regular expressions). Example:
[
  {
    "name": "prod",
    "displayName": "Productive System",
    "matchers": {
      "region": "^prod-.*"
    }
  }
]
silenceTemplates: SilenceTemplates are used in the schedule-silence modal to allow pre-defined silences to be applied for scheduled maintenance windows. The format consists of a list of objects including description, editable_labels (array of strings specifying the labels that users can modify), fixed_labels (map containing fixed labels and their corresponding values), status, and title. Example:
"silenceTemplates": [
{
"description": "Description of the silence template",
"editable_labels": ["region"],
"fixed_labels": {
"name": "Marvin",
},
"status": "active",
"title": "Silence"
}
]
Managing Alertmanager configuration
ref:
- https://prometheus.io/docs/alerting/configuration/#configuration-file
- https://prometheus.io/webtools/alerting/routing-tree-editor/
By default, the Alertmanager instances will start with a minimal configuration which isn't really useful, since it doesn't send any notifications when receiving alerts.
You have multiple options to provide the Alertmanager configuration:
- You can use alerts.alertmanager.config to define an Alertmanager configuration. Example below.
config:
  global:
    resolve_timeout: 5m
  inhibit_rules:
    - source_matchers:
        - "severity = critical"
      target_matchers:
        - "severity =~ warning|info"
      equal:
        - "namespace"
        - "alertname"
    - source_matchers:
        - "severity = warning"
      target_matchers:
        - "severity = info"
      equal:
        - "namespace"
        - "alertname"
    - source_matchers:
        - "alertname = InfoInhibitor"
      target_matchers:
        - "severity = info"
      equal:
        - "namespace"
        - "alertname"
  route:
    group_by: ["namespace"]
    group_wait: 30s
    group_interval: 5m
    repeat_interval: 12h
    receiver: "null"
    routes:
      - receiver: "null"
        matchers:
          - alertname =~ "InfoInhibitor|Watchdog"
  receivers:
    - name: "null"
  templates:
    - "/etc/alertmanager/config/*.tmpl"
- You can discover AlertmanagerConfig objects. The spec.alertmanagerConfigSelector is always set to matchLabels: plugin: <name> to tell the operator which AlertmanagerConfig objects should be selected and merged with the main Alertmanager configuration. Note: The default strategy for an AlertmanagerConfig object to match alerts is OnNamespace.
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: config-example
  labels:
    alertmanagerConfig: example
    pluginDefinition: alerts-example
spec:
  route:
    groupBy: ["job"]
    groupWait: 30s
    groupInterval: 5m
    repeatInterval: 12h
    receiver: "webhook"
  receivers:
    - name: "webhook"
      webhookConfigs:
        - url: "http://example.com/"
- You can use alerts.alertmanager.alertmanagerSpec.alertmanagerConfiguration to reference an AlertmanagerConfig object in the same namespace which defines the main Alertmanager configuration.
# Example: select a global AlertmanagerConfig
alertmanagerConfiguration:
  name: global-alertmanager-configuration
Examples
Deploy alerts with Alertmanager
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: alerts
spec:
  pluginDefinition: alerts
  disabled: false
  displayName: Alerts
  optionValues:
    - name: alerts.alertmanager.enabled
      value: true
    - name: alerts.alertmanager.ingress.enabled
      value: true
    - name: alerts.alertmanager.ingress.hosts
      value:
        - alertmanager.dns.example.com
    - name: alerts.alertmanager.ingress.tls
      value:
        - hosts:
            - alertmanager.dns.example.com
          secretName: tls-alertmanager-dns-example-com
    - name: alerts.alertmanagerConfig.slack.routes
      value:
        - channel: slack-warning-channel
          webhookURL: https://hooks.slack.com/services/some-id
          matchers:
            - name: severity
              matchType: "="
              value: "warning"
        - channel: slack-critical-channel
          webhookURL: https://hooks.slack.com/services/some-id
          matchers:
            - name: severity
              matchType: "="
              value: "critical"
    - name: alerts.alertmanagerConfig.webhook.routes
      value:
        - name: webhook-route
          url: https://some-webhook-url
          matchers:
            - name: alertname
              matchType: "=~"
              value: ".*"
    - name: alerts.alertmanager.serviceMonitor.additionalLabels
      value:
        plugin: kube-monitoring
    - name: alerts.defaultRules.create
      value: true
    - name: alerts.defaultRules.labels
      value:
        plugin: kube-monitoring
    - name: endpoint
      value: https://alertmanager.dns.example.com/api/v2
    - name: filterLabels
      value:
        - job
        - severity
        - status
    - name: silenceExcludedLabels
      value:
        - pod
        - pod_name
        - instance
Deploy alerts without Alertmanager (Bring your own Alertmanager - Supernova UI only)
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: alerts
spec:
  pluginDefinition: alerts
  disabled: false
  displayName: Alerts
  optionValues:
    - name: alerts.alertmanager.enabled
      value: false
    - name: alerts.alertmanager.ingress.enabled
      value: false
    - name: alerts.defaultRules.create
      value: false
    - name: endpoint
      value: https://alertmanager.dns.example.com/api/v2
    - name: filterLabels
      value:
        - job
        - severity
        - status
    - name: silenceExcludedLabels
      value:
        - pod
        - pod_name
        - instance
2 - Cert-manager
This Plugin provides the cert-manager to automate the management of TLS certificates.
Configuration
This section highlights configuration of selected Plugin features.
All available configuration options are described in the plugin.yaml.
Ingress shim
An Ingress resource in Kubernetes configures external access to services in a Kubernetes cluster.
Securing Ingress resources with TLS certificates is a common use-case, and cert-manager can be configured to handle these via the ingress-shim component.
It can be enabled by deploying an issuer in your organization and setting the following options on this plugin.
Option | Type | Description |
---|---|---|
cert-manager.ingressShim.defaultIssuerName | string | Name of the cert-manager issuer to use for TLS certificates |
cert-manager.ingressShim.defaultIssuerKind | string | Kind of the cert-manager issuer to use for TLS certificates |
cert-manager.ingressShim.defaultIssuerGroup | string | Group of the cert-manager issuer to use for TLS certificates |
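As an illustration, a hedged Plugin sketch (the pluginDefinition name and the issuer name are assumptions; adjust them to your organization) that sets these ingress-shim defaults could look like:
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: cert-manager
spec:
  pluginDefinition: cert-manager
  disabled: false
  optionValues:
    # Placeholder: name of an issuer already deployed in your organization
    - name: cert-manager.ingressShim.defaultIssuerName
      value: my-cluster-issuer
    - name: cert-manager.ingressShim.defaultIssuerKind
      value: ClusterIssuer
    - name: cert-manager.ingressShim.defaultIssuerGroup
      value: cert-manager.io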
3 - Decentralized Observer of Policies (Violations)
This directory contains the Greenhouse plugin for the Decentralized Observer of Policies (DOOP).
DOOP
To perform automatic validations on Kubernetes objects, we run a deployment of OPA Gatekeeper in each cluster. This dashboard aggregates all policy violations reported by those Gatekeeper instances.
4 - Designate Ingress CNAME operator (DISCO)
This Plugin provides the Designate Ingress CNAME operator (DISCO) to automate management of DNS entries in OpenStack Designate for Ingress and Services in Kubernetes.
5 - DigiCert issuer
This Plugin provides the digicert-issuer, an external Issuer extending the cert-manager with the DigiCert cert-central API.
6 - External DNS
This Plugin provides the External DNS operator, which synchronizes exposed Kubernetes Services and Ingresses with DNS providers.
7 - Github Guard
The Github Guard Greenhouse Plugin manages Github teams, team memberships, and repository & team assignments.
Hierarchy of Custom Resources
Custom Resources
Github – an installation of a Github App
apiVersion: githubguard.sap/v1
kind: Github
metadata:
  name: com
spec:
  webURL: https://github.com
  v3APIURL: https://api.github.com
  integrationID: 420328
  clientUserAgent: greenhouse-github-guard
  secret: github-com-secret
GithubOrganization with Feature & Action Flags
apiVersion: githubguard.sap/v1
kind: GithubOrganization
metadata:
  name: com--greenhouse-sandbox
  labels:
    githubguard.sap/addTeam: "true"
    githubguard.sap/removeTeam: "true"
    githubguard.sap/addOrganizationOwner: "true"
    githubguard.sap/removeOrganizationOwner: "true"
    githubguard.sap/addRepositoryTeam: "true"
    githubguard.sap/removeRepositoryTeam: "true"
    githubguard.sap/dryRun: "false"
Default team & repository assignments:
GithubTeamRepository
for exception team & repository assignments
GithubUsername
for external username matching
apiVersion: githubguard.sap/v1
kind: GithubUsername
metadata:
  annotations:
    last-check-timestamp: 1681614602
  name: com-I313226
spec:
  userID: greenhouse_onuryilmaz
  githubUsername: onuryilmaz
  github: com
8 - Ingress NGINX
This plugin contains the ingress NGINX controller.
Example
To instantiate the plugin, create a Plugin resource like:
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: ingress-nginx
spec:
  pluginDefinition: ingress-nginx-v4.4.0
  optionValues:
    - name: controller.service.loadBalancerIP
      value: 1.2.3.4
9 - Kubernetes Monitoring
Learn more about the kube-monitoring plugin. Use it to activate Kubernetes monitoring for your Greenhouse cluster.
The main terminologies used in this document can be found in core-concepts.
Overview
Observability is often required for operation and automation of service offerings. To get the insights provided by an application and the container runtime environment, you need telemetry data in the form of metrics or logs sent to backends such as Prometheus or OpenSearch. With the kube-monitoring Plugin, you will be able to cover the metrics part of the observability stack.
This Plugin includes a pre-configured package of components that help make getting started easy and efficient. At its core, an automated and managed Prometheus installation is provided using the prometheus-operator. This is complemented by Prometheus target configuration for the most common Kubernetes components providing metrics by default. In addition, cloud-operator-curated Prometheus alerting rules and Plutono dashboards are included to provide a comprehensive monitoring solution out of the box.
Components included in this Plugin:
- Prometheus
- Prometheus Operator
- Prometheus target configuration for Kubernetes metrics APIs (e.g. kubelet, apiserver, coredns, etcd)
- Prometheus node exporter
- kube-state-metrics
- kubernetes-operations
Disclaimer
It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the plugin according to your needs.
The Plugin is a deeply configured kube-prometheus-stack Helm chart which helps to keep track of versions and community updates.
It is intended as a platform that can be extended by following the guide.
Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.
Quick start
This guide provides a quick and straightforward way to use kube-monitoring as a Greenhouse Plugin on your Kubernetes cluster.
Prerequisites
- A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
Step 1:
You can install the kube-monitoring package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:
- Go to Greenhouse dashboard and select the Kubernetes Monitoring plugin from the catalog. Specify the cluster and required option values.
- Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.
Step 2:
After installation, Greenhouse will provide a generated link to the Prometheus user interface. This is done via the annotation greenhouse.sap/expose: "true" on the Prometheus Service resource.
Step 3:
Greenhouse regularly performs integration tests that are bundled with kube-monitoring. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.
Configuration
Global options
Name | Description | Value |
---|---|---|
global.commonLabels | Labels to add to all resources. This can be used to add a support_group or service label to all resources and alerting rules. | true |
Prometheus-operator options
Name | Description | Value |
---|---|---|
kubeMonitoring.prometheusOperator.enabled | Manages Prometheus and Alertmanager components | true |
kubeMonitoring.prometheusOperator.alertmanagerInstanceNamespaces | Filter namespaces to look for prometheus-operator Alertmanager resources | [] |
kubeMonitoring.prometheusOperator.alertmanagerConfigNamespaces | Filter namespaces to look for prometheus-operator AlertmanagerConfig resources | [] |
kubeMonitoring.prometheusOperator.prometheusInstanceNamespaces | Filter namespaces to look for prometheus-operator Prometheus resources | [] |
Kubernetes component scraper options
Name | Description | Value |
---|---|---|
kubeMonitoring.kubernetesServiceMonitors.enabled | Flag to disable all the kubernetes component scrapers | true |
kubeMonitoring.kubeApiServer.enabled | Component scraping the kube api server | true |
kubeMonitoring.kubelet.enabled | Component scraping the kubelet and kubelet-hosted cAdvisor | true |
kubeMonitoring.coreDns.enabled | Component scraping coreDns. Use either this or kubeDns | true |
kubeMonitoring.kubeEtcd.enabled | Component scraping etcd | true |
kubeMonitoring.kubeStateMetrics.enabled | Component scraping kube state metrics | true |
kubeMonitoring.nodeExporter.enabled | Deploy node exporter as a daemonset to all nodes | true |
kubeMonitoring.kubeControllerManager.enabled | Component scraping the kube controller manager | false |
kubeMonitoring.kubeScheduler.enabled | Component scraping kube scheduler | false |
kubeMonitoring.kubeProxy.enabled | Component scraping kube proxy | false |
kubeMonitoring.kubeDns.enabled | Component scraping kubeDns. Use either this or coreDns | false |
Prometheus options
Name | Description | Value |
---|---|---|
kubeMonitoring.prometheus.enabled | Deploy a Prometheus instance | true |
kubeMonitoring.prometheus.annotations | Annotations for Prometheus | {} |
kubeMonitoring.prometheus.tlsConfig.caCert | CA certificate to verify technical clients at Prometheus Ingress | Secret |
kubeMonitoring.prometheus.ingress.enabled | Deploy Prometheus Ingress | true |
kubeMonitoring.prometheus.ingress.hosts | Must be provided if Ingress is enabled. | [] |
kubeMonitoring.prometheus.ingress.ingressClassname | Specifies the ingress-controller | nginx |
kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage | How large the persistent volume should be to house the prometheus database. Default 50Gi. | "" |
kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName | The storage class to use for the persistent volume. | "" |
kubeMonitoring.prometheus.prometheusSpec.scrapeInterval | Interval between consecutive scrapes. Defaults to 30s | "" |
kubeMonitoring.prometheus.prometheusSpec.scrapeTimeout | Number of seconds to wait for target to respond before erroring | "" |
kubeMonitoring.prometheus.prometheusSpec.evaluationInterval | Interval between consecutive evaluations | "" |
kubeMonitoring.prometheus.prometheusSpec.externalLabels | External labels to add to any time series or alerts when communicating with external systems like Alertmanager | {} |
kubeMonitoring.prometheus.prometheusSpec.ruleSelector | PrometheusRules to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {} |
kubeMonitoring.prometheus.prometheusSpec.serviceMonitorSelector | ServiceMonitors to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {} |
kubeMonitoring.prometheus.prometheusSpec.podMonitorSelector | PodMonitors to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {} |
kubeMonitoring.prometheus.prometheusSpec.probeSelector | Probes to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {} |
kubeMonitoring.prometheus.prometheusSpec.scrapeConfigSelector | scrapeConfigs to be selected for target discovery. Defaults to { matchLabels: { plugin: <metadata.name> } } | {} |
kubeMonitoring.prometheus.prometheusSpec.retention | How long to retain metrics | "" |
kubeMonitoring.prometheus.prometheusSpec.logLevel | Log level to be configured for Prometheus | "" |
kubeMonitoring.prometheus.prometheusSpec.additionalScrapeConfigs | Next to ScrapeConfig CRD, you can use AdditionalScrapeConfigs, which allows specifying additional Prometheus scrape configurations | "" |
kubeMonitoring.prometheus.prometheusSpec.additionalArgs | Allows setting additional arguments for the Prometheus container | [] |
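For example, a hedged sketch of passing an additional scrape configuration as a multi-line string via optionValues (the job name and target are placeholders; verify the expected format against the kube-prometheus-stack documentation):
- name: kubeMonitoring.prometheus.prometheusSpec.additionalScrapeConfigs
  value: |
    # Hypothetical static scrape job added next to the operator-managed configuration
    - job_name: example-external-target
      static_configs:
        - targets: ["example-host.example.com:9100"]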
Alertmanager options
Name | Description | Value |
---|---|---|
alerts.enabled | To send alerts to Alertmanager | false |
alerts.alertmanager.hosts | List of Alertmanager hosts Prometheus can send alerts to | [] |
alerts.alertmanager.tlsConfig.cert | TLS certificate for communication with Alertmanager | Secret |
alerts.alertmanager.tlsConfig.key | TLS key for communication with Alertmanager | Secret |
Examples
Deploy kube-monitoring into a remote cluster
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: kube-monitoring
spec:
  pluginDefinition: kube-monitoring
  disabled: false
  optionValues:
    - name: kubeMonitoring.prometheus.prometheusSpec.retention
      value: 30d
    - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
      value: 100Gi
    - name: kubeMonitoring.prometheus.service.labels
      value:
        greenhouse.sap/expose: "true"
    - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
      value:
        cluster: example-cluster
        organization: example-org
        region: example-region
    - name: alerts.enabled
      value: true
    - name: alerts.alertmanagers.hosts
      value:
        - alertmanager.dns.example.com
    - name: alerts.alertmanagers.tlsConfig.cert
      valueFrom:
        secret:
          key: tls.crt
          name: tls-<org-name>-prometheus-auth
    - name: alerts.alertmanagers.tlsConfig.key
      valueFrom:
        secret:
          key: tls.key
          name: tls-<org-name>-prometheus-auth
Deploy Prometheus only
Example Plugin to deploy Prometheus with the kube-monitoring Plugin.
NOTE: If you are using kube-monitoring for the first time in your cluster, it is necessary to set kubeMonitoring.prometheusOperator.enabled to true.
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: example-prometheus-name
spec:
  pluginDefinition: kube-monitoring
  disabled: false
  optionValues:
    - name: kubeMonitoring.defaultRules.create
      value: false
    - name: kubeMonitoring.kubernetesServiceMonitors.enabled
      value: false
    - name: kubeMonitoring.prometheusOperator.enabled
      value: false
    - name: kubeMonitoring.kubeStateMetrics.enabled
      value: false
    - name: kubeMonitoring.nodeExporter.enabled
      value: false
    - name: kubeMonitoring.prometheus.prometheusSpec.retention
      value: 30d
    - name: kubeMonitoring.prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage
      value: 100Gi
    - name: kubeMonitoring.prometheus.service.labels
      value:
        greenhouse.sap/expose: "true"
    - name: kubeMonitoring.prometheus.prometheusSpec.externalLabels
      value:
        cluster: example-cluster
        organization: example-org
        region: example-region
    - name: alerts.enabled
      value: true
    - name: alerts.alertmanagers.hosts
      value:
        - alertmanager.dns.example.com
    - name: alerts.alertmanagers.tlsConfig.cert
      valueFrom:
        secret:
          key: tls.crt
          name: tls-<org-name>-prometheus-auth
    - name: alerts.alertmanagers.tlsConfig.key
      valueFrom:
        secret:
          key: tls.key
          name: tls-<org-name>-prometheus-auth
Extension of the plugin
kube-monitoring can be extended with your own Prometheus alerting rules and target configurations via the Custom Resource Definitions (CRDs) of the Prometheus operator. The user-defined resources to be incorporated with the desired configuration are defined via label selections.
The CRD PrometheusRule enables the definition of alerting and recording rules that can be used by Prometheus or Thanos Rule instances. Alerts and recording rules are reconciled and dynamically loaded by the operator without having to restart Prometheus or Thanos Rule.
kube-monitoring Prometheus will automatically discover and load the rules that match the label plugin: <plugin-name>.
Example:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-prometheus-rule
  labels:
    plugin: <metadata.name>
    ## e.g plugin: kube-monitoring
spec:
  groups:
    - name: example-group
      rules:
      ...
The CRDs PodMonitor, ServiceMonitor, Probe and ScrapeConfig allow the definition of a set of target endpoints to be scraped by Prometheus. The operator will automatically discover and load the configurations that match the label plugin: <plugin-name>.
Example:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-pod-monitor
  labels:
    plugin: <metadata.name>
    ## e.g plugin: kube-monitoring
spec:
  selector:
    matchLabels:
      app: example-app
  namespaceSelector:
    matchNames:
      - example-namespace
  podMetricsEndpoints:
    - port: http
  ...
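A ServiceMonitor follows the same pattern; the sketch below mirrors the PodMonitor example above (the selector and namespace are placeholders):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-service-monitor
  labels:
    plugin: <metadata.name>
    ## e.g plugin: kube-monitoring
spec:
  selector:
    matchLabels:
      app: example-app
  namespaceSelector:
    matchNames:
      - example-namespace
  endpoints:
    - port: http
  ...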
10 - Logshipper
This Plugin is intended for shipping container and systemd logs to an Elasticsearch/OpenSearch cluster. It uses Fluent Bit to collect logs. The default configuration can be found under chart/templates/fluent-bit-configmap.yaml.
Components included in this Plugin:
Owner
- @ivogoman
Parameters
Name | Description | Value |
---|---|---|
fluent-bit.parser | Parser used for container logs [docker|cri] | "cri" |
fluent-bit.backend.opensearch.host | Host for the Elastic/OpenSearch HTTP Input | |
fluent-bit.backend.opensearch.port | Port for the Elastic/OpenSearch HTTP Input | |
fluent-bit.backend.opensearch.http_user | Username for the Elastic/OpenSearch HTTP Input | |
fluent-bit.backend.opensearch.http_password | Password for the Elastic/OpenSearch HTTP Input | |
fluent-bit.filter.additionalValues | List of key-value pairs to label logs | [] |
fluent-bit.customConfig.inputs | multi-line string containing additional inputs | |
fluent-bit.customConfig.filters | multi-line string containing additional filters | |
fluent-bit.customConfig.outputs | multi-line string containing additional outputs |
Custom Configuration
To add custom configuration to the fluent-bit configuration please check the fluentbit documentation here.
The fluent-bit.customConfig.inputs, fluent-bit.customConfig.filters and fluent-bit.customConfig.outputs parameters can be used to add custom configuration to the default configuration. The configuration should be added as a multi-line string.
Inputs are rendered after the default inputs, filters are rendered after the default filters and before the additional values are added. Outputs are rendered after the default outputs.
The additional values are added to all logs regardless of the source.
Example Input configuration:
fluent-bit:
  config:
    inputs: |
      [INPUT]
          Name             tail-audit
          Path             /var/log/containers/greenhouse-controller*.log
          Parser           {{ default "cri" ( index .Values "fluent-bit" "parser" ) }}
          Tag              audit.*
          Refresh_Interval 5
          Mem_Buf_Limit    50MB
          Skip_Long_Lines  Off
          Ignore_Older     1m
          DB               /var/log/fluent-bit-tail-audit.pos.db
Logs collected by the default configuration are prefixed with default_. In case logs from additional inputs are to be sent and processed by the same filters and outputs, the prefix should be used as well.
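Conversely, a custom output can pick up only the additionally tagged logs. The following is an illustrative sketch only: the OpenSearch host and port are placeholders, the credential environment variable names are hypothetical, and the values path follows the customConfig.outputs parameter from the table above.
fluent-bit:
  customConfig:
    outputs: |
      [OUTPUT]
          Name        opensearch
          # Matches only the audit.* tag from the input example above
          Match       audit.*
          Host        audit.test
          Port        9200
          HTTP_User   ${HTTP_USER_AUDIT}
          HTTP_Passwd ${HTTP_PASSWORD_AUDIT}
          tls         On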
In case additional secrets are required, the fluent-bit.env field can be used to add them to the environment of the fluent-bit container. The secrets should be created by adding them to the fluent-bit.backend field.
fluent-bit:
  backend:
    audit:
      http_user: top-secret-audit
      http_password: top-secret-audit
      host: "audit.test"
      tls:
        enabled: true
        verify: true
        debug: false
11 - OpenTelemetry
Learn more about the OpenTelemetry Plugin. Use it to enable the ingestion, collection and export of telemetry signals (logs and metrics) for your Greenhouse cluster.
The main terminologies used in this document can be found in core-concepts.
Overview
OpenTelemetry is an observability framework and toolkit for creating and managing telemetry data such as metrics, logs and traces. Unlike other observability tools, OpenTelemetry is vendor and tool agnostic, meaning it can be used with a variety of observability backends, including open source tools such as OpenSearch and Prometheus.
The focus of the plugin is to provide easy-to-use configurations for common use cases of receiving, processing and exporting telemetry data in Kubernetes. The storage and visualization of this data is intentionally left to other tools.
Components included in this Plugin:
Architecture
TBD: Architecture picture
Note
It is the intention to add more configurations over time, and contributions of your own configurations are highly appreciated. If you discover bugs or want to add functionality to the plugin, feel free to create a pull request.
Quick Start
This guide provides a quick and straightforward way to use OpenTelemetry as a Greenhouse Plugin on your Kubernetes cluster.
Prerequisites
- A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
- For logs, an OpenSearch instance to store them. If you don't have one, reach out to your observability team to get access to one.
- For metrics, a Prometheus instance to store them. If you don't have one, install a kube-monitoring Plugin first.
Step 1:
You can install the OpenTelemetry package in your cluster with Helm manually or let the Greenhouse platform lifecycle it for you automatically. For the latter, you can either:
- Go to Greenhouse dashboard and select the OpenTelemetry plugin from the catalog. Specify the cluster and required option values.
- Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.
Step 2:
The package will deploy the OpenTelemetry Operator which works as a manager for the collectors and auto-instrumentation of the workload. By default, the package will include a configuration for collecting metrics and logs. The log-collector is currently processing data from the preconfigured receivers:
- Files via the Filelog Receiver
- Kubernetes Events from the Kubernetes API server
- Journald events from systemd journal
- its own metrics
You can disable the collection of logs by setting openTelemetry.logsCollector.enabled to false. Likewise, metrics collection can be disabled by setting openTelemetry.metricsCollector.enabled to false, as shown in the sketch below.
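For example, a minimal optionValues sketch (using the option names from the configuration table below) that disables log collection while keeping metrics collection enabled:
optionValues:
  - name: openTelemetry.logsCollector.enabled
    value: false
  - name: openTelemetry.metricsCollector.enabled
    value: true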
Based on the backend selection, the telemetry data will be exported to the corresponding backend.
Step 3:
Greenhouse regularly performs integration tests that are bundled with OpenTelemetry. These provide feedback on whether all the necessary resources are installed and continuously up and running. You will find messages about this in the plugin status and also in the Greenhouse dashboard.
Configuration
Name | Description | Type | required |
---|---|---|---|
openTelemetry.logsCollector.enabled | Activates the standard configuration for logs | bool | false |
openTelemetry.metricsCollector.enabled | Activates the standard configuration for metrics | bool | false |
openTelemetry.openSearchLogs.username | Username for OpenSearch endpoint | secret | false |
openTelemetry.openSearchLogs.password | Password for OpenSearch endpoint | secret | false |
openTelemetry.openSearchLogs.endpoint | Endpoint URL for OpenSearch | secret | false |
openTelemetry.region | Region label for logging | string | false |
openTelemetry.cluster | Cluster label for logging | string | false |
openTelemetry.prometheus.additionalLabels | Label selector for Prometheus resources to be picked-up by the operator | map | false |
prometheusRules.additionalRuleLabels | Additional labels for PrometheusRule alerts | map | false |
openTelemetry.prometheus.serviceMonitor.enabled | Activates the service-monitoring for the Logs Collector | bool | false |
openTelemetry.prometheus.podMonitor.enabled | Activates the pod-monitoring for the Logs Collector | bool | false |
opentelemetry-operator.admissionWebhooks.certManager.enabled | Activate to use the CertManager for generating self-signed certificates | bool | true |
opentelemetry-operator.admissionWebhooks.autoGenerateCert.enabled | Activate to use Helm to create self-signed certificates | bool | false |
opentelemetry-operator.admissionWebhooks.autoGenerateCert.recreate | Activate to recreate the cert after a defined period (certPeriodDays default is 365) | bool | false |
opentelemetry-operator.kubeRBACProxy.enabled | Activate to enable Kube-RBAC-Proxy for OpenTelemetry | bool | false |
opentelemetry-operator.manager.prometheusRule.defaultRules.enabled | Activate to enable default rules for monitoring the OpenTelemetry Manager | bool | false |
opentelemetry-operator.manager.prometheusRule.enabled | Activate to enable rules for monitoring the OpenTelemetry Manager | bool | false |
Examples
TBD
12 - Plutono
Learn more about the plutono Plugin. Use it to install the web dashboarding system Plutono to collect, correlate, and visualize Prometheus metrics for your Greenhouse cluster.
The main terminologies used in this document can be found in core-concepts.
Overview
Observability is often required for the operation and automation of service offerings. Plutono provides you with tools to display Prometheus metrics on live dashboards with insightful charts and visualizations. In the Greenhouse context, this complements the kube-monitoring plugin, which is automatically recognized by Plutono as a data source. In addition, the Plugin provides a mechanism that automates the lifecycle of datasources and dashboards without having to restart Plutono.
Disclaimer
This is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.
Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.
Quick Start
This guide provides a quick and straightforward way to use Plutono as a Greenhouse Plugin on your Kubernetes cluster.
Prerequisites
- A running and Greenhouse-managed Kubernetes cluster
- kube-monitoring Plugin installed to have at least one Prometheus instance running in the cluster
The plugin works by default with anonymous access enabled. If you use the standard configuration in the kube-monitoring plugin, the data source and some kubernetes-operations dashboards are already pre-installed.
Step 1: Add your dashboards
Dashboards are selected from ConfigMaps across namespaces. The plugin searches for ConfigMaps with the label plutono-dashboard: "true" and imports them into Plutono. The ConfigMap must contain a key like my-dashboard.json with the dashboard JSON content. Example
A guide on how to create dashboards can be found here.
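A minimal sketch of such a ConfigMap (the name and the dashboard JSON content are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard
  labels:
    plutono-dashboard: "true"
data:
  my-dashboard.json: |
    {
      "title": "My Dashboard",
      "panels": []
    }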
Step 2: Add your datasources
Data sources are selected from Secrets across namespaces. The plugin searches for Secrets with the label plutono-datasource: "true" and imports them into Plutono. The Secrets should contain valid datasource configuration YAML. Example
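A minimal sketch of such a Secret (the datasource name and URL are placeholders; the label follows the convention described above):
apiVersion: v1
kind: Secret
metadata:
  name: my-datasource
  labels:
    plutono-datasource: "true"
stringData:
  my-datasource.yaml: |
    apiVersion: 1
    datasources:
      - name: example-prometheus
        type: prometheus
        url: http://example-prometheus:9090
        access: proxy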
Configuration
Parameter | Description | Default |
---|---|---|
plutono.replicas | Number of nodes | 1 |
plutono.deploymentStrategy | Deployment strategy | { "type": "RollingUpdate" } |
plutono.livenessProbe | Liveness Probe settings | { "httpGet": { "path": "/api/health", "port": 3000 } "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 } |
plutono.readinessProbe | Readiness Probe settings | { "httpGet": { "path": "/api/health", "port": 3000 } } |
plutono.securityContext | Deployment securityContext | {"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472} |
plutono.priorityClassName | Name of Priority Class to assign pods | nil |
plutono.image.registry | Image registry | ghcr.io |
plutono.image.repository | Image repository | credativ/plutono |
plutono.image.tag | Overrides the Plutono image tag whose default is the chart appVersion (Must be >= 5.0.0 ) | `` |
plutono.image.sha | Image sha (optional) | `` |
plutono.image.pullPolicy | Image pull policy | IfNotPresent |
plutono.image.pullSecrets | Image pull secrets (can be templated) | [] |
plutono.service.enabled | Enable plutono service | true |
plutono.service.ipFamilies | Kubernetes service IP families | [] |
plutono.service.ipFamilyPolicy | Kubernetes service IP family policy | "" |
plutono.service.type | Kubernetes service type | ClusterIP |
plutono.service.port | Kubernetes port where service is exposed | 80 |
plutono.service.portName | Name of the port on the service | service |
plutono.service.appProtocol | Adds the appProtocol field to the service | `` |
plutono.service.targetPort | Internal port the service targets | 3000 |
plutono.service.nodePort | Kubernetes service nodePort | nil |
plutono.service.annotations | Service annotations (can be templated) | {} |
plutono.service.labels | Custom labels | {} |
plutono.service.clusterIP | internal cluster service IP | nil |
plutono.service.loadBalancerIP | IP address to assign to load balancer (if supported) | nil |
plutono.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to lb (if supported) | [] |
plutono.service.externalIPs | service external IP addresses | [] |
plutono.service.externalTrafficPolicy | change the default externalTrafficPolicy | nil |
plutono.headlessService | Create a headless service | false |
plutono.extraExposePorts | Additional service ports for sidecar containers | [] |
plutono.hostAliases | adds rules to the pod’s /etc/hosts | [] |
plutono.ingress.enabled | Enables Ingress | false |
plutono.ingress.annotations | Ingress annotations (values are templated) | {} |
plutono.ingress.labels | Custom labels | {} |
plutono.ingress.path | Ingress accepted path | / |
plutono.ingress.pathType | Ingress type of path | Prefix |
plutono.ingress.hosts | Ingress accepted hostnames | ["chart-example.local"] |
plutono.ingress.extraPaths | Ingress extra paths to prepend to every host configuration. Useful when configuring custom actions with AWS ALB Ingress Controller. Requires ingress.hosts to have one or more host entries. | [] |
plutono.ingress.tls | Ingress TLS configuration | [] |
plutono.ingress.ingressClassName | Ingress Class Name. MAY be required for Kubernetes versions >= 1.18 | "" |
plutono.resources | CPU/Memory resource requests/limits | {} |
plutono.nodeSelector | Node labels for pod assignment | {} |
plutono.tolerations | Toleration labels for pod assignment | [] |
plutono.affinity | Affinity settings for pod assignment | {} |
plutono.extraInitContainers | Init containers to add to the plutono pod | {} |
plutono.extraContainers | Sidecar containers to add to the plutono pod | "" |
plutono.extraContainerVolumes | Volumes that can be mounted in sidecar containers | [] |
plutono.extraLabels | Custom labels for all manifests | {} |
plutono.schedulerName | Name of the k8s scheduler (other than default) | nil |
plutono.persistence.enabled | Use persistent volume to store data | false |
plutono.persistence.type | Type of persistence (pvc or statefulset ) | pvc |
plutono.persistence.size | Size of persistent volume claim | 10Gi |
plutono.persistence.existingClaim | Use an existing PVC to persist data (can be templated) | nil |
plutono.persistence.storageClassName | Type of persistent volume claim | nil |
plutono.persistence.accessModes | Persistence access modes | [ReadWriteOnce] |
plutono.persistence.annotations | PersistentVolumeClaim annotations | {} |
plutono.persistence.finalizers | PersistentVolumeClaim finalizers | [ "kubernetes.io/pvc-protection" ] |
plutono.persistence.extraPvcLabels | Extra labels to apply to a PVC. | {} |
plutono.persistence.subPath | Mount a sub dir of the persistent volume (can be templated) | nil |
plutono.persistence.inMemory.enabled | If persistence is not enabled, whether to mount the local storage in-memory to improve performance | false |
plutono.persistence.inMemory.sizeLimit | SizeLimit for the in-memory local storage | nil |
plutono.persistence.disableWarning | Hide NOTES warning, useful when persisting to a database | false |
plutono.initChownData.enabled | If false, don’t reset data ownership at startup | true |
plutono.initChownData.image.registry | init-chown-data container image registry | docker.io |
plutono.initChownData.image.repository | init-chown-data container image repository | busybox |
plutono.initChownData.image.tag | init-chown-data container image tag | 1.31.1 |
plutono.initChownData.image.sha | init-chown-data container image sha (optional) | "" |
plutono.initChownData.image.pullPolicy | init-chown-data container image pull policy | IfNotPresent |
plutono.initChownData.resources | init-chown-data pod resource requests & limits | {} |
plutono.schedulerName | Alternate scheduler name | nil |
plutono.env | Extra environment variables passed to pods | {} |
plutono.envValueFrom | Environment variables from alternate sources. See the API docs on EnvVarSource for format details. Can be templated | {} |
plutono.envFromSecret | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | "" |
plutono.envFromSecrets | List of Kubernetes secrets (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | [] |
plutono.envFromConfigMaps | List of Kubernetes ConfigMaps (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | [] |
plutono.envRenderSecret | Sensitive environment variables passed to pods and stored as secret. (passed through tpl) | {} |
plutono.enableServiceLinks | Inject Kubernetes services as environment variables. | true |
plutono.extraSecretMounts | Additional plutono server secret mounts | [] |
plutono.extraVolumeMounts | Additional plutono server volume mounts | [] |
plutono.extraVolumes | Additional Plutono server volumes | [] |
plutono.automountServiceAccountToken | Mount the service account token on the plutono pod. Mandatory if sidecars are enabled | true |
plutono.createConfigmap | Enable creating the plutono configmap | true |
plutono.extraConfigmapMounts | Additional plutono server configMap volume mounts (values are templated) | [] |
plutono.extraEmptyDirMounts | Additional plutono server emptyDir volume mounts | [] |
plutono.plugins | Plugins to be loaded along with Plutono | [] |
plutono.datasources | Configure plutono datasources (passed through tpl) | {} |
plutono.alerting | Configure plutono alerting (passed through tpl) | {} |
plutono.notifiers | Configure plutono notifiers | {} |
plutono.dashboardProviders | Configure plutono dashboard providers | {} |
plutono.dashboards | Dashboards to import | {} |
plutono.dashboardsConfigMaps | ConfigMaps reference that contains dashboards | {} |
plutono.plutono.ini | Plutono’s primary configuration | {} |
global.imageRegistry | Global image pull registry for all images. | null |
global.imagePullSecrets | Global image pull secrets (can be templated). Allows either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). | [] |
plutono.ldap.enabled | Enable LDAP authentication | false |
plutono.ldap.existingSecret | The name of an existing secret containing the ldap.toml file, this must have the key ldap-toml . | "" |
plutono.ldap.config | Plutono’s LDAP configuration | "" |
plutono.annotations | Deployment annotations | {} |
plutono.labels | Deployment labels | {} |
plutono.podAnnotations | Pod annotations | {} |
plutono.podLabels | Pod labels | {} |
plutono.podPortName | Name of the plutono port on the pod | plutono |
plutono.lifecycleHooks | Lifecycle hooks for podStart and preStop Example | {} |
plutono.sidecar.image.registry | Sidecar image registry | quay.io |
plutono.sidecar.image.repository | Sidecar image repository | kiwigrid/k8s-sidecar |
plutono.sidecar.image.tag | Sidecar image tag | 1.26.0 |
plutono.sidecar.image.sha | Sidecar image sha (optional) | "" |
plutono.sidecar.imagePullPolicy | Sidecar image pull policy | IfNotPresent |
plutono.sidecar.resources | Sidecar resources | {} |
plutono.sidecar.securityContext | Sidecar securityContext | {} |
plutono.sidecar.enableUniqueFilenames | Sets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable. If set to true the sidecar will create unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces. | false |
plutono.sidecar.alerts.enabled | Enables the cluster wide search for alerts and adds/updates/deletes them in plutono | false |
plutono.sidecar.alerts.label | Label that config maps with alerts should have to be added | plutono_alert |
plutono.sidecar.alerts.labelValue | Label value that config maps with alerts should have to be added | "" |
plutono.sidecar.alerts.searchNamespace | Namespaces list. If specified, the sidecar will search for alerts config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces. | nil |
plutono.sidecar.alerts.watchMethod | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | WATCH |
plutono.sidecar.alerts.resource | Whether the sidecar should look into secrets, configmaps or both. | both |
plutono.sidecar.alerts.reloadURL | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | "http://localhost:3000/api/admin/provisioning/alerting/reload" |
plutono.sidecar.alerts.skipReload | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | false |
plutono.sidecar.alerts.initAlerts | Set to true to deploy the alerts sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time. | false |
plutono.sidecar.alerts.extraMounts | Additional alerts sidecar volume mounts. | [] |
plutono.sidecar.dashboards.enabled | Enables the cluster wide search for dashboards and adds/updates/deletes them in plutono | false |
plutono.sidecar.dashboards.SCProvider | Enables creation of sidecar provider | true |
plutono.sidecar.dashboards.provider.name | Unique name of the plutono provider | sidecarProvider |
plutono.sidecar.dashboards.provider.orgid | Id of the organisation, to which the dashboards should be added | 1 |
plutono.sidecar.dashboards.provider.folder | Logical folder in which plutono groups dashboards | "" |
plutono.sidecar.dashboards.provider.folderUid | Allows you to specify the static UID for the logical folder above | "" |
plutono.sidecar.dashboards.provider.disableDelete | Activate to avoid the deletion of imported dashboards | false |
plutono.sidecar.dashboards.provider.allowUiUpdates | Allow updating provisioned dashboards from the UI | false |
plutono.sidecar.dashboards.provider.type | Provider type | file |
plutono.sidecar.dashboards.provider.foldersFromFilesStructure | Allow Plutono to replicate dashboard structure from filesystem. | false |
plutono.sidecar.dashboards.watchMethod | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | WATCH |
plutono.sidecar.skipTlsVerify | Set to true to skip tls verification for kube api calls | nil |
plutono.sidecar.dashboards.label | Label that config maps with dashboards should have to be added | plutono_dashboard |
plutono.sidecar.dashboards.labelValue | Label value that config maps with dashboards should have to be added | "" |
plutono.sidecar.dashboards.folder | Folder in the pod that should hold the collected dashboards (unless sidecar.dashboards.defaultFolderName is set). This path will be mounted. | /tmp/dashboards |
plutono.sidecar.dashboards.folderAnnotation | The annotation the sidecar will look for in configmaps to override the destination folder for files | nil |
plutono.sidecar.dashboards.defaultFolderName | The default folder name, it will create a subfolder under the sidecar.dashboards.folder and put dashboards in there instead | nil |
plutono.sidecar.dashboards.searchNamespace | Namespaces list. If specified, the sidecar will search for dashboards config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces. | nil |
plutono.sidecar.dashboards.script | Absolute path to shell script to execute after a configmap got reloaded. | nil |
plutono.sidecar.dashboards.reloadURL | Full url of dashboards configuration reload API endpoint, to invoke after a config-map change | "http://localhost:3000/api/admin/provisioning/dashboards/reload" |
plutono.sidecar.dashboards.skipReload | Enabling this omits defining the REQ_USERNAME, REQ_PASSWORD, REQ_URL and REQ_METHOD environment variables | false |
plutono.sidecar.dashboards.resource | Whether the sidecar should look into secrets, configmaps or both. | both |
plutono.sidecar.dashboards.extraMounts | Additional dashboard sidecar volume mounts. | [] |
plutono.sidecar.datasources.enabled | Enables the cluster wide search for datasources and adds/updates/deletes them in plutono | false |
plutono.sidecar.datasources.label | Label that config maps with datasources should have to be added | plutono_datasource |
plutono.sidecar.datasources.labelValue | Label value that config maps with datasources should have to be added | "" |
plutono.sidecar.datasources.searchNamespace | Namespaces list. If specified, the sidecar will search for datasources config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces. | nil |
plutono.sidecar.datasources.watchMethod | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | WATCH |
plutono.sidecar.datasources.resource | Whether the sidecar should look into secrets, configmaps or both. | both |
plutono.sidecar.datasources.reloadURL | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | "http://localhost:3000/api/admin/provisioning/datasources/reload" |
plutono.sidecar.datasources.skipReload | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | false |
plutono.sidecar.datasources.initDatasources | Set to true to deploy the datasource sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any datasources defined at startup time. | false |
plutono.sidecar.notifiers.enabled | Enables the cluster wide search for notifiers and adds/updates/deletes them in plutono | false |
plutono.sidecar.notifiers.label | Label that config maps with notifiers should have to be added | plutono_notifier |
plutono.sidecar.notifiers.labelValue | Label value that config maps with notifiers should have to be added | "" |
plutono.sidecar.notifiers.searchNamespace | Namespaces list. If specified, the sidecar will search for notifiers config-maps (or secrets) inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It’s also possible to specify ALL to search in all namespaces. | nil |
plutono.sidecar.notifiers.watchMethod | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | WATCH |
plutono.sidecar.notifiers.resource | Whether the sidecar should look into secrets, configmaps or both. | both |
plutono.sidecar.notifiers.reloadURL | Full url of notifier configuration reload API endpoint, to invoke after a config-map change | "http://localhost:3000/api/admin/provisioning/notifications/reload" |
plutono.sidecar.notifiers.skipReload | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | false |
plutono.sidecar.notifiers.initNotifiers | Set to true to deploy the notifier sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any notifiers defined at startup time. | false |
plutono.smtp.existingSecret | The name of an existing secret containing the SMTP credentials. | "" |
plutono.smtp.userKey | The key in the existing SMTP secret containing the username. | "user" |
plutono.smtp.passwordKey | The key in the existing SMTP secret containing the password. | "password" |
plutono.admin.existingSecret | The name of an existing secret containing the admin credentials (can be templated). | "" |
plutono.admin.userKey | The key in the existing admin secret containing the username. | "admin-user" |
plutono.admin.passwordKey | The key in the existing admin secret containing the password. | "admin-password" |
plutono.serviceAccount.automountServiceAccountToken | Automount the service account token on all pods where this service account is used | false |
plutono.serviceAccount.annotations | ServiceAccount annotations | |
plutono.serviceAccount.create | Create service account | true |
plutono.serviceAccount.labels | ServiceAccount labels | {} |
plutono.serviceAccount.name | Service account name to use, when empty will be set to created account if serviceAccount.create is set else to default | `` |
plutono.serviceAccount.nameTest | Service account name to use for test, when empty will be set to created account if serviceAccount.create is set else to default | nil |
plutono.rbac.create | Create and use RBAC resources | true |
plutono.rbac.namespaced | Creates Role and RoleBinding instead of the default ClusterRole and ClusterRoleBinding for the plutono instance | false |
plutono.rbac.useExistingRole | Set to a rolename to use existing role - skipping role creating - but still doing serviceaccount and rolebinding to the rolename set here. | nil |
plutono.rbac.pspEnabled | Create PodSecurityPolicy (with rbac.create , grant roles permissions as well) | false |
plutono.rbac.pspUseAppArmor | Enforce AppArmor in created PodSecurityPolicy (requires rbac.pspEnabled ) | false |
plutono.rbac.extraRoleRules | Additional rules to add to the Role | [] |
plutono.rbac.extraClusterRoleRules | Additional rules to add to the ClusterRole | [] |
plutono.command | Define command to be executed by plutono container at startup | nil |
plutono.args | Define additional args if command is used | nil |
plutono.testFramework.enabled | Whether to create test-related resources | true |
plutono.testFramework.image.registry | test-framework image registry. | docker.io |
plutono.testFramework.image.repository | test-framework image repository. | bats/bats |
plutono.testFramework.image.tag | test-framework image tag. | v1.4.1 |
plutono.testFramework.imagePullPolicy | test-framework image pull policy. | IfNotPresent |
plutono.testFramework.securityContext | test-framework securityContext | {} |
plutono.downloadDashboards.env | Environment variables to be passed to the download-dashboards container | {} |
plutono.downloadDashboards.envFromSecret | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | "" |
plutono.downloadDashboards.resources | Resources of download-dashboards container | {} |
plutono.downloadDashboardsImage.registry | Curl docker image registry | docker.io |
plutono.downloadDashboardsImage.repository | Curl docker image repository | curlimages/curl |
plutono.downloadDashboardsImage.tag | Curl docker image tag | 7.73.0 |
plutono.downloadDashboardsImage.sha | Curl docker image sha (optional) | "" |
plutono.downloadDashboardsImage.pullPolicy | Curl docker image pull policy | IfNotPresent |
plutono.namespaceOverride | Override the deployment namespace | "" (Release.Namespace ) |
plutono.serviceMonitor.enabled | Use servicemonitor from prometheus operator | false |
plutono.serviceMonitor.namespace | Namespace this servicemonitor is installed in | |
plutono.serviceMonitor.interval | How frequently Prometheus should scrape | 1m |
plutono.serviceMonitor.path | Path to scrape | /metrics |
plutono.serviceMonitor.scheme | Scheme to use for metrics scraping | http |
plutono.serviceMonitor.tlsConfig | TLS configuration block for the endpoint | {} |
plutono.serviceMonitor.labels | Labels for the servicemonitor passed to Prometheus Operator | {} |
plutono.serviceMonitor.scrapeTimeout | Timeout after which the scrape is ended | 30s |
plutono.serviceMonitor.relabelings | RelabelConfigs to apply to samples before scraping. | [] |
plutono.serviceMonitor.metricRelabelings | MetricRelabelConfigs to apply to samples before ingestion. | [] |
plutono.revisionHistoryLimit | Number of old ReplicaSets to retain | 10 |
plutono.networkPolicy.enabled | Enable creation of NetworkPolicy resources. | false |
plutono.networkPolicy.allowExternal | Don’t require client label for connections | true |
plutono.networkPolicy.explicitNamespacesSelector | A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed | {} |
plutono.networkPolicy.ingress | Enable the creation of an ingress network policy | true |
plutono.networkPolicy.egress.enabled | Enable the creation of an egress network policy | false |
plutono.networkPolicy.egress.ports | An array of ports to allow for the egress | [] |
plutono.enableKubeBackwardCompatibility | Enable backward compatibility with Kubernetes versions below 1.13, where the pod definition doesn't have the enableServiceLinks option | false |
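Like the other Plugins in this catalog, these Helm values can be set through the optionValues of a Plugin resource. A minimal sketch, assuming the PluginDefinition is named plutono and using a placeholder cluster name; the two options shown are taken from the table above:
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: plutono
spec:
  pluginDefinition: plutono
  disabled: false
  clusterName: $YOUR_CLUSTER_NAME
  optionValues:
    # Let the Prometheus Operator scrape Plutono metrics
    - name: plutono.serviceMonitor.enabled
      value: true
    # Use Role/RoleBinding instead of cluster-wide RBAC resources
    - name: plutono.rbac.namespaced
      value: true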
Example of extraVolumeMounts and extraVolumes
Configure additional volumes with extraVolumes and volume mounts with extraVolumeMounts.
Example for extraVolumeMounts and corresponding extraVolumes:
extraVolumeMounts:
- name: plugins
mountPath: /var/lib/plutono/plugins
subPath: configs/plutono/plugins
readOnly: false
- name: dashboards
mountPath: /var/lib/plutono/dashboards
hostPath: /usr/shared/plutono/dashboards
readOnly: false
extraVolumes:
- name: plugins
existingClaim: existing-plutono-claim
- name: dashboards
hostPath: /usr/shared/plutono/dashboards
Volumes default to emptyDir. Set to persistentVolumeClaim, hostPath, csi, or configMap for other types. For a persistentVolumeClaim, specify an existing claim name with existingClaim.
Import dashboards
There are a few methods to import dashboards to Plutono. Below are some examples and explanations as to how to use each method:
dashboards:
default:
some-dashboard:
json: |
{
"annotations":
...
# Complete json file here
...
"title": "Some Dashboard",
"uid": "abcd1234",
"version": 1
}
custom-dashboard:
# This is a path to a file inside the dashboards directory inside the chart directory
file: dashboards/custom-dashboard.json
prometheus-stats:
# Ref: https://plutono.com/dashboards/2
gnetId: 2
revision: 2
datasource: Prometheus
loki-dashboard-quick-search:
gnetId: 12019
revision: 2
datasource:
- name: DS_PROMETHEUS
value: Prometheus
local-dashboard:
url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
Create a dashboard
Click Dashboards in the main menu.
Click New and select New Dashboard.
Click Add new empty panel.
Important: Add a datasource variable as they are provisioned in the cluster.
- Go to Dashboard settings.
- Click Variables.
- Click Add variable.
- General: Configure the variable with a proper Name and the Type Datasource.
- Data source options: Select the data source Type, e.g. Prometheus.
- Click Update.
- Go back.
Develop your panels.
- On the Edit panel view, choose your desired Visualization.
- Select the datasource variable you just created.
- Write or construct a query in the query language of your data source.
- Move and resize the panels as needed.
Optionally add a tag to the dashboard to make grouping easier.
- Go to Dashboard settings.
- In the General section, add a Tag.
Click Save. Note that the dashboard is saved in the browser’s local storage.
Export the dashboard.
- Go to Dashboard settings.
- Click JSON Model.
- Copy the JSON model.
- Go to your Github repository and create a new JSON file in the dashboards directory.
BASE64 dashboards
Dashboards can be stored on a server that does not return JSON directly but instead returns a Base64-encoded file (e.g. Gerrit). A new parameter has been added to the url use case: if you specify a b64content value equal to true after the url entry, Base64 decoding is applied before the file is saved to disk. If this entry is not set or is set to false, no decoding is applied to the file before saving it to disk.
Gerrit use case
The Gerrit API for downloading files has the following schema: https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content, where {project-name} and {file-id} usually contain '/' in their values, so these MUST be replaced by %2F. For example, if project-name is user/repo, branch-id is master and file-id is dir1/dir2/dashboard, the resulting url is https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content. A sketch of such a dashboard entry is shown below.
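A minimal sketch combining the url and b64content parameters described above (the dashboard name and Gerrit URL are placeholders):
dashboards:
  default:
    gerrit-dashboard:
      url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
      # Decode the Base64 response before saving the file to disk
      b64content: true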
Sidecar for dashboards
If the parameter sidecar.dashboards.enabled is set, a sidecar container is deployed in the plutono pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with a label as defined in sidecar.dashboards.label. The files defined in those configmaps are written to a folder and accessed by plutono. Changes to the configmaps are monitored and the imported dashboards are deleted/updated.
A recommendation is to use one configmap per dashboard, as removing one of multiple dashboards inside a configmap is currently not properly mirrored in plutono.
NOTE: Configure your data sources in your dashboards as variables to keep them portable across clusters.
Example dashboard config:
Folder structure:
dashboards/
├── dashboard1.json
├── dashboard2.json
templates/
├── dashboard-json-configmap.yaml
Helm template to create a configmap for each dashboard:
{{- range $path, $bytes := .Files.Glob "dashboards/*.json" }}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ printf "%s-%s" $.Release.Name $path | replace "/" "-" | trunc 63 }}
labels:
plutono-dashboard: "true"
data:
{{ printf "%s: |-" $path | replace "/" "-" | indent 2 }}
{{ printf "%s" $bytes | indent 4 }}
{{- end }}
Sidecar for datasources
If the parameter sidecar.datasources.enabled is set, an init container is deployed in the plutono pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and filters out the ones with a label as defined in sidecar.datasources.label. The files defined in those secrets are written to a folder and accessed by plutono on startup. Using these yaml files, the data sources in plutono can be imported.
Should you aim for reloading datasources in Plutono each time the config is changed, set sidecar.datasources.skipReload: false and adjust sidecar.datasources.reloadURL to http://<svc-name>.<namespace>.svc.cluster.local/api/admin/provisioning/datasources/reload, as sketched below.
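A minimal sketch of the corresponding values, assuming the Plutono service is named plutono and runs in the namespace plutono (adjust both to your setup):
plutono:
  sidecar:
    datasources:
      # Reload datasources on every config change instead of only at startup
      skipReload: false
      reloadURL: http://plutono.plutono.svc.cluster.local/api/admin/provisioning/datasources/reload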
Secrets are recommended over configmaps for this use case because datasources usually contain private data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those.
Example datasource config:
apiVersion: v1
kind: Secret
metadata:
name: plutono-datasources
labels:
# default value for: sidecar.datasources.label
plutono-datasource: "true"
stringData:
datasources.yaml: |-
apiVersion: 1
datasources:
- name: my-prometheus
type: prometheus
access: proxy
orgId: 1
url: my-url-domain:9090
isDefault: false
jsonData:
httpMethod: 'POST'
editable: false
NOTE: If you include credentials in your datasource configuration, make sure not to use stringData but base64-encoded data instead.
apiVersion: v1
kind: Secret
metadata:
name: my-datasource
labels:
plutono-datasource: "true"
data:
# The key must contain a unique name and the .yaml file type
my-datasource.yaml: {{ include (print $.Template.BasePath "my-datasource.yaml") . | b64enc }}
Example values to add a datasource adapted from Grafana:
datasources:
datasources.yaml:
apiVersion: 1
datasources:
# <string, required> Sets the name you use to refer to
# the data source in panels and queries.
- name: my-prometheus
# <string, required> Sets the data source type.
type: prometheus
# <string, required> Sets the access mode, either
# proxy or direct (Server or Browser in the UI).
# Some data sources are incompatible with any setting
# but proxy (Server).
access: proxy
# <int> Sets the organization id. Defaults to orgId 1.
orgId: 1
# <string> Sets a custom UID to reference this
# data source in other parts of the configuration.
# If not specified, Plutono generates one.
uid:
# <string> Sets the data source's URL, including the
# port.
url: my-url-domain:9090
# <string> Sets the database user, if necessary.
user:
# <string> Sets the database name, if necessary.
database:
# <bool> Enables basic authorization.
basicAuth:
# <string> Sets the basic authorization username.
basicAuthUser:
# <bool> Enables credential headers.
withCredentials:
# <bool> Toggles whether the data source is pre-selected
# for new panels. You can set only one default
# data source per organization.
isDefault: false
# <map> Fields to convert to JSON and store in jsonData.
jsonData:
httpMethod: 'POST'
# <bool> Enables TLS authentication using a client
# certificate configured in secureJsonData.
# tlsAuth: true
# <bool> Enables TLS authentication using a CA
# certificate.
# tlsAuthWithCACert: true
# <map> Fields to encrypt before storing in jsonData.
secureJsonData:
# <string> Defines the CA cert, client cert, and
# client key for encrypted authentication.
# tlsCACert: '...'
# tlsClientCert: '...'
# tlsClientKey: '...'
# <string> Sets the database password, if necessary.
# password:
# <string> Sets the basic authorization password.
# basicAuthPassword:
# <int> Sets the version. Used to compare versions when
# updating. Ignored when creating a new data source.
version: 1
# <bool> Allows users to edit data sources from the
# Plutono UI.
editable: false
How to serve Plutono with a path prefix (/plutono)
In order to serve Plutono with a prefix (e.g., http://example.com/plutono), add the following to your values.yaml.
ingress:
enabled: true
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
path: /plutono/?(.*)
hosts:
- k8s.example.dev
plutono.ini:
server:
root_url: http://localhost:3000/plutono # this host can be localhost
How to securely reference secrets in plutono.ini
This example uses Plutono file providers for secret values and the extraSecretMounts configuration flag (Additional plutono server secret mounts) to mount the secrets.
In plutono.ini:
plutono.ini:
[auth.generic_oauth]
enabled = true
client_id = $__file{/etc/secrets/auth_generic_oauth/client_id}
client_secret = $__file{/etc/secrets/auth_generic_oauth/client_secret}
Existing secret, or created along with helm:
---
apiVersion: v1
kind: Secret
metadata:
name: auth-generic-oauth-secret
type: Opaque
stringData:
client_id: <value>
client_secret: <value>
Include in the extraSecretMounts configuration flag:
extraSecretMounts:
- name: auth-generic-oauth-secret-mount
secretName: auth-generic-oauth-secret
defaultMode: 0440
mountPath: /etc/secrets/auth_generic_oauth
readOnly: true
13 - Service exposure test
This Plugin is just providing a simple exposed service for manual testing.
By adding the following label to a service it will become accessible from the central greenhouse system via a service proxy:
greenhouse.sap/expose: "true"
This plugin creates an nginx deployment with an exposed service for testing.
Configuration
Specific port
By default, expose will always use the first port of the service. If you need another port, you have to specify it by name (see the example Service below):
greenhouse.sap/exposeNamedPort: YOURPORTNAME
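A minimal sketch of a Service using both labels (the service name, selector and port names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
  labels:
    # Expose this service via the central Greenhouse service proxy
    greenhouse.sap/expose: "true"
    # Optional: expose the port named "https" instead of the first port
    greenhouse.sap/exposeNamedPort: https
spec:
  selector:
    app: my-test-app
  ports:
    - name: http
      port: 80
    - name: https
      port: 443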
14 - Teams2Slack
Introduction
This Plugin provides a Slack integration for a Greenhouse organization.
It manages Slack entities like channels, groups, handles, etc. and its members based on the teams configured in your Greenhouse organization.
Important: Please ensure that only one deployment of Teams2Slack runs against the same set of groups in Slack. Secondary instances should run in the provided Dry-Run mode. Otherwise you might notice inconsistencies if the TeamMembership objects of a cluster are unequal.
Requirements
- A Kubernetes Cluster to run against
- The presence of the Greenhouse Teammemberships CRD and corresponding objects.
Architecture
A TeamMembership contains the members of a team. Changes to an object will create an event in Kubernetes. This event will be consumed by the first controller. It creates a mirrored SlackGroup object that reflects the content of the TeamMembership object. This approach has the advantage that the deletion of a team can be securely detected with the utilization of finalizers. The second controller detects changes on SlackGroup objects. The users present in a team will be aligned to a Slack group.
Configuration
Deploy the Teams2Slack Plugin and its Secret, which look like the following structure (only the mandatory fields are included):
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
name: teams2slack
namespace: default
spec:
pluginDefinition: teams2slack
disabled: false
optionValues:
- name: groupNamePrefix
value:
- name: groupNameSuffix
value:
- name: infoChannelID
value:
- name: token
valueFrom:
secret:
key: SLACK_TOKEN
name: teams2slack-secret
---
apiVersion: v1
kind: Secret
metadata:
name: teams2slack-secret
type: Opaque
data:
SLACK_TOKEN: // Slack token b64 encoded
The values that can or need to be provided have the following meaning:
Environment Variable | Meaning |
---|---|
groupNamePrefix (mandatory) | The prefix the created slack group should have. Choose a prefix that matches your organization. |
groupNameSuffix (mandatory) | The suffix the created slack group should have. Choose a suffix that matches your organization. |
infoChannelID (mandatory) | The channel ID created Slack Groups should have. You can currently define one slack ID which will be applied to all created groups. Make sure to take the channel ID and not the channel name. |
token (mandatory) | The Slack token used to authenticate against Slack. |
eventRequeueTimer (optional) | If a Slack API request fails due to a network error, or because data is currently being fetched, it will be requeued to the operator's work queue. Uses the Go duration format (1s = every second, 1m = every minute). |
loadDataBackoffTimer (optional) | Defines when a Slack API data call occurs. Uses the Go duration format. |
dryRun (optional) | Slack write operations are not executed if the value is set to true. Requires a valid SLACK_TOKEN; the other environment variables can be mocked. |
15 - Thanos
Learn more about the Thanos Plugin. Use it to enable extended metrics retention and querying across Prometheus servers and Greenhouse clusters.
The main terminologies used in this document can be found in core-concepts.
Overview
Thanos is a set of components that can be used to extend the storage and retrieval of metrics in Prometheus. It allows you to store metrics in a remote object store and query them across multiple Prometheus servers and Greenhouse clusters. This Plugin is intended to provide a set of pre-configured Thanos components that enable a proven composition. At the core, a set of Thanos components is installed that adds long-term storage capability to a single kube-monitoring Plugin and makes both current and historical data available again via one Thanos Query component.
The Thanos Sidecar is a component that is deployed as a container together with a Prometheus instance. This allows Thanos to optionally upload metrics to the object store and Thanos Query to access Prometheus data via a common, efficient StoreAPI.
The Thanos Compact component applies the Prometheus 2.0 Storage Engine compaction process to data uploaded to the object store. The Compactor is also responsible for applying the configured retention and downsampling of the data.
The Thanos Store also implements the StoreAPI and serves the historical data from an object store. It acts primarily as an API gateway and has no persistence itself.
Thanos Query implements the Prometheus HTTP v1 API for querying data in a Thanos cluster via PromQL. In short, it collects the data needed to evaluate the query from the connected StoreAPIs, evaluates the query and returns the result.
This plugin deploys the following Thanos components:
Planned components:
This Plugin does not deploy the following components:
- Thanos Sidecar: This component is installed in the kube-monitoring plugin.
Disclaimer
It is not meant to be a comprehensive package that covers all scenarios. If you are an expert, feel free to configure the Plugin according to your needs.
Contribution is highly appreciated. If you discover bugs or want to add functionality to the plugin, then pull requests are always welcome.
Quick start
This guide provides a quick and straightforward way to use Thanos as a Greenhouse Plugin on your Kubernetes cluster. The guide is meant to build the following setup.
Prerequisites
- A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
- Ready to use credentials for a compatible object store
- kube-monitoring plugin installed. Thanos Sidecar on the Prometheus must be enabled by providing the required object store credentials.
Step 1:
Create a Kubernetes Secret with your object store credentials following the Object Store preparation section.
Step 2:
Enable the Thanos Sidecar on the Prometheus in the kube-monitoring plugin by providing the required object store credentials. Follow the kube-monitoring plugin enablement section.
Step 3:
Create a Thanos Query Plugin by following the Thanos Query section.
Configuration
Object Store preparation
To run Thanos, you need object storage credentials. Get the credentials of your provider and add them to a Kubernetes Secret. The Thanos documentation provides a great overview on the different supported store types.
Usually this looks somewhat like this
type: $STORAGE_TYPE
config:
user:
password:
domain:
...
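For example, a sketch for an S3-compatible object store (the key names follow the Thanos objstore configuration; bucket, endpoint and credentials are placeholders):
type: S3
config:
  bucket: my-thanos-bucket
  endpoint: s3.example.com
  access_key: $ACCESS_KEY
  secret_key: $SECRET_KEY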
If you’ve got everything in a file, deploy it in your remote cluster in the namespace where Prometheus and Thanos will be.
Important: $THANOS_PLUGIN_NAME is needed later for the respective Thanos Plugin and the names must not differ!
kubectl create secret generic $THANOS_PLUGIN_NAME-metrics-objectstore --from-file=thanos.yaml=/path/to/your/file
kube-monitoring plugin enablement
Prometheus in kube-monitoring needs to be altered to have a sidecar and ship metrics to the new object store too. You have to provide the Secret you’ve just created to the (most likely already existing) kube-monitoring plugin. Add this:
spec:
optionValues:
- name: kubeMonitoring.prometheus.prometheusSpec.thanos.objectStorageConfig.existingSecret.key
value: thanos.yaml
- name: kubeMonitoring.prometheus.prometheusSpec.thanos.objectStorageConfig.existingSecret.name
value: $THANOS_PLUGIN_NAME-metrics-objectstore
Values used here are described in the Prometheus Operator Spec.
Thanos Query
This is the real deal now: Define your Thanos Query by creating a plugin.
NOTE1: $THANOS_PLUGIN_NAME needs to be consistent with the secret you created earlier.
NOTE2: The releaseNamespace needs to be the same namespace where kube-monitoring resides. By default this is kube-monitoring.
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
name: $YOUR_CLUSTER_NAME
spec:
pluginDefinition: thanos
disabled: false
clusterName: $YOUR_CLUSTER_NAME
releaseNamespace: kube-monitoring
[OPTIONAL] Handling your Prometheus and Thanos Stores.
Default Prometheus and Thanos Endpoint
Thanos Query automatically adds the Prometheus and Thanos endpoints. If you just have a single Prometheus with Thanos enabled, this will work out of the box. Details are in the next two chapters. See Standalone Query for your own configuration.
Prometheus Endpoint
Thanos Query checks for a service named prometheus-operated in the same namespace with the gRPC port 10901 available. The CLI option looks like this and is configured in the Plugin itself:
--store=prometheus-operated:10901
Thanos Endpoint
Thanos Query also checks for a Thanos endpoint named like releaseName-store. The associated command line flag for this parameter looks like:
--store=thanos-kube-store:10901
If you just have one occurrence of this Thanos plugin deployed, the default option will work and does not need anything else.
Standalone Query
In case you want to achieve a setup like above and have an overarching Thanos Query running with multiple Stores, you can set it to standalone and add your own store list. Set up your Plugin like this:
spec:
  optionValues:
    - name: thanos.query.standalone
      value: true
This would enable you to either:
query multiple stores with a single Query:
spec:
  optionValues:
    - name: thanos.query.stores
      value:
        - thanos-kube-1-store:10901
        - thanos-kube-2-store:10901
        - kube-monitoring-1-prometheus:10901
        - kube-monitoring-2-prometheus:10901
query multiple Thanos Queries with a single Query. Note that there is no -store suffix in this case:
spec:
  optionValues:
    - name: thanos.query.stores
      value:
        - thanos-kube-1:10901
        - thanos-kube-2:10901
Operations
Thanos Compactor
If you deploy the plugin with the default values, Thanos compactor will be shipped too and use the same secret ($THANOS_PLUGIN_NAME-metrics-objectstore) to retrieve, compact and push back timeseries.
Based on experience, a 100Gi PVC is used in order not to overload the ephemeral storage of the Kubernetes nodes. Depending on the configured retention and the amount of metrics, this may not be sufficient and larger volumes may be required. In any case, it is always safe to clear the volume of the compactor and increase it if necessary.
The object storage costs will be heavily impacted by how granular the timeseries are stored (see Downsampling). These are the pre-configured defaults; you can change them as needed:
raw: 777600s (90d)
5m: 777600s (90d)
1h: 157680000s (5y)