Audit Logs Plugin

Learn more about the Audit Logs Plugin. Use it to enable the ingestion, collection and export of telemetry signals (logs and metrics) for your Greenhouse cluster.

The main terminologies used in this document can be found in core-concepts.

Overview

OpenTelemetry is an observability framework and toolkit for creating and managing telemetry data such as metrics, logs and traces. Unlike other observability tools, OpenTelemetry is vendor and tool agnostic, meaning it can be used with a variety of observability backends, including open source tools such as OpenSearch and Prometheus.

The focus of the Plugin is to provide easy-to-use configurations for common use cases of receiving, processing and exporting telemetry data in Kubernetes. The storage and visualization of this data are intentionally left to other tools.

Components included in this Plugin:

Architecture

OpenTelemetry Architecture

Note

We intend to add more configurations over time, and contributions of your own configurations are highly appreciated. If you discover bugs or want to add functionality to the Plugin, feel free to create a pull request.

Quick Start

This guide provides a quick and straightforward way to use OpenTelemetry for Logs as a Greenhouse Plugin on your Kubernetes cluster.

Prerequisites

  • A running and Greenhouse-onboarded Kubernetes cluster. If you don’t have one, follow the Cluster onboarding guide.
  • For logs, an OpenSearch instance to store them. If you don’t have one, reach out to your observability team to get access to one.
  • We recommend having cert-manager running in the cluster before installing the Logs Plugin.
  • To gather metrics, you must have a Prometheus instance in the onboarded cluster for storage and for managing Prometheus-specific CRDs. If you do not have an instance, install the kube-monitoring Plugin first.
  • The Audit Logs Plugin currently requires the OpenTelemetry Operator bundled in the Logs Plugin to be installed in the same cluster beforehand. This is a technical limitation of the Audit Logs Plugin and will be removed in future releases.

Step 1:

You can install the Logs package in your cluster manually with Helm, or let the Greenhouse platform lifecycle manage it for you automatically. For the latter, you can either:

  1. Go to the Greenhouse dashboard and select the Logs Plugin from the catalog. Specify the cluster and the required option values.
  2. Create and specify a Plugin resource in your Greenhouse central cluster according to the examples.
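For orientation, the following is a minimal sketch of such a Plugin resource. The plugin definition name, namespace, cluster name and option values are illustrative assumptions; consult the Plugin catalog and the Examples section for the authoritative names.

```yaml
# Hedged sketch of a Greenhouse Plugin resource (names and values are illustrative assumptions).
apiVersion: greenhouse.sap/v1alpha1
kind: Plugin
metadata:
  name: audit-logs
  namespace: my-organization          # assumption: your Greenhouse organization namespace
spec:
  pluginDefinition: audit-logs        # assumption: name of the Audit Logs PluginDefinition in the catalog
  clusterName: my-onboarded-cluster   # assumption: the onboarded cluster to deploy to
  optionValues:
    - name: auditLogs.openSearchLogs.endpoint
      value: https://opensearch.example.com:9200   # assumption: your OpenSearch endpoint
    - name: auditLogs.openSearchLogs.index
      value: audit-logs                             # assumption: target index name
```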

Step 2:

The package deploys the OpenTelemetry collectors and the auto-instrumentation of the workload. By default, the package includes a configuration for collecting metrics and logs. The log collector currently processes data from the following preconfigured receivers:

  • Files via the Filelog Receiver
  • Kubernetes Events from the Kubernetes API server
  • Journald events from systemd journal
  • Its own metrics

Based on the backend selection, the telemetry data will be exported to the corresponding backend, as sketched below.
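To illustrate how these receivers and the export fit together, here is a minimal, hedged sketch of an OpenTelemetry Collector logs pipeline. It is not the configuration rendered by the Plugin; the receiver and exporter settings (log paths, journal directory, endpoint, index) are illustrative assumptions.

```yaml
# Hedged sketch of a collector logs pipeline (not the configuration rendered by the Plugin).
receivers:
  filelog:
    include: [ /var/log/pods/*/*/*.log ]   # assumption: typical container log path
  k8s_events: {}                           # Kubernetes Events from the Kubernetes API server
  journald:
    directory: /var/log/journal            # assumption: default systemd journal directory
exporters:
  opensearch:
    http:
      endpoint: https://opensearch.example.com:9200   # assumption: your OpenSearch endpoint
    logs_index: audit-logs                 # assumption: target index name
service:
  pipelines:
    logs:
      receivers: [filelog, k8s_events, journald]
      exporters: [opensearch]
```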

Failover Connector

The Logs Plugin comes with a Failover Connector for OpenSearch that supports two users. The connector periodically tries to establish a stable connection for the preferred user (failover_username_a); if that attempt fails, it tries to establish a connection with the fallback user (failover_username_b). This feature can be used to keep logs shipping in case of expiring credentials or password rotation.
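Conceptually, this can be pictured as a connector that routes the logs pipeline to a primary OpenSearch exporter (user A) and falls back to a secondary exporter (user B) when the primary stops working. The following is a hedged sketch based on the upstream OpenTelemetry failover connector; the exact settings rendered by the Plugin may differ.

```yaml
# Hedged sketch of a failover between two OpenSearch exporters (illustrative, not the rendered config).
connectors:
  failover:
    priority_levels:
      - [ logs/user_a ]     # preferred pipeline, authenticated as failover_username_a
      - [ logs/user_b ]     # fallback pipeline, authenticated as failover_username_b
    retry_interval: 5m      # assumption: how often to retry the higher-priority pipeline
service:
  pipelines:
    logs/in:
      receivers: [filelog]
      exporters: [failover]
    logs/user_a:
      receivers: [failover]
      exporters: [opensearch/user_a]
    logs/user_b:
      receivers: [failover]
      exporters: [opensearch/user_b]
```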

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| auditLogs.cluster | string | nil | Cluster label for Logging |
| auditLogs.collectorImage.repository | string | "ghcr.io/cloudoperators/opentelemetry-collector-contrib" | Overrides the default image repository for the OpenTelemetry Collector image. |
| auditLogs.collectorImage.tag | string | "5b6e153" | Overrides the default image tag for the OpenTelemetry Collector image. |
| auditLogs.customLabels | string | nil | Custom labels to apply to all OpenTelemetry related resources |
| auditLogs.openSearchLogs.endpoint | string | nil | Endpoint URL for OpenSearch |
| auditLogs.openSearchLogs.failover | object | {"enabled":true} | Activates the failover mechanism for shipping logs using the failover_username_b and failover_password_b credentials in case the credentials failover_username_a and failover_password_a have expired. |
| auditLogs.openSearchLogs.failover_password_a | string | nil | Password for the OpenSearch endpoint |
| auditLogs.openSearchLogs.failover_password_b | string | nil | Second password (as a failover) for the OpenSearch endpoint |
| auditLogs.openSearchLogs.failover_username_a | string | nil | Username for the OpenSearch endpoint |
| auditLogs.openSearchLogs.failover_username_b | string | nil | Second username (as a failover) for the OpenSearch endpoint |
| auditLogs.openSearchLogs.index | string | nil | Name for the OpenSearch index |
| auditLogs.prometheus.podMonitor | object | {"enabled":false} | Activates the pod-monitoring for the Logs Collector. |
| auditLogs.prometheus.serviceMonitor | object | {"enabled":false} | Activates the service-monitoring for the Logs Collector. |
| auditLogs.region | string | nil | Region label for Logging |
| commonLabels | string | nil | Common labels to apply to all resources |
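Putting these values together, a hedged values sketch for this chart could look like the following; all values shown are placeholders, not recommended settings.

```yaml
# Hedged values sketch for the Audit Logs Plugin chart (all values are placeholders).
auditLogs:
  cluster: my-onboarded-cluster          # cluster label for logging
  region: my-region                      # region label for logging
  openSearchLogs:
    endpoint: https://opensearch.example.com:9200
    index: audit-logs
    failover_username_a: primary-user
    failover_password_a: "<secret>"
    failover_username_b: fallback-user
    failover_password_b: "<secret>"
    failover:
      enabled: true
  prometheus:
    podMonitor:
      enabled: true                      # let kube-monitoring scrape the collector pods
```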

Examples

TBD