Merged
2 changes: 1 addition & 1 deletion _topic_map.yml
@@ -743,7 +743,7 @@ Topics:
File: efk-logging-kibana
- Name: Configuring Curator
File: efk-logging-curator
- Name: Configuring Fluentd
- Name: Configuring the logging collector
File: efk-logging-fluentd
- Name: Configuring systemd-journald
File: efk-logging-systemd
3 changes: 0 additions & 3 deletions logging/config/efk-logging-configuring.adoc
@@ -69,15 +69,12 @@ environment variable in the `cluster-logging-operator` Deployment.

* You can specify specific nodes for the logging components using node selectors.

////
4.1
* You can specify the Log collectors to deploy to each node in a cluster, either Fluentd or Rsyslog.

[IMPORTANT]
====
The Rsyslog log collector is currently a Technology Preview feature.
====
////

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
10 changes: 4 additions & 6 deletions logging/config/efk-logging-fluentd.adoc
@@ -1,13 +1,13 @@
:context: efk-logging-fluentd
[id="efk-logging-fluentd"]
= Configuring Fluentd
= Configuring the logging collector
include::modules/common-attributes.adoc[]

toc::[]

{product-title} uses Fluentd to collect operations and application logs from your cluster which {product-title} enriches with Kubernetes Pod and Namespace metadata.
{product-title} uses Fluentd or Rsyslog to collect operations and application logs from your cluster, which {product-title} enriches with Kubernetes Pod and Namespace metadata.

You can configure log rotation, log location, use an external log aggregator, and make other configurations.
You can configure log rotation and log location, use an external log aggregator, change the log collector, and make other configurations for either log collector.

[NOTE]
====
@@ -29,11 +29,9 @@ include::modules/efk-logging-fluentd-limits.adoc[leveloffset=+1]
////
4.1
modules/efk-logging-fluentd-log-rotation.adoc[leveloffset=+1]

4.2
modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]
////

include::modules/efk-logging-fluentd-collector.adoc[leveloffset=+1]

include::modules/efk-logging-fluentd-log-location.adoc[leveloffset=+1]

4 changes: 2 additions & 2 deletions logging/config/efk-logging-systemd.adoc
@@ -1,11 +1,11 @@
:context: efk-logging-systemd
[id="efk-logging-systemd"]
= Configuring systemd-journald and rsyslog
= Configuring systemd-journald and Rsyslog
include::modules/common-attributes.adoc[]

toc::[]

Because Fluentd and rsyslog read from the journal, and the journal default
Because Fluentd and Rsyslog read from the journal, and the journal default
settings are very low, journal entries can be lost because the journal cannot keep up
with the logging rate from system services.

5 changes: 5 additions & 0 deletions logging/efk-logging-eventrouter.adoc
@@ -9,6 +9,11 @@ The Event Router communicates with the {product-title} and prints {product-title

If Cluster Logging is deployed, you can view the {product-title} events in Kibana.

[NOTE]
====
The Event Router is not supported for the Rsyslog log collector.
====

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
// modules required to cover the user story. You can also include other
2 changes: 1 addition & 1 deletion logging/efk-logging.adoc
@@ -28,7 +28,7 @@ include::modules/efk-logging-about-curator.adoc[leveloffset=+2]

include::modules/efk-logging-about-eventrouter.adoc[leveloffset=+2]

include::modules/efk-logging-about-crd.adoc[leveloffset=+2]
include::modules/efk-logging-about-crd.adoc[leveloffset=+1]



6 changes: 3 additions & 3 deletions modules/efk-logging-about-components.adoc
@@ -5,13 +5,13 @@
[id="efk-logging-about-components_{context}"]
= About cluster logging components

There are currently 4 different types of cluster logging components:
There are currently 5 different types of cluster logging components:

* logStore - This is where the logs will be stored. The current implementation is Elasticsearch.
* collection - This is the component that collects logs from the node, formats them, and stores them in the logStore. The current implementation is Fluentd.
* collection - This is the component that collects logs from the node, formats them, and stores them in the logStore, either Fluentd or Rsyslog.
* visualization - This is the UI component used to view logs, graphs, charts, and so forth. The current implementation is Kibana.
* curation - This is the component that trims logs by age. The current implementation is Curator.

* event routing - This is the component that forwards events to cluster logging. The current implementation is Event Router.

In this document, we may refer to logStore or Elasticsearch, visualization or Kibana, curation or Curator, and collection or Fluentd interchangeably, except where noted.
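
These component types correspond to fields in the ClusterLogging Custom Resource. As an illustrative sketch only (the exact `apiVersion` and field names can vary by release; values here are examples, not recommendations), a CR naming each component type might look like:

[source,yaml]
----
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  logStore:
    type: "elasticsearch"   # where logs are stored
  collection:
    logs:
      type: "fluentd"       # or "rsyslog"
  visualization:
    type: "kibana"          # UI for viewing logs
  curation:
    type: "curator"         # trims logs by age
----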

2 changes: 1 addition & 1 deletion modules/efk-logging-about-curator.adoc
@@ -3,7 +3,7 @@
// * logging/efk-logging.adoc

[id="efk-logging-about-curator_{context}"]
= About Curator
= About logging curation

The Elasticsearch Curator tool performs scheduled maintenance operations on a global and/or on a per-project basis. Curator performs actions daily based on its configuration. Only one Curator Pod is
recommended per Elasticsearch cluster.
2 changes: 1 addition & 1 deletion modules/efk-logging-about-elasticsearch.adoc
@@ -3,7 +3,7 @@
// * logging/efk-logging.adoc

[id="efk-logging-about-elasticsearch_{context}"]
= About Elasticsearch
= About the logstore

{product-title} uses link:https://www.elastic.co/products/elasticsearch[Elasticsearch (ES)] to organize the log data from Fluentd into datastores, or _indices_.

7 changes: 6 additions & 1 deletion modules/efk-logging-about-eventrouter.adoc
@@ -3,11 +3,16 @@
// * logging/efk-logging.adoc

[id="efk-logging-about-eventrouter_{context}"]
= About Event Router
= About event routing

The Event Router is a pod that forwards {product-title} events to cluster logging.
You must manually deploy Event Router.

The Event Router collects events, converts them into JSON format, and pushes them
to `STDOUT`. Fluentd indexes the events to the `.operations` index.

[NOTE]
====
The Event Router is not supported for the Rsyslog log collector.
====
12 changes: 5 additions & 7 deletions modules/efk-logging-about-fluentd.adoc
@@ -3,19 +3,17 @@
// * logging/efk-logging.adoc

[id="efk-logging-about-fluentd_{context}"]
= About Fluentd
= About the logging collector

{product-title} uses Fluentd to collect data about your cluster.
{product-title} can use Fluentd or Rsyslog to collect data about your cluster.

Fluentd is deployed as a DaemonSet in {product-title} that deploys pods to each {product-title} node.

Fluentd uses `journald` as the system log source. These are log messages from
the operating system, the container runtime, and {product-title}.
The logging collector is deployed as a DaemonSet that deploys a pod to each {product-title} node.
`journald` is the system log source supplying log messages from the operating system, the container runtime, and {product-title}.

The container runtimes provide minimal information to identify the source of log messages: project, pod name,
and container id. This is not sufficient to uniquely identify the source of the logs. If a pod with a given name
and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations,
is not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source.
might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source.
This limitation means log collection and normalization is considered *best effort*.

[IMPORTANT]
2 changes: 1 addition & 1 deletion modules/efk-logging-about-kibana.adoc
@@ -3,7 +3,7 @@
// * logging/efk-logging.adoc

[id="efk-logging-about-kibana_{context}"]
= About Kibana
= About logging visualization

{product-title} uses Kibana to display the log data collected by Fluentd and indexed by Elasticsearch.

2 changes: 1 addition & 1 deletion modules/efk-logging-about.adoc
@@ -15,7 +15,7 @@ link:https://www.elastic.co/guide/en/kibana/current/introduction.html[Kibana] is
where users and administrators can create rich visualizations and dashboards with the aggregated data.

{product-title} cluster administrators can deploy cluster logging by creating a subscription from the console
in the 'openshift-logging' project. Creating the subscription deploys the Cluster Logging Operator, the Elasticsearch Operator, and the
in the `openshift-logging` project. Creating the subscription deploys the Cluster Logging Operator, the Elasticsearch Operator, and the
other resources necessary to support the deployment of cluster logging. The operators are responsible for deploying, upgrading,
and maintaining cluster logging.

27 changes: 8 additions & 19 deletions modules/efk-logging-configuring-image-about.adoc
@@ -14,34 +14,23 @@ You can view the images by running the following command:
----
oc -n openshift-logging set env deployment/cluster-logging-operator --list | grep _IMAGE

ELASTICSEARCH_IMAGE=registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.1 <1>
FLUENTD_IMAGE=registry.redhat.io/openshift4/ose-logging-fluentd:v4.1 <2>
KIBANA_IMAGE=registry.redhat.io/openshift4/ose-logging-kibana5:v4.1 <3>
CURATOR_IMAGE=registry.redhat.io/openshift4/ose-logging-curator5:v4.1 <4>
OAUTH_PROXY_IMAGE=registry.redhat.io/openshift4/ose-oauth-proxy:v4.1 <5>
ELASTICSEARCH_IMAGE=registry.redhat.io/openshift4/ose-logging-elasticsearch5:v4.2 <1>
FLUENTD_IMAGE=registry.redhat.io/openshift4/ose-logging-fluentd:v4.2 <2>
KIBANA_IMAGE=registry.redhat.io/openshift4/ose-logging-kibana5:v4.2 <3>
CURATOR_IMAGE=registry.redhat.io/openshift4/ose-logging-curator5:v4.2 <4>
OAUTH_PROXY_IMAGE=registry.redhat.io/openshift4/ose-oauth-proxy:v4.2 <5>
RSYSLOG_IMAGE=registry.redhat.io/openshift4/ose-logging-rsyslog:v4.2 <6>
----
<1> *ELASTICSEARCH_IMAGE* deploys Elasticsearch.
<2> *FLUENTD_IMAGE* deploys Fluentd.
<3> *KIBANA_IMAGE* deploys Kibana.
<4> *CURATOR_IMAGE* deploys Curator.
<5> *OAUTH_PROXY_IMAGE* defines OAUTH for OpenShift Container Platform.

[NOTE]
====
The values might be different depending on your environment.
====



////
Comment out until 4.1
* *RSYSLOG_IMAGE* deploys Rsyslog, by default `docker.io/viaq/rsyslog:latest`. <1>

<1> The image used for RSYSLOG when deployed. You can change this value using an environment variable. You cannot change this value through the Cluster Logging CR.
<6> *RSYSLOG_IMAGE* deploys Rsyslog.

[NOTE]
====
The Rsyslog log collector is in Technology Preview.
====
////

The values might be different depending on your environment.
2 changes: 0 additions & 2 deletions modules/efk-logging-deploying-about.adoc
@@ -136,7 +136,6 @@ You can set the policy that defines how Elasticsearch shards are replicated acro
* `SingleRedundancy`. A single copy of each shard. Logs are always available and recoverable as long as at least two data nodes exist.
* `ZeroRedundancy`. No copies of any shards. Logs may be unavailable (or lost) in the event a node is down or fails.
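
As a hedged sketch (the exact field placement is assumed from the other ClusterLogging CR fragments in this PR, not stated here), the policy is set on the Elasticsearch portion of the CR:

[source,yaml]
----
  logStore:
    type: "elasticsearch"
    elasticsearch:
      redundancyPolicy: "SingleRedundancy"  # or "ZeroRedundancy", among others
----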

////
Log collectors::
You can select which log collector is deployed as a DaemonSet to each node in the {product-title} cluster, either:

@@ -157,7 +156,6 @@ You can select which log collector is deployed as a Daemonset to each node in th
memory:
type: "fluentd"
----
////

Curator schedule::
You specify the schedule for Curator in link:https://en.wikipedia.org/wiki/Cron[cron format].
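
For example (the schedule value and field placement below are illustrative, not taken from this PR), a daily run at 03:30 in cron format might be specified as:

[source,yaml]
----
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"  # minute hour day-of-month month day-of-week
----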
5 changes: 5 additions & 0 deletions modules/efk-logging-eventrouter-deploy.adoc
@@ -9,6 +9,11 @@ Use the following steps to deploy Event Router into your cluster.

The following Template object creates the Service Account, ClusterRole, and ClusterRoleBinding required for the Event Router.

[NOTE]
====
The Event Router is not supported for the Rsyslog log collector.
====

.Prerequisites

You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the *cluster-admin* role.
8 changes: 4 additions & 4 deletions modules/efk-logging-external-elasticsearch.adoc
@@ -3,12 +3,12 @@
// * logging/efk-logging-external.adoc

[id="efk-logging-external-elasticsearch_{context}"]
= Configuring Fluentd to send logs to an external Elasticsearch instance
= Configuring the log collector to send logs to an external Elasticsearch instance

Fluentd sends logs to the value of the `ES_HOST`, `ES_PORT`, `OPS_HOST`,
The log collector sends logs to the value of the `ES_HOST`, `ES_PORT`, `OPS_HOST`,
and `OPS_PORT` environment variables of the Elasticsearch deployment
configuration. The application logs are directed to the `ES_HOST` destination,
and operations logs to `OPS_HOST`.
and operations logs to `OPS_HOST`.

[NOTE]
====
@@ -28,7 +28,7 @@ an instance of Fluentd that you control and that is configured with the

To direct logs to a specific Elasticsearch instance:

. Edit the `fluentd` DaemonSet in the *openshift-logging* project:
. Edit the `fluentd` or `rsyslog` DaemonSet in the *openshift-logging* project:
+
[source,yaml]
----
9 changes: 7 additions & 2 deletions modules/efk-logging-external-syslog.adoc
@@ -3,10 +3,15 @@
// * logging/efk-logging-external.adoc

[id="efk-logging-external-syslog_{context}"]
= Configuring Fluentd to send logs to an external syslog server
= Configuring the log collector to send logs to an external syslog server

Use the `fluent-plugin-remote-syslog` plug-in on the host to send logs to an
external syslog server.
external syslog server.

[NOTE]
====
For Rsyslog, you can edit the Rsyslog ConfigMap to add support for syslog log forwarding using the *omfwd* module. See link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omfwd.html[omfwd: syslog Forwarding Output Module]. To send logs to a different Rsyslog instance, you can use the *omrelp* module. See link:https://www.rsyslog.com/doc/v8-stable/configuration/modules/omrelp.html[omrelp: RELP Output Module].
====
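
As a hedged sketch (the target hostname, port, and protocol are placeholders; consult the omfwd documentation for the full parameter list), an *omfwd* forwarding rule added to the Rsyslog configuration might look like:

----
# Forward all messages to an external syslog server over TCP (placeholder values)
action(type="omfwd" Target="syslog.example.com" Port="514" Protocol="tcp")
----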

.Prerequisite

44 changes: 27 additions & 17 deletions modules/efk-logging-fluentd-alerts.adoc
@@ -3,15 +3,32 @@
// * logging/efk-logging-fluentd.adoc

[id="efk-logging-fluentd-log-viewing_{context}"]
= Viewing Fluentd logs
= Viewing collected logs

How you view logs depends upon the `LOGGING_FILE_PATH` setting.
How you view logs generated by the log collector, Fluentd or Rsyslog, depends upon the `LOGGING_FILE_PATH` setting.

* If you are using `LOGGING_FILE_PATH=console`, Fluentd writes logs to `stdout/stderr`.
You can retrieve the logs with the `oc logs [-f] <pod_name>` command, where the `-f`
is optional, from the project where the pod is located.
+
----
$ oc logs -f <any-log-collector-pod> <1>
----
<1> Specify the name of a log collector pod. Use the `-f` option to follow what is being written into the logs.
+
For example
+
----
$ oc logs -f fluentd-ht42r -n openshift-logging
----
+
The contents of log files are printed out, starting with the oldest log.

* If `LOGGING_FILE_PATH` points to a file (the default), use the *logs* utility from the project
where the pod is located to print out the contents of the log files:
+
----
$ oc exec <any-fluentd-pod> -- logs <1>
$ oc exec <any-log-collector-pod> -- logs <1>
----
<1> Specify the name of a log collector pod. Note the space before `logs`.
+
@@ -24,22 +41,15 @@ $ oc exec fluentd-ht42r -n openshift-logging -- logs
To view the current setting:
+
----
oc -n openshift-logging set env daemonset/fluentd --list | grep LOGGING_FILE_PATH
----
$ oc -n openshift-logging set env daemonset/fluentd --list | grep LOGGING_FILE_PATH

* If you are using `LOGGING_FILE_PATH=console`, Fluentd writes logs to stdout/stderr`.
You can retrieve the logs with the `oc logs [-f] <pod_name>` command, where the `-f`
is optional, from the project where the pod is located.
+
----
$ oc logs -f <any-fluentd-pod> <1>
LOGGING_FILE_PATH=/etc/fluentd/fluentd.log
----
<1> Specify the name of a Fluentd pod. Use the `-f` option to follow what is being written into the logs.
+
For example
+
----
$ oc logs -f fluentd-ht42r -n openshift-logging
$ oc -n openshift-logging set env daemonset/rsyslog --list | grep LOGGING_FILE_PATH

LOGGING_FILE_PATH=/etc/rsyslog/rsyslog.log
----
+
The contents of log files are printed out, starting with the oldest log.


4 changes: 2 additions & 2 deletions modules/efk-logging-fluentd-collector.adoc
@@ -51,7 +51,7 @@ nodeSpec:

collection:
logs:
type: "fluentd" <1>
type: "rsyslog" <1>
----
<1> Set the log collector to `fluentd`, the default, or `rsyslog`.
<1> Set the log collector to `rsyslog` or `fluentd`.

9 changes: 7 additions & 2 deletions modules/efk-logging-fluentd-envvar.adoc
@@ -3,9 +3,14 @@
// * logging/efk-logging-fluentd.adoc

[id="efk-logging-fluentd-envvar_{context}"]
= Configuring Fluentd using environment variables
= Configuring the logging collector using environment variables

You can use link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[environment variables] to modify your Fluentd configuration.
You can use environment variables to modify the
configuration of the log collector, Fluentd or Rsyslog.

See the link:https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/README.md[Fluentd README] in GitHub or the
link:https://github.com/openshift/origin-aggregated-logging/blob/master/rsyslog/README.md[Rsyslog README] for lists of the
available environment variables.
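
For example (the variable name and value here are illustrative; check the README for the variables your collector actually supports), you could set an environment variable on the collector DaemonSet with:

----
$ oc -n openshift-logging set env daemonset/fluentd LOGGING_FILE_AGE=30
----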

.Prerequisite
