Commit eeddd3d

Merge pull request #98 from ewolinetz/uuid_indices (Uuid indices)

2 parents: 2d23373 + 1e627fe

File tree: 17 files changed (+1001, -765 lines)

README.md

Lines changed: 39 additions & 3 deletions
@@ -268,6 +268,11 @@ Scale down your Fluentd instances to 0.
 
     $ oc scale dc/logging-fluentd --replicas=0
 
+Or, if your Fluentd is deployed using the daemonset controller, unlabel all
+your nodes:
+
+    $ oc label nodes --all logging-infra-fluentd-
+
 Wait until they have properly terminated; this gives them time to properly
 flush their current buffer and send any logs they were processing to
 Elasticsearch. This helps prevent loss of data.
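The shutdown step in the hunk above can be sketched as a small script. Everything beyond the two `oc` commands shown in the README is an assumption: the `OC` override, the helper names, the `component=fluentd` selector, and the 300-second timeout are illustrative, not part of the documented procedure.

```shell
#!/bin/sh
# Minimal sketch: scale Fluentd down, then poll until no Fluentd pods
# remain, so buffers have been flushed to Elasticsearch before upgrading.

# OC can be overridden (e.g. for dry runs); defaults to the real client.
OC="${OC:-oc}"

wait_for_no_pods() {
  # $1 = label selector, $2 = max seconds to wait
  selector="$1"; deadline="$2"; waited=0
  while [ "$($OC get pods -l "$selector" --no-headers 2>/dev/null | wc -l)" -gt 0 ]; do
    [ "$waited" -ge "$deadline" ] && return 1   # gave up waiting
    sleep 5; waited=$((waited + 5))
  done
  return 0
}

scale_down_fluentd() {
  $OC scale dc/logging-fluentd --replicas=0
  # `component=fluentd` is an assumed pod label; adjust to your deployment.
  wait_for_no_pods "component=fluentd" 300
}
```

For a daemonset-based deployment you would instead unlabel the nodes (`oc label nodes --all logging-infra-fluentd-`) before waiting, as the README describes.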
@@ -287,8 +292,17 @@ Once your ES pods are confirmed to be terminated we can now pull in the latest
 EFK images to use as described [here](https://docs.openshift.org/latest/install_config/upgrading/manual_upgrades.html#importing-the-latest-images),
 replacing the default namespace with the namespace where logging was installed.
 
-With the latest images in your repository we can now begin to scale back up.
-We want to scale ES back up incrementally so that the cluster has time to rebuild.
+With the latest images in your repository we can now rerun the deployer to generate
+any missing or changed features.
+
+Be sure to delete your oauth client:
+
+    $ oc delete oauthclient --selector logging-infra=support
+
+Then follow the same steps as before for using the deployer.
+After the deployer completes, re-attach the persistent volumes you were using
+previously. Next, we want to scale ES back up incrementally so that the cluster
+has time to rebuild.
 
     $ oc scale dc/logging-es-{unique_name} --replicas=1
 
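The incremental scale-up in the hunk above can be sketched as a loop. This is not from the README: the `scale_up_es` helper, the `OC` override, and the fixed 30-second settle delay are illustrative assumptions; the README only says to scale ES back up one instance at a time so the cluster has time to rebuild.

```shell
#!/bin/sh
# Minimal sketch: bring several ES deploymentconfigs back up one at a
# time, pausing between each so the cluster can recover.
OC="${OC:-oc}"
SETTLE="${SETTLE:-30}"   # illustrative delay; watch cluster health instead if you can

scale_up_es() {
  # $@ = the dc names, e.g. logging-es-abc123 logging-es-def456
  for dc in "$@"; do
    $OC scale "dc/$dc" --replicas=1
    sleep "$SETTLE"   # give the cluster time to rebuild before the next node
  done
}
```

In practice you would replace the fixed delay with a check of ES cluster health, as described in the surrounding README text.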
@@ -304,4 +318,26 @@ recovered.
 We can now scale Kibana and Fluentd back up to their previous state. Since Fluentd
 was shut down and allowed to push its remaining records to ES in the previous
 steps it can now pick back up from where it left off with no loss of logs -- so long
-as the log files that were not read in are still available on the node.
+as the log files that were not read in are still available on the node.
+
+Note:
+If your previous deployment did not use a daemonset to schedule Fluentd pods, you
+will now need to label the nodes to deploy Fluentd to:
+
+    $ oc label nodes <node_name> logging-infra-fluentd=true
+
+Or, to deploy Fluentd to all your nodes:
+
+    $ oc label nodes --all logging-infra-fluentd=true
+
+With this latest version, Kibana will display indices differently in order
+to prevent users from being able to access the logs of previously created
+projects that have been deleted.
+
+Due to this change your old logs will not appear automatically. To migrate your
+old indices to the new format, rerun the deployer with `-v MODE=migrate` in addition
+to your prior flags. This should be run while your ES cluster is running, as the
+script will need to connect to it to make changes.
+Note: this only impacts non-operations logs; operations logs will appear the
+same as in previous versions. There should be minimal performance impact to ES
+while running this, and it will not perform an install.
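The migration rerun described in the hunk above can be sketched as follows. The `run_migration` helper and `OC` override are hypothetical, and the `KIBANA_HOSTNAME`/`PUBLIC_MASTER_URL` values are placeholders; the documented requirement is only that `MODE=migrate` be added to the flags from your original deployer run, with the ES cluster up.

```shell
#!/bin/sh
# Minimal sketch: rerun the deployer template in migrate mode.
OC="${OC:-oc}"
# Reuse your original deployer flags here, plus MODE=migrate; the
# hostname/URL values below are placeholders.
MIGRATE_FLAGS="${MIGRATE_FLAGS:-MODE=migrate,KIBANA_HOSTNAME=kibana.example.com,PUBLIC_MASTER_URL=https://localhost:8443}"

run_migration() {
  $OC process logging-deployer-template -v "$MIGRATE_FLAGS" | $OC create -f -
}
```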

deployment/README.md

Lines changed: 13 additions & 30 deletions
@@ -65,6 +65,14 @@ For examples in this document we will assume the `logging` project.
 You can use the `default` or another project if you want. This
 implementation has no need to run in any specific project.
 
+## Create missing templates
+
+If your installation did not create templates in the `openshift`
+namespace, the `logging-deployer-template` and `logging-deployer-account-template`
+templates may not exist. In that case you can create them with the following:
+
+    $ oc create -n openshift -f https://raw.githubusercontent.com/openshift/origin-aggregated-logging/v0.2/deployment/deployer.yaml ...
+
 ## Create the Deployer Secret
 
 Security parameters for the logging infrastructure
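The template check in the hunk above can be made conditional with a small sketch. The `have_template`/`ensure_templates` helpers are hypothetical; only the template names and the `openshift` namespace come from the README.

```shell
#!/bin/sh
# Minimal sketch: report whether the deployer templates already exist
# in the `openshift` namespace before trying to create them.
OC="${OC:-oc}"

have_template() {
  $OC get template "$1" -n openshift >/dev/null 2>&1
}

ensure_templates() {
  if have_template logging-deployer-template && \
     have_template logging-deployer-account-template; then
    echo "deployer templates present"
  else
    echo "templates missing; create them from deployer.yaml as shown above"
  fi
}
```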
@@ -98,20 +106,14 @@ An invocation supplying a properly signed Kibana cert might be:
 ## Create Supporting ServiceAccounts
 
 The deployer must run under a service account defined as follows:
+(Note: change `:logging:` below to match the project name.)
 
-    $ oc create -f - <<API
-    apiVersion: v1
-    kind: ServiceAccount
-    metadata:
-      name: logging-deployer
-    secrets:
-    - name: logging-deployer
-    API
-
-    $ oc policy add-role-to-user edit \
+    $ oc process -n openshift logging-deployer-account-template | oc create -f -
+    $ oc policy add-role-to-user edit --serviceaccount logging-deployer
+    $ oc policy add-role-to-user daemonset-admin --serviceaccount logging-deployer
+    $ oadm policy add-cluster-role-to-user oauth-editor \
              system:serviceaccount:logging:logging-deployer
 
-Note: change `:logging:` above to match the project name.
 
 The policy manipulation is required in order for the deployer pod to
 create secrets, templates, and deployments in the project. By default
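The service-account setup in the hunk above can be collected into one script. The `setup_deployer_account` helper, `OC`/`OADM` overrides, and `PROJECT` variable are assumptions for illustration; the individual commands are exactly those the diff adds, and the README notes `:logging:` must match your project name.

```shell
#!/bin/sh
# Minimal sketch: create the deployer service account and grant it the
# roles the deployer pod needs.
OC="${OC:-oc}"
OADM="${OADM:-oadm}"
PROJECT="${PROJECT:-logging}"   # placeholder; use your logging project name

setup_deployer_account() {
  $OC process -n openshift logging-deployer-account-template | $OC create -f -
  $OC policy add-role-to-user edit --serviceaccount logging-deployer
  $OC policy add-role-to-user daemonset-admin --serviceaccount logging-deployer
  $OADM policy add-cluster-role-to-user oauth-editor \
        "system:serviceaccount:$PROJECT:logging-deployer"
}
```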
@@ -156,12 +158,6 @@ You run the deployer by instantiating a template. Here is an example with some p
       -v KIBANA_HOSTNAME=kibana.example.com,PUBLIC_MASTER_URL=https://localhost:8443 \
       | oc create -f -
 
-If your installation did not create templates in the `openshift`
-namespace, the `logging-deployer-template` template may not exist. In
-that case you can just process the template source:
-
-    $ oc process -f https://raw.githubusercontent.com/openshift/origin-aggregated-logging/v0.1/deployment/deployer.yaml ...
-
 This creates a deployer pod and prints its name. Wait until the pod
 is running; this can take up to a few minutes to retrieve the deployer
 image from its registry. You can watch it with:
@@ -179,19 +175,6 @@ are given below.
 
 ## Deploy the templates created by the deployer
 
-### Supporting definitions
-
-Create the supporting definitions from template (you must be cluster admin):
-
-    $ oc process logging-support-template | oc create -f -
-
-Tip: Check the output to make sure that all objects were created
-successfully. If any were not, it is probably because one or more
-already existed from a previous deployment (potentially in a different
-project). You can delete them all before trying again:
-
-    $ oc process logging-support-template | oc delete -f -
-
 ### ElasticSearch
 
 The deployer creates the number of ElasticSearch instances specified by
