@@ -268,6 +268,11 @@ Scale down your Fluentd instances to 0.
$ oc scale dc/logging-fluentd --replicas=0
+ Or, if your Fluentd is deployed using the daemonset controller, unlabel all of
+ your nodes:
+
+ $ oc label nodes --all logging-infra-fluentd-
+
Wait until they have fully terminated; this gives them time to properly
flush their current buffer and send any logs they were processing to
Elasticsearch. This helps prevent loss of data.
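+
+ For example, you can watch the Fluentd pods until none remain. This is only a
+ sketch: it assumes the default `logging` project and the `component=fluentd`
+ label applied by the logging deployer; adjust both to match your installation.
+
+ $ oc get pods -n logging -l component=fluentd -w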
@@ -287,8 +292,17 @@ Once your ES pods are confirmed to be terminated we can now pull in the latest
EFK images to use as described [here](https://docs.openshift.org/latest/install_config/upgrading/manual_upgrades.html#importing-the-latest-images),
replacing the default namespace with the namespace where logging was installed.
- With the latest images in your repository we can now begin to scale back up.
- We want to scale ES back up incrementally so that the cluster has time to rebuild.
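+
+ For example, images can be re-imported per imagestream. This is only a sketch:
+ the imagestream name below is hypothetical, so list the imagestreams in your
+ logging namespace first and re-import each one that is present.
+
+ $ oc get imagestreams -n logging
+ $ oc import-image logging-fluentd:latest -n logging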
+ With the latest images in your repository, we can now rerun the deployer to
+ generate any missing or changed features.
+
+ Be sure to delete your oauth client first:
+
+ $ oc delete oauthclient --selector logging-infra=support
+
+ Then proceed to follow the same steps you used previously to run the deployer;
+ a sketch is shown below.
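+
+ If the original deployment was created by processing the deployer template, the
+ rerun might look like the following. This is only a sketch: the template
+ namespace and the parameter values are assumptions, so substitute whatever you
+ used for your original deployment.
+
+ $ oc process logging-deployer-template -n openshift \
+      -v KIBANA_HOSTNAME=kibana.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://localhost:8443 \
+      | oc create -f -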
+ After the deployer completes, re-attach the persistent volumes you were using
+ previously; one way to do this is sketched below.
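+
+ A sketch of re-attaching an existing claim to the Elasticsearch deployment
+ configuration. The volume name `elasticsearch-storage` and the claim name
+ `logging-es-1` are assumptions; use the names from your original deployment.
+
+ $ oc volume dc/logging-es-{unique_name} --add --overwrite \
+      --name=elasticsearch-storage --type=persistentVolumeClaim \
+      --claim-name=logging-es-1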
+ Next, we want to scale ES back up incrementally so that the cluster has time
+ to rebuild.

$ oc scale dc/logging-es-{unique_name} --replicas=1
@@ -304,4 +318,26 @@ recovered.
We can now scale Kibana and Fluentd back up to their previous state. Since Fluentd
was shut down and allowed to push its remaining records to ES in the previous
steps, it can now pick back up from where it left off with no loss of logs -- so long
- as the log files that were not read in are still available on the node.
+ as the log files that were not read in are still available on the node.
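+
+ For example, a sketch of scaling Kibana back up. The deployment configuration
+ name `logging-kibana` is an assumption (verify with `oc get dc`), and Fluentd
+ scheduling is covered by the labeling note below.
+
+ $ oc scale dc/logging-kibana --replicas=1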
+
+ Note:
+ If your previous deployment did not use a daemonset to schedule Fluentd pods,
+ you will now need to label the nodes on which Fluentd should be deployed:
+
+ $ oc label nodes <node_name> logging-infra-fluentd=true
+
+ Or, to deploy Fluentd to all of your nodes:
+
+ $ oc label nodes --all logging-infra-fluentd=true
+
+ With this latest version, Kibana displays indices differently in order to
+ prevent users from being able to access the logs of previously created
+ projects that have since been deleted.
+
+ Due to this change, your old logs will not appear automatically. To migrate your
+ old indices to the new format, rerun the deployer with `-v MODE=migrate` in
+ addition to your prior flags; a sketch is shown below. This should be run while
+ your ES cluster is running, as the script will need to connect to it to make
+ changes.
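+
+ Continuing the earlier sketch, a migration run might look like this. The
+ namespace and parameter values are assumptions; keep whatever flags you used
+ for your original deployment and add `MODE=migrate`.
+
+ $ oc process logging-deployer-template -n openshift \
+      -v MODE=migrate,KIBANA_HOSTNAME=kibana.example.com,ES_CLUSTER_SIZE=1,PUBLIC_MASTER_URL=https://localhost:8443 \
+      | oc create -f -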
+ Note: This only impacts non-operations logs; operations logs will appear the
+ same as in previous versions. There should be minimal performance impact to ES
+ while running this, and it will not perform an install.