docs/guides/file_sharing/glusterfs.md
Lines changed: 11 additions & 11 deletions
@@ -9,15 +9,15 @@ update: 11-Feb-2022
## Prerequisites

-* Proficiency with a command-line editor (we are using _vi_ in this example)
+* Proficiency with a commandline editor (using _vi_ in this example)
* A heavy comfort level with issuing commands from the command-line, viewing logs, and other general systems administrator duties
* All commands are run as the root user or sudo

## Introduction

GlusterFS is a distributed file system.

-It allows for storage of large amount of data distributed across clusters of servers with a very high availability.
+It allows storing large amounts of data distributed across clusters of servers with very high availability.

It is composed of a server part to be installed on all the nodes of the server clusters.
@@ -28,15 +28,15 @@ GlusterFS can operate in two modes:
* replicated mode: each node of the cluster has all the data.
* distributed mode: no data redundancy. If a storage fails, the data on the failed node is lost.

-Both modes can be used together to provide both a replicated and distributed file system as long as you have the right number of servers.
+Both modes can be used together to provide a replicated and distributed file system if you have the correct number of servers.

Data is stored inside bricks.

> A Brick is the basic unit of storage in GlusterFS, represented by an export directory on a server in the trusted storage pool.
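To make the brick idea concrete, here is a rough sketch of creating a 2-way replicated volume with one brick per node; the brick directories shown are hypothetical placeholders, not taken from the guide:

```
# Sketch only: a replicated volume with one brick on each node.
# /data/glusterfs/volume1/brick0 is a placeholder export directory.
$ sudo gluster volume create volume1 replica 2 \
    node1.cluster.local:/data/glusterfs/volume1/brick0 \
    node2.cluster.local:/data/glusterfs/volume1/brick0
```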
## Test platform

-Our fictitious platform is composed of two servers and a client, all Rocky Linux servers.
+Our fictitious platform comprises two servers and a client, all Rocky Linux servers.

* First node: node1.cluster.local - 192.168.1.10
* Second node: node2.cluster.local - 192.168.1.11
@@ -139,7 +139,7 @@ $ sudo firewall-cmd --reload
## Name resolution

-You can let DNS handle the name resolution of the servers in your cluster, or you can choose to relieve the servers of this task by inserting records for each of them in your `/etc/hosts` files. This will also keep things running even in the event of a DNS failure.
+You can let DNS handle the name resolution of the servers in your cluster, or you can choose to relieve the servers of this task by inserting records for each of them in your `/etc/hosts` files. This will also keep things running even during a DNS failure.
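For reference, the kind of `/etc/hosts` records this refers to, sketched with the addresses of the test platform (adapt to your own network):

```
# /etc/hosts on each machine (sketch)
192.168.1.10   node1.cluster.local   node1
192.168.1.11   node2.cluster.local   node2
```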
-We are ready to join the two nodes to the same pool.
+We are ready to join the two nodes in the same pool.

This command is to be performed only once on a single node (here on node1):
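As a sketch of what this step typically looks like with the standard `gluster` CLI and the test platform's hostnames:

```
# Run on node1 only: add node2 to the trusted storage pool.
$ sudo gluster peer probe node2.cluster.local

# Verify that both peers see each other.
$ sudo gluster peer status
```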
@@ -201,7 +201,7 @@ volume create: volume1: success: please start the volume to access data
!!! Note

-    As the return command says, a 2-node cluster is not the best idea in the world against split brain. But this will suffice for the purposes of our test platform.
+    As the return command says, a 2-node cluster is not the best idea in the world against split brain. But this will suffice for our test platform.

We can now start the volume to access data:
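A minimal sketch of that step, reusing the `volume1` name from the create message above:

```
# Start the volume so clients can mount it.
$ sudo gluster volume start volume1

# Check that the bricks on both nodes are online.
$ sudo gluster volume status volume1
```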
@@ -259,7 +259,7 @@ We can already restrict access on the volume a little bit:
```
$ sudo gluster volume set volume1 auth.allow 192.168.10.*
```

-It's as simple as that
+It is as simple as that.

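To confirm the restriction took effect, a quick check (sketch, same volume name):

```
# The auth.allow value should be listed under "Options Reconfigured".
$ sudo gluster volume info volume1
```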
## Clients access
@@ -291,9 +291,9 @@ total 0
```
-rw-r--r--. 2 root root 0 Feb 3 19:21 test
```

-Sound good! But what happens if the node 1 fails? It is the one that was specified when mounting the remote access.
+Sounds good! But what happens if node 1 fails? It is the one that was specified when mounting the remote access.

-Let's stop the node one:
+Let's stop node one:

```
$ sudo shutdown -h now
```
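On the failover point, here is a sketch of mounting the volume from a client while naming a fallback server, assuming the glusterfs FUSE client; the mount point and the exact option spelling may vary with the glusterfs release:

```
# Mount through node1, but allow the client to fetch the volume layout
# from node2 if node1 is unreachable at mount time (option name may
# differ between glusterfs versions).
$ sudo mount -t glusterfs \
    -o backup-volfile-servers=node2.cluster.local \
    node1.cluster.local:/volume1 /mnt
```

Once a first mount succeeds, the client already holds the list of usable nodes described below, so the option mainly matters when node1 is down at mount time.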
@@ -338,4 +338,4 @@ Upon connection, the glusterfs client receives a list of nodes it can address, w
## Conclusions

-While there are no current repositories, using the archived repositories that CentOS had for GlusterFS will still work. As outlined, GlusterFS is pretty easy to install and maintain. Using the command line tools is a pretty straight forward process. GlusterFS will help with creating and maintaining high-availability clusters for data storage and redundancy. You can find more information on GlusterFS and tool usage from the [official documentation pages.](https://docs.gluster.org/en/latest/)
+While there are no current repositories, using the archived repositories that CentOS had for GlusterFS will still work. As outlined, GlusterFS is easy to install and maintain, and the command-line tools are straightforward to use. GlusterFS will help create and maintain high-availability clusters for data storage and redundancy. You can find more information on GlusterFS and tool usage in the [official documentation pages](https://docs.gluster.org/en/latest/).