Commit c39df87

# Minor wording changes
1 parent 771464e commit c39df87

docs/guides/file_sharing/glusterfs.md

Lines changed: 11 additions & 11 deletions
@@ -9,15 +9,15 @@ update: 11-Feb-2022

## Prerequisites

-* Proficiency with a command-line editor (we are using _vi_ in this example)
+* Proficiency with a command line editor (using _vi_ in this example)
* A heavy comfort level with issuing commands from the command-line, viewing logs, and other general systems administrator duties
* All commands are run as the root user or sudo

## Introduction

GlusterFS is a distributed file system.

-It allows for storage of large amount of data distributed across clusters of servers with a very high availability.
+It allows storing large amounts of data distributed across clusters of servers with very high availability.

It is composed of a server part to be installed on all the nodes of the server clusters.
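
A minimal sketch of that server-side install step (it is not shown in this hunk, and it assumes the `glusterfs-server` package is reachable from a repository you have already configured, such as the archived CentOS ones mentioned in the guide's conclusion):

```
# Run on every node of the cluster (node1 and node2).
# Assumes a GlusterFS repository is already configured.
$ sudo dnf install -y glusterfs-server
```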

@@ -28,15 +28,15 @@ GlusterFS can operate in two modes:
* replicated mode: each node of the cluster has all the data.
* distributed mode: no data redundancy. If a storage fails, the data on the failed node is lost.

-Both modes can be used together to provide both a replicated and distributed file system as long as you have the right number of servers.
+Both modes can be used together to provide a replicated and distributed file system if you have the correct number of servers.

Data is stored inside bricks.

> A Brick is the basic unit of storage in GlusterFS, represented by an export directory on a server in the trusted storage pool.

## Test platform

-Our fictitious platform is composed of two servers and a client, all Rocky Linux servers.
+Our fictitious platform comprises two servers and a client, all Rocky Linux servers.

* First node: node1.cluster.local - 192.168.1.10
* Second node: node2.cluster.local - 192.168.1.11
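
To make "the correct number of servers" concrete, here is a sketch of a volume that is both replicated and distributed; the four nodes and the brick paths are hypothetical and go beyond the two-node test platform above:

```
# Hypothetical 4-node layout: bricks are grouped in the order listed,
# so each consecutive pair of two forms one replica set (2 x 2).
$ sudo gluster volume create volume1 replica 2 \
    node1:/data/brick1 node2:/data/brick1 \
    node3:/data/brick1 node4:/data/brick1
```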
@@ -139,7 +139,7 @@ $ sudo firewall-cmd --reload

## Name resolution

-You can let DNS handle the name resolution of the servers in your cluster, or you can choose to relieve the servers of this task by inserting records for each of them in your `/etc/hosts` files. This will also keep things running even in the event of a DNS failure.
+You can let DNS handle the name resolution of the servers in your cluster, or you can choose to relieve the servers of this task by inserting records for each of them in your `/etc/hosts` files. This will also keep things running even during a DNS failure.

```
192.168.10.10 node1.cluster.local
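# The remaining entries fall outside this hunk's context; hypothetical
# continuations following the same naming scheme would look like:
192.168.10.11 node2.cluster.local
192.168.10.12 client.cluster.local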
@@ -155,7 +155,7 @@ $ sudo systemctl enable glusterfsd.service glusterd.service
$ sudo systemctl start glusterfsd.service glusterd.service
```

-We are ready to join the two nodes to the same pool.
+We are ready to join the two nodes in the same pool.

This command is to be performed only once on a single node (here on node1):
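
The command itself sits below this hunk's context. In GlusterFS, nodes are pooled with `gluster peer probe`, so the step presumably looks like this sketch:

```
$ sudo gluster peer probe node2.cluster.local
peer probe: success
```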

@@ -201,7 +201,7 @@ volume create: volume1: success: please start the volume to access data

!!! Note

-    As the return command says, a 2-node cluster is not the best idea in the world against split brain. But this will suffice for the purposes of our test platform.
+    As the return command says, a 2-node cluster is not the best idea in the world against split brain. But this will suffice for our test platform.
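
As an aside on that split-brain warning, a common mitigation is a metadata-only arbiter brick; a hypothetical sketch that would require a third node (not part of this test platform):

```
# Hypothetical: replica 3 with one arbiter brick on a third node
$ sudo gluster volume create volume1 replica 3 arbiter 1 \
    node1:/data/brick1 node2:/data/brick1 node3:/data/arbiter1
```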

We can now start the volume to access data:
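
The exact block is outside this hunk's context; the standard command for this step would be a sketch like:

```
$ sudo gluster volume start volume1
volume start: volume1: success
```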

@@ -259,7 +259,7 @@ We can already restrict access on the volume a little bit:
$ sudo gluster volume set volume1 auth.allow 192.168.10.*
```

-It's as simple as that
+It is as simple as that.
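
To double-check that the option took effect, a quick sketch (volume name as used in the guide; output trimmed to the relevant line):

```
$ sudo gluster volume info volume1 | grep auth.allow
auth.allow: 192.168.10.*
```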

## Clients access

@@ -291,9 +291,9 @@ total 0
-rw-r--r--. 2 root root 0 Feb 3 19:21 test
```

-Sound good! But what happens if the node 1 fails? It is the one that was specified when mounting the remote access.
+Sounds good! But what happens if node 1 fails? It is the one that was specified when mounting the remote access.

-Let's stop the node one:
+Let's stop node one:

```
$ sudo shutdown -h now
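
# (Lines beyond this hunk are elided.) If mount-time failover matters, the
# glusterfs mount helper accepts fallback servers; a hypothetical client-side
# example using this guide's node and volume names, with /mnt as an assumed
# mount point:
$ sudo mount -t glusterfs -o backup-volfile-servers=node2.cluster.local \
    node1.cluster.local:/volume1 /mnt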
@@ -338,4 +338,4 @@ Upon connection, the glusterfs client receives a list of nodes it can address, w

## Conclusions

-While there are no current repositories, using the archived repositories that CentOS had for GlusterFS will still work. As outlined, GlusterFS is pretty easy to install and maintain. Using the command line tools is a pretty straight forward process. GlusterFS will help with creating and maintaining high-availability clusters for data storage and redundancy. You can find more information on GlusterFS and tool usage from the [official documentation pages.](https://docs.gluster.org/en/latest/)
+While there are no current repositories, using the archived repositories that CentOS had for GlusterFS will still work. As outlined, GlusterFS is pretty easy to install and maintain. Using the command line tools is a straightforward process. GlusterFS will help create and maintain high-availability clusters for data storage and redundancy. You can find more information on GlusterFS and tool usage from the [official documentation pages](https://docs.gluster.org/en/latest/).
