Wednesday, 15 February 2017

Setting up replication with GlusterFS on CentOS 7

GlusterFS is a relatively new (but promising) file system aimed at providing a scalable network file system for bandwidth-intensive tasks such as media streaming, file sharing and so on. There are alternatives I could have used instead, such as GPFS, although unless you have a pretty substantial budget not many businesses will be able to adopt it.

Let's first set up our GlusterFS volume, starting on node A (GLUSTER01):

curl https://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/EPEL.repo/glusterfs-epel.repo -o /etc/yum.repos.d/glusterfs-epel.repo

yum install glusterfs-server

echo GLUSTER01 > /etc/hostname
echo 10.0.0.2 GLUSTER02 >> /etc/hosts
echo 10.0.0.1 GLUSTER01 >> /etc/hosts

sudo systemctl enable glusterd
sudo systemctl start glusterd
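
Before going any further, it's worth a quick sanity check that the daemon actually came up:

systemctl status glusterd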

Rinse and repeat for the second server.
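
The only difference on the second box is the hostname - the /etc/hosts entries stay exactly the same:

echo GLUSTER02 > /etc/hostname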

Now let's check connectivity with each gluster peer:

GLUSTER01> gluster peer probe GLUSTER02
GLUSTER02> gluster peer probe GLUSTER01

I got the following error when attempting to probe the peer:

peer probe: failed: Probe returned with unknown errno 107
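
Errno 107 is ENOTCONN ("Transport endpoint is not connected"), which in practice usually means the other node can't be reached on the management port at all. You can test this from the probing node without installing anything, using bash's built-in /dev/tcp:

timeout 3 bash -c '< /dev/tcp/GLUSTER02/24007' && echo "24007 reachable"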

The cause? I had foolishly forgotten to add the relevant rules to iptables:

iptables -t filter -I INPUT 3 -p tcp --dport 24007:24008 -j ACCEPT # glusterd and management
iptables -t filter -I INPUT 3 -p tcp --dport 38465:38467 -j ACCEPT # NFS access only
iptables -t filter -I INPUT 3 -p tcp --dport 49152 -j ACCEPT # one port per brick, counting up from 49152

sudo sh -c 'iptables-save > /etc/sysconfig/iptables'

(Wrapped in sh -c so the redirection itself runs as root - with a bare sudo iptables-save > ... the shell would open the file as the unprivileged user.)
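
As an aside: CentOS 7 ships with firewalld by default, so if you haven't swapped it out for plain iptables, the equivalent would be something along these lines (untested on my setup):

firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=38465-38467/tcp
firewall-cmd --permanent --add-port=49152/tcp
firewall-cmd --reload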

We can then check the peer(s) status with:

gluster peer status
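
Expect output along these lines on each node (the UUID will obviously differ, and the state should read "Peer in Cluster (Connected)"):

Number of Peers: 1

Hostname: GLUSTER02
Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State: Peer in Cluster (Connected)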

Let's now add an additional drive to each host - let's say udev names it /dev/sdb. The mktable/mkpart commands below are typed at the parted prompt:

parted /dev/sdb
mktable msdos
mkpart primary xfs 0% 100%
quit

mkfs.xfs -L SHARED /dev/sdb1
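
The fstab entry below references the filesystem by UUID rather than device name - grab the real UUID for your partition with:

blkid /dev/sdb1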

Ensure the mount point exists, add the mount to fstab (substituting the UUID blkid reported) and mount it:

mkdir -p /mnt/shared
echo 'UUID=yyyyyy-uuuu-444b-xxxx-a6a31a93dd2d /mnt/shared xfs defaults 0 0' >> /etc/fstab
mount /mnt/shared

Rinse and repeat for the second host.

Create the volume with:

gluster volume create datastore replica 2 transport tcp GLUSTER01:/mnt/shared/gluster-datastore GLUSTER02:/mnt/shared/gluster-datastore

and then start it with:

gluster volume start datastore

To check the status of the volume, run (on any node):

gluster volume info
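
volume info reports the configuration; to see the runtime state of each brick (process IDs, ports and whether they're online) there is also:

gluster volume status datastore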

We can now mount the GlusterFS volume as follows on GLUSTER01:

mkdir /mnt/shared/data
mount -t glusterfs GLUSTER01:/datastore /mnt/shared/data

and on GLUSTER02:

mkdir /mnt/shared/data
mount -t glusterfs GLUSTER02:/datastore /mnt/shared/data
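
A quick way to prove replication is actually working is to write a file through the mount on one node and read it back on the other (test.txt is just an arbitrary name):

GLUSTER01> echo "hello replication" > /mnt/shared/data/test.txt
GLUSTER02> cat /mnt/shared/data/test.txt

To have the mounts come back after a reboot, add an fstab entry such as GLUSTER01:/datastore /mnt/shared/data glusterfs defaults,_netdev 0 0 on each node (pointing at its own hostname, as above).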




