GlusterFS is a relatively new (but promising) file system aimed at providing a scalable network file system for bandwidth-intensive tasks such as media streaming, file sharing and so on. There are other alternatives I could have used instead, such as GPFS, although without a pretty substantial budget not many businesses will be able to adopt those.
Let's first set up our GlusterFS volume - on node A:
yum install centos-release-gluster
yum install glusterfs-server
echo GLUSTER01 > /etc/hostname
echo 10.0.0.2 GLUSTER02 >> /etc/hosts
echo 10.0.0.1 GLUSTER01 >> /etc/hosts
sudo systemctl enable glusterd
sudo systemctl start glusterd
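At this point it's worth a quick sanity check that the daemon is actually running and the CLI is available - it saves head scratching later:
systemctl status glusterd
gluster --version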
We'll also need to permit access to the following ports for GlusterFS, i.e. from node A to B and back:
111 / tcp - portmapper (rpcbind)
24007 / tcp - GlusterFS Daemon
24008 / tcp - GlusterFS Management
38465 to 38467 / tcp - GlusterFS NFS service
49152 to n / tcp - one port per brick, so depends on the number of bricks.
which translates to:
iptables -t filter -I INPUT 3 -p tcp --dport 111 -j ACCEPT
iptables -t filter -I INPUT 3 -p tcp --dport 24007:24008 -j ACCEPT
iptables -t filter -I INPUT 3 -p tcp --dport 38465:38467 -j ACCEPT
iptables -t filter -I INPUT 3 -p tcp --dport 49152 -j ACCEPT
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'
otherwise you might get:
peer probe: failed: Probe returned with unknown errno 107
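If your hosts are running firewalld rather than bare iptables, the equivalent would look something like the following (a sketch using the same port list as above - widen the brick port range to match your brick count):
firewall-cmd --permanent --add-port=111/tcp
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=38465-38467/tcp
firewall-cmd --permanent --add-port=49152/tcp
firewall-cmd --reload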
Continue by rinsing and repeating on the second server (adjusting the hostname accordingly).
Now let's check connectivity with each gluster peer:
GLUSTER01> gluster peer probe GLUSTER02
GLUSTER02> gluster peer probe GLUSTER01
We can then check the peer(s) status with:
gluster peer status
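For a more compact view of the trusted pool you can also run the following, which lists each peer and its connection state:
gluster pool list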
Let's now add an additional drive to each host - let's say udev names it /dev/sdb:
parted /dev/sdb
mktable msdos
mkpart primary xfs 0% 100%
quit
mkfs.xfs -L SHARED /dev/sdb1
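To find the UUID needed for the fstab entry below, query the freshly created filesystem (assuming the partition ended up as /dev/sdb1 as above):
blkid /dev/sdb1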
Ensure the filesystem is mounted at boot by adding an fstab entry (substituting in the UUID blkid reports):
echo 'UUID=yyyyyy-uuuu-444b-xxxx-a6a31a93dd2d /mnt/shared xfs defaults 0 0' >> /etc/fstab
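The mount point also needs to exist and be mounted before we create the volume on it (assuming /mnt/shared as per the fstab entry above):
mkdir -p /mnt/shared
mount /mnt/shared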
Rinse and repeat for the second host.
Create the volume with:
gluster volume create datastore replica 2 transport tcp GLUSTER01:/mnt/shared/datastore GLUSTER02:/mnt/shared/datastore
and then start it with:
gluster volume start datastore
To check the status of the volume use (on any node):
gluster volume info
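gluster volume info shows the volume's configuration; to confirm the brick processes themselves are online and listening you can also run:
gluster volume status datastore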
We should also restrict access to the volume, for example to allow only the 172.30.0.0/16 network:
gluster volume set datastore auth.allow "172.30.*"
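You can confirm the option took effect with the following (gluster volume get is available on reasonably recent releases; otherwise the option shows up under "Options Reconfigured" in gluster volume info):
gluster volume get datastore auth.allow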
We can now mount the GlusterFS volume itself on each node; firstly on GLUSTER01:
sudo modprobe fuse
sudo yum install fuse fuse-libs openib libibverbs -y
sudo yum install glusterfs glusterfs-fuse -y
mkdir -p /mount/gluster
mount -t glusterfs GLUSTER01:/datastore /mount/gluster
and on GLUSTER02:
sudo modprobe fuse
sudo yum install fuse fuse-libs openib libibverbs -y
sudo yum install glusterfs glusterfs-fuse -y
mkdir -p /mount/gluster
mount -t glusterfs GLUSTER02:/datastore /mount/gluster
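To have the mount survive a reboot you can also add it to fstab on each node (a sketch using the same mount point as above; the _netdev option makes sure the network is up before mounting, and on GLUSTER02 you would point at GLUSTER02:/datastore instead):
echo 'GLUSTER01:/datastore /mount/gluster glusterfs defaults,_netdev 0 0' >> /etc/fstab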
Then we can test the replication from GLUSTER01, e.g.:
touch /mount/gluster/test.txt
If it's present on GLUSTER02 we have success!
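On GLUSTER02 the file should show up both through the client mount and in the brick directory itself (the brick path being the one we passed to volume create):
ls -l /mount/gluster/
ls -l /mnt/shared/datastore/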
If things don't quite work the first time you can tail the log files to help troubleshoot:
tail -f /var/log/glusterfs/<name>.log
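If you're not sure of the exact log name, just list the directory - client mount logs are typically named after the mount point with the slashes swapped for dashes:
ls /var/log/glusterfs/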