Josip Maslać, nabava.net
user uploads - need to be shared across servers
transparent to applications
setup and management - as "simple" as possible
possible solutions:
lsyncd ("configured" rsync)
DRBD (2-node only)
Ceph (looks complicated)
open source distributed storage
scale-out network-attached storage file system
acquired by Red Hat in 2011
licensed under GNU GPL v3
storing & accessing LARGE amounts of data (think PBs)
ensuring High Availability (file replication)
"backup"
software only
in userspace
runs on commodity hardware
heterogeneous hosts (XFS, ext4, Btrfs…)
any file system that supports extended attributes (xattrs) - see the sketch below
client-server
no external metadata server
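For example, a brick's backing filesystem (with xattr support, as above) might be prepared like this - a minimal sketch; the dedicated disk /dev/sdb1 is an assumption, and the larger XFS inode size is the commonly recommended way to leave room for xattrs:
# hypothetical dedicated brick disk /dev/sdb1
mkfs.xfs -i size=512 /dev/sdb1   # inode size 512 leaves room for xattrs
mkdir -p /bricks
mount /dev/sdb1 /bricks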
Servers/nodes ⇒ trusted storage pool (cluster)
Bricks
actual storage on disks
mkdir /bricks/brick1
Volumes
clients access data through volumes
by mounting a volume to a local folder
Distributed (default)
elastic hashing - a file's location is computed from its name, no metadata lookup
Replicated
synchronous replication!
high availability, self-healing
Striped (large files split across bricks)
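The type is picked when the volume is created; a sketch with hypothetical volume/brick names (replica 2 turns on replication, otherwise you get the distributed default):
# distributed (default): files hashed across bricks, no redundancy
gluster volume create dist-vol srv1:/bricks/b1 srv2:/bricks/b2
# replicated: every file written to both bricks
gluster volume create repl-vol replica 2 srv1:/bricks/b1 srv2:/bricks/b2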
we are building:
2 nodes: srv1 & srv2
replicated volume
Servers/nodes
Bricks
Volumes
# srv1 & srv2
apt-get install glusterfs-server
service glusterfs-server start
# srv1
gluster peer probe srv2
# srv1
gluster pool list
--
UUID                                  Hostname   State
0f5cfce3-d6ca-4887-afa1-e9465f148ff7  srv2       Connected
69091d19-aa4f-48be-a6a3-0661689cdde7  localhost  Connected
gluster
> peer probe srv2
> pool list
> ...
# srv1
mkdir -p /bricks/brick-srv1
# srv2
mkdir -p /bricks/brick-srv2
After this you DO NOT tamper with these folders!
…if you must look inside, read-only
# srv1 (or srv2); "force" is needed here because the bricks live on the root partition
gluster volume create test-volume replica 2 transport tcp \
srv1:/bricks/brick-srv1 srv2:/bricks/brick-srv2 force
gluster volume start test-volume
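Optionally, a quick sanity check - both bricks should show up as online:
gluster volume status test-volume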
accessing files
# client
mkdir -p /volumes/test
# manual mount
mount -t glusterfs srv1:/test-volume /volumes/test
# fstab entry
srv1:/test-volume /volumes/test glusterfs defaults,_netdev,acl 0 0
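One caveat: srv1 above is only contacted to fetch the volume layout at mount time. If srv1 might be down when a client boots, a fallback server can be named - a sketch; the exact option spelling (backupvolfile-server here) varies between glusterfs versions, so treat it as an assumption:
srv1:/test-volume /volumes/test glusterfs defaults,_netdev,backupvolfile-server=srv2,acl 0 0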
# THAT IS IT!
echo "123" > /volumes/test/file123.txt
ls /volumes/test/
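To see the replication at work, peek into the brick folders directly - read-only, as warned above:
# srv1 (look, don't touch!)
ls /bricks/brick-srv1/
# srv2 - the same file should appear here too
ls /bricks/brick-srv2/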
gluster volume list
--
test-volume
gluster volume info
--
Volume Name: test-volume
Type: Replicate
Volume ID: ece85c0a-e86c-44e1-8cc9-5ab2f2d697c0
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: srv1:/bricks/brick-srv1
Brick2: srv2:/bricks/brick-srv2
Options Reconfigured:
performance.readdir-ahead: on
gluster volume add-brick
gluster volume remove-brick
gluster volume set <VOLNAME> <KEY> <VALUE>
gluster volume set vol-name auth.allow CLIENT_IP
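For example, growing test-volume later with add-brick could look like this - a sketch; the new brick paths are hypothetical, and on a replica-2 volume bricks are added in pairs:
# srv1 (or srv2)
gluster volume add-brick test-volume \
    srv1:/bricks/brick-srv1-b srv2:/bricks/brick-srv2-b
gluster volume rebalance test-volume start   # spread existing files onto the new pair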
gluster volume profile
gluster volume top
gluster snapshot create
...
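A sketch of the profiling flow on the volume built above:
gluster volume profile test-volume start   # begin collecting per-brick stats
gluster volume profile test-volume info    # read them out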
drawback: network latency (replication is synchronous, so writes wait on the network)
GlusterFS
scalable
affordable
flexible
easy to use storage
Questions?
Contact info
twitter.com/josipmaslac