• I. Client Mounting

  The Gluster Native Client can be used on GNU/Linux clients for high concurrency, performance, and transparent failover. Gluster volumes can also be accessed using NFS v3. The NFS implementations in GNU/Linux clients and other operating systems have been tested extensively, e.g. FreeBSD, Mac OS X, as well as Windows 7 (Professional and Up) and Windows Server 2003; other NFS client implementations may also work with the Gluster NFS server. When using Microsoft Windows or Samba clients, volumes can be accessed via CIFS; for this access method, the Samba packages must be installed on the client.

  In short: GlusterFS supports three client types — Gluster Native Client, NFS, and CIFS. The Gluster Native Client is a FUSE-based client running in user space; it is the officially recommended client and exposes the full feature set of GlusterFS.

  • 1. Mounting with the Gluster Native Client

The Gluster Native Client is FUSE-based, so FUSE must be available on the client. It is the officially recommended client and supports high concurrency and efficient write performance.

Before installing the Gluster Native Client, verify that the FUSE module is loaded on the client and that the required modules are accessible, as follows:

[root@localhost ~]# modprobe fuse  # load the FUSE loadable kernel module (LKM) into the Linux kernel
[root@localhost ~]# dmesg | grep -i fuse  # verify that the FUSE module is loaded
[ 569.630373] fuse init (API version 7.22)

Install the Gluster Native Client:

[root@localhost ~]# yum -y install glusterfs-client  # install the glusterfs client package
[root@localhost ~]# mkdir /mnt/glusterfs  # create the mount point
[root@localhost ~]# mount.glusterfs 192.168.56.11:/gv1 /mnt/glusterfs/  # mount the gv1 volume
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G .4G 19G % /
devtmpfs 231M 231M % /dev
tmpfs 241M 241M % /dev/shm
tmpfs 241M 4.6M 236M % /run
tmpfs 241M 241M % /sys/fs/cgroup
/dev/sda1 197M 97M 100M % /boot
tmpfs 49M 49M % /run/user/
192.168.56.11:/gv1 .0G 312M .7G % /mnt/glusterfs
[root@localhost ~]# ll /mnt/glusterfs/  # list the contents of the mount point
total
-rw-r--r-- root root Aug : 100M.file
[root@localhost ~]# mount  # show mount information
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
......
192.168.56.11:/gv1 on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=,group_id=,default_permissions,allow_other,max_read=)

Options for mounting volumes manually:

When using the mount -t glusterfs command, the following options can be specified. Note that all options must be separated by commas.

backupvolfile-server=server-name  # if this option is specified when mounting the FUSE client, the server named here is used as the volfile server when the first volfile server fails

volfile-max-fetch-attempts=number of attempts  # number of attempts to fetch the volume file when mounting the volume

log-level=loglevel  # log level

log-file=logfile    # log file

transport=transport-type  # transport protocol to use

direct-io-mode=[enable|disable]  # enable or disable direct I/O mode

use-readdirp=[yes|no]  # when set to yes, forces the use of readdirp mode in the FUSE kernel module

For example:
# mount -t glusterfs -o backupvolfile-server=volfile_server2,use-readdirp=no,volfile-max-fetch-attempts=2,log-level=WARNING,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs

Automatically mounting volumes:

Besides mounting with the mount command, volumes can also be mounted automatically via /etc/fstab.

Syntax: HOSTNAME-OR-IPADDRESS:/VOLNAME MOUNTDIR glusterfs defaults,_netdev

For example:
192.168.56.11:/gv1 /mnt/glusterfs glusterfs defaults,_netdev
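As a sketch using the same volume and mount point as above, the entry can be appended to /etc/fstab and activated with mount -a; the trailing "0 0" (dump and fsck-pass) fields are conventional for network filesystems:

```shell
# Sketch: persist the GlusterFS mount across reboots via /etc/fstab.
# _netdev delays mounting until the network is up.
echo '192.168.56.11:/gv1 /mnt/glusterfs glusterfs defaults,_netdev 0 0' >> /etc/fstab
mount -a   # mounts every fstab entry that is not yet mounted
```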
  • II. Managing GlusterFS Volumes

(1) Stopping a volume

[root@gluster-node1 ~]# gluster volume stop gv1

(2) Deleting a volume

[root@gluster-node1 ~]# gluster volume delete gv1
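Both commands ask for interactive confirmation, and a volume must be stopped before it can be deleted. In scripts, the gluster CLI's --mode=script option answers the prompts automatically; a minimal sketch:

```shell
# Sketch: stop and delete a volume non-interactively.
# --mode=script suppresses the interactive (y/n) confirmation prompts.
gluster --mode=script volume stop gv1
gluster --mode=script volume delete gv1
```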

(3) Expanding a volume

GlusterFS supports expanding volumes online.

If the node being added is not yet part of the cluster, add it first with the following command:

Syntax: # gluster peer probe <SERVERNAME>

Syntax for expanding a volume: # gluster volume add-brick <VOLNAME> <NEW-BRICK>

[root@gluster-node1 ~]# gluster peer probe gluster-node3  # add gluster-node3 to the cluster
peer probe: success.
[root@gluster-node1 ~]# gluster volume add-brick test-volume gluster-node3:/storage/brick1 force  # expand the test-volume volume
volume add-brick: success
[root@gluster-node1 ~]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c--a382-0a838ee63935
Status: Started
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Brick3: gluster-node3:/storage/brick1   # the newly added brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@gluster-node1 ~]# gluster volume rebalance test-volume start  # after adding a brick, rebalance the volume so files are redistributed onto the newly added brick
volume rebalance: test-volume: success: Rebalance on test-volume has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: ca58bd21-11a5--bb2a-8f9079982394
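As the message above notes, the progress of the rebalance can be checked with the status subcommand; a minimal sketch:

```shell
# Poll until the status column shows "completed".
gluster volume rebalance test-volume status
# A rebalance that is still running can be aborted if necessary:
# gluster volume rebalance test-volume stop
```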

(4) Shrinking a volume

Shrinking a volume works much like expanding one: it is done brick by brick.

Syntax: # gluster volume remove-brick <VOLNAME> <BRICKNAME> start

[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 start  # remove the brick
volume remove-brick start: success
ID: dd0004f0-b3e6-45d6-80ed-90506dc16159
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 status  # check the status of the remove-brick operation
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
gluster-node3 0Bytes completed ::
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node3:/storage/brick1 commit  # once the status shows completed, commit the remove-brick operation
volume remove-brick commit: success
[root@gluster-node1 ~]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c--a382-0a838ee63935
Status: Started
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

(5) Migrating a volume

To replace a brick on a distributed volume, add a new brick and then remove the brick being replaced. The removal triggers a rebalance, which migrates the data from the removed brick onto the newly added brick.

Note: the "replace-brick" command can only be used for replacement on distributed-replicated or replicated volumes.
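For replicated or distributed-replicated volumes, a brick can be swapped directly with replace-brick. A minimal sketch, assuming a hypothetical replicated volume named rep-volume and an empty replacement brick prepared on gluster-node3 (in recent GlusterFS releases only the "commit force" form is supported, after which self-heal copies the data onto the new brick):

```shell
# Hypothetical sketch: replace a brick on a replicated volume.
gluster volume replace-brick rep-volume \
  gluster-node2:/storage/brick1 gluster-node3:/storage/brick1 \
  commit force
# Trigger self-heal so the new brick is repopulated, then watch progress.
gluster volume heal rep-volume full
gluster volume heal rep-volume info
```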

(1) Initial configuration of the test-volume volume
[root@gluster-node1 gv1]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c--a382-0a838ee63935
Status: Started
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node2:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

(2) Files in the test-volume mount directory and their actual on-brick locations
[root@gluster-node1 gv1]# ll
total
-rw-r--r-- root root Aug : file1
-rw-r--r-- root root Aug : file2
-rw-r--r-- root root Aug : file3
-rw-r--r-- root root Aug : file4
-rw-r--r-- root root Aug : file5
[root@gluster-node1 gv1]# ll /storage/brick1/
total
-rw-r--r-- root root Aug : file1
-rw-r--r-- root root Aug : file2
-rw-r--r-- root root Aug : file5
[root@gluster-node2 ~]# ll /storage/brick1/
total
-rw-r--r-- root root Aug file3
-rw-r--r-- root root Aug file4

(3) Add a new brick gluster-node3:/storage/brick1
[root@gluster-node1 ~]# gluster volume add-brick test-volume gluster-node3:/storage/brick1/ force
volume add-brick: success

(4) Start the remove-brick operation
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 start
volume remove-brick start: success
ID: 2acdaebb-25a9-477c-807e-980a6086796e

(5) Check that the remove-brick status shows completed
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 status
Node Rebalanced-files size scanned failures skipped status run time in h:m:s
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
gluster-node2 0Bytes completed ::

(6) Commit the removal of the old brick
[root@gluster-node1 ~]# gluster volume remove-brick test-volume gluster-node2:/storage/brick1 commit
volume remove-brick commit: success

(7) Latest configuration of test-volume
[root@gluster-node1 ~]# gluster volume info
Volume Name: test-volume
Type: Distribute
Volume ID: 26a625bb-301c--a382-0a838ee63935
Status: Started
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: gluster-node1:/storage/brick1
Brick2: gluster-node3:/storage/brick1
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
nfs.disable: on

(8) Check the files stored on the new brick; the files originally stored on gluster-node2 have moved to gluster-node3
[root@gluster-node3 ~]# ll /storage/brick1/
total
-rw-r--r-- root root Aug file3
-rw-r--r-- root root Aug file4

(6) Quotas

[root@gluster-node1 ~]# gluster volume quota test-volume enable    # enable quota
volume quota : success
[root@gluster-node1 ~]# gluster volume quota test-volume disable  # disable quota
volume quota : success
[root@gluster-node1 ~]# mount -t glusterfs 127.0.0.1:/test-volume /gv1  # mount the test-volume volume
[root@gluster-node1 ~]# mkdir /gv1/quota  # create the directory to be limited
[root@gluster-node1 ~]# gluster volume quota test-volume limit-usage /quota 10MB  # set a limit on the /gv1/quota directory
[root@gluster-node1 ~]# gluster volume quota test-volume list  # list the quota settings
Path Hard-limit Soft-limit Used Available Soft-limit exceeded? Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/quota .0MB %(.0MB) 0Bytes .0MB No No
[root@gluster-node1 ~]# gluster volume set test-volume features.quota-timeout   # set the cache timeout for quota information
[root@gluster-node1 quota]# cp /gv1/20M.file .  # copy a 20M file into /gv1/quota; although it exceeds the limit it still succeeds, since the limit is small and enforcement is affected by the quota accounting algorithm
[root@gluster-node1 quota]# cp /gv1/20M.file ./20Mb.file  # copy another 20M file; now the directory quota is exceeded
cp: cannot create regular file ‘./20Mb.file’: Disk quota exceeded
[root@gluster-node1 gv1]# gluster volume quota test-volume remove /quota  # remove the quota setting for a directory
volume quota : success

Note:

The quota feature limits the disk space of a particular directory under the mount point, e.g. the /mnt/glusterfs/data directory; it does not limit the space of the volume as a whole.
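Beyond the hard limit used above, limit-usage also accepts an optional soft-limit percentage (GlusterFS's default soft limit is 80% of the hard limit): crossing the soft limit only produces log alerts, while the hard limit rejects further writes. A minimal sketch, assuming the same test-volume and /quota directory as above:

```shell
# Sketch: 10MB hard limit with a 75% soft limit on the /quota directory.
gluster volume quota test-volume limit-usage /quota 10MB 75%
# Show the configured limits for just this path.
gluster volume quota test-volume list /quota
```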

(7) Viewing I/O information

 The Profile command provides an interface for viewing the I/O statistics of every brick in a volume.

[root@gluster-node1 ~]# gluster volume profile test-volume start  # start profiling; I/O information can then be viewed
Starting volume profile on test-volume has been successful
[root@gluster-node1 ~]# gluster volume profile test-volume info  # view the I/O information of each brick
Brick: gluster-node1:/storage/brick1
------------------------------------
Cumulative Stats:
Block Size: 32768b+ 131072b+
No. of Reads:
No. of Writes:
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us FORGET
0.00 0.00 us 0.00 us 0.00 us RELEASE
0.00 0.00 us 0.00 us 0.00 us RELEASEDIR
Duration: seconds
Data Read: bytes
Data Written: bytes
Interval Stats:
Duration: seconds
Data Read: bytes
Data Written: bytes
Brick: gluster-node3:/storage/brick1
------------------------------------
Cumulative Stats:
Block Size: 1024b+ 2048b+ 4096b+
No. of Reads:
No. of Writes:
Block Size: 8192b+ 16384b+ 32768b+
No. of Reads:
No. of Writes:
Block Size: 65536b+ 131072b+
No. of Reads:
No. of Writes:
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us RELEASE
0.00 0.00 us 0.00 us 0.00 us RELEASEDIR
Duration: seconds
Data Read: bytes
Data Written: bytes
Interval Stats:
Duration: seconds
Data Read: bytes
Data Written: bytes
[root@gluster-node1 ~]# gluster volume profile test-volume stop  # stop profiling when you are finished
Stopping volume profile on test-volume has been successful

(8) Top monitoring

The Top command lets you view brick performance metrics such as read, write, file open calls, file read calls, file write calls, directory open calls, and directory read calls.

Each of these views accepts a top count (list-cnt); the default is 100.

# gluster volume top VOLNAME open [brick BRICK-NAME] [list-cnt]    // view the open fds

[root@gluster-node1 ~]# gluster volume top test-volume open brick gluster-node1:/storage/brick1 list-cnt
Brick: gluster-node1:/storage/brick1
Current open fds: , Max open fds: , Max openfd time: -- ::24.099217
Count filename
=======================
/.txt
/.txt
/.txt
# gluster volume top VOLNAME read [brick BRICK-NAME] [list-cnt]  // view the most frequent read calls
[root@gluster-node1 ~]# gluster volume top test-volume read brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
/20M.file
# gluster volume top VOLNAME write [brick BRICK-NAME] [list-cnt]  // view the most frequent write calls
[root@gluster-node1 ~]# gluster volume top test-volume write brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
/20M.file
# gluster volume top VOLNAME opendir [brick BRICK-NAME] [list-cnt]
# gluster volume top VOLNAME readdir [brick BRICK-NAME] [list-cnt]  // view the most frequent directory calls
[root@gluster-node1 ~]# gluster volume top test-volume opendir brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
/quota
[root@gluster-node1 ~]# gluster volume top test-volume readdir brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Count filename
=======================
/quota
# gluster volume top VOLNAME read-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt]  // view the read performance of each brick
[root@gluster-node1 ~]# gluster volume top test-volume read-perf bs count brick gluster-node3:/storage/brick1
Brick: gluster-node3:/storage/brick1
Throughput 42.67 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
/20M.file -- ::24.7443
# gluster volume top VOLNAME write-perf [bs blk-size count count] [brick BRICK-NAME] [list-cnt]  // view the write performance of each brick
[root@gluster-node1 ~]# gluster volume top test-volume write-perf bs count brick gluster-node1:/storage/brick1
Brick: gluster-node1:/storage/brick1
Throughput 16.00 MBps time 0.0000 secs
MBps Filename Time
==== ======== ====
/quota/20Mb.file -- ::21.957635
/quota/20M.file -- ::02.767068