NFS + Heartbeat + DRBD: Simulating NFS High Availability
Views: 4518
Published: 2019-06-08



Introduction to NFS

  NFS (Network File System) is one of the file systems supported by FreeBSD. It allows computers on a network to share resources over TCP/IP. In an NFS deployment, a local NFS client can transparently read and write files located on a remote NFS server, just as if they were local files.

 

Hands-on Case Study

NFS + DRBD + Heartbeat

1. Environment

OS: CentOS 6.5 on both servers

NFS-MASTER:
    IP:
        eth0: 172.16.50.199    # management IP
        eth1: 10.0.0.1         # private direct-link IP
        VIP:  172.16.50.249
    Software: NFS, DRBD, Heartbeat

NFS-BACKUP:
    IP:
        eth0: 172.16.50.193
        eth1: 10.0.0.2
        VIP:  172.16.50.249
    Software: NFS, DRBD, Heartbeat

Description:
    This case assumes two NFS storage servers, NFS-master and NFS-backup, with real IPs 172.16.50.199 and 172.16.50.193 respectively.
    NFS-master shares the directory /data and serves front-end clients through the VIP 172.16.50.249.
    Goal: if NFS-master goes down, its NFS service and the virtual IP automatically switch to the hot-standby NFS-backup, which keeps serving requests, so a primary outage has no business impact.
    One particular problem is how the existing NFS slaves can automatically resync with the new master. In practice, the data is replicated with DRBD, and the slaves sync against the same VIP that the master uses to serve clients. When the master dies, the VIP floats to the hot standby, and normally within a few seconds the new master can start the sync process and replicate to all the slaves.
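To make the failover goal concrete and checkable, a tiny helper like the following (our own sketch, not part of the original setup) can tell whether a node currently holds the service VIP by scanning the output of ip addr:

```shell
# Hypothetical helper (not from the article): succeed if the given
# "ip addr" output shows the VIP configured on this node.
holds_vip() {
    printf '%s\n' "$1" | grep -q "inet $2/"
}

# Example against captured output; on a live node pass "$(ip addr)".
sample='    inet 172.16.50.193/24 brd 172.16.50.255 scope global eth0
    inet 172.16.50.249/24 brd 172.16.50.255 scope global secondary eth0'
if holds_vip "$sample" 172.16.50.249; then echo "VIP held"; fi
```

Under normal operation this succeeds on NFS-master; after a failover it succeeds on NFS-backup instead.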

  

  

2. Preparation

Add an extra data disk to each server: 2 GB on master, 4 GB on backup.

MASTER:
[root@localhost /]# /etc/init.d/iptables stop
[root@localhost /]# setenforce 0                 # to disable permanently, edit /etc/selinux/config
[root@localhost /]# hostname NFS-master          # to change permanently, edit /etc/sysconfig/network
[root@NFS-master /]# bash
[root@NFS-master /]# route add -host 10.0.0.2 dev eth1    # static route simulating a direct heartbeat link
[root@NFS-master /]# echo "10.0.0.1 NFS-master" >> /etc/hosts
[root@NFS-master /]# echo "10.0.0.2 NFS-backup" >> /etc/hosts
[root@NFS-master /]# echo '/sbin/route add -host 10.0.0.2 dev eth1' >> /etc/rc.local

BACKUP:
[root@localhost /]# /etc/init.d/iptables stop
[root@localhost /]# setenforce 0
[root@localhost /]# hostname NFS-backup
[root@NFS-backup /]# bash
[root@NFS-backup /]# route add -host 10.0.0.1 dev eth1
[root@NFS-backup /]# echo "10.0.0.1 NFS-master" >> /etc/hosts
[root@NFS-backup /]# echo "10.0.0.2 NFS-backup" >> /etc/hosts
[root@NFS-backup /]# echo '/sbin/route add -host 10.0.0.1 dev eth1' >> /etc/rc.local
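The two nodes above differ only in the peer's address, so the route steps can be parameterized. A dry-run sketch (the function name is ours, not from the article) that prints the commands for one node rather than executing them:

```shell
# Dry run: print the peer-route commands for one node ($1 = peer IP on eth1).
# The two /etc/hosts entries are identical on both nodes, so they are not
# parameterized here.
peer_route_cmds() {
    printf 'route add -host %s dev eth1\n' "$1"
    printf "echo '/sbin/route add -host %s dev eth1' >> /etc/rc.local\n" "$1"
}

peer_route_cmds 10.0.0.2    # commands for NFS-master
peer_route_cmds 10.0.0.1    # commands for NFS-backup
```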

  

Installing Heartbeat

# Quick heartbeat install (via yum)

MASTER:

1. Download and install the EPEL release package
[root@NFS-master /]# wget -q http://mirrors.ustc.edu.cn/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@NFS-master /]# [ -f epel-release-6-8.noarch.rpm ] && rpm -ivh epel-release-6-8.noarch.rpm || echo "download failed"
[root@NFS-master /]# rpm -qa | grep epel

2. Install heartbeat
[root@NFS-master /]# yum -y install heartbeat

3. Configure heartbeat
[root@NFS-master /]# cd /etc/ha.d/
[root@NFS-master ha.d]# cp /usr/share/doc/heartbeat-3.0.4/ha.cf .
[root@NFS-master ha.d]# cp /usr/share/doc/heartbeat-3.0.4/authkeys .
[root@NFS-master ha.d]# cp /usr/share/doc/heartbeat-3.0.4/haresources .
[root@NFS-master ha.d]# cat ha.cf
logfacility local0
debugfile /var/log/ha-debug
logfile /var/log/ha.log
keepalive 2
deadtime 30
warntime 10
initdead 60
#bcast eth1
mcast eth1 225.0.0.250 694 1 0
auto_failback on
node NFS-master
node NFS-backup
crm no
#the end by Daniel
[root@NFS-master ha.d]# cat authkeys
auth 1
1 sha1 Hello
[root@NFS-master ha.d]# cat haresources
NFS-master IPaddr::172.16.50.249/24/eth0
NFS-backup IPaddr::172.16.50.250/24/eth0
[root@NFS-master ha.d]# chmod 600 authkeys

BACKUP:
Repeat exactly the same steps on NFS-backup: install the EPEL package and heartbeat, copy the same three files into /etc/ha.d/ with identical contents, and chmod 600 authkeys.

# Start heartbeat on both nodes
/etc/init.d/heartbeat start
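When editing haresources it is easy to mistype the VIP. A small sanity-check helper (ours, not part of heartbeat) that pulls the VIP out of an haresources line:

```shell
# Extract the VIP from an haresources line of the form
# "<node> IPaddr::<VIP>/<mask>/<iface> ...".
haresources_vip() {
    printf '%s\n' "$1" | sed -n 's/.*IPaddr::\([0-9.]*\)\/.*/\1/p'
}

haresources_vip 'NFS-master IPaddr::172.16.50.249/24/eth0'    # prints 172.16.50.249
```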

  

Installing DRBD

 

1. Partition the disk
[root@NFS-master /]# parted /dev/sdb mklabel gpt              # use a GPT partition table
[root@NFS-master /]# parted /dev/sdb mkpart primary 0 1024    # first primary partition, about 1024 MB
[root@NFS-master /]# parted /dev/sdb mkpart primary 1025 2146
[root@NFS-master /]# parted /dev/sdb p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  1024MB  1024MB               primary
 2      1024MB  2147MB  1123MB               primary

# On backup: partition 1 is 0-2048, partition 2 is 2049-4292
[root@NFS-backup /]# parted /dev/sdb p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  2048MB  2048MB               primary
 2      2049MB  4292MB  2243MB               primary

2. Install DRBD (via yum)
[root@NFS-master /]# mkdir /root/downloads
[root@NFS-master /]# cd /root/downloads/
[root@NFS-master downloads]# wget -q http://www.elrepo.org/elrepo-release-6-8.el6.elrepo.noarch.rpm
[root@NFS-master downloads]# rpm -ivh elrepo-release-6-8.el6.elrepo.noarch.rpm
[root@NFS-master downloads]# yum -y install drbd kmod-drbd84
[root@NFS-master downloads]# modprobe drbd
# If this fails with "FATAL: Module drbd not found.", run
# yum -y install kernel kernel-devel kernel-headers, reboot, then modprobe drbd again
[root@NFS-master ~]# echo '/sbin/modprobe drbd' >> /etc/rc.local

3. Configure DRBD
[root@NFS-master ~]# cat /etc/drbd.conf
global {
    usage-count no;
}
common {
    syncer {
        rate 100M;
        verify-alg crc32c;
    }
}
#primary for drbd1
resource data {
    protocol C;
    disk {
        on-io-error detach;
    }
    on NFS-master {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.0.0.1:7788;
        meta-disk /dev/sdb2[0];
    }
    on NFS-backup {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.0.0.2:7788;
        meta-disk /dev/sdb2[0];
    }
}
[root@NFS-master ~]# drbdadm create-md data
[root@NFS-master ~]# drbdadm up data
[root@NFS-master ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6, 2016-12-13 18:38:15
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:999984

4. Make this node primary and sync data to the peer
[root@NFS-master ~]# drbdadm -- --overwrite-data-of-peer primary data
[root@NFS-master ~]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6, 2016-12-13 18:38:15
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:999983 nr:0 dw:0 dr:1000662 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@NFS-master ~]# mkfs.ext4 -b 4096 /dev/drbd0    # format the DRBD device
[root@NFS-master ~]# tune2fs -c -1 /dev/drbd0

5. Mount and test
[root@NFS-master ~]# mkdir /md1
[root@NFS-master ~]# mount /dev/drbd0 /md1    # (mount step implied by the original)
[root@NFS-master ~]# cd /md1
[root@NFS-master md1]# for n in `seq 10`; do cp /bin/cat daniel$n; done    # write some test files
# Steps 4 and 5 were done on the master; now verify on the backup:
[root@NFS-backup ~]# drbdadm down data
[root@NFS-backup ~]# mkdir /md1
[root@NFS-backup ~]# mount /dev/sdb1 /md1/
# /md1 on the backup now contains the same files, so replication works
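The rest of the article keeps reading replica state from /proc/drbd. A small parser (our helper, not part of drbd-utils) that extracts the connection state (cs:) and the roles (ro:) from that output:

```shell
# Extract "cs:<state> ro:<local>/<peer>" fields from /proc/drbd-style text.
drbd_state() {
    printf '%s\n' "$1" | sed -n 's/.*cs:\([A-Za-z]*\) ro:\([^ ]*\).*/\1 \2/p'
}

status=' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
drbd_state "$status"    # prints: Connected Primary/Secondary
```

On a live node you would pass "$(cat /proc/drbd)". cs:Connected with ro:Primary/Secondary is the healthy state; cs:WFConnection or ro:Primary/Unknown means the peer is unreachable.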

  

Debugging DRBD together with Heartbeat

  Before moving on, run the relevant failover commands to confirm that heartbeat and DRBD cooperate correctly. Like a milestone in a project, getting DRBD configured and failing over under heartbeat's control is the key to this second stage.

1. Configure haresources
umount /md1
[root@NFS-master ha.d]# /etc/init.d/heartbeat stop    # stop heartbeat on BOTH nodes first
[root@NFS-master ~]# cat /etc/ha.d/haresources
NFS-master IPaddr::172.16.50.249/24/eth0 drbddisk::data Filesystem::/dev/drbd0::/data::ext4
NFS-backup IPaddr::172.16.50.250/24/eth0
# NFS-master: hostname
# IPaddr: VIP/netmask/interface
# drbddisk: the DRBD resource name (data)
# Filesystem: device /dev/drbd0, mount point /data, filesystem type ext4
# Configure this identically on both nodes

# Start heartbeat, master first
[root@NFS-master ~]# /etc/init.d/heartbeat start

# Test: kill the master (pausing the VM is best, to simulate a split brain), then check the backup
[root@NFS-backup /]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6, 2016-12-13 18:38:15
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:4100 nr:57975 dw:57995 dr:6204 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:4
[root@NFS-backup /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        15G  4.4G  9.6G  32% /
tmpfs           426M  4.0K  426M   1% /dev/shm
/dev/sda1        93M   63M   26M  72% /boot
/dev/sda5       178G   60M  169G   1% /home
/dev/drbd0      946M  1.7M  895M   1% /data
# /dev/drbd0 was mounted automatically: the backup has become primary
[root@NFS-backup /]# ip addr | grep 172
    inet 172.16.50.193/24 brd 172.16.50.255 scope global eth0
    inet 172.16.50.250/24 brd 172.16.50.255 scope global secondary eth0
    inet 172.16.50.249/24 brd 172.16.50.255 scope global secondary eth0

# Resume the master and check again: each side now reports the other as Unknown, i.e. split brain.
# To recover, first stop heartbeat on both nodes, then run the following on the secondary
# (the node whose changes will be discarded):
modprobe drbd
drbdadm secondary data
drbdadm up data
drbdadm disconnect data
drbdadm -- --discard-my-data connect data
[root@NFS-backup /]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6, 2016-12-13 18:38:15
 0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
    ns:0 nr:4 dw:58003 dr:6204 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
# Then on the primary:
[root@NFS-master /]# drbdadm connect data
# Start heartbeat again on both nodes and verify:
[root@NFS-master /]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6, 2016-12-13 18:38:15
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:8 nr:0 dw:28 dr:3866 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@NFS-master /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda5        15G  486M   14G   4% /
tmpfs           426M   80K  426M   1% /dev/shm
/dev/sda6        90G   56M   85G   1% /home
/dev/sda3        79G  3.9G   71G   6% /usr
/dev/sda1       9.8G  349M  8.9G   4% /var
/dev/drbd0      946M  1.7M  895M   1% /data
[root@NFS-master /]# ip addr | grep 172
    inet 172.16.50.199/24 brd 172.16.50.255 scope global eth0
    inet 172.16.50.249/24 brd 172.16.50.255 scope global secondary eth0
# Everything is back to normal
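The discard-side recovery sequence above is dangerous to type ad hoc, so it helps to generate it for review first. A dry-run sketch (the function is ours; it only prints the commands, it does not run them):

```shell
# Print the split-brain recovery commands for the node whose data will be
# DISCARDED ($1 = DRBD resource name). Run the output only after stopping
# heartbeat on both nodes, and double-check which node should lose its changes.
splitbrain_discard_cmds() {
    cat <<EOF
modprobe drbd
drbdadm secondary $1
drbdadm up $1
drbdadm disconnect $1
drbdadm -- --discard-my-data connect $1
EOF
}

splitbrain_discard_cmds data
```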

  

Installing NFS

# Install on BOTH nodes
[root@NFS-master ~]# yum -y install nfs-utils rpcbind

# Start the services (both nodes)
[root@NFS-master ~]# /etc/init.d/rpcbind start
[root@NFS-master ~]# /etc/init.d/nfs start
[root@NFS-master ~]# chkconfig rpcbind on
[root@NFS-master ~]# chkconfig nfs on

# Configure the export (both nodes)
[root@NFS-master ~]# cat /etc/exports
/data	10.0.0.*(rw,sync)    # /data is the shared directory; rw = read-write, sync = synchronous writes
[root@NFS-master ~]# /etc/init.d/nfs reload
[root@NFS-master ~]# showmount -e 10.0.0.1
Export list for 10.0.0.1:
/data 10.0.0.*
# Same on the backup, except: showmount -e 10.0.0.2
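The /etc/exports format above is terse, so a one-line helper (hypothetical, ours) that composes an entry makes the directory / client-spec / options structure explicit:

```shell
# Compose one /etc/exports line: directory, client spec, export options.
exports_line() {
    printf '%s\t%s(%s)\n' "$1" "$2" "$3"
}

exports_line /data '10.0.0.*' rw,sync    # prints a tab-separated exports entry
```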

  

Configuring the client mount

# Client IP: 10.0.0.3
service rpcbind start    # start rpcbind
chkconfig rpcbind on
showmount -e 172.16.50.249
# With a single NIC, also add: route add -host 172.16.50.249 dev eth0
mkdir /data
mount -t nfs 172.16.50.249:/data /data
ls /data          # the replicated files are there
touch /data/1     # write test; if it fails, loosen permissions on the master (777 for testing only;
                  # in production, grant access to the nfs user only)

# Test the failover: shut down the master, then check the backup
[root@NFS-backup /]# ip addr | grep 172
    inet 172.16.50.193/24 brd 172.16.50.255 scope global eth0
    inet 172.16.50.250/24 brd 172.16.50.255 scope global secondary eth0
    inet 172.16.50.249/24 brd 172.16.50.255 scope global secondary eth0
[root@NFS-backup /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        15G  4.4G  9.6G  32% /
tmpfs           426M  4.0K  426M   1% /dev/shm
/dev/sda1        93M   63M   26M  72% /boot
/dev/sda5       178G   60M  169G   1% /home
/dev/drbd0      946M  1.7M  895M   1% /data
[root@NFS-backup /]# cat /proc/drbd
version: 8.4.9-1 (api:1/proto:86-101)
GIT-hash: 9976da086367a2476503ef7f6b13d4567327a280 build by mockbuild@Build64R6, 2016-12-13 18:38:15
 0: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r-----
    ns:8 nr:4228 dw:4272 dr:1532 al:2 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:32
# No problems. Write more data from the client and check again.
# For now, leave heartbeat on the old master stopped.
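On the client it is worth verifying a failover automatically rather than by eye. A sketch (our helper, not from the article) that checks whether a mount point appears in /proc/mounts-style text:

```shell
# Succeed when the given mount point appears in the mounts table.
# $1 = text in /proc/mounts format, $2 = mount point.
is_mounted() {
    printf '%s\n' "$1" | awk -v mp="$2" '$2 == mp { found = 1 } END { exit !found }'
}

mounts='172.16.50.249:/data /data nfs rw,addr=172.16.50.249 0 0'
if is_mounted "$mounts" /data; then echo "mounted"; fi
# On a live client:
#   is_mounted "$(cat /proc/mounts)" /data || mount -t nfs 172.16.50.249:/data /data
```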

  

Using exportfs

# Using exportfs directly
# On the master:
[root@NFS-master ~]# cat /etc/exports
#/md1	10.0.0.*(rw,sync)
[root@NFS-master ~]# exportfs -o rw,sync,all_squash,anonuid=65534,anongid=65534,mp,fsid=2 10.0.0.0/24:/md1
[root@NFS-master ~]# showmount -e 10.0.0.1
Export list for 10.0.0.1:
/md1 10.0.0.*,10.0.0.0/24
# On the client:
umount /md1
mount -t nfs 172.16.50.249:/md1 /md1

  

Remounting the NFS export by hand after every failover, as we did above, is tedious, so we can write a script instead.

[root@NFS-backup ~]# vi /etc/ha.d/resource.d/rsmd1
FSID="1"
EXPORT_DIR="/md1"
EXPORT_OPTIONS="-o rw,sync,all_squash,anonuid=65534,anongid=65534,mp,fsid=$FSID"
EXPORT_CLIENTS="10.0.0.0/24"

exportfs_usage()
{
        cat <<EOF
usage: $0 {start|stop}
EOF
}

# NOTE: the body of exportfs_start was truncated in the original post;
# the version below is reconstructed to mirror exportfs_stop.
exportfs_start()
{
        fn="`/bin/mktemp`"
        exportfs ${EXPORT_OPTIONS} ${EXPORT_CLIENTS}:${EXPORT_DIR} > $fn 2>&1
        rc=$?
        # exportfs failed
        if [ $rc -ne 0 ]; then
                echo "export resource '${EXPORT_DIR}' error."
                exit $rc
        fi
        rm -f $fn
        echo "export resource '${EXPORT_DIR}' ok."
        exit 0
}

exportfs_stop()
{
        fn="`/bin/mktemp`"
        /etc/init.d/nfs restart
        exportfs -u ${EXPORT_CLIENTS}:${EXPORT_DIR} > $fn 2>&1
        rc=$?
        if [ $rc -eq 0 ]; then
                echo "unexport resource ${EXPORT_DIR} ok."
                exit 0
        fi
        rm -f $fn
        echo "unexport resource ${EXPORT_DIR} error."
        exit $rc
}

if [ $# -ne 1 ]; then
        exportfs_usage
        exit 1
fi

case $1 in
start)
        exportfs_start
        ;;
stop)
        exportfs_stop
        ;;
*)
        exportfs_usage
        exit 1
        ;;
esac
# This script (re)exports the NFS share when heartbeat brings the resource up

  

How do we use this script?

[root@NFS-backup ~]# cat /etc/ha.d/haresources
NFS-master IPaddr::172.16.50.249/8/eth1 drbddisk::data Filesystem::/dev/drbd0::/md1::ext4 rsmd1
NFS-backup IPaddr::172.16.50.250/8/eth1
# Now when heartbeat on the master goes down, the standby takes over, mounts drbd0 on /md1,
# and runs this script to re-export the share.

 

 


 

Reposted from: https://www.cnblogs.com/wazy/p/8310591.html
