Keepalived Explained
1. Keepalived Overview
Overview:
Keepalived was originally designed as a companion tool for LVS, handling failover of the load-balancing director and health checking of the real-server (RS) nodes. It has since been adopted in many other scenarios that need fault tolerance. Keepalived itself is built on VRRP (Virtual Router Redundancy Protocol), an open IETF standard (RFC 3768/5798); Cisco's proprietary counterpart is HSRP.
2. Keepalived Design Principles
2.1 Why can keepalived provide high availability?
Because it combines two mechanisms: VRRP-based failover of a virtual IP between a master node and one or more backup nodes, and health checking of the real-server pool so that failed back ends are removed from service automatically.
2.2 What modules does keepalived consist of?
- core module   // the core of keepalived: starts and maintains the main process, and loads and parses the global configuration file
- check module  // performs health checks on the nodes in the real-server pool
- VRRP module   // runs the heartbeat (VRRP advertisements) between master and backup
2.3 How keepalived implements hot standby
- Several hosts are grouped in software into a hot-standby group that serves clients through a shared virtual IP (VIP). At any given moment only one host in the group is active; the others stay in a redundant (standby) state. When the active host fails, one of the standby hosts automatically takes over the VIP and continues to serve, keeping the architecture stable. A minimal configuration sketch follows.
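As a minimal sketch of that idea (the interface name, VIP, and password below are illustrative assumptions, not values from this document), two nodes can share a VIP with nothing more than a vrrp_instance block; the peer uses the same VRID but `state BACKUP` and a lower priority:

```
! /etc/keepalived/keepalived.conf -- minimal hot-standby sketch (placeholder values)
vrrp_instance VI_EXAMPLE {
    state MASTER            # the peer node would use "state BACKUP"
    interface eth0          # NIC that carries the VIP (assumption)
    virtual_router_id 10    # must match on both nodes
    priority 100            # the peer uses a lower value, e.g. 90
    advert_int 1            # advertisement interval in seconds
    authentication {
        auth_type PASS
        auth_pass example1  # simple shared password (illustrative)
    }
    virtual_ipaddress {
        10.0.0.100/24       # the shared VIP (illustrative)
    }
}
```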
3. The VRRP Protocol
3.1 Introduction
VRRP (Virtual Router Redundancy Protocol) is a protocol for providing router redundancy.
VRRP groups two or more routers into one virtual device that presents one or more virtual router IP addresses to the outside. Inside the group, the router that actually owns the external IP (or the one chosen by the election algorithm) becomes the MASTER as long as it is working normally. The MASTER performs all network functions on behalf of the virtual router IP: answering ARP requests, handling ICMP, forwarding traffic, and so on. The other devices do not own the IP; they are in the BACKUP state and, apart from receiving the MASTER's VRRP advertisements, perform no external network functions. When the MASTER fails, a BACKUP takes over its network functions.
When configuring VRRP, each router is given a virtual router ID (VRID) and a priority. The VRID groups the routers: routers with the same VRID form one virtual router. The VRID is an integer from 1 to 255. Within a group, the MASTER is elected by priority, with the highest priority winning; the priority is also an integer in the 0-255 range (255 is reserved for the owner of the real address, 0 signals that a master is stepping down).
VRRP transmits its data over multicast. VRRP packets are sent from a special virtual source MAC address rather than the NIC's own MAC address. While VRRP is running, only the MASTER periodically sends advertisements, announcing that it is working normally and which virtual router IP(s) it holds. BACKUP routers only receive VRRP data and do not send it. If no advertisement is received from the MASTER within a certain time, each BACKUP declares itself MASTER, starts sending advertisements, and a new MASTER election takes place.
The VRRP working process is:
The routers in the virtual router elect a Master by priority. The Master sends a gratuitous ARP announcing the virtual MAC address to the devices and hosts connected to it, and then takes over packet forwarding.
The Master periodically sends VRRP advertisements announcing its configuration (priority, etc.) and its health.
If the Master fails, the Backup routers in the virtual router elect a new Master by priority.
When the virtual router switches state and the Master role moves from one device to another, the new Master simply sends a gratuitous ARP carrying the virtual router's MAC address and virtual IP address, which refreshes the ARP entries on the hosts and devices connected to it. Hosts on the network never notice that the Master has changed.
When a Backup's priority is higher than the current Master's, whether a new election takes place is determined by the Backup's operating mode (preemptive or non-preemptive).
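The advertisements themselves are easy to observe. VRRP uses IP protocol number 112 and the multicast group 224.0.0.18, so on either node you can watch the Master's periodic announcements (the interface name is an assumption):

```sh
# Watch VRRP advertisements on the link; only the current Master should be sending.
tcpdump -i eth0 -nn vrrp
# Equivalent filter without the vrrp keyword:
tcpdump -i eth0 -nn 'ip proto 112 and host 224.0.0.18'
```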
3.2 VRRP Terminology
- Virtual router: consists of one Master router and one or more Backup routers. Hosts use the virtual router as their default gateway.
- Virtual Router Identifier (VRID): identifies the virtual router. The set of routers sharing one VRID forms a single virtual router.
- Master router: the router in the virtual router that currently forwards packets.
- Backup router: a router that can take over the Master's work when the Master fails.
- Virtual IP address: the IP address of the virtual router. A virtual router may own one or more IP addresses.
- Priority: used by VRRP to rank the routers within a virtual router.
- Virtual MAC address: each virtual router owns one virtual MAC address, in the format 00-00-5E-00-01-{VRID}. Normally the virtual router answers ARP requests with this virtual MAC address; it replies with the interface's real MAC address only when specially configured to do so. For example, VRID 51 (0x33) gives the virtual MAC 00-00-5e-00-01-33.
4. Keepalived Architecture
4.1 Diagram

4.2 Component Descriptions
| Component | Description |
| --- | --- |
| checkers | Checks the health of the back-end RSes and can also manage them. Runs as a dedicated child process that is supervised by the parent process. |
| VRRP Stack | Implements high availability based on the VRRP protocol and provides failover. It can be used on its own, i.e. without health-checking the back end, and runs as an independent child process supervised by the parent. |
| System Call | Provides the ability to run user-defined scripts. |
| IPVS wrapper | Reads the rules from the configuration file and manages IPVS directly through system calls. |
| Netlink Reflector | Manages and checks the VIP addresses. |
| Watch Dog | Monitors the Checkers and VRRP Stack processes. |
5. Dual-Node Hot Standby with Keepalived

| OS | IP address | Hostname | Package |
| --- | --- | --- | --- |
| CentOS 7 | 192.168.2.1 | keep01 | keepalived-1.2.13.tar.gz |
| CentOS 7 | 192.168.2.2 | keep02 | keepalived-1.2.13.tar.gz |
5.1 Deploy the httpd service on both nodes
Install httpd on keep01 and keep02 and give each node a test page that identifies it: keep01 serves `<h1> 192.168.2.1 </h1>` and keep02 serves `<h1> 192.168.2.2 </h1>` (a sketch follows).
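A minimal sketch of that deployment (package name and web root are standard CentOS 7 defaults; the exact original commands were not preserved, so treat this as an assumption):

```sh
# keep01 node (keep02 is identical except for the IP in the page)
yum -y install httpd
systemctl start httpd && systemctl enable httpd
cat > /var/www/html/index.html <<END
<h1> 192.168.2.1 </h1>
END
```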


5.2 Install keepalived on both nodes (same steps on each)
5.2.1 Download the official source package and unpack it
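A plausible version of this step, assuming the tarball is fetched from keepalived.org and unpacked under /root (URL and paths are assumptions):

```sh
wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz
tar zxf keepalived-1.2.13.tar.gz
cd keepalived-1.2.13
```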
5.2.2 Install the build dependencies with YUM
Install the kernel development headers, the popt library, and related build tools; if the packages cannot be installed, switch to the Aliyun yum mirror. A sketch of the package list follows.
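The exact package list was not preserved; a typical set for building keepalived 1.2.x on CentOS 7 is (an assumption, adjust as needed):

```sh
yum -y install gcc gcc-c++ make kernel-devel popt-devel openssl-devel
```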
5.2.3 Configure, compile, and install
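The usual source-build sequence, assuming the /usr/local/keepalived prefix that the next subsection refers to:

```sh
./configure --prefix=/usr/local/keepalived
make && make install
```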


5.2.4 Directories created by the installation
- After installation, four directories are created under the keepalived install prefix: bin, etc, sbin, and share.
- The main configuration file (keepalived.conf) lives under /usr/local/keepalived/etc/keepalived/.
- The startup (init) script, keepalived, lives under /usr/local/keepalived/etc/rc.d/init.d/.

5.2.5 Copy the files above into the corresponding system directories
Create the keepalived configuration directory, then copy in the main configuration file, the configuration file loaded at startup (sysconfig), the service control script, and the keepalived binary itself so the command is available system-wide; make the init script executable and check the version, which should report:
Keepalived v1.2.13 (05/16,2020)
A sketch of these steps follows.
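A sketch of those copies (source paths assume the /usr/local/keepalived prefix used in 5.2.3; the destinations are the conventional ones):

```sh
mkdir -p /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
chmod +x /etc/init.d/keepalived
keepalived -v
```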
5.2.6 The configuration file explained
```
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc            # address that receives notification mail
   }
   notification_email_from test0@163.com
   smtp_server smtp.163.com
   smtp_connect_timeout 30
   router_id LVS_DEVEL               # unique identifier of this node
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
   script_user keepalived_script     # user that runs vrrp_script commands
   enable_script_security
}

vrrp_script chk_nginx_service {
    script "/etc/keepalived/chk_nginx.sh"   # health-check script
    interval 3                              # run every 3 seconds
    weight -20                              # lower the priority by 20 on failure
    fall 3                                  # 3 consecutive failures mark it down
    rise 2                                  # 2 consecutive successes mark it up
    user keepalived_script
}

vrrp_instance VI_1 {
    state MASTER                  # initial role of this node
    interface ens33               # interface that carries VRRP traffic and the VIP
    virtual_router_id 51          # VRID, must match on master and backup
    priority 100                  # higher priority wins the election
    advert_int 1                  # advertisement interval (seconds)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.119.130           # the VIP
    }
    track_script {
        chk_nginx_service
    }
    notify_master "/etc/keepalived/shell.sh start"
    notify_backup "/etc/keepalived/shell.sh stop"
    notify_fault  "/etc/keepalived/shell.sh stop"
}

virtual_server 192.168.119.130 80 {
    delay_loop 6                  # health-check interval (seconds)
    lb_algo rr                    # scheduling algorithm: round robin
    lb_kind DR                    # LVS forwarding mode: direct routing
    protocol TCP
    real_server 192.168.119.120 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.119.121 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.119.131
    }
}

virtual_server 192.168.119.131 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.119.120 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.119.121 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
```
5.3 Configure keep01 as the MASTER node
Edit /etc/keepalived/keepalived.conf on keep01:
```
! Configuration File for keepalived

global_defs {
   router_id kp01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123123
    }
    virtual_ipaddress {
        192.168.2.100
    }
}

virtual_server 192.168.2.100 80 {
    delay_loop 3
    lb_algo wrr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 192.168.2.1 80 {
        weight 3
        notify_down /etc/keepalived/check.sh
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
```
Create the notify_down script /etc/keepalived/check.sh referenced above; it stops keepalived and logs the event when the local httpd check fails:
```
#!/bin/bash
/etc/init.d/keepalived stop
IPV4=$(ip a | grep ens32 | grep inet | awk '{print $2}' | awk -F'/' '{print $1}')
echo "$IPV4 (httpd) is down on $(date +%F-%T)" >> /root/check_httpd.log
```
- Start keepalived on the master and check that the VIP is held by it (see the sketch below).
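A sketch of that check (the interface name ens32 follows the configuration above):

```sh
/etc/init.d/keepalived start
ip addr show ens32 | grep 192.168.2.100   # the VIP should appear on the master's ens32
```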
5.4 Configure keep02 as the BACKUP node
- Start keepalived on the backup and confirm that the VIP is not held by it: while the master is healthy, the VIP stays on keep01. A sketch of the backup's configuration differences follows.
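The backup's keepalived.conf was not reproduced here; under the usual pattern it matches the master's configuration except for the fields below (a sketch based on the master config above, not the original file):

```
global_defs {
   router_id kp02             # a keep02-specific router_id
}

vrrp_instance VI_1 {
    state BACKUP               # BACKUP instead of MASTER
    interface ens32
    virtual_router_id 51       # same VRID as the master
    priority 90                # lower than the master's 100
    ...
}
! In this dual hot-standby layout each node checks its own httpd,
! so the backup's virtual_server real_server would point at 192.168.2.2.
```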

5.5 Test the hot standby from a client



- Check whether the check_httpd.log file has been generated under /root.

- Now go back to the keep02 backup host and check whether the VIP has automatically floated over to it.


As you can see, dual-node hot standby is now working.
- Restart the httpd and keepalived services on keep01.
```
[root@keep01 ~]# systemctl start httpd
[root@keep01 ~]# /etc/init.d/keepalived start
Starting keepalived (via systemctl):                       [  OK  ]
```
- Verify whether the VIP floats back once the master keepalived recovers.


As you can see, once the master keepalived recovers, the VIP automatically moves back to it. Now test access from a browser again.

- The steps above give keepalived layer-4 health checking of the real-server pool (transport layer, based on protocol and port). The following shows how keepalived can do layer-7 health checking (application layer) on the pool.
Edit /etc/keepalived/keepalived.conf on keep01:
```
global_defs {
   router_id r1
}

vrrp_script check_apache {
    script "/etc/keepalived/check_apache.sh"
    interval 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123123
    }
    virtual_ipaddress {
        192.168.2.100
    }
    track_script {
        check_apache
    }
}

virtual_server 192.168.2.100 80 {
    protocol TCP
    real_server 192.168.2.1 80 {
        weight 3
    }
}
```
Create the check script /etc/keepalived/check_apache.sh:
```
#!/bin/bash
# If no httpd process is listening, stop keepalived so the VIP fails over.
Count1=$(netstat -anpt | grep -v grep | grep httpd | wc -l)
if [ $Count1 -eq 0 ]; then
    /etc/init.d/keepalived stop
fi
```
6. What is split-brain? [Important]
- Split-brain: a failure mode that frequently shows up with high-availability software. It means the VIP appears on both the master and the backup at the same time. What causes it? Network fluctuation or jitter, which makes the nodes lose sight of each other's VRRP advertisements.
- How do you deal with it?
- Simply restart the keepalived service [not recommended]
- Add an arbitration mechanism on both master and backup: run a script on each node that periodically pings its own gateway, and as soon as one side cannot reach the gateway, stop keepalived on that side. [recommended] (see the sketch below)
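A minimal sketch of such an arbitration script, intended to run periodically (cron or a loop) on both nodes; the gateway address and log path are assumptions:

```sh
#!/bin/bash
# Stop keepalived if this node can no longer reach its gateway, so the isolated
# node gives up the VIP instead of causing split-brain.
GATEWAY=192.168.2.254          # assumed gateway address; adjust to your network

if ! ping -c 3 -W 1 "$GATEWAY" > /dev/null 2>&1; then
    /etc/init.d/keepalived stop
    echo "gateway unreachable, keepalived stopped on $(date +%F-%T)" >> /root/split_brain.log
fi
```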
Keepalived hands-on cases
Case 1
High-availability load balancing with LVS + Keepalived
The architecture diagram is as follows:

| Hostname / Role | OS | IP address | Package |
| --- | --- | --- | --- |
| keep01 / LB-MASTER | CentOS 7.4 (kernel 3.10, x86_64) | 192.168.2.1 | keepalived-1.2.13.tar.gz |
| keep02 / LB-BACKUP | CentOS 7.4 (kernel 3.10, x86_64) | 192.168.2.2 | keepalived-1.2.13.tar.gz |
| web01 / RS-WEBSERVER | CentOS 7.4 (kernel 3.10, x86_64) | 192.168.2.3 | apache or nginx |
| web02 / RS-WEBSERVER | CentOS 7.4 (kernel 3.10, x86_64) | 192.168.2.4 | apache or nginx |
| client01 / test client | CentOS 7.4 (kernel 3.10, x86_64) | 192.168.2.5 | curl or wget |
1. Install the httpd service on both web hosts
```
yum -y install httpd
systemctl start httpd; systemctl enable httpd
```
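One more command was then run on each web node; presumably each node was given a distinguishable test page so the scheduler's round-robin can be seen later. A sketch of what that likely looked like (contents are assumptions):

```sh
# on web01
echo "web01 192.168.2.3" > /var/www/html/index.html
# on web02
echo "web02 192.168.2.4" > /var/www/html/index.html
```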
2. Install keepalived and the ipvsadm management tool on both keep hosts (same steps on each)
The keepalived build and installation are exactly the same as in section 5.2 above: download and unpack the source, install the build dependencies with yum (switch to the Aliyun mirror if needed), run ./configure --prefix=/usr/local/keepalived and make && make install, copy the configuration file, sysconfig file, init script, and binary into place, and confirm the version: Keepalived v1.2.13 (05/16,2020).
When keepalived is used in an LVS cluster, the ipvsadm management tool is also needed. Install it and confirm the (still empty) rule set:
```
[root@keep01 ~]# yum -y install ipvsadm
[root@keep01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
```
3. Configure the MASTER director (keep01)
Edit /etc/keepalived/keepalived.conf on keep01:
```
! Configuration File for keepalived

global_defs {
   router_id kp01_2.1
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123123
    }
    virtual_ipaddress {
        192.168.2.100
    }
}

virtual_server 192.168.2.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.2.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.2.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
```
4. Configure the BACKUP director (keep02)
Copy the configuration from keep01 to keep02 and adjust it for the backup role (see the sketch below).
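A sketch of that step and of the fields that conventionally differ on the backup (the exact original edits were not preserved):

```sh
# on keep01: push the config to keep02 (path assumed)
scp /etc/keepalived/keepalived.conf keep02:/etc/keepalived/keepalived.conf
# on keep02: edit the copy -- typically:
#   router_id      -> a keep02-specific id, e.g. kp02_2.2
#   state MASTER   -> state BACKUP
#   priority 100   -> a lower value, e.g. 90
vim /etc/keepalived/keepalived.conf
```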

5. Start LVS and keepalived
Start LVS and keepalived on keep01 first, then do the same on keep02 (a plausible sequence is sketched below).
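One plausible startup sequence, run on each director in turn (the original commands were not preserved, so treat this as an assumption):

```sh
modprobe ip_vs                   # make sure the IPVS kernel module is loaded
/etc/init.d/keepalived start     # keepalived programs the IPVS rules itself
ipvsadm -ln                      # verify the virtual server and real servers appear
```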
6. Check the VIP on both keep nodes

- The screenshot above shows that the VIP is now carried on this NIC. Next, check the IP addresses on the backup server.

- The screenshot above shows that the backup server does not hold the VIP, which is expected: it is the standby, and it only takes over the master's duties when the master goes down.
7. Check the LVS forwarding table on both keep nodes


8. Create the realserver.sh script on both RSes
```
vim /opt/realserver.sh

#!/bin/bash
# Bind the VIP on loopback and tune ARP behaviour, as required for LVS-DR real servers.
VIP=192.168.2.100

case "$1" in
start)
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    /sbin/ifconfig lo:0 down
    /sbin/route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

exit 0
```
9. Run the script on each RS
```
[root@web01 ~]# sh /opt/realserver.sh start
RealServer Start OK

[root@web02 ~]# sh /opt/realserver.sh start
RealServer Start OK
```


10. Test load balancing from the client
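A simple way to exercise the rr scheduler from client01 (assuming curl is installed; the VIP follows the configuration above):

```sh
for i in $(seq 1 6); do
    curl -s http://192.168.2.100/   # responses should alternate between web01 and web02
done
```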



11. Check the access logs on both web servers


12. Simulate the master going down and check that access still works
```
[root@keep01 ~]# /etc/init.d/keepalived stop
Stopping keepalived (via systemctl):                       [  OK  ]

[root@keep02 ~]# ip a | grep 192.168.2.100
    inet 192.168.2.100/32 scope global ens32
```

13. Verify that the VIP returns after the master recovers
```
[root@keep01 ~]# /etc/init.d/keepalived start
Starting keepalived (via systemctl):                       [  OK  ]
[root@keep01 ~]# ip a | grep 192.168.2.100
    inet 192.168.2.100/32 scope global ens32

[root@keep02 ~]# ip a | grep 192.168.2.100
[root@keep02 ~]#
```
At this point the LVS + keepalived high-availability load balancer is complete.
Case 2
- High-availability load balancing with Nginx + Keepalived
- The architecture diagram is as follows:

Environment preparation

| Hostname / Role | OS | IP address | Packages |
| --- | --- | --- | --- |
| keep01 / LB-MASTER | CentOS 7.4 (kernel 3.10) | 192.168.2.1 | keepalived-1.2.13.tar.gz, nginx-1.12.2.tar.gz |
| keep02 / LB-BACKUP | CentOS 7.4 (kernel 3.10) | 192.168.2.2 | keepalived-1.2.13.tar.gz, nginx-1.12.2.tar.gz |
| web01 / RS-WEBSERVER | CentOS 7.4 (kernel 3.10) | 192.168.2.3 | jdk-8u181-linux-x64.tar.gz, apache-tomcat-8.5.32.tar.gz |
| web02 / RS-WEBSERVER | CentOS 7.4 (kernel 3.10) | 192.168.2.4 | jdk-8u181-linux-x64.tar.gz, apache-tomcat-8.5.32.tar.gz |
| client01 / test client | CentOS 7.4 (kernel 3.10) | 192.168.2.5 | curl or wget |
1. Install Tomcat
1.1 Install the JDK (same steps on both web nodes)
Unpack jdk-8u181-linux-x64.tar.gz, place it at /usr/local/java, add the following environment variables, and verify with java -version (a sketch of the full sequence follows):
```
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
```
```
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
```
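A plausible full sequence for that step (the exact original commands were not preserved; paths follow the JAVA_HOME above):

```sh
tar zxf jdk-8u181-linux-x64.tar.gz
mv jdk1.8.0_181 /usr/local/java
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib
EOF
source /etc/profile
java -version
```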
1.2 Install Tomcat (same steps on both web nodes)
Unpack apache-tomcat-8.5.32.tar.gz, start Tomcat, and confirm that it is listening on port 8080:
```
tcp6       0      0 :::8080                 :::*                    LISTEN      16297/java
```
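A sketch of that installation (the install location /usr/local/tomcat is an assumption):

```sh
tar zxf apache-tomcat-8.5.32.tar.gz
mv apache-tomcat-8.5.32 /usr/local/tomcat
/usr/local/tomcat/bin/startup.sh
netstat -anpt | grep 8080
```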
1.3 Modify the default page on each Tomcat so the responding node can be identified
```
# web01: add to the default page (around line 52 of the edited file)
<h2>RS_web01-192.168.2.3</h2>

# web02:
<h2>RS_web02-192.168.2.4</h2>
```
1.4 Access both web services

2. Install keepalived + Nginx
2.1 Install keepalived
The keepalived installation is identical to section 5.2 above: download and unpack the source, install the build dependencies, run ./configure and make && make install, and copy the configuration file, sysconfig file, init script, and binary into place. keepalived -v should again report Keepalived v1.2.13 (05/16,2020).
2.2 Configure keepalived on each node
2.2.1 Master keepalived configuration
```
! Configuration File for keepalived

global_defs {
   router_id nginx-master
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.2.100
    }
}
```
2.2.2 Backup keepalived configuration
```
! Configuration File for keepalived

global_defs {
   router_id nginx-backup
}

vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.2.100
    }
}
```
2.2.3 Create the Nginx status-check script (same content on both nodes)
```
#!/bin/bash
# If nginx stops answering on port 80, try restarting it once; if it is still
# down, kill keepalived so the VIP fails over, and log the event.
NGINX=/usr/local/nginx/sbin/nginx
PORT=80
IPV4=$(ip a | grep ens32 | grep inet | awk '{print $2}' | awk -F'/' '{print $1}')
nmap localhost -p $PORT | grep "$PORT/tcp open" > /dev/null
if [ $? -ne 0 ]; then
    $NGINX -s stop
    sleep 5
    $NGINX
    nmap localhost -p $PORT | grep "$PORT/tcp open" > /dev/null
    if [ $? -ne 0 ]; then
        killall -9 keepalived
        echo "$IPV4 (nginx) is down on $(date +%F-%T)" >> /etc/keepalived/check_nginx.log
    fi
fi
```
Save it as /etc/keepalived/check_nginx.sh on both nodes and make it executable.
2.3 Install Nginx
Build nginx-1.12.2 from source on keep01 and keep02 (a sketch of a typical build follows).
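The original build commands were not preserved; a typical source build that matches the /usr/local/nginx path used by the check script above would be (an assumption):

```sh
yum -y install gcc gcc-c++ make pcre-devel zlib-devel openssl-devel
useradd -M -s /sbin/nologin nginx            # the nginx.conf below runs workers as user "nginx"
tar zxf nginx-1.12.2.tar.gz && cd nginx-1.12.2
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx
make && make install
ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/   # so "nginx" is on the PATH
```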
2.4 Configure Nginx on both servers (same content)
```
user nginx;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    upstream tomcat {
        server 192.168.2.3:8080 weight=2;
        server 192.168.2.4:8080 weight=2;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            root  html;
            index index.html index.htm;
            proxy_connect_timeout 3;
            proxy_send_timeout 30;
            proxy_read_timeout 30;
            proxy_pass http://tomcat;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
```
Check the syntax with nginx -t and use the same file on keep02.
3. Start nginx and keepalived
Run these on both nodes (note that nginx is started by simply running the binary; the -s switch only accepts stop, quit, reopen, or reload):
```
nginx
/etc/init.d/keepalived start
```

4. Simulate an nginx outage on the master and watch the VIP float away
Stop nginx on keep01 (for example by killing the nginx processes). The check script first tries to restart nginx; if it is still down, the script kills keepalived, so the VIP fails over to keep02.



5. Restore nginx and keepalived on the master node
Start nginx and keepalived again on keep01 and confirm that the VIP comes back.

As you can see, once the master keepalived recovers, the VIP automatically moves back to it. Now test access from a browser again.

