Dynamically Configuring Nginx with Consul-template on Docker

  • Traditional Load Balancing

    With traditional load balancing, the client accesses Nginx directly and is forwarded to one of the backend web servers. Whenever a web server is added or removed, an operator has to edit nginx.conf by hand and reload the configuration before the load-balancing pool reflects the change.
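    For reference, this is roughly what such a hand-maintained configuration looks like (a minimal sketch; the upstream name and backend addresses are hypothetical):

    # Static upstream pool: every backend change means editing this file by hand
    upstream web_backend {
        server 192.168.1.201:8080;
        server 192.168.1.202:8080;
    }

    server {
        listen 80;
        location / {
            # Forward client requests to the backend pool
            proxy_pass http://web_backend;
        }
    }

    After editing, the operator applies the change with nginx -s reload.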

    Automatic Load Balancing

    Now consider load balancing based on automatic service registration and discovery. The load-balancing mechanism itself is unchanged; a few supporting components are added around it. These components are invisible to the client: the client still sees only the Nginx entry point, and the way it accesses the service does not change.

    The flow of Nginx dynamic load balancing is as follows: Registrator watches the local Docker daemon and registers or deregisters containers as services in Consul; Consul-template watches Consul for changes to the service catalog, re-renders the Nginx configuration from a template, and reloads Nginx.

    Environment Preparation

    System Environment

    Node Planning

    The three hosts here, 192.168.1.182, 192.168.1.183 and 192.168.1.185, each run two test-server containers and one test-client container, simulating load balancing at the service layer.

    Image Builds

    First, a word about building the test-client and test-server images:
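    The build files themselves are not shown in the article. Assuming both are ordinary Spring Boot fat-jar applications, a minimal Dockerfile for test-server would look roughly like this (the base image and jar path are assumptions; test-client is analogous):

    # Hypothetical minimal build for the test-server image
    FROM openjdk:8-jre-alpine
    # Fat jar produced by the application build (path is an assumption)
    COPY target/test-server.jar /app/test-server.jar
    # 8080 is the HTTP port, 25000 the thrift port (matching the compose files below)
    EXPOSE 8080 25000
    ENTRYPOINT ["java", "-jar", "/app/test-server.jar"]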

    After the build completes, check the local image repository:
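    For example, filtering for the two image names used in the compose files below:

    docker images | grep -E 'test-server|test-client'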

    Deployment Model

    There are five hosts in total. The two hosts 192.168.1.181 and 192.168.1.186 serve the following purposes:

    The remaining three hosts, 192.168.1.182, 192.168.1.183 and 192.168.1.185, play the following roles:

    Two service-forwarding hops take place here:

    Setting It Up

    Consul Server Hosts

    (a). Write a docker-compose.yml for each host; note that Registrator must be configured with its own host's IP address.

    docker-compose.yml (host 192.168.1.181):

    version: '2'
    services:
      load_balancer:
        image: liberalman/nginx-consul-template:latest
        hostname: lb
        links:
          - consul_server_master:consul
        ports:
          - "80:80"

      consul_server_master:
        image: consul:latest
        hostname: consul_server_master
        ports:
          - "8300:8300"
          - "8301:8301"
          - "8302:8302"
          - "8400:8400"
          - "8500:8500"
          - "8600:8600"
        command: consul agent -server -bootstrap-expect 1 -advertise 192.168.1.181 -node consul_server_master -data-dir /tmp/data-dir -client 0.0.0.0 -ui

      registrator:
        image: gliderlabs/registrator:latest
        hostname: registrator
        links:
          - consul_server_master:consul
        volumes:
          - "/var/run/docker.sock:/tmp/docker.sock"
        command:  -ip 192.168.1.181 consul://192.168.1.181:8500

    docker-compose.yml (host 192.168.1.186):

    version: '2'
    services:
      load_balancer:
        image: liberalman/nginx-consul-template:latest
        hostname: lb
        links:
          - consul_server_slave:consul
        ports:
          - "80:80"

      consul_server_slave:
        image: consul:latest
        hostname: consul_server_slave
        ports:
          - "8300:8300"
          - "8301:8301"
          - "8302:8302"
          - "8400:8400"
          - "8500:8500"
          - "8600:8600"
        command: consul agent -server -join=192.168.1.181 -advertise 192.168.1.186 -node consul_server_slave -data-dir /tmp/data-dir -client 0.0.0.0 -ui

      registrator:
        image: gliderlabs/registrator:latest
        hostname: registrator
        links:
          - consul_server_slave:consul
        volumes:
          - "/var/run/docker.sock:/tmp/docker.sock"
        command:  -ip 192.168.1.186 consul://192.168.1.186:8500

    (b). On both hosts, start the multi-container application with docker-compose:

    docker-compose up -d

    This is the output when the startup command is run on host 192.168.1.181. You can see that docker-compose first checks whether the target images have been pulled locally, then creates and starts the container instances configured in docker-compose.yml one by one.

    (c). Check the running container processes and observe that the Consul, Registrator, and Nginx/Consul-template containers have all started normally.
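    A quick way to check is:

    docker-compose ps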

    (d). Using docker-compose in the same way, start the configured container service instances on host 192.168.1.186 and check their status:

    (e). Visit http://IP:8500 to inspect the Consul Server node information and the service registration list.
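    The registration list can also be queried through Consul's HTTP API, for example:

    # Lists all registered service names (standard Consul catalog endpoint)
    curl http://192.168.1.181:8500/v1/catalog/services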

    The container service instances on both Consul Server hosts have started normally!

    Consul Client Hosts

    In general, when using Consul as the service registration and discovery center, we rely on its Service Definition and Health Check Definition features. The relevant configuration is described below:

    Service Definition

    Health Check Definition

    The naming convention is SERVICE_<port>_<option>. If your application listens on port 5000, for example, use SERVICE_5000_CHECK_HTTP; the other environment variables follow the same pattern.
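    For instance, a hypothetical service listening on port 5000 would be registered like this (mirroring the entries in the compose files below):

    environment:
      - SERVICE_5000_NAME=my-5000-service
      - SERVICE_5000_CHECK_HTTP=/health
      - SERVICE_5000_CHECK_INTERVAL=10s
      - SERVICE_5000_CHECK_TIMEOUT=2s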

    Configuration Notes

    (a). Write a docker-compose.yml for each host; again, Registrator must be configured with its own host's IP address. The test-server and test-client service instances need the relevant environment variables set.

    docker-compose.yml (host 192.168.1.182):

    version: '2'
    services:
      consul_client_01:
        image: consul:latest
        ports:
          - "8300:8300"
          - "8301:8301"
          - "8301:8301/udp"
          - "8302:8302"
          - "8302:8302/udp"
          - "8400:8400"
          - "8500:8500"
          - "8600:8600"
        command: consul agent -retry-join 192.168.1.181 -advertise 192.168.1.182 -node consul_client_01 -data-dir /tmp/data-dir -client 0.0.0.0 -ui

      registrator:
        image: gliderlabs/registrator:latest
        volumes:
          - "/var/run/docker.sock:/tmp/docker.sock"
        command:  -ip 192.168.1.182 consul://192.168.1.182:8500

      test_server_1:
        image: test-server:latest
        environment:
          - SERVICE_8080_NAME=test-server-http-service
          - SERVICE_8080_TAGS=test-server-http-service-01
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/health
          - SERVICE_25000_NAME=test-server-thrift-service
          - SERVICE_25000_TAGS=test-server-thrift-service-01
          - SERVICE_25000_CHECK_INTERVAL=10s
          - SERVICE_25000_CHECK_TIMEOUT=2s
          - SERVICE_25000_CHECK_TCP=/
        ports:
          - "16000:8080"
          - "30000:25000"

      test_server_2:
        image: test-server:latest
        environment:
          - SERVICE_8080_NAME=test-server-http-service
          - SERVICE_8080_TAGS=test-server-http-service-02
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/health
          - SERVICE_25000_NAME=test-server-thrift-service
          - SERVICE_25000_TAGS=test-server-thrift-service-02
          - SERVICE_25000_CHECK_INTERVAL=10s
          - SERVICE_25000_CHECK_TIMEOUT=2s
          - SERVICE_25000_CHECK_TCP=/
        ports:
          - "18000:8080"
          - "32000:25000"

      test_client_1:
        image: test-client:latest
        environment:
          - SERVICE_8080_NAME=my-web-server
          - SERVICE_8080_TAGS=test-client-http-service-01
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/features
        ports:
          - "80:8080"

    docker-compose.yml (host 192.168.1.183):

    version: '2'
    services:
      consul_client_02:
        image: consul:latest
        ports:
          - "8300:8300"
          - "8301:8301"
          - "8301:8301/udp"
          - "8302:8302"
          - "8302:8302/udp"
          - "8400:8400"
          - "8500:8500"
          - "8600:8600"
        command: consul agent -retry-join 192.168.1.181 -advertise 192.168.1.183 -node consul_client_02 -data-dir /tmp/data-dir -client 0.0.0.0 -ui

      registrator:
        image: gliderlabs/registrator:latest
        volumes:
          - "/var/run/docker.sock:/tmp/docker.sock"
        command:  -ip 192.168.1.183 consul://192.168.1.183:8500

      test_server_1:
        image: test-server:latest
        environment:
          - SERVICE_8080_NAME=test-server-http-service
          - SERVICE_8080_TAGS=test-server-http-service-03
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/health
          - SERVICE_25000_NAME=test-server-thrift-service
          - SERVICE_25000_TAGS=test-server-thrift-service-03
          - SERVICE_25000_CHECK_INTERVAL=10s
          - SERVICE_25000_CHECK_TIMEOUT=2s
          - SERVICE_25000_CHECK_TCP=/
        ports:
          - "16000:8080"
          - "30000:25000"

      test_server_2:
        image: test-server:latest
        environment:
          - SERVICE_8080_NAME=test-server-http-service
          - SERVICE_8080_TAGS=test-server-http-service-04
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/health
          - SERVICE_25000_NAME=test-server-thrift-service
          - SERVICE_25000_TAGS=test-server-thrift-service-04
          - SERVICE_25000_CHECK_INTERVAL=10s
          - SERVICE_25000_CHECK_TIMEOUT=2s
          - SERVICE_25000_CHECK_TCP=/
        ports:
          - "18000:8080"
          - "32000:25000"

      test_client_1:
        image: test-client:latest
        environment:
          - SERVICE_8080_NAME=my-web-server
          - SERVICE_8080_TAGS=test-client-http-service-02
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/features
        ports:
          - "80:8080"

    docker-compose.yml (host 192.168.1.185):

    version: '2'
    services:
      consul_client_03:
        image: consul:latest
        ports:
          - "8300:8300"
          - "8301:8301"
          - "8301:8301/udp"
          - "8302:8302"
          - "8302:8302/udp"
          - "8400:8400"
          - "8500:8500"
          - "8600:8600"
        command: consul agent -retry-join 192.168.1.181 -advertise 192.168.1.185 -node consul_client_03 -data-dir /tmp/data-dir -client 0.0.0.0 -ui

      registrator:
        image: gliderlabs/registrator:latest
        volumes:
          - "/var/run/docker.sock:/tmp/docker.sock"
        command:  -ip 192.168.1.185 consul://192.168.1.185:8500

      test_server_1:
        image: test-server:latest
        environment:
          - SERVICE_8080_NAME=test-server-http-service
          - SERVICE_8080_TAGS=test-server-http-service-05
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/health
          - SERVICE_25000_NAME=test-server-thrift-service
          - SERVICE_25000_TAGS=test-server-thrift-service-05
          - SERVICE_25000_CHECK_INTERVAL=10s
          - SERVICE_25000_CHECK_TIMEOUT=2s
          - SERVICE_25000_CHECK_TCP=/
        ports:
          - "16000:8080"
          - "30000:25000"

      test_server_2:
        image: test-server:latest
        environment:
          - SERVICE_8080_NAME=test-server-http-service
          - SERVICE_8080_TAGS=test-server-http-service-06
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/health
          - SERVICE_25000_NAME=test-server-thrift-service
          - SERVICE_25000_TAGS=test-server-thrift-service-06
          - SERVICE_25000_CHECK_INTERVAL=10s
          - SERVICE_25000_CHECK_TIMEOUT=2s
          - SERVICE_25000_CHECK_TCP=/
        ports:
          - "18000:8080"
          - "32000:25000"

      test_client_1:
        image: test-client:latest
        environment:
          - SERVICE_8080_NAME=my-web-server
          - SERVICE_8080_TAGS=test-client-http-service-03
          - SERVICE_8080_CHECK_INTERVAL=10s
          - SERVICE_8080_CHECK_TIMEOUT=2s
          - SERVICE_8080_CHECK_HTTP=/features
        ports:
          - "80:8080"

    (b). On all three hosts, start the multi-container applications with docker-compose:

    docker-compose up -d

    Taking host 192.168.1.182 as an example (the other two are similar), the console log shows that the five container instances configured in docker-compose.yml are created and started.

    (c). Check the running container processes: the Consul container, one test-client, and two test-server containers have all started normally.

    (d). The console output from step (b) shows that docker-compose does not start services in the order they appear in docker-compose.yml. The registrator container depends on the consul container, but consul had not started yet, so registrator started first and exited abnormally. The workaround is to run docker-compose up -d once more.
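    A cleaner fix, if you would rather not run the command twice, is to declare the dependency explicitly and let Docker retry on failure; both are standard compose v2 features (sketch for host 192.168.1.182):

    registrator:
      image: gliderlabs/registrator:latest
      # Start the consul container before registrator
      depends_on:
        - consul_client_01
      # depends_on only orders startup, it does not wait for readiness,
      # so also restart registrator if it exits before consul is ready
      restart: on-failure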

    (e). Check the container processes again; this time the Registrator container has started normally.

    (f). Repeat the steps above on the other two hosts in the same way, then visit http://IP:8500 again to inspect the Consul Server node information and the service registration list.
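    A service's health status can also be checked from the command line, for example:

    # Returns only the instances whose health checks are passing
    curl 'http://192.168.1.181:8500/v1/health/service/test-server-http-service?passing'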

    The container service instances on all three Consul Client hosts started normally, and service registration and discovery are working!

    Verifying the Results

    Nginx Load Balancing

    Accessing Nginx

    Nginx listens on port 80 by default. Access any one of the Nginx instances, for example: http://192.168.1.181/swagger-ui.html

    The request is forwarded to a Test Client's Swagger page, showing that the Nginx configuration file nginx.conf (more precisely, the rendered app.conf that it includes) has been successfully modified by Consul-template.

    Entering the Nginx Container

    Run docker ps to find the container ID of nginx-consul-template, here for example 4f2731a7e0cb, then enter the nginx-consul-template container:

    docker-enter 4f2731a7e0cb
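    docker-enter is a convenience wrapper around nsenter; if it is not installed, a plain docker exec achieves the same thing:

    docker exec -it 4f2731a7e0cb /bin/sh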

    Check the process list inside the container:

    Pay particular attention to the following process command line, which accomplishes three important things: it connects to the Consul agent at consul:8500 and watches the service catalog; on every change it re-renders the template /etc/consul-templates/nginx.conf.ctmpl into /etc/nginx/conf.d/app.conf; and it then runs nginx -s reload to apply the new configuration.

    consul-template -consul-addr=consul:8500 -template "/etc/consul-templates/nginx.conf.ctmpl:/etc/nginx/conf.d/app.conf:nginx -s reload"

    Inspect the contents of app.conf: the IP:port of all three test-client nodes has been added to the forwarding list.
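    The actual template ships inside the liberalman/nginx-consul-template image; a minimal sketch of what such a template typically looks like is shown below (the upstream name is hypothetical; the service name matches the my-web-server registration above):

    upstream app {
      # One server line per healthy instance of my-web-server in Consul
      {{ range service "my-web-server" }}
      server {{ .Address }}:{{ .Port }};
      {{ end }}
    }

    server {
      listen 80;

      location / {
        proxy_pass http://app;
      }
    }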

    Exit the container, then stop the test-client container on host 192.168.1.182.

    Check app.conf again: the routing node 192.168.1.182:80 has been removed from the Nginx forwarding list.

    Likewise, restart the test-client container and the Nginx forwarding list automatically adds it back, as the sketch below demonstrates!
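    The round trip can be exercised with plain Docker commands (the compose-generated container name is hypothetical; check docker ps for the real one):

    # On 192.168.1.182: stop the test-client and wait for its health check to fail
    docker stop compose_test_client_1_1

    # On the Nginx host (e.g. 192.168.1.181): inspect the re-rendered config
    docker exec 4f2731a7e0cb cat /etc/nginx/conf.d/app.conf

    # On 192.168.1.182: bring the container back; the entry reappears shortly after
    docker start compose_test_client_1_1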

    Service-Layer Load Balancing

    API Testing

    test-client requests an arbitrary test-server over HTTP and returns the result (including the request processing time in ms).

    test-client requests an arbitrary test-server over Thrift and returns the result (including the request processing time in ms).
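    The concrete REST paths are defined by the test-client application and can be found in its Swagger UI. One path that is known from the compose files is the health-check endpoint /features of my-web-server; hitting it repeatedly through Nginx exercises the round-robin upstream:

    # Each request is forwarded to one of the three test-client instances in turn
    for i in $(seq 1 6); do
      curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.181/features
    done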

    Log Analysis

    Service-layer load balancing is harder to observe directly, so here is an excerpt of the log printed when test-client's service cache list is periodically refreshed:

    2018-02-09 13:15:55.157  INFO 1 --- [erListUpdater-1] t.c.l.ThriftConsulServerListLoadBalancer : Refreshed thrift serverList: [
    test-server-thrift-service: [
     ThriftServerNode{node='consul_client_01', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-01], host='192.168.1.182', port=30000, address='192.168.1.182', isHealth=true},
     ThriftServerNode{node='consul_client_01', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-02], host='192.168.1.182', port=32000, address='192.168.1.182', isHealth=true},
     ThriftServerNode{node='consul_client_02', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-03], host='192.168.1.183', port=30000, address='192.168.1.183', isHealth=true},
     ThriftServerNode{node='consul_client_02', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-04], host='192.168.1.183', port=32000, address='192.168.1.183', isHealth=true},
     ThriftServerNode{node='consul_client_03', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-05], host='192.168.1.185', port=30000, address='192.168.1.185', isHealth=true},
     ThriftServerNode{node='consul_client_03', serviceId='test-server-thrift-service', tags=[test-server-thrift-service-06], host='192.168.1.185', port=32000, address='192.168.1.185', isHealth=true}
    ],
    test-server-http-service: [
     ThriftServerNode{node='consul_client_01', serviceId='test-server-http-service', tags=[test-server-http-service-01], host='192.168.1.182', port=16000, address='192.168.1.182', isHealth=true},
     ThriftServerNode{node='consul_client_01', serviceId='test-server-http-service', tags=[test-server-http-service-02], host='192.168.1.182', port=18000, address='192.168.1.182', isHealth=true},
     ThriftServerNode{node='consul_client_02', serviceId='test-server-http-service', tags=[test-server-http-service-03], host='192.168.1.183', port=16000, address='192.168.1.183', isHealth=true},
     ThriftServerNode{node='consul_client_02', serviceId='test-server-http-service', tags=[test-server-http-service-04], host='192.168.1.183', port=18000, address='192.168.1.183', isHealth=true},
     ThriftServerNode{node='consul_client_03', serviceId='test-server-http-service', tags=[test-server-http-service-05], host='192.168.1.185', port=16000, address='192.168.1.185', isHealth=true},
     ThriftServerNode{node='consul_client_03', serviceId='test-server-http-service', tags=[test-server-http-service-06], host='192.168.1.185', port=18000, address='192.168.1.185', isHealth=true}
    ],
    my-web-server: [
     ThriftServerNode{node='consul_client_01', serviceId='my-web-server', tags=[test-client-http-service-01], host='192.168.1.182', port=80, address='192.168.1.182', isHealth=true},
     ThriftServerNode{node='consul_client_02', serviceId='my-web-server', tags=[test-client-http-service-02], host='192.168.1.183', port=80, address='192.168.1.183', isHealth=true},
     ThriftServerNode{node='consul_client_03', serviceId='my-web-server', tags=[test-client-http-service-03], host='192.168.1.185', port=80, address='192.168.1.185', isHealth=true}
    ]]

    Service Instances

    spring-cloud-starter-thrift uses a round-robin forwarding strategy, meaning my-web-server distributes HTTP or RPC requests in turn across the six test-server service instances.

    Summary

    This article presented a container-based high-availability (HA) solution built on a microservice registration and discovery architecture, introduced automatic load balancing at both the access layer and the service layer, and described the practical setup and techniques in detail.

    Reposted from: https://juejin.cn/post/6844903623084736525
