What is salt-syndic for? If you know zabbix proxy, it is easy to understand. "Syndic" roughly means "steward", and frankly "salt-proxy" would have been a clearer name: it is a proxy layer, much like zabbix proxy. It sits between the master and the minions so they no longer need to talk to each other directly; each side only needs to reach the syndic. This keeps the architecture clean when deploying across data centers. If you run both tools, it is convenient to co-locate zabbix proxy and salt-syndic on the same host.

In this walkthrough we use node2 as a proxy for node3, so that node3 is controlled by node1 (the master) through it.

Configuration on node1 (the master)

 [root@linux-node1 ~]# grep "^[a-Z]" /etc/salt/master
default_include: master.d/*.conf
file_roots:
order_masters: True # change this line: allow lower-tier masters under this one

Installation and configuration on node2

 [root@linux-node2 salt]# yum install salt-syndic -y
[root@linux-node2 salt]# cd /etc/salt/
[root@linux-node2 salt]# grep "^[a-Z]" proxy
master: 192.168.56.11 # in the proxy file
[root@linux-node2 salt]# grep "^[a-Z]" master
syndic_master: 192.168.56.11 # in the master file
[root@linux-node2 salt]# systemctl start salt-master.service
[root@linux-node2 salt]# systemctl start salt-syndic.service
[root@linux-node2 salt]# netstat -tpln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0.0.0.0: 0.0.0.0:* LISTEN /systemd
tcp 0.0.0.0: 0.0.0.0:* LISTEN /sshd
tcp 0.0.0.0: 0.0.0.0:* LISTEN /python
tcp 0.0.0.0: 0.0.0.0:* LISTEN /python
tcp6 ::: :::* LISTEN /systemd
tcp6 ::: :::* LISTEN /sshd
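To recap the wiring so far, only two settings really matter, one on each side. A minimal sketch using this lab's addresses:

```yaml
# /etc/salt/master on node1 (the top master)
order_masters: True            # accept lower-tier masters beneath this one

# /etc/salt/master on node2 (the syndic; it runs a full salt-master too)
syndic_master: 192.168.56.11   # the upper master this syndic reports to
```

Remember to restart the salt-master service on each node after changing its file.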

Install the minion on node3 as usual

 [root@linux-node3 salt]# yum install salt-minion -y
[root@linux-node3 salt]# cd /etc/salt/
[root@linux-node3 salt]# grep "^[a-Z]" minion
master: 192.168.56.12 # node3 only needs to know about node2; it never learns that node1 exists
[root@linux-node3 salt]# systemctl start salt-minion

Then back on node2 (the syndic)

 [root@linux-node2 salt]# salt-key -L
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node3.example.com
Rejected Keys:
[root@linux-node2 salt]# salt-key -A # accept the key

Finally, back on node1 (the master)

 [root@linux-node1 ~]# salt-key -L                    # note: linux-node3.example.com does not appear here
Accepted Keys:
linux-node1.example.com
linux-node2.example.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@linux-node1 ~]# salt '*' test.ping
linux-node2.example.com:
True
linux-node1.example.com:
True
linux-node3.example.com: # yet it still responds
True

Configuring additional same-tier or multi-tier syndics works the same way; just be clear about which upper master each syndic reports to.
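For example, in a multi-tier layout a mid-tier node is both a syndic (toward the tier above) and a master of masters (toward the tier below), so it carries both settings. A hedged sketch, with a hypothetical upper-master address:

```yaml
# /etc/salt/master on a mid-tier syndic
syndic_master: 192.168.56.11   # its own upper master (hypothetical address)
order_masters: True            # allow further syndics underneath it
```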

Some common questions

1. Can minions controlled behind different proxies share an id? Concretely: what happens if node3's id is changed to node2 and we then run a command from the top master?

First we need to change the id. Remember the procedure for changing a minion id?

 [root@linux-node3 salt]# systemctl stop salt-minion.service    # stop the minion
[root@linux-node2 salt]# salt-key -L # note this runs on node2 (the syndic): node3's master is node2, so the key was sent to node2, and that is where we delete it
Accepted Keys:
linux-node3.example.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@linux-node2 salt]# salt-key -d linux-node3.example.com
The following keys are going to be deleted:
Accepted Keys:
linux-node3.example.com
Proceed? [N/y] y
Key for minion linux-node3.example.com deleted.
[root@linux-node3 salt]# rm -fr /etc/salt/pki/minion/ # remove everything under /etc/salt/pki/minion on the minion
[root@linux-node3 salt]# grep "^[a-Z]" /etc/salt/minion # set the new id
master: 192.168.56.12
id: linux-node2.example.com # deliberately reuse an existing id for this test
[root@linux-node3 salt]# systemctl start salt-minion.service # start the minion again
[root@linux-node2 salt]# salt-key -L # back on node2, accept the key for the new id
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node2.example.com
Rejected Keys:
[root@linux-node2 salt]# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
linux-node2.example.com
Proceed? [n/Y] Y
Key for minion linux-node2.example.com accepted.
[root@linux-node2 salt]# salt '*' test.ping # quick check
linux-node2.example.com:
True
Finally, to verify the test, back on node1 (the master):
[root@linux-node1 ~]# salt '*' test.ping
linux-node2.example.com:
True
linux-node1.example.com:
True
linux-node2.example.com:
True
And there it is: linux-node2.example.com shows up twice. We anticipated this, but in real use there is no way to tell which host is which, so reusing an id behind a proxy is still a bad idea. Keep ids distinct. The simplest way is to set sensible hostnames, which guarantees uniqueness across all machines and even lets you skip setting the id at all. (I have since changed node3's id back.)
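As a practical aside, a duplicated id in a large return is easy to miss by eye. A quick local sketch (printf stands in for real salt output; no Salt is involved) shows how "sort | uniq -c" surfaces it:

```shell
# Hypothetical sample: three return lines, two sharing an id.
out=$(printf '%s\n' \
  "linux-node1.example.com: True" \
  "linux-node2.example.com: True" \
  "linux-node2.example.com: True" \
  | sort | uniq -c | sort -rn)
echo "$out"
# the duplicated id floats to the top with a count of 2
```

Piping real "salt '*' test.ping" output through the same filter works just as well.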

2. Remote execution works; does this architecture affect the execution of state files?

 [root@linux-node1 base]# pwd                    # define the top file on the master
/srv/salt/base
[root@linux-node1 base]# cat top.sls # it simply distributes one file to everyone
base:
  '*':
    - known-hosts.known-hosts
[root@linux-node1 base]# cat known-hosts/known-hosts.sls
known-hosts:
  file.managed:
    - name: /root/.ssh/known_hosts
    - source: salt://known-hosts/templates/known-hosts
    - clean: True
[root@linux-node1 base]# salt '*' state.highstate
linux-node3.example.com:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or master_tops data matches found.
Changes:

Summary for linux-node3.example.com
------------
Succeeded:
Failed:
------------
Total states run:
Total run time: 0.000 ms
linux-node2.example.com:
----------
ID: known-hosts
Function: file.managed
Name: /root/.ssh/known_hosts
Result: True
Comment: File /root/.ssh/known_hosts updated
Started: ::35.210699
Duration: 37.978 ms
Changes:
----------
diff:
New file
mode:

Summary for linux-node2.example.com
------------
Succeeded: (changed=)
Failed:
------------
Total states run:
Total run time: 37.978 ms
linux-node1.example.com:
----------
ID: known-hosts
Function: file.managed
Name: /root/.ssh/known_hosts
Result: True
Comment: File /root/.ssh/known_hosts is in the correct state
Started: ::35.226119
Duration: 51.202 ms
Changes:

Summary for linux-node1.example.com
------------
Succeeded:
Failed:
------------
Total states run:
Total run time: 51.202 ms
ERROR: Minions returned with non-zero exit code
node3 clearly failed, while node1 and node2 succeeded (easy to understand). node3's error, "No Top file or master_tops data matches found", says plainly that no matching top file was found. The obvious inference: node3's master is node2, and node2 has no top file. Let's write a different top file on node2 and test again.
[root@linux-node2 base]# pwd
/srv/salt/base
[root@linux-node2 base]# cat top.sls # even simpler: just run ls /root
base:
  '*':
    - cmd.cmd
[root@linux-node2 base]# cat cmd/cmd.sls
cmd:
  cmd.run:
    - name: ls /root
Now back on the master to test again; I have omitted the normal output from node1 and node2.
[root@linux-node1 base]# salt '*' state.highstate
linux-node3.example.com:
----------
ID: cmd
Function: cmd.run
Name: ls /root
Result: True
Comment: Command "ls /root" run
Started: ::42.752326
Duration: 11.944 ms
Changes:
----------
pid:
retcode:
stderr:
stdout:
lvs.sh

Summary for linux-node3.example.com
------------
Succeeded: (changed=)
Failed:
------------
Total states run:
Total run time: 11.944 ms
Something is already apparent. Let's change the master's top file once more and test again.
[root@linux-node1 base]# cat top.sls
base:
  'linux-node3.example.com': # target node3 only
    - known-hosts.known-hosts
[root@linux-node1 base]# salt '*' state.highstate
linux-node3.example.com:
----------
ID: cmd
Function: cmd.run
Name: ls /root
Result: True
Comment: Command "ls /root" run
Started: ::20.792475
Duration: 8.686 ms
Changes:
----------
pid:
retcode:
stderr:
stdout:
lvs.sh

Summary for linux-node3.example.com
------------
Succeeded: (changed=)
Failed:
------------
Total states run:
Total run time: 8.686 ms
linux-node2.example.com:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or master_tops data matches found.
Changes:

Summary for linux-node2.example.com
------------
Succeeded:
Failed:
------------
Total states run:
Total run time: 0.000 ms
linux-node1.example.com:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or master_tops data matches found.
Changes:

Summary for linux-node1.example.com
------------
Succeeded:
Failed:
------------
Total states run:
Total run time: 0.000 ms
ERROR: Minions returned with non-zero exit code
This time node1 and node2 ran into the earlier error, while node3 executed the top file defined on node2. Time for a summary!
Beifang's summary:

Each minion looks up, and executes, the top file defined on its own master: node1 and node2 read node1's (the top master's), while node3 reads the syndic's (node2's).

"No Top file or master_tops data matches found" appears because every run here was salt '*' state.highstate, which asks all machines to search the top file and perform whatever matches them. The first time, node3 failed because the top file it obeys lives on the syndic, where none had been written yet, so nothing matched it. The second time, I replaced the '*' in the master's top file with node3 alone, leaving no entries for node1 or node2; they received the instruction, searched the top file, found nothing matching themselves, and reported the same error node3 had before.

If that is still hard to keep straight, a standard practice sidesteps it entirely: copy the master's file tree to every syndic verbatim. Then every operation behaves uniformly, exactly as if there were no proxy layer.
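The resolution rule can be mimicked without Salt at all: each master's file_roots is just a directory tree, and a minion only ever searches the tree belonging to the master it connects to. A minimal sketch with temp directories standing in for /srv/salt/base on each node (the paths and the helper function are illustrative, not Salt internals):

```shell
top_master=$(mktemp -d)   # stands in for node1's /srv/salt/base
syndic=$(mktemp -d)       # stands in for node2's /srv/salt/base (no top.sls yet)
printf "base:\n  '*':\n    - known-hosts.known-hosts\n" > "$top_master/top.sls"

# what a minion attached to a given master would find
lookup() {
  if [ -f "$1/top.sls" ]; then
    echo "top found"
  else
    echo "No Top file or master_tops data matches found."
  fi
}
lookup "$top_master"   # node1 and node2 match against this tree
lookup "$syndic"       # node3 searches this one and fails, as in the first run
```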

3. Top files are a hassle. What happens if I just run an sls file directly?

 [root@linux-node1 base]# salt '*' state.sls  known-hosts.known-hosts
linux-node3.example.com:
Data failed to compile:
----------
No matching sls found for 'known-hosts.known-hosts' in env 'base'
linux-node2.example.com:
----------
ID: known-hosts
Function: file.managed
Name: /root/.ssh/known_hosts
Result: True
Comment: File /root/.ssh/known_hosts is in the correct state
Started: ::03.968021
Duration: 870.596 ms
Changes:

Summary for linux-node2.example.com
------------
Succeeded:
Failed:
------------
Total states run:
Total run time: 870.596 ms
linux-node1.example.com:
----------
ID: known-hosts
Function: file.managed
Name: /root/.ssh/known_hosts
Result: True
Comment: File /root/.ssh/known_hosts is in the correct state
Started: ::05.003462
Duration: 42.02 ms
Changes:

Summary for linux-node1.example.com
------------
Succeeded:
Failed:
------------
Total states run:
Total run time: 42.020 ms
ERROR: Minions returned with non-zero exit code
node3 errors again: "No matching sls found for 'known-hosts.known-hosts' in env 'base'". No verification needed; it is the same story: each minion looks up the sls files defined on its own master, so node1 and node2 read the top master's files while node3 reads the syndic's (node2's). If you defined a known-hosts sls on the syndic that did something entirely different, node3 would happily run that instead; but nobody should set things up that chaotically. Hence the rule: keep every syndic's file tree identical to the master's!
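Keeping the trees identical is easy to script. In production something like rsync -av --delete /srv/salt/ root@192.168.56.12:/srv/salt/ (address from this lab), run against every syndic, would do it; the local sketch below uses temp directories and cp so it can run anywhere:

```shell
master_root=$(mktemp -d)   # stands in for /srv/salt on the top master
syndic_root=$(mktemp -d)   # stands in for /srv/salt on a syndic
mkdir -p "$master_root/base/known-hosts"
echo 'demo top file' > "$master_root/base/top.sls"

# mirror the whole tree so both masters serve identical states
cp -a "$master_root/base" "$syndic_root/"
ls "$syndic_root/base"
```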