Question
I've been looking at some workflows for Mercurial recently, as we start using it for our web development. We need an automated way to propagate changes pushed to the test and live servers out to multiple endpoints. Here's a diagram of the idea:
+-------+
|Dev    |
|       |
+-------+
    | Push
    +--------+
             |
             V
+-------+  Push   +-------+
|Live   |<--------|Test   |
|server |         |server |
+-------+         +-------+
   |                 |
   |   +-------+     |   +-------+
   +-->|Live 1 |     +-->|Test 1 |
   |   |       |     |   |       |
   |   +-------+     |   +-------+
   |                 |
   |   +-------+     |   +-------+
   +-->|Live 2 |     +-->|Test 2 |
   |   |       |     |   |       |
   |   +-------+     |   +-------+
   |                 |
   |   +-------+     |   +-------+
   +-->|Live 3 |     +-->|Test 3 |
       |       |         |       |
       +-------+         +-------+
Basically, the idea is that all we as developers would have to do, once development has reached a stable point, is issue a push command (which doesn't necessarily have to be a plain hg push) to the test server, and from there the changes would automatically propagate out. Then, once testing is done, we'd push from test to live (or, if it's easier, from dev to live), and that would likewise propagate out to each of the different instances.
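One way to get that automatic fan-out (a sketch, not the only option) is a changegroup hook in the test server repo's .hg/hgrc: it fires once per push that adds changesets, and can launch whatever deploy command you like. The fabfile path and the deploy_test task name here are hypothetical:

[hooks]
# runs once per push that brings new changesets into this repo
changegroup = hg update && fab -f /srv/deploy/fabfile.py deploy_test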
It would be nice if we could add new test and live instances fairly easily (e.g. if the IPs were stored in a database that a script could read, etc.).
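The database idea keeps the glue small. A sketch in Python, assuming a SQLite file with a hypothetical endpoints(ip, role) table and a hypothetical fabric task named deploy; adding a new instance is then just an INSERT:

# endpoints.py -- read endpoint IPs from a database and hand them to fabric
import sqlite3
import subprocess

def endpoint_ips(db_path, role):
    # assumes a table like: CREATE TABLE endpoints (ip TEXT, role TEXT)
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute('SELECT ip FROM endpoints WHERE role = ?', (role,))
        return [ip for (ip,) in rows]
    finally:
        conn.close()

if __name__ == '__main__':
    hosts = ','.join(endpoint_ips('endpoints.db', 'test'))
    # Fabric (1.x) accepts a comma-separated host list via -H
    subprocess.check_call(['fab', '-H', hosts, 'deploy'])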
What would be the best way to accomplish this? I know about Mercurial hooks; maybe an in-process script that a hook would run? I've also looked into Fabric; would that be a good option?
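On the in-process idea: Mercurial hooks can be Python functions run inside the hg process, wired up in .hg/hgrc as changegroup = python:/srv/deploy/hooks.py:on_changegroup. A minimal sketch (the fabric task it calls is a placeholder):

# hooks.py -- in-process Mercurial changegroup hook (sketch)
import subprocess

def on_changegroup(ui, repo, hooktype, node=None, **kwargs):
    # node is the first of the new changesets brought in by the push
    ui.status('changesets arrived starting at %s; propagating...\n' % node)
    # bring the server's working copy up to date, then fan out
    subprocess.check_call(['hg', '-R', repo.root, 'update'])
    subprocess.check_call(['fab', 'deploy_test'])  # placeholder fabric task
    return False  # a falsy return value means the hook succeeded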
Also, what kind of supporting software would each of the endpoints need? Would it be simplest for a Mercurial repository to exist on each server? Would SSH access be beneficial? Etc.
Answer
I've done something like this using Mercurial, Fabric, and Jenkins:
              +-------+
              | Devs  |
              +-------+
                  | hg push
                  V
              +-------+
              |  hg   |        "central" (by convention) hg repo
              +-------+
                  |\
                  | +------------------+
                  |                    |
                  | Jenkins job        | Jenkins job
                  | pulls stable       | pulls test
                  | branch & compiles  | branch & compiles
                  |     +-------+      |
                  |  +--|Jenkins|---+  |
                  |  |  +-------+   |  |
                  V  |              |  V
              +-------+          +-------+
              | "live"|          | "test"|   shared workspaces ("live", "test")
              +-------+          +-------+
                  | Jenkins job      | Jenkins job    <-- jobs triggered
                  | calls fabric     | calls fabric       manually in
                  |                  |                    Jenkins UI
                  |   +-------+      |   +-------+
                  |-->| live1 |      |-->| test1 |
              ssh |   +-------+  ssh |   +-------+
                  |   +-------+      |   +-------+
                  |-->| live2 |      |-->| test2 |
                  |   +-------+      |   +-------+
                  |    ...           |    ...
                  |   +-------+      |   +-------+
                  +-->| liveN |      +-->| testN |
                      +-------+          +-------+
- I don't have a repo on each web server; I use fabric to deploy only what is necessary.
- I have a single fabfile.py (kept in the repo) that contains all the deploy logic; see the sketch after this list.
- The set of servers (IPs) to deploy to is given as a command-line arg to fabric (it's part of the Jenkins job config).
- I use Jenkins shared workspaces so I can separate the pull-and-compile tasks from the actual deploy (that way I can re-deploy the same code if necessary).
- If you can get away with a single Jenkins job that pulls, compiles, and deploys, you'll be happier. The shared-workspace thing is a hack I have to use for my setup, and it has downsides.
- Devs working on the test branch work however they like, and decide among themselves when to run the Jenkins job that updates the test environment.
- When they're happy with test, they merge it into stable and run the Jenkins job that updates the live environment; the promotion itself is plain Mercurial (see the commands after the fabfile sketch below).
- Adding a new web box is just a matter of adding another IP to the command line used to invoke fabric (i.e. in the Jenkins job's config).
- All the servers need ssh access from the Jenkins box.
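For reference, the fabfile mentioned above can be quite small. A minimal sketch, assuming Fabric 1.x; the deploy user, target path, and reload command are assumptions you'd adapt:

# fabfile.py -- all the deploy logic lives here (sketch)
from fabric.api import env, put, run, task

env.user = 'deploy'  # hypothetical ssh account on the web boxes

@task
def deploy(build_dir='build'):
    # copy the artifacts produced by the Jenkins compile step
    put('%s/*' % build_dir, '/var/www/app')
    # reload the app server; the exact command is site-specific
    run('sudo service apache2 reload')

The Jenkins job then runs something like fab -H 10.0.1.11,10.0.1.12 deploy; that -H host list is the command-line arg referred to above, so bringing up another web box means adding one more IP to the job config.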
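And the test-to-live promotion is ordinary branch work; roughly the following, assuming named branches test and stable and a path alias central for the central repo (all three names are just this setup's convention):

hg update stable
hg merge test
hg commit -m "merge test into stable"
hg push central

after which you trigger the "live" Jenkins job from the Jenkins UI.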
To directly address some of your questions: the endpoints don't need a Mercurial repository or much supporting software at all; since fabric copies over only what's needed, an ssh account reachable from the box that runs fabric (Jenkins, in my case) is enough.