Problem Description
I'm having issues getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both Jenkins server and slave run within Docker containers themselves.
- Jenkins server running in a Docker container
- Jenkins slave based on a custom image (https://github.com/simulogics/protokube-jenkins-slave), also running in a Docker container
- Docker daemon container based on the docker:1.12-dind image
- Slave started like so:

docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave
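For completeness, the docker-daemon container referenced by --link=docker-daemon:docker above could be started along these lines (a sketch; the container name, the storage volume, and its mount point are assumptions, while --privileged is genuinely required by dind):

```shell
# Start the Docker-in-Docker daemon container that the slave links to.
# --privileged is mandatory for dind; the named volume (an assumption here)
# keeps image layers across daemon restarts.
docker run -d --privileged \
  --name docker-daemon \
  -v docker-daemon-storage:/var/lib/docker \
  docker:1.12-dind
```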
Basic Docker operations (pull, build and push images) are working just fine with this setup.
- I want the server to not have to know about Docker at all. This should be a characteristic of the slave/node.
- I do not need dynamic allocation of slaves or ephemeral slaves. One slave started manually is quite enough for my purposes.
- Ideally, I want to move away from my custom Docker image for the slave and instead use the inside function provided by the Docker Pipeline plugin within a generic Docker slave.
This is a representative build step that's causing the issue:
image.inside {
    stage('Install Ruby Dependencies') {
        sh "bundle install"
    }
}
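When the automatic detection fails, the volume mapping can be made explicit by passing arguments to inside yourself. A minimal sketch, assuming the slave container is named protokube-jenkins-slave (as in the docker run command above), that the workspace lives on a volume visible to the linked Docker daemon, and that ruby:2.3 stands in for whatever image the build actually uses:

```
node('docker') {
    checkout scm
    def image = docker.image('ruby:2.3')
    // Explicitly share the slave container's volumes with the build container,
    // since the plugin's automatic --volumes-from detection does not kick in here.
    image.inside('--volumes-from protokube-jenkins-slave') {
        stage('Install Ruby Dependencies') {
            sh "bundle install"
        }
    }
}
```

This only helps if the slave's workspace directory is actually on a Docker volume that the daemon running the inside container can see; with a remote dind daemon, a bind mount of the slave's local filesystem would not be visible to it.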
This would cause an error like this in the log:
cannot create /…@tmp/durable-…/pid: Directory nonexistent

or negative exit codes.

Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin here https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside:

When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Unfortunately, the detection described in the last paragraph doesn't seem to work.
Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?
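One arrangement that sidesteps the detection problem is to put the slave's workspace on a named volume and mount that same volume into the build container. A sketch, assuming the slave keeps its workspace under /home/jenkins (the volume name and path are assumptions; the xxx placeholders are carried over from the original command):

```shell
# Start the slave with its workspace on a named volume, so the Docker
# daemon knows about it and can mount it into other containers too.
docker run --link=docker-daemon:docker --link=jenkins:master -d \
  --name protokube-jenkins-slave \
  -v jenkins-workspace:/home/jenkins \
  -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" \
  simulogics/protokube-jenkins-slave
```

Because the workspace then lives on a volume the daemon knows, either --volumes-from protokube-jenkins-slave or -v jenkins-workspace:/home/jenkins on the inside container makes the same files visible at the same path in both containers.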
Recommended Answer
I've seen variations of this issue, also with the agents powered by the kubernetes-plugin.
I think that for it to work, the agent/jnlp container needs to share the workspace with the build container.
By build container I am referring to the one that will run the bundle install command.
This might be possible via withArgs.
The question is: why would you want to do that? Most of the pipeline steps are executed on the master anyway, and the actual build will run in the build container. What is the purpose of also using an agent?