Question
What would be the best setup to run sonatype/nexus3
in Kubernetes that allows using the Docker repositories?
Currently I have a basic setup:
- A Deployment running sonatype/nexus3
- An internal Service exposing ports 80 and 5000
- An Ingress + kube-lego providing HTTPS access to the Nexus UI
How do I get around the limitation of ingress that doesn't allow more than one port?
Answer
tl;dr
Nexus needs to be served over SSL, otherwise Docker won't connect to it. This can be achieved with a k8s ingress + kube-lego for a Let's Encrypt certificate; any other real certificate will work as well. However, in order to serve both the Nexus UI and the Docker registry through one ingress (and therefore one port), you need a reverse proxy behind the ingress that detects the Docker user agent and forwards those requests to the registry.
[nexus ingress] nexus.example.com:80 --> [proxy service] internal-proxy:80
    |--(IF user agent ~ docker)--> [nexus service] nexus:5000 --> docker registry
    |--(ELSE)--------------------> [nexus service] nexus:80   --> nexus UI
Start the Nexus server
nexus-deployment.yaml — This uses an azureFile volume, but you can use any volume type. Also, the secret is not shown, for obvious reasons.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nexus
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
        - name: nexus
          image: sonatype/nexus3:3.3.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
            - containerPort: 5000
          volumeMounts:
            - name: nexus-data
              mountPath: /nexus-data
          resources:
            requests:
              cpu: 440m
              memory: 3.3Gi
            limits:
              cpu: 440m
              memory: 3.3Gi
      volumes:
        - name: nexus-data
          azureFile:
            secretName: azure-file-storage-secret
            shareName: nexus-data
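The azure-file-storage-secret referenced by the volume has to exist before the pod starts. A minimal sketch of creating it with kubectl, where ACCOUNT_NAME and ACCOUNT_KEY are placeholders for your Azure storage credentials:

```shell
# Create the secret that the azureFile volume refers to.
# ACCOUNT_NAME and ACCOUNT_KEY are placeholders; substitute your real values.
kubectl create secret generic azure-file-storage-secret \
  --from-literal=azurestorageaccountname="$ACCOUNT_NAME" \
  --from-literal=azurestorageaccountkey="$ACCOUNT_KEY"
```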
It is always a good idea to add liveness and readiness probes, so that Kubernetes can detect when the app goes down. Hitting the index.html page doesn't always work very well, so I'm using the REST API instead. This requires adding an Authorization header for a user with the nx-script-*-browse permission. Obviously you'll have to first bring the system up without probes to set up the user, then update your deployment later.
readinessProbe:
  httpGet:
    path: /service/siesta/rest/v1/script
    port: 8081
    httpHeaders:
      - name: Authorization
        # The authorization token is simply the base64 encoding of the `healthprobe` user's credentials:
        # $ echo -n user:password | base64
        value: Basic dXNlcjpwYXNzd29yZA==
  initialDelaySeconds: 900
  timeoutSeconds: 60
livenessProbe:
  httpGet:
    path: /service/siesta/rest/v1/script
    port: 8081
    httpHeaders:
      - name: Authorization
        value: Basic dXNlcjpwYXNzd29yZA==
  initialDelaySeconds: 900
  timeoutSeconds: 60
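The Authorization value used by both probes is just "Basic" plus the base64 encoding of user:password; for example (with placeholder credentials — use your real healthprobe user's):

```shell
# Base64-encode the healthprobe user's credentials for the Authorization header.
# 'user:password' is a placeholder; substitute the real credentials.
token=$(printf '%s' 'user:password' | base64)
echo "Basic $token"   # -> Basic dXNlcjpwYXNzd29yZA==
```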
Because Nexus can sometimes take a long time to start, I use a very generous initial delay and timeout.
nexus-service.yaml — Expose port 80 for the UI, and port 5000 for the registry. The registry port must match the port configured for the registry through the Nexus UI.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nexus
  name: nexus
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8081
    - name: docker
      port: 5000
      targetPort: 5000
  selector:
    app: nexus
  type: ClusterIP
Start the reverse proxy (nginx)
proxy-configmap.yaml — The nginx.conf is added as a ConfigMap data volume. It includes a rule for detecting the Docker user agent, and relies on Kubernetes DNS to reach the nexus service as an upstream.
apiVersion: v1
kind: ConfigMap
metadata:
  name: internal-proxy-conf
  namespace: default
data:
  nginx.conf: |
    worker_processes auto;

    events {
      worker_connections 1024;
    }

    http {
      error_log /var/log/nginx/error.log warn;
      access_log /dev/null;
      proxy_intercept_errors off;
      proxy_send_timeout 120;
      proxy_read_timeout 300;

      upstream nexus {
        server nexus:80;
      }

      upstream registry {
        server nexus:5000;
      }

      server {
        listen 80;
        server_name nexus.example.com;
        keepalive_timeout 5 5;
        proxy_buffering off;

        # allow large uploads
        client_max_body_size 1G;

        location / {
          # redirect to docker registry
          if ($http_user_agent ~ docker ) {
            proxy_pass http://registry;
          }
          proxy_pass http://nexus;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto "https";
        }
      }
    }
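The routing decision hinges on nginx's `$http_user_agent ~ docker` regex match. The same check can be sketched in shell with grep to see which upstream a given client would hit (the user-agent strings below are illustrative):

```shell
# Mimic the nginx rule: a User-Agent containing "docker" goes to the
# registry upstream, anything else goes to the Nexus UI upstream.
route() {
  if printf '%s' "$1" | grep -q 'docker'; then
    echo registry
  else
    echo nexus
  fi
}

route 'docker/17.03.1-ce go/go1.7.5'    # -> registry
route 'Mozilla/5.0 (X11; Linux x86_64)' # -> nexus
```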
proxy-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: internal-proxy
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        proxy: internal
    spec:
      containers:
        - name: nginx
          image: nginx:1.11-alpine
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx", "-s", "quit"]
          volumeMounts:
            - name: internal-proxy-conf
              mountPath: /etc/nginx/
          env:
            # This is a workaround to easily force a restart by incrementing the value (numbers must be quoted).
            # nginx needs to be restarted for configuration changes, especially DNS changes, to be detected.
            - name: RESTART_
              value: "0"
      volumes:
        - name: internal-proxy-conf
          configMap:
            name: internal-proxy-conf
            items:
              - key: nginx.conf
                path: nginx.conf
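The RESTART_ workaround above can be bumped with kubectl, which rewrites the pod template and so triggers a redeploy of the proxy; a sketch (the new value is arbitrary, just different from the current one):

```shell
# Bump the dummy env var to force the proxy pod to restart and re-read
# its configuration (including re-resolving the upstream DNS names).
kubectl set env deployment/internal-proxy RESTART_=1
```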
proxy-service.yaml — The proxy is deliberately of type ClusterIP because the ingress will forward traffic to it. Port 443 is not used in this example.
apiVersion: v1
kind: Service
metadata:
  name: internal-proxy
  namespace: default
spec:
  selector:
    proxy: internal
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  type: ClusterIP
Create the Ingress
nexus-ingress.yaml — This step assumes you have an nginx ingress controller. If you already have a certificate you don't need an ingress and can instead expose the proxy service directly, but you won't have the automation benefits of kube-lego.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nexus
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - nexus.example.com
      secretName: nexus-tls
  rules:
    - host: nexus.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: internal-proxy
              servicePort: 80
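Once everything is up and DNS points nexus.example.com at the ingress, Docker clients reach the registry over standard HTTPS, so no port suffix is needed in the image name (the image name below is illustrative):

```shell
# Log in to the Nexus-hosted registry and push an image through the ingress.
docker login nexus.example.com
docker tag my-image:latest nexus.example.com/my-image:latest
docker push nexus.example.com/my-image:latest
```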