Image Caching in Edge Cluster Scenarios
1. Problem background
When a large number of edge nodes pull the same image from a private registry in the central cloud, they put heavy load on the cloud servers. If each image is pulled from the cloud only once, cached on the edge side, and subsequent pulls by edge nodes are served from that cache, the load on the cloud servers drops dramatically. This works much like a CDN for container images.
This experiment needs at least three nodes: one cloud node hosting our private image registry, and two edge nodes, one to cache images and one to run pull experiments.
2. Environment setup
2.1. Setting up the Harbor private registry in the cloud
2.1.1. Install the Docker environment
# install from the official repository
https://docs.docker.com/install/linux/docker-ce/centos
# binary installation
https://docs.docker.com/install/linux/docker-ce/binaries
https://download.docker.com/linux/static/stable # download the static binaries
2.1.2. Install docker-compose
curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version
2.1.3. Install Harbor
wget https://github.com/goharbor/harbor/releases/download/v2.1.1/harbor-offline-installer-v2.1.1.tgz
# extract into the home directory
tar xvf harbor-offline-installer-v2.1.1.tgz -C /home/xing && cd /home/xing/harbor/
Edit the harbor.yml file; the directories referenced in it must be created beforehand:
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.xing.com

# http related config
# http:
#   # port for http, default is 80. If https enabled, this port will redirect to https port
#   port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /home/xing/harbor/certs/harbor.crt
  private_key: /home/xing/harbor/certs/harbor.key

# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 50
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 1000

# The default data volume
data_volume: /home/xing/harbor/data

# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting If you want to using external storage
# storage_service:
#   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
#   # of registry's and chart repository's containers. This is usually needed when the user hosts a internal storage with self signed certificate.
#   ca_bundle:
#   # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
#   # for more info about this configuration please refer https://docs.docker.com/registry/configuration/
#   filesystem:
#     maxthreads: 100
#   # set disable to true when you want to disable registry redirect
#   redirect:
#     disabled: false

# Trivy configuration
#
# Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
# in the local file system. In addition, the database contains the update timestamp so Trivy can detect whether it
# should download a newer version from the Internet or use the cached one. Currently, the database is updated every
# 12 hours and published as a new release to GitHub.
trivy:
  # ignoreUnfixed The flag to display only fixed vulnerabilities
  ignore_unfixed: false
  # skipUpdate The flag to enable or disable Trivy DB downloads from GitHub
  #
  # You might want to enable this flag in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the flag is enabled you have to download the `trivy-offline.tar.gz` archive manually, extract `trivy.db` and
  # `metadata.json` files and mount them in the `/home/scanner/.cache/trivy/db` path.
  skip_update: false
  #
  # insecure The flag to skip verifying registry certificate
  insecure: false
  # github_token The GitHub access token to download Trivy DB
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  #
  # github_token: xxx

jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 10

chart:
  # Change the value of absolute_url to enabled can enable absolute url in chart
  absolute_url: disabled

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor
  # Uncomment following lines to enable external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit log to external endpoint, options is tcp or udp
  #   protocol: tcp
  #   # The host of external endpoint
  #   host: localhost
  #   # Port of external endpoint
  #   port: 5140

# This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 2.2.0

# Uncomment external_database if using external database.
# external_database:
#   harbor:
#     host: harbor_db_host
#     port: harbor_db_port
#     db_name: harbor_db_name
#     username: harbor_db_username
#     password: harbor_db_password
#     ssl_mode: disable
#     max_idle_conns: 2
#     max_open_conns: 0
#   notary_signer:
#     host: notary_signer_db_host
#     port: notary_signer_db_port
#     db_name: notary_signer_db_name
#     username: notary_signer_db_username
#     password: notary_signer_db_password
#     ssl_mode: disable
#   notary_server:
#     host: notary_server_db_host
#     port: notary_server_db_port
#     db_name: notary_server_db_name
#     username: notary_server_db_username
#     password: notary_server_db_password
#     ssl_mode: disable

# Uncomment external_redis if using external Redis server
# external_redis:
#   # support redis, redis+sentinel
#   # host for redis: <host_redis>:<port_redis>
#   # host for redis+sentinel:
#   #  <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
#   host: redis:6379
#   password:
#   # sentinel_master_set must be set to support redis+sentinel
#   # sentinel_master_set:
#   # db_index 0 is for core, it's unchangeable
#   registry_db_index: 1
#   jobservice_db_index: 2
#   chartmuseum_db_index: 3
#   trivy_db_index: 5
#   idle_timeout_seconds: 30

# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
#   ca_file: /path/to/ca

# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components doesn't need to connect to each others via http proxy.
# Remove component from `components` array if want disable proxy
# for it. If you want use proxy for replication, MUST enable proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add domain to the `no_proxy` field, when you want disable proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy

# metric:
#   enabled: false
#   port: 9090
#   path: /metrics
2.1.4. Generate a self-signed certificate with openssl
# 1. Generate the certificate and save it under /home/xing/harbor/certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout /home/xing/harbor/certs/harbor.key -x509 -out /home/xing/harbor/certs/harbor.crt -subj /C=CN/ST=BJ/L=BJ/O=DEVOPS/CN=harbor.xing.com -days 3650
req        generate a certificate signing request
-newkey    generate a new private key
rsa:4096   key length in bits
-nodes     do not encrypt the private key
-sha256    use the SHA-2 hash algorithm
-keyout    file to write the newly created private key to
-x509      issue a self-signed X.509 certificate instead of a CSR; X.509 is the most widely used certificate format
-out       output file name to write to
-subj      subject (identity) information
-days      validity period (3650 = ten years)
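As a quick sanity check, the generated certificate can be inspected with openssl x509 to confirm that -subj and -days took effect. A minimal sketch using a throwaway directory and a hypothetical CN, not the Harbor paths above:

```shell
# Generate a disposable self-signed certificate (hypothetical CN for illustration)
CERT_DIR=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout "$CERT_DIR/test.key" -x509 -out "$CERT_DIR/test.crt" \
  -subj "/C=CN/ST=BJ/L=BJ/O=DEVOPS/CN=harbor.example.com" -days 365
# Print the subject and validity window of the certificate just created
openssl x509 -in "$CERT_DIR/test.crt" -noout -subject -dates
```

The subject line in the output should echo back the fields passed via -subj, and the notBefore/notAfter dates should span the requested 365 days.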
2.1.5. Start the Harbor service
./install.sh
Edit the hosts file and add the domain name. Alternatively, open https://<host-ip> directly in a browser; the default credentials are admin / Harbor12345 (set in the config file).
2.2. Pushing images to the private registry
2.2.1. Trust the private registry
# 1. Add the registry address
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://k1ktap5m.mirror.aliyuncs.com"],
  "insecure-registries": ["172.16.9.3", "harbor.xing.com"]
}
# 2. Restart the docker service
systemctl daemon-reload
systemctl restart docker
# alternatively, import the registry certificate (see the certificate notes in section 3.3)
2.2.2. Log in to the private registry
docker login harbor.xing.com # default credentials: admin / Harbor12345
2.2.3. Push/pull images to/from the Harbor registry
# 1. Tag the local image for the private registry
# format: docker tag <local-image>:<tag> <harbor-address[:port]>/<project>/<repository>:<tag>
docker tag nginx:latest harbor.xing.com/xing/mynginx:v1
# 2. Push the image
docker push harbor.xing.com/xing/mynginx:v1
# 3. Pull the image
docker pull harbor.xing.com/xing/mynginx:v1
2.3. Using Harbor in a Kubernetes cluster
Kubernetes accesses Harbor over HTTPS by default; to use HTTP instead, the
/etc/docker/daemon.json
file must be modified on every node in the cluster.
Because Harbor uses username/password authentication, a secret must be configured before images can be pulled.
# create a secret for the Docker registry
kubectl create secret docker-registry registry-secret --namespace=default \
--docker-server=172.16.9.3 \
--docker-username=admin \
--docker-password=Harbor12345
# list secrets
[root@master demo]# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-gdwgn   kubernetes.io/service-account-token   3      2d18h
registry-secret       kubernetes.io/dockerconfigjson        1      116s
# delete the secret
kubectl delete secret registry-secret
With that in place, simply set the image field of your containers to the Harbor registry image address.
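For illustration, a minimal Pod sketch that pulls the mynginx image pushed earlier and references the secret created above (the Pod and container names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mynginx                                # hypothetical Pod name
spec:
  containers:
    - name: mynginx
      image: harbor.xing.com/xing/mynginx:v1   # the image pushed in section 2.2.3
  imagePullSecrets:
    - name: registry-secret                    # the secret created above
```

The imagePullSecrets field is what lets the kubelet authenticate against Harbor when pulling the image.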
3. Deploying a registry cache at the edge
3.1. Running the service
The docker registry (distribution) cache service can be started either from the official image or from a binary built from source.
Binary method:
git clone https://github.com/distribution/distribution.git
cd distribution
make binaries
# then start it from the bin directory (the config file must already be written)
./bin/registry serve /etc/docker/registry/config.yml
Docker method: start the container with the relevant volumes mounted.
docker run -itd -p 5000:5000 \
  -v /var/lib/registry:/var/lib/registry \
  -v /etc/docker/registry/config.yml:/etc/docker/registry/config.yml \
  --name registry registry:2
# the port must be exposed externally; the port itself is defined in the config file
Write the distribution config file; by default it is read from /etc/docker/registry/config.yml:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000                          # service port
  headers:
    X-Content-Type-Options: [nosniff]
proxy:
  remoteurl: https://harbor.xing.com   # address of the private registry
  username: ***                        # username
  password: ***                        # password
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
Edit the hosts file:
vi /etc/hosts
# add the domain; this IP is the address of the Harbor registry
10.10.102.190 harbor.xing.com
If the cache is exposed on port 80, docker pull can omit the port; with the default port 5000, the port must be included when pulling:
docker pull localhost:5000/xing/imagecache:v2
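For example, to serve the cache on port 80 so that clients can omit the port, the http section of config.yml would be changed along these lines:

```yaml
http:
  addr: :80                            # listen on 80 instead of the default 5000
  headers:
    X-Content-Type-Options: [nosniff]
```

Remember to also adjust the -p mapping in the docker run command (or firewall rules, for the binary) to match the new port.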
To push an image, first retag it with the cache's address.
Note, however, that this only pushes to the cache; the remote registry is not updated.
Cached images are stored under /var/lib/registry/docker/registry/v2/repositories/<project>/<repository>.
For other nodes to use the cache, simply resolve the domain to the cache node's address:
vi /etc/hosts
192.168.123.160 harbor.xing.com # this IP is the cache node's IP, NOT the Harbor registry's IP!
# When this command runs on another node (the port can be omitted if it is 80), the pull request is routed to the cache registry. If the cache already has the image, it is returned directly; otherwise the cache fetches it from the remoteurl configured in config.yml, i.e. our private Harbor registry, stores a copy, and returns it to the requesting node (the cache-aside pattern).
docker pull harbor.xing.com:5000/xing/imagecache:v2
3.2. Troubleshooting
When I started the cache with Docker, image pulls failed because harbor.xing.com could not be resolved. Suspecting that the container could not resolve the domain, I edited the hosts file inside the container, which did not help; switching the container to host networking did not help either. I then recompiled the source into a binary and ran it directly, after which caching worked without errors.
Cross-platform port issue: the service ran fine on x86_64, but since we have arm64 edge nodes I cross-compiled it for arm and ran it there. On that node, docker pull must always include the port, even when it is 80; omitting it fails with
invalid character '<' looking for beginning of value
for reasons unknown.
3.3. Notes
docker pull uses HTTPS by default, but pulls from our private registry may go over HTTP, so daemon.json must be modified: whenever a domain triggers an HTTPS error, add that domain to the
insecure-registries
field.
Certificates: the harbor.crt generated earlier must be imported into the cache node and the certificate store updated (look up the update procedure for your distribution). If, after updating, you hit the
certificate relies on legacy Common Name field
error, either downgrade the Go version used to build the registry or set the
GODEBUG=x509ignoreCN=0
environment variable when running it.
4. Caching strategies (background)
4.1. Cache-Aside
With this strategy, the application talks to both the cache and the data source, and checks the cache before hitting the data source.
The application first reads from the cache; on a miss, it fetches the data from the data source and then stores it in the cache.
Pros
- well suited to read-heavy workloads
- resilient to cache failures to a degree: if the cache service fails, the system can read directly from the data source
Cons
- does not guarantee consistency between the data store and the cache
- the first request for an item always misses the cache (this can be mitigated by pre-warming the cache with manually triggered queries)
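The read path described above can be sketched in a few lines of shell; fetch_from_source is a stand-in for the real data source, and the file-per-key cache is purely illustrative:

```shell
# Illustrative cache-aside read path: check the cache, fall back to the source
CACHE_DIR=$(mktemp -d)

fetch_from_source() {            # stand-in for the slow remote data source
  echo "value-for-$1"
}

cache_get() {
  local key="$1" file="$CACHE_DIR/$1"
  if [ -f "$file" ]; then
    cat "$file"                                # cache hit: serve directly
  else
    fetch_from_source "$key" | tee "$file"     # miss: fetch, then populate the cache
  fi
}

cache_get image1   # first call misses and fills the cache
cache_get image1   # second call is served from the cache
```

This is exactly what the registry cache does with images: the first pull goes to the remoteurl, every later pull is a local hit.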
4.2. Read-Through
With this strategy, the application does not manage the data source and the cache itself; it delegates synchronization with the data source to the cache provider. All data access goes through the abstract cache layer.
Read-through suits scenarios where the same data is requested many times.
Pros
- reduces the load on the data source under heavy reads
- also somewhat resilient to cache service failures
Cons
- the first request for an item still misses the cache, which can again be solved by cache pre-warming
Compared with cache-aside, the actual operations on the cache and the data source are carried out by the cache provider.
4.3. Write-Through
With this strategy, when data is updated, the cache provider updates both the data source and the cache. The cache stays consistent with the data source, and writes always reach the data source through the abstract cache layer.
Because data must be written synchronously to both the cache and the data source, writes are slower. Combined with read-through, however, we get all of read-through's benefits plus a data-consistency guarantee.
4.4. Write-Behind
If strong consistency is not required, update requests can simply be queued in the cache and periodically flushed to the data store.
With write-behind, an update is written to the cache only.
Pros
- fast writes, suited to write-heavy workloads
- combined with read-through, works well for mixed workloads: the most recently updated and accessed data is always available in the cache
- resilient to data-source failures and can tolerate some data-source downtime
Cons
- if updated cache data has not yet been written to the data source (e.g. after a power failure), that data is lost
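A minimal write-behind sketch in the same illustrative style: writes land only in the cache plus a pending queue, and a separate flush step later drains the queue to the (simulated) data store:

```shell
# Simulated cache, data store, and pending-write queue (all illustrative)
WB_CACHE=$(mktemp -d)
WB_STORE=$(mktemp -d)
WB_QUEUE="$WB_CACHE/.queue"
: > "$WB_QUEUE"

wb_write() {                       # write-behind: update the cache only, remember the key
  echo "$2" > "$WB_CACHE/$1"
  echo "$1" >> "$WB_QUEUE"
}

wb_flush() {                       # periodic flush: replay queued writes against the store
  while read -r key; do
    cp "$WB_CACHE/$key" "$WB_STORE/$key"
  done < "$WB_QUEUE"
  : > "$WB_QUEUE"                  # queue drained; a crash before this point loses the writes
}

wb_write imageA v1                 # fast: touches the cache only
wb_flush                           # later: the data store catches up
```

The window between wb_write and wb_flush is precisely where the power-failure data loss mentioned above can occur.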