Preface: I recently started tinkering with OpenStack, and while deploying it with RDO I noticed that Redis gets installed. So I decided to look into where OpenStack uses (or can use) Redis.


  •  Redis as the session storage backend for the OpenStack Dashboard
    As of now (Mitaka; Mitaka is assumed for the rest of this post), the OpenStack Dashboard supports the following three session storage backends:
    Local memory cache
    Key-value store (Memcached, Redis)
    Database (MySQL/MariaDB)
    Local memory cache is the simplest and fastest, but its drawbacks are just as obvious: sessions cannot be shared between processes or workers, and they are lost when the process exits. A database backend is the slowest of the three, but it is scalable and persistent. A key-value store sits between the two in speed and is also scalable, which makes it a good fit for small-scale deployments. The following configures Redis as the session storage backend.

    1. Install the dependencies: redis, django-redis.
    2. Edit the local_settings file: /etc/openstack-dashboard/local_settings.

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        "default": {
            "BACKEND": "redis_cache.cache.RedisCache",
            "LOCATION": "127.0.0.1:6379:1",
            "OPTIONS": {
                "CLIENT_CLASS": "redis_cache.client.DefaultClient",
            }
        }
    }

    If your django-redis version is 3.8.0 or later, the configuration should look like this instead (https://niwinz.github.io/django-redis/latest/#_configure_as_cache_backend):

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    CACHES = {
        'default': {
            'BACKEND': 'django_redis.cache.RedisCache',
            'LOCATION': 'redis://127.0.0.1:6379/1',
            'OPTIONS': {
                'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            }
        }
    }
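
    A quick way to sanity-check either configuration (a minimal sketch, assuming the settings above are loaded) is to exercise the backend from a Django shell; anything written through Django's cache API should land in Redis DB 1:

    # Hypothetical smoke test, not part of Horizon; run inside a Django shell.
    from django.core.cache import cache

    cache.set('redis-smoke-test', 'ok', timeout=60)   # write a throwaway key
    print(cache.get('redis-smoke-test'))              # 'ok' if Redis is reachable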

    3. Restart httpd/apache, log in to the Dashboard, and then inspect the keys in Redis:
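    For example, a minimal redis-py sketch (the exact key prefixes depend on the django-redis version, so the pattern below is deliberately broad):

    # Hypothetical check for a small dev box; KEYS is O(N), so avoid it
    # on a large production instance.
    import redis

    r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)
    for key in r.keys('*'):
        print(key)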

  •  Redis as the token storage backend for OpenStack Keystone
    Keystone currently supports three token storage backends:
    MySQL (MariaDB)
    Memcached
    Redis

    With MySQL the token table grows without bound, so expired tokens have to be purged periodically (e.g. with keystone-manage token_flush). Memcached's problem is its fixed cache size, which is awkward to grow. Redis, by comparison, is a good choice; the steps to configure it follow:
    1. Install the dependency: redis.
    2. Edit keystone.conf:
    [cache]
    enabled=true
    expiration_time=600
    backend=dogpile.cache.redis
    backend_argument=url:redis://127.0.0.1:6379/2

    [token]
    caching=true
    driver = keystone.token.persistence.backends.kvs.Token
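
    For reference, the backend/backend_argument pair above is handed to the dogpile.cache library; the sketch below shows roughly what that amounts to (the region name and the decorated function are invented for illustration, not Keystone code):

    # Rough dogpile.cache equivalent of the [cache] settings above.
    from dogpile.cache import make_region

    region = make_region().configure(
        'dogpile.cache.redis',                          # backend
        expiration_time=600,                            # expiration_time
        arguments={'url': 'redis://127.0.0.1:6379/2'},  # backend_argument
    )

    @region.cache_on_arguments()
    def validate_token(token_id):
        # Hypothetical expensive lookup; its result is cached in Redis DB 2.
        return {'token': token_id}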

    3. Restart keystone (httpd) and inspect the keys in Redis:
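    For example (key names are generated by dogpile.cache and vary by release, so this sketch just lists everything in DB 2):

    import redis

    r = redis.StrictRedis(host='127.0.0.1', port=6379, db=2)
    for key in r.scan_iter('*'):  # SCAN-based iteration, kinder than KEYS
        print(key)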


  •  Redis as the coordination backend between multiple OpenStack Telemetry agent instances

After an RDO install, Redis is already the default coordination backend; see /etc/ceilometer/ceilometer.conf:

[coordination]

#
# From ceilometer
#

# The backend URL to use for distributed coordination. If left empty, per-
# deployment central agent and per-host compute agent won't do workload
# partitioning and will only function correctly if a single instance of that
# service is running. (string value)
#backend_url = <None>
backend_url = redis://9.114.112.108:6379

# Number of seconds between heartbeats for distributed coordination. (floating
# point value)
#heartbeat = 1.0

# Number of seconds between checks to see if group membership has changed
# (floating point value)
#check_watchers = 10.0
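
Ceilometer consumes this coordination backend through the tooz library. The sketch below (the member and group ids are invented for illustration, not what ceilometer actually registers) shows the primitive that workload partitioning is built on: each agent joins a group, heartbeats keep its membership alive, and membership changes are watched so the polling workload can be re-split among the members:

# Sketch of the tooz primitive behind [coordination]/backend_url.
from tooz import coordination

coordinator = coordination.get_coordinator(
    'redis://9.114.112.108:6379', b'central-agent-1')
coordinator.start()

# Create the group if needed, then join it.
try:
    coordinator.create_group(b'central-agents').get()
except coordination.GroupAlreadyExist:
    pass
coordinator.join_group(b'central-agents').get()

coordinator.heartbeat()  # cf. heartbeat = 1.0 above; agents do this periodically

coordinator.leave_group(b'central-agents').get()
coordinator.stop()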