I have an API that currently does not use any caching. I do have one piece of middleware that generates cache headers (Cache-Control, Expires, ETag, Last-Modified), using the https://github.com/KevinDockx/HttpCacheHeaders library. It does not store anything; it only generates the headers.
When an If-None-Match header is passed with the API request, the middleware compares the incoming ETag value to the currently generated value and, if they match, responds with 304 Not Modified (httpContext.Response.StatusCode = StatusCodes.Status304NotModified;).
I'm using a Redis cache and I'm not sure how to implement cache invalidation. I use the Microsoft.Extensions.Caching.Redis package in my project. I installed Redis locally and use it in my controller as below:
[AllowAnonymous]
[ProducesResponseType(200)]
[Produces("application/json", "application/xml")]
public async Task<IActionResult> GetEvents([FromQuery] ParameterModel model)
{
    var cachedEvents = await _cache.GetStringAsync("events");
    IEnumerable<Event> events = null;
    if (!string.IsNullOrEmpty(cachedEvents))
    {
        events = JsonConvert.DeserializeObject<IEnumerable<Event>>(cachedEvents);
    }
    else
    {
        events = await _eventRepository.GetEventsAsync(model);
        string item = JsonConvert.SerializeObject(events, new JsonSerializerSettings()
        {
            ReferenceLoopHandling = ReferenceLoopHandling.Ignore
        });
        await _cache.SetStringAsync("events", item);
    }
    var eventsToReturn = _mapper.Map<IEnumerable<EventViewDto>>(events);
    return Ok(eventsToReturn);
}
Note that _cache here is an IDistributedCache. This works in that the second request hits the cache. But when the Events I am fetching are modified, it does not take the modified values into account; it serves up the same value without doing any validation.
My middleware is set up as: Cache Header Middleware -> MVC. So the cache-header pipeline first compares the ETag value sent by the client and either forwards the request to MVC or short-circuits it with a 304 Not Modified response.
My plan was to add a piece of middleware prior to the cache header one (i.e. My Middleware -> Cache Header Middleware -> MVC) and wait for a response back from the cache header middleware and check if the response is a 304. If 304, go to the cache and retrieve the response. Otherwise update the response in the cache.
Is this the ideal way of doing cache invalidation? Is there a better way of doing it? With above method, I'll have to inspect each 304 response, determine the route, and have some sort of logic to verify what cache key to use. Not sure if that is the best approach.
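A rough sketch of what I had in mind follows. The key derivation here is just a placeholder, since that is exactly the part I am unsure about:

```csharp
// Hypothetical wrapper middleware, registered before the cache-header middleware.
public class CacheGateMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IDistributedCache _cache;

    public CacheGateMiddleware(RequestDelegate next, IDistributedCache cache)
    {
        _next = next;
        _cache = cache;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        await _next(context); // cache-header middleware and MVC run here

        // Placeholder key derivation -- this is the part that needs real logic.
        var cacheKey = context.Request.Path.ToString();

        if (context.Response.StatusCode == StatusCodes.Status304NotModified)
        {
            var cached = await _cache.GetStringAsync(cacheKey);
            // ...serve the cached representation
        }
        else
        {
            // ...capture the response body and update the cache entry
        }
    }
}
```

One complication: by the time the inner pipeline returns, the response body may already have been written to the client, so inspecting or replacing it would require response buffering.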
If you can provide some guidelines and documentation/tutorials on cache invalidation, that would be really helpful.
Here is a guideline based on how a service I support uses cache invalidation on a CQRS system.
The command system receives create, update, delete requests from clients. The request is applied to Origin. The request is broadcast to listeners.
A separate invalidation service exists and subscribes to the change list. When a command event is received, the configured distributed caches are examined for the item in the event. A couple of different actions are taken based on the particular system.
The first option is that the Invalidation service removes the item from the distributed cache. Consumers of the services sharing that cache will subsequently suffer a cache miss, retrieve the item from storage, and add the latest version of the item to the distributed cache. In this scenario there is a race condition between all of the discrete machines in the services, and Origin may receive multiple requests for the same item in a short window. If the item is expensive to retrieve, this can strain Origin. But the invalidation logic is very simple.
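A minimal sketch of this first option, assuming a hypothetical CommandEvent type and key scheme:

```csharp
// On a command event, delete the item from the shared distributed cache;
// consumers repopulate it from storage on their next cache miss.
public class InvalidationService
{
    private readonly IDistributedCache _cache;

    public InvalidationService(IDistributedCache cache) => _cache = cache;

    public Task HandleAsync(CommandEvent evt)
        => _cache.RemoveAsync($"item:{evt.ItemId}"); // key scheme is an assumption

}

public class CommandEvent
{
    public string ItemId { get; set; }
}
```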
The second option is that the Invalidation service makes a request to one of the services using the same distributed cache and asks it to ignore the cache and get the latest version of the item from Origin. This addresses the potential spike from multiple discrete machines calling Origin. But it means the Invalidation service is more tightly coupled to the other services, and each service now has an API that allows a caller to bypass its caching strategy. Access to that uncached API would need to be restricted to the Invalidation service and other authorized callers.
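The second option might look like a secured refresh endpoint on one of the services. The route, policy name, and repository are illustrative, not a prescribed API:

```csharp
// Hypothetical endpoint that bypasses the cache, reads from Origin, and
// rewrites the distributed cache entry. It must be locked down to the
// Invalidation service and other authorized callers.
[Authorize(Policy = "InvalidationCallers")]
[HttpPost("items/{id}/refresh")]
public async Task<IActionResult> RefreshItem(string id)
{
    var item = await _originRepository.GetItemAsync(id);   // straight from Origin
    await _cache.SetStringAsync($"item:{id}",
        JsonConvert.SerializeObject(item));                // overwrite the cached copy
    return NoContent();
}
```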
In either case, all of the discrete machines that use the same redis database also subscribe to the command change list. Any individual machine simply processes changes locally by removing items from its local cache; it is not an error if the item is not present. The item will be refreshed from redis or Origin on the next request. For hot items, this means multiple requests to Origin could still be issued by any machine that has removed the hot item before redis has been updated. It can be beneficial for the discrete machines to locally cache an "item being retrieved" task that all subsequent requests can await, rather than each calling Origin.
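The "item being retrieved" task mentioned above can be sketched with a per-key in-flight table; the string payload type and fetch delegate are illustrative:

```csharp
// The first caller for a key starts the Origin fetch; concurrent callers
// await the same task instead of each hitting Origin.
public class CoalescingFetcher
{
    private readonly ConcurrentDictionary<string, Lazy<Task<string>>> _inFlight =
        new ConcurrentDictionary<string, Lazy<Task<string>>>();

    public async Task<string> GetAsync(string key, Func<string, Task<string>> fetchFromOrigin)
    {
        var lazy = _inFlight.GetOrAdd(key,
            k => new Lazy<Task<string>>(() => fetchFromOrigin(k)));
        try
        {
            return await lazy.Value;
        }
        finally
        {
            _inFlight.TryRemove(key, out _); // allow a fresh fetch next time
        }
    }
}
```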
In addition to the discrete machines and a shared redis, the invalidation logic also extends to Akamai and similar content delivery networks. Once the redis cache has been invalidated, the invalidation routine uses the CDN APIs to flush the item. Akamai is fairly well behaved and, if configured correctly, makes a relatively small number of calls to Origin for the updated item. Ideally the service has already retrieved the item and copies exist in both the discrete machines' local caches and the shared redis. If not anticipated and designed for correctly, CDN invalidation can be another source of request spikes.
Among the discrete machines sharing redis, a design that uses redis to indicate that an item is being refreshed can also shield Origin from multiple requests for the same item. A simple counter, whose key is based on the item ID and the current time interval (rounded to the nearest minute, 30 seconds, etc.), can be driven by the redis INCR command: the machine that gets the count of 1 accesses Origin while all others wait.
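With StackExchange.Redis, that counter could be sketched like this; the key scheme and interval length are illustrative:

```csharp
// Only the machine whose INCR returns 1 refreshes the item from Origin;
// the rest wait for the refreshed value to show up in redis.
var db = redis.GetDatabase();
var interval = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 30;   // 30-second bucket
var counterKey = $"refreshing:{itemId}:{interval}";              // assumed key scheme

var count = await db.StringIncrementAsync(counterKey);
await db.KeyExpireAsync(counterKey, TimeSpan.FromMinutes(2));    // let stale counters die

if (count == 1)
{
    // this machine calls Origin and writes the fresh item back to redis
}
else
{
    // other machines wait (or await a local in-flight task) for the refresh
}
```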
Finally, for hot items, it can be helpful to attach a Time To Refresh value to the item. If all payloads have a wrapper similar to the one below, then when an item is retrieved and its refresh time has passed, the caller performs a background refresh of the item. For hot items this means they will be refreshed into cache before their expiration. For a system with heavy reads and low volumes of writes, caching items for an hour with a refresh time of something less than an hour means hot items will generally stay in redis.
Here is a sample wrapper for cached items. In all cases it is assumed that the caller knows the type T based on the item key being requested. The actual payload written to redis is assumed to be a serialized, and possibly gzipped, byte array. The SchemaVersion provides a hint as to how the redis string was created.
interface CacheItem<T>
{
    string Key { get; }
    DateTimeOffset ExpirationTime { get; }
    DateTimeOffset TimeToRefresh { get; }
    int SchemaVersion { get; }
    T Item { get; }
}
When storing: var redisString = Gzip.Compress(NetDataContractSerializer.Serialize(cacheItem))
When retrieving, the item is recreated by the complementary uncompress and deserialize methods.
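Putting the wrapper to use, the read path with a background refresh might be sketched as follows; the helper methods are illustrative placeholders:

```csharp
// Serve the cached item immediately; if its TimeToRefresh has passed but it
// has not yet expired, kick off a background refresh so it never goes cold.
public async Task<T> GetWithRefreshAsync<T>(string key)
{
    CacheItem<T> cached = await ReadAndDeserializeAsync<T>(key); // uncompress + deserialize
    if (cached == null)
        return await RefreshFromOriginAsync<T>(key);             // miss: fetch synchronously

    if (DateTimeOffset.UtcNow >= cached.TimeToRefresh)
    {
        _ = Task.Run(() => RefreshFromOriginAsync<T>(key));      // fire-and-forget refresh
    }
    return cached.Item;
}
```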