Problem description
I have a bunch of servers, each with multiple instances, accessing a resource that has a hard limit on requests per second.
I need a mechanism that limits access to this resource for all servers and instances that are running.
I found a RESTful distributed lock manager on GitHub: https://github.com/thefab/restful-distributed-lock-manager
Unfortunately there seems to be a minimum lock time of 1 second, and it is relatively unreliable: in several tests it took between 1 and 3 seconds to unlock a 1-second lock.
Is there something well tested, with a Python interface, that I can use for this purpose?
I need something that auto-unlocks in under 1 second. The lock will never be released explicitly in my code.
Recommended answer
My first idea was to use Redis. But there are other great tools, some even lighter, so my solution builds on zmq. That way you do not have to run Redis; running a small Python script is enough.
Let me review your requirements before describing the solution.
- limit the number of requests to some resource within a fixed period of time
- resource (auto) unlocking shall happen in less than 1 second
- it shall be distributed; I will assume you mean that multiple distributed servers consuming the resource shall be served, and that it is fine to have just one locker service (more on this in the conclusions below)
A timeslot can be one second, several seconds, or a shorter period; the only limitation is the precision of time measurement in Python. If your resource has a hard limit defined per second, use a timeslot of 1.0.

With the first request for access to the resource, set the start time of the next timeslot and initialize the request counter. With each request, increase the request counter (for the current timeslot) and allow the request unless you have reached the maximum number of allowed requests in the current timeslot.
Your consuming servers could be spread across multiple computers. To provide access to the LockerServer, you will use zmq.
zmqlocker.py:
import time
import zmq


class Locker:
    """Counts requests per timeslot and answers "go" or "sorry"."""

    def __init__(self, max_requests=1, in_seconds=1.0):
        self.max_requests = max_requests
        self.in_seconds = in_seconds
        self.requests = 0
        self.next_slot = time.time() + in_seconds

    def __iter__(self):
        return self

    def __next__(self):
        now = time.time()
        if now > self.next_slot:
            # The current timeslot has expired: start a new one.
            self.requests = 0
            self.next_slot = now + self.in_seconds
        if self.requests < self.max_requests:
            self.requests += 1
            return "go"
        return "sorry"

    next = __next__  # allow explicit locker.next() calls too


class LockerServer:
    """REP socket handing out one "go"/"sorry" reply per request; serves forever."""

    def __init__(self, max_requests=1, in_seconds=1.0, url="tcp://*:7777"):
        locker = Locker(max_requests, in_seconds)
        ctx = zmq.Context()
        sck = ctx.socket(zmq.REP)
        sck.bind(url)
        while True:
            sck.recv_string()  # the request content does not matter
            sck.send_string(locker.next())


class LockerClient:
    """REQ socket asking the locker service for permission to proceed."""

    def __init__(self, url="tcp://localhost:7777"):
        ctx = zmq.Context()
        self.sck = ctx.socket(zmq.REQ)
        self.sck.connect(url)

    def next(self):
        self.sck.send_string("let me go")
        return self.sck.recv_string()
Run your server:
run_server.py:
from zmqlocker import LockerServer
svr = LockerServer(max_requests=5, in_seconds=0.8)
From the command line:
$ python run_server.py
This starts the locker service on the default port 7777 on localhost.
run_client.py:
from zmqlocker import LockerClient
import time

locker_cli = LockerClient()
for i in range(100):
    print(time.time(), locker_cli.next())
    time.sleep(0.1)
From the command line:
$ python run_client.py
You will see "go", "go", "sorry", ... responses printed.
Try running more clients.
You may start the clients first and the server later; the clients will block until the server is up and will then run happily.
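If you want a client that blocks until it is allowed to proceed, a small helper can be layered on top of LockerClient. This is just a usage sketch, not part of the lock manager itself; the wait_for_go name and the retry delay are arbitrary illustrative choices:

from zmqlocker import LockerClient
import time

def wait_for_go(client, retry_delay=0.05):
    # Poll the locker service until it grants a slot ("go").
    while client.next() != "go":
        time.sleep(retry_delay)

locker_cli = LockerClient()
wait_for_go(locker_cli)
# ... now hit the rate-limited resource exactly once ...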
Conclusions:
- it fulfills the described requirements
- the number of requests is limited
- there is no need to unlock; more requests are allowed as soon as the next timeslot becomes available
- the LockerService is available over the network or via local sockets
On the other hand, you may find that the limits of your resource are not as predictable as you assume, so be prepared to play with the parameters to find the proper balance, and always be prepared for exceptions from this side.
There is also some room for optimizing how the "locks" are granted: e.g. if the locker has run out of allowed requests but the current timeslot is almost over, you might consider holding the "sorry" back a little and, after a fraction of a second, replying "go", as in the sketch below.
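A minimal sketch of that optimization, built on the Locker class above. The PatientLocker name and the 0.1-second threshold are my own illustrative choices, and note that sleeping in the single-threaded REP loop also delays other clients:

from zmqlocker import Locker
import time

class PatientLocker(Locker):
    # Near the end of an exhausted timeslot, wait for the slot to roll
    # over and answer "go" instead of "sorry".
    def __next__(self):
        remaining = self.next_slot - time.time()
        if self.requests >= self.max_requests and 0 < remaining < 0.1:
            time.sleep(remaining)  # sleep into the next timeslot
        return super().__next__()

    next = __next__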
By "distributed" we might also understand multiple locker servers running together. This is more difficult, but also possible: zmq makes it very easy to connect to multiple URLs, so clients could easily connect to several locker servers. The question is how to coordinate the locker servers so that they do not allow too many requests to your resource. zmq allows inter-server communication. One model could be that each locker server publishes each granted "go" via PUB/SUB; all other locker servers would be subscribed and would use each received "go" to increase their local request counter (with slightly modified logic). A rough sketch of this follows.