Does Bottle handle requests with no concurrency?

Problem Description

At first, I thought Bottle would handle requests concurrently, so I wrote the test code below:

import json
from bottle import Bottle, run, request
import time

app = Bottle()
NUMBERS = 0  # global request counter; lost updates here would reveal concurrent handling


@app.get("/test")
def test():
    id = request.query.get('id', '0')
    global NUMBERS
    n = NUMBERS
    time.sleep(0.2)  # hold the old counter value for a while before writing it back
    n += 1
    NUMBERS = n
    return id


@app.get("/status")
def status():
    return json.dumps({"numbers": NUMBERS})


run(app, host='0.0.0.0', port=8000)

Then I used jmeter to request the /test URL with 10 threads, looping 20 times.

After that, /status gave me {"numbers": 200}, which suggests that Bottle does not handle requests concurrently.
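(For reference, roughly the same load can be generated without jmeter; the following is only a sketch using the standard library, not the original jmeter setup, and assumes the app above is already running on port 8000.)

# Sketch (not the original jmeter setup): 10 worker threads, 20 requests each,
# then read /status. Assumes the app above is already running on port 8000.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def worker(i):
    for _ in range(20):
        urlopen("http://127.0.0.1:8000/test?id=%d" % i).read()


with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(worker, range(10)))

print(urlopen("http://127.0.0.1:8000/status").read())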

Am I misunderstanding something?

Update

I ran another test which I think shows that Bottle handles requests one by one (with no concurrency). I made a small change to the test function:

@app.get("/test")
def test():
    t1 = time.time()
    time.sleep(5)  # block this request for 5 seconds
    t2 = time.time()
    return {"t1": t1, "t2": t2}

When I access /test twice in a browser, I get:

{
    "t2": 1415941221.631711,
    "t1": 1415941216.631761
}
{
    "t2": 1415941226.643427,
    "t1": 1415941221.643508
}

Recommended Answer

Concurrency isn't a function of your web framework -- it's a function of the web server you use to serve it. Since Bottle is WSGI-compliant, you can serve Bottle apps through any WSGI server:

  • wsgiref (the reference server in the Python stdlib) will give you no concurrency.
  • CherryPy dispatches through a thread pool (number of simultaneous requests = number of threads it's using).
  • nginx + uwsgi gives you multiprocess dispatch and multiple threads per process.
  • Gevent gives you lightweight coroutines that, in your use case, can easily achieve C10K+ with very little CPU load (on Linux -- on Windows it can only handle 1024 simultaneous open sockets) if your app is mostly IO- or database-bound.

The latter two can serve massive numbers of simultaneous connections.
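As an illustration of the last point, a Bottle app can be served through gevent instead of the default server. This is only a sketch, assuming the gevent package is installed; Bottle registers the adapter under the name 'gevent':

# Sketch: serve a Bottle app with gevent's coroutine-based server.
# monkey.patch_all() must run first so that time.sleep() yields to other
# greenlets instead of blocking the whole process.
from gevent import monkey
monkey.patch_all()

import time
from bottle import Bottle, run

app = Bottle()


@app.get("/test")
def test():
    time.sleep(0.2)  # cooperative sleep: other requests are served meanwhile
    return "ok"


run(app, host='0.0.0.0', port=8000, server='gevent')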

According to http://bottlepy.org/docs/dev/api.html , when given no specific instructions, bottle.run uses wsgiref to serve your application, which explains why it only handles one request at a time.
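In practice, the smallest change to the original script is to pass a different server name to run(). A sketch, assuming the corresponding server package is installed:

# Sketch: run() with no server argument is equivalent to server='wsgiref',
# which handles one request at a time. Other adapters change that behaviour.
run(app, host='0.0.0.0', port=8000)                        # default: wsgiref, no concurrency
# run(app, host='0.0.0.0', port=8000, server='cherrypy')   # thread pool (needs cherrypy)
# run(app, host='0.0.0.0', port=8000, server='gevent')     # coroutines (needs gevent)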

