Problem Description
I remember 2 or 3 years ago reading a couple of articles where people claimed that modern threading libraries were getting so good that thread-per-request servers would not only be easier to write than non-blocking servers but would be faster, too. I believe this was even demonstrated in Java with a JVM that mapped Java threads to pthreads (i.e., the Java NIO overhead was greater than the context-switching overhead).
But now I see all the "cutting edge" servers use asynchronous libraries (Java NIO, epoll, even Node.js). Does this mean that async won?
Recommended Answer
Not in my opinion. If both models are well implemented (and that is a BIG requirement), I think the NIO approach should prevail.
At the heart of a computer are cores. No matter what you do, you cannot parallelize your application beyond the number of cores you have. That is, if you have a 4-core machine, you can ONLY do 4 things at a time (I'm glossing over some details here, but that suffices for this argument).
Expanding on that thought, if you ever have more threads than cores, you have waste. That waste takes two forms. First is the overhead of the extra threads themselves. Second is the time spent switching between threads. Both are probably minor, but they are there.
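To make that concrete, here is a minimal, hypothetical sketch (not from the original answer) of the CPU-side alternative to thread-per-request: cap the worker pool at `Runtime.getRuntime().availableProcessors()` and let work queue up, instead of handing every request its own thread. The busy-loop task body is just a stand-in for CPU-bound work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreSizedPool {
    public static void main(String[] args) {
        // Ask the JVM how many cores the OS exposes to this process.
        int cores = Runtime.getRuntime().availableProcessors();

        // A pool capped at the core count: CPU-bound tasks queue up instead of
        // forcing the OS to time-slice hundreds of competing threads.
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < 1000; i++) {
            final int taskId = i;
            pool.submit(() -> {
                // Placeholder for CPU-bound work; with far more threads than
                // cores, this is where context-switch overhead would show up.
                long sum = 0;
                for (long n = 0; n < 1_000_000; n++) {
                    sum += n;
                }
                return taskId + ":" + sum;
            });
        }
        pool.shutdown();
    }
}
```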
Ideally, you have a single thread per core, and each of those threads is running at 100% processing speed on their core. Task switching wouldn't occur in the ideal. Of course there is the OS, but if you take a 16 core machine and leave 2-3 threads for the OS, then the remaining 13-14 go towards your app. Those threads can switch what they are doing within your app, like when they are blocked by IO requirements, but don't have to pay that cost at the OS level. Write it right into your app.
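To illustrate what "switching within your app" looks like, below is a minimal, hypothetical single-threaded Java NIO echo server (the port and buffer size are arbitrary choices). One thread services every connection, and the "task switch" is simply the selector handing back whichever channels are ready; a production server would run one such loop per core and handle partial writes and errors properly.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        while (true) {
            // One thread multiplexes every connection: the "task switching"
            // happens here, inside the app, not via OS context switches.
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();        // peer closed the connection
                    } else if (read > 0) {
                        buffer.flip();
                        client.write(buffer);  // echo back what was received
                    }
                }
            }
        }
    }
}
```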
An excellent example of this kind of scaling is SEDA (http://www.eecs.harvard.edu/~mdw/proj/seda/), which showed much better scaling under load than a standard thread-per-request model.
My personal experience is with Netty. I had a simple app that I implemented well in both Tomcat and Netty, then load tested it with hundreds of concurrent requests (upwards of 800, I believe). Eventually Tomcat slowed to a crawl and exhibited extremely bursty/laggy behavior, whereas the Netty implementation simply saw response times increase while maintaining incredible overall throughput.
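The Netty code from that test isn't shown in the answer, but as a rough sketch of the shape such a server takes (Netty 4 API; the port and the trivial echo handler are placeholder assumptions), the event-loop model looks like this: a tiny group accepts connections while a worker group, whose default size is derived from the available cores, multiplexes all established connections.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws InterruptedException {
        // Boss group accepts connections; worker group handles all IO events.
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                            @Override
                            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                // Echo the received bytes straight back.
                                ctx.writeAndFlush((ByteBuf) msg);
                            }
                        });
                    }
                });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```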
Please note, this hinges on a solid implementation. NIO is still getting better with time. We are learning how to tune our server OSes to work better with it, as well as how to implement JVMs to better leverage that OS functionality. I don't think a winner can be declared yet, but I believe NIO will be the eventual winner, and it's doing quite well already.