boost::asio: thread-local async events

This article looks at how to handle thread-local asynchronous events with boost::asio, which may be a useful reference for anyone facing the same problem.

Problem description

I will be creating x threads in my server application, where x is the number of cores on the machine, and these threads will be bound to (non-hyperthreaded) cores. Naturally, with this scheme I would like to distribute incoming connections across the threads, with the aim of ensuring that once a connection is assigned to a thread it will only ever be served from that particular thread. How is this achieved in boost::asio?

I am thinking: a single socket bound to an address, shared by multiple io_service instances, where each thread gets its own io_service. Is this line of reasoning correct?

Edit: it looks like I am going to have to answer this question myself.

Recommended answer

Yes, your reasoning is basically correct: you would create a thread per core, an io_service instance per thread, and call io_service.run() in each thread.
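As a rough illustration of that per-core design (and of the single-acceptor idea from the question), here is a minimal sketch. It assumes the classic io_service API (renamed io_context in newer Boost releases); the port number and the session start-up are placeholders, and pinning each thread to a core would need platform-specific affinity calls not shown here.

    // Sketch: one io_service per core, each serviced by its own thread,
    // with a single acceptor handing new connections out round-robin.
    #include <boost/asio.hpp>
    #include <algorithm>
    #include <functional>
    #include <memory>
    #include <thread>
    #include <vector>

    using boost::asio::ip::tcp;

    int main()
    {
        const unsigned num_cores = std::max(1u, std::thread::hardware_concurrency());

        // One io_service per core, plus a work object so run() keeps going while idle.
        std::vector<std::unique_ptr<boost::asio::io_service>> io_services;
        std::vector<std::unique_ptr<boost::asio::io_service::work>> work;
        for (unsigned i = 0; i < num_cores; ++i) {
            io_services.emplace_back(new boost::asio::io_service);
            work.emplace_back(new boost::asio::io_service::work(*io_services[i]));
        }

        // A single acceptor; each accepted socket is created on the next io_service
        // (round-robin), so all further I/O for that connection stays on one thread.
        boost::asio::io_service accept_ios;
        tcp::acceptor acceptor(accept_ios, tcp::endpoint(tcp::v4(), 12345));

        unsigned next = 0;
        std::function<void()> do_accept = [&]() {
            auto socket = std::make_shared<tcp::socket>(*io_services[next]);
            next = (next + 1) % num_cores;
            acceptor.async_accept(*socket,
                [&, socket](const boost::system::error_code& ec) {
                    if (!ec) {
                        // start_session(socket) (placeholder) would begin async
                        // reads/writes, which run on the socket's own io_service.
                    }
                    do_accept();   // keep accepting
                });
        };
        do_accept();

        // One thread per core, each servicing only its own io_service.
        std::vector<std::thread> threads;
        for (unsigned i = 0; i < num_cores; ++i)
            threads.emplace_back([&io_services, i]() { io_services[i]->run(); });

        accept_ios.run();          // drive the acceptor on the main thread
        for (auto& t : threads)
            t.join();
    }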

However, the question is whether you would really want to do it that way. These are the problems I see:


  • You can end up with very busy cores and idle cores, depending on how the work is balanced across your connections. Micro-optimising for cache hits within a core might mean that you lose the ability to have an idle core do work when the "optimal" core is not ready.

At socket speeds (i.e. slow), how much of a win will you actually get from CPU cache hits? If one connection requires enough CPU to keep a core busy and you only have as many connections as cores, then great. Otherwise, the inability to move work around to deal with variance in the workload might destroy any win you get from cache hits. And if you are doing lots of different work in each thread, the cache isn't going to be that hot anyway.

If you're just doing I/O, the cache win might not be that big regardless; it depends on your actual workload.

My recommendation would be to have one io_service instance and call io_service.run() in one thread per core. If you then see inadequate performance, or have classes of connections where there is a lot of CPU per connection and cache wins are available, move those to specific io_service instances.
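For comparison, here is a minimal sketch of that recommended layout, again assuming the classic io_service API and a placeholder port:

    // Sketch: one shared io_service, with run() called from one thread per core.
    #include <boost/asio.hpp>
    #include <algorithm>
    #include <thread>
    #include <vector>

    using boost::asio::ip::tcp;

    int main()
    {
        boost::asio::io_service ios;
        boost::asio::io_service::work work(ios);   // keep run() alive while idle

        // The acceptor and every per-connection object share the same io_service,
        // so any free pool thread can pick up the next ready handler.
        tcp::acceptor acceptor(ios, tcp::endpoint(tcp::v4(), 12345));
        // ... start async_accept / per-connection sessions on `ios` here ...

        std::vector<std::thread> pool;
        const unsigned num_cores = std::max(1u, std::thread::hardware_concurrency());
        for (unsigned i = 0; i < num_cores; ++i)
            pool.emplace_back([&ios]() { ios.run(); });

        for (auto& t : pool)
            t.join();
    }

Note that with several threads calling run() on the same io_service, handlers for one connection can execute concurrently on different threads unless you serialise them, for example by wrapping them in a boost::asio::io_service::strand.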

This is a case where you should do profiling to see how much cache misses are actually costing you, and where.

That concludes this article on boost::asio and thread-local async events; we hope the recommended answer above is helpful.
