I'm new to this concept. Are these the same or different things? What is the difference?

I really like the idea of being able to run two processes at once, for example if I have several large files to load into my program I'd love to load as many of them simultaneously as possible instead of waiting for one at a time. And when working with a large file, such as a wav file, it would be great to break it into pieces and do processing on several chunks at once and then put them back together. What do I want to look into to learn how to do this sort of thing?

Edit: Also, I know using more than one core on a multicore processor fits in here somewhere, but apparently asynchronous programming doesn't necessarily mean you are using multiple cores? Why would you do this if you didn't have multiple cores to take advantage of?

Solution

They are related but different.

Threading, normally called multi-threading, refers to the use of multiple threads of execution within a single process. This usually refers to the simple case of using a small set of threads, each doing different tasks that need to be, or could benefit from, running simultaneously. For example, a GUI application might have one thread draw elements, another thread respond to events like mouse clicks, and another thread do some background processing. However, when the number of threads, each doing their own thing, is taken to an extreme, we usually start to talk about an agent-based approach.

The task-based approach refers to a specific strategy in software engineering where, in abstract terms, you dynamically create "tasks" to be accomplished, and these tasks are picked up by a task manager that assigns them to threads that can accomplish them. This is more of a software-architecture thing. The advantage is that the execution of the whole program becomes a succession of tasks being relayed (task A finished -> trigger task B; when both task B and task C are done -> trigger task D; etc.), instead of one big function or program that executes each task one after the other. This gives flexibility when it is unclear which tasks will take more time than others, and when tasks are only loosely coupled. It is usually implemented with a thread pool (threads waiting to be assigned a task) and some message-passing interface (MPI) to communicate data and task "contracts".
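As a concrete illustration of the "tasks being relayed" idea, here is a minimal sketch using std::async and std::future from the C++ standard library. It is a simplification: a real task-based framework would schedule the tasks onto a thread pool and wire up the triggers for you, and taskA, taskB, and taskC here are hypothetical placeholders for real work.

```cpp
#include <future>
#include <iostream>
#include <string>

// Hypothetical stand-ins for real units of work.
std::string taskA() { return "input prepared"; }
int taskB() { return 2; }
int taskC() { return 3; }

int main() {
    // Task A is launched asynchronously; std::launch::async runs it on its own
    // thread (a task manager would normally hand it to a pooled thread instead).
    std::future<std::string> a = std::async(std::launch::async, taskA);

    // "Task A finished -> trigger task B and C": here we wait on A's future
    // explicitly, whereas a framework would fire the follow-up tasks for us.
    std::cout << a.get() << "\n";
    std::future<int> b = std::async(std::launch::async, taskB);
    std::future<int> c = std::async(std::launch::async, taskC);

    // "When both task B and task C are done -> trigger task D."
    int d = b.get() + c.get();
    std::cout << "task D result: " << d << "\n";
}
```

The same shape also previews the asynchronous style discussed next: a future is exactly a "start some work and tell me when it's done" handle.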
Asynchronous programming does not refer to multi-threaded programming, although the two are very often associated (and work well together). A synchronous program must complete each step before moving on to the next. An asynchronous program starts a step, moves on to other steps that don't require the result of the first step, then checks on the result of the first step when it is required.

That is, a synchronous program might go a little bit like this: "do this task", "wait until it's done", "do something with the result", "move on to something else". By contrast, an asynchronous program might go a little more like this: "I'm going to start a task, and I'll need the result later, but I don't need it just now", "in the meantime, I'll do something else", "now I can't do anything else until I have the result of the first step, so I'll wait for it if it isn't ready yet", "move on to something else".

Notice that "asynchronous" refers to a very broad concept that always involves some form of "start some work and tell me when it's done" instead of the traditional "do it now!". This does not require multi-threading, in which case it is just a software design choice (one that often involves callback functions and the like to provide "notification" of the asynchronous result). With multiple threads, it becomes more powerful, as you can do various things in parallel while the asynchronous task is working. Taken to the extreme, it can become a more full-blown architecture like the task-based approach (which is one kind of asynchronous programming technique).

I think what you want corresponds more to yet another concept: parallel computing (or parallel processing). This approach is more about splitting a large processing task into smaller parts, processing all parts in parallel, and then combining the results. You should look into libraries like OpenMP or OpenCL/CUDA (for GPGPU). That said, you can also use multi-threading for parallel processing.

"but apparently asynchronous programming doesn't necessarily mean you are using multiple cores?"

Asynchronous programming does not necessarily involve anything happening concurrently in multiple threads. It can mean that the OS is doing things on your behalf behind the scenes (and will notify you when that work is finished), as in asynchronous I/O, which happens without you creating any threads. It boils down to a software design choice.

"Why would you do this if you didn't have multiple cores to take advantage of?"

If you don't have multiple cores, multi-threading can still improve performance by reusing "waiting time" (e.g., not "blocking" the processing while waiting on file or network I/O, or waiting for the user to click a mouse button). That means the program can do useful work while it waits on those things. Beyond that, multi-threading can provide flexibility in the design and make things seem to run simultaneously, which often makes users happier. Still, you are correct that before multi-core CPUs there wasn't as much of an incentive to do multi-threading, because the gains often did not justify the overhead.
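To connect this back to the wav-file part of your question, here is a minimal sketch of the split/process/recombine pattern using plain std::thread, with no particular audio library assumed. processChunk and the gain operation are hypothetical placeholders for whatever per-chunk work you actually need; with OpenMP the same loop could be parallelized with a single #pragma omp parallel for.

```cpp
#include <cstddef>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical per-chunk work, standing in for real audio processing
// (e.g., applying a gain to one block of decoded wav samples).
void processChunk(std::vector<float>& samples, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        samples[i] *= 0.5f;  // halve the volume of this chunk
}

int main() {
    std::vector<float> samples(1000000, 1.0f);  // pretend this is decoded audio

    std::size_t numThreads = std::thread::hardware_concurrency();
    if (numThreads == 0) numThreads = 4;        // hardware_concurrency() may report 0
    const std::size_t chunk = samples.size() / numThreads;

    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < numThreads; ++t) {
        const std::size_t begin = t * chunk;
        // The last worker takes any remainder so every sample is covered exactly once.
        const std::size_t end = (t + 1 == numThreads) ? samples.size() : begin + chunk;
        workers.emplace_back(processChunk, std::ref(samples), begin, end);
    }

    for (auto& w : workers) w.join();  // "put the pieces back together"

    std::cout << "processed " << samples.size() << " samples on "
              << numThreads << " threads\n";
}
```

Loading several files at once, the other half of your question, follows the same shape: start one load per file (each on its own thread, or via asynchronous I/O), then wait for all of them before moving on.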