Question
I am about to write some computationally-intensive Python code that'll almost certainly spend most of its time inside numpy's linear algebra functions.
The problem at hand is embarrassingly parallel. Long story short, the easiest way for me to take advantage of that would be by using multiple threads. The main barrier is almost certainly going to be the Global Interpreter Lock (GIL).
To help design this, it would be useful to have a mental model for which numpy operations can be expected to release the GIL for their duration. To this end, I'd appreciate any rules of thumb, dos and don'ts, pointers etc.
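As a rough sketch of that mental model: NumPy operations that hand large arrays to compiled BLAS/LAPACK routines (e.g. `np.dot`) typically release the GIL while the native code runs, so several threads can make progress at once. A minimal experiment along those lines, assuming a NumPy built against a BLAS such as MKL (the worker function and sizes here are illustrative):

```python
import threading
import numpy as np

def worker(a, b, out, i):
    # np.dot on large float arrays dispatches to BLAS; the GIL is
    # released for the duration of the native call, so these threads
    # can run the multiplications concurrently.
    out[i] = np.dot(a, b)

n = 200
rng = np.random.RandomState(0)
a = rng.rand(n, n)
b = rng.rand(n, n)
results = [None] * 4

threads = [threading.Thread(target=worker, args=(a, b, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The inputs are shared read-only, so all four products agree.
assert all(np.allclose(r, results[0]) for r in results)
```

Whether this actually scales across cores depends on the BLAS build; tiny arrays, pure-Python loops, and fancy indexing spend their time holding the GIL, so they will not benefit.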
In case it matters, I'm using 64-bit Python 2.7.1 on Linux, with numpy 1.5.1 and scipy 0.9.0rc2, built with Intel MKL 10.3.1.
Answer
You will probably find answers to your questions on the official wiki.
Also, have a look at this recipe page -- it contains example code on how to use NumPy with multiple threads.
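In the spirit of that recipe, one common pattern is to farm independent linear-algebra problems out to a pool of threads; `multiprocessing.pool.ThreadPool` exists in both Python 2.7 and 3, and threads share the process's arrays without copying. A minimal sketch (the task function and problem sizes are illustrative, not from the recipe):

```python
import numpy as np
from multiprocessing.pool import ThreadPool  # thread-based pool; shares memory

def solve_one(seed):
    # Each task is independent: build a well-conditioned system and solve it.
    # np.linalg.solve goes through LAPACK, which releases the GIL while it runs.
    rng = np.random.RandomState(seed)
    m = rng.rand(100, 100) + 100.0 * np.eye(100)  # diagonally dominant
    rhs = rng.rand(100)
    return np.linalg.solve(m, rhs)

pool = ThreadPool(4)
solutions = pool.map(solve_one, range(8))
pool.close()
pool.join()
```

Note that if your BLAS is itself multithreaded (as MKL is by default), its internal threads can compete with yours; pinning the BLAS to one thread per call is often worth trying when parallelizing at this level.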