I have two np.ndarrays:


a is an array of shape (13000, 8, 315000) and dtype uint8
b is an array of shape (8,) and dtype float32


I want to multiply each slice along the second dimension (of size 8) by the corresponding element of b, then sum along that dimension (i.e. a dot product along the second axis). The result will have shape (13000, 315000).
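For reference, the target operation written with plain broadcasting looks like the sketch below (illustrative only, on small dummy arrays with the same layout; at full size this form materializes a huge float temporary, so it is not a practical candidate here):

import numpy as np

# Broadcast-multiply each slice along axis 1 by the matching element of b,
# then sum over that axis
a_demo = np.random.randint(0, 9, (10, 8, 20)).astype(np.uint8)
b_demo = np.random.rand(8).astype(np.float32)
out_demo = (a_demo * b_demo[None, :, None]).sum(axis=1)   # shape (10, 20)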

I have come up with two approaches:


np.einsum('ijk,j->ik', a, b): %timeit gives 49 s ± 12.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
np.dot(a.transpose(0, 2, 1), b): %timeit gives 1min 8s ± 3.54 s per loop (mean ± std. dev. of 7 runs, 1 loop each)


Is there a faster option?

Additional information

np.show_config() returns:

blas_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    language = c
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    language = c
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
blis_info:
  NOT AVAILABLE
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    language = c
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
    libraries = ['openblas', 'openblas']
    language = c
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]


a.flags

C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False


b.flags

C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : True
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
UPDATEIFCOPY : False

Best answer

We can leverage multi-core processing with the numexpr module, which handles large data with good memory efficiency, to improve performance -

import numexpr as ne

# Expose the 8 slices of a along axis 1 and the matching scalars of b
# as named variables for the numexpr expression
d = {'a0':a[:,0],'b0':b[0],'a1':a[:,1],'b1':b[1],\
     'a2':a[:,2],'b2':b[2],'a3':a[:,3],'b3':b[3],\
     'a4':a[:,4],'b4':b[4],'a5':a[:,5],'b5':b[5],\
     'a6':a[:,6],'b6':b[6],'a7':a[:,7],'b7':b[7]}
eval_str = 'a0*b0 + a1*b1 + a2*b2 + a3*b3 + a4*b4 + a5*b5 + a6*b6 + a7*b7'
out = ne.evaluate(eval_str,d)
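As a side note, numexpr multi-threads the evaluation by default; the thread count can be tuned explicitly if needed, roughly along these lines (a small sketch, not required for the timings below):

import numexpr as ne

# set_num_threads returns the previous setting;
# detect_number_of_cores queries the available cores
prev_nthreads = ne.set_num_threads(ne.detect_number_of_cores())
out = ne.evaluate(eval_str, d)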


Timings on a sample run -

In [474]: # Setup with ~10x smaller than posted one, as my system can't handle those
     ...: np.random.seed(0)
     ...: a = np.random.randint(0,9,(1000,8,30000)).astype(np.uint8)
     ...: b = np.random.rand(8).astype(np.float32)

In [478]: %timeit np.einsum('ijk,j->ik', a, b)
1 loop, best of 3: 247 ms per loop

# einsum with optimize flag set as True
In [479]: %timeit np.einsum('ijk,j->ik', a, b, optimize=True)
1 loop, best of 3: 248 ms per loop

In [480]: d = {'a0':a[:,0],'b0':b[0],'a1':a[:,1],'b1':b[1],\
     ...:      'a2':a[:,2],'b2':b[2],'a3':a[:,3],'b3':b[3],\
     ...:      'a4':a[:,4],'b4':b[4],'a5':a[:,5],'b5':b[5],\
     ...:      'a6':a[:,6],'b6':b[6],'a7':a[:,7],'b7':b[7]}

In [481]: eval_str = 'a0*b0 + a1*b1 + a2*b2 + a3*b3 + a4*b4 + a5*b5 + a6*b6 + a7*b7'

In [482]: %timeit ne.evaluate(eval_str,d)
10 loops, best of 3: 94.3 ms per loop


~2.6x improvement there.

A better (less error-prone) and more generic way to create the evaluation parts would be like so:

d = {'a'+str(i):a[:,i] for i in range(8)}
d.update({'b'+str(i):b[i] for i in range(8)})
eval_str = ' + '.join(['a'+str(i)+'*'+'b'+str(i) for i in range(8)])
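For example, the generated expression and dictionary can then be evaluated and sanity-checked against the einsum result (a small sketch; np.allclose rather than exact equality, since the intermediate dtypes may differ):

# Evaluate with the generated names and verify against the einsum version
out = ne.evaluate(eval_str, d)
assert np.allclose(out, np.einsum('ijk,j->ik', a, b))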

Related question on Stack Overflow: python - How to optimize a dot product along a small dimension in numpy? https://stackoverflow.com/questions/51949163/
