Why is numpy.linalg.norm slower when called many times on small-sized data?

```python
import numpy as np
from datetime import datetime
import math

def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

def foo(a, b, f):
    l = range(a)
    s = datetime.now()
    for i in range(b):
        f(l)
    e = datetime.now()
    return e - s

foo(10**4, 10**5, norm)
foo(10**4, 10**5, np.linalg.norm)
foo(10**2, 10**7, norm)
foo(10**2, 10**7, np.linalg.norm)
```

I got the following output:

```
0:00:43.156278
0:00:23.923239
0:00:44.184835
0:01:00.343875
```

It seems like when np.linalg.norm is called many times for small-sized data, it runs slower than my norm function. What is the cause of that?

Solution

First of all: datetime.now() isn't appropriate for measuring performance. It measures wall time, so you may just pick a bad moment (for your computer) when a high-priority process runs or Python's GC kicks in. Python has dedicated timing functions/modules: the built-in timeit module, %timeit in IPython/Jupyter, and several external modules (like perf).

Let's see what happens if I use these on your data:

```
import numpy as np
import math

def norm(l):
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

r1 = range(10**4)
r2 = range(10**2)

%timeit norm(r1)
3.34 ms ± 150 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit np.linalg.norm(r1)
1.05 ms ± 3.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit norm(r2)
30.8 µs ± 1.53 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

%timeit np.linalg.norm(r2)
14.2 µs ± 313 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```

np.linalg.norm isn't slower for short iterables; it's still faster. However, note that the real advantage of NumPy functions shows up if you already have NumPy arrays:

```
a1 = np.arange(10**4)
a2 = np.arange(10**2)

%timeit np.linalg.norm(a1)
18.7 µs ± 539 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

%timeit np.linalg.norm(a2)
4.03 µs ± 157 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```

Yeah, it's quite a lot faster now: 18.7 µs vs. 1.05 ms, more than 50 times faster for 10000 elements. That means most of the time np.linalg.norm spent in your examples went into converting the range to an np.array.
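You can check that conversion cost directly. Below is a minimal sketch of my own (not from the original answer) using the built-in timeit module; it relies on the fact that np.linalg.norm passes non-array inputs through np.asarray before doing any math:

```python
import timeit

import numpy as np

r1 = range(10**4)

# np.linalg.norm converts non-array inputs with np.asarray before doing
# any math, so timing the conversion alone shows its share of the total.
convert = timeit.timeit(lambda: np.asarray(r1), number=1000)

a1 = np.asarray(r1)  # convert once up front, then time only the math
compute = timeit.timeit(lambda: np.linalg.norm(a1), number=1000)

print(f"range -> ndarray conversion: {convert / 1000 * 1e6:.1f} µs per call")
print(f"norm on pre-built ndarray:   {compute / 1000 * 1e6:.1f} µs per call")
```

On the numbers quoted above, the conversion should dominate: roughly 1 ms of the 1.05 ms total per call, versus under 20 µs for the math itself.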
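As an aside, %timeit only works in IPython/Jupyter. In a plain Python script the same measurement can be made with the built-in timeit module; here is a minimal sketch (the repeat count and the per-call reporting are my own illustrative choices):

```python
import math
import timeit

import numpy as np

def norm(l):
    # Pure-Python implementation from the question.
    s = 0
    for i in l:
        s += i**2
    return math.sqrt(s)

r1 = range(10**4)

# timeit.timeit accepts a zero-argument callable and returns the total
# elapsed seconds over `number` calls.
n = 100
py_total = timeit.timeit(lambda: norm(r1), number=n)
np_total = timeit.timeit(lambda: np.linalg.norm(r1), number=n)

print(f"pure Python:    {py_total / n * 1e3:.3f} ms per call")
print(f"np.linalg.norm: {np_total / n * 1e3:.3f} ms per call")
```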