This article explains why np.linalg.norm(x, 2) is slower than computing the norm directly, which may be a useful reference for readers facing the same question.

Problem description

Example code:

import numpy as np
import math
import time

x = np.ones((2000, 2000))

start = time.time()
print(np.linalg.norm(x, 2))
end = time.time()
print("time 1: " + str(end - start))

start = time.time()
print(math.sqrt(np.sum(x*x)))
end = time.time()
print("time 2: " + str(end - start))

The output (on my machine) is:

1999.999999999991
time 1: 3.216777801513672
2000.0
time 2: 0.015042781829833984

It shows that np.linalg.norm() takes more than 3 s, while the direct computation takes just 0.01 s. Why is np.linalg.norm() so slow?

Answer

np.linalg.norm(x, 2) computes the 2-norm, i.e. the largest singular value of the matrix.

math.sqrt(np.sum(x*x)) computes the Frobenius norm.

These operations are different, so it should be no surprise that they take different amounts of time: finding the largest singular value requires a full singular value decomposition, which is far more expensive than summing squared entries. (For this particular all-ones matrix the two norms happen to coincide at 2000, which is why the printed values agree; in general they differ.) The question "What is the difference between the Frobenius norm and the 2-norm of a matrix?" on math.SE may be of interest.
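The distinction is easiest to see on a small matrix where the two norms disagree. A minimal sketch (the diag(3, 4) matrix is my own illustrative example, using NumPy's built-in 'fro' mode alongside the manual computation):

```python
import numpy as np

# diag(3, 4): the singular values are 4 and 3
x = np.diag([3.0, 4.0])

# Spectral 2-norm = largest singular value = 4
print(np.linalg.norm(x, 2))       # 4.0

# Frobenius norm = sqrt(3^2 + 4^2) = 5
print(np.linalg.norm(x, 'fro'))   # 5.0
print(np.sqrt(np.sum(x * x)))     # 5.0
```

If the Frobenius norm is what you actually want, np.linalg.norm(x) with no ord argument (or ord='fro') computes it directly from the entries and is about as fast as the manual sum, whereas ord=2 triggers the SVD.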
