Question
The short version of my question: what would be the optimal way of calculating an eigenvector for a matrix A, if we already know the eigenvalue belonging to that eigenvector?
In more detail:
I have a large stochastic matrix A which, because it is stochastic, has a non-negative left eigenvector x (such that A^T x = x).
I'm looking for a quick and efficient way of numerically calculating this vector, preferably in MATLAB or numpy/scipy; since both of these wrap ARPACK/LAPACK, either would be fine.
I know that 1 is the largest eigenvalue of A, so I know that calling something like this Python code:
from scipy.sparse.linalg import eigs
# Left eigenvector of A = right eigenvector of A.T; k=1 requests only the dominant eigenpair.
vals, vecs = eigs(A.T, k=1)
will result in vals = 1 and vecs equalling the vector I need.
However, what bothers me here is that calculating eigenvalues is, in general, a harder operation than solving a linear system. In general, if a matrix M has eigenvalue l, then finding the corresponding eigenvector is a matter of solving the equation (M - l * I) * x = 0, which, in theory at least, is simpler than computing an eigenvalue: it only requires solving a linear system, or more specifically, finding the nullspace of a matrix.
However, I find that all methods of nullspace calculation in MATLAB rely on an svd computation, a process I cannot afford to perform on a matrix of my size. I also cannot call solvers on the linear equation, because they all find only one solution, and that solution is 0 (which, yes, is a solution, but not the one I need).
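For concreteness, the nullspace route described above might look like the following small dense sketch in Python; scipy.linalg.null_space is SVD-based, which is exactly the cost problem for a large matrix, so this is only meant to illustrate the idea on a toy 3x3 stochastic matrix:
import numpy as np
from scipy.linalg import null_space

A = np.array([[.6, .1, .3],
              [.2, .7, .1],
              [.5, .1, .4]])        # toy row-stochastic matrix (rows sum to 1)
# Left eigenvector for eigenvalue 1: solve (A.T - I) x = 0, i.e. find the nullspace.
ns = null_space(A.T - np.eye(3))    # SVD-based; fine for small dense matrices only
x = ns[:, 0] / ns[:, 0].sum()       # scale so the entries sum to 1
print(x)                            # ~ [0.4722, 0.2500, 0.2778]
print(A.T @ x)                      # check: should reproduce x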
Is there any way to avoid calling an eigs-like function, and solve my problem more quickly than by calculating the largest eigenvalue and the accompanying eigenvector?
Answer
Here's one approach using Matlab:
- Let x denote the (row) left eigenvector associated with eigenvalue 1. It satisfies the system of linear equations (or matrix equation) xA = x, or x(A - I) = 0.
- To avoid the all-zeros solution to that system of equations, remove the first equation and arbitrarily set the first entry of x to 1 in the remaining equations.
- Solve those remaining equations (with the first entry of x fixed to 1) to obtain the other entries of x.
Example using Matlab:
>> A = [.6 .1 .3
.2 .7 .1
.5 .1 .4]; %// example stochastic matrix
>> x = [1, -A(1, 2:end)/(A(2:end, 2:end)-eye(size(A,1)-1))]
x =
1.000000000000000 0.529411764705882 0.588235294117647
>> x*A %// check
ans =
1.000000000000000 0.529411764705882 0.588235294117647
Note that the code -A(1, 2:end)/(A(2:end, 2:end)-eye(size(A,1)-1)) is step 3.
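Since the question mentions numpy/scipy as an equally acceptable option, here is a minimal sketch of the same partitioned solve using scipy.sparse; the function name stationary_left_eigenvector and the assumption that A is stored as a sparse CSR matrix are illustrative choices, not part of the original answer:
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def stationary_left_eigenvector(A):
    # Row vector x with x @ A = x, pinned to x[0] = 1, then normalized to sum to 1.
    A = sp.csr_matrix(A)
    n = A.shape[0]
    A12 = A[0, 1:].toarray().ravel()                     # block A(1, 2:end) of A - I
    A22 = A[1:, 1:] - sp.identity(n - 1, format='csr')   # block A(2:end, 2:end) - I
    # Solve y @ A22 = -A12, i.e. A22.T @ y = -A12 (the Matlab expression -A12 / A22).
    y = spsolve(sp.csc_matrix(A22.T), -A12)
    x = np.concatenate(([1.0], y))
    return x / x.sum()

A = np.array([[.6, .1, .3],
              [.2, .7, .1],
              [.5, .1, .4]])       # same 3x3 example as above
x = stationary_left_eigenvector(A)
print(x)         # ~ [0.4722, 0.2500, 0.2778]
print(x @ A)     # check: should reproduce x
For a truly large sparse matrix, the single sparse solve replaces the eigs call; if a direct factorization is still too expensive, an iterative solver could be substituted for spsolve, but that goes beyond what the answer above covers.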
In your formulation you define x to be a (column) right eigenvector of A.' (such that A.'x = x). This is just x.' from the above code:
>> x = x.'
x =
1.000000000000000
0.529411764705882
0.588235294117647
>> A.'*x %// check
ans =
1.000000000000000
0.529411764705882
0.588235294117647
You can of course normalize the eigenvector to sum 1:
>> x = x/sum(x)
x =
0.472222222222222
0.250000000000000
0.277777777777778
>> A.'*x %// check
ans =
0.472222222222222
0.250000000000000
0.277777777777778
This follows the usual convention for such a vector. Equivalently, the normalized x is a right eigenvector of the transposed matrix A.'.