I am using svmtrain in MATLAB with the MLP kernel, as follows:
mlp=svmtrain(train_data,train_label,'Kernel_Function','mlp','showplot',true);
but I get this error:
??? Error using ==> svmtrain at 470
Unable to solve the optimization problem:
Exiting: the solution is unbounded and at infinity;
the constraints are not restrictive enough.
What is the reason? I have tried other kernels without any error.
I even tried the answer from svmtrain - unable to solve the optimization problem, as follows:
options = optimset('maxiter',1000);
svmtrain(train_data,train_label,'Kernel_Function','mlp','Method','QP',...
'quadprog_opts',options);
but I got the same error again.
My training set is a simple 45x2 dataset consisting of data points from 2 classes.
Best answer
The solution in here does not really explain anything. The problem is that the quadratic programming (QP) method does not converge on this optimization problem. The usual fix is to increase the number of iterations, but I tested on data of the same size with 1,000,000 iterations and it still fails to converge:
options = optimset('maxIter',1000000);
mlp = svmtrain(data,labels,'Kernel_Function','mlp','Method','QP',...
'quadprog_opts',options);
??? Error using ==> svmtrain at 576
Unable to solve the optimization problem:
Exiting: the solution is unbounded and at infinity;
the constraints are not restrictive enough.
My question is: is there any reason to use quadratic programming for the optimization rather than SMO? Doing exactly the same thing with SMO works fine:
mlp = svmtrain(data,labels,'Kernel_Function','mlp','Method','SMO');
mlp =
SupportVectors: [40x2 double]
Alpha: [40x1 double]
Bias: 0.0404
KernelFunction: @mlp_kernel
KernelFunctionArgs: {}
GroupNames: [45x1 double]
SupportVectorIndices: [40x1 double]
ScaleData: [1x1 struct]
FigureHandles: []
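A likely explanation (my assumption, not something the MATLAB documentation states for this error): the 'mlp' kernel, tanh(p1*u*v' + p2), is not positive semidefinite in general, so the kernel (Gram) matrix can have negative eigenvalues. That makes the QP dual non-concave and potentially unbounded, which matches the "solution is unbounded and at infinity" message, whereas SMO tolerates an indefinite kernel matrix more gracefully. You can check the kernel matrix on your own data with a sketch like this (assumes `train_data` is the N-by-2 matrix from the question and svmtrain's default mlp parameters [1 -1]):

```matlab
% Sketch: test whether the mlp (sigmoid) kernel matrix is indefinite.
% p1 and p2 are svmtrain's default mlp kernel parameters.
p1 = 1; p2 = -1;
K = tanh(p1 * (train_data * train_data') + p2);  % Gram matrix
e = eig((K + K') / 2);       % symmetrize before taking eigenvalues
min(e)                       % a clearly negative value => K is indefinite
```

If min(e) comes out negative, the QP formulation has no valid (PSD) kernel to work with, and switching to 'Method','SMO' or to a PSD kernel such as 'rbf' is the practical fix.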