I want to solve the Shortest Path problem in MATLAB using PSO. I encode the paths with priority encoding [1], and I use constriction and velocity clamping [2].
The problem I am facing is that the code is very slow compared to Dijkstra. First I run Dijkstra to get a benchmark time, then I run PSO to find the lowest cost it can reach within that time. The cost PSO returns is always much higher.
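To give an idea of what the priority encoding does: a particle's position assigns one real-valued priority per node, and the path is built by repeatedly stepping to the highest-priority node that is still reachable. A simplified toy sketch of that decoding (the 3-node costs, the priorities and the 1e9 "no edge" sentinel here are made up for illustration; the real decoding is the inline fitness function in pso.m below):

% toy decode of one particle's priorities into a path (illustration only)
cost = [0   3   1e9;
        1e9 0   2;
        5   1e9 0];            % 1e9 marks a missing edge
prio = [0.1 0.9 0.4];          % particle position = one priority per node
path = 1;                      % startNode = 1, endNode = 3
while path(end) ~= 3
    cand = prio;
    cand(path) = -Inf;                         % never revisit a node
    cand(cost(path(end), :) >= 1e8) = -Inf;    % keep only reachable nodes
    if all(cand == -Inf)
        error('dead end: no reachable unvisited node');
    end
    [~, nxt] = max(cand);                      % highest remaining priority
    path(end + 1) = nxt;                       %#ok<AGROW>
end
% path is now [1 2 3], total cost 3 + 2 = 5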
If I check how quickly each iteration completes, I find that for a path of more than 1000 nodes it takes several seconds per iteration on an Intel Core i3-2120 processor.
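(If you want to reproduce the per-iteration timing, the MATLAB profiler gives a per-line breakdown; a rough sketch, assuming allowedTime in pso.m has first been shortened to something small, e.g. 30 seconds:)

profile on
pso                  % run the swarm for the shortened time window
profile off
profile viewer       % per-line timings show where each iteration spends its time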
For the code below, you need to run data.m first to initialise the cost matrix, then run dijkstra.m to get the time benchmark, and after that set the allowedTime variable (in seconds) accordingly before running pso.m.
Parameters (all in pso.m unless noted otherwise):
dimensions: number of nodes (set in data.m)
allowedTime: time, in seconds, that the swarm is allowed to run
swarm_size: number of particles
startNode: index of the node the path starts from
endNode: index of the node the path ends at
dijkstra.m is called as dijkstra(costMatrix, <start_node_id>, <end_node_id>); a minimal example session is sketched below.
I apologise for the messy code and for not using functions, but I need all values to stay in the workspace after the code finishes (or is interrupted) so I can inspect them.
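Example session (the node indices 1 and 2 are only placeholders; use your own start/end nodes and the time Dijkstra reports on your machine):

data                          % creates dimensions and costMatrix
dijkstra(costMatrix, 1, 2)    % prints its best cost and how long it took
% set allowedTime in pso.m to roughly the time dijkstra reported, then:
pso                           % swarms until allowedTime seconds have elapsed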
data.m
% <<<<<<<<<<<<<<<<<< data definition >>>>>>>>>>>>>>>> %
clear;
clc;
fprintf('Generating data ...\n\n');
dimensions = 5000;
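% dense dimensions-by-dimensions matrix of random integer edge costs in 1..dimensions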
costMatrix = randi(dimensions, dimensions);
fprintf('DONE!\n\n');
pso.m
%% initialization
clc;
fprintf('Initialising swarm ...\n\n');
% parameters
% >>>>>>>>>>>>>>>>>>> edit <<<<<<<<<<<<<<<<<<< %
allowedTime = 15 * 60;
swarm_size = 50;
% SP algorithm specific.
startNode = 1;
endNode = 2;
% velocity equation params.
correction_factor_p = 2.05;
correction_factor_g = 2.05;
constrictionFactor = 0.72984;
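% (0.72984 is the Clerc-Kennedy constriction coefficient chi for c1 = c2 = 2.05, i.e. phi = 4.1 [2])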
% ^^^^^^^^^^^^^^^^^^^ end ^^^^^^^^^^^^^^^^^^^ %
gbest = 1;
oldFitness = 1000000001;
iterations = 0;
% pre-allocate arrays.
swarmPos = zeros(swarm_size, dimensions);
swarmVel = zeros(swarm_size, dimensions);
swarmBestPos = zeros(swarm_size, dimensions);
swarmBestPath = cell(swarm_size, 1);
swarmBestFit = zeros(1, swarm_size);
result = zeros(1, swarm_size);
upperBound = zeros(1, dimensions);
lowerBound = zeros(1, dimensions);
% set bounds.
for i = 1 : dimensions
upperBound(i) = 100;
lowerBound(i) = -100;
end
% ---- initiate swarm -----
for i = 1 : swarm_size
for j = 2 : dimensions
swarmPos(i,j) = lowerBound(j) + rand * (upperBound(j) - lowerBound(j));
swarmVel(i,j) = rand * (upperBound(j) - lowerBound(j)) / 2;
swarmBestPos(i,j) = swarmPos(i,j);
swarmBestPath{i}(j) = -1;
swarmBestFit(i) = 1000000000; % best fitness so far
end
end
% set starting node to avoid on accidental access.
for i = 1 : swarm_size
swarmPos(i,1) = -99999999;
swarmVel(i,1) = -99999999;
swarmBestPos(i,1) = -99999999;
swarmBestPath{i}(1) = startNode;
end
% ^^^^^^^^^^^^^^^^ END: initialisation ^^^^^^^^^^^^^^^^ %
% >>>>>>>>>>>>>>>>>>> START: swarming <<<<<<<<<<<<<<<<<<< %
clc;
fprintf('Swarming ...\n\n');
tic;
%% iterations
while true
% reset results to allow summing.
parfor i = 1 : swarm_size
result(i) = 0;
end
% <<<<<<<<<<<<<<<<< START: movement and fitness >>>>>>>>>>>>>>>>> %
for i = 1 : swarm_size
for j = 2 : dimensions
swarmPos(i,j) = swarmPos(i,j) + swarmVel(i,j); % update x position
if (swarmPos(i,j) > upperBound(j))
swarmPos(i,j) = swarmPos(i,j) - (swarmPos(i,j) - lowerBound(j)) / 2;
elseif (swarmPos(i,j) < lowerBound(j))
swarmPos(i,j) = swarmPos(i,j) + (lowerBound(j) - swarmPos(i,j)) / 2;
end
end
%tic;
% <<< inline fitness function >>> %
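% Decode this particle's priorities into a path: starting from startNode,
% repeatedly take the highest-priority node whose edge cost from the previous
% node is below the 1e8 'no edge' threshold, until endNode is reached. Nodes
% tried at any step (accepted or rejected) are excluded from later steps; if
% no valid node is left, the path is marked invalid.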
tempPath = [];
tempPath(1) = startNode;
invalidBuild = false;
tempPos = swarmPos(i,:);
for j = 2 : dimensions
for k = 2 : dimensions
[discard, maxPos] = max(tempPos);
cost = costMatrix(tempPath(j - 1), maxPos);
tempPos(maxPos) = -9999999 - k;
if (cost < 100000000)
tempPath(j) = maxPos;
result(i) = result(i) + cost;
break;
elseif (k == dimensions)
invalidBuild = true;
end
end
if (invalidBuild)
result(i) = 1000000000;
break;
elseif (tempPath(j) == endNode)
break;
end
end
% ^^^ END: fitness function ^^^ %
% if new position is better
if result(i) < swarmBestFit(i)
for d = 1 : dimensions
swarmBestPos(i,d) = swarmPos(i,d); % update best x,
end
swarmBestPath{i} = tempPath;
swarmBestFit(i) = result(i); % and best value
end
end
% ^^^^^^^^^ END: movement and fitness ^^^^^^^^^ %
% <<<<<<<<<<<<<<<<< update global best >>>>>>>>>>>>>>>>> %
for i = 1 : swarm_size
if swarmBestFit(i) < swarmBestFit(gbest)
gbest = i; % global best i.
took = toc; % time taken to reach this best.
end
end
% <<<<<<<<<<<<<<<<< update velocity >>>>>>>>>>>>>>>>> %
for i = 1 : swarm_size
for j = 2 : dimensions
swarmVel(i,j) = constrictionFactor * (swarmVel(i,j) ...
+ correction_factor_p * rand * (swarmBestPos(i,j) - swarmPos(i,j)) ...
+ correction_factor_g * rand * (swarmBestPos(gbest,j) - swarmPos(i,j)));
if (swarmVel(i,j) > (upperBound(j) - lowerBound(j)) / 2)
swarmVel(i,j) = (upperBound(j) - lowerBound(j)) / 2;
end
end
end
% <<<<<<<<<<<<<<<<< print global bests if changed >>>>>>>>>>>>>>>>> %
if ( oldFitness ~= swarmBestFit(gbest) )
oldFitness = swarmBestFit(gbest);
% update display
clc
fprintf('Best particle position:\n');
sizeTemp = size(swarmBestPath{gbest}, 2);
for i = 1 : sizeTemp
if (swarmBestPath{gbest}(i) ~= 0)
fprintf('%d\n', swarmBestPath{gbest}(i));
end
end
fprintf('\nBest fitness: %d\n\n', swarmBestFit(gbest));
end
iterations = iterations + 1;
% end on timeout
elapsedTime = toc;
if (elapsedTime > allowedTime)
break;
end
end
clc;
fprintf('>>>>>>>>>>>>>>> FINISHED <<<<<<<<<<<<<<<\n\n\n');
fprintf('Best path:\n');
sizeTemp = size(swarmBestPath{gbest}, 2);
for i = 1 : sizeTemp
if (swarmBestPath{gbest}(i) ~= 0)
fprintf('%d\n', swarmBestPath{gbest}(i));
end
end
fprintf('\nBest cost: %d\n\n', swarmBestFit(gbest));
fprintf('\nTook: %d iterations, and %.2f seconds.\n\n', iterations, took);
dijkstra.m
function dijkstra(matriz_costo, s, d)
% This is an implementation of Dijkstra's algorithm, which finds the
% minimal-cost path between two nodes. It is supposed to solve the problem
% on positive weighted instances.
% The inputs of the algorithm are:
% matriz_costo: the cost matrix of the network;
% s: source node index;
% d: destination node index;
% For information about this algorithm visit:
% http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
% This implementation is inspired by Xiaodong Wang's implementation of
% Dijkstra's algorithm, available at
% http://www.mathworks.com/matlabcentral/fileexchange
% file ID 5550
% Author: Jorge Ignacio Barrera Alviar. April/2007
n=size(matriz_costo,1);
S(1:n) = 0; % set of visited nodes
dist(1:n) = inf; % it stores the shortest distance between the source node and any other node;
prev(1:n) = n+1; % Previous node, informs about the best previous node known to reach each network node
dist(s) = 0;
iterations = 0;
tic;
while sum(S)~=n
candidate=[];
for i=1:n
if S(i)==0
candidate=[candidate dist(i)];
else
candidate=[candidate inf];
end
end
[u_index u]=min(candidate);
S(u)=1;
for i=1:n
if(dist(u)+matriz_costo(u,i))<dist(i)
dist(i)=dist(u)+matriz_costo(u,i);
prev(i)=u;
end
end
iterations = iterations + 1;
end
sp = [d];
while sp(1) ~= s
if prev(sp(1))<=n
sp=[prev(sp(1)) sp];
else
error('No path between the source and destination nodes.');
end
end;
spcost = dist(d);
took = toc;
fprintf('Best path:\n');
fprintf('%d\n', sp);
fprintf('\nBest cost: %d\n\n', spcost);
fprintf('\nTook: %d iterations, and %.2f seconds.\n\n', iterations, took);
[1] A Nondominated Sorting Genetic Algorithm for SP Routing Problem
[2] Constriction Factors and Parameters
Best answer
I am new to the field of AI, but I have tried my best to help you.
PSO is a metaheuristic. Metaheuristics are used for problems with incomplete or imperfect information, or with limited computation capacity; such problems cannot be handled by Dijkstra, because it requires the complete topology details. So which algorithm to use depends on the problem domain.
Since PSO is a stochastic process, its initial results will not be optimal; the value of the cost function decreases as the iterations go on, whereas Dijkstra finds the shortest path in a single run.
Hence, compared with Dijkstra, PSO will consume more time. The use of these algorithms is problem specific.
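For example, you can watch this behaviour in your own pso.m by logging the best cost once per iteration and plotting it afterwards (a minimal sketch; bestHistory is just a name I made up, and the three pieces go next to your existing loop code):

bestHistory = [];                            % before the while loop
bestHistory(end + 1) = swarmBestFit(gbest);  % at the end of each iteration
plot(bestHistory); xlabel('iteration'); ylabel('best cost so far');  % after the loop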
Hope this helps!