
Problem description


I was looking at the Wikipedia entry for Prim's algorithm and I noticed that its time complexity with an adjacency matrix is O(V^2) and its time complexity with a heap and adjacency list is O(E lg(V)) where E is the number of edges and V is the number of vertices in the graph.

Since Prim's algorithm is used in denser graphs, E can approach V^2, but when it does, the time complexity with a heap becomes O(V^2 lg(V)) which is greater than O(V^2). Obviously, a heap will improve performance over just searching the array, but the time complexity says otherwise.

How does the algorithm actually slow down with an improvement?

Solution

Even though the heap saves you from searching through the array, it slows down the "update" part of the algorithm: array updates are O(1), while heap updates are O(log(N)).
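To make the O(1)-update/O(V)-search trade concrete, here is a minimal sketch of the array-based variant of Prim's algorithm. The function name `prim_array`, the `INF` sentinel, and the adjacency-matrix input are my own choices for illustration, not from the original discussion:

```python
INF = float("inf")  # marks a missing edge in the adjacency matrix

def prim_array(adj):
    """adj[u][v] = weight of edge u-v, or INF if absent.
    Returns the total weight of a minimum spanning tree."""
    n = len(adj)
    key = [INF] * n        # cheapest known edge connecting each vertex to the tree
    in_tree = [False] * n
    key[0] = 0             # start growing the tree from vertex 0
    total = 0
    for _ in range(n):
        # O(V) linear scan to find the closest non-tree vertex
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        total += key[u]
        for v in range(n):
            # O(1) update: just overwrite the array slot
            if not in_tree[v] and adj[u][v] < key[v]:
                key[v] = adj[u][v]
    return total
```

The V scans of O(V) each give the O(V^2) total, regardless of how many edges the graph has.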

In essence, you trade speed in one part of the algorithm for speed in another.

No matter what, you'll have to search N times. However, in dense graphs, you'll need to update a lot (~V^2), and in sparse graphs, you don't.
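For comparison, here is a sketch of the heap-based variant, using Python's `heapq` with lazy deletion (stale entries are skipped on pop rather than updated in place). The names `prim_heap` and the edge-list input format are assumptions for this illustration:

```python
import heapq

def prim_heap(n, edges):
    """n vertices; edges given as (u, v, w) triples.
    Returns the total weight of a minimum spanning tree."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    in_tree = [False] * n
    heap = [(0, 0)]                    # (edge weight, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)     # O(log N) pop replaces the O(V) scan
        if in_tree[u]:
            continue                   # stale entry: lazy deletion
        in_tree[u] = True
        total += w
        for wv, v in adj[u]:
            if not in_tree[v]:
                # O(log N) push replaces the array's O(1) overwrite
                heapq.heappush(heap, (wv, v))
    return total
```

Every edge relaxation now costs a logarithmic heap operation, so a dense graph with ~V^2 edges pays ~V^2 log-cost updates, which is exactly where the O(E lg(V)) bound loses to O(V^2).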

Another example off the top of my head is searching for elements in an array. If you're only doing it once, linear search is the best - but if you do lots of queries, it's better to sort it and use binary search every time.
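That trade-off can be sketched in a few lines: linear search needs no preprocessing but costs O(N) per query, while paying one O(N log N) sort up front makes every later lookup O(log N). The helper names below are mine, not from the answer:

```python
import bisect

def linear_search(arr, x):
    # O(N) per query; no preprocessing needed
    for i, v in enumerate(arr):
        if v == x:
            return i
    return -1

def batch_contains(arr, queries):
    # One O(N log N) sort amortized over many O(log N) binary searches
    s = sorted(arr)
    results = []
    for q in queries:
        i = bisect.bisect_left(s, q)
        results.append(i < len(s) and s[i] == q)
    return results
```

With one query, the sort is wasted work; with many queries, it dominates the naive approach, just as the heap only pays off when the graph is sparse enough.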
