What is the big-O complexity of this algorithm?


This post works through the question "What is the big-O complexity of this algorithm?" and an answer to it, which may be of some reference value to anyone facing the same problem.

Problem description


I have written the function below; it is essentially a merge sort.

public static double[] nlgn(double[] nums) {

    if (nums.length > 1) {
        int elementsInA1 = nums.length / 2;
        int elementsInA2 = nums.length - elementsInA1;
        double[] arr1 = new double[elementsInA1];
        double[] arr2 = new double[elementsInA2];

        // Copy the first half of nums into arr1.
        for (int i = 0; i < elementsInA1; i++)
            arr1[i] = nums[i];

        // Copy the second half of nums into arr2.
        for (int i = elementsInA1; i < elementsInA1 + elementsInA2; i++)
            arr2[i - elementsInA1] = nums[i];

        // Recursively sort each half.
        nlgn(arr1);
        nlgn(arr2);

        int i = 0, j = 0, k = 0;

        // Merge the two sorted halves back into nums.
        while (arr1.length != j && arr2.length != k) {
            if (arr1[j] <= arr2[k]) {
                nums[i] = arr1[j];
                i++;
                j++;
            } else {
                nums[i] = arr2[k];
                i++;
                k++;
            }
        }

        // Copy over whatever remains of arr1 ...
        while (arr1.length != j) {
            nums[i] = arr1[j];
            i++;
            j++;
        }
        // ... or of arr2 (at most one of these two loops does any work).
        while (arr2.length != k) {
            nums[i] = arr2[k];
            i++;
            k++;
        }
    }

    // The array is sorted in place; it is also returned for convenience.
    return nums;
}

Since this is a merge sort, I know from my research that the big-O complexity of this algorithm is O(n lg n). However, when I run my timing tests, the results I get do not suggest that it runs in O(n lg n) time. It seems like it should be O(n lg n), though: up until we get to the end of the two for loops at the beginning, it runs in O(n) time, and once past that, it should be running in O(lg n) time as it sorts each element.

My question is, can somebody confirm that this piece of code runs in O(n lg n) time? If not, I would like to know where I am going wrong in my understanding.

Solution

No need to show it, because merge sort has already been proven to run in O(n lg(n)) time. But if you'd like to observe it, you'll need to experiment with increasingly large values for your inputs. You might want to update your post with your input values and timing results.
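For reference, the standard argument behind that claim: each call does a linear amount of copying and merging on top of two recursive calls on halves of the input. Assuming for simplicity that n is a power of two, the recurrence unrolls as

    T(n) = 2\,T(n/2) + cn, \qquad T(1) = c

    T(n) = 2^k\,T(n/2^k) + k\,cn
         = n\,T(1) + cn\log_2 n \qquad (\text{taking } k = \log_2 n)
         = O(n \log n)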

I think you may be misunderstanding what Big-Oh notation actually tries to tell you. Big-O gives you an approximation of the asymptotic upper bound of the algorithm as the inputs become large enough. (How "large" is "large enough" will vary from algorithm to algorithm and would need to be found by experimentation. The point is that this value does exist and we represent it more abstractly.)
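Concretely, the definition behind "asymptotic upper bound" is:

    f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 \ \text{such that}\ 0 \le f(n) \le c\,g(n) \ \text{for all}\ n \ge n_0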

In other words, Big-O tells you what the worst case performance of the algorithm could be as N becomes very large. Since this is the worst case scenario, it also means that it could perform better under some circumstances, but we don't generally care about those. (Look into Big-Omega and Big-Theta if you're interested.) For example, if you have a "small-enough" list, merge-sort can run faster than quick-sort, and this is often used as an optimization.
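The original answer does not spell out what that optimization looks like, but a minimal sketch of the usual form is below: below a hypothetical size threshold (16 here, an arbitrary value), the recursion falls back to a simple insertion sort instead of splitting further. Insertion sort is the common choice for the small-list fallback; the cutoff value and the helper names are assumptions made for this sketch only.

import java.util.Arrays;

// Hypothetical cutoff; a real value would be chosen by measurement.
static final int CUTOFF = 16;

static void hybridSort(double[] nums) {
    if (nums.length <= CUTOFF) {
        insertionSort(nums);      // O(n^2), but very cheap for tiny inputs
        return;
    }
    int mid = nums.length / 2;
    double[] left  = Arrays.copyOfRange(nums, 0, mid);
    double[] right = Arrays.copyOfRange(nums, mid, nums.length);
    hybridSort(left);
    hybridSort(right);

    // Merge the two sorted halves back into nums.
    int i = 0, j = 0, k = 0;
    while (j < left.length && k < right.length)
        nums[i++] = (left[j] <= right[k]) ? left[j++] : right[k++];
    while (j < left.length)  nums[i++] = left[j++];
    while (k < right.length) nums[i++] = right[k++];
}

static void insertionSort(double[] nums) {
    for (int i = 1; i < nums.length; i++) {
        double key = nums[i];
        int j = i - 1;
        while (j >= 0 && nums[j] > key) {
            nums[j + 1] = nums[j];   // shift larger elements one slot right
            j--;
        }
        nums[j + 1] = key;
    }
}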

It's also an approximation because the constants and other polynomial terms are not shown as part of the notation. For example, some hypothetical algorithm with a time complexity of 500n^2 + 15n + 9000 is going to be written as O(n^2).
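To make that concrete, one valid choice of constants for that hypothetical running time, using the definition above, is c = 9515 and n_0 = 1:

    500n^2 + 15n + 9000 \le 500n^2 + 15n^2 + 9000n^2 = 9515\,n^2 \qquad \text{for all } n \ge 1

so the whole polynomial is bounded by a constant multiple of n^2, and only the n^2 term survives in the notation.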

Some reasons for dropping the lower terms include:

  • Relative Size: As n tends towards positive infinity, the larger n^2 term dominates; the lower terms contribute less and less to the overall cost compared to the largest, dominating term, like adding a few drops or buckets of water into a lake;
  • Convenience: O(n^2) is easier to read and understand than a longer, more complicated polynomial whose extra terms add no real benefit
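As the answer suggests, one way to observe the growth empirically is to time the sort on increasingly large random inputs and check that time / (n lg n) stays roughly flat as n doubles. The sketch below is not from the original post: the input sizes, the fixed random seed, the single warm-up call, and the use of System.nanoTime are all choices made for illustration, and it assumes the corrected nlgn method from the question is pasted into the same class.

import java.util.Random;

public class MergeSortTiming {

    public static void main(String[] args) {
        Random rng = new Random(42);

        // One warm-up call so JIT compilation doesn't distort the first measurement.
        nlgn(randomArray(rng, 1 << 16));

        // Double n each iteration; for O(n lg n) growth, time/(n lg n) should stay roughly constant.
        for (int n = 1 << 13; n <= 1 << 22; n <<= 1) {
            double[] data = randomArray(rng, n);
            long start = System.nanoTime();
            nlgn(data);
            long elapsed = System.nanoTime() - start;
            double perNLgN = elapsed / (n * (Math.log(n) / Math.log(2)));
            System.out.printf("n = %8d   time = %12d ns   time/(n lg n) = %.2f%n",
                    n, elapsed, perNLgN);
        }
    }

    private static double[] randomArray(Random rng, int n) {
        double[] a = new double[n];
        for (int i = 0; i < n; i++) {
            a[i] = rng.nextDouble();
        }
        return a;
    }

    // nlgn(double[]) is the method from the question above, assumed to be defined in this class.
}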
