
Problem Description

I thought about the following question about computer's architecture. Suppose I do in Python

from bisect import bisect
index = bisect(x, a)      # O(log n)  (also, shouldn't it be a standard list function?)
x.insert(index, a)        # O(1) + memcpy()

which takes O(log n), plus, if I understand it correctly, a memory-copy operation for x[index:]. Now, I recently read that the bottleneck is usually in the communication between the processor and memory, so the memory copy could be done by the RAM quite fast. Is that how it works?
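As an aside on the parenthetical in the code comment above: the standard library does bundle the two steps into a single call, bisect.insort. It performs the same binary search followed by the same list insertion, so the overall complexity is unchanged:

```python
from bisect import insort

x = [1, 3, 5, 7]
insort(x, 4)   # binary search (O(log n)) + list insertion (O(n) shift) in one call
print(x)       # [1, 3, 4, 5, 7]
```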

Recommended Answer

Python is a language. Multiple implementations exist, and they may have different implementations for lists. So, without looking at the code of an actual implementation, you cannot know for sure how lists are implemented and how they behave under certain circumstances.

My bet would be that the references to the objects in a list are stored in contiguous memory (certainly not as a linked list...). If that is indeed so, then insertion using x.insert will cause all elements behind the inserted element to be moved. This may be done efficiently by the hardware, but the complexity would still be O(n).
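That O(n) shift can be observed directly by timing. In CPython (one common implementation, where the reference array is indeed contiguous), inserting at the front of a long list forces every element to move, while appending does not. A rough sketch; the absolute times are machine-dependent and illustrative only:

```python
# Timing sketch: front insertion must shift every element on each call,
# while append is amortized O(1). Expect the front-insert time to dominate.
from timeit import timeit

n = 100_000
t_front = timeit("x.insert(0, -1)", setup=f"x = list(range({n}))", number=1000)
t_back = timeit("x.append(-1)", setup=f"x = list(range({n}))", number=1000)

print(f"insert at front: {t_front:.4f}s for 1000 ops")
print(f"append at back:  {t_back:.4f}s for 1000 ops")
```

Even though the shift is done by fast bulk memory moves in hardware, it still grows linearly with the list length.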

For small lists the bisect operation may take more time than x.insert, even though the former is O(log n) while the latter is O(n). For long lists, however, I'd hazard a guess that x.insert is the bottleneck. In such cases you must consider using a different data structure.
