I created a trie to store all the words of an English dictionary (not their definitions). Its purpose is to let me retrieve all words made up only of letters within a given range.
The text file containing all the words is about 2.7 MB, but after building the tree and writing it out with pickle, the resulting file is larger than 33 MB.
Where does this size difference come from? I thought I would save space by not storing multiple copies of the same letter for different words; for example, for the words "app" and "apple" I only need 5 nodes: a -> p -> p -> l -> e.
My code is as follows:

import pickle

class WordTrieNode:
    def __init__(self, nodeLetter='', parentNode=None, isWordEnding=False):
        self.nodeLetter = nodeLetter
        self.parentNode = parentNode
        self.isWordEnding = isWordEnding
        self.children = [None]*26 # One entry for each lowercase letter of the alphabet

    def getWord(self):
        if(self.parentNode is None):
            return ''

        return self.parentNode.getWord() + self.nodeLetter

    def isEndOfWord(self):
        return self.isWordEnding

    def markEndOfWord(self):
        self.isWordEnding = True

    def insertWord(self, word):
        if(len(word) == 0):
            return

        char = word[0]
        idx = ord(char) - ord('a')
        if(len(word) == 1):
            if(self.children[idx] is None):
                node = WordTrieNode(char, self, True)
                self.children[idx] = node
            else:
                self.children[idx].markEndOfWord()
        else:
            if(self.children[idx] is None):
                node = WordTrieNode(char, self, False)
                self.children[idx] = node
                self.children[idx].insertWord(word[1:])
            else:
                self.children[idx].insertWord(word[1:])

    def getAllWords(self):
        for node in self.children:
            if node is not None:
                if node.isEndOfWord():
                    print(node.getWord())
                node.getAllWords()

    def getAllWordsInRange(self, low='a', high='z'):
        i = ord(low) - ord('a')
        j = ord(high) - ord('a')
        for node in self.children[i:j+1]:
            if node is not None:
                if node.isEndOfWord():
                    print(node.getWord())
                node.getAllWordsInRange(low, high)



def main():

    tree = WordTrieNode("", None, False)

    with open('en.txt') as file:
        for line in file:
            tree.insertWord(line.strip('\n'))
    with open("treeout", 'wb') as output:
        pickle.dump(tree, output, pickle.HIGHEST_PROTOCOL)

    #tree.getAllWordsInRange('a', 'l')
    #tree.getAllWords()
if __name__ == "__main__":
    main()

Best answer

The nodes of a trie are large because they store links for every possible next letter. As you can see in your code, each node holds a list of 26 child links.
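
To see that cost concretely, here is a rough sketch that counts the nodes and divides the pickle size by that count. It assumes the WordTrieNode class and the en.txt word list from your question are available in the same script; those names come from the question, not from any library.

import os
import pickle

def count_nodes(node):
    # This node plus every non-empty child slot, recursively.
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

tree = WordTrieNode("", None, False)
with open('en.txt') as f:               # word list from the question
    for line in f:
        tree.insertWord(line.strip())

with open('treeout', 'wb') as out:
    pickle.dump(tree, out, pickle.HIGHEST_PROTOCOL)

n = count_nodes(tree)
size = os.path.getsize('treeout')
print(f'{n} nodes, {size} bytes pickled, ~{size / n:.0f} bytes per node')

Every one of those nodes drags along its 26-slot children list plus a parentNode reference, and all of that has to be encoded in the pickle.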
More compact schemes are possible (https://en.wikipedia.org/wiki/Trie#Compressing_tries), at the cost of more complexity and slower access.
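
Short of a full compressed trie, a cheaper layout already helps with the pickle size. The sketch below is one possible variant, not the scheme from that link: children live in a dict so absent letters cost nothing, the parentNode back-reference is dropped (words are rebuilt while descending instead), and __slots__ removes the per-instance __dict__. The file name en.txt comes from the question; the class name CompactNode is made up for this example.

import pickle

class CompactNode:
    __slots__ = ('children', 'isWordEnding')

    def __init__(self):
        self.children = {}   # letter -> CompactNode, only for letters that actually occur
        self.isWordEnding = False

    def insertWord(self, word):
        node = self
        for ch in word:
            child = node.children.get(ch)
            if child is None:
                child = node.children[ch] = CompactNode()
            node = child
        node.isWordEnding = True

    def getAllWordsInRange(self, low='a', high='z', prefix=''):
        # Build each word from the path while descending, so no parent link is needed.
        if self.isWordEnding:
            print(prefix)
        for ch in sorted(self.children):
            if low <= ch <= high:
                self.children[ch].getAllWordsInRange(low, high, prefix + ch)

root = CompactNode()
with open('en.txt') as f:
    for line in f:
        root.insertWord(line.strip())

with open('treeout_compact', 'wb') as out:
    pickle.dump(root, out, pickle.HIGHEST_PROTOCOL)

This layout typically pickles to a noticeably smaller file, though the exact saving depends on the dictionary. Another option is to skip pickling the tree altogether and rebuild it from en.txt at startup, since the word list itself is only about 2.7 MB.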

Regarding "python - Where does the size difference come from?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/37453275/
