This article covers the question "Please help make it faster" and how to approach it. It should be of some reference value to anyone solving the same problem; if that is you, read on.

Problem Description


I wrote this function which does the following: after reading lines from a file, it splits them and counts word occurrences with a hash table. For some reason this is quite slow; can someone help me make it faster?

import string   # needed for string.lower() in this Python 2-style code

f = open(filename)     # 'filename' is assumed to be defined elsewhere
lines = f.readlines()

def create_words(lines):
    cnt = 0
    # characters treated as trailing delimiters to strip from a word
    spl_set = '[",;<>{}_&?!():-[\.=+*\t\n\r]+'
    for content in lines:
        words = content.split()
        countDict = {}
        wordlist = []
        for w in words:
            w = string.lower(w)
            if w[-1] in spl_set:          # drop a single trailing delimiter
                w = w[:-1]
            if w != '':
                if countDict.has_key(w):  # Python 2 dict API
                    countDict[w] = countDict[w] + 1
                else:
                    countDict[w] = 1
        wordlist = countDict.keys()
        wordlist.sort()
        cnt += 1
        if countDict != {}:
            for word in wordlist:
                print(word + ' ' + str(countDict[word]) + '\n')

Recommended Answer
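
Most of the cost in the code above is per-word work done in pure Python: string.lower() is called for every word, the dictionary is probed twice per word (has_key followed by the lookup), and the keys are re-sorted and printed for every input line. As a minimal sketch of the kind of rewrite usually suggested for this problem, assuming the goal is a word-frequency count over the whole file (the names count_words and the 'input.txt' path below are illustrative, not from the original post): lower-case the text once, split on the delimiter characters with a precompiled regular expression, and let collections.Counter do the counting in one pass.

import re
from collections import Counter

# One regex covering the delimiter characters the original spl_set listed.
SPLIT_RE = re.compile(r'[",;<>{}_&?!():\-\[\].=+*\s]+')

def count_words(path):
    # Read once, normalise case once, split once, count in a single pass.
    with open(path) as f:
        text = f.read().lower()
    words = [w for w in SPLIT_RE.split(text) if w]
    return Counter(words)

# Example usage ('input.txt' is a placeholder):
# for word, count in sorted(count_words('input.txt').items()):
#     print(word, count)

Counter and the regex split push the per-character and per-word work down into C, and the sorting and printing happen only once at the end instead of once per line.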

This concludes the article on "Please help make it faster". We hope the recommended answer is helpful, and thank you for your support!
