Problem description
I am trying to run a dataset with a million rows through a function.
- I read the data from a CSV into a dataframe.
- I drop the unneeded columns using a drop list.
- I pass it through an NLTK function in a for loop.
Code:
import string
from nltk.corpus import stopwords

def nlkt(val):
    val = repr(val)
    # Drop stopwords, then punctuation characters, then digits
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string
Now I am calling the above function in a for loop to run through the million records. Even though I am on a heavyweight server with a 24-core CPU and 88 GB of RAM, I see that the loop takes too much time and does not use the computational power that is there.
I call the function like this:
import pandas as pd

data = pd.read_excel(scrPath + "UserData_Full.xlsx", encoding='utf-8')
droplist = ['Submitter', 'Environment']
data.drop(droplist, axis=1, inplace=True)

# Merging the columns Company and Detailed_Description
data['Anylize_Text'] = data['Company'].astype(str) + ' ' + data['Detailed_Description'].astype(str)

finallist = []
for eachlist in data['Anylize_Text']:
    z = nlkt(eachlist)
    finallist.append(z)
The above code works perfectly fine, just too slowly, once we have a few million records. This is only a sample in Excel; the actual data will sit in a DB and run to a few hundred million rows. Is there any way I can speed up the operation and pass the data through the function faster, using more of the computational power?
Recommended answer
Your original nlkt() loops through each row three times: once over the words to drop stopwords, and twice over the characters to drop punctuation and digits.
def nlkt(val):
    val = repr(val)
    clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
    nopunc = [char for char in str(clean_txt) if char not in string.punctuation]
    nonum = [char for char in nopunc if not char.isdigit()]
    words_string = ''.join(nonum)
    return words_string
Also, each time you call nlkt(), you re-initialize these objects again and again:
- stopwords.words('english')
- string.punctuation
These should be globals:
stoplist = stopwords.words('english') + list(string.punctuation)
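As a quick illustration of why this matters (a minimal sketch, not part of the original answer), you can time the cost of rebuilding the stopword list on every membership check against looking words up in the precomputed set:

import string
import timeit
from nltk.corpus import stopwords

stoplist = set(stopwords.words('english') + list(string.punctuation))

# Rebuilding the list on every check, as the original nlkt() does:
slow = timeit.timeit("'word' in stopwords.words('english')",
                     setup="from nltk.corpus import stopwords",
                     number=1000)

# Checking membership against the precomputed global set:
fast = timeit.timeit("'word' in stoplist",
                     globals={'stoplist': stoplist},
                     number=1000)

print(slow, fast)  # expect the set lookup to be orders of magnitude faster

Besides avoiding the repeated calls, a set gives O(1) membership tests, while the list returned by stopwords.words('english') is scanned linearly on every in check.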
Going through the code line by line:
val=repr(val)
I'm not sure why you need to do this, but you can easily cast a column to the str type. This should be done outside of your preprocessing function.
Hopefully this is self-explanatory:
>>> import pandas as pd
>>> df = pd.DataFrame([[0, 1, 2], [2, 'xyz', 4], [5, 'abc', 'def']])
>>> df
   0    1    2
0  0    1    2
1  2  xyz    4
2  5  abc  def
>>> df[1]
0      1
1    xyz
2    abc
Name: 1, dtype: object
>>> df[1].astype(str)
0      1
1    xyz
2    abc
Name: 1, dtype: object
>>> list(df[1])
[1, 'xyz', 'abc']
>>> list(df[1].astype(str))
['1', 'xyz', 'abc']
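Applied to the dataframe from the question, that cast would be a one-time operation before any row-level processing, so the per-row val = repr(val) becomes unnecessary (a sketch using the Anylize_Text column from the question):

data['Anylize_Text'] = data['Anylize_Text'].astype(str)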
Now to the next line:
clean_txt = [word for word in val.split() if word.lower() not in stopwords.words('english')]
Using str.split() is awkward; you should use a proper tokenizer. Otherwise, punctuation may stay stuck to the preceding word, e.g.:
>>> from nltk.corpus import stopwords
>>> from nltk import word_tokenize
>>> import string
>>> stoplist = stopwords.words('english') + list(string.punctuation)
>>> stoplist = set(stoplist)
>>> text = 'This is foo, bar and doh.'
>>> [word for word in text.split() if word.lower() not in stoplist]
['foo,', 'bar', 'doh.']
>>> [word for word in word_tokenize(text) if word.lower() not in stoplist]
['foo', 'bar', 'doh']
You should also check .isdigit() at the same time:
>>> text = 'This is foo, bar, 234, 567 and doh.'
>>> [word for word in word_tokenize(text) if word.lower() not in stoplist and not word.isdigit()]
['foo', 'bar', 'doh']
Putting it all together, your nlkt() should look like this:
def preprocess(text):
    return [word for word in word_tokenize(text) if word.lower() not in stoplist and not word.isdigit()]
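One caveat: unlike the original nlkt(), preprocess() returns a list of tokens rather than a single string. If the rest of your pipeline expects strings, a thin wrapper like this (a hypothetical helper, not in the original answer) joins the tokens back together:

def preprocess_text(text):
    # Same filtering as preprocess(), joined back into one string
    return ' '.join(preprocess(text))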
And you can use DataFrame.apply:
data['Anylize_Text'].apply(preprocess)
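DataFrame.apply still runs on a single core. Since the question specifically asks about using the 24-core machine, here is a minimal sketch, assuming the cleaned-up preprocess() and the module-level stoplist from above, that splits the column into chunks and maps them across worker processes with multiprocessing (this part is an assumption, not covered by the original answer):

import multiprocessing as mp
import numpy as np
import pandas as pd

def preprocess_chunk(series):
    # Runs in a worker process; preprocess() and stoplist must be
    # defined at module level so the workers can see them.
    return series.apply(preprocess)

if __name__ == '__main__':
    n_workers = mp.cpu_count()  # e.g. 24 on the server described above
    chunks = np.array_split(data['Anylize_Text'], n_workers)
    with mp.Pool(n_workers) as pool:
        results = pool.map(preprocess_chunk, chunks)
    data['Tokens'] = pd.concat(results)

On Linux the workers inherit the module globals via fork; on Windows, put the function definitions in an importable module. For the hundreds of millions of DB rows mentioned in the question, the same pattern applies per fetched batch rather than per file.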