Help me figure out what's wrong with my Python code.
Here's the code:
import nltk
import re
import pickle
raw = open('tom_sawyer_shrt.txt').read()
### this is how the basic Punkt sentence tokenizer works
#sent_tokenizer=nltk.data.load('tokenizers/punkt/english.pickle')
#sents = sent_tokenizer.tokenize(raw)
### train & tokenize text using text
sent_trainer = nltk.tokenize.punkt.PunktSentenceTokenizer().train(raw)
sent_tokenizer = nltk.tokenize.punkt.PunktSentenceTokenizer(sent_trainer)
# break into sentences
sents = sent_tokenizer.tokenize(raw)
# get sentence start/stop indexes
sentspan = sent_tokenizer.span_tokenize(raw)
### Remove \n in the middle of sentences, due to fixed-width formatting
for i in range(0,len(sents)-1):
    sents[i] = re.sub('(?<!\n)\n(?!\n)',' ',raw[sentspan[i][0]:sentspan[i+1][0]])
for i in range(1,len(sents)):
    if (sents[i][0:3] == '"\n\n'):
        sents[i-1] = sents[i-1]+'"\n\n'
        sents[i] = sents[i][3:]
### Loop thru each sentence, fix to 140char
i=0
tweet=[]
while (i<len(sents)):
    if (len(sents[i]) > 140):
        ntwt = int(len(sents[i])/140) + 1
        words = sents[i].split(' ')
        nwords = len(words)
        for k in range(0,ntwt):
            tweet = tweet + [
                re.sub('\A\s|\s\Z', '', ' '.join(
                    words[int(k*nwords/float(ntwt)):
                          int((k+1)*nwords/float(ntwt))]
                ))]
        i=i+1
    else:
        if (i<len(sents)-1):
            if (len(sents[i])+len(sents[i+1]) <140):
                nextra = 1
                while (len(''.join(sents[i:i+nextra+1]))<140):
                    nextra=nextra+1
                tweet = tweet+[
                    re.sub('\A\s|\s\Z', '',''.join(sents[i:i+nextra]))
                ]
                i = i+nextra
            else:
                tweet = tweet+[re.sub('\A\s|\s\Z', '',sents[i])]
                i=i+1
        else:
            tweet = tweet+[re.sub('\A\s|\s\Z', '',sents[i])]
            i=i+1
### A last pass to clean up leading/trailing newlines/spaces.
for i in range(0,len(tweet)):
    tweet[i] = re.sub('\A\s|\s\Z','',tweet[i])
for i in range(0,len(tweet)):
    tweet[i] = re.sub('\A"\n\n','',tweet[i])
### Save tweets to pickle file for easy reading later
output = open('tweet_list.pkl','wb')
pickle.dump(tweet,output,-1)
output.close()
listout = open('tweet_lis.txt','w')
for i in range(0,len(tweet)):
    listout.write(tweet[i])
    listout.write('\n-----------------\n')
listout.close()
Here's the error message:
Traceback (most recent call last):
  File "twain_prep.py", line 13, in <module>
    sent_trainer = nltk.tokenize.punkt.PunktSentenceTokenizer().train(raw)
  File "/home/user/.local/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1227, in train
    token_cls=self._Token).get_params()
  File "/home/user/.local/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 649, in __init__
    self.train(train_text, verbose, finalize=True)
  File "/home/user/.local/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 713, in train
    self._train_tokens(self._tokenize_words(text), verbose)
  File "/home/user/.local/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 729, in _train_tokens
    tokens = list(tokens)
  File "/home/user/.local/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 542, in _tokenize_words
    for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 6: ordinal not in range(128)
Best Answer
A UnicodeDecodeError happens when your string contains some Unicode in it. Basically, Python 2 strings handle only ASCII values, so the text you are sending to the tokenizer must contain some characters that are not in the ASCII range (here, byte 0xe2, the first byte of a UTF-8 em dash or curly quote).
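The failure can be reproduced without NLTK. A minimal sketch (the byte sequence below is a hypothetical sample, the UTF-8 encoding of an em dash, which is common in Project Gutenberg texts like Tom Sawyer):

```python
# b'\xe2\x80\x94' is a UTF-8 em dash; decoding it as ASCII fails
# with the same error class seen in the traceback above.
try:
    b'Tom said \xe2\x80\x94 hello'.decode('ascii')
except UnicodeDecodeError as err:
    print(err)  # 'ascii' codec can't decode byte 0xe2 in position 9: ...
```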
So how do you fix it?

You can convert the text to ASCII characters, ignoring the Unicode ones:
raw = raw.encode('ascii', 'ignore')
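On Python 3, where read() already returns str, the equivalent lossy cleanup is an encode/decode round trip. A sketch with a hypothetical sample line:

```python
text = 'Tom said \u2014 "Hello."'  # sample line with a non-ASCII em dash
# encode with errors='ignore' drops the em dash, decode restores a str
clean = text.encode('ascii', 'ignore').decode('ascii')
print(clean)  # Tom said  "Hello."
```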
Alternatively, you can read up on handling Unicode errors in Python; a similar question on Stack Overflow covers the same error in more depth: https://stackoverflow.com/questions/43555593/
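A less lossy alternative (not part of the original answer, just a sketch) is to decode the file as UTF-8 when reading it, so the tokenizer receives proper unicode text instead of raw bytes; io.open takes an explicit encoding on both Python 2 and 3. A temporary file stands in for tom_sawyer_shrt.txt here:

```python
import io
import os
import tempfile

# Stand-in for tom_sawyer_shrt.txt: a small UTF-8 file with an em dash.
fd, path = tempfile.mkstemp(suffix='.txt')
os.close(fd)
with io.open(path, 'w', encoding='utf-8') as f:
    f.write(u'Tom said \u2014 "Hello."')

# Reading with an explicit encoding yields unicode text, so no
# UnicodeDecodeError is raised later inside the tokenizer.
with io.open(path, encoding='utf-8') as f:
    raw = f.read()
os.remove(path)
print(raw)
```

The em dash survives intact instead of being stripped, which matters if the tweets are meant to preserve Twain's punctuation.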