Problem Description
I have a text with many sentences. How can I use nltk.ngrams to process it?
Here is my code:
import nltk
from nltk.util import ngrams

sequence = nltk.tokenize.word_tokenize(raw)
bigram = ngrams(sequence, 2)
freq_dist = nltk.FreqDist(bigram)
prob_dist = nltk.MLEProbDist(freq_dist)
number_of_bigrams = freq_dist.N()
However, the above code assumes that all sentences form one sequence. But the sentences are separate, and I suppose the last word of one sentence is unrelated to the first word of the next sentence. How can I create bigrams for such a text? I also need prob_dist and number_of_bigrams, which are based on the freq_dist.
There are similar questions, like What are ngram counts and how to implement using nltk?, but they are mostly about a single sequence of words.
Recommended Answer
You can use the new nltk.lm module. Here's an example; first, get some data and tokenize it:
import os
import io
import requests
from nltk import word_tokenize, sent_tokenize

# Text version of https://kilgarriff.co.uk/Publications/2005-K-lineer.pdf
if os.path.isfile('language-never-random.txt'):
    with io.open('language-never-random.txt', encoding='utf8') as fin:
        text = fin.read()
else:
    url = "https://gist.githubusercontent.com/alvations/53b01e4076573fea47c6057120bb017a/raw/b01ff96a5f76848450e648f35da6497ca9454e4a/language-never-random.txt"
    text = requests.get(url).content.decode('utf8')
    with io.open('language-never-random.txt', 'w', encoding='utf8') as fout:
        fout.write(text)

# Tokenize the text: one list of lowercase tokens per sentence.
tokenized_text = [list(map(str.lower, word_tokenize(sent)))
                  for sent in sent_tokenize(text)]
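Each element of tokenized_text is one sentence as its own list of tokens; keeping sentences separate at this stage is what prevents the later n-grams from crossing sentence boundaries. A quick sanity check (the output depends on the downloaded text):

print(len(tokenized_text), 'sentences')
print(tokenized_text[0][:8])  # the first few tokens of the first sentence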
Then do the language modelling:
# Preprocess the tokenized text for 3-gram language modelling.
from nltk.lm.preprocessing import padded_everygram_pipeline
from nltk.lm import MLE

n = 3
train_data, padded_sents = padded_everygram_pipeline(n, tokenized_text)

model = MLE(n)  # Let's train a 3-gram maximum likelihood estimation model.
model.fit(train_data, padded_sents)
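Note that padded_everygram_pipeline pads every sentence with boundary symbols before building the n-grams, so an n-gram can never span two sentences; this is exactly what the question asks for. A small illustration of the padding step it performs on each sentence:

from nltk.lm.preprocessing import pad_both_ends

# Each sentence is padded independently with n-1 boundary symbols per side.
list(pad_both_ends(['language', 'is', 'never', 'random'], n=3))
# -> ['<s>', '<s>', 'language', 'is', 'never', 'random', '</s>', '</s>']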
To get the counts:
model.counts['language'] # i.e. Count('language')
model.counts[['language']]['is'] # i.e. Count('is'|'language')
model.counts[['language', 'is']]['never'] # i.e. Count('never'|'language is')
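If you want something like the number_of_bigrams from the question, one way to read it off the fitted model is to index the counter by n-gram order; note that these counts include the padding symbols added by the pipeline:

model.counts[2].N()  # total number of bigrams seen during training (padding included)
len(model.vocab)     # vocabulary size, including the special <UNK> symbol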
To get the probabilities:
model.score('is', 'language'.split()) # P('is'|'language')
model.score('never', 'language is'.split()) # P('never'|'language is')
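The model can also return log-probabilities and generate text with the same API, for example:

model.logscore('never', 'language is'.split())  # log2 of P('never'|'language is')
model.generate(10, random_seed=42)              # sample 10 words from the model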
There are some kinks on the Kaggle platform when loading the notebook, but this notebook should give a good overview of the nltk.lm module: https://www.kaggle.com/alvations/n-gram-language-model-with-nltk
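If you would rather stay with the FreqDist/MLEProbDist approach from the question, a minimal sketch (assuming raw holds your text) is to tokenize sentence by sentence and update a single FreqDist, so that no bigram ever spans a sentence boundary:

import nltk
from nltk import sent_tokenize, word_tokenize
from nltk.util import ngrams

raw = "This is one sentence. Here is another one."  # placeholder text

freq_dist = nltk.FreqDist()
for sent in sent_tokenize(raw):
    # Count bigrams within each sentence only.
    freq_dist.update(ngrams(word_tokenize(sent), 2))

prob_dist = nltk.MLEProbDist(freq_dist)
number_of_bigrams = freq_dist.N()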