This post works through a practical example of GSDMM in Python; hopefully it is a useful reference for anyone facing the same problem.

Problem description


I want to use GSDMM to assign topics to some tweets in my data set. The only examples I found (1 and 2) are not detailed enough. I was wondering if you know of a source (or care enough to make a small example) that shows how GSDMM is implemented using python.

Recommended answer


I finally compiled my code for GSDMM and will put it here from scratch for others' use. Hope this helps. I have tried to comment on important parts:

# Imports used throughout (MovieGroupProcess comes from the gsdmm package,
# i.e. the rwalk/gsdmm implementation)
import random
import numpy as np
import gensim
import spacy
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords
from gsdmm import MovieGroupProcess

# Turning sentences into lists of words (`data` is the list of raw tweets)
data_words = []
for doc in data:
    data_words.append(doc.split())

# Building bi-grams from the tokenized documents
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100)
bigram_mod = gensim.models.phrases.Phraser(bigram)
print('done!')

# Removing stop words
stop_words = stopwords.words('english')
stop_words.extend(['from', 'rt'])

def remove_stopwords(texts):
    return [[word for word in simple_preprocess(str(doc)) if word not in stop_words]
            for doc in texts]

data_words_nostops = remove_stopwords(data_words)

# Form bigrams
data_words_bigrams = [bigram_mod[doc] for doc in data_words_nostops]

# Lemmatization, keeping only content-word POS tags
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
data_lemmatized = []
for sent in data_words_bigrams:
    doc = nlp(" ".join(sent))
    data_lemmatized.append([token.lemma_ for token in doc
                            if token.pos_ in ['NOUN', 'ADJ', 'VERB', 'ADV']])

docs = data_lemmatized
vocab = set(x for doc in docs for x in doc)
n_terms = len(vocab)
n_docs = len(docs)

# Train a new model
random.seed(1000)
# Init of the Gibbs Sampling Dirichlet Mixture Model algorithm
mgp = MovieGroupProcess(K=10, alpha=0.1, beta=0.1, n_iters=30)

# Fit the model on the data; y holds the cluster label assigned to each document
y = mgp.fit(docs, n_terms)

def top_words(cluster_word_distribution, top_cluster, values):
    for cluster in top_cluster:
        sort_dicts = sorted(cluster_word_distribution[cluster].items(),
                            key=lambda k: k[1], reverse=True)[:values]
        print('Cluster %s : %s' % (cluster, sort_dicts))
        print('-' * 20)

doc_count = np.array(mgp.cluster_doc_count)
print('Number of documents per topic :', doc_count)
print('*' * 20)

# Topics sorted by the number of documents they are allocated to
top_index = doc_count.argsort()[-10:][::-1]
print('Most important clusters (by number of docs inside):', top_index)
print('*' * 20)

# Show the top 10 words by frequency for each cluster
top_words(mgp.cluster_word_distribution, top_index, 10)


Hope this helps!
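Since the question also asks how GSDMM itself is implemented, here is a minimal from-scratch sketch of the collapsed Gibbs sampler at the heart of GSDMM (the Movie Group Process of Yin & Wang, 2014). This is my own simplified illustration, not the internals of the gsdmm package; the function name `gsdmm_fit` and the toy documents are made up for the example.

```python
import random
from math import log, exp

def gsdmm_fit(docs, K=8, alpha=0.1, beta=0.1, n_iters=15, seed=0):
    """Minimal GSDMM (Movie Group Process) Gibbs sampler.
    docs: list of token lists. Returns one cluster label per document."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})   # vocabulary size
    m_z = [0] * K                           # number of docs in each cluster
    n_z = [0] * K                           # number of words in each cluster
    n_zw = [dict() for _ in range(K)]       # per-cluster word counts
    labels = []

    def move(d, z, delta):
        # add (delta=+1) or remove (delta=-1) a document's counts for cluster z
        m_z[z] += delta
        n_z[z] += delta * len(d)
        for w in d:
            n_zw[z][w] = n_zw[z].get(w, 0) + delta

    # random initial assignment
    for d in docs:
        z = rng.randrange(K)
        labels.append(z)
        move(d, z, +1)

    for _ in range(n_iters):
        for i, d in enumerate(docs):
            move(d, labels[i], -1)          # take the doc out of its cluster
            # unnormalized log-probability of each cluster; constant factors
            # shared by all clusters are dropped since we sample proportionally
            logp = []
            for z in range(K):
                lp = log(m_z[z] + alpha)
                seen = {}                   # per-word occurrence index within d
                for w in d:
                    j = seen.get(w, 0)
                    lp += log(n_zw[z].get(w, 0) + beta + j)
                    seen[w] = j + 1
                for j in range(len(d)):
                    lp -= log(n_z[z] + V * beta + j)
                logp.append(lp)
            m = max(logp)
            weights = [exp(l - m) for l in logp]
            z = rng.choices(range(K), weights=weights)[0]
            labels[i] = z
            move(d, z, +1)                  # put the doc into its new cluster
    return labels

# toy usage: two clearly separated vocabularies
toy_docs = [["cat", "dog", "pet"]] * 5 + [["match", "goal", "team"]] * 5
print(gsdmm_fit(toy_docs, K=4, n_iters=20, seed=1))
```

Real work should use the gsdmm package as in the pipeline above; this sketch only shows the resampling step: remove a document from its cluster, score every cluster by the document-count prior times the word-likelihood ratio, and sample a new cluster from those scores.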
