When I run gensim's LdaMulticore model on a machine with 12 cores, using:

```python
lda = LdaMulticore(corpus, num_topics=64, workers=10)
```

I get a logging message that says

    using serial LDA version on this node

A few lines later, I see another logging message that says

    training LDA model using 10 processes

When I run `top`, I see that 11 Python processes have been spawned, but 9 are sleeping, i.e. only one worker is active. The machine has 24 cores and is not overwhelmed by any means. Why isn't LdaMulticore running in parallel mode?

Solution

First, make sure you have installed a fast BLAS library, because most of the time-consuming work happens inside low-level linear-algebra routines. On my machine, `gensim.models.ldamulticore.LdaMulticore` can use up all 20 CPU cores with `workers=4` during training; setting `workers` higher than this did not speed up the training. One reason might be that the corpus iterator is too slow for LdaMulticore to be used effectively.

You can try `ShardedCorpus` to serialize and replace the corpus, which should be much faster to read and write. Also, simply compressing your large `.mm` file so it takes up less space (= less I/O) may help too. E.g.:

```python
mm = gensim.corpora.MmCorpus(bz2.BZ2File('enwiki-latest-pages-articles_tfidf.mm.bz2'))
lda = gensim.models.ldamulticore.LdaMulticore(corpus=mm, id2word=id2word, num_topics=100, workers=4)
```
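A quick way to check whether NumPy (and therefore gensim's LDA routines) is linked against a fast BLAS is to print the build configuration and time a dense matrix multiply. This is a rough sketch, not part of gensim itself; the timing threshold depends entirely on your hardware, so no particular number is asserted here:

```python
import time

import numpy as np

# Show which BLAS/LAPACK libraries numpy was built against
# (look for openblas, mkl, or accelerate rather than the
# slow reference implementation).
np.__config__.show()

# A 2000x2000 matrix multiply is a crude speed probe: with an
# optimized BLAS it typically finishes in well under a second,
# while the unoptimized reference BLAS takes far longer.
n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.time()
c = a @ b
print('matmul of %dx%d took %.2f s' % (n, n, time.time() - start))
```

If the multiply is slow, installing an OpenBLAS- or MKL-backed NumPy will likely do more for LdaMulticore throughput than adding workers.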
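The compression suggestion can be sketched with the standard library alone. The file names below are hypothetical stand-ins for a real Matrix Market corpus such as `enwiki-latest-pages-articles_tfidf.mm`; the same `bz2` round-trip is what `MmCorpus(bz2.BZ2File(...))` relies on when reading the compressed file:

```python
import bz2
import os
import tempfile

# Hypothetical stand-in for a large .mm corpus file.
workdir = tempfile.mkdtemp()
plain_path = os.path.join(workdir, 'corpus.mm')
bz2_path = plain_path + '.bz2'

# Fake, highly repetitive corpus content so compression is visible.
data = b'%%MatrixMarket matrix coordinate real general\n' * 1000
with open(plain_path, 'wb') as f:
    f.write(data)

# Compress: fewer bytes on disk means less I/O while iterating the corpus.
with open(plain_path, 'rb') as src, bz2.open(bz2_path, 'wb') as dst:
    dst.write(src.read())

# Reading back through bz2 recovers the original bytes.
with bz2.open(bz2_path, 'rb') as f:
    restored = f.read()

print('compressed %d bytes down to %d bytes'
      % (os.path.getsize(plain_path), os.path.getsize(bz2_path)))
```

Decompression adds some CPU cost per pass over the corpus, so this trade only pays off when disk I/O, not CPU, is the bottleneck.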