Question
I need to cluster some text documents and have been researching various options. It looks like LingPipe can cluster plain text without prior conversion (to vector space etc), but it's the only tool I've seen that explicitly claims to work on strings.
Are there any Python tools that can cluster text directly? If not, what's the best way to handle this?
Answer
The quality of text clustering depends mainly on two factors:
Some notion of similarity between the documents you want to cluster. For example, it's easy to distinguish between news articles about sports and politics in vector space via tf-idf cosine distance. It's a lot harder to cluster product reviews into "good" or "bad" based on this measure.
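As a side note (this sketch is my addition, not part of the original answer), the tf-idf/cosine measure is only a few lines with scikit-learn, assuming that library is available:

# Sketch: tf-idf vectors plus pairwise cosine similarity via scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the team won the game",
    "the game went to overtime",
    "parliament passed the budget",
]
vectors = TfidfVectorizer().fit_transform(docs)  # rows are L2-normalized tf-idf vectors
print(cosine_similarity(vectors))                # 3x3 pairwise similarity matrix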
The clustering method itself. Do you know how many clusters there will be? OK, use k-means. You don't care about accuracy but want to show a nice tree structure for navigating search results? Use some kind of hierarchical clustering.
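For illustration (again my addition, assuming scikit-learn and scipy are acceptable dependencies), both scenarios look roughly like this:

# Sketch: k-means for a known cluster count, agglomerative clustering for a tree.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

texts = ["foo bar baz", "foo blub baz", "asdf bsdf csdf", "csdf hddf kjtz"]
X = TfidfVectorizer().fit_transform(texts)

flat = KMeans(n_clusters=2, n_init=10).fit_predict(X)           # you know k in advance

tree = linkage(X.toarray(), method="average", metric="cosine")  # full merge tree
hier = fcluster(tree, t=0.5, criterion="distance")              # cut it at a threshold
print(flat, hier)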
There is no text-clustering solution that works well under all circumstances, so it's probably not enough to take some clustering software out of the box and throw your data at it.
Having said that, here's some experimental code I used some time ago to play around with text clustering. The documents are represented as normalized tf-idf vectors and the similarity is measured as cosine distance. The clustering method itself is majorclust.
import sys
from math import log, sqrt
from itertools import combinations


def cosine_distance(a, b):
    # Dot product of two L2-normalized sparse tf-idf vectors,
    # which equals their cosine similarity.
    cos = 0.0
    a_tfidf = a["tfidf"]
    for token, tfidf in b["tfidf"].items():
        if token in a_tfidf:
            cos += tfidf * a_tfidf[token]
    return cos


def normalize(features):
    # Scale the vector to unit length so dot products become cosines.
    norm = 1.0 / sqrt(sum(i ** 2 for i in features.values()))
    for k, v in features.items():
        features[k] = v * norm
    return features


def add_tfidf_to(documents):
    # First pass: term frequencies per document, plus an inverted index
    # mapping each token to the documents it occurs in.
    tokens = {}
    for doc_id, doc in enumerate(documents):
        tf = {}
        doc["tfidf"] = {}
        doc_tokens = doc.get("tokens", [])
        for token in doc_tokens:
            tf[token] = tf.get(token, 0) + 1
        num_tokens = len(doc_tokens)
        if num_tokens > 0:
            for token, freq in tf.items():
                tokens.setdefault(token, []).append((doc_id, freq / num_tokens))

    # Second pass: tf-idf weights, then normalize each document vector.
    doc_count = float(len(documents))
    for token, docs in tokens.items():
        idf = log(doc_count / len(docs))
        for doc_id, tf in docs:
            tfidf = tf * idf
            if tfidf > 0:
                documents[doc_id]["tfidf"][token] = tfidf

    for doc in documents:
        doc["tfidf"] = normalize(doc["tfidf"])


def choose_cluster(node, cluster_lookup, edges):
    # A node joins the cluster whose members attract it with the
    # largest summed edge weight (ties broken arbitrarily).
    new = cluster_lookup[node]
    if node in edges:
        seen, num_seen = {}, {}
        for target, weight in edges.get(node, []):
            seen[cluster_lookup[target]] = seen.get(
                cluster_lookup[target], 0.0) + weight
        for k, v in seen.items():
            num_seen.setdefault(v, []).append(k)
        new = num_seen[max(num_seen)][0]
    return new


def majorclust(graph):
    # Start with every node in its own cluster, then repeatedly reassign
    # nodes until no new move is possible.
    cluster_lookup = dict((node, i) for i, node in enumerate(graph.nodes))

    movements = set()
    finished = False
    while not finished:
        finished = True
        for node in graph.nodes:
            new = choose_cluster(node, cluster_lookup, graph.edges)
            move = (node, cluster_lookup[node], new)
            if new != cluster_lookup[node] and move not in movements:
                movements.add(move)
                cluster_lookup[node] = new
                finished = False

    clusters = {}
    for k, v in cluster_lookup.items():
        clusters.setdefault(v, []).append(k)
    return list(clusters.values())


def get_distance_graph(documents):
    # Complete graph over document ids, edges weighted by cosine similarity.
    class Graph(object):
        def __init__(self):
            self.edges = {}

        def add_edge(self, n1, n2, w):
            self.edges.setdefault(n1, []).append((n2, w))
            self.edges.setdefault(n2, []).append((n1, w))

    graph = Graph()
    doc_ids = range(len(documents))
    graph.nodes = set(doc_ids)
    for a, b in combinations(doc_ids, 2):
        graph.add_edge(a, b, cosine_distance(documents[a], documents[b]))
    return graph


def get_documents():
    texts = [
        "foo blub baz",
        "foo bar baz",
        "asdf bsdf csdf",
        "foo bab blub",
        "csdf hddf kjtz",
        "123 456 890",
        "321 890 456 foo",
        "123 890 uiop",
    ]
    # Whitespace tokenization is good enough for this toy example.
    return [{"text": text, "tokens": text.split()} for text in texts]


def main(args):
    documents = get_documents()
    add_tfidf_to(documents)
    dist_graph = get_distance_graph(documents)
    for cluster in majorclust(dist_graph):
        print("=========")
        for doc_id in cluster:
            print(documents[doc_id]["text"])


if __name__ == '__main__':
    main(sys.argv)
For real applications, you would use a decent tokenizer, use integers instead of token strings, and not compute an O(n^2) distance matrix...
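To make the integer-id point concrete (a minimal sketch I'm adding, not from the original answer), a plain dict can intern each token string as a small integer once, so the vectors and the distance computation never touch strings again:

# Sketch: intern token strings as integer ids via a growing vocabulary.
vocab = {}

def token_id(token):
    # len(vocab) is the next free id; setdefault only inserts on first sight.
    return vocab.setdefault(token, len(vocab))

ids = [token_id(t) for t in "foo bar foo baz".split()]
print(ids)    # [0, 1, 0, 2]
print(vocab)  # {'foo': 0, 'bar': 1, 'baz': 2}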