Question
I have two documents. Doc1 is in the below format:
TOPIC: 0 5892.0
site 0.0371690427699
Internet 0.0261371350984
online 0.0229124236253
web 0.0218940936864
say 0.0159538357094
TOPIC: 1 12366.0
web 0.150331554262
site 0.0517548115801
say 0.0451237263464
Internet 0.0153647096879
online 0.0135856380398
...and so on till Topic 99 in the same pattern.
Doc2 is in the below format:
0 0.566667 0 0.0333333 0 0 0 0.133333 ..........
and so on... There are 100 values in total, one value for each topic.
Now, I have to find the weighted average probability for each word, that is:
P(w) = alpha_0*P(w|topic 0) + alpha_1*P(w|topic 1) + ... + alpha_99*P(w|topic 99)
where alpha_i is the value in the ith position of Doc2, i.e. the weight of the ith topic.
That is, for the word "say", the probability should be
P(say) = 0*0.0159 + 0.5666*0.045+.......
Likewise for each and every word, I have to calculate the probability.
For the multiplication, if the word is taken from topic 0, then the 0th value from Doc2 must be used, and so on.
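As a quick hand-check of the formula, here is the weighted sum for "say" worked out with the sample numbers above (the first two Doc2 values are the topic 0 and topic 1 weights; the remaining topics contribute nothing in this sample):

```python
# Weights for topics 0 and 1, taken from the Doc2 sample line.
weights = [0.0, 0.566667]
# Probabilities of "say" in topic 0 and topic 1, taken from the Doc1 sample.
p_say = [0.0159538357094, 0.0451237263464]

# P(say) = alpha_0 * P(say|topic 0) + alpha_1 * P(say|topic 1)
p = sum(w * pw for w, pw in zip(weights, p_say))
print(p)  # ~0.02557
```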
I have only counted the occurrences of words with the below code, but I have never taken their probability values into account. So, I am confused.
with open(doc2, "r") as f:
    with open(doc3, "w") as f1:
        words = " ".join(line.strip() for line in f)
        d = defaultdict(int)
        for word in words.split():
            d[word] += 1
        for key, value in d.iteritems():
            f1.write(key + ' ' + str(value) + ' ')
            print '\n'
My output should be as below:
say = "prob of this word calculated by above formula"
site = "
internet = "
And so on.
What am I doing wrong?
Answer
Presuming you are ignoring TOPIC lines, use a defaultdict to group the values and then do the calculation at the end:
from collections import defaultdict

d = defaultdict(list)
with open("doc1") as f, open("doc2") as f2:
    # one weight per topic from doc2; make it a list so it can be reused
    values = [float(x) for x in f2.read().split()]
    for line in f:
        if line.strip() and not line.startswith("TOPIC"):
            name, val = line.split()
            d[name].append(float(val))

for k, v in d.items():
    print("Prob for {} is {}".format(k, sum(i * j for i, j in zip(v, values))))
Another way would be to do the calculations as you go, increasing a counter each time you hit a new section (i.e. a line starting with TOPIC) and using it to index the correct weight from values:
from collections import defaultdict

d = defaultdict(float)
with open("doc1") as f, open("doc2") as f2:
    # create a list of all the topic weights from doc2
    values = [float(x) for x in f2.read().split()]
    ind = -1
    for line in f:
        # each new TOPIC line moves us to the next weight in values
        if line.startswith("TOPIC"):
            ind += 1
            continue
        # ignore empty lines
        if line.strip():
            # multiply the word's value by the weight of the current topic
            name, val = line.split()
            d[name] += float(val) * values[ind]

for k, v in d.items():
    print("Prob for {} is {}".format(k, v))
Using your two doc1 sections and 0 0.566667 0 0.0333333 0 inside doc2, both approaches output the following:
Prob for web is 0.085187930859
Prob for say is 0.0255701266375
Prob for online is 0.0076985327511
Prob for site is 0.0293277438137
Prob for Internet is 0.00870667394471
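As a sanity check, the running-total approach can be exercised end to end; a minimal sketch that writes the sample data from the question to temporary files (the paths here are created on the fly, not the asker's actual files):

```python
import os
import tempfile
from collections import defaultdict

# Recreate the sample Doc1/Doc2 content from the question.
doc1_text = """TOPIC: 0 5892.0
site 0.0371690427699
Internet 0.0261371350984
online 0.0229124236253
web 0.0218940936864
say 0.0159538357094
TOPIC: 1 12366.0
web 0.150331554262
site 0.0517548115801
say 0.0451237263464
Internet 0.0153647096879
online 0.0135856380398
"""
doc2_text = "0 0.566667 0 0.0333333 0\n"

tmp = tempfile.mkdtemp()
doc1 = os.path.join(tmp, "doc1")
doc2 = os.path.join(tmp, "doc2")
with open(doc1, "w") as f:
    f.write(doc1_text)
with open(doc2, "w") as f:
    f.write(doc2_text)

# Same running-total algorithm: bump the index on each TOPIC line,
# then weight every word value by the current topic's doc2 weight.
d = defaultdict(float)
with open(doc1) as f, open(doc2) as f2:
    values = [float(x) for x in f2.read().split()]
    ind = -1
    for line in f:
        if line.startswith("TOPIC"):
            ind += 1
            continue
        if line.strip():
            name, val = line.split()
            d[name] += float(val) * values[ind]

print(round(d["say"], 6))  # ~0.02557, matching the output above
```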
You could also use itertools groupby:
from collections import defaultdict
from itertools import groupby, imap

d = defaultdict(float)
with open("doc1") as f, open("doc2") as f2:
    values = imap(float, f2.read().split())
    # key=lambda x: not x.strip() splits the file into groups on the empty
    # lines, so this assumes the TOPIC sections are separated by blank lines
    for empty, group in groupby(f, key=lambda x: not x.strip()):
        if not empty:
            next(group)           # skip the TOPIC line
            weight = next(values)  # get the matching weight from values
            # iterate over the rest of the section
            for s in group:
                name, val = s.split()
                d[name] += float(val) * weight

for k, v in d.items():
    print("Prob for {} is {}".format(k, v))
For Python 3, all the itertools imap calls should be changed to plain map, which also returns an iterator in Python 3.
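Applied to that note, a Python 3 sketch of the groupby version (the data is inlined here for brevity, with a blank line between topics, which is what the groupby key relies on):

```python
from collections import defaultdict
from itertools import groupby

# Inline stand-in for doc1: two TOPIC sections separated by a blank line.
doc1_lines = [
    "TOPIC: 0 5892.0\n",
    "say 0.0159538357094\n",
    "\n",
    "TOPIC: 1 12366.0\n",
    "say 0.0451237263464\n",
]
# In Python 3 plain map is already lazy, so next() works on it directly.
values = map(float, "0 0.566667".split())

d = defaultdict(float)
for empty, group in groupby(doc1_lines, key=lambda x: not x.strip()):
    if not empty:
        next(group)            # skip the TOPIC line
        weight = next(values)  # matching topic weight
        for s in group:
            name, val = s.split()
            d[name] += float(val) * weight

print(d["say"])  # ~0.02557
```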