Suppose we have the following dataset:
import pandas as pd
data = [('apple', 'red', 155), ('apple', 'green', 102), ('apple', 'iphone', 48),
('tomato', 'red', 175), ('tomato', 'ketchup', 96), ('tomato', 'gun', 12)]
df = pd.DataFrame(data)
df.columns = ['word', 'rel_word', 'weight']
I want to recompute the weights so that they sum to 1.0 within each group ('apple' and 'tomato' in the example), while keeping the relative weights unchanged (e.g. apple/red to apple/green should still be 155/102).
Best answer
Use transform, which is faster than apply plus a lookup.
In [3849]: df['weight'] / df.groupby('word')['weight'].transform('sum')
Out[3849]:
0 0.508197
1 0.334426
2 0.157377
3 0.618375
4 0.339223
5 0.042403
Name: weight, dtype: float64
In [3850]: df['norm_w'] = df['weight'] / df.groupby('word')['weight'].transform('sum')
In [3851]: df
Out[3851]:
word rel_word weight norm_w
0 apple red 155 0.508197
1 apple green 102 0.334426
2 apple iphone 48 0.157377
3 tomato red 175 0.618375
4 tomato ketchup 96 0.339223
5 tomato gun 12 0.042403
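A quick sanity check (a minimal sketch, assuming the df and norm_w column built above): the normalized weights should sum to 1.0 within each group, and the original ratios should be preserved.
import numpy as np
# Each group's normalized weights should sum to 1.0 (up to floating-point error)
df.groupby('word')['norm_w'].sum()
# The relative weights are unchanged, e.g. apple/red vs. apple/green is still 155/102
np.isclose(df.loc[0, 'norm_w'] / df.loc[1, 'norm_w'], 155 / 102)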
Or, equivalently:
In [3852]: df.groupby('word')['weight'].transform(lambda x: x/x.sum())
Out[3852]:
0 0.508197
1 0.334426
2 0.157377
3 0.618375
4 0.339223
5 0.042403
Name: weight, dtype: float64
Timings
In [3862]: df.shape
Out[3862]: (12000, 4)
In [3864]: %timeit df['weight'] / df.groupby('word')['weight'].transform('sum')
100 loops, best of 3: 2.44 ms per loop
In [3866]: %timeit df.groupby('word')['weight'].transform(lambda x: x/x.sum())
100 loops, best of 3: 5.16 ms per loop
In [3868]: %%timeit
...: group_weights = df.groupby('word').aggregate(sum)
...: df.apply(lambda row: row['weight']/group_weights.loc[row['word']][0],axis=1)
1 loop, best of 3: 2.5 s per loop
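The 12000-row frame used for these timings is not shown in the answer. One way to build something comparable (a hypothetical setup, not the original benchmark data) is to repeat the 6-row example:
# Hypothetical timing setup: repeat the 6-row example 2000 times to get 12000 rows,
# then rerun the %timeit lines above against big instead of df.
big = pd.concat([df] * 2000, ignore_index=True)
big.shape   # (12000, 4)
Note that such a frame has only two groups; data with many small groups is likely to widen the gap between the Python-level lambda and the cythonized transform('sum') even further. Also, on recent pandas versions the last approach may need df.groupby('word')['weight'].sum() instead of aggregate(sum), so that the string column rel_word is not aggregated.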
A similar question, python - Pandas: normalize within groups, can be found on Stack Overflow: https://stackoverflow.com/questions/46419180/