I'm normalizing some data in pandas, and it is taking a very long time. The calculation seems relatively simple, though, and there are only about 2500 rows. Is there a faster way to do this?
I have done the normalization manually, as shown below.

import numpy as np

# normalize the rating columns to values between 0 and 1 (min-max scaling)
df_1['numerator_norm'] = ((df_1['rating_numerator'] - df_1['rating_numerator'].min()) /
                          (df_1['rating_numerator'].max() - df_1['rating_numerator'].min()))
df_1['denominator_norm'] = ((df_1['rating_denominator'] - df_1['rating_denominator'].min()) /
                            (df_1['rating_denominator'].max() - df_1['rating_denominator'].min()))
df_1['normalized_rating'] = np.nan

# divide the two normalized columns row by row (this is the slow part)
for index, row in df_1.iterrows():
    df_1['normalized_rating'][index] = (df_1['numerator_norm'][index] / df_1['denominator_norm'][index])

Ideally this would finish in a few seconds rather than roughly 60 seconds.

Best answer

Change:

for index, row in df_1.iterrows():
    df_1['normalized_rating'][index] = (df_1['numerator_norm'][index] / df_1['denominator_norm'][index])

to:

df_1['normalized_rating'] = df_1['numerator_norm'] / df_1['denominator_norm']

so that the division is vectorized over the whole column.
iterrows is best avoided; see Does iterrows have performance issues?
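
For a rough sense of the difference, here is a minimal, self-contained sketch (the data below is synthetic and made up purely for illustration; only the column names come from the question) that times the iterrows loop against the single vectorized division on about 2500 rows:

import time

import numpy as np
import pandas as pd

# Synthetic stand-ins for the already-normalized columns from the question;
# values are kept strictly positive so the division is always well defined.
rng = np.random.default_rng(0)
df_1 = pd.DataFrame({
    'numerator_norm': rng.uniform(0.1, 1.0, size=2500),
    'denominator_norm': rng.uniform(0.1, 1.0, size=2500),
})

# Slow version: one Python-level assignment per row via iterrows.
start = time.perf_counter()
slow = pd.Series(np.nan, index=df_1.index)
for index, row in df_1.iterrows():
    slow[index] = row['numerator_norm'] / row['denominator_norm']
print(f'iterrows loop: {time.perf_counter() - start:.4f} s')

# Fast version: one vectorized division over the whole columns.
start = time.perf_counter()
df_1['normalized_rating'] = df_1['numerator_norm'] / df_1['denominator_norm']
print(f'vectorized:    {time.perf_counter() - start:.6f} s')

# Both approaches produce the same values; only the execution model differs.
assert np.allclose(slow.to_numpy(), df_1['normalized_rating'].to_numpy())

The results are identical; the loop just pays Python-level overhead on every row, while the vectorized version performs one columnar operation in compiled code.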

Regarding python - Why does iterating run so slowly?, a similar question was found on Stack Overflow: https://stackoverflow.com/questions/54631549/
