I am trying to compute p-values and t-values for different subsegments within a dataframe.

The dataframe has two columns; here are the first 5 rows:

df[["Engagement_score", "Performance"]].head()
   Engagement_score  Performance
0    6                 0.0
1    5                 0.0
2    7                 66.3
3    3                 0.0
4    11                0.0


I group the dataframe by engagement score and then compute these three statistics for each group:

1) The mean performance score of the group (sub_average) and the number of values in the group (sub_bookings)

2) The mean performance score of the rest of the groups (rest_average) and the number of values in the rest of the groups (rest_bookings)

3) The overall performance score (overall_average) and overall number of bookings (overall_bookings), computed over the whole dataframe

Here is my code:

def stats_comparison(i):
    df.groupby(i)['Performance'].agg({
    'average': 'mean',
    'bookings': 'count'
    }).reset_index()
    cat = df.groupby(i)['Performance']\
        .agg({
            'sub_average': 'mean',
            'sub_bookings': 'count'
        }).reset_index()
    cat['overall_average'] = df['Performance'].mean()
    cat['overall_bookings'] = df['Performance'].count()
    cat['rest_bookings'] = cat['overall_bookings'] - cat['sub_bookings']
    cat['rest_average'] = (cat['overall_bookings']*cat['overall_average'] \
                     - cat['sub_bookings']*cat['sub_average'])/cat['rest_bookings']
    cat['t_value'] = stats.ttest_ind(cat['sub_average'], cat['rest_average'])[0]


    cat['prob'] = stats.ttest_ind(cat['sub_average'], cat['rest_average'])[1] # this is the p value
    cat['significant'] = [(lambda x: 1 if x > 0.9 else -1 if x < 0.1 else 0)(i) for i in cat['prob']]
    # if the p value is less than 0.1 then I can confidently say that the 2 samples are different.

    print(cat)

stats_comparison('Engagement_score')


I get the following output, but every subsegment ends up with the same p-value and t-value. How can I get a different p-value and t-value for each subsegment without writing a loop?

    Engagement_score  sub_average  sub_bookings  overall_average  \
0                 3    68.493120          1032         69.18413
1                 4    71.018214           571         69.18413
2                 5    70.265373           670         69.18413
3                 6    68.986506           704         69.18413
4                 7    69.587893           636         69.18413
5                 8    70.215244           656         69.18413
6                 9    63.495813           812         69.18413
7                10    71.235994           664         69.18413
8                11    69.302559           508         69.18413
9                12    81.980952           105         69.18413

   overall_bookings  rest_bookings  rest_average   t_value      prob  \
0              6358           5326     69.318025  0.870172  0.395663
1              6358           5787     69.003162  0.870172  0.395663
2              6358           5688     69.056769  0.870172  0.395663
3              6358           5654     69.208737  0.870172  0.395663
4              6358           5722     69.139252  0.870172  0.395663
5              6358           5702     69.065503  0.870172  0.395663
6              6358           5546     70.016967  0.870172  0.395663
7              6358           5694     68.944854  0.870172  0.395663
8              6358           5850     69.173846  0.870172  0.395663
9              6358           6253     68.969247  0.870172  0.395663

Best answer

The reason every row gets the same t-value and p-value is that stats.ttest_ind(cat['sub_average'], cat['rest_average']) is called once on the two columns of group means, so a single scalar t and p is broadcast to every row. What you want is to test each group's raw Performance values against the raw values of everything outside that group, and I think you can just do that with a simple loop over the engagement groups.

Sample data

import numpy as np
import pandas as pd
from scipy import stats

np.random.seed(123)
df = pd.DataFrame({'Engagement Score': np.random.choice(list('abcde'), 1000),
                   'Performance': np.random.normal(0,1,1000)})




# Get all of the subgroup averages and counts
d = {'mean': 'sub_average', 'size': 'sub_bookings'}
df_res = df.groupby('Engagement Score').Performance.agg(['mean', 'size']).rename(columns=d)

# Add overall values
df_res['overall_avg'] = df.Performance.mean()
df_res['overall_bookings'] = len(df)

# T-test of each subgroup against everything not in that subgroup.
for grp in df['Engagement Score'].unique():
    # mask to separate the groups
    m = df['Engagement Score'] == grp
    # Decide whether you want to assume equal variances. equal_var=True by default.
    t,p = stats.ttest_ind(df.loc[m, 'Performance'], df.loc[~m, 'Performance'])
    df_res.loc[grp, 't_stat'] = t
    df_res.loc[grp, 'p-value'] = p
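
As a side note on the equal_var comment above: if you do not want to assume equal variances between a group and its complement, the same call can be made as a Welch's t-test. A sketch of the drop-in replacement for the line inside the loop:

    # Welch's t-test variant (does not assume equal variances); drop-in for the ttest_ind line above.
    t, p = stats.ttest_ind(df.loc[m, 'Performance'], df.loc[~m, 'Performance'], equal_var=False)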


Output of df_res:

                  sub_average  sub_bookings  overall_avg  overall_bookings    t_stat   p-value
Engagement Score
a                   -0.024469           203     -0.03042              1000  0.094585  0.924663
b                   -0.053663           206     -0.03042              1000 -0.372866  0.709328
c                    0.080888           179     -0.03042              1000  1.638958  0.101537
d                   -0.127941           224     -0.03042              1000 -1.652303  0.098787
e                   -0.001161           188     -0.03042              1000  0.443412  0.657564


As expected, nothing is significant, since all of the values come from the same normal distribution.
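
If you would rather avoid the explicit Python loop, as the question asks, roughly the same per-group test can be phrased with groupby().apply(). This is only a sketch against the sample data above, and the helper name ttest_vs_rest is made up here:

# Loop-free sketch: run a t-test of each group's raw values against the complement of that group.
# Assumes the same df / column names as the sample data above; ttest_vs_rest is a hypothetical helper.
def ttest_vs_rest(sub, full):
    # sub holds one group's raw Performance values; compare against everything outside the group.
    rest = full[~full.index.isin(sub.index)]
    t, p = stats.ttest_ind(sub, rest)
    return pd.Series({'t_stat': t, 'p-value': p})

res = (df.groupby('Engagement Score')['Performance']
         .apply(ttest_vs_rest, full=df['Performance'])
         .unstack())
print(res)

Under the hood this does the same per-group split as the loop, so it is not faster, just more compact.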
