Question
I am trying to calculate the percentile of a column in a DataFrame, but I can't find any percentile_approx function among Spark's aggregation functions.
For example, in Hive we have percentile_approx, and we can use it in the following way:
hiveContext.sql("select percentile_approx(Open_Rate, 0.10) from myTable")
But I want to do it using the Spark DataFrame API for performance reasons.
Sample dataset:
|User ID|Open_Rate|
|-------|---------|
|A1     |10.3     |
|B1     |4.04     |
|C1     |21.7     |
|D1     |18.6     |
I want to find out how many users fall into the 10th percentile, the 20th percentile, and so on. I want to do something like this:
df.select($"id",Percentile($"Open_Rate",0.1)).show
Answer
Since Spark 2.0, this has become easier: simply use this function from DataFrameStatFunctions, for example:
df.stat.approxQuantile("Open_Rate", Array(0.25, 0.50, 0.75), 0.0)
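To show the one-liner above in context, here is a minimal self-contained sketch built on the question's sample data, assuming Spark 2.x running with a local master (the object name and `local[*]` master are illustrative, not from the original post):

```scala
import org.apache.spark.sql.SparkSession

object ApproxQuantileExample {
  def quartiles(): Array[Double] = {
    val spark = SparkSession.builder()
      .appName("ApproxQuantileExample")
      .master("local[*]")          // illustrative: run locally on all cores
      .getOrCreate()
    import spark.implicits._

    // The sample data from the question.
    val df = Seq(("A1", 10.3), ("B1", 4.04), ("C1", 21.7), ("D1", 18.6))
      .toDF("User_ID", "Open_Rate")

    // approxQuantile(column, probabilities, relativeError);
    // a relativeError of 0.0 requests exact quantiles at higher cost.
    val result = df.stat.approxQuantile("Open_Rate", Array(0.25, 0.50, 0.75), 0.0)

    spark.stop()
    result
  }

  def main(args: Array[String]): Unit =
    println(quartiles().mkString(", "))
}
```

The third argument trades accuracy for speed: on large tables a small positive relative error (e.g. 0.01) can be substantially faster than an exact computation.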
There are other useful statistical functions for DataFrames in DataFrameStatFunctions.
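The question also asks how many users fall into each percentile bucket, which approxQuantile does not answer directly. One option, sketched here under the same Spark 2.x assumption, is the ntile window function (the "decile" column name is illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.ntile

object NtileExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("NtileExample")
      .master("local[*]")          // illustrative local setup
      .getOrCreate()
    import spark.implicits._

    val df = Seq(("A1", 10.3), ("B1", 4.04), ("C1", 21.7), ("D1", 18.6))
      .toDF("User_ID", "Open_Rate")

    // ntile(10) assigns each row to one of 10 equal-frequency buckets.
    // Caveat: a window with no partitionBy moves all rows to a single
    // partition, so this does not scale to very large tables.
    val byRate = Window.orderBy($"Open_Rate")
    val withDecile = df.withColumn("decile", ntile(10).over(byRate))
    withDecile.show()

    spark.stop()
  }
}
```

Grouping by the resulting bucket column (`groupBy("decile").count()`) then gives the per-percentile user counts the question asks for.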