I have the following data to work with:
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

data = {'date': ['2014-01-01', '2014-01-02', '2014-01-03', '2014-01-04', '2014-01-05', '2014-01-06'],
        'flat': ['A;A;B', 'D;P;E;P;P', 'H;X', 'P;Q;G', 'S;T;U', 'G;C;G']}
data = pd.DataFrame(data)
data['date'] = pd.to_datetime(data['date'])
spark = SparkSession.builder \
.master('local[*]') \
.config("spark.driver.memory", "500g") \
.appName('my-pandasToSparkDF-app') \
.getOrCreate()
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
spark.sparkContext.setLogLevel("OFF")
df = spark.createDataFrame(data)
new_frame = df.withColumn("list", F.split("flat", ";"))
I want to add a new column containing the number of occurrences of each distinct element (in ascending order), and another column containing the maximum:
+-------------------+-----------+---------------------+-----------+----+
| date| flat | list |occurrences|max |
+-------------------+-----------+---------------------+-----------+----+
|2014-01-01 00:00:00|A;A;B |['A','A','B'] |[1,2] |2 |
|2014-01-02 00:00:00|D;P;E;P;P |['D','P','E','P','P']|[1,1,3] |3 |
|2014-01-03 00:00:00|H;X |['H','X'] |[1,1] |1 |
|2014-01-04 00:00:00|P;Q;G |['P','Q','G'] |[1,1,1] |1 |
|2014-01-05 00:00:00|S;T;U |['S','T','U'] |[1,1,1] |1 |
|2014-01-06 00:00:00|G;C;G |['G','C','G'] |[1,2] |2 |
+-------------------+-----------+---------------------+-----------+----+
Thank you very much!
Best Answer
You can do this with a couple of groupBy statements.
First, you need a dataframe like this,
+-------------------+---------+---------------+
| date| flat| list|
+-------------------+---------+---------------+
|2014-01-01 00:00:00| A;A;B| [A, A, B]|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]|
|2014-01-03 00:00:00| H;X| [H, X]|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]|
|2014-01-05 00:00:00| S;T;U| [S, T, U]|
|2014-01-06 00:00:00| G;C;G| [G, C, G]|
+-------------------+---------+---------------+
Explode the list column using F.explode, like this:
new_frame_exp = new_frame.withColumn("exp", F.explode('list'))
Then your dataframe will look like this,
+-------------------+---------+---------------+---+
| date| flat| list|exp|
+-------------------+---------+---------------+---+
|2014-01-01 00:00:00| A;A;B| [A, A, B]| A|
|2014-01-01 00:00:00| A;A;B| [A, A, B]| A|
|2014-01-01 00:00:00| A;A;B| [A, A, B]| B|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| D|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| P|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| E|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| P|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| P|
|2014-01-03 00:00:00| H;X| [H, X]| H|
|2014-01-03 00:00:00| H;X| [H, X]| X|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]| P|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]| Q|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]| G|
|2014-01-05 00:00:00| S;T;U| [S, T, U]| S|
|2014-01-05 00:00:00| S;T;U| [S, T, U]| T|
|2014-01-05 00:00:00| S;T;U| [S, T, U]| U|
|2014-01-06 00:00:00| G;C;G| [G, C, G]| G|
|2014-01-06 00:00:00| G;C;G| [G, C, G]| C|
|2014-01-06 00:00:00| G;C;G| [G, C, G]| G|
+-------------------+---------+---------------+---+
On this dataframe, group and count like this,
new_frame_exp_agg = new_frame_exp.groupBy('date', 'flat', 'list', 'exp').count()
and you will get a dataframe like this,
+-------------------+---------+---------------+---+-----+
| date| flat| list|exp|count|
+-------------------+---------+---------------+---+-----+
|2014-01-03 00:00:00| H;X| [H, X]| H| 1|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]| G| 1|
|2014-01-05 00:00:00| S;T;U| [S, T, U]| U| 1|
|2014-01-05 00:00:00| S;T;U| [S, T, U]| T| 1|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]| P| 1|
|2014-01-03 00:00:00| H;X| [H, X]| X| 1|
|2014-01-06 00:00:00| G;C;G| [G, C, G]| G| 2|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| E| 1|
|2014-01-06 00:00:00| G;C;G| [G, C, G]| C| 1|
|2014-01-05 00:00:00| S;T;U| [S, T, U]| S| 1|
|2014-01-01 00:00:00| A;A;B| [A, A, B]| B| 1|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| D| 1|
|2014-01-04 00:00:00| P;Q;G| [P, Q, G]| Q| 1|
|2014-01-01 00:00:00| A;A;B| [A, A, B]| A| 2|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]| P| 3|
+-------------------+---------+---------------+---+-----+
On this dataframe, apply one more level of aggregation to collect the counts into a list and find the maximum, like this:
res = new_frame_exp_agg.groupBy('date', 'flat', 'list').agg(
          F.collect_list('count').alias('occurrences'),
          F.max('count').alias('max'))
res.orderBy('date').show()
+-------------------+---------+---------------+-----------+---+
|               date|     flat|           list|occurrences|max|
+-------------------+---------+---------------+-----------+---+
|2014-01-01 00:00:00|    A;A;B|      [A, A, B]|     [2, 1]|  2|
|2014-01-02 00:00:00|D;P;E;P;P|[D, P, E, P, P]|  [1, 1, 3]|  3|
|2014-01-03 00:00:00|      H;X|         [H, X]|     [1, 1]|  1|
|2014-01-04 00:00:00|    P;Q;G|      [P, Q, G]|  [1, 1, 1]|  1|
|2014-01-05 00:00:00|    S;T;U|      [S, T, U]|  [1, 1, 1]|  1|
|2014-01-06 00:00:00|    G;C;G|      [G, C, G]|     [1, 2]|  2|
+-------------------+---------+---------------+-----------+---+
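For reference, the three steps can also be chained into a single expression (a sketch assembled from the snippets above; same logic, just without the intermediate names):
res = (new_frame
       .withColumn('exp', F.explode('list'))                # one row per element
       .groupBy('date', 'flat', 'list', 'exp').count()      # count each element per original row
       .groupBy('date', 'flat', 'list')
       .agg(F.collect_list('count').alias('occurrences'),   # gather the counts into a list
            F.max('count').alias('max')))                   # largest count
res.orderBy('date').show()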
If you want the occurrences column sorted, you can use F.array_sort on the column if you are on Spark 2.4+; otherwise you will have to write a udf for it.
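For example (a minimal sketch; the pre-2.4 fallback udf simply sorts the collected list with Python's sorted):
# Spark 2.4+: sort the counts in ascending order natively
res = res.withColumn('occurrences', F.array_sort('occurrences'))

# Before Spark 2.4: an equivalent udf (the counts are LongType, since they come from count())
from pyspark.sql.types import ArrayType, LongType
sort_udf = F.udf(lambda xs: sorted(xs), ArrayType(LongType()))
res = res.withColumn('occurrences', sort_udf('occurrences'))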