One way is to collect the `dollars` column as a list per window, then compute the median of each list with a UDF:

```python
from pyspark.sql.window import Window
from pyspark.sql.functions import col, collect_list, udf
from pyspark.sql.types import FloatType
import numpy as np

# 3-second rolling window: the current row plus the 2 preceding seconds
w = Window.orderBy(col("timestampGMT").cast("long")).rangeBetween(-2, 0)

# UDF that takes each collected list and returns its median as a float
median_udf = udf(lambda x: float(np.median(x)), FloatType())

df.withColumn("list", collect_list("dollars").over(w)) \
  .withColumn("rolling_median", median_udf("list")) \
  .show(truncate=False)
```

```
+-------+---------------------+------------+--------------+
|dollars|timestampGMT         |list        |rolling_median|
+-------+---------------------+------------+--------------+
|25     |2017-03-18 11:27:18.0|[25]        |25.0          |
|17     |2017-03-18 11:27:19.0|[25, 17]    |21.0          |
|13     |2017-03-18 11:27:20.0|[25, 17, 13]|17.0          |
|27     |2017-03-18 11:27:21.0|[17, 13, 27]|17.0          |
|13     |2017-03-18 11:27:22.0|[13, 27, 13]|13.0          |
|43     |2017-03-18 11:27:23.0|[27, 13, 43]|27.0          |
|12     |2017-03-18 11:27:24.0|[13, 43, 12]|13.0          |
+-------+---------------------+------------+--------------+
```
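For completeness, here is a minimal sketch of how the sample `df` above could be built. The original question's construction is not shown, so the session setup and the string-to-timestamp cast are assumptions; the values are taken from the output table:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Sample rows matching the output shown above (assumed construction)
rows = [
    (25, "2017-03-18 11:27:18"),
    (17, "2017-03-18 11:27:19"),
    (13, "2017-03-18 11:27:20"),
    (27, "2017-03-18 11:27:21"),
    (13, "2017-03-18 11:27:22"),
    (43, "2017-03-18 11:27:23"),
    (12, "2017-03-18 11:27:24"),
]

# Cast the string column to a timestamp so it can be cast to 'long'
# (seconds since the epoch) for the range-based window above
df = (spark.createDataFrame(rows, ["dollars", "timestampGMT"])
      .withColumn("timestampGMT", col("timestampGMT").cast("timestamp")))
```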