Is it possible to create a window function with rangeBetween or rowsBetween that has multiple conditions in the orderBy? Suppose I have the following dataframe.

user_id     timestamp               date        event
0040b5f0    2018-01-22 13:04:32     2018-01-22  1
0040b5f0    2018-01-22 13:04:35     2018-01-22  0
0040b5f0    2018-01-25 18:55:08     2018-01-25  1
0040b5f0    2018-01-25 18:56:17     2018-01-25  1
0040b5f0    2018-01-25 20:51:43     2018-01-25  1
0040b5f0    2018-01-31 07:48:43     2018-01-31  1
0040b5f0    2018-01-31 07:48:48     2018-01-31  0
0040b5f0    2018-02-02 09:40:58     2018-02-02  1
0040b5f0    2018-02-02 09:41:01     2018-02-02  0
0040b5f0    2018-02-05 14:03:27     2018-02-05  1

For each row, I need the sum of the event column over rows whose date is within the previous 3 days. However, I must not include events that happen later on the same day. I can create a window function like this:
from pyspark.sql import functions as F
from pyspark.sql.window import Window

days = lambda i: i * 86400
my_window = Window\
                .partitionBy(["user_id"])\
                .orderBy(F.col("date").cast("timestamp").cast("long"))\
                .rangeBetween(-days(3), 0)

But this would include events that happen later on the same date. I need a window function that behaves like this (for the row marked with *):
user_id     timestamp               date        event
0040b5f0    2018-01-22 13:04:32     2018-01-22  1----|==============|
0040b5f0    2018-01-22 13:04:35     2018-01-22  0  sum here       all events
0040b5f0    2018-01-25 18:55:08     2018-01-25  1 only           within 3 days
* 0040b5f0  2018-01-25 18:56:17     2018-01-25  1----|              |
0040b5f0    2018-01-25 20:51:43     2018-01-25  1===================|
0040b5f0    2018-01-31 07:48:43     2018-01-31  1
0040b5f0    2018-01-31 07:48:48     2018-01-31  0
0040b5f0    2018-02-02 09:40:58     2018-02-02  1
0040b5f0    2018-02-02 09:41:01     2018-02-02  0
0040b5f0    2018-02-05 14:03:27     2018-02-05  1

I tried to create something like the following:
days = lambda i: i * 86400
my_window = Window\
                .partitionBy(["user_id"])\
                .orderBy(F.col("date").cast("timestamp").cast("long"))\
                .rangeBetween(-days(3), Window.currentRow)\
                .orderBy(F.col("t_stamp"))\
                .rowsBetween(Window.unboundedPreceding, Window.currentRow)

But it only reflects the last orderBy: each subsequent orderBy/rowsBetween call replaces the previously defined ordering and frame, so the rangeBetween spec is discarded.
The resulting table should look like this:
user_id     timestamp               date        event   event_last_3d
0040b5f0    2018-01-22 13:04:32     2018-01-22  1       1
0040b5f0    2018-01-22 13:04:35     2018-01-22  0       1
0040b5f0    2018-01-25 18:55:08     2018-01-25  1       2
0040b5f0    2018-01-25 18:56:17     2018-01-25  1       3
0040b5f0    2018-01-25 20:51:43     2018-01-25  1       4
0040b5f0    2018-01-31 07:48:43     2018-01-31  1       1
0040b5f0    2018-01-31 07:48:48     2018-01-31  0       1
0040b5f0    2018-02-02 09:40:58     2018-02-02  1       2
0040b5f0    2018-02-02 09:41:01     2018-02-02  0       2
0040b5f0    2018-02-05 14:03:27     2018-02-05  1       2

I have been stuck on this for a while and would appreciate any advice on how to approach it.

Best answer

I have written code in Scala that does what you are asking for. I don't think it would be hard to convert it to Python:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val DAY_SECS = 24 * 60 * 60 // seconds in a day
// Given a timestamp in seconds, returns the seconds equivalent of 00:00:00 of that date
val trimToDateBoundary = (d: Long) => (d / 86400) * 86400

// Using 4 days for the range: the requirement is to cover the 3 previous days, which,
// counting the current date, spans 4 calendar days. Note that Window.currentRow is the
// constant 0, so the lower bound below evaluates to -4*DAY_SECS and, for each row, the
// frame covers [current TS - 4 days, current TS].
val wSpec = Window.partitionBy("user_id").
  orderBy(col("timestamp").cast("long")).
  rangeBetween(trimToDateBoundary(Window.currentRow) - (4 * DAY_SECS), Window.currentRow)

df.withColumn("sum", sum(col("event")) over wSpec).show()

Here is the output when it is applied to your data:
+--------+--------------------+--------------------+-----+---+
| user_id|           timestamp|                date|event|sum|
+--------+--------------------+--------------------+-----+---+
|0040b5f0|2018-01-22 13:04:...|2018-01-22 00:00:...|  1.0|1.0|
|0040b5f0|2018-01-22 13:04:...|2018-01-22 00:00:...|  0.0|1.0|
|0040b5f0|2018-01-25 18:55:...|2018-01-25 00:00:...|  1.0|2.0|
|0040b5f0|2018-01-25 18:56:...|2018-01-25 00:00:...|  1.0|3.0|
|0040b5f0|2018-01-25 20:51:...|2018-01-25 00:00:...|  1.0|4.0|
|0040b5f0|2018-01-31 07:48:...|2018-01-31 00:00:...|  1.0|1.0|
|0040b5f0|2018-01-31 07:48:...|2018-01-31 00:00:...|  0.0|1.0|
|0040b5f0|2018-02-02 09:40:...|2018-02-02 00:00:...|  1.0|2.0|
|0040b5f0|2018-02-02 09:41:...|2018-02-02 00:00:...|  0.0|2.0|
|0040b5f0|2018-02-05 14:03:...|2018-02-05 00:00:...|  1.0|2.0|
+--------+--------------------+--------------------+-----+---+
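
Since the question is in PySpark, a rough translation of the window spec above might look like this (an untested sketch, assuming the same column names; the lower bound is written as the constant -4*DAY_SECS because Window.currentRow is 0):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

DAY_SECS = 24 * 60 * 60

# Frame covers [current timestamp - 4 days, current timestamp], as in the Scala spec above
w_spec = Window\
            .partitionBy("user_id")\
            .orderBy(F.col("timestamp").cast("long"))\
            .rangeBetween(-4 * DAY_SECS, Window.currentRow)

df.withColumn("sum", F.sum("event").over(w_spec)).show()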

I have not used the "date" column here, and I am not sure how we could meet your requirement using it. So if the date of the timestamp can differ from the date column, this solution does not cover that case.
Note: the rangeBetween overload that accepts Column arguments, including date/timestamp columns, was introduced in Spark 2.3.0, so a more elegant solution may be possible there.
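
The accepted answer does not use the date column, but the requirement in the question (sum everything from the last 3 calendar dates while excluding later events on the current date) can also be expressed with two windows. This is only a sketch, not part of the original answer, assuming the column names from the question: take the 3-day date-range sum and subtract the events that occur later on the same day.

from pyspark.sql import functions as F
from pyspark.sql.window import Window

DAY_SECS = 24 * 60 * 60

# All events whose date falls within the 3 previous days or on the current date
w_3d = Window\
            .partitionBy("user_id")\
            .orderBy(F.col("date").cast("timestamp").cast("long"))\
            .rangeBetween(-3 * DAY_SECS, 0)

# Events occurring later on the same calendar day as the current row
w_later_same_day = Window\
            .partitionBy("user_id", "date")\
            .orderBy(F.col("timestamp").cast("long"))\
            .rowsBetween(1, Window.unboundedFollowing)

df.withColumn(
    "event_last_3d",
    F.sum("event").over(w_3d)
    - F.coalesce(F.sum("event").over(w_later_same_day), F.lit(0))
).show()

On the sample data this should reproduce the event_last_3d column shown in the question.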
