This article covers how to compute the date difference between consecutive rows of a PySpark DataFrame.
Problem Description
I have a table with the following structure:
USER_ID  Tweet_ID  Date
1        1001      Thu Aug 05 19:11:39 +0000 2010
1        6022      Mon Aug 09 17:51:19 +0000 2010
1        1041      Sun Aug 19 11:10:09 +0000 2010
2        9483      Mon Jan 11 10:51:23 +0000 2012
2        4532      Fri May 21 11:11:11 +0000 2012
3        4374      Sat Jul 10 03:21:23 +0000 2013
3        4334      Sun Jul 11 04:53:13 +0000 2013
Basically, what I would like to do is write a PySpark SQL query that calculates the date difference (in seconds) between consecutive records with the same user_id. The expected result would be:
1  Sun Aug 19 11:10:09 +0000 2010 - Mon Aug 09 17:51:19 +0000 2010   839930
1  Mon Aug 09 17:51:19 +0000 2010 - Thu Aug 05 19:11:39 +0000 2010   340780
2  Fri May 21 11:11:11 +0000 2012 - Mon Jan 11 10:51:23 +0000 2012  1813212
3  Sun Jul 11 04:53:13 +0000 2013 - Sat Jul 10 03:21:23 +0000 2013     5510
Recommended Answer
Something like this:
df.registerTempTable("df")

# Assumes the date column is a TimestampType: casting a timestamp
# to bigint yields epoch seconds, so the subtraction is in seconds.
sqlContext.sql("""
    SELECT *,
           CAST(date AS bigint) - CAST(lag(date, 1) OVER (
               PARTITION BY user_id ORDER BY date) AS bigint) AS diff_seconds
    FROM df""")
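The query above assumes date is already a timestamp column. In this table the dates arrive as Twitter-style strings, so here is a minimal self-contained sketch of the same lag computation that parses them first. It uses the DataFrame API instead of SQL, and the column names date_str, ts, and diff_seconds are my own:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

data = [
    (1, 1001, "Thu Aug 05 19:11:39 +0000 2010"),
    (1, 6022, "Mon Aug 09 17:51:19 +0000 2010"),
    (1, 1041, "Sun Aug 19 11:10:09 +0000 2010"),
    (2, 9483, "Mon Jan 11 10:51:23 +0000 2012"),
    (2, 4532, "Fri May 21 11:11:11 +0000 2012"),
    (3, 4374, "Sat Jul 10 03:21:23 +0000 2013"),
    (3, 4334, "Sun Jul 11 04:53:13 +0000 2013"),
]
df = spark.createDataFrame(data, ["user_id", "tweet_id", "date_str"])

# Parse the Twitter-style string into epoch seconds; the pattern
# matches e.g. "Thu Aug 05 19:11:39 +0000 2010". On Spark 3+, if
# the pattern is rejected, set spark.sql.legacy.timeParserPolicy=LEGACY.
df = df.withColumn("ts", F.unix_timestamp("date_str", "EEE MMM dd HH:mm:ss Z yyyy"))

# For each user, order tweets by time and subtract the previous timestamp.
w = Window.partitionBy("user_id").orderBy("ts")
result = df.withColumn("diff_seconds", F.col("ts") - F.lag("ts", 1).over(w))

result.show(truncate=False)

Note that the first row in each user_id partition has no predecessor, so its diff_seconds is null; the remaining rows carry the per-user differences in seconds asked for above.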