This article covers "SparkSQL on pyspark: how to generate a time series?", which may serve as a useful reference for anyone facing the same problem.

Problem Description

I'm using SparkSQL on pyspark to store some PostgreSQL tables into DataFrames, and then build a query that generates several time series based on start and stop columns of type date.

Suppose my_table contains:

 start      | stop       
-------------------------
 2000-01-01 | 2000-01-05 
 2012-03-20 | 2012-03-23 
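
For reference, here is a minimal sketch of recreating this sample table as a pyspark DataFrame, assuming an existing SparkSession named spark (in the original setup the data is loaded from PostgreSQL instead):

from datetime import date

# Hypothetical setup: recreate the sample my_table as a DataFrame,
# assuming a SparkSession named `spark` already exists.
df = spark.createDataFrame(
    [(date(2000, 1, 1), date(2000, 1, 5)),
     (date(2012, 3, 20), date(2012, 3, 23))],
    ["start", "stop"],
)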

In PostgreSQL it's very easy to do that:

SELECT generate_series(start, stop, '1 day'::interval)::date AS dt FROM my_table

It will generate this table:

 dt
------------
 2000-01-01
 2000-01-02
 2000-01-03
 2000-01-04
 2000-01-05
 2012-03-20
 2012-03-21
 2012-03-22
 2012-03-23

But how do I do that using plain SparkSQL? Will it be necessary to use UDFs or some DataFrame methods?

Recommended Answer

Suppose you have a dataframe df from Spark SQL. Try this:

from pyspark.sql import functions as F
from pyspark.sql import types as T
from datetime import timedelta

# Build the list of consecutive dates: `start` plus the next `total - 1` days.
def timeseriesDF(start, total):
    series = [start]
    for i in range(total - 1):
        series.append(series[-1] + timedelta(days=1))
    return series

# Wrap the function as a UDF that returns an array of dates, then explode
# the array so that each date becomes its own row. datediff(stop, start) + 1
# is the inclusive number of days between the two columns.
df.withColumn("t_series", F.udf(
                timeseriesDF,
                T.ArrayType(T.DateType())
            )(df.start, F.datediff(df.stop, df.start) + 1)
    ).select(F.explode("t_series").alias("dt")).show()
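
As a side note, on Spark 2.4 and later the built-in sequence function can generate the date range directly in SQL, without a Python UDF. A minimal sketch, assuming the DataFrame is registered as a temp view and a SparkSession named spark:

# Assumes Spark 2.4+ (for the built-in `sequence` function) and an
# existing SparkSession named `spark`.
df.createOrReplaceTempView("my_table")
spark.sql("""
    SELECT explode(sequence(start, stop, interval 1 day)) AS dt
    FROM my_table
""").show()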

