This article covers Spark DataFrame TimestampType: how to get the Year, Month, and Day values from a field. It should be a useful reference for anyone facing the same problem.

Problem Description

I have a Spark DataFrame whose take(5) top rows look like this:

[Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=1, value=638.55),
 Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=2, value=638.55),
 Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=3, value=638.55),
 Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=4, value=638.55),
 Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=5, value=638.55)]

Its schema is defined as:

elevDF.printSchema()

root
 |-- date: timestamp (nullable = true)
 |-- hour: long (nullable = true)
 |-- value: double (nullable = true)

How do I get the Year, Month, and Day values from the 'date' field?

Recommended Answer

Since Spark 1.5 you can use a number of built-in date processing functions:

import datetime
from pyspark.sql.functions import year, month, dayofmonth

elevDF = sc.parallelize([
    (datetime.datetime(1984, 1, 1, 0, 0), 1, 638.55),
    (datetime.datetime(1984, 1, 1, 0, 0), 2, 638.55),
    (datetime.datetime(1984, 1, 1, 0, 0), 3, 638.55),
    (datetime.datetime(1984, 1, 1, 0, 0), 4, 638.55),
    (datetime.datetime(1984, 1, 1, 0, 0), 5, 638.55)
]).toDF(["date", "hour", "value"])

elevDF.select(
    year("date").alias('year'), 
    month("date").alias('month'), 
    dayofmonth("date").alias('day')
).show()
# +----+-----+---+
# |year|month|day|
# +----+-----+---+
# |1984|    1|  1|
# |1984|    1|  1|
# |1984|    1|  1|
# |1984|    1|  1|
# |1984|    1|  1|
# +----+-----+---+
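
The same projection can also be written with SQL expressions via selectExpr, or the extracted columns can be appended to the original DataFrame with withColumn if you want to keep the existing fields. A minimal sketch reusing the elevDF defined above (the column names year, month and day are arbitrary):

from pyspark.sql.functions import year, month, dayofmonth

# Keep the original columns and append the extracted date parts
elevDF_with_parts = (elevDF
    .withColumn("year", year("date"))
    .withColumn("month", month("date"))
    .withColumn("day", dayofmonth("date")))

# Equivalent projection written as SQL expressions
elevDF.selectExpr(
    "year(date) AS year",
    "month(date) AS month",
    "dayofmonth(date) AS day"
).show()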

You can also use a simple map, just as with any other RDD:

from pyspark.sql import Row

elevDF = sqlContext.createDataFrame(sc.parallelize([
        Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=1, value=638.55),
        Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=2, value=638.55),
        Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=3, value=638.55),
        Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=4, value=638.55),
        Row(date=datetime.datetime(1984, 1, 1, 0, 0), hour=5, value=638.55)]))

(elevDF
 .rdd  # DataFrame.map is unavailable in Spark 2.x, and tuple-unpacking lambdas are Python 2 only
 .map(lambda row: (row.date.year, row.date.month, row.date.day))
 .collect())

The result is:

[(1984, 1, 1), (1984, 1, 1), (1984, 1, 1), (1984, 1, 1), (1984, 1, 1)]

Btw: datetime.datetime stores an hour anyway, so keeping it in a separate column seems to be a waste of memory.
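
If the hour were encoded in the timestamp itself rather than kept as a separate column, it could be recovered the same way. A minimal sketch using the built-in hour function (with the sample data above every timestamp has hour 0, so this is only illustrative):

from pyspark.sql.functions import hour

# Extract the hour component directly from the timestamp column
elevDF.select(hour("date").alias("hour_from_ts")).show()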

That concludes this article on Spark DataFrame TimestampType and how to get the Year, Month, and Day values from a field; we hope the recommended answer is helpful.
