This article explains how to convert an Array[Double] column to a string or to two separate columns using Spark DataFrames in Scala; it should be a useful reference for anyone facing the same problem.

Problem Description

I hit a snag earlier trying to do some transformations within Spark DataFrames.

Let's say I have a DataFrame with this schema:

root
|-- coordinates: array (nullable = true)
|    |-- element: double (containsNull = true)
|-- userid: string (nullable = true)
|-- pubuid: string (nullable = true)

I would like to get rid of the array(double) in coordinates, and instead get a DF with rows that look like

"coordinates(0),coordinates(1)", userid, pubuid 
                   or something like 
 coordinates(0), coordinates(1), userid, pubuid . 

With Scala I could do

coordinates.mkString(",")

but in DataFrames coordinates resolves to a java.util.List.

So far I've worked around the issue by reading into an RDD, transforming it, and then building a new DF, as sketched below. But I was wondering if there's a more elegant way to do that with DataFrames.
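
For reference, a minimal sketch of that RDD round-trip (assuming a SparkSession named spark is in scope and the schema shown above; the column positions are an assumption):

import spark.implicits._

// Sketch of the workaround: pull each row apart by position,
// then rebuild a DataFrame with flat columns
val flattened = df.rdd.map { row =>
  val coords = row.getSeq[Double](0)  // coordinates: array<double>
  (coords(0), coords(1), row.getString(1), row.getString(2))
}.toDF("x", "y", "userid", "pubuid")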

Thanks for the help.

Recommended Answer

You can use a UDF:

import org.apache.spark.sql.functions.{udf, lit}
import spark.implicits._  // for the $"..." column syntax (spark is the SparkSession)

// Join all array elements into a single comma-separated string column
val mkString = udf((a: Seq[Double]) => a.mkString(", "))
df.withColumn("coordinates_string", mkString($"coordinates"))

// Extract individual elements by index into their own columns
val apply = udf((a: Seq[Double], i: Int) => a(i))
df.select(
  $"*",
  apply($"coordinates", lit(0)).alias("x"),
  apply($"coordinates", lit(1)).alias("y")
)
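
A quick usage sketch of the mkString UDF (the sample data is made up; assumes spark.implicits._ is in scope):

// Hypothetical sample data matching the question's schema
val df = Seq(
  (Seq(40.7, -74.0), "u1", "p1")
).toDF("coordinates", "userid", "pubuid")

df.withColumn("coordinates_string", mkString($"coordinates")).show(false)
// coordinates_string now contains "40.7, -74.0"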

Edit:

In recent versions you can also use concat_ws:

import org.apache.spark.sql.functions.concat_ws

df.withColumn(
  "coordinates_string", concat_ws(",", $"coordinates")
)
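
One caveat worth verifying on your Spark version: concat_ws expects string inputs, so with an array<double> column you may need a cast first:

// Assumption: cast the numeric array to array<string> before joining
df.withColumn(
  "coordinates_string",
  concat_ws(",", $"coordinates".cast("array<string>"))
)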

Or simply Column.apply:

df.select($"*", $"coordinates"(0).alias("x"), $"coordinates"(1).alias("y"))

That wraps up converting an Array[Double] column to a string or two separate columns with Spark DataFrames in Scala; hopefully the recommended answer helps.
