Problem
I want to convert a DataFrame to a Dataset using different case classes. Currently my code looks like this:
case class Views(views: Double)
case class Clicks(clicks: Double)

// Note: `as` requires an implicit Encoder in scope,
// normally provided by `import spark.implicits._`.
def convertViewsDFtoDS(df: DataFrame): Dataset[Views] = df.as[Views]
def convertClicksDFtoDS(df: DataFrame): Dataset[Clicks] = df.as[Clicks]
So my question is: "Is there any way I can use one general function for this, passing the case class as an extra parameter?"
Answer
It seems a bit superfluous (the `as` method already does exactly what you want), but you can write:
import org.apache.spark.sql.{Encoder, Dataset, DataFrame}
def convertTo[T : Encoder](df: DataFrame): Dataset[T] = df.as[T]
or
def convertTo[T](df: DataFrame)(implicit enc: Encoder[T]): Dataset[T] = df.as[T]
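With either definition in place, the two original helpers collapse into a single call per target type. A minimal usage sketch, assuming a local `SparkSession` named `spark` (the session and the sample data are illustrative, not from the question):

```scala
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

val spark = SparkSession.builder()
  .appName("convertTo-demo")
  .master("local[*]")
  .getOrCreate()
// Brings implicit Encoder derivation for case classes into scope.
import spark.implicits._

val viewsDF: DataFrame = Seq(1.5, 2.5).toDF("views")

// The context bound `T : Encoder` is satisfied by the implicits above.
val views: Dataset[Views] = convertTo[Views](viewsDF)
val clicks: Dataset[Clicks] = convertTo[Clicks](viewsDF.toDF("clicks"))
```

Note that `as` only checks that the column names and types line up with the case class fields; it does not copy data.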
Both methods are equivalent and express exactly the same thing: the existence of an implicit `Encoder` for the type `T`.
If you want to avoid the implicit parameter, you can pass an explicit `Encoder` all the way down:
def convertTo[T](df: DataFrame, enc: Encoder[T]): Dataset[T] = df.as[T](enc)

// encoderFor comes from the internal package org.apache.spark.sql.catalyst.encoders
// and itself resolves an implicit Encoder in scope.
convertTo(df, encoderFor[Clicks])
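Since `encoderFor` lives in an internal Catalyst package, a sketch using the public `Encoders` factory instead may be preferable (same `convertTo(df, enc)` signature as above; `df` is assumed to be the DataFrame from the question):

```scala
import org.apache.spark.sql.{DataFrame, Dataset, Encoder, Encoders}

def convertTo[T](df: DataFrame, enc: Encoder[T]): Dataset[T] = df.as[T](enc)

// Encoders.product derives an encoder for any case class (Product type)
// without requiring any implicits in scope at the call site.
val clicks: Dataset[Clicks] = convertTo(df, Encoders.product[Clicks])
```

This keeps the call sites fully explicit, at the cost of spelling out the encoder at every call.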