This article explains how to call DataFrameFunctions.createCassandraTable from Java, based on the question and recommended answer below.
Problem description
How can I call this function from Java? Or do I need a wrapper in Scala?
package com.datastax.spark.connector

class DataFrameFunctions(dataFrame: DataFrame) extends Serializable {
  ...
  def createCassandraTable(
      keyspaceName: String,
      tableName: String,
      partitionKeyColumns: Option[Seq[String]] = None,
      clusteringKeyColumns: Option[Seq[String]] = None)(
      implicit
      connector: CassandraConnector = CassandraConnector(sparkContext.getConf)): Unit = {
  ...
Recommended answer
I used the following code:
import java.util.Arrays;
import scala.Option;
import scala.Some;
import scala.collection.JavaConversions;
import scala.collection.Seq;
DataFrameFunctions frameFunctions = new DataFrameFunctions(dfTemp2);
Seq<String> argumentsSeq1 = JavaConversions.asScalaBuffer(Arrays.asList("CategoryName")).seq();
Option<Seq<String>> some1 = new Some<Seq<String>>(argumentsSeq1);
Seq<String> argumentsSeq2 = JavaConversions.asScalaBuffer(Arrays.asList("DealType")).seq();
Option<Seq<String>> some2 = new Some<Seq<String>>(argumentsSeq2);
// The implicit CassandraConnector is passed as an explicit fifth argument from Java
frameFunctions.createCassandraTable("coupons", "IdealFeeds", some1, some2, connector);
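
The answer assumes that dfTemp2 (a DataFrame whose schema defines the new table) and connector are already in scope. Below is a minimal sketch of how they might be obtained with Spark 1.x and the Spark Cassandra Connector; the app name, connection host, and the coupons.json source are placeholders, not taken from the original answer:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import com.datastax.spark.connector.cql.CassandraConnector;

// Assumption: a Cassandra node reachable on localhost
SparkConf conf = new SparkConf()
        .setAppName("CreateCassandraTableExample")
        .set("spark.cassandra.connection.host", "127.0.0.1");
JavaSparkContext jsc = new JavaSparkContext(conf);
SQLContext sqlContext = new SQLContext(jsc);

// Any DataFrame with the desired schema works; the JSON source here is hypothetical
DataFrame dfTemp2 = sqlContext.read().json("coupons.json");

// Builds the connector from the Spark configuration, mirroring the default value
// of the implicit parameter in the Scala signature
CassandraConnector connector = CassandraConnector.apply(jsc.getConf());

Note that Scala default arguments are not applied automatically at a Java call site, so both Option arguments must be supplied; to keep a default, pass scala.Option.empty() instead of a Some.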
This concludes the article on how to call DataFrameFunctions.createCassandraTable from Java; hopefully the recommended answer above is helpful.