There is a similar question here: How to add a schema to a Dataset in Spark?
However, the problem I am facing is that I already have a predefined Dataset<Obj1>, and I want to define a schema that matches its data members. The end goal is to be able to perform a join between two Java objects.
Sample code:
Dataset<Row> rowDataset = spark.getSpark().sqlContext().createDataFrame(rowRDD, schema).toDF();
Dataset<MyObj> objResult = rowDataset.map((MapFunction<Row, MyObj>) row ->
        new MyObj(
                row.getInt(row.fieldIndex("field1")),
                row.isNullAt(row.fieldIndex("field2")) ? "" : row.getString(row.fieldIndex("field2")),
                row.isNullAt(row.fieldIndex("field3")) ? "" : row.getString(row.fieldIndex("field3")),
                row.isNullAt(row.fieldIndex("field4")) ? "" : row.getString(row.fieldIndex("field4"))
        ), Encoders.javaSerialization(MyObj.class));
If I print the schema of the Row Dataset, I get the schema as expected:
rowDataset.printSchema();
root
|-- field1: integer (nullable = false)
|-- field2: string (nullable = false)
|-- field3: string (nullable = false)
|-- field4: string (nullable = false)
But if I print the object Dataset, the actual schema is lost:
objResult.printSchema();
root
|-- value: binary (nullable = true)
The question is: how do I apply a schema to a Dataset<MyObj>?

Best Answer
Below is a snippet I tried, and Spark behaves as expected, so the root cause of your problem is not the map function itself but the encoder: Encoders.javaSerialization (like Encoders.kryo) serializes the whole object into a single binary column, while Encoders.bean derives one column per bean property and therefore preserves the schema.
SparkSession session = SparkSession.builder().config(conf).getOrCreate();
Dataset<Row> ds = session.read().text("<some path>");

// A bean encoder derives the schema from the bean's getters/setters,
// one column per property, instead of one binary blob.
Encoder<Employee> employeeEncode = Encoders.bean(Employee.class);

ds.map(new MapFunction<Row, Employee>() {
    @Override
    public Employee call(Row value) throws Exception {
        return new Employee(value.getString(0).split(","));
    }
}, employeeEncode).printSchema();
Output:
root
|-- age: integer (nullable = true)
|-- name: string (nullable = true)
// Employee bean
public class Employee {

    public String name;
    public Integer age;

    public Employee() {
    }

    public Employee(String[] args) {
        this.name = args[0];
        this.age = Integer.parseInt(args[1]);
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }
}
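For completeness, here is how the snippet from the question could be fixed. This is a minimal sketch under the assumption that MyObj is reworked as a JavaBean (public no-arg constructor plus getters/setters for field1..field4), which Encoders.bean requires; the mapping logic itself is unchanged.

// Assumption: MyObj follows JavaBean conventions so that
// Encoders.bean can derive one column per property.
Encoder<MyObj> myObjEncoder = Encoders.bean(MyObj.class);

Dataset<MyObj> objResult = rowDataset.map((MapFunction<Row, MyObj>) row ->
        new MyObj(
                row.getInt(row.fieldIndex("field1")),
                row.isNullAt(row.fieldIndex("field2")) ? "" : row.getString(row.fieldIndex("field2")),
                row.isNullAt(row.fieldIndex("field3")) ? "" : row.getString(row.fieldIndex("field3")),
                row.isNullAt(row.fieldIndex("field4")) ? "" : row.getString(row.fieldIndex("field4"))
        ), myObjEncoder);

// printSchema() now shows field1..field4 instead of a single binary column,
// and the typed Dataset can be joined on named columns, e.g. with a
// hypothetical second Dataset "other":
// objResult.join(other, objResult.col("field1").equalTo(other.col("field1")));
objResult.printSchema();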