I am trying to load data into a MySQL database with a MapReduce job, but I am running into a ClassCastException. This is the process I am using:
I first create a DBOutputWritable class that implements the Writable and DBWritable interfaces.
Then I use my reduce job to write the data into the database, but when I run the job it fails with the following error:
java.lang.ClassCastException: com.amalwa.hadoop.DataBaseLoadMapReduce.DBOutputWritable cannot be cast to org.apache.hadoop.mapreduce.lib.db.DBWritable
at org.apache.hadoop.mapreduce.lib.db.DBOutputFormat$DBRecordWriter.write(DBOutputFormat.java:66)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:601)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at com.amalwa.hadoop.DataBaseLoadMapReduce.DBMapReduce$DBReducer.reduce(DBMapReduce.java:58)
at com.amalwa.hadoop.DataBaseLoadMapReduce.DBMapReduce$DBReducer.reduce(DBMapReduce.java:53)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:663)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:426)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
I am having a hard time figuring out why there is a ClassCastException when my class implements the interface required for writing to a database from a MapReduce job. I am implementing all the required methods.
Thanks.
DBOutputWritable:
package com.amalwa.hadoop.DataBaseLoadMapReduce;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DBOutputWritable implements Writable, DBWritable {
    private String keyValue;
    private String response;

    public DBOutputWritable(String keyValue, String response) {
        this.keyValue = keyValue;
        this.response = response;
    }

    public void readFields(DataInput resultSet) throws IOException {
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        keyValue = resultSet.getString(1);
        response = resultSet.getString(2);
    }

    public void write(PreparedStatement preparedStatement) throws SQLException {
        preparedStatement.setString(1, keyValue);
        preparedStatement.setString(2, response);
    }

    public void write(DataOutput dataOutput) throws IOException {
    }
}
Reducer:
public static class DBReducer extends Reducer<Text, Text, DBOutputWritable, NullWritable> {
    public void reduce(Text requestKey, Iterable<Text> response, Context context) {
        for (Text responseSet : response) {
            try {
                context.write(new DBOutputWritable(requestKey.toString(), responseSet.toString()), NullWritable.get());
            } catch (IOException e) {
                System.err.println(e.getMessage());
            } catch (InterruptedException e) {
                System.err.println(e.getMessage());
            }
        }
    }
}
Mapper:
public static class DBMapper extends Mapper<LongWritable, Text, Text, Text> {
    public void map(LongWritable key, Text value, Context context) throws IOException {
        String tweetInfo = value.toString();
        String[] myTweetData = tweetInfo.split(",", 2);
        String requestKey = myTweetData[0];
        String response = myTweetData[1];
        try {
            context.write(new Text(requestKey), new Text(response));
        } catch (InterruptedException e) {
            System.err.println(e.getMessage());
        }
    }
}
Main class:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver", "jdbc:mysql://ec2-54-152-254-194.compute-1.amazonaws.com/TWEETS", "user", "password");
    Job job = new Job(conf);
    job.setJarByClass(DBMapReduce.class);
    job.setMapperClass(DBMapper.class);
    job.setReducerClass(DBReducer.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);
    job.setOutputKeyClass(DBOutputWritable.class);
    job.setOutputValueClass(NullWritable.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(DBOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[1]));
    DBOutputFormat.setOutput(job, "TWEET_INFO", new String[] { "REQUESTKEY", "TWEET_DETAILS" });
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Best answer:
It looks like you are mixing the old (org.apache.hadoop.mapred.*) and new (org.apache.hadoop.mapreduce.*) MapReduce APIs, and that is causing the conflict. I suspect your DBReducer class extends the Reducer class from the new API, while your DBOutputWritable implements DBWritable from the old API.
You should pick just one of these APIs throughout your implementation, which means every MapReduce type you import starts with the same package prefix.
Note that you typically implement the MapReduce interfaces when using the old API, and extend the MapReduce base classes when using the new API.
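For instance, here is a minimal sketch of the corrected class, assuming you standardize on the new API; the only change from the code in the question is the DBWritable import, which now matches the new-API Reducer and DBOutputFormat used in the job setup:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
// New API: same org.apache.hadoop.mapreduce.* prefix as the rest of the job.
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBOutputWritable implements Writable, DBWritable {
    private String keyValue;
    private String response;

    public DBOutputWritable(String keyValue, String response) {
        this.keyValue = keyValue;
        this.response = response;
    }

    // Binds the two columns (REQUESTKEY, TWEET_DETAILS) configured via DBOutputFormat.setOutput.
    public void write(PreparedStatement preparedStatement) throws SQLException {
        preparedStatement.setString(1, keyValue);
        preparedStatement.setString(2, response);
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        keyValue = resultSet.getString(1);
        response = resultSet.getString(2);
    }

    // Not exercised on the reduce-output path, so these can stay empty as in the question.
    public void write(DataOutput dataOutput) throws IOException {
    }

    public void readFields(DataInput dataInput) throws IOException {
    }
}

With that single import change, the cast at DBOutputFormat.java:66 in the stack trace should succeed, because the key now implements the DBWritable type that the new-API DBOutputFormat expects.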
On "java - Class cast exception in a MapReduce job while writing data to a MySQL database", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/29134892/