I am running Sqoop on AWS EMR, trying to copy a table of roughly 10 GB from MySQL to HDFS.
I get the following exception:
15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000000_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 3
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000005_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 2
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
15/07/06 12:19:08 INFO mapreduce.Job: map 0% reduce 0%
15/07/06 12:19:20 INFO mapreduce.Job: map 25% reduce 0%
15/07/06 12:19:22 INFO mapreduce.Job: map 38% reduce 0%
15/07/06 12:19:23 INFO mapreduce.Job: map 50% reduce 0%
15/07/06 12:19:24 INFO mapreduce.Job: map 75% reduce 0%
15/07/06 12:19:25 INFO mapreduce.Job: map 100% reduce 0%
15/07/06 12:23:11 INFO mapreduce.Job: Job job_1435664372091_0048 failed with state FAILED due to: Task failed task_1435664372091_0048_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/07/06 12:23:11 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=28
Launched map tasks=28
Other local map tasks=28
Total time spent by all maps in occupied slots (ms)=34760760
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=5793460
Total vcore-seconds taken by all map tasks=5793460
Total megabyte-seconds taken by all map tasks=8342582400
15/07/06 12:23:11 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 829.8697 seconds (0 bytes/sec)
15/07/06 12:23:11 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Retrieved 0 records.
15/07/06 12:23:11 ERROR tool.ImportTool: Error during import: Import job failed!
If I run without the '--direct' option, I get a communications exception like the one described in https://issues.cloudera.org/browse/SQOOP-186.
I have already set the MySQL 'net_write_timeout' and 'net_read_timeout' variables to 6000.
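For reference, a minimal sketch of how those server-side timeouts can be set and verified from the shell, assuming a MySQL account with privileges to change global variables; the host placeholder matches the one in my command below:

# Raise the server-side network timeouts (values in seconds)
mysql -h <remote ip> -u tuser -p -e "
  SET GLOBAL net_write_timeout = 6000;
  SET GLOBAL net_read_timeout  = 6000;"

# Confirm the values actually took effect
mysql -h <remote ip> -u tuser -p -e "SHOW VARIABLES LIKE 'net_%_timeout';"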
My Sqoop command looks like this:
sqoop import -D mapred.task.timeout=0 \
  --fields-terminated-by '\t' \
  --escaped-by '\\' \
  --optionally-enclosed-by '\"' \
  --bindir ./ \
  --connect jdbc:mysql://<remote ip>/<mysql db> \
  --username tuser --password tuser \
  --table table1 \
  --target-dir=/base/table1 \
  --split-by id \
  -m 8 \
  --direct
How do I resolve this? Am I missing something?
I have also filed a Sqoop JIRA: https://issues.apache.org/jira/browse/SQOOP-2411
Best Answer
I have seen this error occur when Sqoop is unable to split the key space evenly and one of the map tasks ends up processing zero rows of data. Possible workarounds are to change the number of mappers (-m / --num-mappers) or to specify a different key column whose values are evenly distributed (--split-by), as sketched below.
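As a rough sketch of that advice: first inspect how evenly the current split column is distributed, then retry with a different mapper count. The mapper count of 4 below is an illustrative choice, not a known fix:

# A heavily skewed id range (large MIN/MAX gap, rows clustered in one
# region) means some map tasks receive zero rows.
mysql -h <remote ip> -u tuser -p <mysql db> \
  -e "SELECT MIN(id), MAX(id), COUNT(*) FROM table1;"

# Retry with fewer mappers, or point --split-by at a column whose
# values are evenly distributed across its range.
sqoop import -D mapred.task.timeout=0 \
  --connect jdbc:mysql://<remote ip>/<mysql db> \
  --username tuser --password tuser \
  --table table1 \
  --target-dir=/base/table1 \
  --split-by id \
  --num-mappers 4 \
  --direct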