This article explains how to deal with failures when copying files from Amazon S3 to HDFS with s3distcp. It should serve as a useful reference for readers hitting the same problem.

Problem description

I am trying to copy files from S3 to HDFS using a job flow in EMR. When I run the command below the job flow starts successfully, but an error occurs when it tries to copy the files to HDFS. Do I need to set any input file permissions?



Command:

./elastic-mapreduce --jobflow j-35D6JOYEDCELA --jar s3://us-east-1.elasticmapreduce/libs/s3distcp/1.latest/s3distcp.jar --args '--src,s3://odsh/input/,--dest,hdfs:///Users

Output

Task TASKID="task_201301310606_0001_r_000000" TASK_TYPE="REDUCE" TASK_STATUS="FAILED" FINISH_TIME="1359612576612" ERROR="java.lang.RuntimeException: Reducer task failed to copy 1 files: s3://odsh/input/GL_01112_20121019.dat etc
    at com.amazon.external.elasticmapreduce.s3distcp.CopyFilesReducer.close(CopyFilesReducer.java:70)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:538)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:429)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)

Solution

I ran into the same exception. It looks like the bug is caused by a race condition when CopyFilesReducer uses multiple CopyFilesRunable instances to download files from S3. The problem is that the threads share the same temporary directory, and each thread deletes that directory when it finishes. So when one thread completes before another, it deletes the temporary directory that the other thread is still using.
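To make the failure mode concrete, here is a small illustrative shell sketch of that kind of race. It does not use s3distcp itself; it just shows two background workers sharing one temporary directory, where the faster worker deletes the directory while the slower one still needs it:

    #!/bin/sh
    # Toy reproduction of the race described above (not s3distcp code).
    TMPDIR=$(mktemp -d)

    worker() {   # $1 = worker id, $2 = seconds the simulated download takes
      echo "data" > "$TMPDIR/part-$1"
      sleep "$2"
      cat "$TMPDIR/part-$1" > "result-$1" 2>/dev/null || echo "worker $1: temp file was deleted by another worker"
      rm -rf "$TMPDIR"   # every worker removes the *shared* directory when it finishes
    }

    worker 1 1 &   # fast worker: finishes first and deletes the shared directory
    worker 2 3 &   # slow worker: its temp file is gone by the time it needs it
    wait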



I have reported the problem to AWS. In the meantime, you can work around the bug by forcing the reducer to use a single thread: set the variable s3DistCp.copyfiles.mapper.numWorkers to 1 in your job configuration.
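For reference, one way that property might be supplied is sketched below. It assumes s3distcp accepts a Hadoop-style -D option ahead of its own arguments, which is not confirmed here; if it does not, set the property in your cluster's Hadoop configuration instead:

    # Assumption: s3distcp honors a leading -D,<property>=<value> pair in --args; verify before relying on it.
    ./elastic-mapreduce --jobflow j-35D6JOYEDCELA \
      --jar s3://us-east-1.elasticmapreduce/libs/s3distcp/1.latest/s3distcp.jar \
      --args '-D,s3DistCp.copyfiles.mapper.numWorkers=1,--src,s3://odsh/input/,--dest,hdfs:///Users'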


That concludes this article on s3distcp failing to copy files from Amazon S3 to HDFS. We hope the answer above helps, and thank you for your support!
