This article explains how to handle a Hadoop mapper failing with "Container killed by the ApplicationMaster". The answer below should be a useful reference for anyone who runs into the same problem.

Problem Description



I am trying to execute a MapReduce program on Hadoop.

When I submit my job to the Hadoop single-node cluster, the job gets created but then fails with the message

"Container killed by the ApplicationMaster"

The input used is 10 MB in size.

When I ran the same script on a 400 KB input file, it succeeded, but it fails for the 10 MB input file.

The complete log that is displayed in my terminal is as follows.

    15/05/29 09:52:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Submitting job on the cluster...
15/05/29 09:52:17 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/05/29 09:52:18 INFO input.FileInputFormat: Total input paths to process : 1
15/05/29 09:52:18 INFO mapreduce.JobSubmitter: number of splits:1
15/05/29 09:52:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1432910768528_0001
15/05/29 09:52:19 INFO impl.YarnClientImpl: Submitted application application_1432910768528_0001
15/05/29 09:52:19 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1432910768528_0001/
15/05/29 09:52:19 INFO mapreduce.Job: Running job: job_1432910768528_0001
15/05/29 09:52:29 INFO mapreduce.Job: Job job_1432910768528_0001 running in uber mode : false
15/05/29 09:52:29 INFO mapreduce.Job:  map 0% reduce 0%
15/05/29 09:52:41 INFO mapreduce.Job:  map 100% reduce 0%
15/05/29 10:03:01 INFO mapreduce.Job:  map 0% reduce 0%
15/05/29 10:03:01 INFO mapreduce.Job: Task Id : attempt_1432910768528_0001_m_000000_0, Status : FAILED
AttemptID:attempt_1432910768528_0001_m_000000_0 Timed out after 600 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

My mapper triggers another program, which then processes my input file. The program triggered by the mapper usually consumes a lot of memory.

Please help me in this regard.
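
A side note on the log above: the failed attempt reports "Timed out after 600 secs", which matches the default MapReduce task timeout (mapreduce.task.timeout, 600000 ms). If a mapper blocks on an external program without writing output or reporting status, the ApplicationMaster kills the attempt for inactivity no matter how much memory is available. The sketch below is a minimal, hypothetical example (the external command name process_input is made up) of a mapper that keeps the task alive by calling context.progress() while the child process runs:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ExternalToolMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical external command; substitute the real program here.
        Process proc = new ProcessBuilder("process_input", value.toString())
                .inheritIO()
                .start();

        // Poll instead of blocking in waitFor(), reporting progress once a
        // second so the ApplicationMaster does not time the attempt out.
        while (proc.isAlive()) {
            context.progress();   // signal the framework that we are alive
            Thread.sleep(1000);
        }

        if (proc.exitValue() != 0) {
            throw new IOException("External tool failed with exit code " + proc.exitValue());
        }
        context.write(value, NullWritable.get());
    }
}

If raising the timeout is preferable to polling, mapreduce.task.timeout can be increased (or set to 0 to disable the check), though regular progress reporting is the more robust fix.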

Solution

Include the properties below in yarn-site.xml and restart the VM. They disable YARN's virtual-memory check and raise the virtual-to-physical memory ratio, so the NodeManager no longer kills containers whose virtual memory footprint exceeds the default limit:

<property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
   <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>4</value>
   <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
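
Since the asker notes that the program spawned by the mapper consumes a lot of memory, a complementary option is to give each map container more physical memory; YARN charges the whole process tree, including child processes, against the container's allocation. The values below are illustrative only (they are not part of the original answer) and go in mapred-site.xml:

<property>
   <name>mapreduce.map.memory.mb</name>
   <value>2048</value>
   <description>Physical memory, in MB, allocated to each map container</description>
</property>

<property>
   <name>mapreduce.map.java.opts</name>
   <value>-Xmx1638m</value>
   <description>Map task JVM heap; conventionally about 80% of mapreduce.map.memory.mb, leaving headroom for child processes</description>
</property>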

This concludes the article on the Hadoop mapper failing with "Container killed by the ApplicationMaster". We hope the recommended answer is helpful.
