This article describes how to handle a Hadoop Mapper failing with "Container killed by the ApplicationMaster"; it may be a useful reference for anyone hitting the same problem.

Problem Description

I am trying to execute a MapReduce program on Hadoop.

When I submit my job to the Hadoop single-node cluster, the job is created but fails with the message:

"Container killed by the ApplicationMaster"

The input used is 10 MB in size.

When I ran the same script on a 400 KB input file, it succeeded, but it fails for the 10 MB input file.

The complete log displayed in my terminal is as follows:

15/05/29 09:52:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Submitting job on the cluster...
15/05/29 09:52:17 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/05/29 09:52:18 INFO input.FileInputFormat: Total input paths to process : 1
15/05/29 09:52:18 INFO mapreduce.JobSubmitter: number of splits:1
15/05/29 09:52:19 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1432910768528_0001
15/05/29 09:52:19 INFO impl.YarnClientImpl: Submitted application application_1432910768528_0001
15/05/29 09:52:19 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1432910768528_0001/
15/05/29 09:52:19 INFO mapreduce.Job: Running job: job_1432910768528_0001
15/05/29 09:52:29 INFO mapreduce.Job: Job job_1432910768528_0001 running in uber mode : false
15/05/29 09:52:29 INFO mapreduce.Job:  map 0% reduce 0%
15/05/29 09:52:41 INFO mapreduce.Job:  map 100% reduce 0%
15/05/29 10:03:01 INFO mapreduce.Job:  map 0% reduce 0%
15/05/29 10:03:01 INFO mapreduce.Job: Task Id : attempt_1432910768528_0001_m_000000_0, Status : FAILED
AttemptID:attempt_1432910768528_0001_m_000000_0 Timed out after 600 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
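
The key lines here are the 600-second task timeout and exit code 143, which is 128 + 15 (SIGTERM): the attempt stopped reporting progress, so the ApplicationMaster asked YARN to terminate the container.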

My mapper triggers another program that processes the input file. The program triggered by the mapper usually consumes a lot of memory.
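
For reference, the pattern described above looks roughly like the sketch below. This is a minimal, hypothetical example, not taken from the original question: the external binary name process_input, the key/value types, and the output key are all assumptions. Calling context.progress() while the external program runs keeps the attempt from hitting the 600-second task timeout shown in the log.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ExternalProgramMapper
        extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Hypothetical external binary; replace with the real program
        // the mapper is meant to trigger.
        ProcessBuilder pb = new ProcessBuilder("process_input", value.toString());
        pb.redirectErrorStream(true);
        Process proc = pb.start();

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(proc.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                context.write(new Text("out"), new Text(line));
                // Report progress so the ApplicationMaster does not time
                // the attempt out ("Timed out after 600 secs" in the log).
                context.progress();
            }
        }

        int exitCode = proc.waitFor();
        if (exitCode != 0) {
            throw new IOException("External program failed with exit code " + exitCode);
        }
    }
}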

So please help me in this regard.

Recommended Answer

Include the properties below in yarn-site.xml and restart the VM:

<property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
   <description>Whether virtual memory limits will be enforced for containers</description>
</property>

<property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>4</value>
   <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
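
The first property turns off YARN's virtual-memory check, so the NodeManager no longer kills containers whose virtual memory exceeds the configured limit. The second raises that limit: by default yarn.nodemanager.vmem-pmem-ratio is 2.1, so a container allotted 1 GB of physical memory is killed once its virtual memory passes roughly 2.1 GB; setting the ratio to 4 allows up to 4 GB, giving the memory-hungry external program more headroom.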

