Hadoop: Failed to connect to ResourceManager

This article explains how to resolve the Hadoop error "Failed to connect to ResourceManager". The answer below may be a useful reference if you run into the same problem.

Problem description

After installing Hadoop 2.2 and trying to launch the pipes example, I got the following error (the same error shows up after trying to launch hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount someFile.txt /out):

/usr/local/hadoop$ hadoop pipes -Dhadoop.pipes.java.recordreader=true -Dhadoop.pipes.java.recordwriter=true -input someFile.txt -output /out -program bin/wordcount
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
13/12/14 20:12:07 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:08 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:09 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:10 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:11 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:12 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:13 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
13/12/14 20:12:14 INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
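Before digging into configuration, it can help to confirm whether anything is actually listening on the ResourceManager's client port. This is a minimal, hypothetical sketch (a plain TCP probe, not part of Hadoop); note that the `0.0.0.0` in the log is a bind address, so you probe `localhost` instead:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# The client in the log retries 0.0.0.0:8032; probe localhost instead.
print(port_open("127.0.0.1", 8032))
```

If this prints `False` while the ResourceManager is supposedly running, the daemon either failed to start (check `jps` and the RM log) or is bound to a different address than the client expects.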

My yarn-site.xml:

<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- Site specific YARN configuration properties -->
</configuration>

core-site.xml:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
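As an aside, `fs.default.name` is deprecated in Hadoop 2.x in favor of `fs.defaultFS`; the old key still works but triggers a deprecation warning. The equivalent modern form would be:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
```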

mapred-site.xml:

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
</configuration>

hdfs-site.xml:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
</configuration>

I've confirmed that IPv6 is disabled, as it should be. Maybe my /etc/hosts is not correct?

/etc/hosts:

fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.0.1 localhost.localdomain localhost hduser
# Auto-generated hostname. Please do not remove this comment.
79.98.30.76 356114.s.dedikuoti.lt  356114
::1             localhost ip6-localhost ip6-loopback

Solution

The problem connecting to the ResourceManager was that I needed to add a few properties to yarn-site.xml:

<property>
<name>yarn.resourcemanager.address</name>
<value>127.0.0.1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>127.0.0.1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>127.0.0.1:8031</value>
</property>
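Alternatively, rather than spelling out each address, Hadoop 2.x lets you set the single property `yarn.resourcemanager.hostname`; the individual addresses default to that hostname with their standard ports (8032, 8030, 8031, and so on) appended:

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>127.0.0.1</value>
</property>
```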

My jobs still aren't running, but the connection now succeeds.

That concludes this article on the Hadoop "Failed to connect to ResourceManager" error; I hope the answer above helps.
