Error starting the DataNode while setting up a single-node cluster on my computer

2013-02-18 20:21:32,300 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = somnath-laptop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2013-02-18 20:21:32,593 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-02-18 20:21:32,618 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-02-18 20:21:32,620 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-02-18 20:21:32,620 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-02-18 20:21:33,052 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-02-18 20:21:33,056 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-02-18 20:21:37,890 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at sun.proxy.$Proxy5.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:370)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:429)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:331)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:296)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:356)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
    at sun.nio.ch.IOUtil.read(IOUtil.java:224)

Any ideas on how to fix this error?

Best Answer

OK, the problem is solved.

Since I had been using the single-node cluster through a network proxy, I had earlier added the following property to $HADOOP_HOME/conf/mapred-site.xml so that communication between the Hadoop daemons would bypass the proxy server.

This time, however, I was connecting over a direct Internet connection, so I had to comment out the property I had added to mapred-site.xml.

Here is the property in mapred-site.xml that I commented out:

<!--
<property>
  <name>hadoop.rpc.socket.factory.class.default</name>
  <value>org.apache.hadoop.net.StandardSocketFactory</value>
  <final>true</final>
  <description>
    Prevent proxy settings set up by clients in their job configs from affecting our connectivity.
  </description>
</property>
-->
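
For context, the localhost/127.0.0.1:54310 address in the error above is the NameNode RPC endpoint that the DataNode is trying to register with. In a Hadoop 1.x single-node setup this address comes from fs.default.name in core-site.xml; the snippet below is only an illustrative sketch (the host and port are assumptions, chosen to match the address shown in the log), not taken from my actual configuration:

<!-- Minimal illustrative $HADOOP_HOME/conf/core-site.xml for a Hadoop 1.x
     single-node cluster; host and port are assumptions and must match the
     address the DataNode tries to reach (localhost:54310 in the log above). -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>NameNode RPC address that DataNodes and HDFS clients connect to.</description>
  </property>
</configuration>

After editing any of these configuration files, the daemons need to be restarted (for example with stop-all.sh followed by start-all.sh in Hadoop 1.x) for the changes to take effect.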

A similar question about "hadoop - ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception" can be found on Stack Overflow: https://stackoverflow.com/questions/14948801/
