I set up a small Hadoop cluster with 3 machines:

  • Machine (Hadoop1) runs both the NameNode and the JobTracker
  • Machine (Hadoop2) runs the SecondaryNameNode
  • Machine (Hadoop3) runs the DataNode and the TaskTracker

  • When I check the log files, everything looks fine.
    However, when I try to check the SecondaryNameNode's status by browsing to localhost:50090 on machine Hadoop2, it shows:
    Unable to connect ... can't establish a connection to the server at localhost:50090.


    Has anyone run into this problem?

    Contents of hdfs-site.xml on the SNN:
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>

      <property>
        <name>dfs.http.address</name>
        <value>Hadoop1:50070</value>
      </property>

      <property>
        <name>dfs.secondary.http.address</name>
        <value>Hadoop2:50090</value>
      </property>
    </configuration>
    

    Here is part of the SNN's run log:
    2013-04-23 19:47:00,820 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2013-04-23 19:47:00,987 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file fsimage size 654 bytes.
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Downloaded file edits size 4 bytes.
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 17.77875 MB
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
    2013-04-23 19:47:00,989 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
    2013-04-23 19:47:00,998 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
    2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
    2013-04-23 19:47:00,999 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 7
    2013-04-23 19:47:01,000 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
    2013-04-23 19:47:01,000 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/namesecondary/current/edits of size 4 edits # 0 loaded in 0 seconds.
    2013-04-23 19:47:01,001 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2013-04-23 19:47:01,049 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
    2013-04-23 19:47:01,334 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 654 saved in 0 seconds.
    2013-04-23 19:47:01,570 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Posted URL Hadoop1:50070putimage=1&port=50090&machine=Hadoop3&token=-32:145975115:0:1366717621000:1366714020860
    2013-04-23 19:47:01,771 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 654
    

    Best answer

    Try assigning a value to dfs.secondary.http.address in the SNN's hdfs-site.xml. Also, I assume there is no firewall enabled between your machines, correct? It would also help if you could show the logs; I have seen users enter the wrong SNN port number, which makes their logs look different and leads to connection errors.
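
    A minimal sketch of that suggestion (not a quote from the answer), assuming Hadoop 1.x property names and that the SNN web UI binds to whatever address dfs.secondary.http.address specifies:

    <!-- hdfs-site.xml on the SecondaryNameNode (Hadoop2); sketch only.
         Binding to 0.0.0.0 makes the status page listen on every interface,
         so both http://localhost:50090 and http://Hadoop2:50090 should
         respond, as long as no firewall blocks port 50090. -->
    <property>
      <name>dfs.secondary.http.address</name>
      <value>0.0.0.0:50090</value>
    </property>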

    Regarding "hadoop - Unable to open the SecondaryNameNode's web UI status page", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/16160686/
