2.2 Starting the Services

With the parameters configured on each node in the previous sections, the next step is to format the NameNode. This operation is again performed on the hdnode1 node:

    [grid@hdnode1 ~]$ hadoop namenode -format

    13/01/30 10:14:36 INFO namenode.NameNode: STARTUP_MSG:  

    /************************************************************

    STARTUP_MSG: Starting NameNode

    STARTUP_MSG:   host = hdnode1/192.168.30.203

    STARTUP_MSG:   args = [-format]

    STARTUP_MSG:   version = 0.20.2

    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010

    ************************************************************/

    Re-format filesystem in /data2/hadoop/name ? (Y or N) y

    Format aborted in /data2/hadoop/name

    13/01/30 10:14:39 INFO namenode.NameNode: SHUTDOWN_MSG:  

    /************************************************************

    SHUTDOWN_MSG: Shutting down NameNode at hdnode1/192.168.30.203

    ************************************************************/

When running the format, my local test environment failed no matter how many times I tried. If I deleted the dfs.name.dir property from hdfs-site.xml, the format would run fine; but I really wanted to customize the NameNode's storage path, and after checking against Hadoop: The Definitive Guide, that property was written correctly.

In the end the problem was solved, and the cause was maddening. Look at the "Format aborted in /data2/hadoop/name" line in the output above. It turned out that at the "Y or N" prompt I had typed a lowercase y, and the prompt treats anything other than an uppercase Y as N, so it simply never formatted.

Re-run hadoop namenode -format:

    [grid@hdnode1 ~]$ hadoop namenode -format

    13/01/30 10:20:07 INFO namenode.NameNode: STARTUP_MSG:  

    /************************************************************

    STARTUP_MSG: Starting NameNode

    STARTUP_MSG:   host = hdnode1/192.168.30.203

    STARTUP_MSG:   args = [-format]

    STARTUP_MSG:   version = 0.20.2

    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010

    ************************************************************/

    Re-format filesystem in /data2/hadoop/name ? (Y or N)Y

    13/01/30 10:20:08 INFO namenode.FSNamesystem: fsOwner=grid,grid

    13/01/30 10:20:08 INFO namenode.FSNamesystem: supergroup=supergroup

    13/01/30 10:20:08 INFO namenode.FSNamesystem: isPermissionEnabled=true

    13/01/30 10:20:08 INFO common.Storage: Image file of size 94 saved in 0 seconds.

    13/01/30 10:20:08 INFO common.Storage: Storage directory /data2/hadoop/name has been successfully formatted.

    13/01/30 10:20:08 INFO namenode.NameNode: SHUTDOWN_MSG:  

    /************************************************************

    SHUTDOWN_MSG: Shutting down NameNode at hdnode1/192.168.30.203

    ************************************************************/

Note the uppercase Y at the prompt this time; the format now completes normally.
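The gotcha above boils down to a case-sensitive string comparison. Here is a minimal sketch of how such a prompt behaves (my own illustration, not Hadoop's actual code):

```shell
# Sketch of a case-sensitive confirmation prompt: only a literal uppercase "Y"
# is taken as consent; anything else (including lowercase "y") aborts.
confirm_format() {
  read -r answer
  if [ "$answer" = "Y" ]; then
    echo "formatting"
  else
    echo "Format aborted"
  fi
}

echo "y" | confirm_format    # lowercase -> "Format aborted"
echo "Y" | confirm_format    # uppercase -> "formatting"
```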

Now for the exciting part: start the Hadoop cluster and see what the legendary Hadoop is really about.

Just run the start-all.sh script:

    [grid@hdnode1 hadoop]$ start-all.sh  

    starting namenode, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-namenode-hdnode1.out

    hdnode3: starting datanode, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-datanode-hdnode3.out

    hdnode2: starting datanode, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-datanode-hdnode2.out

    hdnode1: starting secondarynamenode, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-secondarynamenode-hdnode1.out

    starting jobtracker, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-jobtracker-hdnode1.out

    hdnode3: starting tasktracker, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-tasktracker-hdnode3.out

    hdnode2: starting tasktracker, logging to /usr/local/hadoop-0.20.2/bin/../logs/hadoop-grid-tasktracker-hdnode2.out

What does this script do? If you look at its contents, you will find that it simply calls two other scripts: start-dfs.sh and start-mapred.sh. And if you read the screen output carefully, the log lines already tell the whole story, even without reading the script.

start-all.sh first starts the NameNode, then starts one DataNode on each node listed in conf/slaves and one SecondaryNameNode on each node listed in conf/masters; at that point start-dfs.sh's work is done. The start-mapred.sh that runs next behaves similarly: it starts a JobTracker on the local machine and one TaskTracker on each node listed in conf/slaves.
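In outline, then, start-all.sh amounts to the following (a paraphrase, not the verbatim 0.20.2 script):

```shell
# Paraphrase of what start-all.sh does (not the literal script text):
bin/start-dfs.sh     # NameNode locally; DataNodes on conf/slaves; SecondaryNameNode on conf/masters
bin/start-mapred.sh  # JobTracker locally; TaskTrackers on conf/slaves
```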

Tip: every start has a matching stop. To shut the services down, use the corresponding stop-mapred.sh, stop-dfs.sh, and stop-all.sh scripts.

Verify the daemons on the master node (it has a NameNode but no DataNode):

    [grid@hdnode1 ~]$ jps

    7727 Jps

    6588 JobTracker

    6404 NameNode

    6524 SecondaryNameNode

Verify the daemons on a slave node (it has a DataNode but no NameNode):

    [grid@hdnode2 ~]$ jps

    9581 Jps

    9337 DataNode

    9409 TaskTracker

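Rather than logging in to each node one by one, the same check can be run in a single pass. This is a convenience sketch; it assumes the passwordless ssh set up earlier in this walkthrough, and that jps is on the PATH for non-interactive shells on every node:

```shell
# Run jps on every node over ssh; filter out the Jps process itself.
for host in hdnode1 hdnode2 hdnode3; do
  echo "=== $host ==="
  ssh "$host" jps | grep -v Jps
done
```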
Check HDFS storage information:

    [grid@hdnode1 ~]$ hadoop dfsadmin -report

    Configured Capacity: 211376005120 (196.86 GB)

    Present Capacity: 200244994048 (186.49 GB)

    DFS Remaining: 200244920320 (186.49 GB)

    DFS Used: 73728 (72 KB)

    DFS Used%: 0%

    Under replicated blocks: 1

    Blocks with corrupt replicas: 0

    Missing blocks: 0


    -------------------------------------------------

    Datanodes available: 2 (2 total, 0 dead)


    Name: 192.168.30.204:50010

    Decommission Status : Normal

    Configured Capacity: 105688002560 (98.43 GB)

    DFS Used: 36864 (36 KB)

    Non DFS Used: 5565505536 (5.18 GB)

    DFS Remaining: 100122460160(93.25 GB)

    DFS Used%: 0%

    DFS Remaining%: 94.73%

    Last contact: Tue Feb 31 10:33:51 CST 2013



    Name: 192.168.30.205:50010

    Decommission Status : Normal

    Configured Capacity: 105688002560 (98.43 GB)

    DFS Used: 36864 (36 KB)

    Non DFS Used: 5565505536 (5.18 GB)

    DFS Remaining: 100122460160(93.25 GB)

    DFS Used%: 0%

    DFS Remaining%: 94.73%

    Last contact: Tue Feb 31 10:33:51 CST 2013

You can also get information on the services through a browser:

  • http://192.168.30.203:50030: JobTracker status;
  • http://192.168.30.203:50070: NameNode status and the HDFS file system browser;
  • http://192.168.30.203:50090: SecondaryNameNode status.
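A quick way to confirm from the command line that the web UIs are up (assuming curl is installed and the cluster from this walkthrough is running at the addresses above):

```shell
# Probe each web UI port; an HTTP 200 means the page is being served.
for port in 50030 50070 50090; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://192.168.30.203:$port/")
  echo "port $port -> HTTP $code"
done
```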