1. On the second node, repeat steps 1-5 of http://www.cnblogs.com/littlesuccess/p/3361497.html
2. On the first node, set the replication factor (dfs.replication) in hdfs-site.xml to 3
[root@server-305 ~]# vim /opt/hadoop/etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
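Note that dfs.replication only applies to files written after the change. To re-replicate files already in HDFS, something like the following can be run as the hdfs user (a sketch; the target path / is an assumption):
[hdfs@server-305 ~]$ hadoop fs -setrep -R 3 /
[hdfs@server-305 ~]$ hdfs fsck / | grep -i replication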
3. On the first node, set the YARN ResourceManager addresses in yarn-site.xml
[root@server- hadoop]# vi yarn-site.xml
<property>
<name>yarn.resourcemanager.address</name>
<value>10.10.96.32:</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>10.10.96.32:</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>10.10.96.32:</value>
</property>
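The port numbers above were lost from the transcript. For reference, the same properties written with the YARN default ports (8032 for the RM address, 8030 for the scheduler, 8031 for the resource tracker) would look like this; substitute whatever ports your cluster actually uses:
<property>
<name>yarn.resourcemanager.address</name>
<value>10.10.96.32:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>10.10.96.32:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>10.10.96.32:8031</value>
</property>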
4. Copy core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml from the first node to the second node (a loop version is sketched after the transcript)
[root@server- ~]# scp /opt/hadoop/etc/hadoop/core-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/core-site.xml
[email protected]'s password:
core-site.xml % .0KB/s :
[root@server-305 ~]# scp /opt/hadoop/etc/hadoop/hdfs-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/hdfs-site.xml
[email protected]'s password:
hdfs-site.xml 100% 1406 1.4KB/s 00:00
[root@server- ~]# scp /opt/hadoop/etc/hadoop/mapred-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/mapred-site.xml
[email protected]'s password:
mapred-site.xml % .8KB/s :
[root@server- ~]# scp /opt/hadoop/etc/hadoop/yarn-site.xml 192.168.32.33:/opt/hadoop/etc/hadoop/yarn-site.xml
[email protected]'s password:
yarn-site.xml % .9KB/s :
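The four copies can also be written as a single loop (an equivalent sketch):
[root@server-305 ~]# for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do scp /opt/hadoop/etc/hadoop/$f 192.168.32.33:/opt/hadoop/etc/hadoop/$f; done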
5. On the first node, stop the namenode, secondarynamenode and datanode
[root@server- ~]# su - hdfs
[hdfs@server- ~]$ cd /opt/hadoop/sbin
[hdfs@server- sbin]$ ./hadoop-daemon.sh stop namenode
stopping namenode
[hdfs@server- sbin]$ ./hadoop-daemon.sh stop secondarynamenode
stopping secondarynamenode
[hdfs@server- sbin]$ ./hadoop-daemon.sh stop datanode
stopping datanode
6. On the first node, stop the resourcemanager and nodemanager
[yarn@server- sbin]$ ./yarn-daemon.sh stop resourcemanager
stopping resourcemanager
[yarn@server- sbin]$ ./yarn-daemon.sh stop nodemanager
stopping nodemanager
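At this point jps on the first node should report nothing but the Jps process itself; a quick check:
[root@server-305 ~]# jps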
7. On the first node, format the cluster and restart HDFS
[root@server- ~]# su - hdfs
[hdfs@server- ~]$ cd /opt/hadoop/bin
[hdfs@server- bin]$ ./hadoop namenode -format
[hdfs@server- bin]$ cd ../sbin
[hdfs@server- sbin]$ ./hadoop-daemon.sh start namenode
[hdfs@server- sbin]$ ./hadoop-daemon.sh start datanode
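As an aside, hadoop namenode -format still works in Hadoop 2.x but is deprecated in favor of hdfs namenode -format. Once the namenode and datanode are up, the cluster state can be checked (a sketch; it should show one live datanode at this point):
[hdfs@server- sbin]$ /opt/hadoop/bin/hdfs dfsadmin -report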
8. On the second node, start the secondarynamenode and datanode
[hdfs@server-305 ~]# ssh 192.168.32.33
[hdfs@server-308 ~]# cd /opt/hadoop/sbin
[hdfs@server- sbin]# ./hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-server-.out
[hdfs@server- sbin]# jps
SecondaryNameNode
Jps
[hdfs@server- sbin]# ./hadoop-daemon.sh start datanode
starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-server-.out
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /opt/hadoop-2.1.-beta/lib/native/libhadoop.so.1.0. which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
[hdfs@server- sbin]# jps
DataNode
SecondaryNameNode
Jps
[hdfs@server- sbin]#
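The stack-guard warning is harmless here, but it can be silenced in the way the warning itself suggests (the library path below is an assumption; use the path printed in your own warning):
[root@server-308 ~]# execstack -c /opt/hadoop/lib/native/libhadoop.so.1.0.0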
9. On the first node, start the resourcemanager and nodemanager as the yarn user (commands sketched below)
10. On the second node, start the nodemanager as the yarn user
11. Check the ResourceManager web UI at 192.168.32.31:8088
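A sketch of steps 9 and 10, mirroring the stop commands in step 6 (prompts and hostnames assumed):
[yarn@server-305 sbin]$ ./yarn-daemon.sh start resourcemanager
[yarn@server-305 sbin]$ ./yarn-daemon.sh start nodemanager
[yarn@server-305 sbin]$ ssh 192.168.32.33
[yarn@server-308 ~]$ cd /opt/hadoop/sbin
[yarn@server-308 sbin]$ ./yarn-daemon.sh start nodemanager
If everything is up, the ResourceManager web UI at http://192.168.32.31:8088 should list both nodemanagers on its Nodes page.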
Troubleshooting common problems:
1. Only one datanode showed up. Checking the datanode logs on the second and third nodes showed that these nodes could not connect to the first node; the cause was that the firewall had not been turned off (a fix is sketched below).
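On CentOS/RHEL, assuming iptables is the active firewall, the quick fix is:
[root@server-305 ~]# service iptables stop
[root@server-305 ~]# chkconfig iptables off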
2. JAVA_HOME not set
Error: JAVA_HOME is not set and could not be found.
This error occurs when libexec/hadoop-config.sh runs. Set the JAVA_HOME environment variable at the top of that file by adding:
JAVA_HOME=/usr/java/latest
With that added, the problem is resolved.
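Alternatively, JAVA_HOME can be set in etc/hadoop/hadoop-env.sh, which the Hadoop scripts source on startup; either location works:
export JAVA_HOME=/usr/java/latest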
3. After starting hadoop, test writing a file to HDFS:
hadoop fs -put testfile /user/shaochen/testfile
This failed with the error: /user/shaochen/testfile file or directory does not exist.
Checking the second and third nodes showed that the permissions on /var/data/hadoop/hdfs were wrong; fix them on each datanode:
chown hdfs:hadoop /var/data/hadoop/hdfs -R
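After fixing the permissions on every datanode, create the target directory in HDFS and retry the put (a sketch; the /user/shaochen path comes from the error above):
[hdfs@server-305 ~]$ hadoop fs -mkdir -p /user/shaochen
[hdfs@server-305 ~]$ hadoop fs -put testfile /user/shaochen/testfile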