Distributed Hadoop
If the fully distributed Hadoop cluster is set up successfully, the corresponding daemons will be running on each node (see the jps check at the end of this section).
Upload and extract
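A minimal sketch of this step, assuming the archive is named hadoop-x.y.z.tar.gz (placeholder, use the real file name) and Hadoop ends up under /usr/local/src/hadoop, the prefix used by the dfs.*.dir settings below:
# upload the tarball from the local machine to the master node
scp hadoop-x.y.z.tar.gz root@master:/usr/local/src/
# on master: unpack and rename the directory
tar -zxvf /usr/local/src/hadoop-x.y.z.tar.gz -C /usr/local/src/
mv /usr/local/src/hadoop-x.y.z /usr/local/src/hadoop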
Edit the environment variables
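A sketch, assuming Hadoop is installed in /usr/local/src/hadoop and the variables go into /etc/profile (a per-user ~/.bashrc works as well):
# append to /etc/profile, then run: source /etc/profile
export HADOOP_HOME=/usr/local/src/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin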
Edit the configuration files
1)hadoop-env.sh
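The note gives no content for this file; the usual change is to hard-code JAVA_HOME. A sketch, assuming a placeholder JDK path (replace it with the actual install location):
# in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/src/jdk    # hypothetical path, point it at the real JDK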
2)core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/zhy/hadoop/hdfs/tmp</value>
</property>
3)hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/src/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/src/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>slave1:50090</value>
</property>
4)mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
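In Hadoop 2.x this file usually does not exist yet and is created from the bundled template first; a sketch, assuming the default installation layout:
cd /usr/local/src/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml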
5)yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>slave3</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>slave3:8032</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
6)slaves
master
slave1
slave2
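The same installation and configuration must be present on every node; a sketch of copying the configured directory to the other hosts (assuming passwordless SSH is already set up):
scp -r /usr/local/src/hadoop root@slave1:/usr/local/src/
scp -r /usr/local/src/hadoop root@slave2:/usr/local/src/
scp -r /usr/local/src/hadoop root@slave3:/usr/local/src/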
Initialize (format the NameNode)
hadoop namenode -format    # or the equivalent: hdfs namenode -format
Start
start-dfs.sh
start-yarn.sh    # run on the ResourceManager node (slave3)
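To confirm the point made at the top of this section, run jps on every node and compare against the daemons implied by the configuration above (a sketch; process IDs will differ):
jps
# master:  NameNode, DataNode, NodeManager
# slave1:  SecondaryNameNode, DataNode, NodeManager
# slave2:  DataNode, NodeManager
# slave3:  ResourceManager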