Overview: Hadoop is a distributed computing framework developed under the Apache Foundation. It lets users write distributed programs without knowing the low-level details of the distributed system, harnessing the power of a cluster for high-speed computation and storage. Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault-tolerant and is designed to run on low-cost hardware. It provides high-throughput access to application data, which makes it well suited to applications with very large data sets. HDFS relaxes some POSIX requirements so that data in the file system can be accessed as a stream.
Official site: http://hadoop.apache.org/
Environment:
CentOS 6.0 x64
IP layout:
Hostname   IP          Role
ha01 10.0.0.232 namenode&jobtracker
ha02 10.0.0.233 datanode&tasktracker
ha03 10.0.0.234 datanode&tasktracker
ha04 10.0.0.235 datanode&tasktracker
Preparation:
1. Add the host entries on all servers (a quick resolution check follows the entries)
- vi /etc/hosts
- 10.0.0.232 ha01
- 10.0.0.233 ha02
- 10.0.0.234 ha03
- 10.0.0.235 ha04
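To confirm that name resolution works, a quick check such as the following can be run on each node (a minimal sketch using the hostnames defined above):
- for h in ha01 ha02 ha03 ha04; do ping -c 1 $h > /dev/null && echo "$h resolves OK"; done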
2. Create the hadoop user and set up SSH trust for it
- groupadd -g 690 hadoop
- useradd -g hadoop hadoop -u 690
Set up SSH trust from ha01 to ha02, ha03, and ha04.
The details are omitted here; see my article on passwordless SSH login: http://www.elain.org/?p=62 (a minimal sketch also follows).
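As a minimal sketch (assuming ssh-copy-id from the openssh-clients package is available), passwordless login for the hadoop user can be set up from ha01 roughly like this:
- su - hadoop
- ssh-keygen -t rsa                 # accept the defaults, empty passphrase
- ssh-copy-id hadoop@ha02
- ssh-copy-id hadoop@ha03
- ssh-copy-id hadoop@ha04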
Installation and deployment:
Install the Java environment:
- cd /root/tools
- wget http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.rpm
- rpm -ivh jdk-7-linux-x64.rpm
-
- scp jdk-7-linux-x64.rpm ha02:/root/tools/
- scp jdk-7-linux-x64.rpm ha03:/root/tools/
- scp jdk-7-linux-x64.rpm ha04:/root/tools/
Install the RPM on each node in turn (see the loop sketch below).
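The per-node installs can also be driven from ha01 in one loop, a sketch assuming root SSH access to the other hosts:
- for h in ha02 ha03 ha04; do ssh $h 'rpm -ivh /root/tools/jdk-7-linux-x64.rpm'; done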
Verify the Java environment:
- [root@ha01 tools]# java -version
- java version "1.6.0_17"
- OpenJDK Runtime Environment (IcedTea6 1.7.4) (rhel-1.21.b17.el6-x86_64)
- OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)
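Note that the output above still reports the stock OpenJDK 1.6. If that happens after installing the Oracle JDK 7 RPM, the new JDK can be made the system default with the alternatives tool (a sketch, assuming the RPM installed to /usr/java/jdk1.7.0):
- alternatives --install /usr/bin/java java /usr/java/jdk1.7.0/bin/java 200
- alternatives --set java /usr/java/jdk1.7.0/bin/java
- java -version    # should now report 1.7.0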
Install Hadoop
- cd /root/tools
- wget http://mirror.bjtu.edu.cn/apache/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
- tar zxvf hadoop-0.20.2.tar.gz
- mv hadoop-0.20.2 /elain/apps/hadoop
- chown -R hadoop.hadoop /elain/apps/hadoop
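Optionally, HADOOP_HOME can be added to the hadoop user's environment so the bin/ scripts are on the PATH (a convenience sketch, not required by the rest of this walkthrough):
- cat >> /home/hadoop/.bash_profile <<'EOF'
- export HADOOP_HOME=/elain/apps/hadoop
- export PATH=$PATH:$HADOOP_HOME/bin
- EOF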
Create the directories Hadoop needs (optional: Hadoop creates them automatically at startup; the datanodes get the same directories in the sketch below)
- mkdir -p /data/hadoop/{name,data01,data02,data03,tmp}
- chown -R hadoop.hadoop /data/hadoop/{name,data01,data02,data03,tmp}
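The data directories are also needed on the datanodes; as a sketch (assuming root SSH access), they can be created from ha01 in one pass:
- for h in ha02 ha03 ha04; do
-     ssh $h 'mkdir -p /data/hadoop/{name,data01,data02,data03,tmp} && chown -R hadoop.hadoop /data/hadoop'
- done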
Configuration
- cd /elain/apps/hadoop
Set the Java path:
- vi /elain/apps/hadoop/conf/hadoop-env.sh
-
- export JAVA_HOME=/usr/java/jdk1.7.0
Core configuration:
- vi conf/core-site.xml
- <configuration>
- <property>
- <name>fs.default.name</name>
- <value>hdfs://ha01:9000</value>
- </property>
- </configuration>
- vi conf/mapred-site.xml
- <configuration>
- <property>
- <name>mapred.job.tracker</name>
- <value>ha01:9001</value>
- </property>
- </configuration>
HDFS site configuration:
- vi conf/hdfs-site.xml
- <configuration>
- <property>
- <name>dfs.replication</name>
- <value>3</value>
- </property>
- <property>
- <name>dfs.name.dir</name>
- <value>/data/hadoop/name</value>
- </property>
- <property>
- <name>dfs.data.dir</name>
- <value>/data/hadoop/data01,/data/hadoop/data02,/data/hadoop/data03</value>
- </property>
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/data/hadoop/tmp</value>
- </property>
- <property>
- <name>dfs.block.size</name>
- <value>2097152</value>
- </property>
-
- </configuration>
Master node file: conf/masters (in Hadoop 0.20 this actually lists the host that runs the SecondaryNameNode)
- ha01
Data node file: conf/slaves
- ha02
- ha03
- ha04
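Both files can be written in one step, as a sketch run from the conf/ directory:
- cd /elain/apps/hadoop/conf
- echo "ha01" > masters
- printf "ha02\nha03\nha04\n" > slaves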
Format (initialize) the NameNode
- /elain/apps/hadoop/bin/hadoop namenode -format
Note: the confirmation prompt is case-sensitive; enter an uppercase Y.
- scp -r /elain/apps/hadoop ha02:/elain/apps/
- scp -r /elain/apps/hadoop ha03:/elain/apps/
- scp -r /elain/apps/hadoop ha04:/elain/apps/
- ssh ha02 'chown -R hadoop.hadoop /elain/apps/hadoop'
- ssh ha03 'chown -R hadoop.hadoop /elain/apps/hadoop'
- ssh ha04 'chown -R hadoop.hadoop /elain/apps/hadoop'
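Equivalently, the distribution and ownership fix above can be wrapped in a single loop (a sketch of the same six commands):
- for h in ha02 ha03 ha04; do
-     scp -r /elain/apps/hadoop $h:/elain/apps/
-     ssh $h 'chown -R hadoop.hadoop /elain/apps/hadoop'
- done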
Start the Hadoop services
There are a number of control scripts under /elain/apps/hadoop/bin:
start-all.sh: starts all Hadoop daemons (namenode, datanodes, secondarynamenode, jobtracker, tasktrackers).
stop-all.sh: stops all Hadoop daemons.
start-mapred.sh: starts the Map/Reduce daemons (jobtracker and tasktrackers).
stop-mapred.sh: stops the Map/Reduce daemons.
start-dfs.sh: starts the HDFS daemons (namenode and datanodes).
stop-dfs.sh: stops the HDFS daemons.
- [root@ha01 conf]# /elain/apps/hadoop/bin/start-all.sh
- starting namenode, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-namenode-ha01.out
- ha02: starting datanode, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-datanode-ha02.out
- ha03: starting datanode, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-datanode-ha03.out
- ha04: starting datanode, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-datanode-ha04.out
- ha01: starting secondarynamenode, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-secondarynamenode-ha01.out
- starting jobtracker, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-jobtracker-ha01.out
- ha04: starting tasktracker, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-tasktracker-ha04.out
- ha02: starting tasktracker, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-tasktracker-ha02.out
- ha03: starting tasktracker, logging to /elain/apps/hadoop/bin/../logs/hadoop-root-tasktracker-ha03.out
On both the master and the slaves, jps can be used to check that the Hadoop processes are running.
Verification:
- [root@ha01 conf]# jps
- 10955 SecondaryNameNode
- 11123 Jps
- 11027 JobTracker
- 10828 NameNode
- [root@ha02 ~]# jps
- 5163 DataNode
- 5299 Jps
- 5259 TaskTracker
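Beyond jps, cluster health can be checked with the HDFS admin report and the standard web interfaces for Hadoop 0.20 (NameNode UI on port 50070, JobTracker UI on port 50030). A quick functional check, as a sketch:
- /elain/apps/hadoop/bin/hadoop dfsadmin -report        # all three datanodes should be listed
- /elain/apps/hadoop/bin/hadoop fs -mkdir /test
- /elain/apps/hadoop/bin/hadoop fs -put /etc/hosts /test/
- /elain/apps/hadoop/bin/hadoop fs -ls /test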
Example test: to be updated……
Please credit the original source when reposting: http://www.elain.org
Reposted from the elain2012 51CTO blog; original post: http://blog.51cto.com/elain/674027