Hadoop Standalone Installation and Configuration (Part 1)


Hadoop standalone installation and configuration steps:
1. First, install the JDK. It must be Sun's JDK, preferably version 1.6 or later.
Afterwards, run java -version to check whether the installation succeeded.
Be sure to configure /etc/profile by appending the following lines:
export JAVA_HOME=/usr/local/jdk1.6.0_17
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
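To apply the new variables in the current shell and confirm the JDK is visible, something like the following works (the path assumes the JDK was unpacked to /usr/local/jdk1.6.0_17 as above):
$source /etc/profile    # reload /etc/profile in the current shell
$echo $JAVA_HOME        # should print /usr/local/jdk1.6.0_17
$java -version          # should report the Sun JDK version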

2. Install SSH and set up passwordless login.
$apt-get install openssh-server    (install ssh)
Generate an SSH key:
$ssh-keygen -t rsa -P ""    (take care not to add or omit any spaces)
Partway through you will be asked where to save the key; pressing Enter accepts the default file:
Enter file in which to save the key (/root/.ssh/id_rsa): (press Enter)
Enable the SSH key:
$cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
$/etc/init.d/ssh reload    (reload ssh)
$ssh localhost    (should now log in without a password)
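If ssh localhost still prompts for a password, the most common cause is file permissions: with its default StrictModes setting, sshd ignores keys whose directory or file is group- or world-writable. A minimal fix:
$chmod 700 /root/.ssh                    # the .ssh directory must be owner-only
$chmod 600 /root/.ssh/authorized_keys    # so must the authorized_keys file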
3. Install and configure standalone Hadoop
1) Extract to /opt/hadoop
$tar zxvf hadoop-0.20.2.tar.gz
$sudo mv hadoop-0.20.2 /opt/
$sudo chown -R hadoop:hadoop /opt/hadoop-0.20.2
$sudo ln -sf /opt/hadoop-0.20.2 /opt/hadoop
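The chown above assumes a hadoop user and group already exist; if they do not, they can be created first (the name hadoop is just this guide's convention, not required by Hadoop):
$sudo groupadd hadoop                # create the hadoop group
$sudo useradd -m -g hadoop hadoop    # create the hadoop user with a home directory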
4. Configure hadoop-env.sh
1) Add the following to hadoop-env.sh in hadoop/conf:
export JAVA_HOME=/usr/local/jdk1.6.0_17    # same JDK path as in /etc/profile
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:/opt/hadoop/bin
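As a quick sanity check (not part of the original steps), sourcing the file and asking Hadoop for its version confirms the variables are picked up:
$source /opt/hadoop/conf/hadoop-env.sh
$hadoop version    # should report 0.20.2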
5. Configuration files
1) /opt/hadoop/conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>


2) /opt/hadoop/conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


3) /opt/hadoop/conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
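A malformed XML file is a frequent cause of silent startup failures. If xmllint is available, all three files can be checked for well-formedness in one pass (an optional check, not part of the original steps):
$xmllint --noout /opt/hadoop/conf/core-site.xml \
    /opt/hadoop/conf/hdfs-site.xml \
    /opt/hadoop/conf/mapred-site.xml    # no output means every file is well-formed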


6. Format HDFS
$cd /opt/hadoop
$source conf/hadoop-env.sh
$hadoop namenode -format
A long stream of log messages scrolls by...
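Buried in that output, the line that matters is the one from common.Storage confirming the format succeeded; in the 0.20 branch it looks roughly like this (the path follows the hadoop.tmp.dir set above, here for the root user):
... INFO common.Storage: Storage directory /tmp/hadoop/hadoop-root/dfs/name has been successfully formatted.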
7. Start Hadoop
$sudo ./start-all.sh    # run from hadoop/bin
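Once the script returns, jps (shipped with the JDK) should list the five daemons of a single-node setup; if one is missing, check its log under /opt/hadoop/logs:
$jps
# Expected (PIDs will vary): NameNode, DataNode, SecondaryNameNode,
# JobTracker, TaskTracker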


8. Test the finished setup
http://localhost:50030/ - the Hadoop administration interface (JobTracker web UI)
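The NameNode serves its own web UI as well, normally at http://localhost:50070/. For an end-to-end smoke test, the classic approach from the Hadoop quick-start docs is to run the bundled wordcount example over the config files (the jar name matches the 0.20.2 release used above):
$cd /opt/hadoop
$hadoop fs -mkdir input                   # create an input directory in HDFS
$hadoop fs -put conf/*.xml input          # upload the config files as sample text
$hadoop jar hadoop-0.20.2-examples.jar wordcount input output
$hadoop fs -cat output/*                  # print the resulting word counts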


1. Start
[hadoop@hadoop00~]$ ~/hadoop-0.21.0/bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-namenode-hadoop00.out
192.168.91.11: starting datanode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-datanode-hadoop01.out
192.168.91.12: starting datanode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-datanode-hadoop02.out
192.168.91.10: starting secondarynamenode, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop00.out
starting jobtracker, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-jobtracker-hadoop00.out
192.168.91.12: starting tasktracker, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-tasktracker-hadoop02.out
192.168.91.11: starting tasktracker, logging to /home/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-tasktracker-hadoop01.out
2. Stop
[hadoop@hadoop00~]$ ~/hadoop-0.21.0/bin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
stopping namenode
192.168.91.12: stopping datanode
192.168.91.11: stopping datanode
192.168.91.10: stopping secondarynamenode
stopping jobtracker
192.168.91.11: stopping tasktracker
192.168.91.12: stopping tasktracker
Initial HDFS configuration
1. Format the HDFS file system
[hadoop@hadoop00~]$ hadoop namenode -format

2. Inspect HDFS
[hadoop@hadoop00~]$ hadoop fs -ls /
11/09/24 07:49:55 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/09/24 07:49:56 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 4 items
drwxr-xr-x - hadoop supergroup 0 2011-09-22 08:05 /home
drwxr-xr-x - hadoop supergroup 0 2011-09-22 11:29 /jobtracker
drwxr-xr-x - hadoops