Hadoop Pseudo-Distributed Mode Configuration

Machine preparation:

8 cores, 32 GB RAM.
Day-to-day development is still debugged in a pseudo-distributed environment.

Add a hadoop user (for convenience, these notes still use root)

adduser hadoop  # add the hadoop user
passwd hadoop   # set a password for the hadoop user

Set up the Java environment

After extracting the JDK tarball, add the following environment variables:

export JAVA_HOME=/usr/local/java/jdk1.7.0_80
export JRE_HOME=/usr/local/java/jdk1.7.0_80/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
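
Assuming the lines above were appended to /etc/profile (the notes do not say which file they go into), they can be applied and verified like this:

source /etc/profile    # reload the environment in the current shell
java -version          # should report version 1.7.0_80
echo $JAVA_HOME        # should print /usr/local/java/jdk1.7.0_80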

Install Hadoop

wget http://mirror.bit.edu.cn/apache/hadoop/common/stable2/hadoop-2.7.1.tar.gz
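
A minimal sketch of unpacking the tarball and preparing input files for the standalone test below; installing under /usr/local/hadoop is an assumption, chosen to match the paths used in the configuration files later on:

tar -xzf hadoop-2.7.1.tar.gz -C /usr/local    # unpack the distribution
mv /usr/local/hadoop-2.7.1 /usr/local/hadoop  # assumed install path
cd /usr/local/hadoop
mkdir input
cp etc/hadoop/*.xml input                     # the grep example below reads these files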

Once the archive is extracted, standalone (local-mode) Hadoop is ready. It can be tested with the following command:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input outout 'dfs[a-z.]+'

You should see the result:

[root@wft-wh-test hadoop]# cat outout/*
1 dfsadmin

Pseudo-distributed Hadoop configuration

"Pseudo-distributed" simply means the NameNode and DataNode run on the same node.
Pseudo-distributed mode requires editing two configuration files under etc/hadoop/: core-site.xml and hdfs-site.xml.

core-site.xml:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>

After the configuration is in place, format the NameNode:
hdfs namenode -format
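
start-dfs.sh below launches the daemons over ssh, so password-less SSH to localhost is usually required first. A sketch of the usual key setup (this step is not in the original notes):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa          # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the key for localhost logins
chmod 0600 ~/.ssh/authorized_keys
ssh localhost                                     # should log in without a password prompt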

Then start the NameNode and DataNode daemons:
start-dfs.sh
If the following warning appears during startup:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable

This kind of WARN needs no attention and can simply be ignored.

Once it finishes, check the Java processes:

[root@wft-wh-test hadoop]# jps
15304 Jps
14826 NameNode
14949 DataNode
15121 SecondaryNameNode

Once startup is complete, Hadoop status can be viewed in the web UI at <ip>:50070.
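
Besides the web UI, the cluster state can also be checked from the command line, for example:

hdfs dfsadmin -report    # prints configured capacity, DFS usage, and the list of live DataNodes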

Problems encountered during startup:

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
Restarting the process usually fixes this; if not, check the logs to diagnose further.
[root@wft-wh-test logs]# hdfs dfs -ls
15/11/09 17:37:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform… using builtin-java classes where applicable
ls: `.': No such file or directory

Inspect the native library files:

[root@wft-wh-test native]# file *
libhadoop.a: current ar archive
libhadooppipes.a: current ar archive
libhadoop.so: symbolic link to `libhadoop.so.1.0.0'
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
libhadooputils.a: current ar archive
libhdfs.a: current ar archive
libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
libhdfs.so.0.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped

So the libraries are actually 64-bit, not 32-bit as many blog posts claim. Why does the problem still occur, then?

In fact the real problem is not that WARN but the line below it: ls: `.': No such file or directory

A home directory for the current user has to be created in HDFS: hadoop fs -mkdir -p /user/[current login user]
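
Since these notes run everything as root, the concrete commands would be (matching the user shown in the prompts above):

hdfs dfs -mkdir -p /user/root    # HDFS home directory for the root user
hdfs dfs -ls                     # now lists /user/root instead of failing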

Running an example on pseudo-distributed Hadoop

Run the standalone example in the pseudo-distributed environment.
Create the input directory with hdfs dfs -mkdir input, then copy the .xml files into it with hdfs dfs -put etc/hadoop/*.xml input.

Run hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
View the result:
hdfs dfs -cat output/*
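
One usage note (not in the original text): the job refuses to run if the output directory already exists, so remove it before re-running the example:

hdfs dfs -rm -r output    # MapReduce will not overwrite existing output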

Shutting down Hadoop

sbin/stop-dfs.sh

Installing HttpFS

  • Add the following to core-site.xml:

    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
  • Configure the HttpFS environment variable:
    export CATALINA_BASE=/opt/work/hadoop/share/hadoop/httpfs/tomcat

  • Restart the Hadoop cluster.
    Note: do not use killall java, otherwise the NameNode will fail to start.
  • Start the HttpFS service: sbin/httpfs.sh start
  • Test the HttpFS service:
    [root@wft-wh-test hadoop]# curl -i -X PUT -T /opt/hadoop/etc/hadoop/hadoop-env.sh  "http://localhost:14000/webhdfs/v1/tmp/hadoop-env.sh?op=CREATE&data=true&user.name=root" -H "Content-Type:application/octet-stream"
    HTTP/1.1 100 Continue
    HTTP/1.1 201 Created
    Server: Apache-Coyote/1.1
    Set-Cookie: hadoop.auth="u=root&p=root&t=simple&e=1447264691666&s=+lpTYq5LRs0C8YBtXSj6bQZxP2U="; Path=/; Expires=Wed, 11-Nov-2015 17:58:11 GMT; HttpOnly
    Content-Type: application/json
    Content-Length: 0
    Date: Wed, 11 Nov 2015 07:58:12 GMT

Verify by opening http://hostname:14000/webhdfs/v1/tmp/hadoop-env.sh?user.name=root&op=open in a browser.
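
The same check can be made with curl; op=OPEN is the standard WebHDFS read operation, and the path and user match the upload above:

curl -i "http://localhost:14000/webhdfs/v1/tmp/hadoop-env.sh?op=OPEN&user.name=root"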

Notes

Hadoop 2.x no longer has a JobTracker or TaskTracker (see http://blog.csdn.net/skywalker_only/article/details/37905463 for background).
The NameNode and DataNode are started with start-dfs.sh, and YARN is started with start-yarn.sh.
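
If the example jobs should actually run on YARN after start-yarn.sh, the standard single-node setup from the Hadoop documentation (not covered in these notes) adds the following to etc/hadoop/mapred-site.xml and etc/hadoop/yarn-site.xml:

mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>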

Fix for a NameNode that will not start:

  • Stop Hadoop: sbin/stop-dfs.sh
  • Delete the tmp folder (note that this erases all HDFS data): rm -fr /usr/local/hadoop/tmp
  • Re-format the NameNode: hdfs namenode -format
  • Start DFS: sbin/start-dfs.sh
  • Check the Java processes: jps

DataNode fails to start

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool

Fix:

The problem could be that the namenode was formatted after the cluster was set up and the datanodes were not, so the slaves are still referring to the old namenode.
We have to delete and recreate the folder /home/hadoop/dfs/data on the local filesystem for the datanode.

Check your hdfs-site.xml file to see where dfs.data.dir is pointing to, delete that folder, and then restart the datanode daemon on the machine.
The steps above should recreate the folder and resolve the problem.

In short: the NameNode was re-formatted but the DataNode still refers to the old NameNode,
so the directory that dfs.data.dir points to has to be deleted.
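
With the hdfs-site.xml shown earlier, that directory is /usr/local/hadoop/tmp/dfs/data, so the fix reduces to roughly the following (a sketch based on the paths configured above):

sbin/stop-dfs.sh
rm -rf /usr/local/hadoop/tmp/dfs/data    # the dfs.datanode.data.dir configured earlier
sbin/start-dfs.sh                        # the DataNode recreates the directory on startup
jps                                      # DataNode should now appear in the process list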

References: