Before installing Hadoop, we first need to make sure the environment is set up correctly. The main steps are as follows:
Make sure the sshd service is running on every node. The cluster network is configured as follows.
Append the following entries to the end of the /etc/hosts file on each node:
192.168.235.131 master
192.168.235.132 slave1
192.168.235.133 slave2
Set the hostname on each node (master shown here; use slave1 and slave2 on the other nodes accordingly):
sudo hostnamectl set-hostname master
Run hostname to verify the change. Next, set up passwordless SSH: each node's public key needs to end up in the ~/.ssh/authorized_keys file. Generate a key pair on the master:
ssh-keygen -t rsa
Copy the public key to the slave nodes:
scp ~/.ssh/id_rsa.pub root@slave1:~
scp ~/.ssh/id_rsa.pub root@slave2:~
Make sure authorized_keys has the correct permissions on every node:
chmod 600 ~/.ssh/authorized_keys
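The append step itself is not shown in the original post; a minimal sketch, assuming the key was copied into root's home directory as above:
# run on each slave node after the scp above (hypothetical step, not from the original post)
mkdir -p ~/.ssh
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys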
Download and unpack the Hadoop 2.8.5 release:
wget http://mirror.apache.org/hadoop/core/hadoop-2.8.5.tar.gz
tar -zxvf hadoop-2.8.5.tar.gz
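Note that the tarball unpacks into hadoop-2.8.5, while the later commands reference ~/hadoop; presumably the directory was renamed or symlinked, for example (an assumption, not in the original post):
ln -s ~/hadoop-2.8.5 ~/hadoop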
Edit core-site.xml:
vim ~/hadoop/etc/hadoop/core-site.xml
The configuration is as follows:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/leesanghyuk/hadoop-2.8.5/hadoop/tmp</value>
  </property>
</configuration>
In the same way, configure hdfs-site.xml, mapred-site.xml and yarn-site.xml (a sketch of these files follows below). Then copy the whole Hadoop directory to the slave nodes:
scp -r ~/hadoop root@slave1:~
scp -r ~/hadoop root@slave2:~
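The original post does not show the contents of those three files. A minimal sketch for a small test cluster like this one, with property values that are assumptions rather than the author's actual settings (edit the files before running the scp commands above):
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
You will also typically list the worker hosts (slave1 and slave2) in etc/hadoop/slaves so that start-all.sh knows where to launch DataNode and NodeManager daemons.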
Next, edit the /etc/profile file:
vi /etc/profile
Add the following lines:
# Hadoop environment variables
export HADOOP_HOME=/home/leesanghyuk/hadoop-2.8.5
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
source /etc/profile
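A quick way to confirm the variables took effect (not in the original post):
hadoop version
# should report Hadoop 2.8.5 together with its build information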
Format the HDFS NameNode on the master (only needed the first time):
hadoop namenode -format
Start the cluster:
start-all.sh
Check the running daemons on each node:
jps
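With a configuration like the one sketched above (and slave1/slave2 listed as workers), the jps output would typically look roughly like this; process IDs will differ:
# on master
NameNode
SecondaryNameNode
ResourceManager
Jps
# on slave1 and slave2
DataNode
NodeManager
Jps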
echo "My name is LeesangHyuk. This is a example program called WordCount, run by LeesangHyuk " > testWordCount
hadoop fs -mkdir /wordCountInput
hadoop fs -put testWordCount /wordCountInput
hadoop jar ~/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar wordcount /wordCountInput /wordCountOutput
hadoop fs -ls /wordCountOutput
hadoop fs -cat /wordCountOutput/part-r-00000
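WordCount splits on whitespace only, so punctuation stays attached to the words; for the test sentence above, the output of the cat command should look roughly like this:
LeesangHyuk	1
LeesangHyuk.	1
My	1
This	1
WordCount,	1
a	1
by	1
called	1
example	1
is	2
name	1
program	1
run	1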
With the steps above, you have a working Hadoop cluster that can handle big-data processing and analysis tasks.
Reposted from: http://dqak.baihongyu.com/