Hadoop 3.1.3 HA


Lab 1: Install Java

Step 1: Extract and rename the JDK

[root@master ~]#

tar -xzvf /opt/software/jdk-8u212-linux-x64.tar.gz -C /opt/module/
mv /opt/module/jdk1.8.0_212 /opt/module/java

Step 2: Configure the global environment variables

[root@master ~]#

vim /etc/profile

Add the following:

export JAVA_HOME=/opt/module/java
export PATH=$PATH:$JAVA_HOME/bin

Reload the environment variables:

source /etc/profile

Step 3: Test Java

[root@master ~]#

javac
java -version

javac should print its usage help, and java -version should report 1.8.0_212.

Step 4: Distribute the Java files and environment variables

[root@master ~]#

scp -r /opt/module/java root@slave1:/opt/module/java
scp -r /opt/module/java root@slave2:/opt/module/java
scp /etc/profile root@slave1:/etc/
scp /etc/profile root@slave2:/etc/

After copying, java -version should report the same version on both slaves.

Lab 2: Install the ZooKeeper Cluster

Step 1: Extract and rename to zookeeper

[root@master ~]#

tar -xzvf /opt/software/apache-zookeeper-3.5.7-bin.tar.gz -C /opt/module/
mv /opt/module/apache-zookeeper-3.5.7-bin /opt/module/zookeeper

Step 2: Configure the user environment variables

[root@master ~]#

vim /root/.bash_profile

Add the following:

export ZOOKEEPER_HOME=/opt/module/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin

Reload the environment variables:

source /root/.bash_profile

Step 3: Configure zoo.cfg

[root@master ~]#

cp /opt/module/zookeeper/conf/zoo_sample.cfg /opt/module/zookeeper/conf/zoo.cfg
vim $ZOOKEEPER_HOME/conf/zoo.cfg

Add the following:

# ZooKeeper data directory
dataDir=/opt/module/zookeeper/data
# ZooKeeper log directory
dataLogDir=/opt/module/zookeeper/logs
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

Each server.N line maps a myid value N to a host; port 2888 carries follower-to-leader traffic and port 3888 is used for leader election.

Step 4: Configure the myid file

[root@master ~]#

mkdir -p $ZOOKEEPER_HOME/{logs,data}
echo "1" > $ZOOKEEPER_HOME/data/myid

Step 5: Distribute the files

[root@master ~]#

scp -r /opt/module/zookeeper root@slave1:/opt/module/
scp -r /opt/module/zookeeper root@slave2:/opt/module/
scp /root/.bash_profile slave1:/root/
scp /root/.bash_profile slave2:/root/

Step 6: Update the myid file on the other nodes

[root@slave1 ~]#

source /root/.bash_profile
echo 2 > $ZOOKEEPER_HOME/data/myid

[root@slave2 ~]#

source /root/.bash_profile
echo 3 > $ZOOKEEPER_HOME/data/myid

Step 7: Start the ZooKeeper cluster
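All three nodes must be started; the ensemble needs a majority (two of three) to form a quorum. As an alternative to the per-node commands below, the whole ensemble can be started from master in one loop (a convenience sketch, assuming passwordless root SSH between the nodes):

[root@master ~]#

for h in master slave1 slave2; do ssh root@$h "source /root/.bash_profile && zkServer.sh start"; done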

[root@master ~]#

zkServer.sh start

[root@slave1 ~]#

zkServer.sh start

[root@slave2 ~]#

zkServer.sh start

Step 8: Check the processes with jps
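Besides jps, zkServer.sh status shows each node's role and confirms a quorum formed (a check the original omits). Run it on every node; one should report Mode: leader and the other two Mode: follower.

[root@master ~]#

zkServer.sh status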

[root@master ~]#

jps

[root@slave1 ~]#

jps

[root@slave2 ~]#

jps

Each node should now show a QuorumPeerMain process.

Lab 3: Deploy Hadoop HA

Step 1: Extract and rename Hadoop

[root@master ~]#

tar -xzvf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/
mv /opt/module/hadoop-3.1.3 /opt/module/hadoop

Step 2: Configure the environment variables

[root@master ~]#

vim /root/.bash_profile

Add the following:

export HADOOP_HOME=/opt/module/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Reload the environment variables:

source /root/.bash_profile

Step 3: Check the Hadoop version

[root@master ~]#

hadoop version

Step 4: Configure hadoop-env.sh

[root@master ~]#

vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

Add the following:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export JAVA_HOME=/opt/module/java

The *_USER settings are required because Hadoop 3 refuses to start its daemons as root unless the running user is declared explicitly.

Step 5: Configure hdfs-site.xml

[root@master ~]#

vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml

Add the following:

<configuration>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2,nn3</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>master:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>slave1:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn3</name><value>slave2:8020</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn1</name><value>master:9870</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn2</name><value>slave1:9870</value></property>
  <property><name>dfs.namenode.http-address.mycluster.nn3</name><value>slave2:9870</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.fencing.ssh.connect-timeout</name><value>30000</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <property><name>ha.zookeeper.quorum</name><value>master:2181,slave1:2181,slave2:2181</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.namenode.name.dir</name><value>/opt/module/hadoop/dfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/opt/module/hadoop/dfs/data</value></property>
  <property><name>dfs.blocksize</name><value>268435456</value></property>
  <property><name>dfs.namenode.handler.count</name><value>100</value></property>
</configuration>

Note that sshfence requires passwordless SSH between the NameNode hosts using the private key at /root/.ssh/id_rsa.

Step 6: Configure core-site.xml

[root@master ~]#

vim $HADOOP_HOME/etc/hadoop/core-site.xml

Add the following:

<configuration>
  <property><name>hadoop.tmp.dir</name><value>/opt/module/hadoop/dfs/tmp</value></property>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
  <property><name>io.file.buffer.size</name><value>131072</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/opt/module/hadoop/journal/data</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>hadoop.proxyuser.root.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.root.groups</name><value>*</value></property>
  <property><name>hadoop.proxyuser.root.users</name><value>*</value></property>
</configuration>

Step 7: Configure yarn-site.xml

[root@master ~]#

vim $HADOOP_HOME/etc/hadoop/yarn-site.xml

Add the following:

<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.scheduler.minimum-allocation-mb</name><value>1024</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>4096</value></property>
  <property><name>yarn.nodemanager.pmem-check-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.resource.cpu-vcores</name><value>5</value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.cluster-id</name><value>RMcluster</value></property>
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>master</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>slave1</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>master:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>slave1:8088</value></property>
  <property><name>hadoop.zk.address</name><value>master:2181,slave1:2181,slave2:2181</value></property>
</configuration>

Step 8: Configure mapred-site.xml

[root@master ~]#

vim $HADOOP_HOME/etc/hadoop/mapred-site.xml

Add the following:

<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>yarn.app.mapreduce.am.env</name><value>HADOOP_MAPRED_HOME=/opt/module/hadoop</value></property>
  <property><name>mapreduce.map.env</name><value>HADOOP_MAPRED_HOME=/opt/module/hadoop</value></property>
  <property><name>mapreduce.reduce.env</name><value>HADOOP_MAPRED_HOME=/opt/module/hadoop</value></property>
  <property><name>mapreduce.map.memory.mb</name><value>2048</value></property>
  <property><name>mapreduce.map.java.opts</name><value>-Xmx1536M</value></property>
  <property><name>mapreduce.reduce.memory.mb</name><value>4096</value></property>
  <property><name>mapreduce.reduce.java.opts</name><value>-Xmx2560M</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>master:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>master:19888</value></property>
  <property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>/mr-history/tmp</value></property>
  <property><name>mapreduce.jobhistory.done-dir</name><value>/mr-history/done</value></property>
</configuration>

Step 9: Configure workers

[root@master ~]#

vim $HADOOP_HOME/etc/hadoop/workers

Add the following:

master
slave1
slave2

Step 10: Distribute the files

[root@master ~]#

scp -r /opt/module/hadoop/ root@slave1:/opt/module/
scp -r /opt/module/hadoop/ root@slave2:/opt/module/
scp /root/.bash_profile root@slave1:/root/
scp /root/.bash_profile root@slave2:/root/

Step 11: Start a JournalNode on each node

[root@master ~]#

hdfs --daemon start journalnode

[root@slave1 ~]#

source /root/.bash_profile
hdfs --daemon start journalnode

[root@slave2 ~]#

source /root/.bash_profile
hdfs --daemon start journalnode

Step 12: Format the NameNode
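Formatting writes the initial edit log segment to the JournalNode quorum, so first confirm a JournalNode process is running on every node (a quick check, not part of the original steps):

[root@master ~]#

jps | grep JournalNode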

[root@master ~]#

hdfs namenode -format

Step 13: Format the ZKFC

[root@master ~]#

hdfs zkfc -formatZK

Step 14: Start the cluster
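Before starting the cluster, you can confirm that the previous step created the HA znode in ZooKeeper (a supplementary check; zkCli.sh ships with ZooKeeper and the output should list mycluster):

[root@master ~]#

zkCli.sh -server master:2181 ls /hadoop-ha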

[root@master ~]#

start-all.sh
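start-all.sh launches the HDFS daemons and the two ResourceManagers along with the NodeManagers. In addition to the jps checks below, hdfs dfsadmin -report should show three live DataNodes:

[root@master ~]#

hdfs dfsadmin -report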

[root@master ~]#

jps

[root@slave1 ~]#

jps

[root@slave2 ~]#

jps

Step 15: Copy the metadata directory to the standby NameNodes

[root@slave1 ~]#

hdfs namenode -bootstrapStandby

[root@slave2 ~]#

hdfs namenode -bootstrapStandby

[root@slave1 ~]#

hdfs --daemon start namenode
jps

[root@slave2 ~]#

hdfs --daemon start namenode
jps

Step 16: Run the pi example to test the cluster
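The pi job below exercises the full YARN HA path. Once it completes, the finished application can also be listed as a supplementary check (not part of the original steps):

[root@master ~]#

yarn application -list -appStates FINISHED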

[root@master ~]#

yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 10 10

With 10 maps and 10 samples, the job should finish with a line such as "Estimated value of Pi is 3.20000000000000000000".

Step 17: View the state of all NameNodes

[root@master ~]#

hdfs haadmin -getAllServiceState

Step 18: Restart the NameNode on master
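This step assumes the NameNode on master was stopped beforehand, for example to exercise automatic failover (an interpretation, since the original shows only the start command). To test failover first, stop the active NameNode and confirm a standby takes over:

[root@master ~]#

hdfs --daemon stop namenode
hdfs haadmin -getAllServiceState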

[root@master ~]#

hdfs --daemon start namenode
jps
hdfs haadmin -getServiceState nn1
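The YARN ResourceManagers keep their own HA state, which can be checked in the same way (rm1 and rm2 are the ids configured in yarn-site.xml above):

[root@master ~]#

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2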

