HDFS and YARN Federation Cluster Deployment
Background

The single-NameNode architecture gives HDFS potential problems in both cluster scalability and performance. Once a cluster grows large enough, the NameNode process may need hundreds of GB of memory, and the NameNode becomes the performance bottleneck. Federation was proposed as a horizontal scaling scheme for the NameNode.

"Federation" means exactly that: a federation of NameNodes, i.e. the cluster runs multiple NameNodes. Multiple NameNodes means multiple namespaces. This differs from HA mode, where the multiple NameNodes all share a single namespace.

HDFS is clearly split into two layers: data management and data storage. All information about, and management of, stored data lives on the NameNode, while the data itself is stored on the DataNodes. All data managed by one NameNode belongs to one namespace, and each namespace corresponds to one block pool: the set of blocks under that namespace. This is the common single-namespace case, where one NameNode manages all metadata in the cluster. What happens when we hit the NameNode memory problem described above? The metadata keeps growing, and simply raising the NameNode's JVM heap is not a sustainable answer. This is where the HDFS Federation mechanism comes in.

Federation Architecture Design

HDFS Federation is the horizontal scaling scheme that addresses the NameNode memory bottleneck. Federation means the cluster contains multiple NameNodes/namespaces. These NameNodes are federated: they are independent of each other, need no mutual coordination, and each manages its own region of the namespace. The DataNodes are used as common block storage: every DataNode registers with all NameNodes in the cluster, periodically sends heartbeats and block reports to all of them, and executes commands from all of them. A typical example is the NameNode memory problem above: we can simply move some of the large directories under another NameNode.
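Each namespace's block pool shows up as its own `BP-*` subdirectory under a DataNode's data dir. As a minimal sketch (hypothetical block-pool IDs, no running Hadoop required), the on-disk layout of a DataNode serving two federated NameNodes looks like this:

```shell
# Simulate a DataNode data dir that stores blocks for two block pools.
# The BP-<random>-<nn-ip>-<timestamp> names below are made-up examples.
datadir=$(mktemp -d)
mkdir -p "$datadir/current/BP-119469523-192.168.1.31-1632720000000/current/finalized"
mkdir -p "$datadir/current/BP-207316853-192.168.1.34-1632720000000/current/finalized"
# One directory per block pool, i.e. per federated namespace:
ls "$datadir/current" | grep -c '^BP-'   # → 2
```

On a real federated DataNode the same `ls` against its configured `dfs.datanode.data.dir` shows one `BP-*` directory per NameNode it serves.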
More importantly, these NameNodes share all DataNodes in the cluster; they are still part of one cluster. A DataNode therefore no longer stores data for just one block pool but for several (look for directories starting with BP- under the DataNode's data dir). To summarize:

- Multiple NameNodes share one cluster's storage resources, and each NameNode serves clients independently.
- Each NameNode defines its own block pool with a separate ID, and every DataNode provides storage for all block pools.
- A DataNode reports block information per block-pool ID to the NameNode that owns that pool, and reports its locally available storage to all NameNodes.

Cluster Deployment

Host planning

Common components:

- ZooKeeper: 192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181
- MySQL: 192.168.1.32

HDFS federation plan:

| cluster | hostname | ip | machine type | components |
| --- | --- | --- | --- | --- |
| cluster-a | hadoop-31 | 192.168.1.31 | arm | NameNode/DataNode/JournalNode/DFSZKFailoverController/DFSRouter |
| cluster-a | hadoop-32 | 192.168.1.32 |  | NameNode/DataNode/JournalNode/DFSZKFailoverController/DFSRouter |
| cluster-a | hadoop-33 | 192.168.1.33 |  | DataNode/JournalNode |
| cluster-b | spark-34 | 192.168.1.34 |  | NameNode/DataNode/JournalNode/DFSZKFailoverController/DFSRouter |
| cluster-b | spark-35 | 192.168.1.35 |  | NameNode/DataNode/JournalNode/DFSZKFailoverController/DFSRouter |
| cluster-b | spark-36 | 192.168.1.36 |  | DataNode/JournalNode |

YARN federation plan:

| cluster | hostname | ip | machine type | components |
| --- | --- | --- | --- | --- |
| cluster-a | hadoop-31 | 192.168.1.31 | arm | ResourceManager/NodeManager/Router/ApplicationHistoryServer/JobHistoryServer/WebAppProxyServer |
| cluster-a | hadoop-32 | 192.168.1.32 |  | ResourceManager/NodeManager |
| cluster-a | hadoop-33 | 192.168.1.33 |  | NodeManager |
| cluster-a | spark-34 | 192.168.1.34 |  | NodeManager |
| cluster-b | spark-35 | 192.168.1.35 |  | ResourceManager/NodeManager/Router/ApplicationHistoryServer/JobHistoryServer/WebAppProxyServer |
| cluster-b | spark-36 | 192.168.1.36 |  | ResourceManager/NodeManager |

Environment Variable Configuration
hadoop-env.sh:

```shell
export HADOOP_GC_DIR=/data/apps/hadoop-3.3.1/logs/gc
if [ ! -d ${HADOOP_GC_DIR} ]; then
  mkdir -p ${HADOOP_GC_DIR}
fi

export HADOOP_NAMENODE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1234 -javaagent:/data/apps/hadoop-3.3.1/share/hadoop/jmx_prometheus_javaagent-0.17.2.jar=9211:/data/apps/hadoop-3.3.1/etc/hadoop/namenode.yaml"
export HADOOP_DATANODE_JMX_OPTS="-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=1244 -javaagent:/data/apps/hadoop-3.3.1/share/hadoop/jmx_prometheus_javaagent-0.17.2.jar=9212:/data/apps/hadoop-3.3.1/etc/hadoop/namenode.yaml"

export HADOOP_ROOT_LOGGER=INFO,console,RFA

export SERVER_GC_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=512M -XX:ErrorFile=/data/apps/hadoop-3.3.1/logs/hs_err_pid%p.log -XX:+PrintAdaptiveSizePolicy -XX:+PrintFlagsFinal -XX:MaxGCPauseMillis=100 -XX:+ParallelRefProcEnabled -XX:ConcGCThreads=6 -XX:ParallelGCThreads=16 -XX:G1NewSizePercent=5 -XX:G1MaxNewSizePercent=60 -XX:MaxTenuringThreshold=1 -XX:G1HeapRegionSize=32m -XX:G1MixedGCCountTarget=8 -XX:InitiatingHeapOccupancyPercent=65 -XX:G1OldCSetRegionThresholdPercent=5"

export HDFS_NAMENODE_OPTS="-Xms4g -Xmx4g ${SERVER_GC_OPTS} -Xloggc:${HADOOP_GC_DIR}/namenode-gc-$(date +%Y%m%d%H%M) ${HADOOP_NAMENODE_JMX_OPTS}"
export HDFS_DATANODE_OPTS="-Xms4g -Xmx4g ${SERVER_GC_OPTS} -Xloggc:${HADOOP_GC_DIR}/datanode-gc-$(date +%Y%m%d%H%M) ${HADOOP_DATANODE_JMX_OPTS}"
export HDFS_ZKFC_OPTS="-Xms1g -Xmx1g ${SERVER_GC_OPTS} -Xloggc:${HADOOP_GC_DIR}/zkfc-gc-$(date +%Y%m%d%H%M)"
export HDFS_DFSROUTER_OPTS="-Xms1g -Xmx1g ${SERVER_GC_OPTS} -Xloggc:${HADOOP_GC_DIR}/router-gc-$(date +%Y%m%d%H%M)"
export HDFS_JOURNALNODE_OPTS="-Xms1g -Xmx1g ${SERVER_GC_OPTS} -Xloggc:${HADOOP_GC_DIR}/journalnode-gc-$(date +%Y%m%d%H%M)"

export HADOOP_CONF_DIR=/data/apps/hadoop-3.3.1/etc/hadoop
```

yarn-env.sh:

```shell
export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=2111 -javaagent:/data/apps/hadoop-3.3.1/share/hadoop/jmx_prometheus_javaagent-0.17.2.jar=9323:/data/apps/hadoop-3.3.1/etc/hadoop/yarn-rm.yaml"
export YARN_NODEMANAGER_OPTS="${YARN_NODEMANAGER_OPTS} -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=2112 -javaagent:/data/apps/hadoop-3.3.1/share/hadoop/jmx_prometheus_javaagent-0.17.2.jar=9324:/data/apps/hadoop-3.3.1/etc/hadoop/yarn-nm.yaml"
export YARN_ROUTER_OPTS="${YARN_ROUTER_OPTS}"
```
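Each daemon above gets its own JMX port and its own jmx_prometheus exporter port; if two daemons on the same host share a port, the second one fails to start. A quick sanity check over the ports used in these env files (values copied from the config above):

```shell
# JMX ports (1234, 1244, 2111, 2112) and jmx_prometheus exporter ports
# (9211, 9212, 9323, 9324) taken from hadoop-env.sh / yarn-env.sh above.
ports="1234 9211 1244 9212 2111 9323 2112 9324"
dups=$(echo "$ports" | tr ' ' '\n' | sort | uniq -d)
if [ -z "$dups" ]; then echo "no port conflicts"; else echo "conflict: $dups"; fi
```

This prints "no port conflicts" for the values above; rerun it whenever a new exporter port is added to the env files.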
Configuration Files

HDFS configuration

cluster-a core-site.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://cluster-a</value></property>
  <property><name>ha.zookeeper.quorum</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>hadoop.zk.address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>ha.zookeeper.parent-znode</name><value>/hadoop-ha-cluster-a</value></property>
  <property><name>fs.trash.interval</name><value>360</value></property>
  <property><name>fs.trash.checkpoint.interval</name><value>0</value></property>
  <property><name>hadoop.proxyuser.hduser.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hduser.groups</name><value>*</value></property>
  <property><name>hadoop.proxyuser.root.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.root.groups</name><value>*</value></property>
  <property><name>hadoop.tmp.dir</name><value>/data/apps/hadoop-3.3.1/data</value></property>
  <!-- filter initializer class for HTTP security -->
  <property><name>hadoop.http.filter.initializers</name><value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value></property>
  <!-- enable cross-origin (CORS) support -->
  <property><name>hadoop.http.cross-origin.enabled</name><value>true</value></property>
  <property><name>hadoop.http.cross-origin.allowed-origins</name><value>*</value></property>
  <property><name>hadoop.http.cross-origin.allowed-methods</name><value>GET, PUT, POST, OPTIONS, HEAD, DELETE</value></property>
  <property><name>hadoop.http.cross-origin.allowed-headers</name><value>X-Requested-With, Content-Type, Accept, Origin, WWW-Authenticate, Accept-Encoding, Transfer-Encoding</value></property>
  <property><name>hadoop.http.cross-origin.max-age</name><value>1800</value></property>
  <property><name>hadoop.http.authentication.simple.anonymous.allowed</name><value>true</value></property>
  <property><name>hadoop.http.authentication.type</name><value>simple</value></property>
  <property><name>hadoop.security.authorization</name><value>false</value></property>
  <property><name>io.file.buffer.size</name><value>131072</value></property>
  <property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value></property>
  <property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
  <property><name>ipc.client.connect.max.retries</name><value>50</value></property>
  <property><name>ipc.client.connection.maxidletime</name><value>30000</value></property>
  <property><name>ipc.client.idlethreshold</name><value>8000</value></property>
  <property><name>ipc.server.tcpnodelay</name><value>true</value></property>
</configuration>
```
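With two clusters sharing one set of machines, it is easy to lose track of which nameservice a node's core-site.xml points at. A throwaway way to check (a sample file is written to a temp path here; on a real node you would read $HADOOP_CONF_DIR/core-site.xml instead):

```shell
# Write a one-property sample in the same shape as the config above,
# then pull out fs.defaultFS with sed.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://cluster-a</value></property>
</configuration>
EOF
sed -n 's:.*<name>fs.defaultFS</name><value>\(.*\)</value>.*:\1:p' "$conf"
```

The sed line prints `hdfs://cluster-a`; on a cluster-b node the same check should print `hdfs://cluster-b`.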
cluster-a hdfs-site.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property><name>dfs.replication</name><value>2</value></property>
  <property><name>dfs.nameservices</name><value>cluster-a,cluster-b</value></property>
  <property><name>dfs.ha.namenodes.cluster-a</name><value>nn1,nn2</value></property>
  <property><name>dfs.ha.namenodes.cluster-b</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-a.nn1</name><value>192.168.1.31:8020</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-a.nn2</name><value>192.168.1.32:8020</value></property>
  <property><name>dfs.namenode.http-address.cluster-a.nn1</name><value>192.168.1.31:9870</value></property>
  <property><name>dfs.namenode.http-address.cluster-a.nn2</name><value>192.168.1.32:9870</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-b.nn1</name><value>192.168.1.34:8020</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-b.nn2</name><value>192.168.1.35:8020</value></property>
  <property><name>dfs.namenode.http-address.cluster-b.nn1</name><value>192.168.1.34:9870</value></property>
  <property><name>dfs.namenode.http-address.cluster-b.nn2</name><value>192.168.1.35:9870</value></property>
  <property><name>dfs.namenode.name.dir</name><value>/data/apps/hadoop-3.3.1/data/namenode</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/data/apps/hadoop-3.3.1/data/datanode</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/apps/hadoop-3.3.1/data/journal</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://192.168.1.31:8485;192.168.1.32:8485;192.168.1.33:8485/cluster-a</value></property>
  <property><name>dfs.client.failover.proxy.provider.cluster-a</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.client.failover.proxy.provider.cluster-b</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.client.failover.random.order</name><value>true</value></property>
  <!-- enable automatic NameNode failover -->
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <!-- fencing method -->
  <property><name>dfs.ha.zkfc.port</name><value>8019</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>shell(/bin/true)</value></property>
  <property><name>dfs.ha.nn.not-become-active-in-safemode</name><value>true</value></property>
  <property><name>dfs.permissions</name><value>false</value></property>
  <property><name>fs.checkpoint.period</name><value>3600</value></property>
  <property><name>fs.checkpoint.size</name><value>67108864</value></property>
  <property><name>fs.checkpoint.dir</name><value>/data/apps/hadoop-3.3.1/data/checkpoint</value></property>
  <property><name>dfs.datanode.hostname</name><value>192.168.1.31</value></property>
  <property><name>dfs.namenode.handler.count</name><value>20</value></property>
  <property><name>dfs.datanode.handler.count</name><value>100</value></property>
  <property><name>dfs.datanode.max.transfer.threads</name><value>100</value></property>
  <property><name>dfs.blocksize</name><value>268435456</value></property>
  <property><name>dfs.hosts.exclude</name><value>/data/apps/hadoop-3.3.1/etc/hadoop/exclude-hosts</value></property>
</configuration>
```
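The size and interval values in hdfs-site.xml are raw bytes and seconds, which is easy to misread. Converting the ones used above:

```shell
# dfs.blocksize and fs.checkpoint.size above, converted to MB;
# fs.checkpoint.period converted to hours.
echo "blocksize:  $((268435456 / 1024 / 1024)) MB"   # 256 MB
echo "checkpoint: $((67108864 / 1024 / 1024)) MB"    # 64 MB
echo "period:     $((3600 / 3600)) h"                # 1 h
```

So the cluster uses 256 MB blocks and checkpoints every hour or every 64 MB of edits, whichever comes first.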
cluster-b core-site.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://cluster-b</value></property>
  <property><name>ha.zookeeper.quorum</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>hadoop.zk.address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>ha.zookeeper.parent-znode</name><value>/hadoop-ha-cluster-b</value></property>
  <property><name>fs.trash.interval</name><value>360</value></property>
  <property><name>fs.trash.checkpoint.interval</name><value>0</value></property>
  <property><name>hadoop.proxyuser.hduser.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.hduser.groups</name><value>*</value></property>
  <property><name>hadoop.proxyuser.root.hosts</name><value>*</value></property>
  <property><name>hadoop.proxyuser.root.groups</name><value>*</value></property>
  <property><name>hadoop.tmp.dir</name><value>/data/apps/hadoop-3.3.1/data</value></property>
  <!-- filter initializer class for HTTP security -->
  <property><name>hadoop.http.filter.initializers</name><value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value></property>
  <!-- enable cross-origin (CORS) support -->
  <property><name>hadoop.http.cross-origin.enabled</name><value>true</value></property>
  <property><name>hadoop.http.cross-origin.allowed-origins</name><value>*</value></property>
  <property><name>hadoop.http.cross-origin.allowed-methods</name><value>GET, PUT, POST, OPTIONS, HEAD, DELETE</value></property>
  <property><name>hadoop.http.cross-origin.allowed-headers</name><value>X-Requested-With, Content-Type, Accept, Origin, WWW-Authenticate, Accept-Encoding, Transfer-Encoding</value></property>
  <property><name>hadoop.http.cross-origin.max-age</name><value>1800</value></property>
  <property><name>hadoop.http.authentication.simple.anonymous.allowed</name><value>true</value></property>
  <property><name>hadoop.http.authentication.type</name><value>simple</value></property>
  <property><name>hadoop.security.authorization</name><value>false</value></property>
  <property><name>io.file.buffer.size</name><value>131072</value></property>
  <property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value></property>
  <property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
  <property><name>ipc.client.connect.max.retries</name><value>50</value></property>
  <property><name>ipc.client.connection.maxidletime</name><value>30000</value></property>
  <property><name>ipc.client.idlethreshold</name><value>8000</value></property>
  <property><name>ipc.server.tcpnodelay</name><value>true</value></property>
</configuration>
```
cluster-b hdfs-site.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property><name>dfs.replication</name><value>2</value></property>
  <property><name>dfs.nameservices</name><value>cluster-a,cluster-b</value></property>
  <property><name>dfs.ha.namenodes.cluster-a</name><value>nn1,nn2</value></property>
  <property><name>dfs.ha.namenodes.cluster-b</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-a.nn1</name><value>192.168.1.31:8020</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-a.nn2</name><value>192.168.1.32:8020</value></property>
  <property><name>dfs.namenode.http-address.cluster-a.nn1</name><value>192.168.1.31:9870</value></property>
  <property><name>dfs.namenode.http-address.cluster-a.nn2</name><value>192.168.1.32:9870</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-b.nn1</name><value>192.168.1.34:8020</value></property>
  <property><name>dfs.namenode.rpc-address.cluster-b.nn2</name><value>192.168.1.35:8020</value></property>
  <property><name>dfs.namenode.http-address.cluster-b.nn1</name><value>192.168.1.34:9870</value></property>
  <property><name>dfs.namenode.http-address.cluster-b.nn2</name><value>192.168.1.35:9870</value></property>
  <property><name>dfs.namenode.name.dir</name><value>/data/apps/hadoop-3.3.1/data/namenode</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/data/apps/hadoop-3.3.1/data/datanode</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/apps/hadoop-3.3.1/data/journal</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://192.168.1.34:8485;192.168.1.35:8485;192.168.1.36:8485/cluster-b</value></property>
  <property><name>dfs.client.failover.proxy.provider.cluster-a</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.client.failover.proxy.provider.cluster-b</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.client.failover.random.order</name><value>true</value></property>
  <!-- enable automatic NameNode failover -->
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
  <!-- fencing method -->
  <property><name>dfs.ha.zkfc.port</name><value>8019</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>shell(/bin/true)</value></property>
  <property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
  <property><name>dfs.ha.nn.not-become-active-in-safemode</name><value>true</value></property>
  <property><name>dfs.permissions</name><value>false</value></property>
  <property><name>fs.checkpoint.period</name><value>3600</value></property>
  <property><name>fs.checkpoint.size</name><value>67108864</value></property>
  <property><name>fs.checkpoint.dir</name><value>/data/apps/hadoop-3.3.1/data/checkpoint</value></property>
  <property><name>dfs.datanode.hostname</name><value>192.168.1.34</value></property>
  <property><name>dfs.namenode.handler.count</name><value>20</value></property>
  <property><name>dfs.datanode.handler.count</name><value>100</value></property>
  <property><name>dfs.datanode.max.transfer.threads</name><value>100</value></property>
  <property><name>dfs.blocksize</name><value>268435456</value></property>
  <property><name>dfs.hosts.exclude</name><value>/data/apps/hadoop-3.3.1/etc/hadoop/exclude-hosts</value></property>
</configuration>
```
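With both clusters' HDFS configs in place, an HA + federation cluster is normally brought up in a fixed order, per cluster. A rough sketch of that sequence (standard Hadoop 3.x daemon commands, to be run on the hosts indicated for each cluster; this assumes a prepared environment and is not meant to be pasted as a single script):

```shell
# 1. Start the JournalNodes on all three JournalNode hosts of the cluster.
hdfs --daemon start journalnode

# 2. On the first NameNode (e.g. hadoop-31 for cluster-a): format and start.
hdfs namenode -format
hdfs --daemon start namenode

# 3. On the second NameNode: copy the metadata over and start.
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode

# 4. Initialize the HA state in ZooKeeper, then start ZKFC on both NameNodes.
hdfs zkfc -formatZK
hdfs --daemon start zkfc

# 5. Start the DataNodes; with federation they register with every NameNode.
hdfs --daemon start datanode
```

Repeat the same sequence for cluster-b on its own hosts; step 4 creates a separate znode per cluster because each one sets its own ha.zookeeper.parent-znode above.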
YARN configuration

cluster-a yarn-site.xml:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- AM retry count -->
  <property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value></property>
  <!-- enable Federation -->
  <property><name>yarn.federation.enabled</name><value>true</value></property>
  <property><name>yarn.router.bind-host</name><value>192.168.1.31</value></property>
  <property><name>yarn.router.hostname</name><value>192.168.1.31</value></property>
  <property><name>yarn.router.webapp.address</name><value>192.168.1.31:8099</value></property>
  <property><name>yarn.federation.state-store.class</name><value>org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore</value></property>
  <property><name>yarn.nodemanager.amrmproxy.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.amrmproxy.ha.enable</name><value>true</value></property>
  <!-- enable RM HA -->
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <!-- RM cluster id -->
  <property><name>yarn.resourcemanager.cluster-id</name><value>cluster-a</value></property>
  <!-- RM ids -->
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <!-- addresses of the two RMs -->
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>192.168.1.31</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>192.168.1.32</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>192.168.1.31:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>192.168.1.32:8088</value></property>
  <!-- ZooKeeper ensemble -->
  <property><name>yarn.resourcemanager.zk-address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>hadoop.zk.address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</name><value>10000</value></property>
  <!-- enable RM restart/recovery (default: false) -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name><value>true</value>
    <description>Enable RM to recover state after starting. If true, yarn.resourcemanager.store.class must be specified.</description>
  </property>
  <!-- Alternative state stores, kept commented out in the original:
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore</value></property>
  <property><name>yarn.resourcemanager.leveldb-state-store.path</name><value>${hadoop.tmp.dir}/yarn-rm-recovery/leveldb</value></property>
  -->
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
  <property><name>yarn.resourcemanager.fs.state-store.uri</name><value>hdfs://cluster-a/yarn/rmstore</value></property>
  <property><name>yarn.resourcemanager.state-store.max-completed-applications</name><value>${yarn.resourcemanager.max-completed-applications}</value></property>
  <!-- NodeManager -->
  <property><name>yarn.nodemanager.address</name><value>192.168.1.31:45454</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
  <property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value></property>
  <!-- aggregated application logs, stored on HDFS -->
  <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/yarn/logs</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/data/apps/hadoop-3.3.1/data/yarn/local</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/data/apps/hadoop-3.3.1/data/yarn/log</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.nodemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.recovery.dir</name><value>${hadoop.tmp.dir}/yarn-nm-recovery</value></property>
  <property><name>yarn.nodemanager.recovery.supervised</name><value>true</value></property>
  <!--
  <property><name>yarn.nodemanager.aux-services.timeline_collector.class</name><value>org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService</value></property>
  -->
  <property><name>yarn.application.classpath</name><value>/data/apps/hadoop-3.3.1/etc/hadoop,/data/apps/hadoop-3.3.1/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/common/*,/data/apps/hadoop-3.3.1/share/hadoop/common/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/hdfs/*,/data/apps/hadoop-3.3.1/share/hadoop/hdfs/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/mapreduce/*,/data/apps/hadoop-3.3.1/share/hadoop/mapreduce/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/yarn/*,/data/apps/hadoop-3.3.1/share/hadoop/yarn/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/tools/*,/data/apps/hadoop-3.3.1/share/hadoop/tools/lib/*</value></property>
  <!-- These take effect only after starting the proxy server (yarn-daemon.sh start proxyserver); the Spark job monitoring UI is then visible. -->
  <property><name>yarn.webapp.api-service.enable</name><value>true</value></property>
  <property><name>yarn.webapp.ui2.enable</name><value>true</value></property>
  <property><name>yarn.web-proxy.address</name><value>192.168.1.31:8089</value></property>
  <!-- if not HA:
  <property><name>yarn.resourcemanager.webapp.address</name><value>192.168.1.31:8088</value></property>
  -->
  <property><name>yarn.resourcemanager.webapp.ui-actions.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.webapp.cross-origin.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.webapp.cross-origin.enabled</name><value>true</value></property>
  <property><name>hadoop.http.cross-origin.allowed-origins</name><value>*</value></property>
  <!-- enable log aggregation -->
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.log.server.url</name><value>http://192.168.1.31:19888/jobhistory/logs</value></property>
  <property><name>yarn.log.server.web-service.url</name><value>http://192.168.1.31:8188/ws/v1/applicationhistory</value></property>
  <!-- Timeline service settings; the service is disabled by default -->
  <property><name>yarn.timeline-service.bind-host</name><value>192.168.1.31</value></property>
  <property><name>yarn.timeline-service.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.hostname</name><value>192.168.1.31</value></property>
  <property><name>yarn.timeline-service.address</name><value>192.168.1.31:10200</value></property>
  <property><name>yarn.timeline-service.webapp.address</name><value>192.168.1.31:8188</value></property>
  <property><name>yarn.timeline-service.http-cross-origin.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.webapp.https.address</name><value>192.168.1.31:8190</value></property>
  <property><name>yarn.timeline-service.handler-thread-count</name><value>10</value></property>
  <property><name>yarn.timeline-service.http-authentication.simple.anonymous.allowed</name><value>true</value></property>
  <property>
    <name>yarn.timeline-service.http-cross-origin.allowed-origins</name><value>*</value>
    <description>Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed.</description>
  </property>
  <property>
    <name>yarn.timeline-service.http-cross-origin.allowed-methods</name><value>GET,POST,HEAD</value>
    <description>Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support.</description>
  </property>
  <property>
    <name>yarn.timeline-service.http-cross-origin.allowed-headers</name><value>X-Requested-With,Content-Type,Accept,Origin</value>
    <description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
  </property>
  <property>
    <name>yarn.timeline-service.http-cross-origin.max-age</name><value>1800</value>
    <description>The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support.</description>
  </property>
  <!-- whether generic application history comes from the timeline history-service; if false it is queried from the RM (default: false) -->
  <property>
    <name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value>
    <description>Indicate to clients whether to query generic application data from timeline history-service or not. If not enabled then application data is queried only from the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.timeline-service.generic-application-history.store-class</name><value>org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore</value>
    <description>Store class name for history store, defaulting to file system store.</description>
  </property>
  <!-- this path is on HDFS -->
  <property>
    <name>yarn.timeline-service.generic-application-history.fs-history-store.uri</name><value>/yarn/timeline/generic-history</value>
    <description>URI pointing to the location of the FileSystem path where the history will be persisted.</description>
  </property>
  <property>
    <name>yarn.timeline-service.store-class</name><value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
    <description>Store class name for timeline store.</description>
  </property>
  <!-- leveldb holds the Timeline history; this sets where its files live (default: ${hadoop.tmp.dir}/yarn/timeline) -->
  <property><name>yarn.timeline-service.leveldb-timeline-store.path</name><value>${hadoop.tmp.dir}/yarn/timeline/timeline</value></property>
  <!-- leveldb state-file path (default: ${hadoop.tmp.dir}/yarn/timeline) -->
  <property><name>yarn.timeline-service.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.leveldb-state-store.path</name><value>${hadoop.tmp.dir}/yarn/timeline/state</value></property>
  <property>
    <name>yarn.timeline-service.ttl-enable</name><value>true</value>
    <description>Enable age off of timeline store data.</description>
  </property>
  <property>
    <name>yarn.timeline-service.ttl-ms</name><value>6048000000</value>
    <description>Time to live for timeline store data in milliseconds.</description>
  </property>
  <property><name>yarn.timeline-service.generic-application-history.max-applications</name><value>100000</value></property>
  <!-- whether the RM publishes system metrics to the timeline server (default: false) -->
  <property>
    <name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value>
    <description>The setting that controls whether yarn system metrics is published on the timeline server or not by RM.</description>
  </property>
  <property><name>yarn.cluster.max-application-priority</name><value>5</value></property>
  <property><name>hadoop.http.filter.initializers</name><value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer,org.apache.hadoop.http.lib.StaticUserWebFilter</value></property>
</configuration>
```

cluster-b yarn-site.xml:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- AM retry count -->
  <property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value></property>
  <!-- enable Federation -->
  <property><name>yarn.federation.enabled</name><value>true</value></property>
  <property><name>yarn.router.bind-host</name><value>192.168.1.35</value></property>
  <property><name>yarn.router.hostname</name><value>192.168.1.35</value></property>
  <property><name>yarn.router.webapp.address</name><value>192.168.1.35:8099</value></property>
  <property><name>yarn.federation.state-store.class</name><value>org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore</value></property>
  <property><name>yarn.nodemanager.amrmproxy.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.amrmproxy.ha.enable</name><value>true</value></property>
  <!-- enable RM HA -->
  <property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
  <!-- RM cluster id -->
  <property><name>yarn.resourcemanager.cluster-id</name><value>cluster-b</value></property>
  <!-- RM ids -->
  <property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
  <!-- addresses of the two RMs -->
  <property><name>yarn.resourcemanager.hostname.rm1</name><value>192.168.1.35</value></property>
  <property><name>yarn.resourcemanager.hostname.rm2</name><value>192.168.1.36</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm1</name><value>192.168.1.35:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.address.rm2</name><value>192.168.1.36:8088</value></property>
  <!-- ZooKeeper ensemble -->
  <property><name>yarn.resourcemanager.zk-address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>hadoop.zk.address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</name><value>10000</value></property>
  <!-- enable RM restart/recovery (default: false) -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name><value>true</value>
    <description>Enable RM to recover state after starting. If true, yarn.resourcemanager.store.class must be specified.</description>
  </property>
  <!-- Alternative state stores, kept commented out in the original:
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>192.168.1.31:2181,192.168.1.32:2181,192.168.1.33:2181</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore</value></property>
  <property><name>yarn.resourcemanager.leveldb-state-store.path</name><value>${hadoop.tmp.dir}/yarn-rm-recovery/leveldb</value></property>
  -->
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</value></property>
  <property><name>yarn.resourcemanager.fs.state-store.uri</name><value>hdfs://cluster-b/yarn/rmstore</value></property>
  <property><name>yarn.resourcemanager.state-store.max-completed-applications</name><value>${yarn.resourcemanager.max-completed-applications}</value></property>
  <!-- NodeManager -->
  <property><name>yarn.nodemanager.address</name><value>192.168.1.35:45454</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>16384</value></property>
  <property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value></property>
  <!-- aggregated application logs, stored on HDFS -->
  <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/yarn/logs</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/data/apps/hadoop-3.3.1/data/yarn/local</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/data/apps/hadoop-3.3.1/data/yarn/log</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.nodemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.recovery.dir</name><value>${hadoop.tmp.dir}/yarn-nm-recovery</value></property>
  <property><name>yarn.nodemanager.recovery.supervised</name><value>true</value></property>
  <!--
  <property><name>yarn.nodemanager.aux-services.timeline_collector.class</name><value>org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService</value></property>
  -->
  <property><name>yarn.application.classpath</name><value>/data/apps/hadoop-3.3.1/etc/hadoop,/data/apps/hadoop-3.3.1/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/common/*,/data/apps/hadoop-3.3.1/share/hadoop/common/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/hdfs/*,/data/apps/hadoop-3.3.1/share/hadoop/hdfs/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/mapreduce/*,/data/apps/hadoop-3.3.1/share/hadoop/mapreduce/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/yarn/*,/data/apps/hadoop-3.3.1/share/hadoop/yarn/lib/*,/data/apps/hadoop-3.3.1/share/hadoop/tools/*,/data/apps/hadoop-3.3.1/share/hadoop/tools/lib/*</value></property>
  <!-- These take effect only after starting the proxy server (yarn-daemon.sh start proxyserver); the Spark job monitoring UI is then visible. -->
  <property><name>yarn.webapp.api-service.enable</name><value>true</value></property>
  <property><name>yarn.webapp.ui2.enable</name><value>true</value></property>
  <property><name>yarn.web-proxy.address</name><value>192.168.1.35:8089</value></property>
  <!-- if not HA:
  <property><name>yarn.resourcemanager.webapp.address</name><value>192.168.1.35:8088</value></property>
  -->
  <property><name>yarn.resourcemanager.webapp.ui-actions.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.webapp.cross-origin.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.webapp.cross-origin.enabled</name><value>true</value></property>
  <property><name>hadoop.http.cross-origin.allowed-origins</name><value>*</value></property>
  <!-- enable log aggregation -->
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.log.server.url</name><value>http://192.168.1.35:19888/jobhistory/logs</value></property>
  <property><name>yarn.log.server.web-service.url</name><value>http://192.168.1.35:8188/ws/v1/applicationhistory</value></property>
  <!-- Timeline service settings; the service is disabled by default -->
  <property><name>yarn.timeline-service.bind-host</name><value>192.168.1.35</value></property>
  <property><name>yarn.timeline-service.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.hostname</name><value>192.168.1.35</value></property>
  <property><name>yarn.timeline-service.address</name><value>192.168.1.35:10200</value></property>
  <property><name>yarn.timeline-service.webapp.address</name><value>192.168.1.35:8188</value></property>
  <property><name>yarn.timeline-service.http-cross-origin.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.webapp.https.address</name><value>192.168.1.35:8190</value></property>
  <property><name>yarn.timeline-service.handler-thread-count</name><value>10</value></property>
  <property><name>yarn.timeline-service.http-authentication.simple.anonymous.allowed</name><value>true</value></property>
  <property><name>yarn.timeline-service.http-cross-origin.allowed-origins</name><value>*</value></property>
  <property><name>yarn.timeline-service.http-cross-origin.allowed-methods</name><value>GET,POST,HEAD</value></property>
  <property><name>yarn.timeline-service.http-cross-origin.allowed-headers</name><value>X-Requested-With,Content-Type,Accept,Origin</value></property>
  <property><name>yarn.timeline-service.http-cross-origin.max-age</name><value>1800</value></property>
  <property>
    <name>yarn.timeline-service.generic-application-history.enabled</name><value>true</value>
    <description>Indicate to clients whether to query generic application data from timeline history-service or not.</description>
  </property>
  <!-- ... -->
</configuration>
```
If not enabled then applicationdata is queried only from Resource Manager.向资源管理器和客户端指示是否历史记录-服务是否启用。如果启用资源管理器将启动记录工时记录服务可以使用历史数据。同样当应用程序如果启用此选项请完成./description/propertypropertynameyarn.timeline-service.generic-application-history.store-class/namevalueorg.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore/valuedescriptionStore class name for history store, defaulting to file system store/description/property!-- 这个地址是HDFS上的地址 --propertynameyarn.timeline-service.generic-application-history.fs-history-store.uri/namevalue/yarn/timeline/generic-history/valuedescriptionURI pointing to the location of the FileSystem path where the history will be persisted./description/propertypropertynameyarn.timeline-service.store-class/namevalueorg.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore/valuedescriptionStore class name for timeline store./description/property!-- leveldb是用于存放Timeline历史记录的数据库,此参数控制leveldb文件存放路径所在 --!-- 默认值:\){hadoop.tmp.dir}/yarn/timeline,其中hadoop.tmp.dir在core-site.xml中设置 –propertynameyarn.timeline-service.leveldb-timeline-store.path/namevalue\({hadoop.tmp.dir}/yarn/timeline/timeline/valuedescriptionStore file name for leveldb timeline store./description/property!-- 设置leveldb中状态文件存放路径 --!-- 默认值:\){hadoop.tmp.dir}/yarn/timeline –propertynameyarn.timeline-service.recovery.enabled/namevaluetrue/value/propertypropertynameyarn.timeline-service.leveldb-state-store.path/namevalue${hadoop.tmp.dir}/yarn/timeline/state/valuedescriptionStore file name for leveldb state store./description/propertypropertynameyarn.timeline-service.ttl-enable/namevaluetrue/valuedescriptionEnable age off of timeline store data./description/propertypropertynameyarn.timeline-service.ttl-ms/namevalue6048000000/valuedescriptionTime to live for timeline store data in milliseconds./description/propertypropertynameyarn.timeline-service.generic-application-history.max-applications/namevalue100000/value/property!– 设置RM是否发布信息到Timeline服务器 –!– 默认值:false 
–propertynameyarn.resourcemanager.system-metrics-publisher.enabled/namevaluetrue/valuedescriptionThe setting that controls whether yarn system metrics is published on the timeline server or not byRM./description/propertypropertynameyarn.cluster.max-application-priority/namevalue5/value/propertypropertynamehadoop.http.filter.initializers/namevalueorg.apache.hadoop.security.HttpCrossOriginFilterInitializer,org.apache.hadoop.http.lib.StaticUserWebFilter/value/property /configurationHDFS启动命令 启动journalnode 在规划的journalnode上启动journalnode组件(必须先启动 journalnode, 否则namenode启动时无法连接) bin/hdfs –daemon start journalnode

The resulting process is JournalNode.

[Primary] Format the NameNode (first time only):

```shell
bin/hdfs namenode -format -clusterId CID-55a8ca40-4cfa-478f-a93d-3238b8b50e86 -nonInteractive {{ $cluster }}
```

[Primary] Start the NameNode:

```shell
bin/hdfs --daemon start namenode
```
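Federation requires every NameNode to be formatted with the same clusterId (here the CID-… value passed to -format); the ID is recorded in the clusterID field of each NameNode's metadata VERSION file, so mismatches can be spotted by comparing that field across nodes. A minimal sketch of extracting it; the VERSION file contents below are a made-up sample, only the clusterID value comes from this document:

```shell
# Write a sample VERSION file (contents assumed for illustration;
# namespaceID and cTime are made up, clusterID matches the -format command)
f=$(mktemp)
cat > "$f" <<'EOF'
namespaceID=1453274452
clusterID=CID-55a8ca40-4cfa-478f-a93d-3238b8b50e86
cTime=0
EOF
# Pull out the clusterID field, as one would from
# <dfs.namenode.name.dir>/current/VERSION on a real NameNode
cid=$(grep '^clusterID=' "$f" | cut -d= -f2)
echo "$cid"
```

Running the same extraction on each NameNode of both clusters and diffing the results is a quick sanity check before starting the standbys.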

The resulting process is NameNode.

[Primary] Format the ZKFC state in ZooKeeper (first time only):

```shell
bin/hdfs zkfc -formatZK
```

[Primary] Start the ZKFC:

```shell
bin/hdfs --daemon start zkfc
```

The resulting process is DFSZKFailoverController.

[Primary] Start the router (RBF mode only):

```shell
bin/hdfs --daemon start dfsrouter
```

The resulting process is DFSRouter.

[Standby] Sync the NameNode metadata (first time only):

```shell
bin/hdfs namenode -bootstrapStandby
```

[Standby] Start the NameNode:

```shell
bin/hdfs --daemon start namenode
```

The resulting process is NameNode.

[Standby] Start the ZKFC:

```shell
bin/hdfs --daemon start zkfc
```

The resulting process is DFSZKFailoverController.

[Standby] Start the router (RBF mode only):

```shell
bin/hdfs --daemon start dfsrouter
```

The resulting process is DFSRouter.

Start the DataNodes. Run the following on every DataNode host:

```shell
bin/hdfs --daemon start datanode
```
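As noted in the background section, once a DataNode has registered with both namespaces its data directory contains one BP-* subdirectory per block pool. The sketch below simulates that layout in a temp directory (the block-pool IDs are made up) so the check can be seen without a live cluster:

```shell
# Simulate a DataNode data dir with two block pools, one per namespace
# (BP-* names are hypothetical; real ones embed the NameNode IP and a timestamp)
dn=$(mktemp -d)
mkdir -p "$dn/current/BP-1073741825-192.168.1.31-1700000000000" \
         "$dn/current/BP-1073741826-192.168.1.34-1700000000001"
# Count block pools, as one would in the real datadir's current/ directory
pools=$(ls "$dn/current" | grep -c '^BP-')
echo "block pools: $pools"
```

On a real node, running the same ls against the configured datadir should show one BP-* directory per federated namespace.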

The resulting process is DataNode.

Stop commands

The quick way:

```shell
kill -9 $(jps -m | egrep -iw 'journalnode|NameNode|DataNode|DFSZKFailoverController|DFSRouter' | awk '{print $1}')
kill -9 $(jps -m | egrep -iw 'ResourceManager|NodeManager|Router' | awk '{print $1}')
rm -rf /data/apps/hadoop-3.3.1/data/* /data/apps/hadoop-3.3.1/logs/*
```

Then clean up the state left in ZooKeeper:

```shell
[root@hadoop-31 zookeeper-3.8.0]# bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[federationstore, hadoop-ha-cluster-a, hadoop-ha-cluster-b, hdfs-federation, yarn-leader-election, zookeeper]
[zk: localhost:2181(CONNECTED) 1] deleteall /federationstore
[zk: localhost:2181(CONNECTED) 2] deleteall /hadoop-ha-cluster-a
[zk: localhost:2181(CONNECTED) 3] deleteall /hadoop-ha-cluster-b
[zk: localhost:2181(CONNECTED) 4] deleteall /hdfs-federation
[zk: localhost:2181(CONNECTED) 5] deleteall /yarn-leader-election
```

The graceful way:

```shell
bin/hdfs --daemon stop datanode
bin/hdfs --daemon stop dfsrouter
bin/hdfs --daemon stop zkfc
bin/hdfs --daemon stop namenode
bin/hdfs --daemon stop journalnode
```

YARN startup commands

[Primary] Start the ResourceManager:

```shell
bin/yarn --daemon start resourcemanager
```
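The quick-stop commands above collect PIDs with a jps -m | egrep -iw | awk pipeline. The sketch below runs the same pipeline against canned jps output (the PIDs and process list are made up) so its behavior can be checked without a live cluster:

```shell
# Canned sample of `jps -m` output (hypothetical PIDs)
sample='12345 NameNode
23456 DataNode
34567 NodeManager
45678 Jps -m'
# Same filter as the stop command: -i case-insensitive, -w whole-word match,
# awk keeps only the PID column
pids=$(echo "$sample" | egrep -iw 'NameNode|DataNode' | awk '{print $1}')
echo "$pids"
```

Note the -w flag: it keeps the pattern Router from also matching DFSRouter in the second kill command, which is why the HDFS and YARN process lists are killed separately.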

The resulting process is ResourceManager.

[Standby] Start the ResourceManager:

```shell
bin/yarn --daemon start resourcemanager
```

The resulting process is ResourceManager.

Start the NodeManagers:

```shell
bin/yarn --daemon start nodemanager
```

The resulting process is NodeManager.

Start the timeline server:

```shell
bin/yarn --daemon start timelineserver
# process: ApplicationHistoryServer
```

Start the proxy server:

```shell
bin/yarn --daemon start proxyserver
# process: WebAppProxyServer
```

Start the history server (MR):

```shell
bin/mapred --daemon start historyserver
# process: JobHistoryServer
```

[Primary] Start the router (Federation mode only):

```shell
bin/yarn --daemon start router
```
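With this many daemons per host, a quick way to spot one that failed to start is to compare the planned process list for a host (from the YARN-federation table above) against what jps actually reports. A minimal sketch; the running list here is canned sample output, not real jps output:

```shell
# Daemons planned for hadoop-31 / spark-35 per the YARN-federation table
expected='ResourceManager NodeManager Router ApplicationHistoryServer JobHistoryServer WebAppProxyServer'
# Canned stand-in for `jps | awk '{print $2}'` output (hypothetical)
running='ResourceManager
NodeManager
JobHistoryServer'
missing=''
for d in $expected; do
  # -x: the whole line must match, so Router does not match DFSRouter
  echo "$running" | grep -qx "$d" || missing="$missing $d"
done
echo "missing:$missing"
```

On a real host, replacing the canned running list with actual jps output turns this into a one-shot startup check.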

The resulting process is Router.

Failover commands

Switch nn1 to active:

```shell
bin/hdfs haadmin -ns cluster-a -transitionToActive --forceactive --forcemanual nn1
```

Switch nn2 to active (fail over from nn1):

```shell
bin/hdfs haadmin -ns cluster-a -failover nn1 nn2
```
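After a failover it is worth confirming which NameNode is actually active, which the real cluster answers with `bin/hdfs haadmin -ns cluster-a -getServiceState <nn>`. The sketch below shows that check loop; a mock hdfs shell function stands in for the real binary (its hard-coded answers are assumptions) so the loop itself can run anywhere:

```shell
# Mock of `hdfs haadmin -ns <ns> -getServiceState <nn>`; the states returned
# here are hard-coded assumptions standing in for a live cluster
hdfs() { case "$5" in nn1) echo active ;; nn2) echo standby ;; esac; }
states=''
for nn in nn1 nn2; do
  states="$states $nn=$(hdfs haadmin -ns cluster-a -getServiceState "$nn")"
done
echo "$states"
```

Dropping the mock function and pointing the loop at the real bin/hdfs gives a per-namespace state report; repeating it with -ns cluster-b covers the second namespace.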