


Hadoop 2.6.0 Distributed Deployment Reference Manual

Contents
1. Environment
  1.1 Installation environment
  1.2 Hadoop cluster layout
2. Base environment setup
  2.1 Add the hadoop user
  2.2 Install JDK 1.7
  2.3 Passwordless SSH login
  2.4 Edit the hosts mapping file
3. Hadoop installation and configuration
  3.1 Common installation and configuration
  3.2 Per-node configuration
4. Format and start the cluster
  4.1 Format the cluster HDFS file system
  4.2 Start the Hadoop cluster
Appendix 1: Key configuration reference (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, hadoop-env.sh, slaves)
Appendix 2: Full configuration reference (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, hadoop-env.sh, slaves)
Appendix 3: Configuration parameter reference (conf/core-site.xml; conf/hdfs-site.xml: NameNode, DataNode; conf/yarn-site.xml: ResourceManager and NodeManager, ResourceManager, NodeManager, History Server; conf/mapred-site.xml: MapReduce Applications, MapReduce JobHistory Server)

1. Environment

1.1 Installation environment
In this manual the operating system is CentOS 7.0, the JDK is Oracle HotSpot 1.7, the Hadoop release is Apache Hadoop 2.6.0, and the operating user is hadoop.

1.2 Hadoop cluster layout
The cluster nodes are as follows (IP addresses as mapped in section 2.4):

    Hostname           IP address   Role
    ResourceManager    172.15.0.2   ResourceManager & MR JobHistory Server
    NameNode           172.15.0.3   NameNode
    SecondaryNameNode  172.15.0.4   SecondaryNameNode
    DataNode01         172.15.0.5   DataNode & NodeManager
    DataNode02         172.15.0.6   DataNode & NodeManager
    DataNode03         172.15.0.7   DataNode & NodeManager
    DataNode04         172.15.0.8   DataNode & NodeManager
    DataNode05         172.15.0.9   DataNode & NodeManager

Note: "&" joins multiple roles on one host; the host ResourceManager, for example, carries two roles, ResourceManager and MR JobHistory Server.

2. Base environment setup

2.1 Add the hadoop user

    useradd hadoop

The user "hadoop" is the user that installs and operates the Hadoop cluster.

2.2 Install JDK 1.7
CentOS 7 ships with OpenJDK 1.7; this manual replaces it with Oracle HotSpot 1.7, installed by unpacking the binary package into /opt/.
(1) List the installed JDK rpm packages:

    rpm -qa | grep jdk

(2) Remove the bundled JDK (once per package reported above):

    rpm -e --nodeps <package-name>

(3) Install the chosen JDK: change into the directory holding the package and unpack it there.
(4) Configure environment variables: edit ~/.bashrc (or /etc/profile) and append:

    #JAVA
    export JAVA_HOME=/opt/jdk1.7
    export PATH=$PATH:$JAVA_HOME/bin
    export CLASSPATH=$JAVA_HOME/lib
    export CLASSPATH=$CLASSPATH:$JAVA_HOME/jre/lib

2.3 Passwordless SSH login
(1) Passwordless SSH must be set up among the eight hosts listed in the table above.
(2) As the hadoop user, go to the home directory and generate a key pair with ssh-keygen -t rsa.
(3) Create the public-key authentication file authorized_keys and write the content of the generated ~/.ssh/id_rsa.pub into it:

    more ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys

(4) Set the permissions of the ~/.ssh directory and the authorized_keys file:

    chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys

(5) Repeat the steps above on every node, and copy each host's ~/.ssh/id_rsa.pub public key to every other host.
The whole procedure can also be done in one line:

    rm -rf ~/.ssh; ssh-keygen -t rsa; chmod 700 ~/.ssh; more ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys

Note: on CentOS 6 a dsa key (ssh-keygen -t dsa) also works for passwordless login; on CentOS 7 only rsa works — with dsa, ssh logs in to the local host without a password but not to remote hosts.

2.4 Edit the hosts mapping file
Edit /etc/hosts on every node and append:

    172.15.0.2 ResourceManager
    172.15.0.3 NameNode
    172.15.0.4 SecondaryNameNode
    172.15.0.5 DataNode01
    172.15.0.6 DataNode02
    172.15.0.7 DataNode03
    172.15.0.8 DataNode04
    172.15.0.9 DataNode05
    172.15.0.5 NodeManager01
    172.15.0.6 NodeManager02
    172.15.0.7 NodeManager03
    172.15.0.8 NodeManager04
    172.15.0.9 NodeManager05

3. Hadoop installation and configuration

3.1 Common installation and configuration
The following steps are identical on every node; repeat them on each node:
(1) Copy the Hadoop package hadoop-2.6.0.tar to /opt and unpack it. The unpacked directory (/opt/hadoop-2.6.0) is Hadoop's installation root.
(2) Change the owner of the installation directory to the hadoop user:

    chown -R hadoop.hadoop /opt/hadoop-2.6.0

(3) Add environment variables:

    #hadoop
    export HADOOP_HOME=/opt/hadoop-2.6.0
    export PATH=$PATH:$HADOOP_HOME/bin
    export PATH=$PATH:$HADOOP_HOME/sbin

3.2 Per-node configuration
Unpack and distribute the configuration files to the $HADOOP_HOME/etc/hadoop directory on every node; if prompted about overwriting existing files, confirm.
Note: for the configuration parameter values on each node, see Appendix 1 or Appendix 2 below.

4. Format and start the cluster

4.1 Format the cluster HDFS file system
After installation, log in to the NameNode (or any DataNode) and format the cluster's HDFS file system with:

    hdfs namenode -format

Note: if this is not the first time the HDFS file system is formatted, first empty the NameNode's dfs.namenode.name.dir directory and every DataNode's dfs.datanode.data.dir directory (in this manual, /home/hadoop/hadoopdata).

4.2 Start the Hadoop cluster
Log in to the hosts below and run the corresponding commands:
(1) On ResourceManager, run start-yarn.sh to start the YARN cluster resource-management system.
(2) On NameNode, run start-dfs.sh to start the cluster HDFS file system.
(3) On SecondaryNameNode and DataNode01 through DataNode05, run jps and check that the expected Java processes are present:

    ResourceManager node:     ResourceManager
    NameNode node:            NameNode
    SecondaryNameNode node:   SecondaryNameNode
    each DataNode node:       DataNode & NodeManager

If all of the above look right, the Hadoop cluster has started correctly.

Appendix 1: Key configuration reference

(1) core-site.xml

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://NameNode:9000</value>
        <description>NameNode URI</description>
      </property>
    </configuration>
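With core-site.xml in place, a single property can be pulled back out of the file for a quick check. The sed helper below is a hypothetical convenience, not part of Hadoop — on a running cluster, `hdfs getconf -confKey fs.defaultFS` is the supported query — and the /tmp path is just a scratch location for the usage example:

```shell
# Hypothetical helper: extract one property value from a Hadoop *-site.xml
# using POSIX sed alone. Assumes one <name>/<value> pair per <property>,
# each on its own line, as in the listings in this manual.
get_prop() {
  # $1 = config file, $2 = property name
  sed -n "/<name>$2<\/name>/,/<\/property>/p" "$1" |
    sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p'
}

# Usage against a minimal core-site.xml written to a scratch file:
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode:9000</value>
  </property>
</configuration>
EOF
get_prop /tmp/core-site.xml fs.defaultFS    # prints hdfs://NameNode:9000
```

The same helper works unchanged on hdfs-site.xml, mapred-site.xml and yarn-site.xml, since they all share the <property>/<name>/<value> layout.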
- fs.defaultFS is the NameNode address, of the form hdfs://hostname (or IP):port.

(2) hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
      </property>
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>SecondaryNameNode:50090</value>
      </property>
    </configuration>

- dfs.namenode.name.dir is the local file-system directory where the NameNode stores the namespace and edit-log metadata; the default path is /tmp/hadoop-username/dfs/name.
- dfs.datanode.data.dir is the local file-system directory where a DataNode stores HDFS blocks; the default path is /tmp/hadoop-username/dfs/data.
- dfs.namenode.secondary.http-address is the SecondaryNameNode host and port; if no separate SecondaryNameNode role is required, this property can be omitted.

(3) mapred-site.xml

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Execution framework set to Hadoop YARN.</description>
      </property>
    </configuration>

- mapreduce.framework.name is the runtime framework used to execute MapReduce jobs; it defaults to local and must be changed to yarn here.

(4) yarn-site.xml

    <configuration>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>ResourceManager</value>
        <description>ResourceManager host</description>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>Shuffle service that needs to be set for Map Reduce applications.</description>
      </property>
    </configuration>

- yarn.resourcemanager.hostname specifies the ResourceManager host address.
- yarn.nodemanager.aux-services names the shuffle service used by MapReduce applications.

(5) hadoop-env.sh
JAVA_HOME points at the current Java installation directory:

    export JAVA_HOME=/opt/jdk1.7

(6) slaves
The master nodes (NameNode and ResourceManager) each list the slave nodes they own.
The slaves file on NameNode:

    DataNode01
    DataNode02
    DataNode03
    DataNode04
    DataNode05

The slaves file on ResourceManager:

    NodeManager01
    NodeManager02
    NodeManager03
    NodeManager04
    NodeManager05

Appendix 2: Full configuration reference
Note: only part of the parameters below must be set for this deployment (those already covered in Appendix 1); the others are shown with their default values.

(1) core-site.xml

    <configuration>
      <!-- Configurations for NameNode (SecondaryNameNode), DataNode and NodeManager: -->
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://NameNode:9000</value>
        <description>NameNode URI</description>
      </property>
      <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description>Size of read/write buffer used in SequenceFiles. The default value is 131072.</description>
      </property>
    </configuration>

- fs.defaultFS is the NameNode address, of the form hdfs://hostname (or IP):port.

(2) hdfs-site.xml

    <configuration>
      <!-- Configurations for NameNode: -->
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/namenode</value>
      </property>
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>SecondaryNameNode:50090</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
      </property>
      <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
      </property>
      <!-- Configurations for DataNode: -->
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/hadoopdata/hdfs/datanode</value>
      </property>
    </configuration>

- dfs.namenode.name.dir, dfs.datanode.data.dir and dfs.namenode.secondary.http-address have the same meanings as described under Appendix 1.

(3) mapred-site.xml

    <configuration>
      <!-- Configurations for MapReduce Applications: -->
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Execution framework set to Hadoop YARN.</description>
      </property>
      <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1024</value>
        <description>Larger resource limit for maps.</description>
      </property>
      <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
        <description>Larger heap-size for child jvms of maps.</description>
      </property>
      <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>1024</value>
        <description>Larger resource limit for reduces.</description>
      </property>
      <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560M</value>
        <description></description>
      </property>
      <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>512</value>
        <description></description>
      </property>
      <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>10</value>
        <description>More streams merged at once while sorting files.</description>
      </property>
      <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>5</value>
        <description>Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.</description>
      </property>
      <!-- Configurations for MapReduce JobHistory Server: -->
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>ResourceManager:10020</value>
        <description>MapReduce JobHistory Server host:port. Default port is 10020.</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>ResourceManager:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port. Default port is 19888.</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/mr-history/tmp</value>
        <description>Directory where history files are written by MapReduce jobs. Default is "/mr-history/tmp".</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/mr-history/done</value>
        <description>Directory where history files are managed by the MR JobHistory Server. Default value is "/mr-history/done".</description>
      </property>
    </configuration>

- mapreduce.framework.name is the runtime framework used to execute MapReduce jobs; it defaults to local and must be changed to yarn here.

(4) yarn-site.xml

    <configuration>
      <!-- Configurations for ResourceManager and NodeManager: -->
      <property>
        <name>yarn.acl.enable</name>
        <value>false</value>
        <description>Enable ACLs? Defaults to false. Allowed values are "true" and "false".</description>
      </property>
      <property>
        <name>yarn.admin.acl</name>
        <value>*</value>
        <description>ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just a space means no one has access.</description>
      </property>
      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>false</value>
        <description>Configuration to enable or disable log aggregation.</description>
      </property>
      <!-- Configurations for ResourceManager: -->
      <property>
        <name>yarn.resourcemanager.address</name>
        <value>ResourceManager:8032</value>
        <description>ResourceManager host:port for clients to submit jobs. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>ResourceManager:8030</value>
        <description>ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>ResourceManager:8031</value>
        <description>ResourceManager host:port for NodeManagers. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
      </property>
      <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>ResourceManager:8033</value>
        <description>ResourceManager host:port for administrative commands. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
      </property>
      <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>ResourceManager:8088</value>
        <description>ResourceManager web-ui host:port. NOTE: if set, this overrides the hostname set in yarn.resourcemanager.hostname.</description>
      </property>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>ResourceManager</value>
        <description>ResourceManager host</description>
      </property>
      <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        <description>ResourceManager Scheduler class: CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler. The default value is "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler".</description>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>1024</value>
        <description>Minimum limit of memory to allocate to each container request at the ResourceManager. NOTE: in MB.</description>
      </property>
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>8192</value>
        <description>Maximum limit of memory to allocate to each container request at the ResourceManager. NOTE: in MB.</description>
      </property>
      <!-- Configurations for History Server: -->
      <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>-1</value>
        <description>How long to keep aggregation logs before deleting them. -1 disables. Be careful: set this too small and you will spam the name node.</description>
      </property>
      <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>-1</value>
        <description>Time between checks for aggregated log retention. If set to 0 or a negative value, the value is computed as one-tenth of the aggregated log retention time. Be careful: set this too small and you will spam the name node.</description>
      </property>
      <!-- Configurations for NodeManager: -->
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
        <description>Resource, i.e. available physical memory in MB, for the given NodeManager. The default value is 8192. NOTE: defines the total resources on the NodeManager made available to running containers.</description>
      </property>
      <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>2.1</value>
        <description>Maximum ratio by which virtual memory usage of tasks may exceed physical memory.</description>
      </property>
    </configuration>
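The NodeManager and scheduler memory settings above interact: the minimum allocation bounds how many containers one NodeManager can host, and the vmem ratio caps each container's virtual memory. A back-of-envelope check of the sample values used in this manual can be sketched in shell:

```shell
# Back-of-envelope check of the yarn-site.xml memory values above.
NM_MEM_MB=8192        # yarn.nodemanager.resource.memory-mb
MIN_ALLOC_MB=1024     # yarn.scheduler.minimum-allocation-mb
VMEM_RATIO_X10=21     # yarn.nodemanager.vmem-pmem-ratio (2.1, scaled by 10
                      # because shell arithmetic is integer-only)

# How many minimum-size containers one NodeManager can host:
MAX_CONTAINERS=$(( NM_MEM_MB / MIN_ALLOC_MB ))
# Virtual-memory ceiling for one minimum-size (1024 MB) container:
VMEM_CEILING_MB=$(( MIN_ALLOC_MB * VMEM_RATIO_X10 / 10 ))

echo "$MAX_CONTAINERS"     # prints 8
echo "$VMEM_CEILING_MB"    # prints 2150
```

A container exceeding this virtual-memory ceiling is killed by the NodeManager, which is why the -Xmx values in mapred-site.xml must leave headroom below the container sizes.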
