Smart Library Project Deployment and Operations Manual
Compiled by XX Technology Co., Ltd.

3. Click "Continue" to go to the next step and test the database connection.

4. After the test succeeds, click "Continue" to go to the directory settings. The default directories are used here; adjust them according to your actual environment.

5. Click "Continue" to proceed to starting each service of the cluster.

6. After the installation succeeds, open the Cloudera Manager home page (http://cm1:7180/cmf/home). The home page shows the running status of Cluster 1 (CDH), its hosts, the HDFS, Hue, Impala, Oozie, YARN (MR2) and ZooKeeper services, the Cloudera Management Service, and CPU and host charts.

V. HBase Installation

5.1 Preparation

The clocks of the different machines must be synchronized: every node must be within 30 seconds of the others.

    [root@hadoopNode5 ~]# yum -y install ntp    # install the ntp package

Then configure each node to synchronize with the Aliyun time service.

5.2 Installation

1. Download hbase-1.3.2.

2. Extract it:

    [ambow@hadoopNode1 ~]$ tar -xvzf ~/soft/hbase-1.3.2-bin.tar.gz -C ~/app/

3. Configure environment variables in ~/.bash_profile: set HBASE_HOME and add it to PATH.

4. In hbase-env.sh, configure JAVA_HOME and the ZooKeeper settings:

    [ambow@hadoopNode1 conf]$ vi $HBASE_HOME/conf/hbase-env.sh
    export JAVA_HOME=/home/ambow/app/jdk1.8.0_121
    export HADOOP_HOME=/home/ambow/app/hadoop-2.7.3
    export HBASE_MANAGES_ZK=false    # do not use HBase's built-in ZooKeeper
    export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters    # second HMaster node for HA

   Create a new $HBASE_HOME/conf/backup-masters file and add the standby HMaster node to it:

    vi $HBASE_HOME/conf/backup-masters
    hadoopNode2

5. Configure the parameters in hbase-site.xml:

    <configuration>
        <!-- HBase's directory in HDFS; it is created automatically -->
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://cluster1/hbase</value>
        </property>
        <!-- true enables cluster (fully distributed) mode -->
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <!-- which nodes run your own ZooKeeper quorum -->
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>hadoopNode1,hadoopNode2,hadoopNode3,hadoopNode4,hadoopNode5</value>
        </property>
        <!-- only required when using the built-in ZooKeeper -->
        <property>
            <name>hbase.zookeeper.property.dataDir</name>
            <value>/home/ambow/zkdata/hdata</value>
        </property>
    </configuration>
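When several environments have to be provisioned, it can help to generate hbase-site.xml from a script instead of editing it by hand on each cluster. A minimal sketch, assuming the rootdir and quorum values used in this manual; HBASE_CONF_DIR is a hypothetical local staging directory, not a path from the manual:

```shell
#!/usr/bin/env bash
# Sketch: render hbase-site.xml from variables.
# Assumes the hbase.rootdir and quorum values from this manual.
set -euo pipefail

HBASE_CONF_DIR="${HBASE_CONF_DIR:-./conf}"   # hypothetical staging directory
ROOTDIR="hdfs://cluster1/hbase"
QUORUM="hadoopNode1,hadoopNode2,hadoopNode3,hadoopNode4,hadoopNode5"

mkdir -p "$HBASE_CONF_DIR"
cat > "$HBASE_CONF_DIR/hbase-site.xml" <<EOF
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>${ROOTDIR}</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>${QUORUM}</value>
    </property>
</configuration>
EOF
echo "wrote $HBASE_CONF_DIR/hbase-site.xml"
```

The hbase.zookeeper.property.dataDir property is omitted here because this manual disables the built-in ZooKeeper (HBASE_MANAGES_ZK=false).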
6. Configure the regionservers (list the hostname of every slave node; do not list the master node). Create a regionservers file under $HBASE_HOME/conf/ and add the following content:

    hadoopNode3
    hadoopNode4
    hadoopNode5

7. scp -r the hbase directory to the other nodes:

    [ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode5:~/app/
    [ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode4:~/app/
    [ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode3:~/app/
    [ambow@hadoopNode1 conf]$ scp -r ~/app/hbase-1.3.2 ambow@hadoopNode2:~/app/

   Copy the profile as well:

    [ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode5:~
    [ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode4:~
    [ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode3:~
    [ambow@hadoopNode1 conf]$ scp ~/.bash_profile ambow@hadoopNode2:~

   Reload it on each node:

    source ~/.bash_profile

   Start HDFS:

    start-dfs.sh

VI. Flume Installation

6.1 Installation

1. Extract the package:

    [ambow@hadoopNode3 ~]$ tar -zxvf apache-flume-1.6.0-bin.tar.gz -C ~/app

2. Enter the flume directory and edit flume-env.sh under conf/ to set JAVA_HOME:

    export JAVA_HOME=/home/ambow/app/jdk1.8.0_121

3. Configure the ~/.bash_profile file:

    export FLUME_HOME=/home/ambow/app/flume-1.6.0
    export PATH=$PATH:$FLUME_HOME/bin

VII. Kafka Installation

7.1 Installation

1. Download Apache Kafka from the official site: http://kafka.apache.org/downloads.html
   Scala 2.11: kafka_2.11-0.10.2.0.tgz (asc, md5)
   Note: when the Scala build is 2.11, choose kafka_2.11-0.10.2.0; the "0.10.2.0" part is the Kafka version.

Kafka cluster installation:

1. Install the JDK and set JAVA_HOME.

2. Install ZooKeeper: following the ZooKeeper official site, build a ZK cluster and start it.

3. Extract the Kafka package:

    [ambow@hadoopNode1 ~]$ tar -zxvf kafka_2.11-0.10.2.1.tgz -C ~/app

4. Configure environment variables:

    export KAFKA_HOME=/home/ambow/app/kafka_2.11-0.10.2.1
    export PATH=$PATH:$KAFKA_HOME/bin

5. Edit the config/server.properties file:

    vi server.properties
    # unique id of this node in the cluster, increasing 0, 1, 2, 3, 4
    broker.id=0
    # allow deleting topics; the default is false, and it should be set to false in production
    delete.topic.enable=true
    # listening host and port; on each node, set the hostname to that node's own
    listeners=PLAINTEXT://hadoopNode1:9092
    # storage path for Kafka message data
    log.dirs=/home/ambow/kafkaData/logs
    # number of partitions a new topic gets by default (1 if unset)
    num.partitions=3
    # ZooKeeper cluster list, nodes separated by commas
    zookeeper.connect=hadoopNode1:2181,hadoopNode2:2181,hadoopNode3:2181,hadoopNode4:2181,hadoopNode5:2181
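Since the only per-node differences in server.properties are broker.id and the listener hostname, the five files can also be rendered once and then copied out instead of being edited by hand on every node. A minimal sketch, assuming the node list and property values from this manual; OUT_DIR is a hypothetical local staging directory:

```shell
#!/usr/bin/env bash
# Sketch: render one server.properties per node, varying only
# broker.id and the listener hostname. Values are taken from this manual.
set -euo pipefail

NODES=(hadoopNode1 hadoopNode2 hadoopNode3 hadoopNode4 hadoopNode5)
OUT_DIR="${OUT_DIR:-./kafka-conf}"   # hypothetical staging directory
ZK="hadoopNode1:2181,hadoopNode2:2181,hadoopNode3:2181,hadoopNode4:2181,hadoopNode5:2181"

mkdir -p "$OUT_DIR"
id=0
for node in "${NODES[@]}"; do
    cat > "$OUT_DIR/server.properties.$node" <<EOF
broker.id=$id
delete.topic.enable=true
listeners=PLAINTEXT://$node:9092
log.dirs=/home/ambow/kafkaData/logs
num.partitions=3
zookeeper.connect=$ZK
EOF
    id=$((id + 1))
done
echo "rendered ${#NODES[@]} configs in $OUT_DIR"
```

Each rendered file can then be copied to its node (for example with scp) as $KAFKA_HOME/config/server.properties.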
6. Distribute to every node:

    [ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode5:~/app
    [ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode4:~/app
    [ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode3:~/app
    [ambow@hadoopNode1 app]$ scp -r kafka_2.11-0.10.2.1 ambow@hadoopNode2:~/app
    [ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode5:~
    [ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode4:~
    [ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode3:~
    [ambow@hadoopNode1 app]$ scp ~/.bash_profile ambow@hadoopNode2:~

   Reload on each node:

    source ~/.bash_profile

7. Modify the configuration file on each node:

    # unique id of this node in the cluster, increasing 0, 1, 2, 3, 4
    broker.id=0
    # listening host and port; set to this node's own hostname
    listeners=PLAINTEXT://hadoopNode1:9092

8. Start the Kafka service on each node:

    [ambow@hadoopNode1 app]$ kafka-server-start.sh $KAFKA_HOME/config/server.properties &

   Note: start ZooKeeper on each node first:

    zkServer.sh start

   To stop Kafka:

    [ambow@hadoopNode1 app]$ kafka-server-stop.sh
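The start order above (ZooKeeper on every node first, then each broker) can be wrapped in a single script driven over ssh. A minimal sketch, assuming passwordless ssh as the ambow user; DRY_RUN=1 only prints the commands so the sequence can be checked before touching the cluster, and the -daemon flag backgrounds the broker much like the & used above:

```shell
#!/usr/bin/env bash
# Sketch: start ZooKeeper, then Kafka, on every node over ssh.
# Assumes passwordless ssh as user "ambow" and the paths from this manual.
# With DRY_RUN=1 (the default here) the commands are only printed.
set -euo pipefail

NODES=(hadoopNode1 hadoopNode2 hadoopNode3 hadoopNode4 hadoopNode5)
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "DRY-RUN: $*"            # show what would be executed
    else
        ssh "$1" "${@:2}"             # run the command on the remote node
    fi
}

# ZooKeeper must be up on every node before the brokers start.
for node in "${NODES[@]}"; do
    run "ambow@$node" 'zkServer.sh start'
done
for node in "${NODES[@]}"; do
    run "ambow@$node" 'kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties'
done
```

Running it once with DRY_RUN=1 and reviewing the printed command list, then again with DRY_RUN=0, avoids starting the brokers before the ZooKeeper quorum is formed.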