Bosenrui technical team blog · Big Data Development
# Installing and Configuring Kafka
## Install Scala

### Upload the installation media

```shell
[root@bosenrui ~]# cd /usr/local/
[root@bosenrui local]# ll
total 123152
-rw-r--r--  1 root root     2442 Jan 25 02:04 1
-rw-r--r--  1 root root 52550402 Mar 29  2017 apache-flume-1.6.0-bin.tar.gz
drwxr-xr-x  5 root root     4096 Mar 16 04:00 azkaban
drwxr-xr-x. 2 root root     4096 Sep 23  2011 bin
drwxr-xr-x. 2 root root     4096 Sep 23  2011 etc
drwxr-xr-x  7 root root     4096 Mar 16 01:01 flume
drwxr-xr-x. 2 root root     4096 Sep 23  2011 games
drwxr-xr-x 10 root root     4096 Jan 25 02:23 hadoop
drwxr-xr-x  8 root root     4096 Mar 15 03:31 hbase
drwxr-xr-x  8 root root     4096 Feb  6 05:02 hive
drwxr-xr-x. 2 root root     4096 Sep 23  2011 include
-rw-r--r--  1 root root 44352403 Mar 22  2018 kafka_2.12-1.0.0.tgz
drwxr-xr-x. 2 root root     4096 Sep 23  2011 lib
drwxr-xr-x. 2 root root     4096 Sep 23  2011 lib64
drwxr-xr-x. 2 root root     4096 Sep 23  2011 libexec
drwxr-xr-x. 2 root root     4096 Sep 23  2011 sbin
-rw-r--r--  1 root root 29114457 Mar 22  2018 scala-2.11.12.tgz
drwxr-xr-x. 5 root root     4096 Jan 24 19:34 share
drwxr-xr-x  9 root root     4096 Apr 27  2015 sqoop
drwxr-xr-x. 2 root root     4096 Sep 23  2011 src
drwxr-xr-x 12 1000 1000     4096 Feb  6 02:15 zookeeper
-rw-r--r--  1 root root     6105 Mar 15 03:31 zookeeper.out
```

### Extract the archive

```shell
[root@bosenrui local]# tar -zxvf scala-2.11.12.tgz
scala-2.11.12/
scala-2.11.12/lib/
scala-2.11.12/lib/akka-actor_2.11-2.3.16.jar
scala-2.11.12/lib/scala-reflect.jar
scala-2.11.12/lib/config-1.2.1.jar
scala-2.11.12/lib/scala-continuations-plugin_2.11.12-1.0.2.jar
scala-2.11.12/lib/scala-parser-combinators_2.11-1.0.4.jar
scala-2.11.12/lib/scala-swing_2.11-1.0.2.jar
scala-2.11.12/lib/scala-compiler.jar
scala-2.11.12/lib/scala-actors-migration_2.11-1.1.0.jar
scala-2.11.12/lib/scalap-2.11.12.jar
scala-2.11.12/lib/scala-library.jar
scala-2.11.12/lib/jline-2.14.3.jar
scala-2.11.12/lib/scala-xml_2.11-1.0.5.jar
scala-2.11.12/lib/scala-continuations-library_2.11-1.0.2.jar
scala-2.11.12/lib/scala-actors-2.11.0.jar
scala-2.11.12/bin/
scala-2.11.12/bin/scala
scala-2.11.12/bin/scalac.bat
scala-2.11.12/bin/scala.bat
scala-2.11.12/bin/scalap
scala-2.11.12/bin/scalap.bat
scala-2.11.12/bin/scaladoc.bat
scala-2.11.12/bin/fsc
scala-2.11.12/bin/fsc.bat
scala-2.11.12/bin/scalac
scala-2.11.12/bin/scaladoc
scala-2.11.12/man/
scala-2.11.12/man/man1/
scala-2.11.12/man/man1/scalac.1
scala-2.11.12/man/man1/scaladoc.1
scala-2.11.12/man/man1/fsc.1
scala-2.11.12/man/man1/scala.1
scala-2.11.12/man/man1/scalap.1
scala-2.11.12/doc/
scala-2.11.12/doc/tools/
scala-2.11.12/doc/tools/scala.html
scala-2.11.12/doc/tools/index.html
scala-2.11.12/doc/tools/images/
scala-2.11.12/doc/tools/images/scala_logo.png
scala-2.11.12/doc/tools/images/external.gif
scala-2.11.12/doc/tools/fsc.html
scala-2.11.12/doc/tools/scalac.html
scala-2.11.12/doc/tools/css/
scala-2.11.12/doc/tools/css/style.css
scala-2.11.12/doc/tools/scalap.html
scala-2.11.12/doc/tools/scaladoc.html
scala-2.11.12/doc/License.rtf
scala-2.11.12/doc/licenses/
scala-2.11.12/doc/licenses/bsd_asm.txt
scala-2.11.12/doc/licenses/mit_jquery.txt
scala-2.11.12/doc/licenses/bsd_jline.txt
scala-2.11.12/doc/licenses/mit_sizzle.txt
scala-2.11.12/doc/licenses/mit_jquery-layout.txt
scala-2.11.12/doc/licenses/mit_tools.tooltip.txt
scala-2.11.12/doc/licenses/mit_jquery-ui.txt
scala-2.11.12/doc/licenses/apache_jansi.txt
scala-2.11.12/doc/LICENSE.md
scala-2.11.12/doc/README
```

### Rename the directory

```shell
[root@bosenrui local]# mv scala-2.11.12 scala
```

### Update the environment variables and make them take effect

```shell
[root@bosenrui scala]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`id -u`
        UID=`id -ru`
    fi
    USER="`id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
    pathmunge /sbin after
fi

HOSTNAME=`/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then
            . "$i"
        else
            . "$i" >/dev/null 2>&1
        fi
    fi
done

## The following was newly added
export JAVA_HOME=/usr/java
export JRE_HOME=/usr/java/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

## The following was newly added
export HADOOP_HOME=/usr/local/hadoop
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
export HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib"
#export HADOOP_ROOT_LOGGER=DEBUG,console
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

## Added for ZooKeeper
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin

## Added for Hive
export HIVE_HOME=/usr/local/hive
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
export MYSQL_HOME=/usr/local/mysql

## Added for HBase
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin

## Added for Sqoop
export SQOOP_HOME=/usr/local/sqoop
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin

## Added for Flume
export FLUME_HOME=/usr/local/flume
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin:$FLUME_HOME/bin

## Added for Azkaban
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin:$FLUME_HOME/bin:/usr/local/azkaban/server/bin:/usr/local/azkaban/executor/bin/

## Added for Scala
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin:$FLUME_HOME/bin:/usr/local/azkaban/server/bin:/usr/local/azkaban/executor/bin/:$SCALA_HOME/bin

unset i
unset -f pathmunge
```

### Start the client and test it

```shell
[root@bosenrui scala]# scala
Welcome to
```
```shell
Scala 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45).
Type in expressions for evaluation. Or try :help.

scala> :quit
```

## Install Kafka

### Extract the archive

```shell
[root@bosenrui local]# tar -zxvf kafka_2.12-1.0.0.tgz
kafka_2.12-1.0.0/
kafka_2.12-1.0.0/LICENSE
kafka_2.12-1.0.0/NOTICE
kafka_2.12-1.0.0/bin/
kafka_2.12-1.0.0/bin/connect-distributed.sh
kafka_2.12-1.0.0/bin/connect-standalone.sh
kafka_2.12-1.0.0/bin/kafka-acls.sh
kafka_2.12-1.0.0/bin/kafka-broker-api-versions.sh
kafka_2.12-1.0.0/bin/kafka-configs.sh
kafka_2.12-1.0.0/bin/kafka-console-consumer.sh
kafka_2.12-1.0.0/bin/kafka-console-producer.sh
kafka_2.12-1.0.0/bin/kafka-consumer-groups.sh
kafka_2.12-1.0.0/bin/kafka-consumer-perf-test.sh
kafka_2.12-1.0.0/bin/kafka-delete-records.sh
kafka_2.12-1.0.0/bin/kafka-log-dirs.sh
kafka_2.12-1.0.0/bin/kafka-mirror-maker.sh
kafka_2.12-1.0.0/bin/kafka-preferred-replica-election.sh
kafka_2.12-1.0.0/bin/kafka-producer-perf-test.sh
kafka_2.12-1.0.0/bin/kafka-reassign-partitions.sh
kafka_2.12-1.0.0/bin/kafka-replay-log-producer.sh
kafka_2.12-1.0.0/bin/kafka-replica-verification.sh
kafka_2.12-1.0.0/bin/kafka-run-class.sh
kafka_2.12-1.0.0/bin/kafka-server-start.sh
kafka_2.12-1.0.0/bin/kafka-server-stop.sh
kafka_2.12-1.0.0/bin/kafka-simple-consumer-shell.sh
kafka_2.12-1.0.0/bin/kafka-streams-application-reset.sh
kafka_2.12-1.0.0/bin/kafka-topics.sh
kafka_2.12-1.0.0/bin/kafka-verifiable-consumer.sh
kafka_2.12-1.0.0/bin/kafka-verifiable-producer.sh
kafka_2.12-1.0.0/bin/trogdor.sh
kafka_2.12-1.0.0/bin/windows/
kafka_2.12-1.0.0/bin/windows/connect-distributed.bat
kafka_2.12-1.0.0/bin/windows/connect-standalone.bat
kafka_2.12-1.0.0/bin/windows/kafka-acls.bat
kafka_2.12-1.0.0/bin/windows/kafka-broker-api-versions.bat
kafka_2.12-1.0.0/bin/windows/kafka-configs.bat
kafka_2.12-1.0.0/bin/windows/kafka-console-consumer.bat
kafka_2.12-1.0.0/bin/windows/kafka-console-producer.bat
kafka_2.12-1.0.0/bin/windows/kafka-consumer-groups.bat
kafka_2.12-1.0.0/bin/windows/kafka-consumer-offset-checker.bat
kafka_2.12-1.0.0/bin/windows/kafka-consumer-perf-test.bat
kafka_2.12-1.0.0/bin/windows/kafka-mirror-maker.bat
kafka_2.12-1.0.0/bin/windows/kafka-preferred-replica-election.bat
kafka_2.12-1.0.0/bin/windows/kafka-producer-perf-test.bat
kafka_2.12-1.0.0/bin/windows/kafka-reassign-partitions.bat
kafka_2.12-1.0.0/bin/windows/kafka-replay-log-producer.bat
kafka_2.12-1.0.0/bin/windows/kafka-replica-verification.bat
kafka_2.12-1.0.0/bin/windows/kafka-run-class.bat
kafka_2.12-1.0.0/bin/windows/kafka-server-start.bat
kafka_2.12-1.0.0/bin/windows/kafka-server-stop.bat
kafka_2.12-1.0.0/bin/windows/kafka-simple-consumer-shell.bat
kafka_2.12-1.0.0/bin/windows/kafka-topics.bat
kafka_2.12-1.0.0/bin/windows/zookeeper-server-start.bat
kafka_2.12-1.0.0/bin/windows/zookeeper-server-stop.bat
kafka_2.12-1.0.0/bin/windows/zookeeper-shell.bat
kafka_2.12-1.0.0/bin/zookeeper-security-migration.sh
kafka_2.12-1.0.0/bin/zookeeper-server-start.sh
kafka_2.12-1.0.0/bin/zookeeper-server-stop.sh
kafka_2.12-1.0.0/bin/zookeeper-shell.sh
kafka_2.12-1.0.0/config/
kafka_2.12-1.0.0/config/connect-console-sink.properties
kafka_2.12-1.0.0/config/connect-console-source.properties
kafka_2.12-1.0.0/config/connect-distributed.properties
kafka_2.12-1.0.0/config/connect-file-sink.properties
kafka_2.12-1.0.0/config/connect-file-source.properties
kafka_2.12-1.0.0/config/connect-log4j.properties
kafka_2.12-1.0.0/config/connect-standalone.properties
kafka_2.12-1.0.0/config/consumer.properties
kafka_2.12-1.0.0/config/log4j.properties
kafka_2.12-1.0.0/config/producer.properties
kafka_2.12-1.0.0/config/server.properties
kafka_2.12-1.0.0/config/tools-log4j.properties
kafka_2.12-1.0.0/config/zookeeper.properties
kafka_2.12-1.0.0/libs/
kafka_2.12-1.0.0/libs/kafka-clients-1.0.0.jar
kafka_2.12-1.0.0/libs/jackson-databind-2.9.1.jar
kafka_2.12-1.0.0/libs/jopt-simple-5.0.4.jar
kafka_2.12-1.0.0/libs/metrics-core-2.2.0.jar
kafka_2.12-1.0.0/libs/scala-library-2.12.3.jar
kafka_2.12-1.0.0/libs/zkclient-0.10.jar
kafka_2.12-1.0.0/libs/zookeeper-3.4.10.jar
kafka_2.12-1.0.0/libs/slf4j-log4j12-1.7.25.jar
kafka_2.12-1.0.0/libs/lz4-java-1.4.jar
kafka_2.12-1.0.0/libs/snappy-java-1.1.4.jar
kafka_2.12-1.0.0/libs/slf4j-api-1.7.25.jar
kafka_2.12-1.0.0/libs/jackson-annotations-2.9.1.jar
kafka_2.12-1.0.0/libs/jackson-core-2.9.1.jar
kafka_2.12-1.0.0/libs/log4j-1.2.17.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0.jar.asc
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-sources.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-sources.jar.asc
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-javadoc.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-javadoc.jar.asc
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-test.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-test.jar.asc
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-test-sources.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-test-sources.jar.asc
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-scaladoc.jar
kafka_2.12-1.0.0/libs/kafka_2.12-1.0.0-scaladoc.jar.asc
kafka_2.12-1.0.0/site-docs/
kafka_2.12-1.0.0/site-docs/kafka_2.12-1.0.0-site-docs.tgz
kafka_2.12-1.0.0/libs/kafka-tools-1.0.0.jar
kafka_2.12-1.0.0/libs/kafka-log4j-appender-1.0.0.jar
kafka_2.12-1.0.0/libs/argparse4j-0.7.0.jar
kafka_2.12-1.0.0/libs/jackson-jaxrs-json-provider-2.9.1.jar
kafka_2.12-1.0.0/libs/jackson-jaxrs-base-2.9.1.jar
kafka_2.12-1.0.0/libs/jackson-module-jaxb-annotations-2.9.1.jar
kafka_2.12-1.0.0/libs/jersey-container-servlet-2.25.1.jar
kafka_2.12-1.0.0/libs/jetty-servlet-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jetty-security-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jetty-server-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jetty-servlets-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jersey-container-servlet-core-2.25.1.jar
kafka_2.12-1.0.0/libs/jersey-server-2.25.1.jar
kafka_2.12-1.0.0/libs/jersey-client-2.25.1.jar
kafka_2.12-1.0.0/libs/jersey-media-jaxb-2.25.1.jar
kafka_2.12-1.0.0/libs/jersey-common-2.25.1.jar
kafka_2.12-1.0.0/libs/javax.ws.rs-api-2.0.1.jar
kafka_2.12-1.0.0/libs/javax.servlet-api-3.1.0.jar
kafka_2.12-1.0.0/libs/jetty-http-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jetty-io-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jetty-continuation-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/jetty-util-9.2.22.v20170606.jar
kafka_2.12-1.0.0/libs/hk2-locator-2.5.0-b32.jar
kafka_2.12-1.0.0/libs/javax.inject-2.5.0-b32.jar
kafka_2.12-1.0.0/libs/javax.annotation-api-1.2.jar
kafka_2.12-1.0.0/libs/jersey-guava-2.25.1.jar
kafka_2.12-1.0.0/libs/hk2-api-2.5.0-b32.jar
kafka_2.12-1.0.0/libs/osgi-resource-locator-1.0.1.jar
kafka_2.12-1.0.0/libs/validation-api-1.1.0.Final.jar
kafka_2.12-1.0.0/libs/hk2-utils-2.5.0-b32.jar
kafka_2.12-1.0.0/libs/aopalliance-repackaged-2.5.0-b32.jar
kafka_2.12-1.0.0/libs/javassist-3.20.0-GA.jar
kafka_2.12-1.0.0/libs/javax.inject-1.jar
kafka_2.12-1.0.0/libs/connect-api-1.0.0.jar
kafka_2.12-1.0.0/libs/connect-runtime-1.0.0.jar
kafka_2.12-1.0.0/libs/connect-transforms-1.0.0.jar
kafka_2.12-1.0.0/libs/reflections-0.9.11.jar
kafka_2.12-1.0.0/libs/maven-artifact-3.5.0.jar
kafka_2.12-1.0.0/libs/guava-20.0.jar
kafka_2.12-1.0.0/libs/javassist-3.21.0-GA.jar
kafka_2.12-1.0.0/libs/plexus-utils-3.0.24.jar
kafka_2.12-1.0.0/libs/commons-lang3-3.5.jar
kafka_2.12-1.0.0/libs/connect-json-1.0.0.jar
kafka_2.12-1.0.0/libs/connect-file-1.0.0.jar
kafka_2.12-1.0.0/libs/kafka-streams-1.0.0.jar
kafka_2.12-1.0.0/libs/rocksdbjni-5.7.3.jar
kafka_2.12-1.0.0/libs/kafka-streams-examples-1.0.0.jar
```

### Rename the directory

```shell
[root@bosenrui local]# mv kafka_2.12-1.0.0 kafka
```

### Update the environment variables and make them take effect

```shell
[root@bosenrui ~]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing.
```
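A side note on the profile being listed here: the stock `pathmunge()` helper exists precisely to avoid the duplicate entries that the walkthrough's repeated `export PATH=$PATH:...` lines accumulate. The same idea for custom entries, as a sketch (`add_to_path` is my name, not part of the stock profile):

```shell
# Idempotent PATH append, mirroring the pathmunge() helper in /etc/profile.
add_to_path() {
  case ":$PATH:" in
    *:"$1":*) ;;              # already present: do nothing
    *) PATH="$PATH:$1" ;;     # append exactly once
  esac
}
add_to_path /usr/local/kafka/bin
add_to_path /usr/local/kafka/bin    # second call is a no-op
echo "$PATH"
```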
```shell
# It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

# (The rest of the stock profile and the JDK/Hadoop/ZooKeeper/Hive/HBase/
# Sqoop/Flume/Azkaban/Scala additions are identical to the listing shown
# in the Scala section; only the Kafka entries below are new.)

## Added for Kafka
export KAFKA_HOME=/usr/local/kafka
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin:$FLUME_HOME/bin:/usr/local/azkaban/server/bin:/usr/local/azkaban/executor/bin/:$SCALA_HOME/bin/:$KAFKA_HOME/bin

unset i
unset -f pathmunge
```

### Create the log directory

```shell
[root@bosenrui local]# mkdir -p /usr/local/kafka/logs
```

### Edit the configuration file

```shell
[root@bosenrui ~]# cat /usr/local/kafka/config/server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.
```
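Only a handful of lines in this file matter for the single-node setup in this walkthrough: `broker.id`, the modified `log.dirs`, and `zookeeper.connect`. A quick way to confirm they are set, sketched against a stand-in temp file (substitute the real `/usr/local/kafka/config/server.properties` path):

```shell
# Stand-in for server.properties, holding the three settings relied on below.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
broker.id=0
log.dirs=/usr/local/kafka/logs
zookeeper.connect=localhost:2181
EOF
# Print the effective values; each broker needs a unique broker.id, and
# log.dirs must point at an existing, writable directory.
grep -E '^(broker\.id|log\.dirs|zookeeper\.connect)=' "$cfg"
```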
```shell
# Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

## Modified here
# A comma seperated list of directories under which to store log files
log.dirs=/usr/local/kafka/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
```

## Start and test

### Start Hadoop

```shell
[root@bosenrui ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
18/03/18 01:31:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform...
```
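To put the retention defaults from the configuration above in human terms, a small arithmetic sketch:

```shell
# The retention-related values from server.properties, converted.
retention_hours=168          # log.retention.hours
segment_bytes=1073741824     # log.segment.bytes
echo "retention: $(( retention_hours / 24 )) days"       # 168 h = 7 days
echo "segment size: $(( segment_bytes / 1024 / 1024 )) MiB"  # 1 GiB segments
```

So with the defaults kept here, closed log segments of up to 1 GiB are deleted once they are a week old.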
```shell
using builtin-java classes where applicable
Starting namenodes on [bosenrui]
bosenrui: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-bosenrui.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-bosenrui.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-bosenrui.out
18/03/18 01:32:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-bosenrui.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-bosenrui.out
```

### Start ZooKeeper

```shell
[root@bosenrui ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
```

### Start Kafka

```shell
[root@bosenrui ~]# kafka-server-start.sh /usr/local/kafka/config/server.properties &
[1] 3612
[root@bosenrui ~]# [2018-03-18 01:38:55,601] INFO KafkaConfig values:
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	authorizer.class.name =
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	connections.max.idle.ms = 600000
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 300000
	group.min.session.timeout.ms = 6000
	host.name =
	inter.broker.listener.name = null
	inter.broker.protocol.version = 1.0-IV0
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /usr/local/kafka/logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.format.version = 1.0-IV0
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides =
	message.max.bytes = 1000012
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 1440
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	port = 9092
	principal.builder.class = null
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 10000
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism.inter.broker.protocol = GSSAPI
	security.inter.broker.protocol = PLAINTEXT
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = null
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 6000
	zookeeper.session.timeout.ms = 6000
	zookeeper.set.acl = false
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2018-03-18 01:38:55,832] INFO starting (kafka.server.KafkaServer)
[2018-03-18 01:38:55,840] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2018-03-18 01:38:55,965] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-03-18 01:38:56,005] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
[2018-03-18 01:38:56,005] INFO Client environment:host.name=bosenrui (org.apache.zookeeper.ZooKeeper)
[2018-03-18 01:38:56,005] INFO Client environment:java.version=1.8.0_45 (org.apache.zookeeper.ZooKeeper)
[2018-03-18 01:38:56,005] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-03-18 01:38:56,005] INFO Client environment:java.home=/usr/java/jre (org.apache.zookeeper.ZooKeeper)
[2018-03-18 01:38:56,005] INFO Client environment:java.class.path=.:/usr/java/lib/dt.jar:/usr/java/lib/tools.jar:/usr/java/jre/lib:/usr/local/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/usr/local/kafka/bin/../libs/argparse4j-0.7.0.jar:/usr/local/kafka/bin/../libs/commons-lang3-3.5.jar:/usr/local/kafka/bin/../libs/connect-api-1.0.0.jar:/usr/local/kafka/bin/../libs/connect-file-1.0.0.jar:/usr/local/kafka/bin/../libs/connect-json-1.0.0.jar:/usr/local/kafka/bin/../libs/connect-runtime-1.0.0.jar:/usr/local/kafka/bin/../libs/connect-transforms-1.0.0.jar:/usr/local/kafka/bin/../libs/guava-20.0.jar:/usr/local/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/usr/local/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/usr/local/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/usr/local/kafka/bin/../libs/jackson-annotations-2.9.1.jar:/usr/local/kafka/bin/../libs/jackson-core-2.9.1.jar:/usr/local/kafka/bin/../libs/jackson-databind-2.9.1.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/usr/local/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/usr/local/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/usr/local/kafka/bin/../libs/javassist-3.20.0-GA.jar:/usr/local/kafka/bin/../libs/javassist-3.21.0-GA.jar:/usr/local/kafka/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/kafka/bin/../libs/javax.inject-1.jar:/usr/local/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/usr/local/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/usr/local/kafka/bin/../libs/jersey-client-2.25.1.jar:/usr/local/kafka/bin/../libs/jersey-common-2.25.1.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/usr/local/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/usr/local/kafka/bin/../libs/jersey-guava-2.25.1.jar:/usr/local/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/usr/local/kafka/bin/../libs/jersey-server-2.25.1.jar:/usr/local/kafka/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jetty-http-9.2.22.v2
```
0170606.jar:/usr/local/kafka/bin/../libs/jetty-io-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jetty-security-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jetty-server-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jetty-util-9.2.22.v20170606.jar:/usr/local/kafka/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/kafka/bin/../libs/kafka_2.12-1.0.0.jar:/usr/local/kafka/bin/../libs/kafka_2.12-1.0.0-sources.jar:/usr/local/kafka/bin/../libs/kafka_2.12-1.0.0-test-sources.jar:/usr/local/kafka/bin/../libs/kafka-clients-1.0.0.jar:/usr/local/kafka/bin/../libs/kafka-log4j-appender-1.0.0.jar:/usr/local/kafka/bin/../libs/kafka-streams-1.0.0.jar:/usr/local/kafka/bin/../libs/kafka-streams-examples-1.0.0.jar:/usr/local/kafka/bin/../libs/kafka-tools-1.0.0.jar:/usr/local/kafka/bin/../libs/log4j-1.2.17.jar:/usr/local/kafka/bin/../libs/lz4-java-1.4.jar:/usr/local/kafka/bin/../libs/maven-artifact-3.5.0.jar:/usr/local/kafka/bin/../libs/metrics-core-2.2.0.jar:/usr/local/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/kafka/bin/../libs/plexus-utils-3.0.24.jar:/usr/local/kafka/bin/../libs/reflections-0.9.11.jar:/usr/local/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/usr/local/kafka/bin/../libs/scala-library-2.12.3.jar:/usr/local/kafka/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/usr/local/kafka/bin/../libs/snappy-java-1.1.4.jar:/usr/local/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/kafka/bin/../libs/zkclient-0.10.jar:/usr/local/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,006] INFO Client environment:java.library.path=/usr/local/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,006] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 
[2018-03-18 01:38:56,006] INFO Client environment:java.compiler=
(org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,006] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,007] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,007] INFO Client environment:os.version=2.6.32-504.el6.x86_64 (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,007] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,007] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,007] INFO Client environment:user.dir=/root (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,009] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@4b5a5ed1 (org.apache.zookeeper.ZooKeeper) [2018-03-18 01:38:56,039] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient) [2018-03-18 01:38:56,043] INFO Opening socket connection to server localhost/127.0.0.1:2181. 
Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [2018-03-18 01:38:56,064] INFO Socket connection established to localhost/127.0.0.1:2181, initiating session (org.apache.zookeeper.ClientCnxn) [2018-03-18 01:38:56,263] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x162350620220000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn) [2018-03-18 01:38:56,267] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient) [2018-03-18 01:38:56,922] INFO Cluster ID = 4fDVAaJTStGuA4UMQhri-Q (kafka.server.KafkaServer) [2018-03-18 01:38:56,957] WARN No meta.properties file under dir /usr/local/kafka/logs/meta.properties (kafka.server.BrokerMetadataCheckpoint) [2018-03-18 01:38:57,033] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper) [2018-03-18 01:38:57,036] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper) [2018-03-18 01:38:57,039] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper) [2018-03-18 01:38:57,139] INFO Loading logs. (kafka.log.LogManager) [2018-03-18 01:38:57,163] INFO Logs loading complete in 24 ms. (kafka.log.LogManager) [2018-03-18 01:38:57,848] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) [2018-03-18 01:38:57,859] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) [2018-03-18 01:38:58,561] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.Acceptor) [2018-03-18 01:38:58,569] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer) [2018-03-18 01:38:58,637] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2018-03-18 01:38:58,644] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2018-03-18 01:38:58,646] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2018-03-18 01:38:58,699] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) [2018-03-18 01:38:58,864] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2018-03-18 01:38:58,881] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2018-03-18 01:38:58,904] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) [2018-03-18 01:38:58,916] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral) [2018-03-18 01:38:58,947] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral) [2018-03-18 01:38:58,962] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator) [2018-03-18 01:38:58,982] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator) [2018-03-18 01:38:59,032] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 48 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [2018-03-18 01:38:59,118] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager) [2018-03-18 01:38:59,272] INFO [TransactionCoordinator id=0] Starting up. 
(kafka.coordinator.transaction.TransactionCoordinator) [2018-03-18 01:38:59,279] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) [2018-03-18 01:38:59,292] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) [2018-03-18 01:38:59,477] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral) [2018-03-18 01:38:59,496] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral) [2018-03-18 01:38:59,501] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(bosenrui,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils) [2018-03-18 01:38:59,517] WARN No meta.properties file under dir /usr/local/kafka/logs/meta.properties (kafka.server.BrokerMetadataCheckpoint) [2018-03-18 01:38:59,611] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser) [2018-03-18 01:38:59,658] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser) [2018-03-18 01:38:59,674] INFO [KafkaServer id=0] started (kafka.server.KafkaServer) ``
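The final `[KafkaServer id=0] started` line confirms the broker is up: it is listening on port 9092, connected to ZooKeeper on localhost:2181, and registered as broker 0 under `/brokers/ids/0`. Everything above ran on default settings; for anything beyond a single-node sandbox, the usual place to adjust them is `config/server.properties` before starting the server. A minimal single-node sketch, with values mirroring what the startup log printed for this install (in a real cluster each broker needs its own `broker.id`, and `zookeeper.connect` should list every ZooKeeper node as `host1:2181,host2:2181,...`):

```properties
# config/server.properties — single-node sketch; values mirror the startup log above
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/usr/local/kafka/logs
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
```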