# Sqoop Installation and Usage Notes
## Uploading the installation media

```shell
[root@bosenrui ~]# cd /usr/local/
[root@bosenrui local]# ll
total 17652
-rw-r--r--  1 root root     2442 Jan 25 02:04 1
drwxr-xr-x. 2 root root     4096 Sep 23  2011 bin
drwxr-xr-x. 2 root root     4096 Sep 23  2011 etc
drwxr-xr-x. 2 root root     4096 Sep 23  2011 games
drwxr-xr-x 10 root root     4096 Jan 25 02:23 hadoop
drwxr-xr-x  8 root root     4096 Mar 15 03:31 hbase
drwxr-xr-x  8 root root     4096 Feb  6 05:02 hive
drwxr-xr-x. 2 root root     4096 Sep 23  2011 include
drwxr-xr-x. 2 root root     4096 Sep 23  2011 lib
drwxr-xr-x. 2 root root     4096 Sep 23  2011 lib64
drwxr-xr-x. 2 root root     4096 Sep 23  2011 libexec
drwxr-xr-x. 2 root root     4096 Sep 23  2011 sbin
drwxr-xr-x. 5 root root     4096 Jan 24 19:34 share
-rw-r--r--  1 root root 18004856 Mar  2  2017 sqoop146n.tar.gz
drwxr-xr-x. 2 root root     4096 Sep 23  2011 src
drwxr-xr-x 12 1000 1000     4096 Feb  6 02:15 zookeeper
-rw-r--r--  1 root root     6105 Mar 15 03:31 zookeeper.out
```

## Extracting the archive

```shell
[root@bosenrui local]# tar -zxvf sqoop146n.tar.gz
sqoop/
sqoop/ivy/
sqoop/ivy/ivysettings.xml
sqoop/ivy/libraries.properties
sqoop/ivy/sqoop-test.xml
... (extraction output omitted) ...
sqoop/testdata/hive/scripts/failingImport.q
sqoop/testdata/hive/scripts/normalImport.q
sqoop/testdata/hive/scripts/createOnlyImport.q
sqoop/testdata/hive/scripts/dateImport.q
sqoop/testdata/hive/scripts/partitionImport.q
sqoop/build.xml
sqoop/pom-old.xml
```

## Configuring system environment variables

```shell
[root@bosenrui local]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.
pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`id -u`
        UID=`id -ru`
    fi
    USER="`id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /sbin
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
    pathmunge /sbin after
fi

HOSTNAME=`/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then
            . "$i"
        else
            . "$i" >/dev/null 2>&1
        fi
    fi
done

## The lines below were newly added
export JAVA_HOME=/usr/java
export JRE_HOME=/usr/java/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin

## The lines below were newly added
export HADOOP_HOME=/usr/local/hadoop
#export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop/lib/native
export HADOOP_OPTS="-Djava.library.path=/usr/local/hadoop/lib"
#export HADOOP_ROOT_LOGGER=DEBUG,console
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

## The lines below were added for the ZooKeeper part
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin

## The lines below were added for the Hive part
export HIVE_HOME=/usr/local/hive
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
export MYSQL_HOME=/usr/local/mysql

## The lines below were added for the HBase part
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin

## The lines below were added for the Sqoop part
export SQOOP_HOME=/usr/local/sqoop
export PATH=$PATH:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$REDIS_HOME/bin:$HBASE_HOME/bin:$SQOOP_HOME/bin

unset i
```

## Configuring Sqoop environment variables

```shell
[root@bosenrui local]# cat /usr/local/sqoop/conf/sqoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# included in all the hadoop scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*

# Set Hadoop-specific environment variables here.

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/usr/local/hadoop

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/usr/local/hadoop

#set the path to where bin/hbase is available
#export HBASE_HOME=/usr/local/hbase

#Set the path to where bin/hive is available
export HIVE_HOME=/usr/local/hive/

#Set the path for where zookeper config dir is
export ZOOCFGDIR=/usr/local/zookeeper
```

## Starting Hadoop

```shell
[root@bosenrui local]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
18/03/15 21:54:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [bosenrui]
bosenrui: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-bosenrui.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-bosenrui.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-bosenrui.out
18/03/15 21:54:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-bosenrui.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-bosenrui.out
```

## Listing the databases in MySQL

```shell
[root@bosenrui local]# sqoop list-databases --connect "jdbc:mysql://127.0.0.1:3306/?useUnicode=true&characterEncoding=UTF-8" --username root --password root
18/03/15 21:57:50 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/03/15 21:57:50 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/03/15 21:57:50 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
hive
mysql
performance_schema
test
```
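Sqoop itself flags the `--password root` usage above ("Setting your password on the command-line is insecure. Consider using -P instead."). Two safer variants, as a sketch against the same local MySQL instance (the `/root/.mysql-pass` path is hypothetical, and `--password-file` assumes Sqoop 1.4.x):

```shell
# Prompt for the password interactively instead of passing it in argv,
# where it would be visible in `ps` output and shell history.
sqoop list-databases \
    --connect "jdbc:mysql://127.0.0.1:3306/?useUnicode=true&characterEncoding=UTF-8" \
    --username root \
    -P

# Alternatively, read the password from a file with restricted permissions.
echo -n 'root' > /root/.mysql-pass
chmod 400 /root/.mysql-pass
sqoop list-databases \
    --connect "jdbc:mysql://127.0.0.1:3306/" \
    --username root \
    --password-file file:///root/.mysql-pass
```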
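The walkthrough stops at `list-databases`; the typical next step is an actual import into HDFS. A minimal sketch, assuming a table named `emp` exists in the `test` database seen above (the table name and target directory here are hypothetical):

```shell
# Import every row of test.emp into HDFS as text files under /user/root/emp.
# -m 1 runs a single map task, so no --split-by column is required.
sqoop import \
    --connect "jdbc:mysql://127.0.0.1:3306/test" \
    --username root -P \
    --table emp \
    --target-dir /user/root/emp \
    -m 1

# Inspect the imported data.
hdfs dfs -cat /user/root/emp/part-m-00000
```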
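As an aside, note that the export lines appended to /etc/profile re-add `$JAVA_HOME/bin` and several other directories to `PATH` multiple times, while the stock `pathmunge` helper at the top of that same file exists precisely to avoid such duplication. Its behavior can be exercised standalone:

```shell
#!/bin/sh
# pathmunge, as defined in /etc/profile: prepend (default) or append
# ("after") a directory to PATH, but only if it is not already present.
pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;                      # already in PATH: do nothing
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

PATH=/usr/bin
pathmunge /opt/tool/bin             # prepended
pathmunge /opt/tool/bin             # duplicate: silently ignored
pathmunge /var/tool/bin after       # appended
echo "$PATH"                        # prints /opt/tool/bin:/usr/bin:/var/tool/bin
```

Using `pathmunge` for the appended entries would keep `PATH` clean even when the profile is sourced repeatedly.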