Today let's look at how to use Sqoop to export data from HBase or Hive into MySQL. One thing to know up front:
Sqoop currently cannot export data directly from HBase to MySQL. You have to go through Hive: create two tables, one external table backed by the HBase table and one plain HDFS-backed native Hive table, copy the data from the external table into the native table (used as a temporary staging table), and then export that staging table from Hive to MySQL with Sqoop.
Data preparation
Create an empty table in MySQL
CREATE TABLE `employee` (
  `rowkey` int(11) NOT NULL,
  `id` int(11) NOT NULL,
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Note: since people habitually name the Hive column that maps to the HBase rowkey as key, you might be tempted to create a matching key column in the MySQL table as well. But key is a reserved word in MySQL, so the INSERT statements Sqoop generates would fail. That is why the column is called rowkey here.
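To make the pitfall concrete, here is a small hypothetical illustration (the table employee_bad is not part of this walkthrough): the unquoted INSERT, which is roughly what Sqoop generates, fails when a column is named key, and Sqoop does not add backticks for you.

CREATE TABLE `employee_bad` (`key` int(11) NOT NULL, `id` int(11) NOT NULL) ENGINE=MyISAM;
-- roughly what Sqoop generates: fails with a syntax error near 'key'
INSERT INTO employee_bad (key, id) VALUES (1, 1);
-- works only because the column is backtick-quoted, which Sqoop will not do for you
INSERT INTO employee_bad (`key`, `id`) VALUES (1, 1);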
Create the employee table in HBase
Create the employee table and insert some data:

hbase(main):005:0> create 'employee','info'
0 row(s) in 0.4740 seconds
=> Hbase::Table - employee
hbase(main):006:0> put 'employee',1,'info:id',1
0 row(s) in 0.2080 seconds
hbase(main):008:0> scan 'employee'
ROW                  COLUMN+CELL
 1                   column=info:id, timestamp=1417591291730, value=1
1 row(s) in 0.0610 seconds
hbase(main):009:0> put 'employee',1,'info:name','peter'
0 row(s) in 0.0220 seconds
hbase(main):010:0> scan 'employee'
ROW                  COLUMN+CELL
 1                   column=info:id, timestamp=1417591291730, value=1
 1                   column=info:name, timestamp=1417591321072, value=peter
1 row(s) in 0.0450 seconds
hbase(main):011:0> put 'employee',2,'info:id',2
0 row(s) in 0.0370 seconds
hbase(main):012:0> put 'employee',2,'info:name','paul'
0 row(s) in 0.0180 seconds
hbase(main):013:0> scan 'employee'
ROW                  COLUMN+CELL
 1                   column=info:id, timestamp=1417591291730, value=1
 1                   column=info:name, timestamp=1417591321072, value=peter
 2                   column=info:id, timestamp=1417591500179, value=2
 2                   column=info:name, timestamp=1417591512075, value=paul
2 row(s) in 0.0440 seconds
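As a side note, if you'd rather not type each command interactively, the same statements can go into a plain text file and be fed to the HBase shell in one go (the file name below is just an example):

$ cat > employee_data.txt <<'EOF'
create 'employee','info'
put 'employee',1,'info:id',1
put 'employee',1,'info:name','peter'
put 'employee',2,'info:id',2
put 'employee',2,'info:name','paul'
exit
EOF
$ hbase shell employee_data.txt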
Create the Hive external table
Hive has two kinds of tables: native tables and external tables. A native table is stored as plain files on HDFS, while an external table is backed by another framework, such as HBase. Let's create a Hive external table that maps onto the employee HBase table we just created:

hive> CREATE EXTERNAL TABLE h_employee(key int, id int, name string)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:id,info:name")
> TBLPROPERTIES ("hbase.table.name" = "employee");
OK
Time taken: 0.324 seconds
hive> select * from h_employee;
OK
1 1 peter
2 2 paul
Time taken: 1.129 seconds, Fetched: 2 row(s)
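If you want to confirm that the table really is backed by HBase rather than by plain HDFS files, DESCRIBE FORMATTED shows the storage handler and the column mapping (output omitted here; look for the storage_handler and hbase.columns.mapping entries):

hive> DESCRIBE FORMATTED h_employee;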
Create the native Hive table
This native Hive table is only used as a temporary staging table for the export, so it's named h_employee_export, and its fields are separated by commas:

hive> CREATE TABLE h_employee_export(key INT, id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054';
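For reference, '\054' is simply the octal escape for the comma character, so the following declaration (shown only for comparison, don't run both) would produce an identical table:

hive> CREATE TABLE h_employee_export(key INT, id INT, name STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';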
Load data from the source Hive table into the temporary table
Step one: copy the data from h_employee (the HBase-backed external table) into h_employee_export (the native Hive table):
hive> insert overwrite table h_employee_export select * from h_employee;
hive> select * from h_employee_export;
OK
1 1 peter
2 2 paul
Time taken: 0.359 seconds, Fetched: 2 row(s)
Let's look at what the underlying text file actually looks like:
$ hdfs dfs -cat /user/hive/warehouse/h_employee_export/000000_0
1,1,peter
2,2,paul
Export data from Hive to MySQL
$ sqoop export --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/hive/warehouse/h_employee_export/
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail. Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/05 08:49:35 INFO sqoop.Sqoop: Running Sqoop version
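Assuming the MapReduce job finishes successfully (the log above is truncated), you can check the result on the MySQL side. The query below is a sketch, and the rows shown are simply what we expect given the data loaded earlier, not captured output:

mysql> use sqoop_test;
mysql> select * from employee;
+--------+----+-------+
| rowkey | id | name  |
+--------+----+-------+
|      1 |  1 | peter |
|      2 |  2 | paul  |
+--------+----+-------+
2 rows in set

The export works here without naming a delimiter because Sqoop assumes comma-separated input fields by default, which matches our staging table. If the staging table used a different delimiter (for example Hive's default '\001'), you would need to add something like --input-fields-terminated-by '\001' to the sqoop export command.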