Hive
* HDFS: stores the data
* YARN: resource management
* MapReduce: processes the data
Logs
Log content follows a unified format:
* each line is one record (like a row in an RDBMS)
* many columns, split by a unified delimiter
schema
* the structure of the data
* its constraints
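For illustration, a tab-delimited log line maps onto such a schema one field per column (the sample values and column names below are made up, not from the course data):

194.24.26.3 <tab> admin <tab> 2015-08-10 10:03:51 <tab> ...
    ip              user            date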
Hive
* the data it processes is stored in HDFS
* the analysis underneath is implemented with MapReduce
* the jobs it launches run on YARN
RDBMS
the concept of a table
create table bf_log(
    ip string,
    user string,
    date string,
    ......
)
Analysis
HQL (HiveQL)
select * from bf_log limit 10 ;
select substring(ip,0,4) ip_prefix from bf_log ;
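Note: Hive's substring() counts positions from 1 (a start index of 0 is treated the same as 1), so the first four characters of ip can also be taken explicitly as:

select substring(ip, 1, 4) ip_prefix from bf_log ;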
SQL On HADOOP
============================================================
HQL
  |  engine -- Hive
  v
MapReduce
table metadata (e.g. for the bf_log table, kept in the metastore)
============================================================
show databases ;
use default;
show tables ;
create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
load data local inpath '/opt/datas/student.txt' into table student ;
select * from student ;
select id from student ;
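After the load, the data file is copied into the table's directory in HDFS (assuming the default warehouse location /user/hive/warehouse, configured below), which can be checked from the CLI:

hive (default)> dfs -ls /user/hive/warehouse/student ;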
Install MySQL (it will hold the Hive metastore, see below):
# rpm -qa|grep mysql
# rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm
# mysqld_safe --user=mysql --skip-grant-tables --skip-networking &
# mysql -u root
mysql> use mysql;
mysql> UPDATE user SET Password=PASSWORD('123456') WHERE User='root';
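A sketch of the usual follow-up steps (the grant-to-any-host policy and the service name are assumptions, not from the original notes): reload the grant tables so the new password takes effect, optionally allow root to connect from other hosts, then restart MySQL normally:

mysql> FLUSH PRIVILEGES;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
mysql> exit;
# service mysql restart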
Copy the MySQL JDBC driver jar into the lib directory of the Hive installation:
$ cp mysql-connector-java-5.1.27-bin.jar /opt/modules/hive-0.13.1/lib/
Configure the Hive metastore to use MySQL
(MySQL is installed on the same machine as Hive)
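A minimal hive-site.xml sketch for this setup; the database name metastore and the credentials are assumptions based on the MySQL account configured above:

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>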
============================================================
show databases ;
create database db_hive ;
create table student(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
show tables ;
desc student ;
desc extended student ;
desc formatted student ;
use db_hive ;
load data local inpath '/opt/datas/student.txt' into table db_hive.student ;
show functions ;
desc function upper ;
desc function extended upper ;
select id ,upper(name) uname from db_hive.student ;
============================================================
Configuring the location of the Hive data warehouse
default:
/user/hive/warehouse
Notes
* no folder is created under the warehouse directory for the default database itself
* a table in the default database gets its folder created directly under the warehouse directory
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
Location of Hive's runtime log output
$HIVE_HOME/conf/hive-log4j.properties
hive.log.dir=/opt/modules/hive-0.13.1/logs
hive.log.file=hive.log
Set the log level Hive prints at runtime
$HIVE_HOME/conf/hive-log4j.properties
hive.root.logger=INFO,DRFA
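The same setting can also be overridden for a single launch without editing the properties file, for example to get DEBUG output on the console (a common troubleshooting invocation, not from the original notes):

$ bin/hive --hiveconf hive.root.logger=DEBUG,console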
Show the current database and the column headers of query results in the CLI
$HIVE_HOME/conf/hive-site.xml
<property>
<name>hive.cli.print.header</name>
<value>true</value>
<description>Whether to print the names of the columns in query output.</description>
</property>
<property>
<name>hive.cli.print.current.db</name>
<value>true</value>
<description>Whether to include the current database in the Hive prompt.</description>
</property>
Set configuration properties when starting Hive
$ bin/hive --hiveconf <property=value>
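For example, using a property already introduced above:

$ bin/hive --hiveconf hive.cli.print.current.db=true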
View all current configuration settings
hive > set ;
hive (db_hive)> set system:user.name ;
system:user.name=beifeng
hive (db_hive)> set system:user.name=beifeng ;
Values set this way take effect only for the current session
============================================================
[51xuetongxin@hadoop-senior hive-0.13.1]$ bin/hive -help
usage: hive
-d,--define <key=value> Variable subsitution to apply to hive
commands. e.g. -d A=B or --define A=B
--database <databasename> Specify the database to use
-e <quoted-query-string> SQL from command line
-f <filename> SQL from files
-H,--help Print help information
-h <hostname> connecting to Hive Server on remote host
--hiveconf <property=value> Use value for given property
--hivevar <key=value> Variable subsitution to apply to hive
commands. e.g. --hivevar A=B
-i <filename> Initialization SQL file
-p <port> connecting to Hive Server on port number
-S,--silent Silent mode in interactive shell
-v,--verbose Verbose mode (echo executed SQL to the
console)
* bin/hive -e <quoted-query-string>
eg:
bin/hive -e "select * from db_hive.student ;"
* bin/hive -f <filename>
eg:
$ touch hivef.sql
select * from db_hive.student ;
$ bin/hive -f /opt/datas/hivef.sql
$ bin/hive -f /opt/datas/hivef.sql > /opt/datas/hivef-res.txt
* bin/hive -i <filename>
typically used together with user UDFs (see the sketch below)
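A sketch of such an initialization file; the jar path, function name, and class name are hypothetical:

$ cat /opt/datas/hive-init.sql
add jar /opt/datas/udf-example.jar;
create temporary function my_lower as 'com.example.hive.udf.MyLower';
$ bin/hive -i /opt/datas/hive-init.sql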
How to browse the HDFS filesystem from the Hive CLI
hive (default)> dfs -ls / ;
How to browse the local filesystem from the Hive CLI
hive (default)> !ls /opt/datas ;