Environment
  hadoop-2.6.5
  hive-1.2.1

I. Hive and HBase Integration
If you use Hive for analysis and Hive has to read its data from HBase (you could of course load the data straight into Hive instead), then Hive and HBase need to be integrated: you simply map the HBase columns onto a Hive table.

Step 1: copy hive-hbase-handler-1.2.1.jar into hbase/lib, and copy all of the jars under hbase/lib into hive/lib.

[root@node101 lib]# scp /usr/local/hive-1.2.1/lib/hive-hbase-handler-1.2.1.jar node104:/usr/local/hbase-0.98.12.1-hadoop2/lib
hive-hbase-handler-1.2.1.jar                              100%  113KB

[root@node104 lib]# scp /usr/local/hbase-0.98.12.1-hadoop2/lib/* node101:/usr/local/hive-1.2.1/lib/

Step 2: add the following property to Hive's configuration file hive-site.xml — the ZooKeeper quorum used to connect to HBase:

<property>
<name>hbase.zookeeper.quorum</name>
<value>node101,node102,node103</value>
</property>
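If editing hive-site.xml on every client is inconvenient, the quorum can usually also be supplied per session; a minimal sketch (same ZooKeeper nodes as above, assumed):

-- set the HBase ZooKeeper quorum only for the current Hive session (sketch)
set hbase.zookeeper.quorum=node101,node102,node103;

The hive-site.xml property remains the more reliable option, since it applies to every session automatically.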

Step 3: create a Hive table that maps to HBase.
Examples:
(1) Managed (internal) table

CREATE TABLE hbasetbl(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz", "hbase.mapred.output.outputtable" = "xyz");

hbase.columns.mapping is the mapping rule: :key maps to the HBase row key, cf1 is the column family, and val is the column qualifier that holds the value.

hbase.table.name is the name of the HBase table being mapped to;
hbase.mapred.output.outputtable is the HBase table that insert jobs write their output to (normally the same as hbase.table.name).

If the table xyz does not already exist in HBase, it is created in HBase when the Hive table is created; rows inserted through the Hive table are stored in HBase, not in Hive.
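A quick way to see the mapping in action; this is only a sketch, and it assumes an existing native Hive table pokes(foo INT, bar STRING) to copy from (the name comes from the Hive wiki demo, not from this project):

-- populate the HBase-backed table from an ordinary Hive table (pokes is an assumed demo table)
insert overwrite table hbasetbl select foo, bar from pokes;
-- reading back through Hive fetches the rows from HBase; scan 'xyz' in the hbase shell shows the same data
select * from hbasetbl limit 10;

Note that for HBase-backed tables, OVERWRITE only replaces the row keys being written; rows with other keys already in HBase are left untouched.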

(2) External table

CREATE EXTERNAL TABLE tmp_order
(key string, id string, user_id string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,order:order_id,order:user_id")
TBLPROPERTIES ("hbase.table.name" = "t_order");

To create the external table, the table must already exist in HBase (create 't_order','order' in the hbase shell); otherwise the CREATE statement fails.
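The managed/external distinction also matters when the Hive table is dropped; a short sketch of the difference:

-- dropping the EXTERNAL table only removes the Hive metadata; t_order stays in HBase
drop table tmp_order;
-- dropping the managed table from example (1) would also delete the xyz table in HBase
-- drop table hbasetbl;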


II. Hive + Sqoop

1. Create in Hive the table mapped to the HBase event_log table (an external mapping, so the eventlog table must already exist in HBase):

CREATE EXTERNAL TABLE event_logs(
key string, pl string, en string, s_time bigint, p_url string, u_ud string, u_sd string
) ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties('hbase.columns.mapping'=':key,log:pl,log:en,log:s_time,log:p_url,log:u_ud,log:u_sd')
tblproperties('hbase.table.name'='eventlog');

2. Create in Hive the table that will hold the analysis results; after the HQL runs, this table is exported to MySQL with the Sqoop tool:

CREATE TABLE `stats_view_depth` (
`platform_dimension_id` bigint ,
`data_dimension_id` bigint ,
`kpi_dimension_id` bigint ,
`pv1` bigint ,
`pv2` bigint ,
`pv3` bigint ,
`pv4` bigint ,
`pv5_10` bigint ,
`pv10_30` bigint ,
`pv30_60` bigint ,
`pv60_plus` bigint ,
`created` string
) row format delimited fields terminated by '\t';
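Because this data is later pushed out with sqoop export --update-key (step 8), a matching table has to exist on the MySQL side as well. A plausible sketch of that MySQL DDL; the column types and the unique key are assumptions, not taken from the original project:

CREATE TABLE stats_view_depth (
  platform_dimension_id BIGINT,
  data_dimension_id     BIGINT,
  kpi_dimension_id      BIGINT,
  pv1 BIGINT, pv2 BIGINT, pv3 BIGINT, pv4 BIGINT,
  pv5_10 BIGINT, pv10_30 BIGINT, pv30_60 BIGINT, pv60_plus BIGINT,
  created VARCHAR(32),
  UNIQUE KEY uk_dims (platform_dimension_id, data_dimension_id, kpi_dimension_id)
);

The unique key over the three dimension ids is what allows --update-mode allowinsert with --update-key to behave as an upsert in MySQL.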

3. Create a temporary table in Hive to hold the intermediate results of the HQL analysis:

CREATE TABLE `stats_view_depth_tmp`(`pl` string, `date` string, `col` string, `ct` bigint);

4. Write the UDFs (platformdimension & datedimension). <Note: remove every FileSystem close operation from the DimensionConvertClient class.>

5. Upload transformer-0.0.1.jar to the /sxt/transformer directory on HDFS.
6. Create the Hive functions:

create function platform_convert as 'com.sxt.transformer.hive.PlatformDimensionUDF' using jar 'hdfs://sxt/sxt/transformer/transformer-0.0.1.jar';
create function date_convert as 'com.sxt.transformer.hive.DateDimensionUDF' using jar 'hdfs://sxt/sxt/transformer/transformer-0.0.1.jar';
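As a quick sanity check after registering the functions (a sketch; the argument conventions — a platform string and a 'yyyy-MM-dd' date string, each returning a dimension id — are inferred from how the functions are used in step 7):

-- should return the dimension ids resolved by the two UDFs
select platform_convert(pl), date_convert('2019-07-08') from event_logs limit 1;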

The steps above are the preparation work.

------------------------------------------------------------------------
7. Write the HQL (browse-depth statistics from the user perspective). <Note: the date range is supplied externally.>

from (
select
pl, from_unixtime(cast(s_time/1000 as bigint),'yyyy-MM-dd') as day, u_ud,
(case when count(p_url) = 1 then "pv1"
when count(p_url) = 2 then "pv2"
when count(p_url) = 3 then "pv3"
when count(p_url) = 4 then "pv4"
when count(p_url) >= 5 and count(p_url) < 10 then "pv5_10"
when count(p_url) >= 10 and count(p_url) < 30 then "pv10_30"
when count(p_url) >= 30 and count(p_url) < 60 then "pv30_60"
else 'pv60_plus' end) as pv
from event_logs
where
en='e_pv'
and p_url is not null
and pl is not null
and s_time >= unix_timestamp('2019-07-08','yyyy-MM-dd')*1000
and s_time < unix_timestamp('2019-07-09','yyyy-MM-dd')*1000
group by
pl, from_unixtime(cast(s_time/1000 as bigint),'yyyy-MM-dd'), u_ud
) as tmp
insert overwrite table stats_view_depth_tmp
select pl,day,pv,count(distinct u_ud) as ct where u_ud is not null group by pl,day,pv;
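Before pivoting, it can help to eyeball the intermediate table; every row should carry a platform, a date, one pv bucket, and the distinct-user count for that bucket:

select pl, `date`, col, ct from stats_view_depth_tmp limit 20;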

Pivot the multiple rows of the temporary table into one row per platform and date:

with tmp as
(
select pl,`date` as date1,ct as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv1' union all
select pl,`date` as date1,0 as pv1,ct as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv2' union all
select pl,`date` as date1,0 as pv1,0 as pv2,ct as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv3' union all
select pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,ct as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv4' union all
select pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,ct as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv5_10' union all
select pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,ct as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv10_30' union all
select pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,ct as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv30_60' union all
select pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,ct as pv60_plus from stats_view_depth_tmp where col='pv60_plus' union all
select 'all' as pl,`date` as date1,ct as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv1' union all
select 'all' as pl,`date` as date1,0 as pv1,ct as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv2' union all
select 'all' as pl,`date` as date1,0 as pv1,0 as pv2,ct as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv3' union all
select 'all' as pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,ct as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv4' union all
select 'all' as pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,ct as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv5_10' union all
select 'all' as pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,ct as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv10_30' union all
select 'all' as pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,ct as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv30_60' union all
select 'all' as pl,`date` as date1,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,ct as pv60_plus from stats_view_depth_tmp where col='pv60_plus'
)
from tmp
insert overwrite table stats_view_depth
select platform_convert(pl),date_convert(date1),5,sum(pv1),sum(pv2),sum(pv3),sum(pv4),sum(pv5_10),sum(pv10_30),sum(pv30_60),sum(pv60_plus),'2019-07-08' group by pl,date1;
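For reference only (not part of the original script): the per-platform half of this pivot can be written more compactly with conditional aggregation; the 'all' roll-up rows would still need the union above (or GROUPING SETS):

select platform_convert(pl), date_convert(`date`), 5 as kpi_dimension_id,
sum(if(col='pv1',ct,0)) as pv1, sum(if(col='pv2',ct,0)) as pv2,
sum(if(col='pv3',ct,0)) as pv3, sum(if(col='pv4',ct,0)) as pv4,
sum(if(col='pv5_10',ct,0)) as pv5_10, sum(if(col='pv10_30',ct,0)) as pv10_30,
sum(if(col='pv30_60',ct,0)) as pv30_60, sum(if(col='pv60_plus',ct,0)) as pv60_plus,
'2019-07-08' as created
from stats_view_depth_tmp
group by pl, `date`;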

8. Write the Sqoop script (user-perspective export):

sqoop export --connect jdbc:mysql://hh:3306/report --username hive --password hive --table stats_view_depth --export-dir /hive/bigdater.db/stats_view_depth/* --input-fields-terminated-by "\\t" --update-mode allowinsert --update-key platform_dimension_id,data_dimension_id,kpi_dimension_id

9. Write the shell script that automates the whole flow (run it as, for example, ./view_depth_run.sh -sd 2019-07-08 -ed 2019-07-09):

view_depth_run.sh

#!/bin/bash

startDate=''
endDate=''
until [ $# -eq 0 ]
do
if [ $1'x' = '-sdx' ]; then
shift
startDate=$1
elif [ $1'x' = '-edx' ]; then
shift
endDate=$1
fi
shift
done
if [ -n "$startDate" ] && [ -n "$endDate" ]; then
echo "use the arguments of the date"
else
echo "use the default date"
startDate=$(date -d last-day +%Y-%m-%d)
endDate=$(date +%Y-%m-%d)
fi
echo "run of arguments. start date is:$startDate, end date is:$endDate"
echo "start run of view depth job " ## 用户角度浏览深度分析
echo "start insert user data to hive tmp table"
hive -e "from (select pl, from_unixtime(cast(s_time/1000 as bigint),'yyyy-MM-dd') as day, u_ud, (case when count(p_url) = 1 then 'pv1' when count(p_url) = 2 then 'pv2' when count(p_url) = 3 then 'pv3' when count(p_url) = 4 then 'pv4' when count(p_url) >= 5 and count(p_url) <10 then 'pv5_10' when count(p_url) >= 10 and count(p_url) <30 then 'pv10_30' when count(p_url) >=30 and count(p_url) <60 then 'pv30_60' else 'pv60_plus' end) as pv from event_logs where en='e_pv' and p_url is not null and pl is not null and s_time >= unix_timestamp('$startDate','yyyy-MM-dd')*1000 and s_time < unix_timestamp('$endDate','yyyy-MM-dd')*1000 group by pl, from_unixtime(cast(s_time/1000 as bigint),'yyyy-MM-dd'), u_ud) as tmp insert overwrite table stats_view_depth_tmp select pl,day,pv,count(distinct u_ud) as ct where u_ud is not null group by pl,day,pv" echo "start insert user data to hive table"
hive -e "with tmp as (select pl,date,ct as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv1' union all select pl,date,0 as pv1,ct as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv2' union all select pl,date,0 as pv1,0 as pv2,ct as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv3' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,ct as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv4' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,ct as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv5_10' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,ct as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv10_30' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,ct as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv30_60' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,ct as pv60_plus from stats_view_depth_tmp where col='pv60_plus' union all select 'all' as pl,date,ct as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv1' union all select 'all' as pl,date,0 as pv1,ct as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv2' union all select 'all' as pl,date,0 as pv1,0 as pv2,ct as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv3' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,ct as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv4' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,ct as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv5_10' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,ct as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv10_30' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,ct as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv30_60' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,ct as pv60_plus from stats_view_depth_tmp where col='pv60_plus' ) from tmp insert overwrite table stats_view_depth select platform_convert(pl),date_convert(date),5,sum(pv1),sum(pv2),sum(pv3),sum(pv4),sum(pv5_10),sum(pv10_30),sum(pv30_60),sum(pv60_plus),date group by pl,date" #会话角度浏览深度分析
echo "start insert session data to hive tmp table"
hive -e "from (select pl, from_unixtime(cast(s_time/1000 as bigint),'yyyy-MM-dd') as day, u_sd, (case when count(p_url) = 1 then 'pv1' when count(p_url) = 2 then 'pv2' when count(p_url) = 3 then 'pv3' when count(p_url) = 4 then 'pv4' when count(p_url) >= 5 and count(p_url) <10 then 'pv5_10' when count(p_url) >= 10 and count(p_url) <30 then 'pv10_30' when count(p_url) >=30 and count(p_url) <60 then 'pv30_60' else 'pv60_plus' end) as pv from event_logs where en='e_pv' and p_url is not null and pl is not null and s_time >= unix_timestamp('$startDate','yyyy-MM-dd')*1000 and s_time < unix_timestamp('$endDate','yyyy-MM-dd')*1000 group by pl, from_unixtime(cast(s_time/1000 as bigint),'yyyy-MM-dd'), u_sd ) as tmp insert overwrite table stats_view_depth_tmp select pl,day,pv,count(distinct u_sd) as ct where u_sd is not null group by pl,day,pv" ## insert into
echo "start insert session data to hive table"
hive --database bigdater -e "with tmp as (select pl,date,ct as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv1' union all select pl,date,0 as pv1,ct as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv2' union all select pl,date,0 as pv1,0 as pv2,ct as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv3' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,ct as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv4' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,ct as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv5_10' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,ct as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv10_30' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,ct as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv30_60' union all select pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,ct as pv60_plus from stats_view_depth_tmp where col='pv60_plus' union all select 'all' as pl,date,ct as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv1' union all select 'all' as pl,date,0 as pv1,ct as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv2' union all select 'all' as pl,date,0 as pv1,0 as pv2,ct as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv3' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,ct as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv4' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,ct as pv5_10,0 as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv5_10' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,ct as pv10_30,0 as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv10_30' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,ct as pv30_60,0 as pv60_plus from stats_view_depth_tmp where col='pv30_60' union all select 'all' as pl,date,0 as pv1,0 as pv2,0 as pv3,0 as pv4,0 as pv5_10,0 as pv10_30,0 as pv30_60,ct as pv60_plus from stats_view_depth_tmp where col='pv60_plus' ) from tmp insert into table stats_view_depth select platform_convert(pl),date_convert(date),6,sum(pv1),sum(pv2),sum(pv3),sum(pv4),sum(pv5_10),sum(pv10_30),sum(pv30_60),sum(pv60_plus),'2015-12-13' group by pl,date" ## sqoop
echo "run the sqoop script,insert hive data to mysql table"
sqoop export --connect jdbc:mysql://hh:3306/report --username hive --password hive --table stats_view_depth --export-dir /hive/bigdater.db/stats_view_depth/* --input-fields-terminated-by \\t --update-mode allowinsert --update-key platform_dimension_id,data_dimension_id,kpi_dimension_id
echo "complete run the view depth job"
