1. Generate test data
Download the dbgen tool from the TPC-H website (http://www.tpc.org/tpch/); the package is at http://www.tpc.org/tpch/spec/tpch_2_17_0.zip

[root@ip---- tpch]# wget http://www.tpc.org/tpch/spec/tpch_2_17_0.zip

Unzip it, go into the dbgen directory, copy makefile.suite to makefile, and make the changes below.

[root@ip---- tpch]# yum install unzip
[root@ip---- tpch]# unzip tpch_2_17_0.zip 
[root@ip---- tpch]# ls
__MACOSX tpch_2_17_0 tpch_2_17_0.zip
[root@ip---- tpch]# cd tpch_2_17_0
[root@ip---- tpch_2_17_0]# ls
dbgen dev-tools ref_data
[root@ip---- tpch_2_17_0]# cd dbgen/
[root@ip---- dbgen]# ls
BUGS README bcd2.h check_answers dbgen.dsp dss.ddl dsstypes.h permute.c qgen.c reference rnd.h shared.h text.c tpch.sln variants
HISTORY answers bm_utils.c column_split.sh dists.dss dss.h load_stub.c permute.h qgen.vcproj release.h rng64.c speed_seed.c tpcd.h tpch.vcproj varsub.c
PORTING.NOTES bcd2.c build.c config.h driver.c dss.ri makefile.suite print.c queries rnd.c rng64.h tests tpch.dsw update_release.sh
[root@ip---- dbgen]# cp makefile.suite makefile

[root@ip-172-31-10-151 dbgen]# vi makefile

################
## CHANGE NAME OF ANSI COMPILER HERE
################
CC = gcc
# Current values for DATABASE are: INFORMIX, DB2, TDAT (Teradata)
# SQLSERVER, SYBASE, ORACLE, VECTORWISE
# Current values for MACHINE are: ATT, DOS, HP, IBM, ICL, MVS,
# SGI, SUN, U2200, VMS, LINUX, WIN32
# Current values for WORKLOAD are: TPCH
DATABASE= ORACLE
MACHINE = LINUX
WORKLOAD = TPCH

Compile the code:

make

When compilation finishes, a dbgen binary appears in the current directory.

Run ./dbgen -help to see its usage:

jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # ./dbgen -help
TPC-H Population Generator (Version 2.17.0 build )
Copyright Transaction Processing Performance Council -
USAGE:
dbgen [-{vf}][-T {pcsoPSOL}]
[-s <scale>][-C <procs>][-S <step>]
dbgen [-v] [-O m] [-s <scale>] [-U <updates>]
Basic Options
===========================
-C <n> -- separate data set into <n> chunks (requires -S, default: 1)
-f -- force. Overwrite existing files
-h -- display this message
-q -- enable QUIET mode
-s <n> -- set Scale Factor (SF) to <n> (default: 1)
-S <n> -- build the <n>th step of the data/update set (used with -C or -U)
-U <n> -- generate <n> update sets
-v -- enable VERBOSE mode
Advanced Options
===========================
-b <s> -- load distributions for <s> (default: dists.dss)
-d <n> -- split deletes between <n> files (requires -U)
-i <n> -- split inserts between <n> files (requires -U)
-T c -- generate cutomers ONLY
-T l -- generate nation/region ONLY
-T L -- generate lineitem ONLY
-T n -- generate nation ONLY
-T o -- generate orders/lineitem ONLY
-T O -- generate orders ONLY
-T p -- generate parts/partsupp ONLY
-T P -- generate parts ONLY
-T r -- generate region ONLY
-T s -- generate suppliers ONLY
-T S -- generate partsupp ONLY
To generate the SF=1 (1GB), validation database population, use:
dbgen -vf -s 1
To generate updates for a SF=1 (1GB), use:
dbgen -v -U 1 -s 1

Run ./dbgen -s 1024 to generate 1TB of data:

jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # ll *.tbl
-rw-r--r-- root root Jul : customer.tbl
-rw-r--r-- root root Jul : lineitem.tbl
-rw-r--r-- root root Jul : nation.tbl
-rw-r--r-- root root Jul : orders.tbl
-rw-r--r-- root root Jul : part.tbl
-rw-r--r-- root root Jul : partsupp.tbl
-rw-r--r-- root root Jul : region.tbl
-rw-r--r-- root root Jul : supplier.tbl
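Generating SF=1024 with a single dbgen process takes a long time; the -C/-S options from the help output above can split the work into parallel chunks. A minimal sketch (the DRY_RUN guard just prints the commands, since dbgen only exists on the benchmark machine):

```shell
# Generate SF=1024 in 8 parallel chunks via dbgen's -C/-S options.
# DRY_RUN=1 prints the commands instead of launching them.
DRY_RUN=1
CHUNKS=8
n=0
for step in $(seq 1 "$CHUNKS"); do
  cmd="./dbgen -f -s 1024 -C $CHUNKS -S $step"
  n=$((n + 1))
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"                  # dry run: show what would be launched
  else
    $cmd &                       # each step writes its own *.tbl.<step> chunk
  fi
done
if [ -z "$DRY_RUN" ]; then wait; fi
```

Each chunk writes separate `*.tbl.<step>` files, which can all be uploaded to the same HDFS directory later.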

Move the data into a separate directory:

mkdir ../data1024g

mv *.tbl ../data1024g
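The transcript in the next section assumes the .tbl files already sit in HDFS under /shaochen/tpch, one directory per table, but the upload step is not shown. A sketch of that step (DRY_RUN=1 prints the commands, so no cluster is needed here):

```shell
# Upload each .tbl file into its own HDFS directory, matching the
# /shaochen/tpch layout used by the external tables below.
DRY_RUN=1
HDFS_BASE=/shaochen/tpch
run() { if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi; }
for tbl in customer lineitem nation orders part partsupp region supplier; do
  run hdfs dfs -mkdir -p "$HDFS_BASE/$tbl"
  run hdfs dfs -put "../data1024g/$tbl.tbl" "$HDFS_BASE/$tbl/"
done
```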

2. Download the Impala version of the TPC-H scripts

Create the raw lineitem table as an external text-format table; the text data totals 776GB:

jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # hdfs dfs -du /shaochen/tpch
/shaochen/tpch/customer
/shaochen/tpch/lineitem
/shaochen/tpch/nation
/shaochen/tpch/orders
/shaochen/tpch/part
/shaochen/tpch/partsupp
/shaochen/tpch/region
/shaochen/tpch/supplier
Create external table lineitem (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'  LOCATION '/shaochen/tpch/lineitem';
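The other seven tables are created the same way; as a sketch (column types per the usual Impala TPC-H schema — the exact DDL ships with the downloaded scripts), the orders table would be:

```sql
CREATE EXTERNAL TABLE orders (O_ORDERKEY INT, O_CUSTKEY INT,
  O_ORDERSTATUS STRING, O_TOTALPRICE DOUBLE, O_ORDERDATE STRING,
  O_ORDERPRIORITY STRING, O_CLERK STRING, O_SHIPPRIORITY INT,
  O_COMMENT STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LOCATION '/shaochen/tpch/orders';
```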

Count the rows in the raw text table:

[jfp4-:] > select count(*) from lineitem;
Query: select count(*) from lineitem
+------------+
| count(*) |
+------------+
| |
+------------+
Returned row(s) in .47s

While the query ran, the cluster's aggregate disk read I/O averaged close to 1GB/s. The raw data is 776GB, and because this is an I/O-bound operation the query should take roughly 776GB / 1GB/s ≈ 800s, which matched what we observed.
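The scan-time estimate above is simple enough to sanity-check mechanically:

```shell
# Back-of-envelope: a 776GB I/O-bound scan at ~1GB/s aggregate read rate.
est=$(awk 'BEGIN { printf "%.0f", 776 / 1.0 }')
echo "estimated scan time: ${est}s"
```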

Save the lineitem table in parquet format:

[jfp4-:] > insert overwrite lineitem_parquet select * from lineitem;
Query: insert overwrite lineitem_parquet select * from lineitem
Inserted rows in .52s

While this SQL ran, the workload was mixed (I/O-bound plus CPU-bound) because it involves conversion to parquet and snappy compression. The cluster's average disk read rate was about 210MB/s, so the job should take roughly 776 / 0.2 ≈ 3800 seconds, which matched what we observed.
Given a write rate of about 140MB/s, the parquet data comes to roughly 3800 × 0.14 ≈ 532GB; divided by the replication factor of 3, that is about 180GB.
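The size estimate above can be checked the same way (exact rounding lands a little under the ~180GB figure):

```shell
# Back-of-envelope: ~3800s of writing at ~0.14GB/s, divided by the
# HDFS replication factor of 3 to get the logical table size.
est=$(awk 'BEGIN { printf "%.0f", 3800 * 0.14 / 3 }')
echo "estimated parquet size: ${est}GB"
```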

jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
/user/hive/warehouse/tpch.db/q1_pricing_summary_report

The actual parquet directory is about 200GB, in line with the estimate.

Count the rows again:

[jfp4-:] > select count(*) from lineitem_parquet;
Query: select count(*) from lineitem_parquet
+------------+
| count(*) |
+------------+
| |
+------------+
Returned row(s) in .04s

Run Q1 against the text-format table:

[jfp4-:] > -- the query
> INSERT OVERWRITE TABLE q1_pricing_summary_report
> SELECT
> L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT(1) as int)
> FROM
> lineitem
> WHERE
> L_SHIPDATE<='1998-09-02'
> GROUP BY L_RETURNFLAG, L_LINESTATUS
> ORDER BY L_RETURNFLAG, L_LINESTATUS
> LIMIT ;
Query: INSERT OVERWRITE TABLE q1_pricing_summary_report SELECT L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT(1) as int) FROM lineitem WHERE L_SHIPDATE<='1998-09-02' GROUP BY L_RETURNFLAG, L_LINESTATUS ORDER BY L_RETURNFLAG, L_LINESTATUS LIMIT
^C[jfp4-:] > INSERT OVERWRITE TABLE q1_pricing_summary_report
> SELECT
> L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT(1) as int)
> FROM
> lineitem
> WHERE
> L_SHIPDATE<='1998-09-02'
> GROUP BY L_RETURNFLAG, L_LINESTATUS
> ORDER BY L_RETURNFLAG, L_LINESTATUS
> LIMIT ;
Query: insert OVERWRITE TABLE q1_pricing_summary_report SELECT L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT(1) as int) FROM lineitem WHERE L_SHIPDATE<='1998-09-02' GROUP BY L_RETURNFLAG, L_LINESTATUS ORDER BY L_RETURNFLAG, L_LINESTATUS LIMIT
Inserted rows in .57s

View the query plan:

[jfp4-:] > explain INSERT OVERWRITE TABLE q1_pricing_summary_report
> SELECT
> L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT(1) as int)
> FROM
> lineitem
> WHERE
> L_SHIPDATE<='1998-09-02'
> GROUP BY L_RETURNFLAG, L_LINESTATUS
> ORDER BY L_RETURNFLAG, L_LINESTATUS
> LIMIT ;
Query: explain INSERT OVERWRITE TABLE q1_pricing_summary_report SELECT L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT(1) as int) FROM lineitem WHERE L_SHIPDATE<='1998-09-02' GROUP BY L_RETURNFLAG, L_LINESTATUS ORDER BY L_RETURNFLAG, L_LINESTATUS LIMIT
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=.13GB VCores= |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| tpch.lineitem |
| |
| WRITE TO HDFS [tpch.q1_pricing_summary_report, OVERWRITE=true] |
| | partitions= |
| | |
| :TOP-N [LIMIT=] |
| | order by: L_RETURNFLAG ASC, L_LINESTATUS ASC |
| | |
| :EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| :TOP-N [LIMIT=] |
| | order by: L_RETURNFLAG ASC, L_LINESTATUS ASC |
| | |
| :AGGREGATE [MERGE FINALIZE] |
| | output: sum(sum(L_QUANTITY)), sum(sum(L_EXTENDEDPRICE)), sum(sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT))), sum(sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT) * (1.0 + L_TAX))), sum(count(L_QUANTITY)), sum(count(L_EXTENDEDPRICE)), sum(sum(L_DISCOUNT)), sum(count(L_DISCOUNT)), sum(count()) |
| | group by: L_RETURNFLAG, L_LINESTATUS |
| | |
| :EXCHANGE [PARTITION=HASH(L_RETURNFLAG,L_LINESTATUS)] |
| | |
| :AGGREGATE |
| | output: sum(L_QUANTITY), sum(L_EXTENDEDPRICE), sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT)), sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT) * (1.0 + L_TAX)), count(L_QUANTITY), count(L_EXTENDEDPRICE), sum(L_DISCOUNT), count(L_DISCOUNT), count() |
| | group by: L_RETURNFLAG, L_LINESTATUS |
| | |
| :SCAN HDFS [tpch.lineitem] |
| partitions=/ size=.30GB |
| predicates: L_SHIPDATE <= '1998-09-02' |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Returned row(s) in .15s

Compute statistics for the table:

[jfp4-:] > compute stats lineitem;
Query: compute stats lineitem
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated partition(s) and column(s). |
+------------------------------------------+
Returned row(s) in .34s

The results show that compute stats is surprisingly time-consuming! During the first 15 minutes of execution, disk I/O was very high, around 900MB/s, with essentially every disk in the cluster reading at full capacity; after that, I/O still held at around 130MB/s. Clearly compute stats is an expensive operation.

Compute statistics on the parquet table:

[jfp4-1:21000] > compute stats lineitem_parquet;
Query: compute stats lineitem_parquet
Query aborted.
[jfp4-1:21000] > SET
> NUM_SCANNER_THREADS=2
> ;
NUM_SCANNER_THREADS set to 2
[jfp4-1:21000] > compute stats lineitem_parquet;
Query: compute stats lineitem_parquet
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 16 column(s). |
+------------------------------------------+
Returned 1 row(s) in 5176.29s
[jfp4-1:21000] >

Note that NUM_SCANNER_THREADS must be set for it to succeed.
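Once compute stats succeeds, the gathered statistics can be inspected with Impala's standard SHOW statements:

```sql
SHOW TABLE STATS lineitem_parquet;   -- row count, file count, size
SHOW COLUMN STATS lineitem_parquet;  -- per-column NDV, nulls, sizes
```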

Examine how snappy compression affects parquet table size and query efficiency:

[jfp4-:] > set PARQUET_COMPRESSION_CODEC=snappy;
PARQUET_COMPRESSION_CODEC set to snappy
[jfp4-:] > create table lineitem_parquet_snappy (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET;
Query: create table lineitem_parquet_snappy (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET
Returned row(s) in .30s

[jfp4-1:21000] > insert overwrite lineitem_parquet_snappy select * from lineitem;
Query: insert overwrite lineitem_parquet_snappy select * from lineitem
Inserted 6144008876 rows in 3836.99s

Check the size of the snappy table:

jfp4-:~ # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet_snappy
/user/hive/warehouse/tpch.db/q1_pricing_summary_report

lineitem_parquet_snappy and lineitem_parquet are exactly the same size, which shows that Impala's parquet tables are snappy-compressed by default.
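The transcript below writes into lineitem_parquet_raw without showing how that table was created; presumably it was created with compression disabled, along these lines (a hypothetical reconstruction of the missing step):

```sql
SET PARQUET_COMPRESSION_CODEC=none;
CREATE TABLE lineitem_parquet_raw LIKE lineitem STORED AS PARQUET;
```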

[jfp4-:] > insert overwrite lineitem_parquet_raw select * from lineitem;
Query: insert overwrite lineitem_parquet_raw select * from lineitem
Inserted rows in .22s

Writing snappy-compressed parquet still saves some time compared with uncompressed parquet!

Check the size of the uncompressed parquet table:

jfp4-:~ # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
319.2 G /user/hive/warehouse/tpch.db/lineitem_parquet_raw
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet_snappy
/user/hive/warehouse/tpch.db/q1_pricing_summary_report

Now check the effect of gzip:

[jfp4-:] > set PARQUET_COMPRESSION_CODEC=gzip;
PARQUET_COMPRESSION_CODEC set to gzip
[jfp4-:] > create table lineitem_parquet_gzip (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET;
Query: create table lineitem_parquet_gzip (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET
Returned row(s) in .26s
[jfp4-:] > insert overwrite lineitem_parquet_gzip select * from lineitem;
Query: insert overwrite lineitem_parquet_gzip select * from lineitem
Inserted rows in .71s
jfp4-:~ # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
155.1 G /user/hive/warehouse/tpch.db/lineitem_parquet_gzip
319.2 G /user/hive/warehouse/tpch.db/lineitem_parquet_raw
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet_snappy
/user/hive/warehouse/tpch.db/q1_pricing_summary_report
[jfp4-:] > select count(*) from lineitem_parquet_gzip;
Query: select count(*) from lineitem_parquet_gzip
+------------+
| count(*) |
+------------+
| |
+------------+
Returned row(s) in .54s
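From the directory sizes listed above, the compression ratios relative to uncompressed parquet (319.2GB) work out as follows:

```shell
# Compression ratios implied by the hdfs du output above (logical sizes, GB).
awk 'BEGIN {
  raw = 319.2; snappy = 200.9; gzip = 155.1
  printf "snappy ratio: %.2fx\n", raw / snappy
  printf "gzip ratio:   %.2fx\n", raw / gzip
}'
```

So gzip trades extra CPU for roughly 2x compression, versus about 1.6x for the snappy default.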
