TPCH Benchmark with Impala
1. Generate the test data
Download the dbgen tool from the TPC-H official site, http://www.tpc.org/tpch/, to generate the data: http://www.tpc.org/tpch/spec/tpch_2_17_0.zip
[root@ip---- tpch]# wget http://www.tpc.org/tpch/spec/tpch_2_17_0.zip
Unzip it, go into the dbgen directory, copy makefile.suite to makefile, and make the following changes:
[root@ip---- tpch]# yum install unzip
[root@ip---- tpch]# unzip tpch_2_17_0.zip
[root@ip---- tpch]# ls
__MACOSX tpch_2_17_0 tpch_2_17_0.zip
[root@ip---- tpch]# cd tpch_2_17_0
[root@ip---- tpch_2_17_0]# ls
dbgen dev-tools ref_data
[root@ip---- tpch_2_17_0]# cd dbgen/
[root@ip---- dbgen]# ls
BUGS README bcd2.h check_answers dbgen.dsp dss.ddl dsstypes.h permute.c qgen.c reference rnd.h shared.h text.c tpch.sln variants
HISTORY answers bm_utils.c column_split.sh dists.dss dss.h load_stub.c permute.h qgen.vcproj release.h rng64.c speed_seed.c tpcd.h tpch.vcproj varsub.c
PORTING.NOTES bcd2.c build.c config.h driver.c dss.ri makefile.suite print.c queries rnd.c rng64.h tests tpch.dsw update_release.sh
[root@ip---- dbgen]# cp makefile.suite makefile
[root@ip-172-31-10-151 dbgen]# vi makefile
################
## CHANGE NAME OF ANSI COMPILER HERE
################
CC = gcc
# Current values for DATABASE are: INFORMIX, DB2, TDAT (Teradata)
# SQLSERVER, SYBASE, ORACLE, VECTORWISE
# Current values for MACHINE are: ATT, DOS, HP, IBM, ICL, MVS,
# SGI, SUN, U2200, VMS, LINUX, WIN32
# Current values for WORKLOAD are: TPCH
DATABASE= ORACLE
MACHINE = LINUX
WORKLOAD = TPCH
Compile the code:
make
After the build completes, the dbgen binary appears in the current directory.
Run ./dbgen -help to see how to use it:
jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # ./dbgen -help
TPC-H Population Generator (Version 2.17. build )
Copyright Transaction Processing Performance Council -
USAGE:
dbgen [-{vf}][-T {pcsoPSOL}]
[-s <scale>][-C <procs>][-S <step>]
dbgen [-v] [-O m] [-s <scale>] [-U <updates>]

Basic Options
===========================
-C <n> -- separate data set into <n> chunks (requires -S, default: )
-f -- force. Overwrite existing files
-h -- display this message
-q -- enable QUIET mode
-s <n> -- set Scale Factor (SF) to <n> (default: )
-S <n> -- build the <n>th step of the data/update set (used with -C or -U)
-U <n> -- generate <n> update sets
-v -- enable VERBOSE mode

Advanced Options
===========================
-b <s> -- load distributions for <s> (default: dists.dss)
-d <n> -- split deletes between <n> files (requires -U)
-i <n> -- split inserts between <n> files (requires -U)
-T c -- generate cutomers ONLY
-T l -- generate nation/region ONLY
-T L -- generate lineitem ONLY
-T n -- generate nation ONLY
-T o -- generate orders/lineitem ONLY
-T O -- generate orders ONLY
-T p -- generate parts/partsupp ONLY
-T P -- generate parts ONLY
-T r -- generate region ONLY
-T s -- generate suppliers ONLY
-T S -- generate partsupp ONLY

To generate the SF= (1GB), validation database population, use:
dbgen -vf -s
To generate updates for a SF= (1GB), use:
dbgen -v -U -s
Run ./dbgen -s 1024 to generate 1 TB of data:
jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # ll *.tbl
-rw-r--r-- root root Jul : customer.tbl
-rw-r--r-- root root Jul : lineitem.tbl
-rw-r--r-- root root Jul : nation.tbl
-rw-r--r-- root root Jul : orders.tbl
-rw-r--r-- root root Jul : part.tbl
-rw-r--r-- root root Jul : partsupp.tbl
-rw-r--r-- root root Jul : region.tbl
-rw-r--r-- root root Jul : supplier.tbl
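For SF=1024, a single dbgen process takes a long time. The -C/-S options shown in the help above allow the data set to be generated in parallel chunks; a minimal sketch (the chunk count of 8 is an arbitrary choice for illustration, not from the original run):

# Hypothetical parallel generation: split the data set into 8 chunks,
# running one background dbgen process per chunk.
for i in $(seq 1 8); do
  ./dbgen -f -s 1024 -C 8 -S $i &
done
wait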
Move the data into a separate directory:
mkdir ../data1024g
mv *.tbl ../data1024g
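The upload to HDFS is not shown in the post, but the .tbl files must land in HDFS before external tables can be created over them. A minimal sketch, assuming the /shaochen/tpch layout that appears below (one subdirectory per table):

# Create one HDFS directory per TPC-H table and upload the flat files.
for t in customer lineitem nation orders part partsupp region supplier; do
  hdfs dfs -mkdir -p /shaochen/tpch/$t
  hdfs dfs -put ../data1024g/$t.tbl /shaochen/tpch/$t/
done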
2. Download the Impala version of the TPC-H scripts
Create the raw lineitem table over the text files (776 GB in total):
jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # hdfs dfs -du /shaochen/tpch
/shaochen/tpch/customer
/shaochen/tpch/lineitem
/shaochen/tpch/nation
/shaochen/tpch/orders
/shaochen/tpch/part
/shaochen/tpch/partsupp
/shaochen/tpch/region
/shaochen/tpch/supplier
Create external table lineitem (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' LOCATION '/shaochen/tpch/lineitem';
Count the records in the raw text table:
[jfp4-:] > select count(*) from lineitem;
Query: select count(*) from lineitem
+------------+
| count(*) |
+------------+
| |
+------------+
Returned row(s) in .47s
While this query ran, the observed cluster disk I/O averaged close to 1 GB/s. The raw data is 776 GB, and since this is an I/O-bound operation, the estimated completion time is 776 GB / 1 GB/s ≈ 800 s, which matches what was observed.
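The DDL for lineitem_parquet is not shown in the post; a minimal sketch, assuming it simply mirrors the schema of the text table:

create table lineitem_parquet like lineitem stored as parquet;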
Save the lineitem table in Parquet format:
[jfp4-:] > insert overwrite lineitem_parquet select * from lineitem;
Query: insert overwrite lineitem_parquet select * from lineitem
Inserted rows in .52s
This SQL involves conversion to Parquet files plus Snappy compression, so it is a mixed workload (I/O-intensive and CPU-intensive). The observed average cluster disk read rate was about 210 MB/s, giving an estimated completion time of 776 GB / 0.21 GB/s ≈ 3800 s, which matched expectations.
Given the observed write rate of about 140 MB/s, the Parquet data written is roughly 3800 s × 0.14 GB/s ≈ 532 GB; dividing by the replication factor of 3 gives about 180 GB.
jfp4-:/mnt/disk1/tpch_2_17_0/dbgen # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
/user/hive/warehouse/tpch.db/q1_pricing_summary_report
The actual Parquet size is about 200 GB, in line with the estimate.
Count the records again:
[jfp4-:] > select count(*) from lineitem_parquet;
Query: select count(*) from lineitem_parquet
+------------+
| count(*) |
+------------+
| |
+------------+
Returned row(s) in .04s
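Q1 below writes into q1_pricing_summary_report, whose DDL the post never shows. A sketch modeled on the commonly used TPC-H-for-Hive/Impala scripts (the column names are assumptions, not taken from this post):

create table q1_pricing_summary_report (l_returnflag string, l_linestatus string, sum_qty double, sum_base_price double, sum_disc_price double, sum_charge double, ave_qty double, ave_price double, ave_disc double, count_order int);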
Run Q1 against the text file format:
[jfp4-:] > -- the query
> INSERT OVERWRITE TABLE q1_pricing_summary_report
> SELECT
> L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT() as int)
> FROM
> lineitem
> WHERE
> L_SHIPDATE<='1998-09-02'
> GROUP BY L_RETURNFLAG, L_LINESTATUS
> ORDER BY L_RETURNFLAG, L_LINESTATUS
> LIMIT ;
Query: INSERT OVERWRITE TABLE q1_pricing_summary_report SELECT L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT() as int) FROM lineitem WHERE L_SHIPDATE<='1998-09-02' GROUP BY L_RETURNFLAG, L_LINESTATUS ORDER BY L_RETURNFLAG, L_LINESTATUS LIMIT
^C[jfp4-:] > INSERT OVERWRITE TABLE q1_pricing_summary_report
> SELECT
> L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT() as int)
> FROM
> lineitem
> WHERE
> L_SHIPDATE<='1998-09-02'
> GROUP BY L_RETURNFLAG, L_LINESTATUS
> ORDER BY L_RETURNFLAG, L_LINESTATUS
> LIMIT ;
Query: insert OVERWRITE TABLE q1_pricing_summary_report SELECT L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT() as int) FROM lineitem WHERE L_SHIPDATE<='1998-09-02' GROUP BY L_RETURNFLAG, L_LINESTATUS ORDER BY L_RETURNFLAG, L_LINESTATUS LIMIT
Inserted rows in .57s
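The report can then be read back with an ordinary query (a hypothetical follow-up, not shown in the original):

select * from q1_pricing_summary_report;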
Look at the query plan:
[jfp4-:] > explain INSERT OVERWRITE TABLE q1_pricing_summary_report
> SELECT
> L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT() as int)
> FROM
> lineitem
> WHERE
> L_SHIPDATE<='1998-09-02'
> GROUP BY L_RETURNFLAG, L_LINESTATUS
> ORDER BY L_RETURNFLAG, L_LINESTATUS
> LIMIT ;
Query: explain INSERT OVERWRITE TABLE q1_pricing_summary_report SELECT L_RETURNFLAG, L_LINESTATUS, SUM(L_QUANTITY), SUM(L_EXTENDEDPRICE), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)), SUM(L_EXTENDEDPRICE*(1-L_DISCOUNT)*(1+L_TAX)), AVG(L_QUANTITY), AVG(L_EXTENDEDPRICE), AVG(L_DISCOUNT), cast(COUNT() as int) FROM lineitem WHERE L_SHIPDATE<='1998-09-02' GROUP BY L_RETURNFLAG, L_LINESTATUS ORDER BY L_RETURNFLAG, L_LINESTATUS LIMIT
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Explain String |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=.13GB VCores= |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| tpch.lineitem |
| |
| WRITE TO HDFS [tpch.q1_pricing_summary_report, OVERWRITE=true] |
| | partitions= |
| | |
| :TOP-N [LIMIT=] |
| | order by: L_RETURNFLAG ASC, L_LINESTATUS ASC |
| | |
| :EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| :TOP-N [LIMIT=] |
| | order by: L_RETURNFLAG ASC, L_LINESTATUS ASC |
| | |
| :AGGREGATE [MERGE FINALIZE] |
| | output: sum(sum(L_QUANTITY)), sum(sum(L_EXTENDEDPRICE)), sum(sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT))), sum(sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT) * (1.0 + L_TAX))), sum(count(L_QUANTITY)), sum(count(L_EXTENDEDPRICE)), sum(sum(L_DISCOUNT)), sum(count(L_DISCOUNT)), sum(count()) |
| | group by: L_RETURNFLAG, L_LINESTATUS |
| | |
| :EXCHANGE [PARTITION=HASH(L_RETURNFLAG,L_LINESTATUS)] |
| | |
| :AGGREGATE |
| | output: sum(L_QUANTITY), sum(L_EXTENDEDPRICE), sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT)), sum(L_EXTENDEDPRICE * (1.0 - L_DISCOUNT) * (1.0 + L_TAX)), count(L_QUANTITY), count(L_EXTENDEDPRICE), sum(L_DISCOUNT), count(L_DISCOUNT), count() |
| | group by: L_RETURNFLAG, L_LINESTATUS |
| | |
| :SCAN HDFS [tpch.lineitem] |
| partitions=/ size=.30GB |
| predicates: L_SHIPDATE <= '1998-09-02' |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Returned row(s) in .15s
Compute statistics for the table:
[jfp4-:] > compute stats lineitem;
Query: compute stats lineitem
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated partition(s) and column(s). |
+------------------------------------------+
Returned row(s) in .34s
The result shows that compute stats is remarkably time-consuming! For the first 15 minutes of the run, disk I/O was very high, around 900 MB/s, with essentially every disk in the cluster reading at full capacity; after that, I/O stayed around 130 MB/s. Clearly compute stats is an expensive operation.
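Once compute stats finishes, the collected statistics can be inspected with Impala's standard commands:

show table stats lineitem;
show column stats lineitem;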
Now compute stats on the Parquet table:
[jfp4-1:21000] > compute stats lineitem_parquet;
Query: compute stats lineitem_parquet
Query aborted.
[jfp4-1:21000] > SET
> NUM_SCANNER_THREADS=2
> ;
NUM_SCANNER_THREADS set to 2
[jfp4-1:21000] > compute stats lineitem_parquet;
Query: compute stats lineitem_parquet
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 16 column(s). |
+------------------------------------------+
Returned 1 row(s) in 5176.29s
[jfp4-1:21000] >
Note that NUM_SCANNER_THREADS must be set (lowered) for compute stats on the Parquet table to succeed; the first attempt aborted.
Examine how Snappy compression affects Parquet table size and query efficiency:
[jfp4-:] > set PARQUET_COMPRESSION_CODEC=snappy;
PARQUET_COMPRESSION_CODEC set to snappy
[jfp4-:] > create table lineitem_parquet_snappy (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET;
Query: create table lineitem_parquet_snappy (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET
Returned row(s) in .30s
[jfp4-1:21000] > insert overwrite lineitem_parquet_snappy select * from lineitem;
Query: insert overwrite lineitem_parquet_snappy select * from lineitem
Inserted 6144008876 rows in 3836.99s
Check the size of the Snappy table:
jfp4-:~ # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet_snappy
/user/hive/warehouse/tpch.db/q1_pricing_summary_report
lineitem_parquet_snappy and lineitem_parquet are exactly the same size, which shows that Impala's Parquet tables are Snappy-compressed by default.
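The post jumps straight to loading lineitem_parquet_raw, so presumably the codec was switched off and the table created beforehand. A sketch of the assumed preceding steps:

set PARQUET_COMPRESSION_CODEC=none;
create table lineitem_parquet_raw like lineitem stored as parquet;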
[jfp4-:] > insert overwrite lineitem_parquet_raw select * from lineitem;
Query: insert overwrite lineitem_parquet_raw select * from lineitem
Inserted rows in .22s
Snappy + Parquet even saves some time on writes compared with uncompressed Parquet!
Check the size of the raw (uncompressed) Parquet table:
jfp4-:~ # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
319.2 G /user/hive/warehouse/tpch.db/lineitem_parquet_raw
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet_snappy
/user/hive/warehouse/tpch.db/q1_pricing_summary_report
Now look at the effect of gzip + Parquet:
[jfp4-:] > set PARQUET_COMPRESSION_CODEC=gzip;
PARQUET_COMPRESSION_CODEC set to gzip
[jfp4-:] > create table lineitem_parquet_gzip (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET;
Query: create table lineitem_parquet_gzip (L_ORDERKEY INT, L_PARTKEY INT, L_SUPPKEY INT, L_LINENUMBER INT, L_QUANTITY DOUBLE, L_EXTENDEDPRICE DOUBLE, L_DISCOUNT DOUBLE, L_TAX DOUBLE, L_RETURNFLAG STRING, L_LINESTATUS STRING, L_SHIPDATE STRING, L_COMMITDATE STRING, L_RECEIPTDATE STRING, L_SHIPINSTRUCT STRING, L_SHIPMODE STRING, L_COMMENT STRING) STORED AS PARQUET Returned row(s) in .26s
[jfp4-:] > insert overwrite lineitem_parquet_gzip select * from lineitem;
Query: insert overwrite lineitem_parquet_gzip select * from lineitem
Inserted rows in .71s
jfp4-:~ # hdfs dfs -du -h /user/hive/warehouse/tpch.db
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet
155.1 G /user/hive/warehouse/tpch.db/lineitem_parquet_gzip
319.2 G /user/hive/warehouse/tpch.db/lineitem_parquet_raw
200.9 G /user/hive/warehouse/tpch.db/lineitem_parquet_snappy
/user/hive/warehouse/tpch.db/q1_pricing_summary_report
[jfp4-:] > select count(*) from lineitem_parquet_gzip;
Query: select count(*) from lineitem_parquet_gzip
+------------+
| count(*) |
+------------+
| |
+------------+
Returned row(s) in .54s
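Putting the sizes together: the 776 GB of raw text shrinks to 319.2 GB as uncompressed Parquet (roughly 2.4x, from Parquet encoding alone), to 200.9 GB with Snappy (roughly 3.9x), and to 155.1 GB with gzip (roughly 5.0x).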