Explanation and testing of order by, sort by, distribute by, and cluster by in Hive
Conclusions:
order by: global sort. It is the only one of the four sorting mechanisms whose terminal output is visibly ordered across the entire result set. It always runs a single reduce task, which can make that reducer run for a long time; in strict mode a limit clause is required.
sort by: can run with multiple reduce tasks; each reducer sorts its own portion of the data, ascending by default.
distribute by: controls how the map output is partitioned across reducers. It is usually combined with sort by to sort within each partition on a chosen column.
cluster by: when the distribute by column and the sort by column are the same and the default ascending order is used, cluster by can replace the distribute by + sort by pair; this deprives sort by of its parallelism, and even then it does not guarantee that the output is globally ordered (this is what the test below shows).
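For reference, under the conditions just described the two statements below should produce the same result (a minimal sketch only, using the emp table queried in the tests that follow):

-- distribute by and sort by on the same column with the default ascending order
select empno,ename,sal from emp distribute by sal sort by sal asc;
-- the cluster by shorthand for the same thing
select empno,ename,sal from emp cluster by sal;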
1. order by global sort test:
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp order by sal asc limit 20;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp order by sal asc limit 20;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0059, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0059/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0059
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 1
2017-07-30 02:21:58,594 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:22:59,734 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:24:00,084 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:25:00,859 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:25:28,846 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 123.57 sec
2017-07-30 02:26:03,306 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 128.97 sec
MapReduce Total cumulative CPU time: 2 minutes 8 seconds 970 msec
Ended Job = job_1501198522682_0059
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 1 Cumulative CPU: 128.97 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 8 seconds 970 msec
OK
empno ename sal
7369 SMITH 800.0
7900 JAMES 950.0
7876 ADAMS 1100.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7934 MILLER 1300.0
7844 TURNER 1500.0
7499 ALLEN 1600.0
7782 CLARK 2450.0
7698 BLAKE 2850.0
7566 JONES 2975.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7839 KING 5000.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
Time taken: 442.499 seconds, Fetched: 16 row(s)
From the job log we can see that this query launched 3 map tasks and 1 reduce task, and the output is sorted by sal in ascending order across the whole result set. Note that we set the number of reducers to 3 before the job started, yet only one reduce task was actually launched: order by forces a single reducer in order to produce a global sort. When the data set is large, that single reduce task becomes the performance bottleneck of the job.
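The strict-mode restriction mentioned in the conclusions can be checked directly (a sketch only, not part of the test run above; the exact error text may vary by Hive version):

set hive.mapred.mode=strict;
-- rejected: in strict mode an order by query must also specify limit
select empno,ename,sal from emp order by sal asc;
-- accepted: limit is present
select empno,ename,sal from emp order by sal asc limit 20;
set hive.mapred.mode=nonstrict;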
2. sort by test:
==============================================
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp sort by sal asc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0063, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0063/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0063
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:40:32,898 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:41:18,531 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 5.05 sec
2017-07-30 02:41:27,062 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 17.9 sec
2017-07-30 02:42:15,867 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 28.39 sec
2017-07-30 02:42:18,655 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 30.31 sec
2017-07-30 02:42:20,045 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.92 sec
MapReduce Total cumulative CPU time: 33 seconds 920 msec
Ended Job = job_1501198522682_0063
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 40.36 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 40 seconds 360 msec
OK
empno ename sal
7654 MARTIN 1250.0
7844 TURNER 1500.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7900 JAMES 950.0
7521 WARD 1250.0
7934 MILLER 1300.0
7499 ALLEN 1600.0
7782 CLARK 2450.0
7566 JONES 2975.0
7902 FORD 3000.0
7788 SCOTT 3000.0
7839 KING 5000.0
7369 SMITH 800.0
7876 ADAMS 1100.0
7698 BLAKE 2850.0
Time taken: 136.972 seconds, Fetched: 16 row(s)
Sorting with sort by sal asc launched 3 map tasks and 3 reduce tasks, which matches the configuration set beforehand. The output is ordered only within each reducer's block, which confirms that sort by sorts inside each reducer rather than globally.
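Note that if sort by is forced onto a single reducer it degenerates into a global sort, giving the same ordering as order by (a sketch only, not run in this test):

set mapred.reduce.tasks=1;
-- with only one reducer, the per-reducer sort is effectively a global sort
select empno,ename,sal from emp sort by sal asc;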
============================================
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp sort by sal desc;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp sort by sal desc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0064, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0064/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0064
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:43:57,741 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:44:41,415 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 4.27 sec
2017-07-30 02:44:44,267 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 9.69 sec
2017-07-30 02:44:46,779 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 14.64 sec
2017-07-30 02:45:27,003 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 17.61 sec
2017-07-30 02:45:28,551 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 22.5 sec
2017-07-30 02:45:29,834 Stage-1 map = 100%, reduce = 56%, Cumulative CPU 24.54 sec
2017-07-30 02:45:32,660 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 29.26 sec
2017-07-30 02:45:35,291 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 35.0 sec
2017-07-30 02:45:37,542 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 36.92 sec
MapReduce Total cumulative CPU time: 36 seconds 920 msec
Ended Job = job_1501198522682_0064
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 36.92 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 36 seconds 920 msec
OK
empno ename sal
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7844 TURNER 1500.0
7654 MARTIN 1250.0
7839 KING 5000.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7566 JONES 2975.0
7782 CLARK 2450.0
7499 ALLEN 1600.0
7934 MILLER 1300.0
7521 WARD 1250.0
7900 JAMES 950.0
7698 BLAKE 2850.0
7876 ADAMS 1100.0
7369 SMITH 800.0
Time taken: 132.663 seconds, Fetched: 16 row(s)
Sorting with sort by sal desc launched 3 map tasks and 3 reduce tasks, again matching the configuration set beforehand. The output is ordered (descending) only within each reducer's block, which again confirms that sort by sorts inside each reducer.
======================================================================
3. distribute by test:
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp distribute by sal sort by sal asc;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp distribute by sal sort by sal asc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0067, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0067/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0067
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:52:14,833 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:53:04,015 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 6.0 sec
2017-07-30 02:53:05,393 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 11.35 sec
2017-07-30 02:53:06,657 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 11.72 sec
2017-07-30 02:53:46,542 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 20.83 sec
2017-07-30 02:53:47,981 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 23.77 sec
2017-07-30 02:53:50,909 Stage-1 map = 100%, reduce = 70%, Cumulative CPU 27.81 sec
2017-07-30 02:53:54,581 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 37.24 sec
MapReduce Total cumulative CPU time: 37 seconds 240 msec
Ended Job = job_1501198522682_0067
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 39.56 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 39 seconds 560 msec
OK
empno ename sal
7876 ADAMS 1100.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7782 CLARK 2450.0
7566 JONES 2975.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7934 MILLER 1300.0
7499 ALLEN 1600.0
7698 BLAKE 2850.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7839 KING 5000.0
7369 SMITH 800.0
7900 JAMES 950.0
7844 TURNER 1500.0
Time taken: 129.0 seconds, Fetched: 16 row(s)
distribute by sal sort by sal asc launched 3 map tasks and 3 reduce tasks, matching the configuration set beforehand. The output is ordered only within each reducer's block, consistent with sort by sorting inside each reducer. In addition, rows with the same sal value appear next to each other, which shows that distribute by sal routes each distinct sal value to a single reducer.
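In practice distribute by is most useful when the partitioning column differs from the sort column, for example grouping each department onto one reducer and then sorting that department by salary (a sketch only; it assumes emp also has the usual deptno column, which is not selected in the tests above):

set mapred.reduce.tasks=3;
-- rows with the same deptno go to the same reducer; each reducer then sorts its rows by sal
select deptno,empno,ename,sal from emp distribute by deptno sort by sal desc;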
============================================================
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp distribute by sal sort by sal desc;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp distribute by sal sort by sal desc;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0068, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0068/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0068
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:55:49,989 Stage-1 map = 0%, reduce = 0%
2017-07-30 02:56:29,911 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 4.4 sec
2017-07-30 02:56:37,401 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 16.42 sec
2017-07-30 02:57:17,118 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 19.05 sec
2017-07-30 02:57:21,673 Stage-1 map = 100%, reduce = 24%, Cumulative CPU 21.07 sec
2017-07-30 02:57:24,397 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 23.17 sec
2017-07-30 02:57:25,844 Stage-1 map = 100%, reduce = 56%, Cumulative CPU 26.7 sec
2017-07-30 02:57:27,134 Stage-1 map = 100%, reduce = 78%, Cumulative CPU 30.89 sec
2017-07-30 02:57:28,508 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 32.52 sec
2017-07-30 02:57:29,728 Stage-1 map = 100%, reduce = 83%, Cumulative CPU 34.62 sec
2017-07-30 02:57:32,582 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 39.17 sec
MapReduce Total cumulative CPU time: 39 seconds 170 msec
Ended Job = job_1501198522682_0068
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 39.17 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 39 seconds 170 msec
OK
empno ename sal
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7566 JONES 2975.0
7782 CLARK 2450.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7876 ADAMS 1100.0
7839 KING 5000.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7698 BLAKE 2850.0
7499 ALLEN 1600.0
7934 MILLER 1300.0
7844 TURNER 1500.0
7900 JAMES 950.0
7369 SMITH 800.0
Time taken: 135.076 seconds, Fetched: 16 row(s)
distribute by sal sort by sal desc launched 3 map tasks and 3 reduce tasks, matching the configuration set beforehand. The output is ordered (descending) only within each reducer's block, again consistent with sort by sorting inside each reducer, and rows with the same sal value still land on the same reducer because of distribute by sal.
=========================================================
4. cluster by test:
set mapred.max.split.size=200;
set mapred.reduce.tasks=3;
select empno,ename,sal from emp cluster by sal;
hive (chavin)> set mapred.max.split.size=200;
hive (chavin)> set mapred.reduce.tasks=3;
hive (chavin)> select empno,ename,sal from emp cluster by sal;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Defaulting to jobconf value of: 3
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1501198522682_0069, Tracking URL = http://chavin.king:8088/proxy/application_1501198522682_0069/
Kill Command = /opt/cdh-5.3.6/hadoop-2.5.0-cdh5.3.6/bin/hadoop job -kill job_1501198522682_0069
Hadoop job information for Stage-1: number of mappers: 3; number of reducers: 3
2017-07-30 02:59:19,969 Stage-1 map = 0%, reduce = 0%
2017-07-30 03:00:00,105 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 5.02 sec
2017-07-30 03:00:02,559 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 9.55 sec
2017-07-30 03:00:03,736 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 14.01 sec
2017-07-30 03:00:40,134 Stage-1 map = 100%, reduce = 22%, Cumulative CPU 18.03 sec
2017-07-30 03:00:41,421 Stage-1 map = 100%, reduce = 44%, Cumulative CPU 21.22 sec
2017-07-30 03:00:44,508 Stage-1 map = 100%, reduce = 81%, Cumulative CPU 27.56 sec
2017-07-30 03:00:45,937 Stage-1 map = 100%, reduce = 92%, Cumulative CPU 31.57 sec
2017-07-30 03:00:47,208 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 33.51 sec
MapReduce Total cumulative CPU time: 33 seconds 510 msec
Ended Job = job_1501198522682_0069
MapReduce Jobs Launched:
Stage-Stage-1: Map: 3 Reduce: 3 Cumulative CPU: 33.51 sec HDFS Read: 2119 HDFS Write: 299 SUCCESS
Total MapReduce CPU Time Spent: 33 seconds 510 msec
OK
empno ename sal
7876 ADAMS 1100.0
7654 MARTIN 1250.0
7521 WARD 1250.0
7782 CLARK 2450.0
7566 JONES 2975.0
8888 王崇闻 300000.0
8888 ChavinKing 300000.0
7934 MILLER 1300.0
7499 ALLEN 1600.0
7698 BLAKE 2850.0
7788 SCOTT 3000.0
7902 FORD 3000.0
7839 KING 5000.0
7369 SMITH 800.0
7900 JAMES 950.0
7844 TURNER 1500.0
Time taken: 119.103 seconds, Fetched: 16 row(s)
cluster by can replace the distribute by + sort by combination under the conditions described in the conclusions. The query launched 3 map tasks and 3 reduce tasks, matching the configuration set beforehand, and the output is ordered only within each reducer's block; in fact it is identical to the output of the distribute by sal sort by sal asc test above.
Test conclusions: of the four sorting methods above, only order by actually achieves a global sort. To turn sort by's per-reducer ordering into a global ordering, its result still has to go through an order by pass. distribute by and cluster by group identical key values onto the same reducer, but they do not deliver the globally sorted output they are sometimes advertised to; there may be other ways to get there, but not through these two clauses on their own.
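One way to apply that extra order by pass is to wrap the sort by query in a subquery (a sketch only, reusing the emp table from the tests above):

-- inner query sorts per reducer; outer order by produces the final global ordering
select empno,ename,sal
from (select empno,ename,sal from emp distribute by sal sort by sal asc) t
order by sal asc
limit 20;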