1. Environment

  • OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)
  • Hadoop: 2.4.1
  • Hive: 0.11.0
  • JDK: 1.7.0_60
  • Spark: 1.1.0 (SparkSQL is bundled)
  • Scala: 2.11.2

2. Spark Cluster Layout

  • Account: ebupt
  • Master: eb174
  • Slaves: eb174, eb175, eb176

3. A Brief History of SparkSQL

Spark 1.1.0 was released on September 11, 2014. SparkSQL was first introduced in Spark 1.0, and the largest changes in 1.1.0 are in SparkSQL and MLlib; see the release notes for details.

SparkSQL's predecessor is Shark. Because of Shark's own shortcomings, Reynold Xin announced on June 1, 2014 that development of Shark would stop. SparkSQL abandoned Shark's code base but kept some of its strengths, such as in-memory columnar storage and Hive compatibility, and was rewritten from scratch.

4. Configuration

  1. Install and configure Spark the same way as Spark 0.9.1 (see the earlier post "Spark、Shark集群安装部署及遇到的问题解决").
  2. Copy the $HIVE_HOME/conf/hive-site.xml configuration file into $SPARK_HOME/conf.
  3. Copy the $HADOOP_HOME/etc/hadoop/hdfs-site.xml configuration file into $SPARK_HOME/conf.
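Steps 2 and 3 above can be sketched as two copy commands (a minimal sketch, assuming HIVE_HOME, HADOOP_HOME, and SPARK_HOME are all exported in the environment):

```shell
# Make Hive's metastore settings and Hadoop's HDFS nameservice
# settings visible to Spark by copying them into Spark's conf dir.
cp "$HIVE_HOME/conf/hive-site.xml"         "$SPARK_HOME/conf/"
cp "$HADOOP_HOME/etc/hadoop/hdfs-site.xml" "$SPARK_HOME/conf/"
```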

5. Running

  1. Start the Spark cluster.
  2. Start the SparkSQL client: ./spark/bin/spark-sql --master spark://eb174:7077 --executor-memory 3g
  3. Run SQL against a Hive table: spark-sql> select count(*) from test.t1;
14/10/08 20:46:04 INFO ParseDriver: Parsing command: select count(*) from test.t1
14/10/08 20:46:05 INFO ParseDriver: Parse Completed
14/10/08 20:46:05 INFO metastore: Trying to connect to metastore with URI thrift://eb170:9083
14/10/08 20:46:05 INFO metastore: Waiting 1 seconds before next connection attempt.
14/10/08 20:46:06 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@eb174:55408/user/Executor#1282322316] with ID 2
14/10/08 20:46:06 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@eb176:56138/user/Executor#-264112470] with ID 0
14/10/08 20:46:06 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@eb175:43791/user/Executor#-996481867] with ID 1
14/10/08 20:46:06 INFO BlockManagerMasterActor: Registering block manager eb174:54967 with 265.4 MB RAM
14/10/08 20:46:06 INFO BlockManagerMasterActor: Registering block manager eb176:60783 with 265.4 MB RAM
14/10/08 20:46:06 INFO BlockManagerMasterActor: Registering block manager eb175:35197 with 265.4 MB RAM
14/10/08 20:46:06 INFO metastore: Connected to metastore.
14/10/08 20:46:07 INFO deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/10/08 20:46:07 INFO MemoryStore: ensureFreeSpace(406982) called with curMem=0, maxMem=278302556
14/10/08 20:46:07 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 397.4 KB, free 265.0 MB)
14/10/08 20:46:07 INFO MemoryStore: ensureFreeSpace(25198) called with curMem=406982, maxMem=278302556
14/10/08 20:46:07 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 24.6 KB, free 265.0 MB)
14/10/08 20:46:07 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on eb174:49971 (size: 24.6 KB, free: 265.4 MB)
14/10/08 20:46:07 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
14/10/08 20:46:07 INFO SparkContext: Starting job: collect at HiveContext.scala:415
14/10/08 20:46:08 INFO FileInputFormat: Total input paths to process : 1
14/10/08 20:46:08 INFO DAGScheduler: Registering RDD 5 (mapPartitions at Exchange.scala:86)
14/10/08 20:46:08 INFO DAGScheduler: Got job 0 (collect at HiveContext.scala:415) with 1 output partitions (allowLocal=false)
14/10/08 20:46:08 INFO DAGScheduler: Final stage: Stage 0(collect at HiveContext.scala:415)
14/10/08 20:46:08 INFO DAGScheduler: Parents of final stage: List(Stage 1)
14/10/08 20:46:08 INFO DAGScheduler: Missing parents: List(Stage 1)
14/10/08 20:46:08 INFO DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[5] at mapPartitions at Exchange.scala:86), which has no missing parents
14/10/08 20:46:08 INFO MemoryStore: ensureFreeSpace(11000) called with curMem=432180, maxMem=278302556
14/10/08 20:46:08 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 10.7 KB, free 265.0 MB)
14/10/08 20:46:08 INFO MemoryStore: ensureFreeSpace(5567) called with curMem=443180, maxMem=278302556
14/10/08 20:46:08 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 5.4 KB, free 265.0 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on eb174:49971 (size: 5.4 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
14/10/08 20:46:08 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MapPartitionsRDD[5] at mapPartitions at Exchange.scala:86)
14/10/08 20:46:08 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
14/10/08 20:46:08 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 0, eb174, NODE_LOCAL, 1199 bytes)
14/10/08 20:46:08 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 1, eb176, NODE_LOCAL, 1199 bytes)
14/10/08 20:46:08 INFO ConnectionManager: Accepted connection from [eb176/10.1.69.176:49289]
14/10/08 20:46:08 INFO ConnectionManager: Accepted connection from [eb174/10.1.69.174:33401]
14/10/08 20:46:08 INFO SendingConnection: Initiating connection to [eb176/10.1.69.176:60783]
14/10/08 20:46:08 INFO SendingConnection: Initiating connection to [eb174/10.1.69.174:54967]
14/10/08 20:46:08 INFO SendingConnection: Connected to [eb176/10.1.69.176:60783], 1 messages pending
14/10/08 20:46:08 INFO SendingConnection: Connected to [eb174/10.1.69.174:54967], 1 messages pending
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on eb176:60783 (size: 5.4 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on eb174:54967 (size: 5.4 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on eb174:54967 (size: 24.6 KB, free: 265.4 MB)
14/10/08 20:46:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on eb176:60783 (size: 24.6 KB, free: 265.4 MB)
14/10/08 20:46:10 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 1) in 2657 ms on eb176 (1/2)
14/10/08 20:46:10 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 0) in 2675 ms on eb174 (2/2)
14/10/08 20:46:10 INFO DAGScheduler: Stage 1 (mapPartitions at Exchange.scala:86) finished in 2.680 s
14/10/08 20:46:10 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
14/10/08 20:46:10 INFO DAGScheduler: looking for newly runnable stages
14/10/08 20:46:10 INFO DAGScheduler: running: Set()
14/10/08 20:46:10 INFO DAGScheduler: waiting: Set(Stage 0)
14/10/08 20:46:10 INFO DAGScheduler: failed: Set()
14/10/08 20:46:10 INFO DAGScheduler: Missing parents for Stage 0: List()
14/10/08 20:46:10 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[9] at map at HiveContext.scala:360), which is now runnable
14/10/08 20:46:10 INFO MemoryStore: ensureFreeSpace(9752) called with curMem=448747, maxMem=278302556
14/10/08 20:46:10 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 9.5 KB, free 265.0 MB)
14/10/08 20:46:10 INFO MemoryStore: ensureFreeSpace(4941) called with curMem=458499, maxMem=278302556
14/10/08 20:46:10 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 4.8 KB, free 265.0 MB)
14/10/08 20:46:10 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on eb174:49971 (size: 4.8 KB, free: 265.4 MB)
14/10/08 20:46:10 INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
14/10/08 20:46:11 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[9] at map at HiveContext.scala:360)
14/10/08 20:46:11 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/10/08 20:46:11 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 2, eb175, PROCESS_LOCAL, 948 bytes)
14/10/08 20:46:11 INFO StatsReportListener: Finished stage: org.apache.spark.scheduler.StageInfo@513f39c
14/10/08 20:46:11 INFO StatsReportListener: task runtime:(count: 2, mean: 2666.000000, stdev: 9.000000, max: 2675.000000, min: 2657.000000)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s 2.7 s
14/10/08 20:46:11 INFO StatsReportListener: shuffle bytes written:(count: 2, mean: 50.000000, stdev: 0.000000, max: 50.000000, min: 50.000000)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B 50.0 B
14/10/08 20:46:11 INFO StatsReportListener: task result size:(count: 2, mean: 1848.000000, stdev: 0.000000, max: 1848.000000, min: 1848.000000)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B 1848.0 B
14/10/08 20:46:11 INFO StatsReportListener: executor (non-fetch) time pct: (count: 2, mean: 86.309428, stdev: 0.103820, max: 86.413248, min: 86.205607)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 86 % 86 % 86 % 86 % 86 % 86 % 86 % 86 % 86 %
14/10/08 20:46:11 INFO StatsReportListener: other time pct: (count: 2, mean: 13.690572, stdev: 0.103820, max: 13.794393, min: 13.586752)
14/10/08 20:46:11 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:11 INFO StatsReportListener: 14 % 14 % 14 % 14 % 14 % 14 % 14 % 14 % 14 %
14/10/08 20:46:11 INFO ConnectionManager: Accepted connection from [eb175/10.1.69.175:36187]
14/10/08 20:46:11 INFO SendingConnection: Initiating connection to [eb175/10.1.69.175:35197]
14/10/08 20:46:11 INFO SendingConnection: Connected to [eb175/10.1.69.175:35197], 1 messages pending
14/10/08 20:46:11 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on eb175:35197 (size: 4.8 KB, free: 265.4 MB)
14/10/08 20:46:12 INFO MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 0 to sparkExecutor@eb175:58085
14/10/08 20:46:12 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 140 bytes
14/10/08 20:46:12 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 2) in 1428 ms on eb175 (1/1)
14/10/08 20:46:12 INFO DAGScheduler: Stage 0 (collect at HiveContext.scala:415) finished in 1.432 s
14/10/08 20:46:12 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/10/08 20:46:12 INFO StatsReportListener: Finished stage: org.apache.spark.scheduler.StageInfo@6e8030b0
14/10/08 20:46:12 INFO StatsReportListener: task runtime:(count: 1, mean: 1428.000000, stdev: 0.000000, max: 1428.000000, min: 1428.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s 1.4 s
14/10/08 20:46:12 INFO StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
14/10/08 20:46:12 INFO StatsReportListener: remote bytes read:(count: 1, mean: 100.000000, stdev: 0.000000, max: 100.000000, min: 100.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B 100.0 B
14/10/08 20:46:12 INFO SparkContext: Job finished: collect at HiveContext.scala:415, took 4.787407158 s
14/10/08 20:46:12 INFO StatsReportListener: task result size:(count: 1, mean: 1072.000000, stdev: 0.000000, max: 1072.000000, min: 1072.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B 1072.0 B
14/10/08 20:46:12 INFO StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 80.252101, stdev: 0.000000, max: 80.252101, min: 80.252101)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 80 % 80 % 80 % 80 % 80 % 80 % 80 % 80 % 80 %
14/10/08 20:46:12 INFO StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000000, max: 0.000000, min: 0.000000)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 % 0 %
14/10/08 20:46:12 INFO StatsReportListener: other time pct: (count: 1, mean: 19.747899, stdev: 0.000000, max: 19.747899, min: 19.747899)
14/10/08 20:46:12 INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
14/10/08 20:46:12 INFO StatsReportListener: 20 % 20 % 20 % 20 % 20 % 20 % 20 % 20 % 20 %
5078
Time taken: 7.581 seconds
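Besides the interactive prompt used above, the same query can be run non-interactively. This is a hedged sketch: the -e and -f flags are inherited from the Hive CLI that spark-sql wraps, so treat their availability in this Spark version as an assumption:

```shell
# Run a single statement and exit; convenient for scripting.
./spark/bin/spark-sql --master spark://eb174:7077 \
    -e "select count(*) from test.t1"

# Or run a file of statements (count_t1.sql is a hypothetical name):
# ./spark/bin/spark-sql --master spark://eb174:7077 -f count_t1.sql
```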

Notes:

  1. If no master is specified when starting spark-sql, it runs in local mode; --master can point at either a standalone master address or YARN.
  2. When the master is set to YARN (spark-sql --master yarn), the whole job can be monitored at http://$master:8088.
  3. If spark.master spark://eb174:7077 is configured in $SPARK_HOME/conf/spark-defaults.conf, then spark-sql runs on the standalone cluster even when started without a master.
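Note 3 can be sketched as a one-line append (the eb174:7077 address comes from the cluster layout in section 2; adjust for your master):

```shell
# Persist the default master so spark-sql no longer needs --master.
echo "spark.master    spark://eb174:7077" \
    >> "$SPARK_HOME/conf/spark-defaults.conf"
```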

6. Problems Encountered and Solutions

① Running a SQL statement from the spark-sql CLI fails with UnknownHostException: ebcloud (ebcloud is Hadoop's dfs.nameservices logical name, not a real host)

14/10/08 20:42:44 ERROR CliDriver: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, eb174): java.lang.IllegalArgumentException: java.net.UnknownHostException: ebcloud
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:144)
org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)

Cause: Spark could not resolve the address of the HDFS nameservice. Fix: copy Hadoop's hdfs-site.xml into $SPARK_HOME/conf (step 3 of section 4).

② After copying the file on the master only, a second, related failure appeared:

14/10/08 20:26:46 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
14/10/08 20:26:46 INFO SparkContext: Starting job: collect at HiveContext.scala:415
14/10/08 20:29:19 WARN RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo over eb171/10.1.69.171:8020. Not retrying because failovers (15) exceeded maximum allowed (15)
java.net.ConnectException: Call From eb174/10.1.69.174 to eb171:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)

Cause: the HDFS connection failed because hdfs-site.xml had not been synced to all of the slave nodes; it must be present under $SPARK_HOME/conf on every node.
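Syncing the file to every node can be sketched as a small loop (a minimal sketch, assuming passwordless SSH for the ebupt account and the host names from section 2):

```shell
# Push hdfs-site.xml to each node so every executor can resolve
# the HDFS nameservice locally instead of failing over to eb171:8020.
for host in eb174 eb175 eb176; do
    scp "$SPARK_HOME/conf/hdfs-site.xml" "ebupt@${host}:${SPARK_HOME}/conf/"
done
```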
