HBase Benchmarking
Run the command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation
Output:
[root@node1 /]# hbase org.apache.hadoop.hbase.PerformanceEvaluation
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
// :: INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
Usage: java org.apache.hadoop.hbase.PerformanceEvaluation \
  <OPTIONS> [-D<property=value>]* <command> <nclients>
Options:
 nomapred        Run multiple clients using threads (rather than use mapreduce)
 rows            Rows each client runs. Default:
 size            Total size in GiB. Mutually exclusive with --rows. Default: 1.0.
 sampleRate      Execute test on a sample of total rows. Only supported by randomRead. Default: 1.0
 traceRate       Enable HTrace spans. Initiate tracing every N rows. Default:
 table           Alternate table name. Default: 'TestTable'
 multiGet        If >, when doing RandomRead, perform multiple gets instead of single gets. Default:
 compress        Compression type to use (GZ, LZO, ...). Default: 'NONE'
 flushCommits    Used to determine if the test should flush the table. Default: false
 writeToWAL      Set writeToWAL on puts. Default: True
 autoFlush       Set autoFlush on htable. Default: False
 oneCon          all the threads share the same connection. Default: False
 presplit        Create presplit table. If a table with same name exists, it'll be deleted and recreated (instead of verifying count of its existing regions). Recommended for accurate perf analysis (see guide). Default: disabled
 inmemory        Tries to keep the HFiles of the CF inmemory as far as possible. Not guaranteed that reads are always served from memory. Default: false
 usetags         Writes tags along with KVs. Use with HFile V3. Default: false
 numoftags       Specify the no of tags that would be needed. This works only
 filterAll       Helps to filter out all the rows on the server side there by not returning any thing back to the client. Helps to check the server side performance. Uses FilterAllFilter internally.
 latency         Set to report operation latencies. Default: False
 bloomFilter     Bloom filter type, one of [NONE, ROW, ROWCOL]
 blockEncoding   Block encoding to use. Value should be one of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE]. Default: NONE
 valueSize       Pass value size to use: Default:
 valueRandom     Set and 'valueSize'; set on read for stats on size: Default: Not set.
 valueZipf       Set and 'valueSize' in zipf form: Default: Not set.
 period          Report every =
 multiGet        Batch gets together into groups of N. Only supported by randomRead. Default: disabled
 addColumns      Adds columns to scans/gets explicitly. Default: true
 replicas        Enable region replica testing. Defaults: .
 splitPolicy     Specify a custom RegionSplitPolicy for the table.
 randomSleep     Do a random and entered value. Defaults:
 columns         Columns to
 caching         Scan caching to use. Default:
 Note: -D properties will be applied to the conf used.
  For example:
   -Dmapreduce.output.fileoutputformat.compress=true
   -Dmapreduce.task.timeout=
Command:
 append          Append on each row; clients overlap on keyspace so some concurrent operations
 checkAndDelete  CheckAndDelete on each row; clients overlap on keyspace so some concurrent operations
 checkAndMutate  CheckAndMutate on each row; clients overlap on keyspace so some concurrent operations
 checkAndPut     CheckAndPut on each row; clients overlap on keyspace so some concurrent operations
 filterScan      Run scan test using a filter to find a specific row based on it's value (make sure to use --rows=20)
 increment       Increment on each row; clients overlap on keyspace so some concurrent operations
 randomRead      Run random read test
 randomSeekScan  Run random seek and scan test
 randomWrite     Run random write test
 scan            Run scan test (read every row)
 scanRange10     Run random seek scan with both start and stop row (max rows)
 scanRange100    Run random seek scan with both start and stop row (max rows)
 scanRange1000   Run random seek scan with both start and stop row (max rows)
 scanRange10000  Run random seek scan with both start and stop row (max rows)
 sequentialRead  Run sequential read test
 sequentialWrite Run sequential write test
Args:
 nclients        Integer. Required. Total number of clients (and HRegionServers) running. <= value <=
Examples:
 To run a single client doing the default 1M sequentialWrites:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite
 To run clients doing increments over ten rows:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows= --nomapred increment
Run the command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows= --presplit= sequentialWrite
Output:
……
// :: INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /, server: node3/
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid22929.hprof ...
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server node5/, sessionid =
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server node3/, sessionid =
// :: INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /, server: node4/
// :: INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /, server: node4/
// :: INFO zookeeper.ClientCnxn: Session establishment complete on server node5/, sessionid =
Heap dump file created [ bytes in 0.962 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 22929"...
Killed
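Note that the OutOfMemoryError above is thrown by the PE client JVM itself, not by the RegionServers. Besides shrinking the test parameters (the approach taken later in this post), one option is to give the client a larger heap before rerunning; HBASE_HEAPSIZE is read by the hbase launcher script, and the 4G below is an assumed size that should be adjusted to the client host:
export HBASE_HEAPSIZE=4G   # assumed size; older releases expect a plain number in MB, e.g. 4096
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred sequentialWrite 10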
To analyze memory usage, run the free command, which returns the following:
[root@node1 ~]# free
total used free shared buff/cache available
Mem: 65398900 13711168 26692112 115096 24995620 50890860
Swap: 29200380 0 29200380
Line 1, Mem:
total: total physical memory. 65398900 KB / 1024 = 63866 MB; 63866 MB / 1024 ≈ 62 GB, i.e., a machine with 64 GB of installed RAM.
total = used + free + buff/cache
available = free + the reclaimable portion of buff/cache
buff: buffer cache, used for write I/O
cache: page cache, used for read I/O
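As an aside, free can also print human-readable units and refresh at an interval (standard procps-ng options):
free -h        # sizes in GiB/MiB instead of KB
free -h -s 5   # refresh every 5 seconds to watch usage over time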
To see which processes are using memory, run ps aux; it returns the following (excerpt showing the processes related to the big-data stack):
[root@node1 ~]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root ? Ss Mar06 : /usr/lib/systemd/systemd --switched
apache ? S Apr08 : /usr/sbin/httpd -DFOREGROUND
ntp ? Ss Mar06 : /usr/sbin/ntpd -u ntp:ntp -g
clouder+ ? Ssl Mar11 : /usr/java/jdk1.7.0_67-cloudera/bin/
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
hbase ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Dp
hbase ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Dp
hbase ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
hbase ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
hbase ? S Mar20 : /bin/bash /usr/lib64/cmf/service/hb
flume ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Xm
flume ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
hive ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Xm
hive ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
oozie ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -D
spark ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -cp
oozie ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
spark ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
mysql ? Ss Mar06 : /bin/sh /usr/bin/mysqld_safe --base
mysql ? Sl Mar06 : /usr/libexec/mysqld --basedir=/usr
hue ? S Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
hue ? Sl Mar12 : python2. /opt/cloudera/parcels/CDH
hue ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
hue ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
hue ? Sl Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
hue ? Sl Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
hue ? Sl Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
hbase ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Dp
hbase ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Dp
hbase ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
hbase ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
hbase ? S Mar20 : /bin/bash /usr/lib64/cmf/service/hb
rpc ? Ss Mar13 : /sbin/rpcbind -w
hue ? Sl Mar13 : /usr/sbin/httpd -f /run/cloudera-sc
hdfs ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Dp
hdfs ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
hbase ? S : :
mapred ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Dp
mapred ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
yarn ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Dp
yarn ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
The RSS column shows each process's physical memory usage.
VSZ: virtual memory used by the process.
RSS: resident set size, i.e., physical memory used by the process.
Run the command:
ps aux --sort -rss
This sorts the processes by physical memory usage (descending RSS) and returns the following (excerpt):
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
clouder+ ? Ssl Mar11 : /usr/java/jdk1.7.0_67-cloudera/bin/
hive ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Xm
hdfs ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Dp
oozie ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -D
root pts/ Sl+ : : /usr/java/jdk1.8.0_131/bin/java -cp
hbase ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Dp
mapred ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Dp
yarn ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -Dp
flume ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Xm
hbase ? Sl Mar20 : /usr/java/jdk1.8.0_131/bin/java -Dp
spark ? Sl Mar12 : /usr/java/jdk1.8.0_131/bin/java -cp
gnome-i+ ? Sl Mar06 : gnome-shell --mode=initial-setup
mysql ? Sl Mar06 : /usr/libexec/mysqld --basedir=/usr
hue ? Sl Mar12 : python2. /opt/cloudera/parcels/CDH
gnome-i+ ? Sl Mar06 : /usr/libexec/gnome-initial-setup
root ? Ssl Mar07 : python2. /usr/lib64/cmf/agent/buil
root ? S<l Mar07 : /root/vpnserver/vpnserver execsvc
root ? Sl Mar07 : python2. /usr/lib64/cmf/agent/buil
root tty1 Ssl+ Mar06 : /usr/bin/Xorg : -background none -
polkitd ? Ssl Mar06 : /usr/lib/polkit-/polkitd --no-debu
gnome-i+ ? Sl Mar06 : /usr/libexec/gnome-settings-daemon
gnome-i+ ? Sl Mar06 : /usr/libexec/goa-daemon
root ? Ssl Mar06 : /usr/bin/python -Es /usr/sbin/tuned
root ? Ss Mar07 : /usr/lib64/cmf/agent/build/env/bin/
root ? Ssl Mar06 : /usr/sbin/libvirtd
geoclue ? Ssl Mar06 : /usr/libexec/geoclue -t
root ? S Mar07 : python2. /usr/lib64/cmf/agent/buil
root ? Ssl Mar06 : /usr/sbin/NetworkManager --no-daemo
hive ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
gnome-i+ ? Sl Mar06 : /usr/libexec/caribou
gnome-i+ ? Sl Mar06 : /usr/libexec/ibus-x11 --kill-daemon
gnome-i+ ? Ssl Mar06 : /usr/bin/gnome-session --autostart
root ? Ss Mar06 : /usr/lib/systemd/systemd-journald
colord ? Ssl Mar06 : /usr/libexec/colord
hue ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
mapred ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
hue ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
hbase ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
spark ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
flume ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
hdfs ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
oozie ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
hbase ? Sl Mar20 : python2. /usr/lib64/cmf/agent/buil
yarn ? Sl Mar12 : python2. /usr/lib64/cmf/agent/buil
gnome-i+ ? Sl Mar06 : ibus-daemon --xim --panel disable
root ? Ssl Mar06 : /usr/lib/udisks2/udisksd --no-debug
gnome-i+ ? Sl Mar06 : /usr/libexec/mission-control-
root ? Ss Mar11 : /usr/sbin/httpd -DFOREGROUND
root pts/ S+ : : python /opt/cloudera/parcels/CLABS_
root ? Ss : : sshd: root@notty
root ? Ssl Mar06 : /usr/libexec/packagekitd
root ? Ssl Mar06 : /usr/sbin/rsyslogd -n
gnome-i+ ? Sl Mar06 : /usr/libexec/goa-identity-service
root ? Ssl Mar06 : /usr/libexec/upowerd
root ? S<Ls Mar06 : /usr/sbin/iscsid
root ? Ss Mar06 : /usr/lib/systemd/systemd --switched
gnome-i+ ? Sl Mar06 : /usr/libexec/ibus-dconf
root ? Ss : : sshd: root@pts/
root ? Ss : : sshd: root@pts/
root ? Ss : : sshd: root@pts/
root ? Ss : : sshd: root@pts/
hue ? Sl Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
root ? Sl Mar06 : gdm-session-worker [pam/gdm-launch-
hue ? Sl Mar13 : /usr/sbin/httpd -f /run/cloudera-sc
hue ? Sl Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
root ? Ssl Mar06 : /usr/sbin/ModemManager
gnome-i+ ? Sl Mar06 : gnome-keyring-daemon --unlock
hue ? Sl Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
root ? Ss Mar06 : /usr/sbin/abrtd -d -s
hue ? S Mar12 : /usr/sbin/httpd -f /run/cloudera-sc
gnome-i+ ? Sl Mar06 : /usr/libexec/gvfsd
gnome-i+ ? Sl Mar06 : /usr/libexec/gvfs-afc-volume-monito
gnome-i+ ? Sl Mar06 : /usr/libexec/gvfs-udisks2-volume-mo
root ? Ssl Mar06 : /usr/lib64/realmd/realmd
root ? Ss Mar06 : /usr/bin/abrt-watch-log -F BUG: WAR
root ? Ss Mar06 : /usr/bin/abrt-watch-log -F Backtrac
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr08 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
apache ? S Apr07 : /usr/sbin/httpd -DFOREGROUND
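When only the largest consumers matter, the sorted listing can be cut down with head (standard coreutils usage):
ps aux --sort -rss | head -n 11   # header line plus the 10 processes with the largest RSS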
Run jps to list the JVM processes currently running:
[root@node1 ~]# jps
Bootstrap
SqlLine
HistoryServer
Main
RESTServer
HMaster
ResourceManager
RunJar
Application
Jps
JobHistoryServer
NameNode
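jps normally prints each process ID before the class name; the -l and -v flags (standard JDK options) add detail when the short names are ambiguous:
jps -l   # show the full package name of each main class
jps -v   # show the JVM arguments each process was started with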
Run jmap -heap 31262 to examine how the NameNode process is using the JVM heap; it returns the following:
[root@node1 ~]# jmap -heap 31262
Attaching to process ID 31262, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.131-b11
using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC
Heap Configuration:
MinHeapFreeRatio =
MaxHeapFreeRatio =
MaxHeapSize = (.0MB)
NewSize = (.3125MB)
MaxNewSize = (.3125MB)
OldSize = (.6875MB)
NewRatio =
SurvivorRatio =
MetaspaceSize = (.796875MB)
CompressedClassSpaceSize = (.0MB)
MaxMetaspaceSize = MB
G1HeapRegionSize = (.0MB)
Heap Usage:
New Generation (Eden + Survivor Space):
capacity = (.8125MB)
used = (.81820678710938MB)
free = (.9942932128906MB)
9.913490201890799% used
Eden Space:
capacity = (.3125MB)
used = (.02981567382812MB)
free = (.2826843261719MB)
10.988596731597243% used
From Space:
capacity = (.5MB)
used = (.78839111328125MB)
free = (.71160888671875MB)
1.3101766397664836% used
To Space:
capacity = (.5MB)
used = (.0MB)
free = (.5MB)
0.0% used
concurrent mark-sweep generation:
capacity = (.6875MB)
used = (.6791000366211MB)
free = (.008399963379MB)
2.661567829955683% used
interned Strings occupying bytes.
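A point-in-time snapshot like this can be complemented with jstat, another standard JDK tool, to watch GC activity over time; the PID below is the same NameNode process inspected above:
jstat -gcutil 31262 1000   # heap-space utilization and GC counts for PID 31262, sampled every 1000 ms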
Modify the parameters of the previous test to reduce the resources it consumes, and run:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows= --presplit= sequentialWrite
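For reference, a complete invocation of this form, with 1,000 rows per client and 10 client threads as described below (the presplit count of 10 is an assumed value chosen for illustration), looks like:
hbase org.apache.hadoop.hbase.PerformanceEvaluation \
  --nomapred --rows=1000 --presplit=10 sequentialWrite 10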
In this test, PE is set to non-MapReduce mode (--nomapred), i.e., it drives the load with threads. The command run is sequentialWrite, a sequential-write test; the trailing 10 means 10 threads perform the writes, and --rows=1000 means each thread writes 1,000 rows. presplit sets the number of pre-split regions for the table; always set a region count when benchmarking, otherwise all reads and writes land on a single region and performance suffers badly. All of PE's output goes straight to the log file, whose location depends on the HBase logging configuration. When the run completes, PE prints the latency profile of each thread separately. The following is the result of one of the threads:
// :: INFO hbase.PerformanceEvaluation: Latency (us) : mean= 99.9th= 99.99th= 99.999th=347.00
// :: INFO hbase.PerformanceEvaluation: Num measures (latency) :
// :: INFO hbase.PerformanceEvaluation: Mean = 56.74 Min = 8.00 Max = 347.00 StdDev = 84.51 50th = 25.00 75th = 35.75 95th = 283.00 99th = 305.98 99.9th = 346.99 99.99th = 347.00 99.999th = 347.00
// :: INFO hbase.PerformanceEvaluation: ValueSize (bytes) : mean= 99.9th= 99.99th= 99.999th=0.00
// :: INFO hbase.PerformanceEvaluation: Num measures (ValueSize):
// :: INFO hbase.PerformanceEvaluation: Mean = 0.00 Min = 0.00 Max = 0.00 StdDev = 0.00 50th = 0.00 75th = 0.00 95th = 0.00 99th = 0.00 99.9th = 0.00 99.99th = 0.00 99.999th = 0.00
// :: INFO hbase.PerformanceEvaluation: Test : SequentialWriteTest, Thread : TestClient-
// :: INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1696fc9820c336b
It also prints the following:
// :: INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1696fc9820c3368
// :: INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.94 MB/s)
// :: INFO hbase.PerformanceEvaluation: Finished TestClient- rows
// :: INFO zookeeper.ZooKeeper: Session: 0x1696fc9820c3368 closed
// :: INFO zookeeper.ClientCnxn: EventThread shut down
// :: INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (3.03 MB/s)
// :: INFO hbase.PerformanceEvaluation: Finished TestClient- rows
// :: INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.97 MB/s)
// :: INFO hbase.PerformanceEvaluation: Finished TestClient- rows
// :: INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.93 MB/s)
// :: INFO hbase.PerformanceEvaluation: Finished TestClient- rows
// :: INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.94 MB/s)
// :: INFO hbase.PerformanceEvaluation: Finished TestClient- rows
// :: INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest rows (2.87 MB/s)
// :: INFO hbase.PerformanceEvaluation: Finished TestClient- rows
// :: INFO hbase.PerformanceEvaluation: [SequentialWriteTest] Summary of timings (ms): [, , , , , , , , , ]
// :: INFO hbase.PerformanceEvaluation: [SequentialWriteTest] Min: 314ms Max: 342ms Avg: 328ms
// :: INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3696fc9821c31b9
// :: INFO zookeeper.ZooKeeper: Session: 0x3696fc9821c31b9 closed
// :: INFO zookeeper.ClientCnxn: EventThread shut down
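These figures are internally consistent: with PE's default value size of 1,000 bytes per row (an assumption, since the run did not override valueSize), each thread writes roughly 1,000 × 1,000 B ≈ 0.95 MB, and 0.95 MB over the 328 ms average runtime is about 2.9 MB/s, matching the 2.87–3.03 MB/s reported per thread.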