Method 1: Using the namespaceID

1. On the namenode, wipe the contents of the directory specified by dfs.name.dir (here, the name directory) to simulate a failure (a backup sketch follows the transcript):

 [hadoop@node1 name]$ ls
current image in_use.lock
[hadoop@node1 name]$ rm -rf *
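
Before a drill like this, it helps to archive the metadata directory so the simulation can be undone afterwards. A minimal sketch, assuming dfs.name.dir is /app/user/hdfs/name as on this cluster (the /tmp target path is only an example):

 [hadoop@node1 name]$ tar czf /tmp/name-backup-$(date +%Y%m%d).tar.gz -C /app/user/hdfs name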

2. Stop the cluster and start it again; the namenode daemon is gone.

 [hadoop@node1 name]$ stop-all.sh
stopping jobtracker
192.168.1.152: stopping tasktracker
192.168.1.153: stopping tasktracker
stopping namenode
192.168.1.152: stopping datanode
192.168.1.153: stopping datanode
192.168.1.152: stopping secondarynamenode
[hadoop@node1 name]$ start-all.sh
starting namenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
192.168.1.152: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
192.168.1.153: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
192.168.1.152: starting secondarynamenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node2.out
starting jobtracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
192.168.1.152: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
192.168.1.153: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
[hadoop@node1 name]$ jps
31942 Jps
31872 JobTracker

The namenode log also shows the error:

 2013-11-14 06:19:59,172 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.1.151
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2013-11-14 06:19:59,395 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2013-11-14 06:19:59,400 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: node1.com/192.168.1.151:9000
2013-11-14 06:19:59,403 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2013-11-14 06:19:59,407 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2013-11-14 06:19:59,557 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2013-11-14 06:19:59,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-11-14 06:19:59,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-11-14 06:19:59,568 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2013-11-14 06:19:59,569 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2013-11-14 06:19:59,654 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2013-11-14 06:19:59,658 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-11-14 06:19:59,663 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2013-11-14 06:19:59,664 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.1.151
************************************************************/

3. Listing files on HDFS fails:

 [hadoop@node1 name]$ hadoop dfs -ls /user/hive/warehouse
13/11/14 06:21:06 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 0 time(s).
13/11/14 06:21:07 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 1 time(s).
13/11/14 06:21:08 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 2 time(s).
13/11/14 06:21:09 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 3 time(s).

4. Stop the cluster and format the namenode:

 [hadoop@node1 name]$ stop-all.sh
stopping jobtracker
192.168.1.152: stopping tasktracker
192.168.1.153: stopping tasktracker
no namenode to stop
192.168.1.152: stopping datanode
192.168.1.153: stopping datanode
192.168.1.152: stopping secondarynamenode
[hadoop@node1 name]$ hadoop namenode -format
13/11/14 06:21:37 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.1.151
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /app/user/hdfs/name ? (Y or N) Y
13/11/14 06:21:39 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/11/14 06:21:39 INFO namenode.FSNamesystem: supergroup=supergroup
13/11/14 06:21:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/11/14 06:21:39 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/11/14 06:21:39 INFO common.Storage: Storage directory /app/user/hdfs/name has been successfully formatted.
13/11/14 06:21:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.1.151
************************************************************/

5. Read the pre-format namespaceID from any datanode's VERSION file and change the namenode's namespaceID to match it (a non-interactive way to make the edit is sketched after the listing):

 [hadoop@node2 current]$ cat VERSION
#Thu Nov 14 02:27:10 CST 2013
namespaceID=2062292356
storageID=DS-107813142-192.168.1.152-50010-1379339943465
cTime=0
storageType=DATA_NODE
layoutVersion=-18
[hadoop@node2 current]$ pwd
/app/user/hdfs/data/current
---- edit the namenode's namespaceID ----
[hadoop@node1 current]$ cat VERSION
#Thu Nov 14 06:29:31 CST 2013
namespaceID=2062292356
cTime=0
storageType=NAME_NODE
layoutVersion=-18
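
Editing VERSION in a text editor is fine; for a scripted recovery the same change can be made non-interactively. A sketch, using the namespaceID read from the datanode above and assuming dfs.name.dir is /app/user/hdfs/name:

 [hadoop@node1 current]$ sed -i 's/^namespaceID=.*/namespaceID=2062292356/' /app/user/hdfs/name/current/VERSION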

6. Delete the fsimage file created by the fresh format:

 [hadoop@node1 current]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop 4 Nov 14 06:21 edits
-rw-rw-r-- 1 hadoop hadoop 96 Nov 14 06:21 fsimage
-rw-rw-r-- 1 hadoop hadoop 8 Nov 14 06:21 fstime
-rw-rw-r-- 1 hadoop hadoop 101 Nov 14 06:22 VERSION
[hadoop@node1 current]$ rm fsimage

7. Copy the fsimage from the secondarynamenode into the namenode's current directory:

[hadoop@node2 current]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop 4 Nov 14 05:38 edits
-rw-rw-r-- 1 hadoop hadoop 2410 Nov 14 05:38 fsimage
-rw-rw-r-- 1 hadoop hadoop 8 Nov 14 05:38 fstime
-rw-rw-r-- 1 hadoop hadoop 101 Nov 14 05:38 VERSION
[hadoop@node2 current]$ scp fsimage node1:/app/user/hdfs/name/current
The authenticity of host 'node1 (192.168.1.151)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.1.151' (RSA) to the list of known hosts.
fsimage 100% 2410 2.4KB/s 00:00

8. Restart the cluster:

[hadoop@node1 current]$ start-all.sh
starting namenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
192.168.1.152: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
192.168.1.153: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
192.168.1.152: starting secondarynamenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node2.out
starting jobtracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
192.168.1.152: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
192.168.1.153: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
[hadoop@node1 current]$ jps
32486 Jps
32419 JobTracker
32271 NameNode

9. Verify data integrity:

 [hadoop@node1 current]$ hadoop dfs -ls /user/hive/warehouse
Found 8 items
drwxr-xr-x - hadoop supergroup 0 2013-10-17 16:18 /user/hive/warehouse/echo
drwxr-xr-x - hadoop supergroup 0 2013-10-28 13:48 /user/hive/warehouse/jack
drwxr-xr-x - hadoop supergroup 0 2013-09-18 15:54 /user/hive/warehouse/table4
drwxr-xr-x - hadoop supergroup 0 2013-09-18 15:53 /user/hive/warehouse/table5
drwxr-xr-x - hadoop supergroup 0 2013-09-18 15:48 /user/hive/warehouse/test
drwxr-xr-x - hadoop supergroup 0 2013-10-25 14:50 /user/hive/warehouse/test1
drwxr-xr-x - hadoop supergroup 0 2013-10-25 14:52 /user/hive/warehouse/test2
drwxr-xr-x - hadoop supergroup 0 2013-10-25 14:30 /user/hive/warehouse/test3
[hadoop@node3 conf]$ hive
Logging initialized using configuration in jar:file:/app/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_7451@node3_201311111325_424288589.txt
hive> show tables;
OK
echo
jack
table4
table5
test
test1
test2
test3
Time taken: 27.589 seconds, Fetched: 8 row(s)
hive> select * from table4;
OK
NULL NULL NULL
1 1 5
2 4 5
3 4 5
4 5 6
5 6 7
6 1 5
7 5 6
8 3 6
NULL NULL NULL
Time taken: 2.124 seconds, Fetched: 10 row(s)

None of the pre-failure data has been lost.

Method 2: Using hadoop namenode -importCheckpoint

1. Delete the name directory:

 [hadoop@node1 hdfs]$ rm -rf name

2. Stop the cluster and copy the namesecondary directory from the secondarynamenode to the namenode, next to dfs.name.dir (a config sanity check is sketched after the transcript):

[hadoop@node2 hdfs]$ scp -r namesecondary node1:/app/user/hdfs/
fsimage 100% 157 0.2KB/s 00:00
fstime 100% 8 0.0KB/s 00:00
fsimage 100% 2410 2.4KB/s 00:00
VERSION 100% 101 0.1KB/s 00:00
edits 100% 4 0.0KB/s 00:00
fstime 100% 8 0.0KB/s 00:00
fsimage 100% 2410 2.4KB/s 00:00
VERSION 100% 101 0.1KB/s 00:00
edits 100% 4 0.0KB/s 00:00
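
hadoop namenode -importCheckpoint loads the checkpoint from fs.checkpoint.dir, so the copied directory must land wherever that property points. A quick sanity check, assuming the conf directory is /app/hadoop/conf and that the property is set explicitly (the value shown is what this cluster would need):

 [hadoop@node1 hdfs]$ grep -A1 fs.checkpoint.dir /app/hadoop/conf/core-site.xml
 <name>fs.checkpoint.dir</name>
 <value>/app/user/hdfs/namesecondary</value>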

3. On the namenode, run hadoop namenode -importCheckpoint. This starts a namenode in the foreground; once the image has been imported and saved, stop it (producing the SHUTDOWN_MSG below) and bring the cluster up normally:

[hadoop@node1 hdfs]$ hadoop namenode -importCheckpoint
13/11/14 07:24:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.1.151
STARTUP_MSG: args = [-importCheckpoint]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
13/11/14 07:24:20 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
13/11/14 07:24:20 INFO namenode.NameNode: Namenode up at: node1.com/192.168.1.151:9000
13/11/14 07:24:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
13/11/14 07:24:20 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
13/11/14 07:24:21 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/11/14 07:24:21 INFO namenode.FSNamesystem: supergroup=supergroup
13/11/14 07:24:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/11/14 07:24:21 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
13/11/14 07:24:21 INFO namenode.FSNamesystem: Registered FSNamesystemStatusMBean
13/11/14 07:24:21 INFO common.Storage: Storage directory /app/user/hdfs/name is not formatted.
13/11/14 07:24:21 INFO common.Storage: Formatting ...
13/11/14 07:24:21 INFO common.Storage: Number of files = 26
13/11/14 07:24:21 INFO common.Storage: Number of files under construction = 0
13/11/14 07:24:21 INFO common.Storage: Image file of size 2410 loaded in 0 seconds.
13/11/14 07:24:21 INFO common.Storage: Edits file /app/user/hdfs/namesecondary/current/edits of size 4 edits # 0 loaded in 0 seconds.
13/11/14 07:24:21 INFO common.Storage: Image file of size 2410 saved in 0 seconds.
13/11/14 07:24:21 INFO common.Storage: Image file of size 2410 saved in 0 seconds.
13/11/14 07:24:21 INFO namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
13/11/14 07:24:21 INFO namenode.FSNamesystem: Finished loading FSImage in 252 msecs
13/11/14 07:24:21 INFO hdfs.StateChange: STATE* Safe mode ON.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
13/11/14 07:24:21 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/11/14 07:24:21 INFO http.HttpServer: Port returned by webServer.getConnectors()[].getLocalPort() before open() is -1. Opening the listener on 50070
13/11/14 07:24:21 INFO http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[].getLocalPort() returned 50070
13/11/14 07:24:21 INFO http.HttpServer: Jetty bound to port 50070
13/11/14 07:24:21 INFO mortbay.log: jetty-6.1.14
13/11/14 07:24:21 INFO mortbay.log: Started SelectChannelConnector@node1.com:50070
13/11/14 07:24:21 INFO namenode.NameNode: Web-server up at: node1.com:50070
13/11/14 07:24:21 INFO ipc.Server: IPC Server Responder: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server listener on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 0 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 1 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 2 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 3 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 4 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 5 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 6 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 9 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 7 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 8 on 9000: starting
13/11/14 07:37:05 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.1.151
************************************************************/
[hadoop@node1 current]$ start-all.sh
starting namenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
192.168.1.152: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
192.168.1.153: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
192.168.1.152: starting secondarynamenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node2.out
starting jobtracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
192.168.1.152: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
192.168.1.153: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
[hadoop@node1 current]$ jps
1027 JobTracker
1121 Jps
879 NameNode

4. Verify data integrity:

 [hadoop@node3 conf]$ hive

 Logging initialized using configuration in jar:file:/app/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_8383@node3_201311111443_2018635710.txt
hive> select * from table4;
OK
NULL NULL NULL
1 1 5
2 4 5
3 4 5
4 5 6
5 6 7
6 1 5
7 5 6
8 3 6
NULL NULL NULL
Time taken: 3.081 seconds, Fetched: 10 row(s)

Summary:

Note: whatever changed between the secondarynamenode's last checkpoint and the moment of failure is lost in the recovered namenode, so weigh the fs.checkpoint.period value carefully in production. Also back up the secondarynamenode's contents regularly, because the secondarynamenode is itself a single point of failure.
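
A shorter checkpoint period narrows the loss window at the cost of more frequent checkpoints. A minimal sketch of both ideas; the 600-second value and the /backup path are examples only (3600 seconds is the 0.20 default):

 <property>
   <name>fs.checkpoint.period</name>
   <value>600</value>
 </property>

and, on the secondarynamenode, an hourly cron entry such as:

 0 * * * * tar czf /backup/namesecondary-$(date +\%Y\%m\%d\%H).tar.gz -C /app/user/hdfs namesecondary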

Additional notes: if a brand-new node is used to recover the namenode, keep the following in mind.

1. The new node's Linux environment, directory layout, environment variables, and so on must match the original namenode exactly, including every configuration file under the conf directory.

2. The new namenode's hostname should stay the same as the original's. If the hostname does change, update the hosts files on all datanodes and the secondarynamenode in bulk, and reconfigure the following settings: fs.default.name in core-site.xml;

dfs.http.address in hdfs-site.xml (on the secondarynamenode);

mapred.job.tracker in mapred-site.xml (the jobtracker usually runs on the same machine as the namenode). The relevant fragments are sketched below.
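
For reference, a sketch of the three fragments as they might look for this cluster; ports 9000 and 50070 appear in the logs above, while 9001 for the jobtracker is an assumed, typical value:

 <!-- core-site.xml, on every node -->
 <property>
   <name>fs.default.name</name>
   <value>hdfs://node1:9000</value>
 </property>

 <!-- hdfs-site.xml, on the secondarynamenode -->
 <property>
   <name>dfs.http.address</name>
   <value>node1:50070</value>
 </property>

 <!-- mapred-site.xml, on every node -->
 <property>
   <name>mapred.job.tracker</name>
   <value>node1:9001</value>
 </property>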
