Method 1: Using the namespaceID

1. On the namenode, empty the directory specified by dfs.name.dir (here, the name directory) to simulate a failure:

 [hadoop@node1 name]$ ls
current image in_use.lock
[hadoop@node1 name]$ rm -rf *
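
To be able to restore the original state without going through the whole recovery procedure, the directory could be backed up before the rm above; a minimal sketch (name.bak is an arbitrary location, not part of the original session):

 [hadoop@node1 name]$ cd .. && cp -a name name.bak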

2. After stopping and restarting the cluster, the namenode daemon has disappeared:

 [hadoop@node1 name]$ stop-all.sh
stopping jobtracker
192.168.1.152: stopping tasktracker
192.168.1.153: stopping tasktracker
stopping namenode
192.168.1.152: stopping datanode
192.168.1.153: stopping datanode
192.168.1.152: stopping secondarynamenode
[hadoop@node1 name]$ start-all.sh
starting namenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
192.168.1.152: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
192.168.1.153: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
192.168.1.152: starting secondarynamenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node2.out
starting jobtracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
192.168.1.152: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
192.168.1.153: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
[hadoop@node1 name]$ jps
31942 Jps
31872 JobTracker

And the namenode log reports an error:

 2013-11-14 06:19:59,172 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.1.151
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2013-11-14 06:19:59,395 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2013-11-14 06:19:59,400 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: node1.com/192.168.1.151:9000
2013-11-14 06:19:59,403 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2013-11-14 06:19:59,407 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2013-11-14 06:19:59,557 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hadoop,hadoop
2013-11-14 06:19:59,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-11-14 06:19:59,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-11-14 06:19:59,568 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2013-11-14 06:19:59,569 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2013-11-14 06:19:59,654 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2013-11-14 06:19:59,658 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-11-14 06:19:59,663 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2013-11-14 06:19:59,664 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.1.151
************************************************************/

3. Attempting to list files on HDFS fails:

 [hadoop@node1 name]$ hadoop dfs -ls /user/hive/warehouse
13/11/14 06:21:06 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 0 time(s).
13/11/14 06:21:07 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 1 time(s).
13/11/14 06:21:08 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 2 time(s).
13/11/14 06:21:09 INFO ipc.Client: Retrying connect to server: node1/192.168.1.151:9000. Already tried 3 time(s).

4. Stop the cluster and format the namenode:

 [hadoop@node1 name]$ stop-all.sh
stopping jobtracker
192.168.1.152: stopping tasktracker
192.168.1.153: stopping tasktracker
no namenode to stop
192.168.1.152: stopping datanode
192.168.1.153: stopping datanode
192.168.1.152: stopping secondarynamenode
[hadoop@node1 name]$ hadoop namenode -format
13/11/14 06:21:37 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.1.151
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /app/user/hdfs/name ? (Y or N) Y
13/11/14 06:21:39 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/11/14 06:21:39 INFO namenode.FSNamesystem: supergroup=supergroup
13/11/14 06:21:39 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/11/14 06:21:39 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/11/14 06:21:39 INFO common.Storage: Storage directory /app/user/hdfs/name has been successfully formatted.
13/11/14 06:21:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.1.151
************************************************************/

5. Formatting assigned the namenode a brand-new namespaceID, which no longer matches the one recorded on the datanodes. Retrieve the pre-format namespaceID from any datanode's VERSION file and change the namenode's namespaceID to match:

 [hadoop@node2 current]$ cat VERSION
#Thu Nov 14 02:27:10 CST 2013
namespaceID=2062292356
storageID=DS-107813142-192.168.1.152-50010-1379339943465
cTime=0
storageType=DATA_NODE
layoutVersion=-18
[hadoop@node2 current]$ pwd
/app/user/hdfs/data/current
---- edit the namenode's namespaceID ----
[hadoop@node1 current]$ cat VERSION
#Thu Nov 14 06:29:31 CST 2013
namespaceID=2062292356
cTime=0
storageType=NAME_NODE
layoutVersion=-18
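
The edit can be made with any text editor; as a one-line sketch, using the path and ID shown above:

 [hadoop@node1 current]$ sed -i 's/^namespaceID=.*/namespaceID=2062292356/' VERSION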

6. Delete the newly formatted namenode's fsimage file:

 [hadoop@node1 current]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop 4 Nov 14 06:21 edits
-rw-rw-r-- 1 hadoop hadoop 96 Nov 14 06:21 fsimage
-rw-rw-r-- 1 hadoop hadoop 8 Nov 14 06:21 fstime
-rw-rw-r-- 1 hadoop hadoop 101 Nov 14 06:22 VERSION
[hadoop@node1 current]$ rm fsimage

7. Copy the fsimage from the secondarynamenode into the namenode's current directory:

[hadoop@node2 current]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop 4 Nov 14 05:38 edits
-rw-rw-r-- 1 hadoop hadoop 2410 Nov 14 05:38 fsimage
-rw-rw-r-- 1 hadoop hadoop 8 Nov 14 05:38 fstime
-rw-rw-r-- 1 hadoop hadoop 101 Nov 14 05:38 VERSION
[hadoop@node2 current]$ scp fsimage node1:/app/user/hdfs/name/current
The authenticity of host 'node1 (192.168.1.151)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.1.151' (RSA) to the list of known hosts.
fsimage 100% 2410 2.4KB/s 00:00

8. Restart the cluster:

[hadoop@node1 current]$ start-all.sh
starting namenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
192.168.1.152: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
192.168.1.153: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
192.168.1.152: starting secondarynamenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node2.out
starting jobtracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
192.168.1.152: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
192.168.1.153: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
[hadoop@node1 current]$ jps
32486 Jps
32419 JobTracker
32271 NameNode

9. Verify data integrity:

 [hadoop@node1 current]$ hadoop dfs -ls /user/hive/warehouse
Found 8 items
drwxr-xr-x - hadoop supergroup 0 2013-10-17 16:18 /user/hive/warehouse/echo
drwxr-xr-x - hadoop supergroup 0 2013-10-28 13:48 /user/hive/warehouse/jack
drwxr-xr-x - hadoop supergroup 0 2013-09-18 15:54 /user/hive/warehouse/table4
drwxr-xr-x - hadoop supergroup 0 2013-09-18 15:53 /user/hive/warehouse/table5
drwxr-xr-x - hadoop supergroup 0 2013-09-18 15:48 /user/hive/warehouse/test
drwxr-xr-x - hadoop supergroup 0 2013-10-25 14:50 /user/hive/warehouse/test1
drwxr-xr-x - hadoop supergroup 0 2013-10-25 14:52 /user/hive/warehouse/test2
drwxr-xr-x - hadoop supergroup 0 2013-10-25 14:30 /user/hive/warehouse/test3

[hadoop@node3 conf]$ hive
Logging initialized using configuration in jar:file:/app/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_7451@node3_201311111325_424288589.txt
hive> show tables;
OK
echo
jack
table4
table5
test
test1
test2
test3
Time taken: 27.589 seconds, Fetched: 8 row(s)
hive> select * from table4;
OK
NULL NULL NULL
1 1 5
2 4 5
3 4 5
4 5 6
5 6 7
6 1 5
7 5 6
8 3 6
NULL NULL NULL
Time taken: 2.124 seconds, Fetched: 10 row(s)

None of the earlier data has been lost.
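
Besides listing directories and querying Hive, block-level health can also be checked with fsck once the namenode has left safe mode, for example:

 [hadoop@node1 current]$ hadoop fsck / -files -blocks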

Method 2: Using hadoop namenode -importCheckpoint

1. Delete the name directory:

 [hadoop@node1 hdfs]$ rm -rf name

2. Stop the cluster and copy the namesecondary directory from the secondarynamenode to dfs.name.dir:

[hadoop@node2 hdfs]$ scp -r namesecondary node1:/app/user/hdfs/
fsimage 100% 157 0.2KB/s 00:00
fstime 100% 8 0.0KB/s 00:00
fsimage 100% 2410 2.4KB/s 00:00
VERSION 100% 101 0.1KB/s 00:00
edits 100% 4 0.0KB/s 00:00
fstime 100% 8 0.0KB/s 00:00
fsimage 100% 2410 2.4KB/s 00:00
VERSION 100% 101 0.1KB/s 00:00
edits 100% 4 0.0KB/s 00:00
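
Note that -importCheckpoint loads the checkpoint from the directory given by fs.checkpoint.dir, so the destination of the copy above has to match that setting. A sketch of the property as it would appear in conf/core-site.xml, assuming /app/user/hdfs/namesecondary is the configured value (which the paths in this session suggest):

 <property>
   <name>fs.checkpoint.dir</name>
   <value>/app/user/hdfs/namesecondary</value>
 </property>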

3. On the namenode, run hadoop namenode -importCheckpoint. This command starts the namenode in the foreground; once the checkpoint has been imported, the process is stopped (the SHUTDOWN_MSG below) and the cluster is started normally with start-all.sh:

[hadoop@node1 hdfs]$ hadoop namenode -importCheckpoint
13/11/14 07:24:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.1.151
STARTUP_MSG: args = [-importCheckpoint]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
13/11/14 07:24:20 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
13/11/14 07:24:20 INFO namenode.NameNode: Namenode up at: node1.com/192.168.1.151:9000
13/11/14 07:24:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
13/11/14 07:24:20 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
13/11/14 07:24:21 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/11/14 07:24:21 INFO namenode.FSNamesystem: supergroup=supergroup
13/11/14 07:24:21 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/11/14 07:24:21 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
13/11/14 07:24:21 INFO namenode.FSNamesystem: Registered FSNamesystemStatusMBean
13/11/14 07:24:21 INFO common.Storage: Storage directory /app/user/hdfs/name is not formatted.
13/11/14 07:24:21 INFO common.Storage: Formatting ...
13/11/14 07:24:21 INFO common.Storage: Number of files = 26
13/11/14 07:24:21 INFO common.Storage: Number of files under construction = 0
13/11/14 07:24:21 INFO common.Storage: Image file of size 2410 loaded in 0 seconds.
13/11/14 07:24:21 INFO common.Storage: Edits file /app/user/hdfs/namesecondary/current/edits of size 4 edits # 0 loaded in 0 seconds.
13/11/14 07:24:21 INFO common.Storage: Image file of size 2410 saved in 0 seconds.
13/11/14 07:24:21 INFO common.Storage: Image file of size 2410 saved in 0 seconds.
13/11/14 07:24:21 INFO namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
13/11/14 07:24:21 INFO namenode.FSNamesystem: Finished loading FSImage in 252 msecs
13/11/14 07:24:21 INFO hdfs.StateChange: STATE* Safe mode ON.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990. Safe mode will be turned off automatically.
13/11/14 07:24:21 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
13/11/14 07:24:21 INFO http.HttpServer: Port returned by webServer.getConnectors()[].getLocalPort() before open() is -1. Opening the listener on 50070
13/11/14 07:24:21 INFO http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[].getLocalPort() returned 50070
13/11/14 07:24:21 INFO http.HttpServer: Jetty bound to port 50070
13/11/14 07:24:21 INFO mortbay.log: jetty-6.1.14
13/11/14 07:24:21 INFO mortbay.log: Started SelectChannelConnector@node1.com:50070
13/11/14 07:24:21 INFO namenode.NameNode: Web-server up at: node1.com:50070
13/11/14 07:24:21 INFO ipc.Server: IPC Server Responder: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server listener on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 0 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 1 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 2 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 3 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 4 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 5 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 6 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 9 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 7 on 9000: starting
13/11/14 07:24:21 INFO ipc.Server: IPC Server handler 8 on 9000: starting
13/11/14 07:37:05 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.1.151
************************************************************/
[hadoop@node1 current]$ start-all.sh
starting namenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
192.168.1.152: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
192.168.1.153: starting datanode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
192.168.1.152: starting secondarynamenode, logging to /app/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node2.out
starting jobtracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
192.168.1.152: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
192.168.1.153: starting tasktracker, logging to /app/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
[hadoop@node1 current]$ jps
1027 JobTracker
1121 Jps
879 NameNode

4. Verify data integrity:

 [hadoop@node3 conf]$ hive

 Logging initialized using configuration in jar:file:/app/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/tmp/hadoop/hive_job_log_hadoop_8383@node3_201311111443_2018635710.txt
hive> select * from table4;
OK
NULL NULL NULL
1 1 5
2 4 5
3 4 5
4 5 6
5 6 7
6 1 5
7 5 6
8 3 6
NULL NULL NULL
Time taken: 3.081 seconds, Fetched: 10 row(s)

Summary:

Note: with a recovered namenode, whatever was written between the secondarynamenode's most recent checkpoint and the failure is lost, so the value of fs.checkpoint.period must be weighed carefully in practice. The secondarynamenode's contents should also be backed up regularly, because the secondarynamenode is itself a single point of failure.
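
For reference, a sketch of how the checkpoint interval would be set in conf/core-site.xml (1800 seconds is only an example value; the default is 3600):

 <property>
   <name>fs.checkpoint.period</name>
   <value>1800</value>
 </property>

And a hypothetical cron entry on the secondarynamenode for the periodic backup just mentioned (the /backup destination is assumed, not from the original setup):

 0 2 * * * tar czf /backup/namesecondary-$(date +\%F).tar.gz -C /app/user/hdfs namesecondary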

Additional notes: when a new node is used to recover the namenode, keep the following in mind.

1. The new node's Linux environment, directory layout, environment variables, and so on must be configured exactly like the original namenode's, including every file under the conf directory.

2. The new namenode's hostname must stay the same as the original's. If the hostname does change, the hosts files on all datanodes and on the secondarynamenode have to be updated in bulk, and the following settings reconfigured (a sketch follows this list):

fs.default.name in core-site.xml;

dfs.http.address in hdfs-site.xml (on the secondarynamenode);

mapred.job.tracker in mapred-site.xml (relevant when the jobtracker runs on the same machine as the namenode, which it usually does).
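
As an illustration only (the hostname node1new is hypothetical, and the jobtracker port is assumed rather than taken from this cluster), the affected values would end up looking like this:

 <!-- core-site.xml, on all nodes -->
 <property><name>fs.default.name</name><value>hdfs://node1new:9000</value></property>

 <!-- hdfs-site.xml, on the secondarynamenode -->
 <property><name>dfs.http.address</name><value>node1new:50070</value></property>

 <!-- mapred-site.xml, on all nodes; 9001 is an assumed port, keep the existing one -->
 <property><name>mapred.job.tracker</name><value>node1new:9001</value></property>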
