After finishing the fully distributed Hadoop deployment, I ran start-dfs.sh to bring up the NameNode, the DataNodes and the SecondaryNameNode. Running jps on the master showed the NameNode and SecondaryNameNode up, and jps on the slave nodes showed the DataNode up as well. (I was quite pleased at this point: my first fully distributed deployment had apparently worked on the first try. Or so it seemed...)

1 Discovering the problem

  I prepared the input file for the WordCount test case and went to upload it with hadoop fs -put file /, and to my surprise the command failed, even though I had run it countless times in single-node pseudo-distributed mode >_<
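
  For context, the full test run I had in mind was roughly the following; the local file name and the examples jar path are assumptions here and vary by installation:

echo "hello hadoop hello hdfs" > test                 # sample input file (name assumed)
hadoop fs -mkdir -p /input
hadoop fs -put test /input                            # the upload step that failed
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
    wordcount /input /output

  The put command aborted with the exception below: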

14/10/13 14:40:25 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2702)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
put: File /input/test._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

  Looking closely at the error: could only be replicated to 0 nodes. Zero nodes, somehow.

  Opening http://master:50070 to check node status in the web UI showed Live Nodes as 0 and Dead Nodes as 0 as well, which meant that although the DataNode on each slave had started normally, it was failing to communicate with the master. Checking the DataNode log on a slave turned up repeated failures to reach the master:

2014-10-13 14:04:46,379 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-10-13 14:04:46,496 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-10-13 14:04:46,531 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-10-13 14:04:46,856 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2014-10-13 14:04:46,905 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to master/192.168.1.132:9000 starting to offer service
2014-10-13 14:04:46,970 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-10-13 14:04:46,967 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-10-13 14:04:51,373 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.132:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-10-13 14:04:52,375 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.132:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-10-13 14:04:53,376 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.132:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...
2014-10-13 14:05:00,394 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.132:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-10-13 14:05:00,396 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: master/192.168.1.132:9000
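
  Incidentally, the node counts do not have to be read from the web UI, and the log does not have to be hunted down by hand; two quick checks (the log directory is assumed to be the default $HADOOP_HOME/logs):

hdfs dfsadmin -report                                  # on the master: capacity plus live/dead DataNode counts
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log   # on a slave: the DataNode log excerpted above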

2 Analyzing the problem

  The log shows that after the DataNode on the slave started successfully, it tried to report to the master, making 10 attempts per round; every attempt failed, and then it simply started another round.
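
  The 10-attempts-per-round behaviour comes from the IPC client's retry policy, which is configurable; the effective values can be read back with hdfs getconf (the property names are those in core-default.xml for the Hadoop 2.x line; treat them as an assumption and verify against your version):

hdfs getconf -confKey ipc.client.connect.max.retries      # default 10, matching maxRetries=10 in the log
hdfs getconf -confKey ipc.client.connect.retry.interval   # default 1000 ms, matching sleepTime=1000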

  I re-checked every piece of configuration: the /etc/hosts file, the slaves file, passwordless ssh login, the environment variables in .bashrc, core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml. Everything checked out; all of the configuration was correct.
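
  A few of those checks can be compressed into one-liners (paths assume a default layout under $HADOOP_HOME, and the expected fs.defaultFS value matches the master/192.168.1.132:9000 address seen in the logs):

grep -E 'master|slave' /etc/hosts                               # hostnames must map to the right IPs
cat $HADOOP_HOME/etc/hadoop/slaves                              # one slave hostname per line
grep -A1 'fs.defaultFS' $HADOOP_HOME/etc/hadoop/core-site.xml   # should read hdfs://master:9000
ssh slave2 'echo ssh ok'                                        # passwordless login from the master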

  Thinking it through again: the master's start request had reached the slaves successfully, since the slaves did start their DataNodes, yet the slaves' heartbeats back to the master were failing. A round of ping tests confirmed that master and slave could reach each other in both directions, so basic connectivity between the two nodes was fine.
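
  One caveat: ping only proves ICMP reachability, not that the NameNode RPC port can actually be reached. A TCP-level probe from the slave would have pointed at the culprit straight away (telnet or nc is assumed to be installed):

telnet master 9000        # refused or hangs if a firewall blocks the port
nc -zv master 9000        # equivalent check where nc is available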

  That left only one possibility: the firewall. As Sherlock Holmes said, once you eliminate the impossible, whatever remains must be the answer.

3 Fixing the problem

  As root, check whether the firewall on the master is running: /etc/init.d/iptables status

Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
2 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0
3 ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
4 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
5 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num target prot opt source destination
1 REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num target prot opt source destination

  The INPUT chain above accepts established connections, ICMP, one blanket rule (normally bound to the loopback interface) and new TCP connections to port 22; everything else, including port 9000, falls through to the final REJECT rule, which explains why ping and ssh worked while the NameNode port was refused. After shutting the firewall down with /etc/init.d/iptables stop, the DataNode log on the slave changed to this:

2014-10-13 14:56:23,646 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
2014-10-13 14:56:23,662 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/lxh/hadoop/hdfs/data/in_use.lock acquired by nodename 2737@slave2
2014-10-13 14:56:23,790 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-480719120-192.168.1.132-1413164135404
2014-10-13 14:56:23,791 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2014-10-13 14:56:23,791 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
2014-10-13 14:56:23,793 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=161264755;bpid=BP-480719120-192.168.1.132-1413164135404;lv=-55;nsInfo=lv=-56;cid=CID-84fef657-142a-46c4-8329-3df7025286a0;nsid=161264755;c=0;bpid=BP-480719120-192.168.1.132-1413164135404;dnuuid=62322351-4f82-42eb-baa4-f3b38675234e
2014-10-13 14:56:23,815 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /home/lxh/hadoop/hdfs/data/current, StorageType: DISK
2014-10-13 14:56:23,951 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2014-10-13 14:56:23,979 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1413202418979 with interval 21600000
2014-10-13 14:56:23,984 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-480719120-192.168.1.132-1413164135404
2014-10-13 14:56:23,986 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-480719120-192.168.1.132-1413164135404 on volume /home/lxh/hadoop/hdfs/data/current...
2014-10-13 14:56:24,047 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-480719120-192.168.1.132-1413164135404 on /home/lxh/hadoop/hdfs/data/current: 59ms
2014-10-13 14:56:24,047 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-480719120-192.168.1.132-1413164135404: 62ms
2014-10-13 14:56:24,049 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-480719120-192.168.1.132-1413164135404 on volume /home/lxh/hadoop/hdfs/data/current...
2014-10-13 14:56:24,050 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-480719120-192.168.1.132-1413164135404 on volume /home/lxh/hadoop/hdfs/data/current: 0ms
2014-10-13 14:56:24,050 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 3ms
2014-10-13 14:56:24,052 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-480719120-192.168.1.132-1413164135404 (Datanode Uuid null) service to master/192.168.1.132:9000 beginning handshake with NN
2014-10-13 14:56:24,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-480719120-192.168.1.132-1413164135404 (Datanode Uuid null) service to master/192.168.1.132:9000 successfully registered with NN
2014-10-13 14:56:24,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/192.168.1.132:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2014-10-13 14:56:24,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-480719120-192.168.1.132-1413164135404 (Datanode Uuid 62322351-4f82-42eb-baa4-f3b38675234e) service to master/192.168.1.132:9000 trying to claim ACTIVE state with txid=430
2014-10-13 14:56:24,249 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-480719120-192.168.1.132-1413164135404 (Datanode Uuid 62322351-4f82-42eb-baa4-f3b38675234e) service to master/192.168.1.132:9000
2014-10-13 14:56:24,350 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 100 msecs for RPC and NN processing. Got back commands org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@563b100c
2014-10-13 14:56:24,354 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-480719120-192.168.1.132-1413164135404
2014-10-13 14:56:24,369 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2014-10-13 14:56:24,370 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2014-10-13 14:56:24,373 INFO org.apache.hadoop.util.GSet: 0.5% max memory 966.7 MB = 4.8 MB
2014-10-13 14:56:24,375 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2014-10-13 14:56:24,376 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-480719120-192.168.1.132-1413164135404
2014-10-13 14:56:24,389 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-480719120-192.168.1.132-1413164135404 to blockPoolScannerMap, new size=1

Problem solved.

  Just to be sure, I turned the master's firewall back on with /etc/init.d/iptables start and looked at the slave's DataNode log again:

2014-10-13 14:57:45,119 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.net.SocketTimeoutException: Call From slave2/192.168.1.133 to master:9000 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.1.133:39010 remote=master/192.168.1.132:9000]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:749)
at org.apache.hadoop.ipc.Client.call(Client.java:1414)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.sendHeartbeat(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy14.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:178)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:570)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:668)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:838)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.1.133:39010 remote=master/192.168.1.132:9000]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:510)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1054)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:949)
2014-10-13 14:57:46,128 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.132:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2014-10-13 14:57:47,129 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.132:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

Sure enough, the cause of the error was the master's firewall.

4 Summing up

  Because master and slave could communicate normally and ping each other, the master's commands could reach the slaves: each slave received the start command, brought up its DataNode and, once it was running, began sending heartbeat information to the master. In the normal case the master receives those heartbeats, records that node's DataNode as healthy, and can then replicate HDFS data onto the node and schedule computation there.

  But because the firewall intercepted them, the slave's heartbeats could not get through to the master, so as far as the master was concerned the node did not exist: it was recorded neither under Live Nodes nor under Dead Nodes. To the master it had become a vanished ghost node.

  When fixing this, the safer choice for security reasons would have been to open port 9000 on the master rather than to shut the firewall off. Since this was just a test environment running in virtual machines, I simply turned it off, and also ran chkconfig iptables off so that the firewall no longer starts at boot.
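
  For reference, opening just the NameNode RPC port on the master would look roughly like this on a CentOS 6-style system (run as root; a real cluster needs the other Hadoop ports, such as 50010, 50020 and 50070, opened as well, or the cluster hosts trusted outright):

iptables -I INPUT -p tcp --dport 9000 -j ACCEPT   # allow the NameNode RPC port
service iptables save                             # persist the rule across firewall restarts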
