A change to Hadoop affects the other components that run on top of it.

So after modifying any part of a Hadoop cluster, remember that the components built on it can be affected as well.

Here is what happened when I ran HBase again after reconfiguring my Hadoop cluster: HBase threw connection exceptions because it could not reach the NameNode. The cause was that I had changed the NameNode RPC port in my earlier reconfiguration but never updated the copies of the configuration files under hbase/conf. The details follow.

## What I saw when I started the HBase cluster today

1.http://node3:60010/master-status

HTTP ERROR 503

Problem accessing /master-status. Reason:

    Master not ready

Powered by Jetty://

2.http://node3:60030/rs-status

The RegionServer is initializing!

In hbase-hadoop-master-node3.log, the exception reported was:

2016-10-12 09:45:04,616 DEBUG [main-EventThread] master.ActiveMasterManager: A master is now available

2016-10-12 09:59:10,484 WARN [master:node3:60000] retry.RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode. Not retrying because failovers (15) exceeded maximum allowed (15)
java.net.ConnectException: Call From node3/192.168.8.13 to node2:8020 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy12.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy13.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2146)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:983)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:426)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:851)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:435)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:607)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: 拒绝连接
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
at org.apache.hadoop.ipc.Client.call(Client.java:1318)
... 25 more
2016-10-12 10:02:04,280 WARN [master:node3:60000] retry.RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode. Not retrying because failovers (15) exceeded maximum allowed (15)
java.net.ConnectException: Call From node3/192.168.8.13 to node1:8020 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor5.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy12.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:561)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy13.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2146)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:983)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:967)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:432)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:851)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:435)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:607)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: 拒绝连接
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
at org.apache.hadoop.ipc.Client.call(Client.java:1318)
... 21 more
2016-10-12 10:02:04,282 FATAL [master:node3:60000] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call From node3/192.168.8.13 to node1:8020 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor5.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy12.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:561)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy13.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2146)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:983)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:967)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:432)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:851)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:435)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:607)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: 拒绝连接
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
at org.apache.hadoop.ipc.Client.call(Client.java:1318)
... 21 more

2016-10-12 10:02:04,352 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2799)

Meanwhile, hbase-hadoop-regionserver-node3.log contained no errors, only lines like these:

2016-10-12 09:55:51,150 INFO [main] regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
2016-10-12 10:00:45,947 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN
2016-10-12 10:05:45,947 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN
2016-10-12 10:10:45,948 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN
2016-10-12 10:15:45,948 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN
2016-10-12 10:20:45,948 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN
2016-10-12 10:25:45,981 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN
2016-10-12 10:30:45,947 DEBUG [LruStats #0] hfile.LruBlockCache: Total=3.26 MB, free=393.42 MB, max=396.68 MB, blocks=0, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=0, evicted=0, evictedPerRun=NaN

Why? I searched for a long time. I remembered that in the past, symptoms like this came from the following causes:

1. A hostname mismatch: the RegionServer connects to the master with a hostname that does not match the machine's real hostname (the classic distributed-HBase deployment problem "org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to connect to master"). Setting the hostname to the one the RegionServer uses for the connection fixes it.

---> I ruled this out quickly, because my Hadoop cluster starts normally.

2. Clock skew between nodes, as someone else described: the RegionServer and Master clocks really were different, because only the master node's time had been synchronized and not the slave nodes'.

I checked every machine with date; hwclock -r and found no discrepancy. (To set the Linux system time in one step: date -s "20091112 18:30:50" && hwclock --systohc.)

3. HBase never actually started, typically because the firewall is up (fix: service iptables stop). I ruled this out as well, since my Hadoop cluster was running; a quick command checklist for all three checks follows this list.
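For reference, this is the quick checklist I ran for those three causes; the commands assume a CentOS-style node with iptables, so adjust them to your distribution:

hostname                    # on each node: does it match the name the other nodes use to reach it?
cat /etc/hosts              # are the hostname/IP mappings consistent across the cluster?
date; hwclock -r            # are the system and hardware clocks in sync across nodes?
service iptables status     # is the firewall stopped (or the needed ports open)?
jps                         # are the NameNode / HMaster / HRegionServer processes actually running?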

In the end, with nothing else to go on, I went back to my earlier HBase configuration. A month or so before, HBase had worked fine, including some simple programming against it, so I re-read the document I used to set up distributed HBase:

1. Upload the HBase package

2. Unpack it

3. Configure the HBase cluster by editing three files (the ZooKeeper cluster is assumed to be installed already)

Note: copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf (a sketch of this copy step follows).
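A minimal sketch of that copy step; the Hadoop directory here is an assumption about the layout, while the HBase directory matches the one used later in this document:

cp /itcast/hadoop/etc/hadoop/hdfs-site.xml /itcast/hbase-0.96.2-hadoop2/conf/
cp /itcast/hadoop/etc/hadoop/core-site.xml /itcast/hbase-0.96.2-hadoop2/conf/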

3.1 Edit hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_55

# tell HBase to use an external ZooKeeper ensemble

export HBASE_MANAGES_ZK=false

vim hbase-site.xml

<configuration>

<!-- the HBase root directory on HDFS; ns1 is the nameservice configured in Hadoop's core-site.xml -->

<property>

<name>hbase.rootdir</name>

<value>hdfs://ns1/hbase</value>

</property>

<!-- run HBase in fully distributed mode -->

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

</property>

<!-- the ZooKeeper quorum; separate multiple addresses with commas -->

<property>

<name>hbase.zookeeper.quorum</name>

<value>itcast04:2181,itcast05:2181,itcast06:2181</value>

</property>

</configuration>
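Note that hbase.rootdir points at the nameservice (ns1 here, myserver in my cluster) rather than a host:port, so HBase can only resolve the active NameNode through the hdfs-site.xml copied into hbase/conf. These are the standard HDFS HA keys it needs from that file; the values below are only illustrative:

<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>node1:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>node2:9000</value>
</property>

If the copy of these keys under hbase/conf goes stale, the HMaster keeps dialing the old address, which is exactly what the log above shows.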

vim regionservers

itcast03

itcast04

itcast05

itcast06

3.2 Copy HBase to the other nodes

scp -r /itcast/hbase-0.96.2-hadoop2/ itcast02:/itcast/

scp -r /itcast/hbase-0.96.2-hadoop2/ itcast03:/itcast/

scp -r /itcast/hbase-0.96.2-hadoop2/ itcast04:/itcast/

scp -r /itcast/hbase-0.96.2-hadoop2/ itcast05:/itcast/

scp -r /itcast/hbase-0.96.2-hadoop2/ itcast06:/itcast/

4. Copy the configured HBase to every node and synchronize the clocks.

5. Start everything

Start ZooKeeper on each node:

./zkServer.sh start

Start the HDFS cluster:

start-dfs.sh

Start HBase by running the following on the master node:

start-hbase.sh

6. Open the HBase web UI in a browser

192.168.1.201:60010

7. For reliability, start additional backup HMasters:

hbase-daemon.sh start master
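Alternatively, in the HBase versions I have used you can list the backup master hosts in conf/backup-masters, one hostname per line, and start-hbase.sh will bring them up together with the primary; for example (hostnames are illustrative):

node2
node3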

When I reached step 3 I finally spotted the problem. Earlier I had changed the Hadoop NameNode RPC addresses in hdfs-site.xml: dfs.namenode.rpc-address.myserver.nn1 used to be <value>node1:8020</value>, and the addresses had been changed to port 9000, for example:

<property>
<name>dfs.namenode.rpc-address.myserver.nn2</name>
<value>node2:9000</value>
</property>

HBase, however, was still working from the old copies of those files in hbase/conf, so it kept calling the old 8020 address and failed with:

Call From node3/192.168.8.13 to node1:8020 failed on connection exception: java.net.ConnectException: 拒绝连接; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

 

So that was the problem: a stale configuration. The hdfs-site.xml and core-site.xml under hbase/conf still pointed at the old NameNode RPC port (8020), while the cluster had moved to 9000.
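The fix is to bring HBase's copy of the Hadoop client configuration back in line with the cluster and restart HBase. A sketch, assuming my paths and the nameservice name from my hdfs-site.xml; verify the live RPC addresses first:

hdfs getconf -confKey dfs.namenode.rpc-address.myserver.nn1
hdfs getconf -confKey dfs.namenode.rpc-address.myserver.nn2

# re-copy the current Hadoop configs into hbase/conf on every HBase node
cp /itcast/hadoop/etc/hadoop/hdfs-site.xml /itcast/hadoop/etc/hadoop/core-site.xml /itcast/hbase-0.96.2-hadoop2/conf/
scp /itcast/hbase-0.96.2-hadoop2/conf/hdfs-site.xml /itcast/hbase-0.96.2-hadoop2/conf/core-site.xml node2:/itcast/hbase-0.96.2-hadoop2/conf/

# restart HBase from the master node
stop-hbase.sh
start-hbase.sh

Once the copies match, the HMaster can reach the active NameNode, leave the "Master not ready" state, and the RegionServers can finish initializing.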
