1 Installing and configuring Ubuntu in a VM
1.1 Installing the Ubuntu system
 This step is not covered here; if you are not familiar with it, refer to other blog posts.
 
1.2 Cluster layout
    Build a cluster of three machines:

IP               user/passwd    hostname  role       System
192.168.174.160  hadoop/hadoop  master    nn/snn/rm  Ubuntu-14.04-32bit
192.168.174.161  hadoop/hadoop  slave1    dn/nm      Ubuntu-14.04-32bit
192.168.174.162  hadoop/hadoop  slave2    dn/nm      Ubuntu-14.04-32bit

        nn:  namenode
        snn: secondary namenode
        dn:  datanode
        rm:  resourcemanager
        nm:  nodemanager
 
1.3 Creating the user
    Create a hadoop user on each machine, with hadoop as the password as well. Then edit /etc/sudoers and add the line "hadoop  ALL=(ALL) ALL" to give the account sudo privileges.
root@master:/home/duanwf# useradd --create-home hadoop

root@master:/home/duanwf# passwd hadoop

root@master:~# vi /etc/sudoers
# User privilege specification
root ALL=(ALL:ALL) ALL
duanwf ALL=(ALL:ALL) ALL
hadoop ALL=(ALL:ALL) ALL
1.4 Setting a static IP address for each machine
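  A minimal sketch for Ubuntu 14.04, which configures interfaces in /etc/network/interfaces. The interface name eth0 and the gateway/DNS address 192.168.174.2 are assumptions (VMware NAT commonly uses .2) and must be adapted to your own VM network:

root@master:~# vi /etc/network/interfaces
# assumed static configuration for the master VM; adjust interface, gateway and DNS to your setup
auto eth0
iface eth0 inet static
    address 192.168.174.160
    netmask 255.255.255.0
    gateway 192.168.174.2
    dns-nameservers 192.168.174.2

root@master:~# ifdown eth0 && ifup eth0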
 
1.5 Setting each host's hostname

  Open the /etc/hostname file:

root@master:~# vi /etc/hostname
master

  Change the machine name in /etc/hostname to the name you want; the change takes effect only after a reboot.
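
  To avoid an immediate reboot, the running hostname can also be changed on the spot (a sketch; the value must match what was written to /etc/hostname):

root@master:~# hostname master
root@master:~# hostname
master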

 
1.6 Adding the hostnames above to /etc/hosts on all three machines
root@master:~# vi /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

192.168.174.160 master
192.168.174.161 slave1
192.168.174.162 slave2
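
  A quick way to confirm the /etc/hosts entries is to ping the other nodes by hostname from each machine, for example:

root@master:~# ping -c 2 slave1
root@master:~# ping -c 2 slave2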

  

1.7 Setting up passwordless SSH login
  Install SSH:
duanwf@master:~$ sudo apt-get install ssh 

  Check that SSH installed successfully, and its version:

duanwf@master:~$ ssh -V
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014

  After installation, a hidden .ssh folder appears under ~ (the current user's home directory, /home/hadoop here); use ls -a to see hidden files. If the folder does not exist, create it yourself (mkdir .ssh).

duanwf@master:~$ cd /home/hadoop 

duanwf@master:~$ ls -a

duanwf@master:~$ mkdir .ssh

  Enter the .ssh folder:

duanwf@master:~$ cd .ssh

  Generate the key pair:

duanwf@master:~/.ssh$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/duanwf/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/duanwf/.ssh/id_rsa.
Your public key has been saved in /home/duanwf/.ssh/id_rsa.pub.
The key fingerprint is:
49:ad:12:42:36:15:c8:f6:42:08:c1:d9:a6:04:27:a1 duanwf@master
The key's randomart image is:
+--[ RSA 2048]----+
|O++o+oo. |
|.*.==. . |
|E oo... . . |
| . ...o o |
| .. S |
| . |
| |
| |
| |
+-----------------+

  Append id_rsa.pub to the authorized keys:

duanwf@master:~/.ssh$ cat id_rsa.pub >> authorized_keys
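
  sshd is strict about permissions: if ~/.ssh or authorized_keys is writable by group or others, key-based login silently falls back to a password prompt, so it is usually worth tightening them (a sketch):

duanwf@master:~/.ssh$ chmod 700 ~/.ssh
duanwf@master:~/.ssh$ chmod 600 ~/.ssh/authorized_keys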

  Restart the SSH service so the change takes effect:

duanwf@master:~/.ssh$ service ssh restart
stop: Rejected send message, 1 matched rules; type="method_call", sender=":1.109" (uid=1000 pid=8874 comm="stop ssh ") interface="com.ubuntu.Upstart0_6.Job" member="Stop" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
start: Rejected send message, 1 matched rules; type="method_call", sender=":1.110" (uid=1000 pid=8868 comm="start ssh ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
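
  The "Rejected send message" lines above mean Upstart refused the request because it came from a normal user (uid=1000); running the restart with sudo avoids this:

duanwf@master:~/.ssh$ sudo service ssh restart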

   Append master's authorized_keys to the authorized_keys of slave1 and slave2:

  Do this step only after all of the hosts have been set up.

  There is only one master here; if there are multiple namenodes or resourcemanagers, passwordless login has to be set up from every master to all of the remaining nodes.

duanwf@master:~/.ssh$ scp authorized_keys duanwf@slave1:~/.ssh/authorized_keys_from_master
The authenticity of host 'slave1 (192.168.174.161)' can't be established.
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.174.161' (ECDSA) to the list of known hosts.
duanwf@slave1's password:
authorized_keys 100% 395 0.4KB/s 00:00
duanwf@master:~/.ssh$ scp authorized_keys duanwf@slave2:~/.ssh/authorized_keys_from_master
The authenticity of host 'slave2 (192.168.174.162)' can't be established.
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.174.162' (ECDSA) to the list of known hosts.
duanwf@slave2's password:
authorized_keys 100% 395 0.4KB/s 00:00

  On slave1 and slave2, enter the .ssh directory:

duanwf@slave1:~$ cd .ssh
duanwf@slave1:~/.ssh$ ssh -V
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014
duanwf@slave1:~/.ssh$ cat authorized_keys_from_master >> authorized_keys
duanwf@slave1:~/.ssh$ ls
authorized_keys  authorized_keys_from_master
duanwf@slave2:~/.ssh$ ssh -V
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2, OpenSSL 1.0.1f 6 Jan 2014
duanwf@slave2:~/.ssh$ cat authorized_keys_from_master >> authorized_keys
duanwf@slave2:~/.ssh$ ls
authorized_keys authorized_keys_from_master
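
  As an alternative to the scp-and-append steps above, OpenSSH's ssh-copy-id appends the local public key to the remote authorized_keys in one step (a sketch, using the same duanwf account as the commands above; use whichever account will start Hadoop):

duanwf@master:~$ ssh-copy-id duanwf@slave1
duanwf@master:~$ ssh-copy-id duanwf@slave2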

  Verify passwordless SSH login:

duanwf@master:~/.ssh$ ssh slave1
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic i686)

 * Documentation:  https://help.ubuntu.com/

208 packages can be updated.
110 updates are security updates.

Last login: Tue Oct 7 18:25:31 2014 from 192.168.174.1

  As shown above, the login succeeds without a password. On the first connection you are asked whether to continue connecting; type yes to proceed.

  Strictly speaking, passwordless login is not required to install Hadoop, but without it you would have to type a password for every datanode each time Hadoop starts. Since Hadoop clusters commonly run tens or hundreds of machines, passwordless SSH is almost always configured.
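
  A quick round trip from master confirms that every node (including master itself, whose own key was appended earlier) is reachable without a password (a sketch):

duanwf@master:~$ for h in master slave1 slave2; do ssh $h hostname; done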

2 Installing and configuring the JDK
2.1 Downloading the JDK
 
2.2 Extracting the downloaded JDK to /opt
hadoop@master:/home/duanwf/Installpackage$ sudo tar zxvf jdk-7u51-linux-i586.tar.gz -C /opt/

2.3 Configuring the JDK environment variables

Edit /etc/profile in a terminal:

hadoop@master:~$ sudo vi /etc/profile 

Append the following at the end of the file:

export JAVA_HOME=/opt/jdk1.7.0_51
export JRE_HOME=/opt/jdk1.7.0_51/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

Apply the changes:

hadoop@master:~$ source /etc/profile

Verify that the Java environment variables are configured correctly:

hadoop@master:~$ java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) Client VM (build 24.51-b03, mixed mode)
3 Firewall configuration
Disable the firewall under Ubuntu:
hadoop@master:~$ sudo ufw disable
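
Confirm the firewall is actually off:

hadoop@master:~$ sudo ufw status
Status: inactive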
4 Installing and configuring Hadoop 2.4.1
4.1 Compiling the hadoop-2.4.1-src.tar.gz source package
On a 64-bit operating system the source package has to be recompiled, because the binary release ships 32-bit native libraries; a build sketch follows.
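
This guide uses the 32-bit binary release, so no rebuild is needed here. For reference, a typical native build looks roughly like the following; it assumes Maven, protobuf 2.5.0, cmake and the usual build tools are already installed:

hadoop@master:~$ tar zxvf hadoop-2.4.1-src.tar.gz
hadoop@master:~$ cd hadoop-2.4.1-src
hadoop@master:~/hadoop-2.4.1-src$ mvn package -Pdist,native -DskipTests -Dtar
# the rebuilt distribution ends up under hadoop-dist/target/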
 
4.2 Extracting the hadoop-2.4.1.tar.gz package
hadoop@master:/home/duanwf/Installpackage$ sudo tar zxvf hadoop-2.4.1.tar.gz -C /opt/
4.3 Configuring Hadoop environment variables
Edit /etc/profile and add the following:
hadoop@master:~$ sudo vi /etc/profile
export HADOOP_DEV_HOME=/home/hadoop/hadoop-2.4.1/
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export PATH=$HADOOP_DEV_HOME/bin:$HADOOP_DEV_HOME/sbin:$PATH
Apply the changes in a terminal:
hadoop@master:~$ source /etc/profile

Check that the Hadoop environment variables took effect by running:

hadoop@master:~$ hadoop
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
fs run a generic filesystem user client
version print the version
jar <jar> run a jar file
checknative [-a|-h] check native hadoop and compression libraries availability
distcp <srcurl> <desturl> copy file or directories recursively
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
daemonlog get/set the log level for each daemon
or
CLASSNAME run the class named CLASSNAME

Most commands print help when invoked w/o parameters.

    

4.4 Hadoop configuration

Before configuring, create the following directories in the local filesystem on master:

~/dfs/name

~/dfs/data

~/temp

hadoop@master:~$ mkdir ~/dfs 

hadoop@master:~$ mkdir ~/temp 

hadoop@master:~$ mkdir ~/dfs/name 

hadoop@master:~$ mkdir ~/dfs/data
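
The same thing can be done in one command with mkdir -p; the same directories are usually created on the slaves as well, since the datanodes there use ~/dfs/data and ~/temp (a sketch):

hadoop@master:~$ mkdir -p ~/dfs/name ~/dfs/data ~/temp
hadoop@slave1:~$ mkdir -p ~/dfs/data ~/temp
hadoop@slave2:~$ mkdir -p ~/dfs/data ~/temp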

    

Seven configuration files are involved here:

~/hadoop-2.4.1/etc/hadoop/hadoop-env.sh

~/hadoop-2.4.1/etc/hadoop/yarn-env.sh

~/hadoop-2.4.1/etc/hadoop/slaves

~/hadoop-2.4.1/etc/hadoop/core-site.xml

~/hadoop-2.4.1/etc/hadoop/hdfs-site.xml

~/hadoop-2.4.1/etc/hadoop/mapred-site.xml.template

~/hadoop-2.4.1/etc/hadoop/yarn-site.xml

<---------------------------------- hadoop-env.sh --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/opt/jdk1.7.0_51/
<---------------------------------- yarn-env.sh --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi yarn-env.sh
# some Java parameters
export JAVA_HOME=/opt/jdk1.7.0_51/
<---------------------------------- slaves --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi slaves
slave1
slave2
<---------------------------------- core-site.xml --------------------------------->
hadoop@master:~/hadoop-2.4.1/etc/hadoop$ sudo vi core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp/</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>
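
The two hadoop.proxyuser.hduser.* properties come from a setup whose user was hduser; since the account here is hadoop, the user name embedded in the property key would normally be changed to match. A hedged variant (only relevant if impersonation, e.g. via WebHDFS, is actually used):

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>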
<---------------------------------- hdfs-site.xml --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/dfs/name/</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/dfs/data/</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
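
Two things worth noting about this file: dfs.replication is set to 3 although the cluster has only two datanodes, so every block will be reported as under-replicated (a value of 2 matches the actual cluster size), and the format log later warns that the name/data directories should be given as file:// URIs. A hedged variant of those settings:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hadoop/dfs/data</value>
</property>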
<---------------------------------- mapred-site.xml.template --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi mapred-site.xml.template
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
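
Hadoop only reads mapred-site.xml, not the .template file, so these properties take effect only if the template is first copied and the copy is the file that gets edited:

hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo cp mapred-site.xml.template mapred-site.xml
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi mapred-site.xml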
<---------------------------------- yarn-site.xml --------------------------------->
hadoop@master:/opt/hadoop-2.4.1/etc/hadoop$ sudo vi yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>


  

4.5 Copying to the other nodes
  On slave1:
hadoop@slave1:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/

  On slave2:

hadoop@slave2:~$ scp -r hadoop@master:/home/hadoop/hadoop-2.4.1/ /home/hadoop/
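
The slaves also need the JDK under /opt and the same /etc/profile entries as master, since the copied hadoop-env.sh points JAVA_HOME at /opt/jdk1.7.0_51. A sketch, assuming the hadoop user can read the install-package path used earlier (repeat on slave2):

hadoop@slave1:~$ scp hadoop@master:/home/duanwf/Installpackage/jdk-7u51-linux-i586.tar.gz /tmp/
hadoop@slave1:~$ sudo tar zxvf /tmp/jdk-7u51-linux-i586.tar.gz -C /opt/
hadoop@slave1:~$ sudo vi /etc/profile     # add the same JAVA_HOME/HADOOP_* exports as on master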
4.6 Starting Hadoop
(1) Format HDFS
hadoop@master:~/hadoop-2.4.1$ ./bin/hdfs namenode -format
14/10/08 18:43:05 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/192.168.174.160
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.1
STARTUP_MSG: classpath = /home/hadoop/hadoop-2.4.1//etc/hadoop:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/hadoop-annotations-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/junit-4.8.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/activation-1.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/guava-11.0.2.jar:/hom
e/hadoop/hadoop-2.4.1//share/hadoop/common/lib/hadoop-auth-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/hadoop-nfs-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/hadoop-common-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/common/hadoop-common-2.4.1-tests.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/hadoop-hdfs-2.4.1-tests.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.1.jar:/home/hadoop/hadoop-2.4.1//share/hadoop/hdfs/hadoop-hdfs-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/hado
op-2.4.1/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/activation-1.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-client-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-common-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-server-common-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/yarn/hadoop-yarn-api-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/hadoop-2.4.1/share/h
adoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.1-tests.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.1.jar:/home/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.1.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1604318; compiled by 'jenkins' on 2014-06-21T05:43Z
STARTUP_MSG: java = 1.7.0_51
************************************************************/
14/10/08 18:43:05 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/10/08 18:43:05 INFO namenode.NameNode: createNameNode [-format]
14/10/08 18:43:06 WARN common.Util: Path /home/hadoop/dfs/name/ should be specified as a URI in configuration files. Please update hdfs configuration.
14/10/08 18:43:06 WARN common.Util: Path /home/hadoop/dfs/name/ should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-f1441872-89ef-4733-98df-454c18da5043
14/10/08 18:43:06 INFO namenode.FSNamesystem: fsLock is fair:true
14/10/08 18:43:06 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/10/08 18:43:06 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/10/08 18:43:06 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/10/08 18:43:06 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map BlocksMap
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit
14/10/08 18:43:06 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/10/08 18:43:06 INFO util.GSet: capacity = 2^22 = 4194304 entries
14/10/08 18:43:06 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/10/08 18:43:06 INFO blockmanagement.BlockManager: defaultReplication = 3
14/10/08 18:43:06 INFO blockmanagement.BlockManager: maxReplication = 512
14/10/08 18:43:06 INFO blockmanagement.BlockManager: minReplication = 1
14/10/08 18:43:06 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/10/08 18:43:06 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/10/08 18:43:06 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/10/08 18:43:06 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/10/08 18:43:06 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/10/08 18:43:06 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
14/10/08 18:43:06 INFO namenode.FSNamesystem: supergroup = supergroup
14/10/08 18:43:06 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/10/08 18:43:06 INFO namenode.FSNamesystem: HA Enabled: false
14/10/08 18:43:06 INFO namenode.FSNamesystem: Append Enabled: true
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map INodeMap
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit
14/10/08 18:43:06 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/10/08 18:43:06 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/10/08 18:43:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map cachedBlocks
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit
14/10/08 18:43:06 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/10/08 18:43:06 INFO util.GSet: capacity = 2^19 = 524288 entries
14/10/08 18:43:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/10/08 18:43:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/10/08 18:43:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/10/08 18:43:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/10/08 18:43:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/10/08 18:43:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/10/08 18:43:06 INFO util.GSet: VM type = 32-bit
14/10/08 18:43:06 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/10/08 18:43:06 INFO util.GSet: capacity = 2^16 = 65536 entries
14/10/08 18:43:06 INFO namenode.AclConfigFlag: ACLs enabled? false
Re-format filesystem in Storage Directory /home/hadoop/dfs/name ? (Y or N) Y
14/10/08 18:43:10 INFO namenode.FSImage: Allocated new BlockPoolId: BP-215877782-192.168.174.160-1412764990823
14/10/08 18:43:10 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted.
14/10/08 18:43:11 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/10/08 18:43:11 INFO util.ExitUtil: Exiting with status 0
14/10/08 18:43:11 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.174.160
************************************************************/


(2) Start HDFS
Run the following command to start HDFS; it automatically starts the namenode on master and the datanodes on slave1 and slave2:
hadoop@master:~/hadoop-2.4.1$ ./sbin/start-dfs.sh
Starting namenodes on [master]
The authenticity of host 'master (192.168.174.160)' can't be established.
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master,192.168.174.160' (ECDSA) to the list of known hosts.
hadoop@master's password:
master: mkdir: cannot create directory '/opt/hadoop-2.4.1/logs': Permission denied
master: chown: cannot access '/opt/hadoop-2.4.1/logs': No such file or directory
master: starting namenode, logging to /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out
master: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 151: /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out: No such file or directory
master: head: cannot open '/opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out' for reading: No such file or directory
master: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 166: /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out: No such file or directory
master: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 167: /opt/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out: No such file or directory
The authenticity of host 'slave2 (192.168.174.162)' can't be established.
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42.
Are you sure you want to continue connecting (yes/no)? The authenticity of host 'slave1 (192.168.174.161)' can't be established.
ECDSA key fingerprint is 1f:c0:2a:ed:c1:7b:6e:26:46:e3:c3:b6:87:bb:99:42.
Are you sure you want to continue connecting (yes/no)? yes
slave2: Warning: Permanently added 'slave2,192.168.174.162' (ECDSA) to the list of known hosts.
hadoop@slave2's password: Please type 'yes' or 'no':
slave1: Warning: Permanently added 'slave1,192.168.174.161' (ECDSA) to the list of known hosts.
hadoop@slave1's password:
slave2: mkdir: cannot create directory '/opt/hadoop-2.4.1/logs': Permission denied
slave2: chown: cannot access '/opt/hadoop-2.4.1/logs': No such file or directory
slave2: starting datanode, logging to /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out
slave2: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 151: /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out: No such file or directory
slave2: head: cannot open '/opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out' for reading: No such file or directory
slave2: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 166: /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out: No such file or directory
slave2: /opt/hadoop-2.4.1/sbin/hadoop-daemon.sh: line 167: /opt/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out: No such file or directory


[Problem]
mkdir: cannot create directory '/home/hadoop/hadoop-2.4.1/logs': Permission denied

[Solution]
Run the following command on master:
hadoop@master:~$ sudo chown -R hadoop:hadoop hadoop-2.4.1/

slave1 and slave2 need to run the same command.

Restart HDFS:
hadoop@master:~/hadoop-2.4.1$ ./sbin/start-dfs.sh
Starting namenodes on [master]
hadoop@master's password:
master: starting namenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-namenode-master.out
hadoop@slave1's password: hadoop@slave2's password:
slave1: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave1.out
hadoop@slave2's password: slave2: Permission denied, please try again.
slave2: starting datanode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [master]
hadoop@master's password:
master: starting secondarynamenode, logging to /home/hadoop/hadoop-2.4.1/logs/hadoop-hadoop-secondarynamenode-master.out


To check whether the Hadoop cluster is up, run jps on master; if the NameNode process is present, master is working:
hadoop@master:~/hadoop-2.4.1$ jps
31711 SecondaryNameNode
31464 NameNode
31857 Jps
Run jps on slave1; if the DataNode process is present, slave1 is working:
hadoop@slave1:~$ jps
5529 DataNode
5610 Jps
Run jps on slave2; if the DataNode process is present, slave2 is working:
hadoop@slave2:~$ jps
8119 Jps
8035 DataNode
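
At this point only HDFS is running. Since yarn-site.xml was configured as well, YARN can be started the same way; a sketch of the expected next step (jps on master should then additionally show ResourceManager, jps on the slaves should show NodeManager, and the web UIs are at http://master:50070 for HDFS and http://master:8088 for YARN):

hadoop@master:~/hadoop-2.4.1$ ./sbin/start-yarn.sh
hadoop@master:~/hadoop-2.4.1$ jps
hadoop@slave1:~$ jps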

This article comes from the "Forever Love" blog; when reposting, please keep the original source link: http://www.cnblogs.com/dwf07223/p/4012406.html
