Hadoop Fully Distributed Cluster Setup
This article walks through a basic Hadoop cluster setup. First, my environment: a Windows 10 Pro laptop running VMware Workstation Pro, with CentOS 7 as the guest OS. The software needed is hadoop-2.6.0 and JDK 1.8.0; other versions should also work, so download whichever you prefer.
Overall Plan
1. The cluster consists of four nodes, each a minimal CentOS 7 install, and each with a user named zgw. The Hadoop and JDK archives have been placed in zgw's home directory in advance.
2. The four nodes are named namenode, datanode1, datanode2, and SecondNamenode, with namenode as the Master.
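For reference, here is the complete layout used throughout this article (the IPs are assigned in the static-IP step below; the roles follow from the configuration files later on):
namenode        192.168.190.11   Master: NameNode, ResourceManager, JobHistoryServer
datanode1       192.168.190.12   DataNode, NodeManager
datanode2       192.168.190.13   DataNode, NodeManager
SecondNamenode  192.168.190.14   SecondaryNameNode, DataNode, NodeManager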
Preparation Before Installing Hadoop
1. Prepare four installed CentOS 7 virtual machines. The installation itself is not covered here.
2. Set a static IP.
sudo cp /etc/sysconfig/network-scripts/ifcfg-eno16777736 /etc/sysconfig/network-scripts/ifcfg-eno16777736.bak.zgw
sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
Contents:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=32b53370-f40b-4b40-b29a-daef1a58d6dc
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.190.11
NETMASK=255.255.255.0
DNS1=192.168.190.2
DNS2=223.5.5.5
GATEWAY=192.168.190.2
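Adjust IPADDR on each node (192.168.190.11 through .14); the interface name and UUID will differ on your machines. After saving, restart networking so the new address takes effect; a quick check with the standard CentOS 7 commands:

sudo systemctl restart network
ip addr show eno16777736    # verify the static address is in place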
3. Disable the CentOS 7 firewall.
systemctl stop firewalld.service       # stop firewalld now
systemctl disable firewalld.service    # keep firewalld from starting at boot
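You can confirm the firewall is really off with the standard firewalld/systemd commands:

systemctl status firewalld.service    # should report inactive (dead)
firewall-cmd --state                  # should print: not running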
4. Edit /etc/hostname on each node and set that node's hostname (the four names must all differ).
sudo vi /etc/hostname
5. Record the IP-to-hostname mapping of every node in /etc/hosts on every node.
5.1 Edit hosts.
sudo vi /etc/hosts
Contents (the last four lines are the additions):
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.190.11 namenode
192.168.190.12 datanode1
192.168.190.13 datanode2
192.168.190.14 SecondNamenode
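Rather than editing the file by hand four times, you can edit it once on namenode and push it out. A minimal sketch, assuming the zgw user can sudo on each node (you will be prompted for passwords, since SSH keys are only set up in step 6):

for h in datanode1 datanode2 SecondNamenode; do
    scp /etc/hosts zgw@$h:/tmp/hosts
    ssh -t zgw@$h 'sudo cp /tmp/hosts /etc/hosts'    # -t allocates a tty so sudo can prompt
done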
5.2 Test.
[zgw@namenode ~]$ ping datanode1
PING datanode1 (192.168.190.12) 56(84) bytes of data.
64 bytes from datanode1 (192.168.190.12): icmp_seq=1 ttl=64 time=0.711 ms
64 bytes from datanode1 (192.168.190.12): icmp_seq=2 ttl=64 time=0.377 ms
64 bytes from datanode1 (192.168.190.12): icmp_seq=3 ttl=64 time=0.424 ms
^C
--- datanode1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2016ms
rtt min/avg/max/mdev = 0.377/0.504/0.711/0.147 ms
[zgw@namenode ~]$ ping datanode2
PING datanode2 (192.168.190.13) 56(84) bytes of data.
64 bytes from datanode2 (192.168.190.13): icmp_seq=1 ttl=64 time=2.31 ms
64 bytes from datanode2 (192.168.190.13): icmp_seq=2 ttl=64 time=3.22 ms
64 bytes from datanode2 (192.168.190.13): icmp_seq=3 ttl=64 time=2.62 ms
^C
--- datanode2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 2.316/2.722/3.221/0.375 ms
[zgw@namenode ~]$ ping SecondNamenode
PING SecondNamenode (192.168.190.14) 56(84) bytes of data.
64 bytes from SecondNamenode (192.168.190.14): icmp_seq=1 ttl=64 time=1.23 ms
64 bytes from SecondNamenode (192.168.190.14): icmp_seq=2 ttl=64 time=0.404 ms
^C
--- SecondNamenode ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.404/0.817/1.230/0.413 ms
6. Set up passwordless SSH login.
6.1 Generate a key pair on namenode with ssh-keygen.
[zgw@namenode ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/zgw/.ssh/id_rsa):
Created directory '/home/zgw/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/zgw/.ssh/id_rsa.
Your public key has been saved in /home/zgw/.ssh/id_rsa.pub.
The key fingerprint is:
b1:a5:c5:c6:81:9e:8a:68:0c:ba:b6:76:24:3c:5c:33 zgw@namenode
The key's randomart image is:
+--[ RSA 2048]----+
| .. |
| .o . |
| ...* |
|. E oB |
|+o..o. .S |
|.=+.. . |
| o+ |
|.o . |
|o.o |
+-----------------+
6.2 Copy ~/.ssh/id_rsa.pub from namenode to the other machines. Note: be sure to send a copy to namenode itself as well.
[zgw@namenode ~]$ ssh-copy-id namenode
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@namenode's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'namenode'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$ ssh-copy-id datanode1
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@datanode1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'datanode1'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$ ssh-copy-id datanode2
The authenticity of host 'datanode2 (192.168.190.13)' can't be established.
ECDSA key fingerprint is 63:6b:24:0d:60:93:5c:a0:98:2f:b9:79:85:ca:90:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@datanode2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'datanode2'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$ ssh-copy-id SecondNamenode
The authenticity of host 'secondnamenode (192.168.190.14)' can't be established.
ECDSA key fingerprint is 63:6b:24:0d:60:93:5c:a0:98:2f:b9:79:85:ca:90:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
zgw@secondnamenode's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'SecondNamenode'"
and check to make sure that only the key(s) you wanted were added.

[zgw@namenode ~]$
6.3 Test after the copies are done.
[zgw@namenode ~]$ ssh datanode1
Last login: Tue Dec 27 06:26:37 2016 from 192.168.190.1
[zgw@datanode1 ~]$ exit
logout
Connection to datanode1 closed.
[zgw@namenode ~]$ ssh datanode2
Last login: Tue Dec 27 05:56:22 2016 from 192.168.190.1
[zgw@datanode2 ~]$ exit
logout
Connection to datanode2 closed.
[zgw@namenode ~]$ ssh SecondNamenode
Last login: Tue Dec 27 05:56:27 2016 from 192.168.190.1
[zgw@SecondNamenode ~]$ exit
logout
Connection to secondnamenode closed.
[zgw@namenode ~]$
Hadoop Installation Prerequisites
1. Install and configure the JDK.
1.1 Extract the JDK to /opt. If the archive is not in the current directory, include its path.
tar -zxvf jdk-8u91-linux-x64.tar.gz -C /opt
1.2 Create a symbolic link, which makes future upgrades easier.
ln -s /opt/jdk1.8.0_91 /opt/jdk
1.3 Set environment variables. Run these as root (appending to /etc/profile requires root), and be sure to run the source command afterwards.
echo "export JAVA_HOME=/opt/jdk" >> /etc/profile          # set JAVA_HOME
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile   # prepend to PATH
source /etc/profile
1.4 Test the JDK. Output like the following means the installation succeeded.
[zgw@namenode ~]$ java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
[zgw@namenode ~]$
2. Create the Hadoop users.
2.1 Create the hadoop group.
groupadd -g 20000 hadoop    # GID 20000
2.2 Create the hdfs, yarn, and mr users.
useradd -m -d /home/hdfs -u 20001 -s /bin/bash -g hadoop hdfs
useradd -m -d /home/yarn -u 20002 -s /bin/bash -g hadoop yarn
useradd -m -d /home/mr -u 20003 -s /bin/bash -g hadoop mr
2.3 Set passwords for the users.
echo hdfs:zgw | chpasswd
echo yarn:zgw | chpasswd
echo mr:zgw | chpasswd
2.4 Give the users sudo rights. On CentOS 7 the administrative group is wheel (there is no sudo group as on Debian-based systems), and -aG appends to the supplementary groups instead of replacing them.
usermod -aG wheel hdfs
usermod -aG wheel yarn
usermod -aG wheel mr
2.5 Set up passwordless SSH login for each of these users, exactly as described earlier (a sketch follows below). Do not skip this step!
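A minimal sketch of that repetition for the hdfs user, run on namenode (repeat for yarn and mr; each ssh-copy-id prompts for that user's password on the target node):

su - hdfs
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # generate a key pair non-interactively
for h in namenode datanode1 datanode2 SecondNamenode; do
    ssh-copy-id $h
done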
3. Create directories.
3.1 Create the directories Hadoop needs, on every node (each node will only actually use the ones matching its role):
mkdir -p /data/hadoop/hdfs/nn
mkdir -p /data/hadoop/hdfs/snn
mkdir -p /data/hadoop/hdfs/dn
mkdir -p /data/hadoop/yarn/nm
3.2 Set directory permissions. /opt is where Hadoop will be installed, so its permissions are set here as well.
chown -R 20000:hadoop /data
chown -R hdfs /data/hadoop/hdfs
chown -R yarn /data/hadoop/yarn
chmod -R 777 /opt
chmod -R 777 /data/hadoop/hdfs
chmod -R 777 /data/hadoop/yarn
Hadoop Installation
1. Extract Hadoop. If the archive is not in the current directory, include its path.
tar -zxvf hadoop-2.6.0.tar.gz -C /opt
2. Create a symbolic link.
ln -s /opt/hadoop-2.6.0 /opt/hadoop
3. Set environment variables.
echo "export HADOOP_HOME=/opt/hadoop" >> /etc/profile
echo 'export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH' >> /etc/profile
source /etc/profile
4. Test the hadoop command.
[zgw@namenode ~]$ hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
[zgw@namenode ~]$
5. Configure Hadoop as follows.
5.1 core-site.xml.
sudo vi /opt/hadoop/etc/hadoop/core-site.xml
Contents (the Apache license header that ships in the file is omitted here):
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.190.11:9000</value>
    </property>
</configuration>
The fs.defaultFS value is the Master node's IP; here that is the namenode machine. A hostname form such as hdfs://namenode:9000 also works, since /etc/hosts resolves it.
5.2 hdfs-site.xml.
sudo vi /opt/hadoop/etc/hadoop/hdfs-site.xml
Contents (license header omitted):
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>32m</value>
        <description>
            The default block size for new files, in bytes.
            You can use the following suffix (case insensitive):
            k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.),
            Or provide complete size in bytes (such as 134217728 for 128 MB).
        </description>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster-zgw</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>/data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>/data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/hdfs/dn</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.190.14:50090</value>
    </property>
</configuration>
The IP in dfs.namenode.secondary.http-address is the SecondNamenode node.
5.3 yarn-site.xml.
sudo vi /opt/hadoop/etc/hadoop/yarn-site.xml
Contents (license header omitted):
<?xml version="1.0"?>
<configuration>

    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.190.11</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/hadoop/yarn/nm</value>
    </property>
</configuration>
The IP in yarn.resourcemanager.hostname is the YARN ResourceManager node. It may be the same machine as the HDFS namenode or a different one; the two roles are independent.
5.4 mapred-site.xml. Note that Hadoop 2.6.0 ships only mapred-site.xml.template, so create the file from the template first (see below), then edit it.
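Creating the file from the bundled template (this template path is the standard layout of the 2.x distribution):

cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml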
sudo vi /opt/hadoop/etc/hadoop/mapred-site.xml
Contents (license header omitted):
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
5.5 slaves. List the datanodes in the slaves file; I use SecondNamenode as a datanode as well.
sudo vi /opt/hadoop/etc/hadoop/slaves
Contents:
datanode1
datanode2
SecondNamenode
5.6 Set the JDK path.
sudo vi /opt/hadoop/etc/hadoop/hadoop-env.sh
Find the export JAVA_HOME line (line 25 in my copy) and set it as follows:
export JAVA_HOME=/opt/jdk
Note: be sure to remove any leading # so the line is not commented out.
5.7 Create the logs directory. Look under /opt/hadoop/: if a logs directory already exists, skip this step; otherwise create it.
sudo mkdir /opt/hadoop/logs
Then fix its ownership and permissions:
chown -R mr:hadoop /opt/hadoop/logs
chmod 777 /opt/hadoop/logs
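Everything so far was done on namenode, but the other three nodes need the same JDK, Hadoop tree, users, directories, and /etc/profile entries. A minimal sketch of pushing the software out, assuming root SSH access between nodes (a convenience of my own, not part of the original steps):

for h in datanode1 datanode2 SecondNamenode; do
    rsync -a /opt/jdk1.8.0_91 /opt/hadoop-2.6.0 root@$h:/opt/
    ssh root@$h 'ln -sfn /opt/jdk1.8.0_91 /opt/jdk; ln -sfn /opt/hadoop-2.6.0 /opt/hadoop'
done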
Starting the Hadoop Cluster
1. Format the HDFS filesystem on every node. Formatting only on the master node is supposed to be enough, but that sometimes fails for me, so I always format on every node.
hdfs namenode -format
2. Start the HDFS cluster.
2.1 Switch to the hdfs user.
su - hdfs
2.2 On namenode, start HDFS:
start-dfs.sh
2.3 Run jps on namenode to check:
[hdfs@namenode ~]$ start-dfs.sh
Starting namenodes on [namenode]
namenode: starting namenode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-namenode-namenode.out
SecondNamenode: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-datanode-SecondNamenode.out
datanode2: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-datanode-datanode2.out
datanode1: starting datanode, logging to /opt/hadoop-2.6.0/logs/hadoop-hdfs-datanode-datanode1.out
[hdfs@namenode ~]$ jps
10902 Jps
10712 NameNode
2.4 Run jps on the other three nodes:
[hdfs@datanode1 ~]$ jps
4547 DataNode
5190 Jps
[hdfs@datanode2 ~]$ jps
4416 DataNode
5070 Jps
[hdfs@SecondNamenode ~]$ jps
5110 Jps
4394 DataNode
2.5 Check the web UI: http://192.168.190.11:50070.
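You can also verify HDFS from the command line with the standard client commands:

hdfs dfsadmin -report    # should list three live datanodes
hdfs dfs -mkdir /test    # quick write test
hdfs dfs -ls /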
2.6 Stop the HDFS cluster from namenode.
stop-dfs.sh
3. Start the YARN cluster.
3.1 Switch to the yarn user.
su - yarn
3.2 On namenode, start YARN:
start-yarn.sh
3.3 Run jps on namenode to check. (In my transcript below, YARN was already running from an earlier attempt, hence the "Stop it first" messages.)
[yarn@namenode ~]$ start-yarn.sh
starting yarn daemons
resourcemanager running as process 9741. Stop it first.
datanode2: nodemanager running as process 4713. Stop it first.
SecondNamenode: nodemanager running as process 4695. Stop it first.
datanode1: nodemanager running as process 4828. Stop it first.
[yarn@namenode ~]$ jps
11004 Jps
9741 ResourceManager
3.4 Run jps on the other three nodes:
[yarn@datanode1 ~]$ jps
5398 Jps
4828 NodeManager
[yarn@datanode2 ~]$ jps
5281 Jps
4713 NodeManager
[yarn@SecondNamenode ~]$ jps
4695 NodeManager
5308 Jps
3.5 Check the web UI: http://192.168.190.11:8088.
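To exercise YARN end to end, you can run the pi example that ships with the distribution (jar path per the layout above):

yarn node -list    # should show three running NodeManagers
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10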
3.6 Stop the YARN cluster from namenode.
stop-yarn.sh
4. Start the job history server.
4.1 Switch to the mr user.
su - mr
4.2 Start the job history server:
mr-jobhistory-daemon.sh start historyserver
4.3 Run jps to check:
[mr@namenode ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /opt/hadoop-2.6.0/logs/mapred-mr-historyserver-namenode.out
[mr@namenode ~]$ jps
11157 Jps
11126 JobHistoryServer
4.4 Check the web UI: http://192.168.190.11:19888.
4.5 Stop the job history server from namenode.
mr-jobhistory-daemon.sh stop historyserver