Network disk download link

Link: https://pan.baidu.com/s/19qWnP6LQ-cHVrvT0o1jTMg Password: 44hs

Hadoop Pseudo-Distributed Configuration

Hadoop can run on a single node in pseudo-distributed mode, where the Hadoop daemons run as separate Java processes: the node acts as both NameNode and DataNode, and the files it reads are stored in HDFS.

Hadoop's configuration files live in /usr/local/hadoop/etc/hadoop/. Pseudo-distributed mode requires modifying two of them: core-site.xml and hdfs-site.xml. Hadoop configuration files are in XML format.

Edit the configuration file core-site.xml:

Editing with gedit is convenient: gedit ./etc/hadoop/core-site.xml

    <configuration>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/hadoop/tmp</value>
            <description>A base for other temporary directories.</description>
        </property>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>

Edit the configuration file hdfs-site.xml:

gedit ./etc/hadoop/hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/hadoop/tmp/dfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/hadoop/tmp/dfs/data</value>
        </property>
    </configuration>

After the configuration is done, format the NameNode:

./bin/hdfs namenode -format

If it succeeds, you will see messages containing "successfully formatted" and "Exitting with status 0".

Hadoop's run mode is determined by its configuration files (they are read when Hadoop runs), so to switch from pseudo-distributed mode back to standalone mode, remove the configuration items from core-site.xml.
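The notes above skip starting the daemons; before running jobs, start HDFS and check that the processes are up (a minimal sketch using the standard Hadoop 2.x scripts):

./sbin/start-dfs.sh
jps    # should list NameNode, DataNode and SecondaryNameNode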

Run a MapReduce job in pseudo-distributed mode:

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep input output 'dfs[a-z.]+'
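The grep example reads its input from HDFS, so the input directory has to be created and populated first, and the output directory must not already exist. A sketch, assuming the commands run as user hadoop so that the relative path input maps to /user/hadoop/input:

./bin/hdfs dfs -mkdir -p /user/hadoop/input
./bin/hdfs dfs -put ./etc/hadoop/*.xml input    # the xml config files are just convenient sample input
./bin/hdfs dfs -cat output/*                    # view the result once the job has finished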

---------------------------

HBase Pseudo-Distributed Configuration

1. Configure /usr/local/hbase/conf/hbase-env.sh. Command:

gedit /usr/local/hbase/conf/hbase-env.sh


Configure JAVA_HOME, HBASE_CLASSPATH, and HBASE_MANAGES_ZK. Set HBASE_CLASSPATH to the conf directory of the local Hadoop installation (i.e. /usr/local/hadoop/conf):

export JAVA_HOME=/usr/lib/jvm/default-java
export HBASE_CLASSPATH=/usr/local/hadoop/conf
export HBASE_MANAGES_ZK=true

2. Configure /usr/local/hbase/conf/hbase-site.xml. Open and edit hbase-site.xml with the following command:

gedit /usr/local/hbase/conf/hbase-site.xml

Modify hbase.rootdir to specify the HDFS path where HBase stores its data, and set the property hbase.cluster.distributed to true. This assumes the Hadoop cluster is running in pseudo-distributed mode on the local machine, with the NameNode on port 9000.

    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:9000/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
    </configuration>
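Since HBASE_MANAGES_ZK=true, HBase starts its own ZooKeeper. Once HDFS is running, HBase can be started and verified like this (a sketch of the standard commands):

/usr/local/hadoop/sbin/start-dfs.sh    # HDFS must be up first
/usr/local/hbase/bin/start-hbase.sh    # starts ZooKeeper, HMaster and HRegionServer
jps                                    # should now also list HMaster and HRegionServer
/usr/local/hbase/bin/hbase shell       # interactive shell for a quick check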


------------------------------------------------------

Python - MapReduce - WordCount

1.1 Map phase: mapper.py

#!/usr/bin/env python
import sys

# Emit a "word<TAB>1" pair for every word read from standard input.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print("%s\t%s" % (word, 1))

1.2 Reduce phase: reducer.py

#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# Hadoop streaming sorts mapper output by key, so all counts
# for the same word arrive on consecutive lines.
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # Skip lines whose count is not a number.
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            print("%s\t%s" % (current_word, current_count))
        current_count = count
        current_word = word

# Flush the last word (guard against empty input).
if word == current_word and current_word is not None:
    print("%s\t%s" % (current_word, current_count))

1.3 Local test (cat data | map | sort | reduce)

$ echo "foo foo quux labs foo bar quux" | ./mapper.py

$ echo "foo foo quux labs foo bar quux" | ./mapper.py | sort -k1,1 | ./reducer.py
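Both scripts need the execute bit to be invoked as ./mapper.py and ./reducer.py; the full pipeline should then print the per-word totals:

chmod +x mapper.py reducer.py
# expected output of the second pipeline:
# bar     1
# foo     3
# labs    1
# quux    2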

1.4 Run the Python code on Hadoop

Add the path of the Hadoop streaming jar to ~/.bashrc:

export STREAM=$HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar


run.sh:

hadoop jar $STREAM \
-file /home/hadoop/wc/mapper.py \
-mapper /home/hadoop/wc/mapper.py \
-file /home/hadoop/wc/reducer.py \
-reducer /home/hadoop/wc/reducer.py \
-input /user/hadoop/input/*.txt \
-output /user/hadoop/wcoutput
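To submit the job, reload the shell configuration and run the script; the result can then be read back from HDFS (assuming the input files were already uploaded to /user/hadoop/input):

source ~/.bashrc
bash run.sh
hdfs dfs -cat /user/hadoop/wcoutput/*    # print the word counts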

--------------------------------------------

Hive Configuration

Edit hive-site.xml under /usr/local/hive/conf:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>hive</value>
        <description>password to use against metastore database</description>
    </property>
</configuration>
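This configuration expects a MySQL database named hive that user hive (password hive) can reach, and the MySQL JDBC driver on Hive's classpath. A minimal sketch of that setup; the driver jar version is an assumption:

cp mysql-connector-java-5.1.40-bin.jar /usr/local/hive/lib/    # jar version is an assumption

mysql -u root -p -e "
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;"

The database itself is created automatically thanks to createDatabaseIfNotExist=true in the connection URL.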

-------------------------------------------

Hive user

pre_deal.sh (preprocess the raw user CSV before loading it into Hive):

#!/bin/bash
# Usage: pre_deal.sh <input csv> <output tsv>
infile=$1
outfile=$2
# Add a line id, drop field 4, keep only the first 10 characters (the date)
# of field 6, and append a randomly chosen province.
awk -F "," 'BEGIN{
    srand();
    id=0;
    Province[0]="山东";Province[1]="山西";Province[2]="河南";Province[3]="河北";Province[4]="陕西";Province[5]="内蒙古";Province[6]="上海市";
    Province[7]="北京市";Province[8]="重庆市";Province[9]="天津市";Province[10]="福建";Province[11]="广东";Province[12]="广西";Province[13]="云南";
    Province[14]="浙江";Province[15]="贵州";Province[16]="新疆";Province[17]="西藏";Province[18]="江西";Province[19]="湖南";Province[20]="湖北";
    Province[21]="黑龙江";Province[22]="吉林";Province[23]="辽宁";Province[24]="江苏";Province[25]="甘肃";Province[26]="青海";Province[27]="四川";
    Province[28]="安徽";Province[29]="宁夏";Province[30]="海南";Province[31]="香港";Province[32]="澳门";Province[33]="台湾";
}
{
    id=id+1;
    value=int(rand()*34);
    print id"\t"$1"\t"$2"\t"$3"\t"$5"\t"substr($6,1,10)"\t"Province[value]
}' "$infile" > "$outfile"
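A hypothetical invocation (the file names are assumptions, not from the original notes):

bash ./pre_deal.sh small_user.csv user_table.txt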

Hive word_count:

create table word_count as
select word, count(1) as count from
(select explode(split(line, ' ')) as word from docs) w
group by word
order by word;
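The query assumes a table docs with a single string column line; a minimal sketch that creates it and loads one of the downloaded Gutenberg texts (the local path is an assumption):

create table docs (line string);
load data local inpath '/home/hadoop/pg20417.txt' overwrite into table docs;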


Hive user analysis

CREATE EXTERNAL TABLE dblab.bigdata_user(
    id INT,
    uid STRING,
    item_id STRING,
    behavior_type INT,
    item_category STRING,
    visit_date DATE,
    province STRING)
COMMENT 'Welcome to dblab!'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/bigdatacase/dataset';
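The external table reads whatever sits under /bigdatacase/dataset on HDFS, so the preprocessed file has to be uploaded there first (a sketch, assuming user_table.txt is the output of pre_deal.sh):

hdfs dfs -mkdir -p /bigdatacase/dataset
hdfs dfs -put /home/hadoop/user_table.txt /bigdatacase/dataset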


Count how many non-duplicated records there are:

select count(*) from
(select uid, item_id, behavior_type, item_category, visit_date, province
 from bigdata_user
 group by uid, item_id, behavior_type, item_category, visit_date, province
 having count(*) = 1) a;


Sample texts for the word-count exercise can be downloaded from Project Gutenberg:

wget http://www.gutenberg.org/files/5000/5000-8.txt

wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
