[root@node1 ~]# lscpu

Architecture: x86_64

CPU op-mode(s): 32-bit, 64-bit

Byte Order: Little Endian

CPU(s): 1

On-line CPU(s) list: 0

Thread(s) per core: 1

Core(s) per socket: 1

Socket(s): 1

NUMA node(s): 1

Vendor ID: GenuineIntel

CPU family: 6

Model: 58

Stepping: 8

CPU MHz: 2299.062

BogoMIPS: 4598.12

L1d cache: 32K

L1i cache: 32K

L2 cache: 6144K

NUMA node0 CPU(s): 0

[root@node1 ~]# free -m

total used free shared buffers cached

Mem: 996 647 348 0 9 109

-/+ buffers/cache: 527 468

Swap: 1839 0 1839

[root@node1 ~]# cat /proc/meminfo

MemTotal: 1020348 kB

MemFree: 357196 kB

Buffers: 10156 kB

Cached: 112464 kB

SwapCached: 0 kB

Active: 505360 kB

Inactive: 70812 kB

Active(anon): 453560 kB

Inactive(anon): 196 kB

Active(file): 51800 kB

Inactive(file): 70616 kB

Unevictable: 0 kB

Mlocked: 0 kB

SwapTotal: 1884152 kB

SwapFree: 1884152 kB

Dirty: 96 kB

Writeback: 0 kB

AnonPages: 453564 kB

Mapped: 25552 kB

Shmem: 208 kB

Slab: 63916 kB

SReclaimable: 12588 kB

SUnreclaim: 51328 kB

KernelStack: 2280 kB

PageTables: 5644 kB

NFS_Unstable: 0 kB

Bounce: 0 kB

WritebackTmp: 0 kB

CommitLimit: 2394324 kB

Committed_AS: 722540 kB

VmallocTotal: 34359738367 kB

VmallocUsed: 7852 kB

VmallocChunk: 34359717412 kB

HardwareCorrupted: 0 kB

AnonHugePages: 358400 kB

HugePages_Total: 0

HugePages_Free: 0

HugePages_Rsvd: 0

HugePages_Surp: 0

Hugepagesize: 2048 kB

DirectMap4k: 8128 kB

DirectMap2M: 1040384 kB
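The `free -m` figures above can be recomputed from the `/proc/meminfo` values; a quick sanity check (the kB figures below are hardcoded from this transcript):

```shell
# "used" in free -m is MemTotal - MemFree; the "-/+ buffers/cache" used
# value additionally subtracts Buffers and Cached (integer division floors,
# matching free's MB rounding).
memtotal=1020348; memfree=357196; buffers=10156; cached=112464

echo $(( (memtotal - memfree) / 1024 ))                       # 647, the "used" column
echo $(( (memtotal - memfree - buffers - cached) / 1024 ))    # 527, the "-/+ buffers/cache" used
```

Both match the `free -m` output above, confirming that most of the "used" memory is really page cache.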

[root@node1 ~]# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

sr0 11:0 1 1024M 0 rom

sda 8:0 0 18G 0 disk

├─sda1 8:1 0 500M 0 part /boot

└─sda2 8:2 0 17.5G 0 part

  ├─vg_node1-lv_root (dm-0) 253:0 0 15.7G 0 lvm /

  └─vg_node1-lv_swap (dm-1) 253:1 0 1.8G 0 lvm [SWAP]

[root@node1 ~]# fdisk -l

Disk /dev/sda: 19.3 GB, 19327352832 bytes

255 heads, 63 sectors/track, 2349 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000ecb12

Device Boot Start End Blocks Id System

/dev/sda1 * 1 64 512000 83 Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 64 2350 18361344 8e Linux LVM

Disk /dev/mapper/vg_node1-lv_root: 16.9 GB, 16869490688 bytes

255 heads, 63 sectors/track, 2050 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/mapper/vg_node1-lv_swap: 1929 MB, 1929379840 bytes

255 heads, 63 sectors/track, 234 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000
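The fdisk block counts (1 KiB blocks) agree with the lsblk sizes above; a quick cross-check using the numbers from this transcript:

```shell
# sda1: 512000 KiB blocks -> MiB (matches the 500M /boot partition in lsblk)
echo $(( 512000 / 1024 ))     # 500

# sda2: 18361344 KiB blocks -> GiB (matches the 17.5G LVM partition in lsblk)
awk 'BEGIN { printf "%.1f\n", 18361344 / 1024 / 1024 }'   # 17.5
```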

[hadoop@node1 root]$ hadoop dfsadmin -report

Warning: $HADOOP_HOME is deprecated.

Configured Capacity: 16604643328 (15.46 GB)

Present Capacity: 13766094848 (12.82 GB)

DFS Remaining: 13747478528 (12.8 GB)

DFS Used: 18616320 (17.75 MB)

DFS Used%: 0.14%

Under replicated blocks: 30

Blocks with corrupt replicas: 0

Missing blocks: 0


Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010

Decommission Status : Normal

Configured Capacity: 16604643328 (15.46 GB)

DFS Used: 18616320 (17.75 MB)

Non DFS Used: 2838548480 (2.64 GB)

DFS Remaining: 13747478528 (12.8 GB)

DFS Used%: 0.11%

DFS Remaining%: 82.79%

Last contact: Sun Jul 05 01:31:44 EDT 2015
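The datanode figures in the report are internally consistent: Non DFS Used and DFS Remaining% follow from the other three numbers (values hardcoded from the report above):

```shell
configured=16604643328; dfs_used=18616320; dfs_remaining=13747478528

# Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
echo $(( configured - dfs_used - dfs_remaining ))    # 2838548480, i.e. 2.64 GB

# DFS Remaining% = DFS Remaining / Configured Capacity
awk -v r=13747478528 -v c=16604643328 'BEGIN { printf "%.2f%%\n", 100 * r / c }'   # 82.79%
```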

hive> select year,avg(air) from ncdc group by year;

Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks not specified. Estimated from input data size: 1

In order to change the average load for a reducer (in bytes):

set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;

In order to limit the maximum number of reducers:

set hive.exec.reducers.max=&lt;number&gt;

In order to set a constant number of reducers:

set mapred.reduce.tasks=&lt;number&gt;

Starting Job = job_201507050117_0001, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201507050117_0001

Kill Command = /opt/software/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201507050117_0001

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1

2015-07-05 01:33:08,403 Stage-1 map = 0%, reduce = 0%

2015-07-05 01:33:18,463 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:19,470 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:20,479 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:21,486 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:22,492 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:23,500 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:24,505 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:25,510 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:26,517 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:27,530 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:28,541 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:29,549 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:30,556 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:31,562 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:32,568 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:33,574 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.36 sec

2015-07-05 01:33:34,579 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 3.36 sec

2015-07-05 01:33:35,591 Stage-1 map = 100%, reduce = 33%, Cumulative CPU 3.36 sec

2015-07-05 01:33:36,605 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.38 sec

2015-07-05 01:33:37,611 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.38 sec

2015-07-05 01:33:38,620 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.38 sec

2015-07-05 01:33:39,626 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.38 sec

2015-07-05 01:33:40,640 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.38 sec

2015-07-05 01:33:41,646 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.38 sec

MapReduce Total cumulative CPU time: 5 seconds 380 msec

Ended Job = job_201507050117_0001

MapReduce Jobs Launched:

Job 0: Map: 1 Reduce: 1 Cumulative CPU: 5.38 sec HDFS Read: 17013533 HDFS Write: 537 SUCCESS

Total MapReduce CPU Time Spent: 5 seconds 380 msec

OK

1901 45.16831683168317

1902 21.659558263518658

1903 -17.67699115044248

1904 33.32224247948952

1905 43.3322664228014

1906 47.0834855681403

1907 28.09189090243456

1908 28.80607441154138

1909 25.24907112526539

1910 29.00013071895425

1911 28.088644112247575

1912 16.801145236855803

1913 8.191569568197396

1914 26.378301131816624

1915 2.811635615498914

1916 21.42393787117405

1917 22.895140080045742

1918 27.712506047411708

1919 23.67520250849229

1920 43.508667830133795

1921 31.834957020057306

1922 -44.03716409376787

1923 26.79247747159462

Time taken: 68.15 seconds, Fetched: 23 row(s)

hive> select year,max(air) from ncdc group by year;

Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks not specified. Estimated from input data size: 1

In order to change the average load for a reducer (in bytes):

set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;

In order to limit the maximum number of reducers:

set hive.exec.reducers.max=&lt;number&gt;

In order to set a constant number of reducers:

set mapred.reduce.tasks=&lt;number&gt;

Starting Job = job_201507050117_0004, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201507050117_0004

Kill Command = /opt/software/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201507050117_0004

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1

2015-07-05 01:40:13,809 Stage-1 map = 0%, reduce = 0%

2015-07-05 01:40:24,856 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:25,863 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:26,868 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:27,873 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:28,881 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:29,885 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:30,893 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:31,897 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:32,906 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:33,912 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:34,917 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:35,924 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:36,928 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:37,933 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.48 sec

2015-07-05 01:40:38,938 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.69 sec

2015-07-05 01:40:39,943 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.69 sec

2015-07-05 01:40:40,950 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.69 sec

2015-07-05 01:40:41,956 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.69 sec

2015-07-05 01:40:42,968 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 5.69 sec

MapReduce Total cumulative CPU time: 5 seconds 690 msec

Ended Job = job_201507050117_0004

MapReduce Jobs Launched:

Job 0: Map: 1 Reduce: 1 Cumulative CPU: 5.69 sec HDFS Read: 17013533 HDFS Write: 184 SUCCESS

Total MapReduce CPU Time Spent: 5 seconds 690 msec

OK

1901 94

1902 94

1903 94

1904 94

1905 94

1906 94

1907 94

1908 94

1909 94

1910 94

1911 94

1912 94

1913 94

1914 94

1915 94

1916 94

1917 94

1918 94

1919 94

1920 94

1921 94

1922 94

1923 94

Time taken: 66.373 seconds, Fetched: 23 row(s)

hive> select count(*) from ncdc;

Total MapReduce jobs = 1

Launching Job 1 out of 1

Number of reduce tasks determined at compile time: 1

In order to change the average load for a reducer (in bytes):

set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;

In order to limit the maximum number of reducers:

set hive.exec.reducers.max=&lt;number&gt;

In order to set a constant number of reducers:

set mapred.reduce.tasks=&lt;number&gt;

Starting Job = job_201507050117_0006, Tracking URL = http://node1:50030/jobdetails.jsp?jobid=job_201507050117_0006

Kill Command = /opt/software/hadoop-1.2.1/libexec/../bin/hadoop job -kill job_201507050117_0006

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1

2015-07-05 02:09:03,771 Stage-1 map = 0%, reduce = 0%

2015-07-05 02:09:12,807 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:13,812 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:14,817 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:15,821 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:16,826 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:17,831 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:18,837 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:19,843 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:20,850 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:21,856 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:22,863 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:23,871 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:24,876 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.43 sec

2015-07-05 02:09:25,880 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.55 sec

2015-07-05 02:09:26,886 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.55 sec

2015-07-05 02:09:27,891 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.55 sec

2015-07-05 02:09:28,900 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.55 sec

2015-07-05 02:09:29,907 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.55 sec

2015-07-05 02:09:30,912 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 4.55 sec

MapReduce Total cumulative CPU time: 4 seconds 550 msec

Ended Job = job_201507050117_0006

MapReduce Jobs Launched:

Job 0: Map: 1 Reduce: 1 Cumulative CPU: 4.55 sec HDFS Read: 17013533 HDFS Write: 7 SUCCESS

Total MapReduce CPU Time Spent: 4 seconds 550 msec

OK

335346

Time taken: 64.48 seconds, Fetched: 1 row(s)
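Note that all three jobs report the same HDFS Read (17013533 bytes): each full-table query scans the whole dataset. With the row count just obtained, the average record length works out to about 50 bytes:

```shell
# HDFS Read bytes / row count, both taken from the job output above
echo $(( 17013533 / 335346 ))   # 50
```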

Exercise: how much higher or lower was the average temperature in 1920 than in 1923? A sketch (the original draft used max(air) and left the two subqueries unjoined; avg(air) matches the question, and the per-year results must be cross-joined, assuming a Hive version with CROSS JOIN support, 0.10+):

select t1.a - t2.b from

(select avg(air) as a from ncdc where year=1920) t1 cross join

(select avg(air) as b from ncdc where year=1923) t2;

Copyright notice: this is the author's original post and may not be reproduced without permission.
