<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>fs.defaultFS</name>
<value>hdfs://ochadoopcluster</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/home/ochadoop/tmp/hadoop/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>

<property>
<name>ipc.server.listen.queue.size</name>
<value>32768</value>
<description>Indicates the length of the listen queue for servers accepting
client connections.
</description>
</property>

<property>
<name>io.native.lib.available</name>
<value>true</value>
<description>Should native hadoop libraries, if present, be used.</description>
</property>

<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
<description>A comma-separated list of the compression codec classes that can
be used for compression/decompression. In addition to any classes specified
with this property (which take precedence), codec classes on the classpath
are discovered using a Java ServiceLoader.</description>
</property>

<property>
<description>
The user name to filter as, on static web filters
while rendering content. An example use is the HDFS
web UI (user to be used for browsing files).
</description>
<name>hadoop.http.staticuser.user</name>
<value>ochadoop</value>
</property>

<property>
<name>fs.trash.interval</name>
<value>1440</value>
<description>Number of minutes after which the checkpoint
gets deleted. If zero, the trash feature is disabled.
This option may be configured both on the server and the
client. If trash is disabled server side then the client
side configuration is checked. If trash is enabled on the
server side then the value configured on the server is
used and the client configuration value is ignored.
</description>
</property>

<property>
<name>fs.trash.checkpoint.interval</name>
<value>30</value>
<description>Number of minutes between trash checkpoints.
Should be smaller or equal to fs.trash.interval. If zero,
the value is set to the value of fs.trash.interval.
Every time the checkpointer runs it creates a new checkpoint
out of current and removes checkpoints created more than
fs.trash.interval minutes ago.
</description>
</property>

<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzopCodec</value>
</property>

<property>
<name>net.topology.script.file.name</name>
<value>/home/ochadoop/app/hadoop/etc/hadoop/rack.py</value>
<description> The script name that should be invoked to resolve DNS names to
NetworkTopology names. Example: the script would take host.foo.bar as an
argument, and return /rack1 as the output.
</description>
</property>

<property>
<name>net.topology.script.number.args</name>
<value>1</value>
<description> The max number of args that the script configured with
net.topology.script.file.name should be run with. Each arg is an
IP address.
</description>
</property>

<property>
<name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
<value>60000</value>
<description>
Timeout that the CLI (manual) FC waits for monitorHealth, getServiceState
</description>
</property>

<property>
<name>ipc.client.connect.timeout</name>
<value>60000</value>
<description>Indicates the number of milliseconds a client will wait for the
socket to establish a server connection.
</description>
</property>

</configuration>
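
The net.topology.script.file.name property above points at /home/ochadoop/app/hadoop/etc/hadoop/rack.py, which is not included in this post. As a rough idea of what such a rack-awareness script can look like, here is a minimal sketch; the host-to-rack table and the /default-rack fallback are invented for illustration. Hadoop invokes the script with up to net.topology.script.number.args (here 1) hostnames or IP addresses per call and reads one rack path per argument from standard output.

#!/usr/bin/env python
# Hypothetical rack-awareness script; the real rack.py is not shown in this post.
# Hadoop passes up to net.topology.script.number.args arguments (1 here), each a
# hostname or IP address, and expects one rack path per argument on stdout.
import sys

# Example mapping only; replace with the cluster's real host-to-rack layout.
RACKS = {
    "192.168.1.11": "/rack1",
    "192.168.1.12": "/rack1",
    "192.168.1.21": "/rack2",
}
DEFAULT_RACK = "/default-rack"

if __name__ == "__main__":
    print(" ".join(RACKS.get(arg, DEFAULT_RACK) for arg in sys.argv[1:]))

The script has to be executable by the user running the NameNode, and any host missing from the table lands on the default rack, so the mapping needs to be kept in step with the nodes actually in the cluster.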
