<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>fs.defaultFS</name>
<value>hdfs://ochadoopcluster</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/home/ochadoop/tmp/hadoop/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>

<property>
<name>ipc.server.listen.queue.size</name>
<value>32768</value>
<description>Indicates the length of the listen queue for servers accepting
client connections.
</description>
</property>

<property>
<name>io.native.lib.available</name>
<value>true</value>
<description>Should native hadoop libraries, if present, be used?</description>
</property>

<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
<description>A comma-separated list of the compression codec classes that can
be used for compression/decompression. In addition to any classes specified
with this property (which take precedence), codec classes on the classpath
are discovered using a Java ServiceLoader.</description>
</property>

<property>
<description>
The user name to filter as, on static web filters
while rendering content. An example use is the HDFS
web UI (user to be used for browsing files).
</description>
<name>hadoop.http.staticuser.user</name>
<value>ochadoop</value>
</property>

<property>
<name>fs.trash.interval</name>
<value>1440</value>
<description>Number of minutes after which the checkpoint
gets deleted. If zero, the trash feature is disabled.
This option may be configured both on the server and the
client. If trash is disabled server side then the client
side configuration is checked. If trash is enabled on the
server side then the value configured on the server is
used and the client configuration value is ignored.
</description>
</property>

<property>
<name>fs.trash.checkpoint.interval</name>
<value>30</value>
<description>Number of minutes between trash checkpoints.
Should be smaller or equal to fs.trash.interval. If zero,
the value is set to the value of fs.trash.interval.
Every time the checkpointer runs it creates a new checkpoint
out of current and removes checkpoints created more than
fs.trash.interval minutes ago.
</description>
</property>

<property>
<name>io.compression.codec.lzo.class</name>
<value>com.hadoop.compression.lzo.LzopCodec</value>
</property>

<property>
<name>net.topology.script.file.name</name>
<value>/home/ochadoop/app/hadoop/etc/hadoop/rack.py</value>
<description> The script name that should be invoked to resolve DNS names to
NetworkTopology names. Example: the script would take host.foo.bar as an
argument, and return /rack1 as the output.
</description>
</property>

<property>
<name>net.topology.script.number.args</name>
<value>1</value>
<description> The max number of args that the script configured with
net.topology.script.file.name should be run with. Each arg is an
IP address.
</description>
</property>

<property>
<name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
<value>60000</value>
<description>
Timeout that the CLI (manual) failover controller waits for the monitorHealth and getServiceState RPCs.
</description>
</property>

<property>
<name>ipc.client.connect.timeout</name>
<value>60000</value>
<description>Indicates the number of milliseconds a client will wait for the
socket to establish a server connection.
</description>
</property>

</configuration>
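
The net.topology.script.file.name property above points the cluster at /home/ochadoop/app/hadoop/etc/hadoop/rack.py for rack awareness. Hadoop invokes that script with up to net.topology.script.number.args hostnames or IP addresses per call (1 in this configuration) and reads one rack path per argument from standard output. A minimal sketch of such a script in Python, assuming a hand-maintained host-to-rack table (the addresses and rack labels below are hypothetical):

#!/usr/bin/env python
# Minimal rack-awareness script sketch for net.topology.script.file.name.
# Hadoop passes up to net.topology.script.number.args hostnames/IPs as
# command-line arguments and expects one rack path per argument on stdout.
import sys

# Hypothetical host-to-rack mapping; replace with the real cluster layout.
RACKS = {
    "192.168.1.11": "/rack1",
    "192.168.1.12": "/rack1",
    "192.168.1.21": "/rack2",
}

DEFAULT_RACK = "/default-rack"

def main():
    # Emit one rack path per argument, in the same order the arguments arrive.
    for host in sys.argv[1:]:
        print(RACKS.get(host, DEFAULT_RACK))

if __name__ == "__main__":
    main()

Printing a rack for every argument, with /default-rack as the fallback for unknown hosts, keeps the topology mapping consistent even when a new node has not yet been added to the table.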
