Searching for a string across multiple files on Linux (Hadoop 3 installation and debugging)
http://www.cnblogs.com/iLoveMyD/p/4281534.html
2015-02-09 14:36:38
# find <directory> -type f -name "*.c" | xargs grep "<string>"
<directory> is the directory to search; omit it to search the current directory.
-type f restricts the match to regular files.
-name "*.c" limits the search to C source files so binaries are skipped; drop it to search every file.
<string> is the string you are looking for.
A real case from debugging a Hadoop 3 installation: shutting the cluster down aborted with the following error:
Stopping secondary namenodes [bigdata-server-02]
Last login: Thu Dec 21 17:18:39 CST 2017 on pts/0
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping nodemanagers
Last login: Thu Dec 21 17:18:42 CST 2017 on pts/0
Stopping resourcemanager
Last login: Thu Dec 21 17:18:46 CST 2017 on pts/0
[root@bigdata-server-02 hadoop]# vim etc/hadoop/hadoop-env.sh
[root@bigdata-server-02 hadoop]# find . -type f | xargs grep HADOOP_WORKER
./sbin/workers.sh:# HADOOP_WORKERS File naming remote hosts.
./sbin/workers.sh:# HADOOP_WORKER_SLEEP Seconds to sleep between spawning remote commands.
grep: ./share/hadoop/yarn/webapps/ui2/assets/images/datatables/Sorting: No such file or directory
grep: icons.psd: No such file or directory
(These two errors are the whitespace pitfall noted above: a single file named "Sorting icons.psd" was split in two by xargs.)
./share/doc/hadoop/hadoop-project-dist/hadoop-common/UnixShellAPI.html:<p>Connect to ${HADOOP_WORKERS} or ${HADOOP_WORKER_NAMES} and execute command.</p>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/UnixShellAPI.html:<p>Connect to ${HADOOP_WORKER_NAMES} and execute command under the environment which does not support pdsh.</p>
./bin/hadoop:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/yarn:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/mapred:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/hdfs:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./etc/hadoop/hadoop-env.sh:#export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
./etc/hadoop/hadoop-user-functions.sh.example:# tmpslvnames=$(echo "${HADOOP_WORKER_NAMES}" | tr ' ' '\n' )
./libexec/hadoop-config.cmd: set HADOOP_WORKERS=%HADOOP_CONF_DIR%\%2
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVES HADOOP_WORKERS
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVE_NAMES HADOOP_WORKER_NAMES
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVE_SLEEP HADOOP_WORKER_SLEEP
./libexec/yarn-config.sh: hadoop_deprecate_envvar YARN_SLAVES HADOOP_WORKERS
./libexec/hadoop-functions.sh: HADOOP_WORKERS="${workersfile}"
./libexec/hadoop-functions.sh: HADOOP_WORKERS="${HADOOP_CONF_DIR}/${workersfile}"
./libexec/hadoop-functions.sh:## @description Connect to ${HADOOP_WORKERS} or ${HADOOP_WORKER_NAMES}
./libexec/hadoop-functions.sh: if [[ -n "${HADOOP_WORKERS}" && -n "${HADOOP_WORKER_NAMES}" ]] ; then
./libexec/hadoop-functions.sh: hadoop_error "ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting."
./libexec/hadoop-functions.sh: elif [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
./libexec/hadoop-functions.sh: if [[ -n "${HADOOP_WORKERS}" ]]; then
./libexec/hadoop-functions.sh: worker_file=${HADOOP_WORKERS}
./libexec/hadoop-functions.sh: if [[ -z "${HADOOP_WORKER_NAMES}" ]] ; then
./libexec/hadoop-functions.sh: tmpslvnames=$(echo ${HADOOP_WORKER_NAMES} | tr -s ' ' ,)
./libexec/hadoop-functions.sh: if [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
./libexec/hadoop-functions.sh: HADOOP_WORKER_NAMES=$(sed 's/#.*$//;/^$/d' "${worker_file}")
./libexec/hadoop-functions.sh:## @description Connect to ${HADOOP_WORKER_NAMES} and execute command
./libexec/hadoop-functions.sh: local workers=(${HADOOP_WORKER_NAMES})
./libexec/hadoop-functions.sh: HADOOP_WORKER_NAMES="$1"
./libexec/hadoop-functions.sh: HADOOP_WORKER_MODE=true
[root@bigdata-server-02 hadoop]#
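The hits narrow the problem down to the guard in ./libexec/hadoop-functions.sh: the scripts abort whenever both HADOOP_WORKERS and HADOOP_WORKER_NAMES are non-empty, so the fix is to leave only one of them defined. A minimal check-and-fix sketch (assuming the duplicate definition lives in etc/hadoop/hadoop-env.sh, which is where the grep above points):
# list every place the active config sets either variable
[root@bigdata-server-02 hadoop]# grep -nE 'HADOOP_WORKERS|HADOOP_WORKER_NAMES' etc/hadoop/hadoop-env.sh
# then comment out one of the two definitions, e.g. keep only:
# export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"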
A second case: the NameNode failed to start, and its log shows why:
[root@hadoop3 logs]# cat hadoop-root-namenode-hadoop3.log
2017-12-29 15:06:50,183 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-12-29 15:06:50,190 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:9870
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1133)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1155)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1214)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1069)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:173)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:888)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1120)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1151)
... 9 more
2017-12-29 15:06:50,192 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
But this port is not configured anywhere in my files — so where does it come from?
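Before grepping the install tree, it is worth checking which process actually holds the port (a quick sketch; whether ss or lsof is installed varies by distro):
[root@hadoop3 hadoop]# ss -lntp | grep 9870
[root@hadoop3 hadoop]# lsof -i :9870
Whatever shows up listening on 9870 is the immediate cause of the BindException.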
Searching for the string with find:
[root@hadoop3 hadoop]# find . -type f | xargs grep 9870
grep: ./share/hadoop/yarn/webapps/ui2/assets/images/datatables/Sorting: No such file or directory
grep: icons.psd: No such file or directory
./share/doc/hadoop/hadoop-yarn/hadoop-yarn-registry/apidocs/org/apache/hadoop/registry/client/types/AddressTypes.html: ["namenode.example.org", "9870"]
./share/doc/hadoop/api/org/apache/hadoop/registry/client/types/AddressTypes.html: ["namenode.example.org", "9870"]
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml: <value>0.0.0.0:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html:<p>NameNode and DataNode each run an internal web server in order to display basic information about the current status of the cluster. With the default configuration, the NameNode front page is at <tt>http://namenode-name:9870/</tt>. It lists the DataNodes in the cluster and basic statistics of the cluster. The web interface can also be used to browse the file system (using “Browse the file system” link on the NameNode front page).</p></div>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html: <value>machine1.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html: <value>machine2.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html: <value>machine3.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html: <value>machine1.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html: <value>machine2.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html: <value>machine3.example.com:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/3.0.0-alpha1/CHANGES.3.0.0-alpha1.html:<td align="left"> <a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9870">HDFS-9870</a> </td>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/3.0.0-alpha1/RELEASENOTES.3.0.0-alpha1.html:<p>The patch updates the HDFS default HTTP/RPC ports to non-ephemeral ports. The changes are listed below: Namenode ports: 50470 –> 9871, 50070 –> 9870, 8020 –> 9820 Secondary NN ports: 50091 –> 9869, 50090 –> 9868 Datanode ports: 50020 –> 9867, 50010 –> 9866, 50475 –> 9865, 50075 –> 9864</p><hr />
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/2.8.0/CHANGES.2.8.0.html:<td align="left"> <a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9870">HDFS-9870</a> </td>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/CommandsManual.html:<pre class="source">$ bin/hadoop daemonlog -setlevel 127.0.0.1:9870 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG
./share/doc/hadoop/hadoop-project-dist/hadoop-common/SingleCluster.html:<li>NameNode - <tt>http://localhost:9870/</tt></li>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/ClusterSetup.html:<td align="left"> Default HTTP port is 9870. </td></tr>
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:06:50,085 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:06:50,193 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:23:48,931 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:23:49,035 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Port in use: 0.0.0.0:9870
Stopping the NameNode that was still holding the port:
Stopping namenodes on [hadoop3]
Port 9870 is simply the default — the hit in hdfs-default.xml gives it away:
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml: <value>0.0.0.0:9870</value>
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:9870</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
  </description>
</property>
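If the default clashes with another service, the property can be overridden in etc/hadoop/hdfs-site.xml (a sketch — 19870 is an arbitrary placeholder, pick any free port):
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:19870</value>
</property>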
To find the string inside a file opened in vim:
/9870    search forward from the cursor
?9870    search backward; n / N jump to the next / previous match
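The same lookup works from the shell without opening an editor (the path here is just an example):
[root@hadoop3 hadoop]# grep -n 9870 ./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
[root@hadoop3 hadoop]# less '+/9870' ./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml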