Hadoop interview questions you may be asked: how many can you answer?

1. What are the principles behind how Hadoop runs?

2. How does MapReduce work?

3. What is the storage mechanism of HDFS?

4. Give a simple example that shows how a MapReduce job runs.

5. The interviewer gives you a problem and asks you to solve it with MapReduce.

For example: there are 10 folders, each containing 1,000,000 URLs. Find the top 1,000,000 URLs.

6. What is the role of the Combiner in Hadoop?

Src: http://p-x1984.javaeye.com/blog/859843

Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?
The following are the most common InputFormats defined in Hadoop:
- TextInputFormat
- KeyValueInputFormat
- SequenceFileInputFormat
TextInputFormat is the default.

Q2. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
TextInputFormat:
It reads lines of text files and provides the byte offset of the line as the key to the Mapper and the actual line as the value.
KeyValueInputFormat:
Reads text files and parses each line into a key/value pair. Everything up to the first tab character is sent as the key to the Mapper, and the remainder of the line is sent as the value.
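
As an illustration, the input format is chosen in the job driver. A minimal sketch using the old mapred API (class and path names are illustrative; note that the concrete Hadoop class for tab-separated key/value input is KeyValueTextInputFormat):

    JobConf conf = new JobConf(MyJob.class);              // MyJob is a placeholder driver class
    conf.setInputFormat(KeyValueTextInputFormat.class);   // key = text before the first tab, value = rest of the line
    // TextInputFormat is the default, so omitting setInputFormat() yields (byte offset, line) pairs
    FileInputFormat.addInputPath(conf, new Path("/input"));      // illustrative HDFS paths
    FileOutputFormat.setOutputPath(conf, new Path("/output"));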

 
Q3. What is an InputSplit in Hadoop?
When a Hadoop job is run, it splits the input files into chunks and assigns each split to a mapper to process. Each such chunk is called an InputSplit.
 
Q4. How is the splitting of files invoked in the Hadoop framework?
It is invoked by the Hadoop framework running the getSplits() method of the InputFormat class (such as FileInputFormat) defined by the user.
 
Q5. Consider case scenario: In M/R system,
    - HDFS block size is 64 MB
    - Input format is FileInputFormat
    - We have 3 files of sizes 64 KB, 65 MB and 127 MB
then how many input splits will be made by the Hadoop framework?
Hadoop will make 5 splits, as follows:
- 1 split for the 64 KB file
- 2 splits for the 65 MB file
- 2 splits for the 127 MB file
 
Q6. What is the purpose of RecordReader in Hadoop?
The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.
 
Q7. After the Map phase finishes, the Hadoop framework does "Partitioning, Shuffle and Sort". Explain what happens in this phase.
- Partitioning
Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.

- Shuffle
After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

- Sort
Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer.
 
Q9. If no custom partitioner is defined in Hadoop, then how is data partitioned before it is sent to the reducer?
The default partitioner computes a hash value for the key and assigns the partition based on this result.
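
The stock HashPartitioner essentially implements the following; because the partition depends only on the key's hash, identical keys always go to the same reducer no matter which mapper emitted them:

    public int getPartition(K key, V value, int numReduceTasks) {
        // mask off the sign bit so the result is non-negative, then take the modulo
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }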
 
Q10. What is a Combiner?
The Combiner is a "mini-reduce" process which operates only on data generated by a mapper. The Combiner will receive as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.

Q11. Give an example scenario where a combiner can be used and where it cannot be used.
There can be several examples; the following are the most common ones.
- Scenario where you can use a combiner:
  Getting the list of distinct words in a file
- Scenario where you cannot use a combiner:
  Calculating the mean of a list of numbers
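
Another classic case where a combiner works is word count: summing counts is associative and commutative, so the reducer class can be reused as the combiner. A sketch modeled on the standard example, using the old mapred API:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {

      public static class Map extends MapReduceBase
          implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            output.collect(word, one);              // emit (word, 1) for every token
          }
        }
      }

      public static class Reduce extends MapReduceBase
          implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          int sum = 0;
          while (values.hasNext()) {
            sum += values.next().get();             // add up partial counts
          }
          output.collect(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);        // the reducer doubles as the combiner
        conf.setReducerClass(Reduce.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
      }
    }

With setCombinerClass(Reduce.class), each map task pre-aggregates its (word, 1) pairs locally, so far less intermediate data has to cross the network during the shuffle.
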
Q12. What is the JobTracker?
The JobTracker is the service within Hadoop that runs MapReduce jobs on the cluster.
 
Q13. What are some typical functions of the JobTracker?
The following are some typical tasks of the JobTracker:
- Accepts jobs from clients
- Talks to the NameNode to determine the location of the data
- Locates TaskTracker nodes with available slots at or near the data
- Submits the work to the chosen TaskTracker nodes and monitors the progress of each task by receiving heartbeat signals from the TaskTrackers
 
Q14. What is a TaskTracker?
A TaskTracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker.
 
Q15. What is the relationship between jobs and tasks in Hadoop?
One job is broken down into one or many tasks in Hadoop.
 
Q16. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?
It will restart the task on some other TaskTracker, and only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.
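
The retry limit is configurable per job; for example, with the old mapred API (a sketch; 4 is the default value):

    conf.setMaxMapAttempts(4);     // maximum attempts per map task before the job fails
    conf.setMaxReduceAttempts(4);  // maximum attempts per reduce task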
 
Q17. Hadoop achieves parallelism by dividing the tasks across many nodes; it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?

Speculative Execution
 
Q18. How does speculative execution work in Hadoop?
The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.
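
Speculative execution can also be toggled per job; with the old mapred API this looks roughly like the following (a sketch; true is the default):

    conf.setMapSpeculativeExecution(true);     // allow duplicate (speculative) map attempts
    conf.setReduceSpeculativeExecution(true);  // allow speculative reduce attempts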
 
Q19. Using the command line in Linux, how will you
- see all jobs running in the Hadoop cluster
- kill a job
hadoop job -list
hadoop job -kill <job-id>
 
Q20. What is Hadoop Streaming?
Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.
 

Q21. What is the characteristic of the Streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, Awk, etc.?
Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.
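
A typical invocation looks something like the following (the streaming jar location varies by Hadoop version, and the script names are illustrative):

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
        -input /data/input \
        -output /data/output \
        -mapper mapper.py \
        -reducer reducer.py \
        -file mapper.py -file reducer.py
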
Q22. What is the Distributed Cache in Hadoop?
The Distributed Cache is a facility provided by the MapReduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
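
A sketch of typical usage with the old org.apache.hadoop.filecache.DistributedCache API (the HDFS path and file name are illustrative):

    // In the driver: register an HDFS file so it is copied to every task node
    DistributedCache.addCacheFile(new URI("/user/hadoop/lookup.txt"), conf);

    // In the Mapper's or Reducer's configure() method: locate the local copy
    Path[] cached = DistributedCache.getLocalCacheFiles(conf);
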
Q23. What is the benefit of the Distributed Cache? Why can't we just keep the file in HDFS and have the application read it?
This is because the Distributed Cache is much faster. It copies the file to all TaskTrackers at the start of the job. Now if a TaskTracker runs 10 or 100 mappers or reducers, they will all use the same local copy from the Distributed Cache. On the other hand, if you write code in the MR job to read the file from HDFS, then every mapper will try to access it from HDFS, so if a TaskTracker runs 100 map tasks it will try to read this file 100 times from HDFS. Also, HDFS is not very efficient when used like this.

Q24. What mechanism does the Hadoop framework provide to synchronize changes made in the Distributed Cache during the runtime of the application?
This is a trick question. There is no such mechanism. The Distributed Cache is by design read-only during the time of job execution.

Q25. Have you ever used Counters in Hadoop? Give us an example scenario.
Anybody who claims to have worked on a Hadoop project is expected to have used counters.
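
A common scenario is counting malformed records that a mapper skips; a sketch using the old mapred Reporter API (the group and counter names, and the isValid() helper, are illustrative):

    // Inside map(): record a skipped line without failing the task
    if (!isValid(line)) {                                       // isValid() is a hypothetical check
        reporter.incrCounter("DataQuality", "MALFORMED_RECORDS", 1);
        return;
    }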

Q26. Is it possible to provide multiple inputs to Hadoop? If yes, then how can you give multiple directories as input to a Hadoop job?
Yes, the FileInputFormat class provides methods to add multiple directories as input to a Hadoop job.
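
For example, with the old mapred API (the directory paths are illustrative):

    FileInputFormat.addInputPath(conf, new Path("/data/2013"));
    FileInputFormat.addInputPath(conf, new Path("/data/2014"));
    // or a comma-separated list in a single call:
    FileInputFormat.setInputPaths(conf, "/data/2013,/data/2014");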

Q27. Is it possible to have Hadoop job output in multiple directories? If yes, then how?
Yes, by using the MultipleOutputs class.
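
A sketch using the old org.apache.hadoop.mapred.lib.MultipleOutputs API (the named output "errors" is illustrative):

    // In the driver: declare an additional named output alongside the normal one
    MultipleOutputs.addNamedOutput(conf, "errors",
            TextOutputFormat.class, Text.class, IntWritable.class);

    // In the reducer: write selected records to the named output
    mos.getCollector("errors", reporter).collect(key, value);   // mos is a MultipleOutputs instance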

Q28. What will a hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it
- warn you and continue
- throw an exception and exit

The hadoop job will throw an exception and exit.

Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question. You cannot set it; the number of map tasks is determined by the number of input splits.

Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically by using the setNumReduceTasks method of the JobConf class, or set it up as a configuration setting.
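
For example (the value 10 and the jar/class names are illustrative; the -D form requires the driver to use ToolRunner/GenericOptionsParser):

    conf.setNumReduceTasks(10);   // in the driver, via JobConf

    // or as a configuration setting when submitting the job:
    hadoop jar myjob.jar MyJob -D mapred.reduce.tasks=10 /input /output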
