December 22, 2018

Srikaanth

Symantec Most Frequently Asked Latest Hadoop Interview Questions Answers

What Is Big Data?

Big Data is a collection of data so huge and complex that it becomes very difficult to capture, store, process, retrieve and analyze it with on-hand database management tools or traditional data processing techniques.

Can You Give Some Examples Of Big Data?

There are many real-life examples of Big Data: Facebook generates 500+ terabytes of data per day, the NYSE (New York Stock Exchange) generates about 1 terabyte of new trade data per day, and a jet airline collects 10 terabytes of sensor data for every 30 minutes of flying time. All of these are day-to-day examples of Big Data.

Explain The Input And Output Data Formats Of The Hadoop Framework?

The MapReduce framework operates exclusively on <key, value> pairs; that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types.

See the flow mentioned below

(input) <k1, v1> -> map -> <k2, v2> -> combine/sorting -> <k2, v2> -> reduce -> <k3, v3> (output)
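As a minimal sketch (assuming the new org.apache.hadoop.mapreduce API; the class names and the concrete key/value types are illustrative choices, not fixed by the framework), the generic parameters of Mapper and Reducer mirror this flow:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// <k1, v1> = <LongWritable, Text>: byte offset and line of the input file
// <k2, v2> = <Text, IntWritable>: intermediate pairs emitted by map()
class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> { }

// <k2, v2> = <Text, IntWritable>: grouped intermediate pairs
// <k3, v3> = <Text, IntWritable>: final pairs written by reduce()
class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> { }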

What Are The Restrictions On The Key And Value Classes?

The key and value classes have to be serializable by the framework. To make them serializable, Hadoop provides the Writable interface. As you know from Java itself, the key of a Map should be comparable, hence the key has to implement one more interface: WritableComparable.
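As an illustration only, here is a minimal sketch of a custom key type; StockKey is a hypothetical class, not part of Hadoop, implementing WritableComparable so the framework can serialize and sort it:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class StockKey implements WritableComparable<StockKey> {
    private String symbol;
    private long timestamp;

    // write()/readFields() let the framework serialize the key across the network
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(symbol);
        out.writeLong(timestamp);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        symbol = in.readUTF();
        timestamp = in.readLong();
    }

    // compareTo() lets the framework sort keys during the shuffle/sort phase
    @Override
    public int compareTo(StockKey other) {
        int cmp = symbol.compareTo(other.symbol);
        return cmp != 0 ? cmp : Long.compare(timestamp, other.timestamp);
    }
}

In practice you would also override hashCode() and equals() so that the default HashPartitioner routes equal keys to the same reducer.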

Explain The Wordcount Implementation Via Hadoop Framework ?

We will count the words in all the input files; the flow is as below:

Input: Assume there are two files, each containing the sentence "Hello World Hello World" (one in file 1, one in file 2).
Mapper: There is one mapper per file. For the given sample input, the first map outputs:
< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>

The second map outputs:

< Hello, 1>
< World, 1>
< Hello, 1>
< World, 1>

Combiner/Sorting (this is done for each individual map): the output now looks like this. The output of the first map:
< Hello, 2>
< World, 2>

The output of the second map

< Hello, 2>
< World, 2>

Reducer: It sums up the above outputs and generates the output as below:
< Hello, 4>
< World, 4>

Output

Final output would look like

Hello 4 times
World 4 times
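Below is a sketch of the word-count Mapper and Reducer described above, written against the new org.apache.hadoop.mapreduce API; class and field names such as TokenizerMapper and IntSumReducer are our own choices:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit <word, 1> for every token in the line
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum all counts for this word, e.g. <Hello, [2, 2]> -> <Hello, 4>
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}

Registering IntSumReducer as the combiner as well (via job.setCombinerClass) is what produces the per-map <Hello, 2>, <World, 2> output shown above.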

On What Concept Does The Hadoop Framework Work?

It works on MapReduce, which was devised by Google.

What Is Mapreduce?

MapReduce is an algorithm or concept for processing huge amounts of data in a faster way. As per its name, it can be divided into Map and Reduce phases.

The main MapReduce job usually splits the input data-set into independent chunks (big data sets into multiple small data sets).
MapTask: processes these chunks in a completely parallel manner (one node can process one or more chunks). The framework sorts the outputs of the maps.
ReduceTask: the sorted map output becomes the input for the reduce tasks, which produce the final result.
Your business logic is written in the MapTask and ReduceTask. Typically both the input and the output of the job are stored in a file system (not a database). The framework takes care of scheduling tasks, monitoring them and re-executing failed tasks.

What Are Compute And Storage Nodes?

Compute Node: This is the computer or machine where your actual business logic will be executed.

Storage Node: This is the computer or machine where your file system resides to store the data being processed.

In most cases, the compute node and the storage node are the same machine.

How Does The Master-Slave Architecture Work In Hadoop?

The MapReduce framework consists of a single master JobTracker and multiple slaves, each cluster-node will have one TaskTracker. The master is responsible for scheduling the jobs' component tasks on the slaves, monitoring them and re-executing the failed tasks. The slaves execute the tasks as directed by the master.

Which Interface Needs To Be Implemented To Create A Mapper And Reducer For Hadoop?

org.apache.hadoop.mapreduce.Mapper
org.apache.hadoop.mapreduce.Reducer

(In the new org.apache.hadoop.mapreduce API these are classes that you extend; in the old org.apache.hadoop.mapred API, Mapper and Reducer are interfaces that you implement.)

What Mapper Does?

Maps are the individual tasks that transform input records into intermediate records. The transformed intermediate records do not need to be of the same type as the input records. A given input pair may map to zero or many output pairs.
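A hypothetical mapper sketch makes this concrete: lines that do not contain "ERROR" produce zero output pairs, matching lines produce one pair per token, and the output types differ from the input types:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ErrorWordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String text = line.toString();
        if (!text.contains("ERROR")) {
            return;                    // zero output pairs for this input record
        }
        for (String token : text.split("\\s+")) {
            word.set(token);
            context.write(word, ONE); // many output pairs for this input record
        }
    }
}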

What Is The InputSplit In MapReduce?

An InputSplit is a logical representation of a unit (A chunk) of input work for a map task; e.g., a file name and a byte range within that file to process or a row set in a text file.
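As a minimal sketch, assuming a file-based input format, a map task can inspect the InputSplit it was handed; for such formats the split is a FileSplit carrying the file path, start offset and length:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class SplitAwareMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // The cast assumes a file-based InputFormat such as TextInputFormat
        FileSplit split = (FileSplit) context.getInputSplit();
        System.out.println("Processing " + split.getPath()
                + " from byte " + split.getStart()
                + " for " + split.getLength() + " bytes");
    }
}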

What Is The InputFormat?

The InputFormat is responsible for enumerating (itemising) the InputSplits, and for producing a RecordReader which will turn those logical work units into actual physical input records.

Where Do You Specify The Mapper Implementation?

Generally, the mapper implementation is specified in the Job itself.
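A minimal driver sketch illustrates this; WordCountDriver is a hypothetical class, and WordCount.TokenizerMapper / WordCount.IntSumReducer refer to the illustrative word-count classes sketched earlier:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCount.TokenizerMapper.class);  // mapper specified on the Job
        job.setCombinerClass(WordCount.IntSumReducer.class);  // optional combiner
        job.setReducerClass(WordCount.IntSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Input and output live on the (distributed) file system, not in a database
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}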

Explain The Core Methods Of The Reducer?

The API of Reducer is very similar to that of Mapper: there is a run() method that receives a Context containing the job's configuration, as well as interfacing methods that return data from the reducer back to the framework. The run() method calls setup() once, reduce() once for each key associated with the reduce task, and cleanup() once at the end. Each of these methods can access the job's configuration data by using Context.getConfiguration().

As in Mapper, any or all of these methods can be overridden with custom implementations. If none of these methods are overridden, the default reducer operation is the identity function; values are passed through without further processing.

The heart of Reducer is its reduce() method. This is called once per key; the second argument is an Iterable which returns all the values associated with that key.
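A minimal sketch of overriding these life-cycle methods is shown below; the averaging logic and the average.min.count configuration key are purely illustrative:

import java.io.IOException;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class AverageReducer extends Reducer<Text, IntWritable, Text, DoubleWritable> {

    private int minimumCount;

    @Override
    protected void setup(Context context) {
        // Called once per task; read job configuration here
        minimumCount = context.getConfiguration().getInt("average.min.count", 1);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Called once per key, with all values grouped under that key
        long sum = 0;
        long count = 0;
        for (IntWritable value : values) {
            sum += value.get();
            count++;
        }
        if (count >= minimumCount) {
            context.write(key, new DoubleWritable((double) sum / count));
        }
    }

    @Override
    protected void cleanup(Context context) {
        // Called once at the end of the task, e.g. to release resources
    }
}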

What Are The Primary Phases Of The Reducer?

Shuffle, Sort and Reduce.

Explain The Shuffle?

Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.

Explain The Reducer's Sort Phase?

The framework groups Reducer inputs by keys (since different mappers may have output the same key) in this stage. The shuffle and sort phases occur simultaneously; while map-outputs are being fetched they are merged (It is similar to merge-sort).

Explain The Reducer's Reduce Phase?

In this phase the reduce(MapOutKeyType, Iterable, Context) method is called for each pair in the grouped inputs. The output of the reduce task is typically written to the FileSystem via Context.write(ReduceOutKeyType, ReduceOutValType). Applications can use the Context to report progress, set application-level status messages and update Counters, or just indicate that they are alive. The output of the Reducer is not sorted.
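A minimal sketch of using the Context inside reduce() for these purposes; the counter group and name ("MyApp", "KeysProcessed") are hypothetical:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class ReportingReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        context.write(key, new IntWritable(sum));                  // output to the FileSystem
        context.getCounter("MyApp", "KeysProcessed").increment(1); // application-level counter
        context.setStatus("Last key processed: " + key);           // status message
        context.progress();                                        // report that the task is alive
    }
}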

How Many Reducers Should Be Configured?

The right number of reduces seems to be 0.95 or 1.75 multiplied by
(<no. of nodes> * mapreduce.tasktracker.reduce.tasks.maximum).

With 0.95, all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes will finish their first round of reduces and launch a second wave of reduces, doing a much better job of load balancing. Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures.
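A minimal sketch of applying this heuristic when configuring a job; the node count of 10 is an assumed value that would normally come from your cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ReducerCountExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "reducer count demo");

        int nodes = 10;  // assumed cluster size
        int slotsPerNode = conf.getInt("mapreduce.tasktracker.reduce.tasks.maximum", 2);

        // 0.95: all reduces start as soon as the maps finish
        // 1.75: faster nodes run a second wave, improving load balancing
        job.setNumReduceTasks((int) (0.95 * nodes * slotsPerNode));
    }
}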

Is It Possible That A Job Has 0 Reducers?

It is legal to set the number of reduce-tasks to zero if no reduction is desired.

What Happens If The Number Of Reducers Is 0?

In this case the outputs of the map-tasks go directly to the FileSystem, into the output path set by setOutputPath(Path). The framework does not sort the map-outputs before writing them out to the FileSystem.
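A minimal sketch of a map-only job, assuming the output path is passed as the first command-line argument:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyJobExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only job");
        job.setNumReduceTasks(0);                                // no reduce phase; map output is written as-is
        FileOutputFormat.setOutputPath(job, new Path(args[0]));  // setOutputPath(Path)
    }
}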

How Many Instances Of Jobtracker Can Run On A Hadoop Cluster?

Only one

How Mapper Is Instantiated In A Running Job?

The Mapper itself is instantiated in the running job, and will be passed a MapContext object which it can use to configure itself.
