December 22, 2018

Srikaanth

Apple Most Frequently Asked Latest Hadoop Interview Questions Answers

What Are The Benefits Of Block Transfer?

A file can be larger than any single disk in the network: nothing requires the blocks of a file to be stored on the same disk, so they can take advantage of any of the disks in the cluster. Making the unit of abstraction a block rather than a file also simplifies the storage subsystem. Finally, blocks provide fault tolerance and availability: to insure against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate machines (typically three), and if a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client.
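As an illustration, the block layout of a file can be inspected through the HDFS Java API. This is a minimal sketch, assuming a reachable HDFS configuration on the classpath; the class name and the path passed on the command line are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlocks {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path(args[0]);              // e.g. /data/big.log (hypothetical path)
        FileStatus status = fs.getFileStatus(file);

        System.out.println("Block size : " + status.getBlockSize());
        System.out.println("Replication: " + status.getReplication());

        // One BlockLocation per block; each lists the datanodes holding a replica,
        // showing that the blocks of a single file are spread over the cluster.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
    }
}
```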

If We Want To Copy 10 Blocks From One Machine To Another, But Another Machine Can Copy Only 8.5 Blocks, Can The Blocks Be Broken At The Time Of Replication?

In HDFS, blocks cannot be broken down. Before copying blocks from one machine to another, the master node works out how much space is actually required, how many blocks are being used, and how much space is available, and it allocates the blocks accordingly.

How Is Indexing Done In Hdfs?

Hadoop has its own way of indexing. Depending upon the block size, once the data is stored, HDFS keeps storing the last part of the data, which points to where the next part of the data will be. In fact, this is the basis of HDFS.

Who Is A 'user' In Hdfs?

A user is someone like you or me: a person who has a query or who needs some kind of data.

Is Client The End User In Hdfs?

No. The client is an application that runs on your machine and is used to interact with the Namenode (job tracker) or a datanode (task tracker).

What Is The Communication Channel Between Client And Namenode/datanode?

SSH is used by the Hadoop start-up scripts to launch the daemons on the slave nodes; the actual communication between the client and the Namenode/datanode takes place over Hadoop RPC on top of TCP/IP.

What Is A Rack?

A rack is a physical collection of datanodes stored at a single location, that is, a storage area with all of its datanodes put together. A cluster can have multiple racks in a single location, and different racks can be physically located at different places.

On What Basis Will Data Be Stored On A Rack?

When the client is ready to load a file into the cluster, the content of the file is divided into blocks. The client then consults the Namenode and gets three datanodes for every block of the file, which indicate where each block should be stored. While placing the replicas on datanodes, the key rule followed is "for every block of data, two copies will exist in one rack, the third copy in a different rack". This rule is known as the "Replica Placement Policy".

Do We Need To Place 2nd And 3rd Data In Rack 2 Only?

Yes, this is to guard against failure: with the second and third copies placed together on rack 2, the data survives a datanode failure and even the loss of an entire rack.

What If Rack 2 And The Datanode In Rack 1 Fail?

If both rack 2 and the datanode in rack 1 fail, then there is no chance of getting the data back. To avoid such situations, we need to replicate the data more times instead of only thrice. This can be done by changing the replication factor, which is set to 3 by default.
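For instance, the replication factor can be raised for an existing file through the FileSystem API, in addition to changing the dfs.replication default in hdfs-site.xml. A minimal sketch; the class name and the file path are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RaiseReplication {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Default for files created through this configuration; the cluster-wide
        // default is normally set as dfs.replication in hdfs-site.xml (3 out of the box).
        conf.setInt("dfs.replication", 5);

        FileSystem fs = FileSystem.get(conf);

        // Raise the replication factor of an existing file to 5 copies.
        Path file = new Path("/data/critical.log");     // hypothetical path
        fs.setReplication(file, (short) 5);
    }
}
```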

What Is A Secondary Namenode? Is It A Substitute To The Namenode?

The secondary Namenode periodically reads the file system metadata held in the Namenode's memory and writes it into the hard disk or the file system. It is not a substitute for the Namenode, so if the Namenode fails, the entire Hadoop system goes down.

If A Data Node Is Full, How Is It Identified?

When data is stored in a datanode, the metadata of that data is stored in the Namenode. So the Namenode will identify when a datanode is full.
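As a rough illustration, the space figures that the datanodes report to the Namenode can be read back through the FileSystem API. A minimal sketch; the class name is hypothetical, and the numbers are cluster-wide aggregates rather than per-datanode values:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class ClusterSpace {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Capacity and usage as reported to the Namenode by the datanodes,
        // aggregated over the whole cluster (values in bytes).
        FsStatus status = fs.getStatus();
        System.out.println("Capacity  : " + status.getCapacity());
        System.out.println("Used      : " + status.getUsed());
        System.out.println("Remaining : " + status.getRemaining());
    }
}
```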

What Are IdentityMapper And IdentityReducer In Mapreduce?

org.apache.hadoop.mapred.lib.IdentityMapper implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default.
org.apache.hadoop.mapred.lib.IdentityReducer performs no reduction, writing all input values directly to the output. If the MapReduce programmer does not set the reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default.
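For example, a job configured with the old mapred API falls back to these classes when no mapper or reducer is set; the sketch below sets them explicitly just to make the defaults visible (the class name and the paths passed on the command line are hypothetical):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class PassThroughJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(PassThroughJob.class);
        conf.setJobName("pass-through");

        // These two calls set exactly the defaults: omitting them would give
        // the same IdentityMapper / IdentityReducer behaviour.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);

        // With the default TextInputFormat, keys are byte offsets and values
        // are text lines, and both pass straight through to the output.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```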

When Are The Reducers Started In A Mapreduce Job?

In a MapReduce job, reducers do not start executing the reduce method until all map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available, but the programmer-defined reduce method is called only after all the mappers have finished.

If Reducers Do Not Start Before All Mappers Finish, Then Why Does The Progress On A Mapreduce Job Show Something Like Map(50%) Reduce(10%)? Why Is The Reducers' Progress Percentage Displayed When The Mappers Are Not Finished Yet?

Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account this data transfer, which is done by the reduce process, so the reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer. Although the reducer progress is updated, the programmer-defined reduce method is called only after all the mappers have finished.
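The point at which reducers are launched (and hence when the copy phase begins) is tunable. A minimal sketch, assuming the Hadoop 1.x property name mapred.reduce.slowstart.completed.maps (newer releases call it mapreduce.job.reduce.slowstart.completedmaps); the class name is hypothetical and the rest of the job setup is omitted:

```java
import org.apache.hadoop.mapred.JobConf;

public class SlowStartDemo {
    public static void main(String[] args) {
        JobConf conf = new JobConf(SlowStartDemo.class);

        // Launch reducers (i.e. start the copy phase) only after 80% of the map
        // tasks have completed. The small default value means copying starts
        // almost immediately, which is why the reduce percentage climbs while
        // maps are still running.
        conf.setFloat("mapred.reduce.slowstart.completed.maps", 0.80f);

        // ... set mapper, reducer, input and output paths as usual ...
    }
}
```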

If Datanodes Increase, Then Do We Need To Upgrade Namenode?

While installing the Hadoop system, the Namenode is determined based on the size of the cluster. Most of the time we do not need to upgrade the Namenode, because it does not store the actual data, only the metadata, so such a requirement rarely arises.

Are Job Tracker And Task Trackers Present In Separate Machines?

Yes, the job tracker and the task trackers are present on different machines. The reason is that the job tracker is a single point of failure for the Hadoop MapReduce service; if it goes down, all running jobs are halted.

When We Send Data To A Node, Do We Allow Settling-In Time Before Sending More Data To That Node?

Yes, we do.

Does Hadoop Always Require Digital Data To Process?

Yes, Hadoop always requires digital data to be processed.

On What Basis Will The Namenode Decide Which Datanode To Write On?

As the Namenode has the metadata (information) related to all the data nodes, it knows which datanode is free.

Doesn't Google Have Its Very Own Version Of Dfs?

Yes, Google owns a DFS known as “Google File System (GFS)” developed by Google Inc. for its own use.

What Is The Difference Between Gen1 And Gen2 Hadoop With Regards To The Namenode?

In Gen 1 Hadoop, the Namenode is a single point of failure. In Gen 2 Hadoop, we have an Active and Passive Namenode structure: if the active Namenode fails, the passive Namenode takes charge.

Can You Explain How 'map' And 'reduce' Work?

The MapReduce framework takes the input, divides it into parts (splits), and assigns them to the task trackers running on the datanodes. These datanodes process the tasks assigned to them, produce key-value pairs, and return the intermediate output to the reducer. The reducer collects these key-value pairs from all the datanodes, combines them, and generates the final output.
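A classic word-count example makes this concrete: the mapper below emits intermediate (word, 1) pairs, and the reducer sums them into the final counts. This is only a sketch using the newer org.apache.hadoop.mapreduce API; the class names are illustrative, and in practice the two classes would live in separate files with a driver that submits the job.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: for every line of its input split, emits an intermediate
// (word, 1) key-value pair.
public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);          // intermediate key-value pair
        }
    }
}

// Reducer: receives (word, [1, 1, ...]) with all values for a key grouped
// together, and writes the summed count as the final output.
class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));
    }
}
```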

What Is 'key Value Pair' In Hdfs?

A key-value pair is the intermediate data generated by the mappers and sent to the reducers for generating the final output.

What Is The Difference Between Mapreduce Engine And Hdfs Cluster?

The HDFS cluster is the name given to the whole configuration of master and slaves where data is stored. The MapReduce engine is the programming module which is used to retrieve and analyze data.
