1. What is Hadoop?
Hadoop is an open source, Java-based programming framework that supports the processing and storage of extremely large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.
2. Who is the provider of Hadoop?
Hadoop is part of the Apache project sponsored by the Apache Software Foundation.
3. What is the use of Hadoop?
With Hadoop, users can run applications on systems with thousands of nodes spanning many terabytes of data. Rapid data processing and transfer among nodes keeps operations running without interruption even when a node fails, preventing overall system failure.
4. Compare HDFS and HBase?
Criteria | HDFS | HBase
Data write process | Append method | Bulk incremental, random write
Data read process | Table scan | Table scan / random read / small range scan
Hive SQL querying | Excellent | Average
5. What are the operating systems on which Hadoop works?
Linux and Windows are the preferred operating systems, though Hadoop can also work on OS X and BSD.
6. What is meant by Big Data?
Big Data refers to a large assortment of data that is difficult to capture, store, process or retrieve. Traditional database management tools cannot handle it, but Hadoop can.
7. Can you indicate Big Data examples?
Facebook alone generates more than 500 terabytes of data daily, while many other organizations such as Jet Air and stock exchanges generate more than a terabyte of data every hour. These are examples of Big Data.
8. What are major characteristics of Big Data?
The three defining characteristics of Big Data are volume, velocity, and variety. Data that was earlier assessed in megabytes and gigabytes is now assessed in terabytes and beyond.
9. What is the use of Big Data Analysis for an enterprise?
Analysis of Big Data identifies problems and focus points in an enterprise. It can prevent big losses and improve profits by helping entrepreneurs take informed decisions.
10. What are the characteristics of data scientists?
Data scientists analyze data and provide solutions for business problems. They are gradually replacing business and data analysts.
11. What are the basic characteristics of Hadoop?
Written in Java, the Hadoop framework is capable of solving problems involving Big Data analysis. Its programming model is based on Google's MapReduce and its storage infrastructure is based on Google's distributed file system (GFS). Hadoop is scalable, and more nodes can be added to a cluster as needed.
12. Which are the major players on the web that use Hadoop?
Started by Doug Cutting in 2002 as part of the Nutch project, Hadoop drew on Google's MapReduce paper of 2004 and became a separate Apache project with HDFS in 2006. Yahoo and Facebook adopted it in 2008 and 2009 respectively. Major commercial enterprises using Hadoop include EMC, Hortonworks, Cloudera, MapR, Twitter, eBay, and Amazon, among others.
13. How is Hadoop different from traditional RDBMS?
An RDBMS is useful for relatively small, structured data sets, whereas Hadoop is built for handling Big Data in one shot.
14. What are the main components of Hadoop?
The main components of Hadoop are HDFS, used to store large data sets, and MapReduce, used to analyze them.
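For illustration, here is a minimal word-count job written against the Hadoop MapReduce Java API, showing how a Mapper and Reducer divide the work that runs on top of data stored in HDFS. The class name and the command-line input/output paths are illustrative assumptions; this is a sketch, not a tuned production job.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every word in the input split.
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);               // combiner pre-aggregates map output locally
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory (must not already exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Such a job would typically be packaged as a jar and launched with something like "hadoop jar wordcount.jar WordCount /input /output".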
15. What is HDFS?
HDFS (Hadoop Distributed File System) is the file system used to store large data files. It handles streaming data and runs on clusters of commodity hardware.
16. What are the main features of HDFS?
Great fault tolerance, high throughput, suitability for handling large data sets, and streaming access to file system data are the main features of HDFS. It can be built with commodity hardware.
17. Why is replication pursued in HDFS even though it may cause data redundancy?
Systems with an average configuration are vulnerable to crashing at any time. HDFS replicates data and stores it at three different locations by default, which makes the system highly fault tolerant. If the data at one location becomes corrupt or inaccessible, it can be retrieved from another location.
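As a small illustration, the replication factor can be controlled from the HDFS Java client; the sketch below assumes a reachable cluster and uses a hypothetical file path, and the same dfs.replication property is normally set cluster-wide in hdfs-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "3");             // replication factor for files this client creates
        FileSystem fs = FileSystem.get(conf);

        // The replication factor of an existing file can also be changed after the fact.
        Path existing = new Path("/data/input.log");  // hypothetical path, for illustration only
        fs.setReplication(existing, (short) 3);       // ask HDFS to keep three replicas of each block
        fs.close();
      }
    }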
18. Would the calculations made on one node be replicated to others in HDFS?
No. The calculation is made on the original node only. Only if that node fails does the master node replicate the calculation onto a second node.
19. What is meant by streaming access?
HDFS works on the principle of "write once, read many", and the focus is on fast and accurate data retrieval. Streaming access refers to reading the complete data set sequentially instead of retrieving a single record from the database.
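A minimal sketch of this "write once, read many" pattern using the HDFS FileSystem Java API is shown below; the file path is a made-up example and the cluster configuration files are assumed to be on the classpath.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsStreamingExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());  // uses core-site.xml / hdfs-site.xml settings
        Path file = new Path("/tmp/example.txt");              // hypothetical file, for illustration only

        // "Write once": create the file and write it sequentially.
        try (FSDataOutputStream out = fs.create(file, true)) {
          out.writeBytes("hello hdfs\nstreaming access example\n");
        }

        // "Read many": stream the whole file from start to finish instead of seeking single records.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
          String line;
          while ((line = reader.readLine()) != null) {
            System.out.println(line);
          }
        }
        fs.close();
      }
    }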
20. What is meant by ‘commodity hardware’? Can Hadoop work on them?
Average, inexpensive systems are known as commodity hardware, and Hadoop can be installed on them. Hadoop does not require high-end hardware to function.
21. Which one is the master node in HDFS? Can it be commodity?
The name node is the master node in HDFS, and the job tracker runs on it. The node holds the metadata, acts as a high-availability machine, and is the single point of failure in HDFS. It cannot be commodity hardware because the entire HDFS depends on it.
22. What is meant by Data node?
The data node is the slave deployed on each of the systems; it provides the actual storage and serves read and write requests from clients.
23. What is a daemon?
A daemon is a process that runs in the background in the UNIX environment. The equivalent in Windows is a 'service' and in DOS a 'TSR'.
24. What is the function of ‘job tracker’?
The job tracker is one of the daemons that runs on the name node; it submits and tracks MapReduce tasks in Hadoop. There is only one job tracker, and it distributes tasks to the various task trackers. When it goes down, all running jobs come to a halt.
25. What is the role played by task trackers?
Task trackers are daemons that run on the data nodes; they take care of the individual tasks on the slave nodes as assigned to them by the job tracker.
26. What is meant by heartbeat in HDFS?
Data nodes and task trackers send heartbeat signals to the name node and job tracker respectively to confirm that they are alive. If a signal is not received, it indicates a problem with the data node or task tracker.
27. Is it necessary that Name node and job tracker should be on the same host?
No! They can be on different hosts.
28. What is meant by ‘block’ in HDFS?
A block in HDFS is the minimum quantum of data for reading or writing. The default block size in HDFS is 64 MB (128 MB from Hadoop 2 onwards). If a file is 52 MB, HDFS stores only 52 MB; the remaining 12 MB of the block is not pre-allocated or wasted.
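The sketch below shows one way to inspect blocks through the HDFS Java API, printing a file's block size and the DataNodes that hold each block; the path is hypothetical and the output depends entirely on the cluster.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockInfoExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/data/input.log");             // hypothetical file, for illustration only
        FileStatus status = fs.getFileStatus(file);

        System.out.println("Block size (bytes): " + status.getBlockSize());

        // Each BlockLocation describes one block and the DataNodes holding its replicas.
        for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
          System.out.println("offset=" + loc.getOffset()
              + " length=" + loc.getLength()
              + " hosts=" + String.join(",", loc.getHosts()));
        }
        fs.close();
      }
    }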
29. Can blocks be broken down by HDFS if a machine does not have the capacity to copy as many blocks as the user wants?
Blocks in HDFS cannot be broken down. The master node calculates the required space and how the data would be transferred to a machine that has less space available.
30. What is the process of indexing in HDFS?
Once data is stored, HDFS relies on the last part of the data to indicate where the next part of the data is stored.
31. How is a data node identified as saturated?
When a data node is full and has no space left, the name node identifies it as saturated.
32. What type of data is processed by Hadoop?
Hadoop processes only digital data.
33. How does the name node determine which data node to write to?
The name node holds metadata about all the data nodes and decides which data node should be used for storing the data.
34. Who is the ‘user’ in HDFS?
Anyone who tries to retrieve data from HDFS is a user. A client is not an end user but an application that uses the job tracker and task tracker to retrieve data.
35. How does the client communicate with the name node and data node in HDFS?
Clients communicate with the name node and data nodes over Hadoop's RPC protocol on TCP (block data itself travels over a streaming TCP data-transfer protocol); SSH is used only by the cluster scripts to start and stop the daemons, not for client communication.
36. What is a rack in HDFS?
A rack is the physical location where a group of data nodes is kept together; in other words, it is a physical collection of data nodes stored in a single location.
37. What is a block and block scanner in HDFS?
Block - The minimum amount of data that can be read or written is referred to as a "block" in HDFS. The default size of a block in HDFS is 64 MB.
Block Scanner - A block scanner tracks the list of blocks present on a DataNode and verifies them to detect checksum errors. Block scanners use a throttling mechanism to limit the disk bandwidth they consume on the DataNode.
38. Explain the difference between NameNode, Backup Node and Checkpoint NameNode?
NameNode:
The NameNode is at the heart of the HDFS file system and manages the metadata; the file data itself is not stored on the NameNode. Instead, it holds the directory tree of all the files present in the HDFS file system on a Hadoop cluster. The NameNode uses two files for the namespace:
fsimage file - keeps track of the latest checkpoint of the namespace.
edits file - a log of the changes that have been made to the namespace since the last checkpoint.
Checkpoint Node:
The Checkpoint Node keeps track of the latest checkpoint in a directory that has the same structure as the NameNode's directory. It creates checkpoints for the namespace at regular intervals by downloading the edits and fsimage files from the NameNode and merging them locally. The new image is then uploaded back to the active NameNode.
BackupNode:
The Backup Node provides checkpointing functionality like the Checkpoint Node, but it also maintains an up-to-date in-memory copy of the file system namespace that is kept in sync with the active NameNode.
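Checkpoint frequency is configuration-driven. The short sketch below only illustrates the relevant properties through the Java Configuration API; in a real cluster they would be set in hdfs-site.xml on the NameNode/Checkpoint node, the property names shown are the Hadoop 2.x names (older releases used fs.checkpoint.period), and the values are illustrative defaults.

    import org.apache.hadoop.conf.Configuration;

    public class CheckpointSettingsExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Create a new checkpoint every hour ...
        conf.setLong("dfs.namenode.checkpoint.period", 3600);
        // ... or after one million un-checkpointed namespace edits, whichever comes first.
        conf.setLong("dfs.namenode.checkpoint.txns", 1000000);

        System.out.println("checkpoint period (s): " + conf.get("dfs.namenode.checkpoint.period"));
        System.out.println("checkpoint txns:       " + conf.get("dfs.namenode.checkpoint.txns"));
      }
    }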
39. Explain the process of inter-cluster data copying.
HDFS provides a distributed data copying facility through DistCp, which copies data from a source to a destination. When the copy runs between two different Hadoop clusters it is referred to as inter-cluster data copying. DistCp requires the source and destination to run the same or compatible versions of Hadoop.
40. What is the port number for NameNode, Task Tracker and Job Tracker?
NameNode - 50070
Job Tracker - 50030
Task Tracker - 50060
These are the default web UI (HTTP) ports for each daemon.