300+ TOP MAPREDUCE Interview Questions and Answers

MapReduce Interview Questions for Freshers and Experienced Candidates:

1. What is MapReduce?
MapReduce is a framework, or programming model, used for processing large data sets over clusters of computers using distributed programming.

2. What are ‘maps’ and ‘reduces’?
‘Map’ and ‘Reduce’ are the two phases of solving a query in a MapReduce job. The ‘Map’ phase reads data from the input location and, based on the input type, generates key-value pairs, that is, intermediate output on the local machine. The ‘Reduce’ phase processes the intermediate output received from the mappers and generates the final output.
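The two phases can be sketched without any Hadoop machinery. The snippet below is a minimal, framework-free illustration in Python (the function names are hypothetical, not Hadoop API):

```python
def map_phase(line):
    """The 'map' phase reads a record and emits intermediate (key, value) pairs."""
    return [(word, 1) for word in line.split()]

def reduce_phase(key, values):
    """The 'reduce' phase turns all values collected for one key into final output."""
    return (key, sum(values))

# One input line produces intermediate pairs; the reducer later sums them per key.
pairs = map_phase("big data big cluster")
print(pairs)                        # [('big', 1), ('data', 1), ('big', 1), ('cluster', 1)]
print(reduce_phase("big", [1, 1]))  # ('big', 2)
```

In a real job the framework, not the programmer, moves the intermediate pairs from the map phase to the reduce phase.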

3. What are the four basic parameters of a mapper?
The four basic parameters of a mapper are LongWritable, Text, Text, and IntWritable. The first two are the input key and value types, and the last two are the intermediate output key and value types.

4. What are the four basic parameters of a reducer?
The four basic parameters of a reducer are Text, IntWritable, Text, and IntWritable. The first two are the intermediate output key and value types, and the last two are the final output key and value types.

5. What do the master class and the output class do?
The master class is defined to update the master (the JobTracker), and the output class is defined to write data to the output location.

6. What is the input type/format in MapReduce by default?
By default, the input type in MapReduce is ‘text’.

7. Is it mandatory to set input and output type/format in MapReduce?
No, it is not mandatory to set the input and output type/format in MapReduce. By default, the framework treats both the input and the output type as ‘text’.

8. What does the text input format do?
In text input format, each line of the file becomes one record. The key is the byte offset of the line within the file, and the value is the whole line of text. This is how the data gets processed by a mapper: the mapper receives the ‘key’ as a ‘LongWritable’ parameter and the value as a ‘Text’ parameter.
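The byte-offset convention can be reproduced in a few lines. This is an illustrative Python sketch of how the text input format derives its keys, not Hadoop code:

```python
def text_input_records(data):
    """Yield (byte_offset, line) pairs the way text input format does:
    the key is the byte offset where the line starts, the value is the line."""
    offset = 0
    for line in data.splitlines(keepends=True):
        yield (offset, line.rstrip("\n"))
        offset += len(line.encode("utf-8"))

print(list(text_input_records("hello world\nsecond line\n")))
# [(0, 'hello world'), (12, 'second line')]
```

Note that the second key is 12, the byte position where the second line begins, not a line number.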

9. What does the JobConf class do?
MapReduce needs to logically separate different jobs running on the same cluster. The JobConf class handles job-level settings, such as declaring a job name in the real environment. It is recommended that the job name be descriptive and represent the type of job being executed.

10. What does conf.setMapperClass do?
conf.setMapperClass sets the mapper class and everything related to the map job, such as reading the data and generating key-value pairs out of the mapper.


11. What do sorting and shuffling do?
Sorting and shuffling are responsible for creating a unique key and a list of values. Gathering similar keys at one location is known as sorting, and the process by which the intermediate output of the mappers is sorted and sent across to the reducers is known as shuffling.
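What sort and shuffle deliver to a reducer can be modeled as grouping by key. The following is a small illustrative Python sketch, not the actual Hadoop implementation:

```python
from collections import defaultdict

def shuffle_and_sort(mapper_output):
    """Group intermediate pairs so each key maps to the list of all its values,
    returned in sorted key order -- the shape a reducer receives."""
    groups = defaultdict(list)
    for key, value in mapper_output:
        groups[key].append(value)
    return sorted(groups.items())

print(shuffle_and_sort([("big", 1), ("data", 1), ("big", 1)]))
# [('big', [1, 1]), ('data', [1])]
```

Each reducer then sees one call per unique key with the full list of that key's values.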

12. What does a split do?
Before data is transferred from its location on disk to the map method, it passes through a phase called the split. A split pulls a block of data from HDFS into the framework; the split class does not write anything, but reads data from the block and passes it to the mapper. By default, splitting is taken care of by the framework, the split size is equal to the block size, and it is used to divide the input into a set of splits.
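Since the default split size equals the block size, the number of splits (and hence mappers) for a file follows directly. A quick illustrative calculation, assuming a 128 MB block size:

```python
import math

def num_splits(file_size_bytes, block_size_bytes=128 * 1024 * 1024):
    """With the split size equal to the block size, a file yields roughly
    ceil(file_size / block_size) input splits, one mapper per split."""
    return math.ceil(file_size_bytes / block_size_bytes)

print(num_splits(300 * 1024 * 1024))  # a 300 MB file -> 3 splits
```

The 128 MB figure is an assumption for illustration; the actual block size is a cluster configuration setting.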

13. How can we change the split size if our commodity hardware has less storage space?
If our commodity hardware has less storage space, we can change the split size by writing a custom splitter. Hadoop offers this customization feature, which can be invoked from the main method.

14. What does a MapReduce partitioner do?
A MapReduce partitioner makes sure that all the values for a single key go to the same reducer, thus allowing even distribution of the map output over the reducers. It redirects the mapper output to the reducers by determining which reducer is responsible for a particular key.
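The contract of the default partitioner can be sketched as a deterministic, non-negative hash of the key modulo the reducer count. In this illustrative Python sketch, the built-in hash() stands in for Java's hashCode() (an assumption, not the Hadoop implementation):

```python
def default_partition(key, num_reducers):
    """Sketch of the default hash-partitioner contract: a deterministic,
    non-negative hash of the key modulo the number of reducers, so every
    occurrence of the same key lands on the same reducer."""
    return (hash(key) & 0x7FFFFFFF) % num_reducers

# The same key always maps to the same reducer index:
print(default_partition("big", 4) == default_partition("big", 4))  # True
```

Because the mapping depends only on the key, all of a key's values meet at one reducer regardless of which mapper emitted them.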

15. How is Hadoop different from other data processing tools?
In Hadoop, based upon your requirements, you can increase or decrease the number of mappers without worrying about the volume of data to be processed. This is the beauty of parallel processing, in contrast to other data processing tools.

16. Can we rename the output file?
Yes, we can rename the output file by implementing the multiple output format class.

17. Why we cannot do aggregation (addition) in a mapper? Why we require reducer for that?
We cannot do aggregation (addition) in a mapper because sorting does not happen in the mapper; sorting happens only on the reducer side. Moreover, a new mapper instance is initialized for each input split, so while aggregating we would lose the values seen by the previous instance: the mapper keeps no track of values from previous rows or splits.

18. What is Streaming?
Streaming is a feature of the Hadoop framework that allows us to write MapReduce programs in any language that can accept standard input and produce standard output. It could be Perl, Python or Ruby, and need not be Java. However, customization in MapReduce can only be done using Java and not any other programming language.

19. What is a Combiner?
A ‘Combiner’ is a mini reducer that performs the local reduce task. It receives the input from the mapper on a particular node and sends its output on to the reducer. Combiners help enhance the efficiency of MapReduce by reducing the amount of data that must be sent to the reducers.
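The local-reduce effect can be shown in miniature. This illustrative Python sketch folds each key's values together on the "mapper side" before anything is shuffled:

```python
from collections import defaultdict

def local_combine(mapper_output):
    """A combiner acts as a mini reducer on the mapper's node: it folds each
    key's values together locally, shrinking what must be shuffled to reducers."""
    totals = defaultdict(int)
    for key, value in mapper_output:
        totals[key] += value
    return sorted(totals.items())

print(local_combine([("big", 1), ("big", 1), ("data", 1)]))
# [('big', 2), ('data', 1)] -- three pairs shrunk to two before the shuffle
```

The reducer still produces the same final sums; the combiner only reduces network traffic.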

20. What happens in a TextInputFormat?
In TextInputFormat, each line in the text file is a record. Key is the byte offset of the line and value is the content of the line.
For instance, key: LongWritable, value: Text.

21. What do you know about KeyValueTextInputFormat?
In KeyValueTextInputFormat, each line in the text file is a ‘record’. The first separator character divides each line. Everything before the separator is the key and everything after the separator is the value.
For instance, key: Text, value: Text.
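The first-separator rule is the essential detail. This illustrative Python sketch mimics the behavior (assuming the tab character as the separator, which is the common default):

```python
def key_value_record(line, separator="\t"):
    """KeyValueTextInputFormat semantics: split at the FIRST separator only;
    everything before it is the key, everything after it is the value."""
    key, _, value = line.partition(separator)
    return (key, value)

print(key_value_record("user42\tclicked\tbutton"))
# ('user42', 'clicked\tbutton') -- only the first tab splits the line
```

Any later separators remain part of the value unchanged.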

22. What do you know about SequenceFileInputFormat?
SequenceFileInputFormat is an input format for reading sequence files. Key and value are user defined. It is a specific compressed binary file format which is optimized for passing data from the output of one MapReduce job to the input of another MapReduce job.

23. What do you know about NLineInputFormat?
NLineInputFormat splits ‘n’ lines of input as one split.

24. What is the difference between an HDFS Block and Input Split?
HDFS Block is the physical division of the data and Input Split is the logical division of the data.

25. After a restart of the NameNode, MapReduce jobs that worked fine before the restart started failing. What could be wrong?
The cluster could be in safe mode after the restart of the NameNode. The administrator needs to wait for the NameNode to exit safe mode before restarting the jobs. This is a very common mistake by Hadoop administrators.

26. What do you always have to specify for a MapReduce job ?

  • The classes for the mapper and reducer.
  • The classes for the mapper, reducer, and combiner.
  • The classes for the mapper, reducer, partitioner, and combiner.
  • None; all classes have default implementations.

27. How many times will a combiner be executed ?

  1. At least once.
  2. Zero or one times.
  3. Zero, one, or many times.
  4. It’s configurable.

28. You have a mapper that produces an integer value for each key, and the following set of reduce operations:

Reducer A: outputs the sum of the set of integer values.
Reducer B: outputs the maximum of the set of values.
Reducer C: outputs the mean of the set of values.
Reducer D: outputs the difference between the largest and smallest values in the set.

29. Which of these reduce operations could safely be used as a combiner?

  1. All of them.
  2. A and B.
  3. A, B, and D.
  4. C and D.
  5. None of them.

Answer: A and B.
Explanation: Reducer C cannot be used because, if such a reduction were to occur, the final reducer could receive from the combiners a series of means with no knowledge of how many items were used to generate each of them, so the overall mean would be impossible to calculate.

Reducer D is subtle, as the individual tasks of selecting a maximum or minimum are safe for use as combiner operations. But if the goal is to determine the overall difference between the maximum and minimum value for each key, this would not work: a combiner that received values clustered around the maximum would generate a small result, and similarly for one receiving values near the minimum. These sub-ranges have little value in isolation, and again the final reducer cannot construct the desired result from them.
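The sum-versus-mean distinction can be checked directly. A short illustrative Python demonstration of why the mean is not safe as a combiner:

```python
def mean(values):
    return sum(values) / len(values)

# Values for one key, split across two mapper outputs:
chunk_a, chunk_b = [1, 2, 3, 4], [10]

# Sum IS safe as a combiner: combining partial sums gives the true total.
assert sum([sum(chunk_a), sum(chunk_b)]) == sum(chunk_a + chunk_b)

# Mean is NOT safe: the mean of partial means loses the item counts.
print(mean(chunk_a + chunk_b))               # 4.0  (true mean of all five values)
print(mean([mean(chunk_a), mean(chunk_b)]))  # 6.25 (wrong)
```

The combiner's output type must be a valid input to the same reduce function, and the operation must be associative and commutative, which holds for sum and max but not for mean.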

30. What is an Uber task in YARN?
If the job is small, the application master may choose to run its tasks in the same JVM as itself, since it judges that the overhead of allocating new containers and running tasks in them outweighs the gain of running them in parallel, compared to running them sequentially on one node. (This is different from MapReduce 1, where small jobs never run on a single TaskTracker.)

Such a job is said to be uberized, or run as an Uber task.

31. How to configure Uber tasks?
By default, a job that has fewer than 10 mappers, only one reducer, and an input size smaller than one HDFS block is considered a small job. These thresholds may be changed per job by setting mapreduce.job.ubertask.maxmaps, mapreduce.job.ubertask.maxreduces, and mapreduce.job.ubertask.maxbytes.

It is also possible to disable Uber tasks entirely by setting mapreduce.job.ubertask.enable to false.
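As a sketch, these settings could appear in a job configuration file such as mapred-site.xml (a hypothetical fragment; the threshold values shown are the defaults described above):

```xml
<!-- Enable uberization and state the default small-job thresholds explicitly -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>10</value>
</property>
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
</property>
```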

32. What are the ways to debug a failed MapReduce job?
Commonly there are two ways:

  • Using MapReduce job counters.
  • Using the YARN web UI to look into the syslogs for the actual error messages or status.

33. What is the importance of heartbeats in HDFS/Mapreduce Framework ?
A heartbeat in a master/slave architecture is a signal indicating that a node is alive. DataNodes send heartbeats to the NameNode, and NodeManagers send heartbeats to the ResourceManager, to tell the master nodes that they are still alive.

If the NameNode or ResourceManager does not receive a heartbeat from a slave node, it decides that there is a problem with that DataNode or NodeManager and that it is unable to perform the assigned task; the master (NameNode or ResourceManager) then reassigns the task to other live nodes.

34. Can we rename the output file ?
Yes, we can rename the output file by implementing multiple format output class.

35. What are the default input and output file formats in Mapreduce jobs ?
If the input or output file format is not specified, the default for both is the text file format.

