10 Essential Hadoop Interview Questions *

What is the MapReduce programming paradigm and how can it be used to design parallel programs?

MapReduce is a programming model for writing parallel programs that run on a distributed set of machines. The similarly named “Hadoop MapReduce” is an implementation of this model.

Input and output data in MapReduce are modeled as records of key-value pairs.

Central to MapReduce are the map and reduce functions, reminiscent of map and reduce in functional programming. They transform data in two phases, each of which runs in parallel and scales linearly with the number of machines.

The map function takes each key-value pair and outputs a list of key-value pairs. The reduce function receives, for each key, all of the values emitted for that key across every map invocation and reduces them to a single final value.
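As an illustration, a minimal word-count sketch using Hadoop’s Java MapReduce API might look like the following; the class names are purely illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map: for each input line (key = byte offset, value = line text),
  // emit a (word, 1) pair for every word in the line.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      for (String token : line.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  // Reduce: for each word, receive all counts emitted across every map
  // invocation and sum them into a single final value.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable count : counts) {
        sum += count.get();
      }
      context.write(word, new IntWritable(sum));
    }
  }
}
```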

MapReduce integrates with HDFS to provide data locality for the data it processes. For sufficiently large data, it is cheaper to send a map or reduce program to the node where the data lives than to bring the data to the program.

Hadoop’s implementation of MapReduce provides native support for the JVM runtime and extended support for other runtimes communicating via standard in/out.

What is HDFS and what are its main components?

The Hadoop Distributed File System (HDFS) is a distributed file system and a central part of the Hadoop collection of software. HDFS attempts to abstract away the complexities involved in distributed file systems, including replication, high availability, and hardware heterogeneity.

Two major components of HDFS are NameNode and a set of DataNodes. NameNode exposes the filesystem API, persists metadata, and orchestrates replication amongst DataNodes.

MapReduce natively makes use of HDFS’ data locality API to dispatch MapReduce tasks to run where the data lives.
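As a rough client-side sketch (the path is hypothetical), the NameNode’s metadata and the block-location information that enables data locality can both be queried through the org.apache.hadoop.fs.FileSystem API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocations {
  public static void main(String[] args) throws Exception {
    // Reads fs.defaultFS and friends from the Hadoop config on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // File metadata (size, replication, block size) is served by the NameNode.
    Path path = new Path("/data/events.log"); // hypothetical path
    FileStatus status = fs.getFileStatus(path);
    System.out.printf("size=%d replication=%d blockSize=%d%n",
        status.getLen(), status.getReplication(), status.getBlockSize());

    // Block locations list the DataNodes holding each block, which is what
    // schedulers use to run tasks where the data lives.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println(String.join(",", block.getHosts()));
    }
  }
}
```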

What read and write consistency guarantees does HDFS provide?

Even though data is distributed amongst multiple DataNodes, NameNode is the central authority for file metadata and replication (and, unless configured for high availability, a single point of failure). On the write path, the configuration parameter dfs.namenode.replication.min defines the minimum number of replicas a block must be written to before the write is acknowledged as successful; replication up to the full replication factor then continues asynchronously. On the read path, data written to an open file is not guaranteed to be visible to other readers until the writer calls hflush()/hsync() or closes the file.
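A minimal sketch of the client-side write path illustrating these guarantees (the path below is hypothetical):

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteVisibility {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    Path path = new Path("/tmp/consistency-demo.txt"); // hypothetical path
    try (FSDataOutputStream out = fs.create(path, true /* overwrite */)) {
      out.write("first record\n".getBytes(StandardCharsets.UTF_8));

      // Until hflush()/hsync() or close(), concurrent readers are not
      // guaranteed to see the bytes written above.
      out.hflush();
    }
    // close() returns successfully once the minimum number of replicas
    // (dfs.namenode.replication.min) of the last block has been written;
    // replication up to the full factor continues in the background.
  }
}
```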

What common data serialization formats are used to store data in HDFS and what are their properties?

HDFS can store any type of file regardless of format; however, certain properties make some file formats better suited for distributed computation.

HDFS organises and distributes files in blocks of fixed size. For example, given a block size of 128MB, a 257MB file is split into three blocks. Records at block boundaries, as a result, may be split. File formats designed to be consumed when split, also called “splittable,” include “sync markers” between groups of records so that any contiguous chunk of the file can be consumed. Furthermore, compression may be desired in conjunction with splittability.

Support for compression is particularly important because it trades off IO and CPU resources. A compressed file is quicker to load from disk but takes extra time to decompress.

CSV files, for instance, are splittable since they include a line separator between records. However, they are not suitable for binary data, and compressing them with a common whole-file codec such as gzip makes them no longer splittable.

The SequenceFile format, native to the Hadoop ecosystem, is a binary format that stores key-value records, is splittable, and supports compression at the block and record levels.
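A brief sketch of writing a block-compressed SequenceFile with Hadoop’s Java API (the output path is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.DefaultCodec;

public class SequenceFileDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("/tmp/pairs.seq"); // hypothetical output path

    // Block-level compression groups many records before compressing,
    // which usually yields better ratios than record-level compression.
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(path),
        SequenceFile.Writer.keyClass(Text.class),
        SequenceFile.Writer.valueClass(IntWritable.class),
        SequenceFile.Writer.compression(
            SequenceFile.CompressionType.BLOCK, new DefaultCodec()))) {
      writer.append(new Text("alpha"), new IntWritable(1));
      writer.append(new Text("beta"), new IntWritable(2));
    }
  }
}
```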

Apache Avro, a data serialization and RPC framework, defines the Avro Object Container File format that stores Avro-encoded records. It is both splittable and compressible, and its flexible schema definition language has contributed to its wide adoption.

The Parquet file format, another Apache project, stores records in a columnar layout, where values belonging to the same column are stored together; this enables efficient compression and allows queries to read only the columns they need.

What availability guarantees does HDFS provide?

HDFS relies on NameNode to store metadata about which DataNodes each block is stored on. Since NameNode runs on a single node, it is a single point of failure, and its failure makes HDFS unavailable.

To achieve high availability, a Standby NameNode can be configured as a failover target. The Active NameNode streams its log of metadata mutations (the edit log) to a group of JournalNodes, from which the Standby NameNode continuously applies the latest changes to its own copy of the filesystem metadata.

Automatic failover between the Active and Standby NameNodes can be configured by maintaining an ephemeral lock in a ZooKeeper quorum. A failover controller process running alongside each NameNode is responsible for checking that NameNode’s health, for maintaining the ephemeral lock, and for executing a fencing mechanism that ensures the previously active NameNode can no longer modify the namespace after failover.

What’s the purpose of Hadoop Streaming and how does it work?

Hadoop Streaming is an extension of Hadoop’s MapReduce API that makes it possible for programs running in runtimes other than the JVM to act as map and reduce programs. Hadoop Streaming defines an interface where data is exchanged with these programs via the operating system’s standard input and standard output streams (hence the name).
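The protocol itself is language-agnostic: a mapper is any executable that reads input records line by line from standard in and writes key/value pairs, separated by a tab by default, to standard out. Staying with Java for consistency (streaming mappers are more commonly scripts in Python or similar), a minimal sketch of such an executable might look like this:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// A standalone word-count mapper for Hadoop Streaming: reads lines from
// stdin and emits "word<TAB>1" lines on stdout. Any runtime that can do
// this (Python, Ruby, a shell script, ...) can act as a mapper the same way.
public class StreamingWordCountMapper {
  public static void main(String[] args) throws Exception {
    BufferedReader in = new BufferedReader(
        new InputStreamReader(System.in, StandardCharsets.UTF_8));
    String line;
    while ((line = in.readLine()) != null) {
      for (String token : line.split("\\s+")) {
        if (!token.isEmpty()) {
          System.out.println(token + "\t1");
        }
      }
    }
  }
}
```

Such an executable is then passed to the streaming job via its -mapper option (with a counterpart supplied via -reducer).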

What is speculative execution and when can it be used?

A MapReduce job translates into many map and reduce tasks running on different cluster nodes (typically colocated with HDFS DataNodes). If a task is running noticeably slower than its peers, MapReduce “speculatively” launches a duplicate of that task on another node, since the first node might be overloaded or faulty; whichever attempt finishes first wins and the other is killed.

For speculative execution to work correctly, tasks need to be free of side effects; or, if they do have side effects, those side effects need to be “idempotent.” A side-effect-free task is one that, besides producing the expected output, does not mutate any external state (such as writing to a database). Idempotence in this context means that if a side effect is applied repeatedly (due to speculative execution), it does not change the end result. Side effects are generally undesirable in a MapReduce task regardless of speculative execution.
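Speculative execution is enabled by default and can be toggled per job. A short sketch using the org.apache.hadoop.mapreduce.Job API (the job name is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SpeculationConfig {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "etl-with-side-effects"); // illustrative name

    // Disable speculative duplicates for tasks whose side effects are
    // not idempotent (e.g., reducers writing to an external database).
    job.setMapSpeculativeExecution(false);
    job.setReduceSpeculativeExecution(false);

    // Equivalent configuration properties:
    //   mapreduce.map.speculative=false
    //   mapreduce.reduce.speculative=false
  }
}
```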

How can one define custom input and output data formats for MapReduce jobs?

Hadoop MapReduce comes with built-in support for many common file formats such as SequenceFile. To support a custom format, one has to provide implementations of InputFormat and OutputFormat (abstract classes in the org.apache.hadoop.mapreduce API) for reading and writing, respectively.

A class implementing InputFormat (and similarly OutputFormat) provides the logic for splitting the data into input splits and the logic for reading records out of each split. The latter is an implementation of RecordReader (and RecordWriter for output).
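As a rough skeleton, a format that reads each whole file as a single record might look like the following; the class names are illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// An InputFormat that treats each file as one (null, file contents) record.
public class WholeFileInputFormat
    extends FileInputFormat<NullWritable, BytesWritable> {

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false; // each file becomes exactly one split
  }

  @Override
  public RecordReader<NullWritable, BytesWritable> createRecordReader(
      InputSplit split, TaskAttemptContext context) {
    return new WholeFileRecordReader();
  }

  private static class WholeFileRecordReader
      extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit split;
    private TaskAttemptContext context;
    private final BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
      this.split = (FileSplit) split;
      this.context = context;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
      if (processed) {
        return false;
      }
      // Read the entire file backing this split as one value.
      byte[] contents = new byte[(int) split.getLength()];
      Path file = split.getPath();
      FileSystem fs = file.getFileSystem(context.getConfiguration());
      try (FSDataInputStream in = fs.open(file)) {
        IOUtils.readFully(in, contents, 0, contents.length);
      }
      value.set(contents, 0, contents.length);
      processed = true;
      return true;
    }

    @Override
    public NullWritable getCurrentKey() { return NullWritable.get(); }

    @Override
    public BytesWritable getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() { }
  }
}
```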

Implementations of InputFormat and OutputFormat may also read and write data from sources other than files on HDFS. For instance, Apache Cassandra ships with its own implementations of InputFormat and RecordReader.

What is the “small files problem” with Hadoop?

NameNode is the registry for all metadata in HDFS. Although journaled on disk, this metadata is served entirely from memory, so the number of files HDFS can hold is bounded by the NameNode’s heap: every file, directory, and block is represented as an in-memory object (roughly 150 bytes each). Storing a large number of small files therefore inflates the NameNode’s memory footprint far more than storing the same data in fewer, larger files, and NameNode, being a Java application running on the JVM, cannot operate efficiently with very large heap allocations. This is the “small files problem.”

Explain rack awareness in Hadoop.

HDFS replicates blocks onto multiple machines. In order to have higher fault tolerance against rack failures (network or physical), HDFS is able to distribute replicas across multiple racks.

Hadoop obtains network topology information either by invoking a user-defined script (net.topology.script.file.name) or by loading a Java class that implements the DNSToSwitchMapping interface (net.topology.node.switch.mapping.impl). It is the administrator’s responsibility to choose one of these methods, set the corresponding configuration, and provide the script or implementation.
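As a hedged sketch of the class-based approach (the rack names and host-to-rack naming rule are made up), a mapping implements org.apache.hadoop.net.DNSToSwitchMapping and returns a rack path for each host:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.net.DNSToSwitchMapping;

// Maps each host to a rack path; here the rack is derived from a hostname
// convention like "dn-rack3-07" (purely illustrative).
public class NameBasedRackMapping implements DNSToSwitchMapping {

  @Override
  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<>(names.size());
    for (String name : names) {
      racks.add(toRack(name));
    }
    return racks;
  }

  private String toRack(String host) {
    // Hypothetical convention: hostnames contain "rackN".
    int i = host.indexOf("rack");
    if (i < 0) {
      return "/default-rack";
    }
    return "/" + host.substring(i).split("[-.]")[0];
  }

  @Override
  public void reloadCachedMappings() { }

  @Override
  public void reloadCachedMappings(List<String> names) { }
}
```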

* There is more to interviewing than tricky technical questions, so these are intended merely as a guide. Not every “A” candidate worth hiring will be able to answer them all, nor does answering them all guarantee an “A” candidate. At the end of the day, hiring remains an art, a science — and a lot of work.