10 Essential Hadoop Interview Questions
Toptal sourced essential questions that the best Hadoop developers and engineers can answer. Drawn from our community, we encourage experts to submit questions and offer feedback.
Interview Questions
How can one define custom input and output data formats for MapReduce jobs?
Hadoop MapReduce comes with built-in support for many common file formats such as SequenceFile. To implement a custom format, one has to implement the InputFormat and OutputFormat Java interfaces for reading and writing, respectively.
A class implementing InputFormat (and similarly OutputFormat) must implement the logic for splitting the data as well as the logic for reading records out of each split. The latter is provided by an implementation of the RecordReader (and RecordWriter) interfaces.
Implementations of InputFormat and OutputFormat may retrieve data by means other than files on HDFS. For instance, Apache Cassandra ships with implementations of InputFormat and RecordReader.
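As a rough sketch using the newer org.apache.hadoop.mapreduce API (the class names are illustrative, not part of Hadoop), a custom format that treats each whole file as a single record could look like this:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Treats each input file as a single (NullWritable, file contents) record.
public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // each file becomes exactly one split
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }
}

class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {
    private FileSplit split;
    private TaskAttemptContext context;
    private final BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
        this.split = (FileSplit) split;
        this.context = context;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (processed) {
            return false;
        }
        // Read the entire file backing this split into the value buffer.
        byte[] contents = new byte[(int) split.getLength()];
        Path file = split.getPath();
        FileSystem fs = file.getFileSystem(context.getConfiguration());
        try (FSDataInputStream in = fs.open(file)) {
            IOUtils.readFully(in, contents, 0, contents.length);
        }
        value.set(contents, 0, contents.length);
        processed = true;
        return true;
    }

    @Override
    public NullWritable getCurrentKey() { return NullWritable.get(); }

    @Override
    public BytesWritable getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() { /* nothing to release */ }
}

Because isSplitable returns false, each file maps to exactly one split, and the RecordReader emits a single key-value pair per split.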
What is HDFS?
The Hadoop Distributed File System (HDFS) is a distributed file system and a central part of the Hadoop collection of software. HDFS attempts to abstract away the complexities involved in distributed file systems, including replication, high availability, and hardware heterogeneity.
The two major components of HDFS are the NameNode and a set of DataNodes. The NameNode exposes the filesystem API, persists metadata, and orchestrates replication amongst the DataNodes.
MapReduce natively makes use of HDFS’ data locality API to dispatch MapReduce tasks to run where the data lives.
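For illustration, a client typically talks to HDFS through the org.apache.hadoop.fs.FileSystem API. A minimal sketch of writing and then reading a file (the path is made up, and fs.defaultFS is assumed to be configured in core-site.xml):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWriteExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS (the NameNode address) from the configuration on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/tmp/example.txt"); // illustrative path

        // Write: the client asks the NameNode for target DataNodes,
        // then streams the data to them directly.
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read: the client gets block locations from the NameNode,
        // then reads the blocks from the DataNodes.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
    }
}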
What read and write consistency guarantees does HDFS provide?
Even though data is distributed amongst multiple DataNodes, the NameNode is the central authority for file metadata and replication (and, as a result, a single point of failure). The configuration parameter dfs.namenode.replication.min defines the minimum number of replicas a block must be written to in order for the write to be reported as successful.
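For example, an administrator might set this in hdfs-site.xml (the value of 2 is purely illustrative):

<property>
  <name>dfs.namenode.replication.min</name>
  <value>2</value>
  <description>Minimum number of replicas required before a write is acknowledged as successful.</description>
</property>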
What is the MapReduce programming paradigm and how can it be used to design parallel programs?
MapReduce is a programming model for implementing parallel programs that run on a distributed set of machines. The similarly named “Hadoop MapReduce” is an implementation of the MapReduce model.
Input and output data in MapReduce are modeled as records of key-value pairs.
Central to MapReduce are the map and reduce programs, reminiscent of map and reduce in functional programming. They transform data in two phases, each running in parallel and linearly scalable.
The map function takes each key-value pair and outputs a list of key-value pairs. The reduce function receives, for each key, an aggregate of all values emitted for that key across all map invocations and reduces them to a single final value.
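As a minimal sketch of these two phases (class names are illustrative), the canonical word-count job written against Hadoop’s Java MapReduce API:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: emit (word, 1) for every word in an input line.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(line.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}

// Reduce phase: sum all counts emitted for a given word.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));
    }
}

The framework shuffles the mapper output so that all counts for the same word arrive at a single reduce invocation.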
MapReduce integrates with HDFS to provide data locality for the data it processes. For sufficiently large data, it is better to send the map or reduce program to run where the data lives than to bring the data to the program.
Hadoop’s implementation of MapReduce provides native support for the JVM runtime and extended support for other runtimes communicating via standard in/out.
What common data serialization formats are used to store data in HDFS and what are their properties?
HDFS can store any type of file regardless of format; however, certain properties make some file formats better suited for distributed computation.
HDFS organises and distributes files in blocks of fixed size. For example, given a block size of 128MB, a 257MB file is split into three blocks. Records at block boundaries, as a result, may be split. File formats designed to be consumed when split, also called “splittable,” include “sync markers” between groups of records so that any contiguous chunk of the file can be consumed. Furthermore, compression may be desired in conjunction with splittability.
Support for compression is particularly important because it trades off IO and CPU resources. A compressed file is quicker to load from disk but takes extra time to decompress.
CSV files, for instance, are splittable since they include a “line separator” between records. However, they are not suitable for binary data, and they have no built-in compression support: compressing a whole CSV file with a common codec such as gzip makes it non-splittable.
The SequenceFile format, native to the Hadoop ecosystem, is a binary format that stores key-value records, is splittable, and supports compression at the block and record levels.
Apache Avro, a data serialization and RPC framework, defines the Avro Object Container File format that stores Avro-encoded records. It is both splittable and compressible. It also has a flexible schema definition language, which contributes to its wide adoption.
The Parquet file format, another Apache project, is a columnar format: values belonging to the same column are stored together, which allows efficient compression and reading only the columns a query needs.
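As a small sketch (the output path and key/value types here are arbitrary), writing a block-compressed SequenceFile with the Java API looks roughly like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("/tmp/example.seq"); // illustrative path

        // Block compression groups many records before compressing them,
        // which preserves splittability via sync markers between blocks.
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(path),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(IntWritable.class),
                SequenceFile.Writer.compression(CompressionType.BLOCK))) {
            writer.append(new Text("alpha"), new IntWritable(1));
            writer.append(new Text("beta"), new IntWritable(2));
        }
    }
}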
What availability guarantees does HDFS provide?
HDFS relies on the NameNode to store metadata about which DataNodes hold each block. Since the NameNode runs on a single node, it is a single point of failure, and its failure makes HDFS unavailable.
To achieve high availability, a Standby NameNode can be configured as a failover target. The Active NameNode streams a log of mutations to a group of JournalNodes, from which the Standby NameNode applies the latest changes to its own copy of the filesystem metadata.
Automatic failover between the Active and Standby NameNodes can be configured by maintaining an ephemeral lock in a ZooKeeper quorum. A failover controller process running alongside each NameNode is responsible for checking that NameNode’s health, maintaining the ephemeral lock, and executing a fencing mechanism that ensures the previously Active NameNode really does act passively after a failover.
What’s the purpose of Hadoop Streaming and how does it work?
Hadoop Streaming is an extension of Hadoop’s MapReduce API that makes it possible for programs running in runtimes other than the JVM to act as map and reduce programs. Hadoop Streaming defines an interface in which data is sent and received via the standard output and standard input streams provided by the operating system (hence its name).
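A typical invocation might look like the following (the input/output paths and the map.py/reduce.py scripts are made up, and the exact location of the streaming jar varies by distribution):

hadoop jar hadoop-streaming.jar \
    -input /data/input \
    -output /data/output \
    -mapper map.py \
    -reducer reduce.py \
    -file map.py \
    -file reduce.py

The -file options ship the scripts to the cluster; each task then pipes its input records to the script’s standard input and collects key-value pairs from its standard output.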
What is speculative execution and when can it be used?
A MapReduce program may translate into many invocations of mapper and reducer tasks on different HDFS DataNodes. If a task is slow to respond, MapReduce “speculatively” launches a duplicate attempt of the same task on another node, since the first node may be overloaded or faulty; whichever attempt finishes first is used and the other is killed.
For speculative execution to work correctly, tasks need to have no side effects; or if they do they need to be “idempotent.” A side-effect-free task is one that besides producing the expected output, does not mutate any external state (such as writing into a database). Idempotence in this context means that if a side effect is repeatedly applied (due to speculative execution), it would not change the end result. Nevertheless, side effects are generally undesirable for a MapReduce task regardless of speculative execution.
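When a job does have non-idempotent side effects, speculative execution can be disabled. A minimal sketch using the MRv2 property names (the job name is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class NoSpeculationJobSetup {
    public static Job createJob() throws Exception {
        Configuration conf = new Configuration();
        // Disable speculative task attempts for the map and reduce phases.
        conf.setBoolean("mapreduce.map.speculative", false);
        conf.setBoolean("mapreduce.reduce.speculative", false);
        return Job.getInstance(conf, "job-with-side-effects");
    }
}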
What is the “small files problem” with Hadoop?
NameNode is the registry for all metadata in HDFS. The metadata, although journaled on disk, is served from memory and is therefore bounded by the NameNode’s heap. Every file, directory, and block is represented as an in-memory object (commonly estimated at roughly 150 bytes each), so storing a very large number of files that are much smaller than the block size inflates the metadata out of proportion to the data actually stored. NameNode, being a Java application, runs on the JVM, which cannot operate efficiently with very large heap allocations, so the NameNode becomes the bottleneck long before the cluster runs out of disk space.
Explain rack awareness in Hadoop.
HDFS replicates blocks onto multiple machines. In order to have higher fault tolerance against rack failures (network or physical), HDFS is able to distribute replicas across multiple racks.
Hadoop obtains network topology information either by invoking a user-defined script or by loading a Java class that implements the DNSToSwitchMapping interface. It is the administrator’s responsibility to choose the method, set the corresponding configuration, and provide the script or class implementation.
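As a sketch of the Java route (the rack-naming rule below is invented for illustration; the interface shown is the Hadoop 2+ version, and the class would be registered via the net.topology.node.switch.mapping.impl property):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.net.DNSToSwitchMapping;

// Maps host names to rack paths so HDFS can spread replicas across racks.
public class SubnetRackMapping implements DNSToSwitchMapping {

    @Override
    public List<String> resolve(List<String> names) {
        List<String> racks = new ArrayList<>(names.size());
        for (String name : names) {
            // Illustrative rule: hosts named like "node-<rack>-<id>" map to /rack-<rack>.
            String[] parts = name.split("-");
            racks.add(parts.length >= 2 ? "/rack-" + parts[1] : "/default-rack");
        }
        return racks;
    }

    @Override
    public void reloadCachedMappings() {
        // No cache to reload in this sketch.
    }

    @Override
    public void reloadCachedMappings(List<String> names) {
        // No cache to reload in this sketch.
    }
}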
There is more to interviewing than tricky technical questions, so these are intended merely as a guide. Not every “A” candidate worth hiring will be able to answer them all, nor does answering them all guarantee an “A” candidate. At the end of the day, hiring remains an art, a science — and a lot of work.
Looking for Hadoop Developers? Check out Toptal’s Hadoop developers.
Ghassan Hallaq
Freelance Hadoop Developer
Gus is passionate and curious about the latest tech trends and makes sure to be up-to-date with all useful tools, such as Java, Scala, JavaScript, TypeScript, Python, and Rust, to name a few. He aims to bring new solutions to today’s challenges in the realms of data warehouses and pipelines, eCommerce, eBPF, cryptocurrency, NFT, dApp, and Blockchain.
Sung Jun (Andrew) Kim
Freelance Hadoop Developer
As a highly effective technical leader with over 20 years of experience, Andrew specializes in data: integration, conversion, engineering, analytics, visualization, science, ETL, big data architecture, analytics platforms, and cloud architecture. He has an array of skills in building data platforms, analytic consulting, trend monitoring, data modeling, data governance, and machine learning.
Abhimanyu Veer Aditya
Freelance Hadoop Developer
Abhimanyu is a machine learning expert with 15 years of experience creating predictive solutions for business and scientific applications. He’s a cross-functional technology leader, experienced in building teams and working with C-level executives. Abhimanyu has a proven technical background in computer science and software engineering with expertise in high-performance computing, big data, algorithms, databases, and distributed systems.
Toptal Connects the Top 3% of Freelance Talent All Over The World.
Join the Toptal community.