10 Essential Hadoop Interview Questions

Toptal sourced essential questions that the best Hadoop developers and engineers can answer. These questions are drawn from our community, and we encourage experts to submit questions and offer feedback.

Interview Questions

1.

How can one define custom input and output data formats for MapReduce jobs?

Hadoop MapReduce comes with built-in support for many common file formats, such as SequenceFile. To support a custom format, one implements the InputFormat and OutputFormat Java interfaces for reading and writing, respectively (in the newer org.apache.hadoop.mapreduce API these are abstract classes).

A class implementing InputFormat (and similarly OutputFormat) must provide the logic both for splitting the input data and for reading records out of each split. The latter lives in an implementation of the RecordReader (and, for output, RecordWriter) interface.

Implementations of InputFormat and OutputFormat may also read and write data from sources other than files on HDFS. For instance, Apache Cassandra ships with its own InputFormat and RecordReader implementations.
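
As an illustration, here is a minimal sketch of a custom input format that reads each whole file as a single (filename, contents) record. It targets the newer org.apache.hadoop.mapreduce API; the class names WholeFileInputFormat and WholeFileRecordReader are hypothetical, not part of Hadoop.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Hypothetical example: treats every input file as one (filename, contents) record.
public class WholeFileInputFormat extends FileInputFormat<Text, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false; // each file is a single record, so never split it
    }

    @Override
    public RecordReader<Text, BytesWritable> createRecordReader(InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }

    // The RecordReader holds the logic for reading records out of a split.
    public static class WholeFileRecordReader extends RecordReader<Text, BytesWritable> {
        private FileSplit fileSplit;
        private Configuration conf;
        private final Text key = new Text();
        private final BytesWritable value = new BytesWritable();
        private boolean processed = false;

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context) {
            this.fileSplit = (FileSplit) split;
            this.conf = context.getConfiguration();
        }

        @Override
        public boolean nextKeyValue() throws IOException {
            if (processed) {
                return false;
            }
            Path file = fileSplit.getPath();
            byte[] contents = new byte[(int) fileSplit.getLength()];
            FileSystem fs = file.getFileSystem(conf);
            try (FSDataInputStream in = fs.open(file)) {
                IOUtils.readFully(in, contents, 0, contents.length); // read the whole file into memory
            }
            key.set(file.toString());
            value.set(contents, 0, contents.length);
            processed = true;
            return true;
        }

        @Override public Text getCurrentKey() { return key; }
        @Override public BytesWritable getCurrentValue() { return value; }
        @Override public float getProgress() { return processed ? 1.0f : 0.0f; }
        @Override public void close() { }
    }
}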

2.

What is HDFS?

The Hadoop Distributed File System (HDFS) is a distributed file system and a central part of the Hadoop collection of software. HDFS attempts to abstract away the complexities involved in distributed file systems, including replication, high availability, and hardware heterogeneity.

The two major components of HDFS are the NameNode and a set of DataNodes. The NameNode exposes the filesystem API, persists metadata, and orchestrates block replication amongst the DataNodes, which store and serve the actual blocks.

MapReduce natively makes use of HDFS’ data locality API to dispatch MapReduce tasks to run where the data lives.
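
As a minimal sketch (the NameNode URI and file path below are hypothetical), a client program interacts with HDFS through the FileSystem API; the NameNode resolves the metadata while the file's bytes are streamed to and from DataNodes:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The NameNode address is hypothetical; it normally comes from fs.defaultFS in core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        Path path = new Path("/data/example.txt");
        try (FSDataOutputStream out = fs.create(path)) {   // write: blocks are replicated across DataNodes
            out.writeUTF("hello hdfs");
        }
        try (FSDataInputStream in = fs.open(path)) {        // read: bytes are streamed from DataNodes
            System.out.println(in.readUTF());
        }
    }
}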

3.

What read and write consistency guarantees does HDFS provide?

Even though data is distributed amongst multiple DataNodes, the NameNode is the central authority for file metadata and replication (and, as a result, a single point of failure). The configuration parameter dfs.namenode.replication.min defines the minimum number of replicas a block must be written to before the write is reported as successful.
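
For example (the values here are illustrative, and these keys normally live in hdfs-site.xml rather than in code), the relevant settings can also be supplied programmatically through org.apache.hadoop.conf.Configuration:

Configuration conf = new Configuration();
conf.setInt("dfs.replication", 3);               // target number of replicas per block
conf.setInt("dfs.namenode.replication.min", 1);  // replicas required before a write is acknowledged as successful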

4.

What is the MapReduce programming paradigm and how can it be used to design parallel programs?

MapReduce is a programming model for writing parallel programs: it provides a structured way to run a computation across a distributed set of machines. The similarly named “Hadoop MapReduce” is an implementation of the MapReduce model.

Input and output data in MapReduce are modeled as records of key-value pairs.

Central to MapReduce are the map and reduce functions, reminiscent of map and reduce in functional programming. They transform data in two phases, each of which runs in parallel and scales roughly linearly with the size of the cluster.

The map function takes each input key-value pair and outputs a list of intermediate key-value pairs. The reduce function then receives, for each intermediate key, all of the values emitted for that key across all map invocations, and reduces them to a final value.

MapReduce integrates with HDFS to provide data locality for the data it processes. For sufficiently large data, it is better to send the map or reduce program to run where the data lives than to bring the data to the program.

Hadoop’s implementation of MapReduce provides native support for the JVM runtime and extended support for other runtimes communicating via standard in/out.
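
The canonical word-count job illustrates the model. Below is a condensed sketch using the org.apache.hadoop.mapreduce API; the input and output paths are taken from the command line:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // map: (offset, line) -> list of (word, 1)
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // reduce: (word, [1, 1, ...]) -> (word, count)
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}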

5.

What common data serialization formats are used to store data in HDFS and what are their properties?

HDFS can store any type of file regardless of format; however, certain properties make some file formats better suited for distributed computation.

HDFS organizes and distributes files in blocks of fixed size. For example, given a block size of 128 MB, a 257 MB file is split into three blocks. As a result, records that fall at block boundaries may be split across blocks. File formats designed to be consumed in split form, also called “splittable” formats, include “sync markers” between groups of records so that any contiguous chunk of the file can be consumed. Furthermore, compression may be desired in conjunction with splittability.

Support for compression is particularly important because it trades off IO and CPU resources. A compressed file is quicker to load from disk but takes extra time to decompress.

CSV files, for instance, are splittable since they include a line separator between records. However, they are not suitable for binary data, and they offer no built-in compression; compressing a whole CSV file with a codec such as gzip sacrifices splittability.

The SequenceFile format, native to the Hadoop ecosystem, is a binary format that stores key-value records, is splittable, and supports compression at the block and record levels.

Apache Avro, a data serialization and RPC framework, defines the Avro Object Container File format for storing Avro-encoded records. It is both splittable and compressible, and its flexible schema definition language has helped make it widely used.

The Parquet file format, another Apache project, stores data in a columnar layout: the values belonging to each column are stored together, which compresses well and is efficient for queries that read only a subset of the columns.
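
As an example, a block-compressed SequenceFile of (Text, IntWritable) records can be written as follows (the output path is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("/data/counts.seq");  // hypothetical output path

        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(path),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(IntWritable.class),
                // BLOCK compression compresses groups of records together;
                // RECORD compression compresses each value individually.
                SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK))) {
            writer.append(new Text("hadoop"), new IntWritable(42));
        }
    }
}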

6.

What availability guarantees does HDFS provide?

HDFS relies on the NameNode to store metadata about which DataNodes hold the replicas of each block. Since the NameNode runs on a single node, it is a single point of failure, and its failure makes HDFS unavailable.

To achieve high availability, a Standby NameNode may be configured as a failover target. The Active NameNode streams its log of metadata mutations (the edit log) to a group of JournalNodes, from which the Standby NameNode picks up the latest changes to the filesystem metadata.

Automatic failover between the Active and Standby NameNodes can be configured by maintaining an ephemeral lock in a ZooKeeper quorum. A failover controller process running alongside each NameNode is responsible for checking the NameNode's health, maintaining the ephemeral lock, and executing a fencing mechanism that ensures that, upon failover, the previous Active NameNode truly stops acting as active.
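
A minimal sketch of the key settings behind this setup follows; the host names are hypothetical, and in practice these properties live in hdfs-site.xml and core-site.xml rather than being set in code through org.apache.hadoop.conf.Configuration:

Configuration conf = new Configuration();
conf.set("dfs.nameservices", "mycluster");
conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1:8020");
conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2:8020");
// Shared edit log served by a quorum of JournalNodes
conf.set("dfs.namenode.shared.edits.dir", "qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster");
// Automatic failover coordinated through ZooKeeper by the failover controllers
conf.set("dfs.ha.automatic-failover.enabled", "true");
conf.set("ha.zookeeper.quorum", "zk1:2181,zk2:2181,zk3:2181");
// Fencing ensures the previous Active NameNode cannot keep serving writes
conf.set("dfs.ha.fencing.methods", "sshfence");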

7.

What’s the purpose of Hadoop Streaming and how does it work?

Hadoop Streaming is an extension of Hadoop’s MapReduce API that makes it possible for programs that run within runtimes other than the JVM to act as map and reduce programs. Hadoop Streaming defines an interface where data can be sent and received via the standard out and standard in streams provided by operating systems (and hence its name).
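
A typical invocation looks like the following (the jar path and input/output directories are illustrative); any executable that reads records from standard in and writes them to standard out can serve as the mapper or reducer:

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -input /data/input \
    -output /data/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc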

8.

What is speculative execution and when can it be used?

A MapReduce job may translate into many invocations of map and reduce tasks on different nodes of the cluster. If a task is slow to respond, MapReduce “speculatively” launches a duplicate of the same task on another node, since the first node might be overloaded or faulty; whichever attempt finishes first wins, and the other is killed.

For speculative execution to work correctly, tasks need to have no side effects, or, if they do, the side effects need to be idempotent. A side-effect-free task is one that, besides producing the expected output, does not mutate any external state (such as writing into a database). Idempotence in this context means that if a side effect is applied repeatedly (due to speculative execution), it does not change the end result. Nevertheless, side effects are generally undesirable in a MapReduce task regardless of speculative execution.
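
Speculative execution is enabled by default and can be turned off per job. As a sketch, for a job whose tasks write to an external system, one might disable it with the standard MRv2 property keys (set here through org.apache.hadoop.conf.Configuration):

Configuration conf = new Configuration();
conf.setBoolean("mapreduce.map.speculative", false);     // no duplicate map attempts
conf.setBoolean("mapreduce.reduce.speculative", false);  // no duplicate reduce attempts
Job job = Job.getInstance(conf, "side-effecting job");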

9.

What is the “small files problem” with Hadoop?

The NameNode is the registry for all metadata in HDFS. This metadata, although journaled on disk, is served from memory and is therefore bounded by the NameNode's JVM heap: every file, directory, and block is held as an in-memory object, each on the order of 150 bytes. A very large number of small files therefore bloats the namespace (for instance, 100 million small files, each occupying its own block, amount to roughly 200 million objects and some 30 GB of heap), and a JVM cannot operate efficiently with heap allocations that large. Small files also hurt processing, since each file typically becomes its own input split and therefore its own short-lived map task.

10.

Explain rack awareness in Hadoop.

HDFS replicates blocks onto multiple machines. To provide fault tolerance against rack failures (whether network or physical), HDFS can distribute replicas across racks; the default placement policy writes one replica on the local node, a second on a node in a different rack, and a third on a different node in that same remote rack.

Hadoop obtains network topology information either by invoking a user-defined script or by loading a Java class that implements the DNSToSwitchMapping interface. It is the administrator's responsibility to choose the method, set the corresponding configuration, and provide the script or implementation.
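
A minimal sketch of the two options follows; the script path and class name are hypothetical, and these properties are normally set in core-site.xml rather than in code through org.apache.hadoop.conf.Configuration:

Configuration conf = new Configuration();
// Option 1: an external script that maps host names/IPs to rack IDs such as /datacenter1/rack1
conf.set("net.topology.script.file.name", "/etc/hadoop/conf/topology.sh");
// Option 2: a Java class implementing the DNSToSwitchMapping interface (hypothetical implementation)
conf.set("net.topology.node.switch.mapping.impl", "com.example.hadoop.MyRackResolver");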

There is more to interviewing than tricky technical questions, so these are intended merely as a guide. Not every “A” candidate worth hiring will be able to answer them all, nor does answering them all guarantee an “A” candidate. At the end of the day, hiring remains an art, a science — and a lot of work.

