12 Essential Core Java Interview Questions *
Toptal sourced essential questions that the best Core Java developers and engineers can answer. Driven from our community, we encourage experts to submit questions and offer feedback.
How do we solve the problem of adding a new method in an interface without breaking all the child classes that implement the interface?
Default methods in an interface allow us to add new functionality without breaking old code.
Before Java 8, if a new method was added to an interface, then all the implementation classes of that interface were bound to override that new method, even if they did not use the new functionality.
With Java 8, we can add the default implementation for the new method by using the default keyword before the method implementation.
Even with anonymous classes or functional interfaces, if we see that some code is reusable and we don’t want to define the same logic everywhere in the code, we can write default implementations of those and reuse them.
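As a minimal sketch (the Vehicle and Car names here are hypothetical, not from the question), a default method lets an interface grow without touching existing implementors:

```java
// Hypothetical example: Vehicle gains honk() after Car was written.
interface Vehicle {
    String name();

    // New method added later; the default body means existing
    // implementations such as Car keep compiling unchanged.
    default String honk() {
        return name() + " says beep";
    }
}

class Car implements Vehicle {
    // Written before honk() existed; still compiles and inherits honk().
    public String name() {
        return "Car";
    }
}

public class DefaultMethodDemo {
    public static String demo() {
        return new Car().honk();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // Car says beep
    }
}
```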
Oracle defines a stream like this: “A stream is a sequence of elements. Unlike a collection, it is not a data structure that stores elements. Instead, a stream carries values from a source through a pipeline.”
Let’s take an example: Say we want to find all players whose batting average is greater than or equal to 45.
Using a for loop, this might look like:
public List<Player> getPlayersWithAverageGtThn45(List<Player> allPlayers) {
    List<Player> playersWithAvgGtThn45 = new ArrayList<>();
    for (Player player : allPlayers) {
        if (player.getBattingAverage() >= 45) {
            playersWithAvgGtThn45.add(player);
        }
    }
    return playersWithAvgGtThn45;
}
Using streams instead:
public List<Player> getPlayersWithAverageGtThn45(List<Player> allPlayers) {
    return allPlayers.stream()
        .filter(player -> player.getBattingAverage() >= 45)
        .collect(Collectors.toList());
}
Advantages of a Stream-based Approach in Java
Streams are more expressive. With streams, the logic lies bundled together in 3-4 lines, unlike a for loop, where we need to go deep into the loop to understand the logic.
Streams can be parallelized to whatever extent multiple CPU cores are available. The thread life cycle methods to create parallel threads are abstracted from the developer, while in a for loop, if we want to achieve parallelism, we need to implement our own thread life cycle handling.
Streams support lazy loading. This means that the intermediate operations just create other streams, and won’t be processed until the terminal operation is called.
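The laziness can be observed directly with a small sketch (hypothetical, using peek to count how many elements actually flow through the pipeline): nothing runs until the terminal operation is invoked.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyStreamDemo {
    // Counts how many elements are actually processed by the pipeline.
    public static int processedCount(boolean callTerminalOp) {
        AtomicInteger seen = new AtomicInteger();
        Stream<Integer> pipeline = List.of(1, 2, 3, 4).stream()
                .peek(x -> seen.incrementAndGet()) // intermediate operation
                .filter(x -> x % 2 == 0);          // intermediate operation
        if (callTerminalOp) {
            pipeline.forEach(x -> {});             // terminal operation triggers processing
        }
        return seen.get();
    }

    public static void main(String[] args) {
        System.out.println(processedCount(false)); // 0: intermediate ops alone did nothing
        System.out.println(processedCount(true));  // 4: terminal op ran the whole pipeline
    }
}
```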
Now, what if the requirement was to find any player whose batting average is at least 45? Then the code would have been:
public Player getAnyPlayerWithAverageGtThn45(List<Player> allPlayers) {
    return allPlayers.stream()
        .filter(player -> player.getBattingAverage() >= 45)
        .findAny().get();
}
The above code looks like it will find all players whose average is at least 45, and then return any player out of that result. But this is not what happens. What actually happens behind the scenes is: When the terminal operation findAny() is called, the filter stream is processed, and as soon as a Player is found with an average of at least 45, that Player is returned without processing any other elements.
Short-circuiting. This is used to terminate the processing once a condition is met, so in the above example, findAny() acts as a short-circuit and terminates the processing as soon as the first Player meeting the requirement is found. This is somewhat synonymous with break in a loop.
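A quick sketch of the short-circuit in action (plain integers stand in for batting averages; the counter is just for illustration): the filter's predicate runs only until findAny() gets its first match.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ShortCircuitDemo {
    // Returns how many elements the filter examined before findAny() stopped.
    public static int elementsExamined() {
        AtomicInteger examined = new AtomicInteger();
        List.of(50, 30, 20, 10).stream()
                .filter(avg -> {
                    examined.incrementAndGet(); // count every predicate invocation
                    return avg >= 45;
                })
                .findAny(); // short-circuiting terminal operation
        return examined.get();
    }

    public static void main(String[] args) {
        // The first element (50) already matches, so only one element is examined.
        System.out.println(elementsExamined()); // 1
    }
}
```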
Disadvantages of Using Streams in Java
Performance. It has been observed that if the number of elements in the collection is not very large—in the hundreds or low thousands—then the performance of loops and streams does not differ much. But if the number of elements is in the hundreds of thousands or more, then normal loops are faster than both sequential and parallel streams.
As far as the comparison between sequential and parallel streams goes, their behavior is slightly tricky: Because parallel streams have to manage the thread lifecycle, much of their time is spent there, which makes them slower than sequential streams in many cases.
List<Player> playerList = new ArrayList<>();
float j = 1000000.0f;
for (int i = 1; i <= 1000000; i++) {
    playerList.add(new Player(j, i));
    j--;
}
long startTime = System.currentTimeMillis();
for (Player player : playerList) {
    System.out.println(player);
}
long endTime = System.currentTimeMillis();
System.out.println(endTime - startTime);
The output here is 5826 (milliseconds). For the same code using sequential streams and parallel streams instead, the times taken are 6164 and 8324 respectively.
These differences of ~300 ms and 2.5 seconds are quite significant—5.8 percent and 42.9 percent, respectively.
If the number of elements in the above code is changed from 1000000 to 10, then the time becomes 3, 64, and 63 respectively. Though the difference looks huge, remember, it’s in milliseconds. So if the number of elements is small, then streams are more intuitive and are worth the small performance hit.
Parallel streams only: Possible complications interacting with synchronous code. In the method getPlayersWithAverageGtThn45, if the code had been like this, then we would have missed some data:
public List<Player> getPlayersWithAverageGtThn45(List<Player> allPlayers) {
    List<Player> players = new ArrayList<>();
    allPlayers.parallelStream().forEach(player -> {
        if (player.getBattingAverage() >= 45) {
            players.add(player);
        }
    });
    return players;
}
So when we mutate a shared, non-thread-safe collection like ArrayList from inside a parallel stream, it can be very dangerous: elements may be silently lost.
Why shouldn’t we use mutable objects as keys in a HashMap?
It’s simply because if the key object is mutable, then it is very likely that its hashcode will change. If the hashcode changes, then the search for the bucket using the new hashcode might not give the same bucket as the previous hashcode did, and hence, we won’t be able to find the element.
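A minimal sketch of the lost-entry effect (MutableKey is a hypothetical class written for illustration): after mutating a key already stored in the map, a lookup with that very same object fails.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class MutableKeyDemo {
    // Hypothetical key whose hashCode depends on a mutable field.
    static class MutableKey {
        int id;
        MutableKey(int id) { this.id = id; }
        @Override public int hashCode() { return Objects.hash(id); }
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).id == id;
        }
    }

    public static boolean foundAfterMutation() {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey key = new MutableKey(1);
        map.put(key, "value");
        key.id = 2; // hashcode changes; the entry still sits in the old bucket
        return map.containsKey(key); // lookup hashes to a different bucket
    }

    public static void main(String[] args) {
        System.out.println(foundAfterMutation()); // false: the entry is effectively lost
    }
}
```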
What’s the difference between a raw type collection—e.g. Collection x—and an unbounded wildcard type collection—e.g. Collection<?> x?
Collection x means it can hold values of any type. For example:
List l = new ArrayList();
l.add("toptal");
l.add(1);
First of all, we should understand that this is not the best way to define a list, because operations on an element of the list are normally performed by assuming that the list is made of elements of only a single data type. If there are multiple types among the elements, then we have to place an instanceof check in the code, and the code does not remain clean—nor fast. (This is why we have generics.)
public void wildCard(List<?> list) {
    list.stream().forEach(System.out::println);
}
When the above code is used, it means we can pass any single type of list—List<String>, List<Integer>, or any other type, but only one type. This safeguards us against the above downsides, and is used when the operations on the elements do not depend on their exact type—e.g., from the Object class we know we can use methods like toString, equals, etc.
Solve the producer-consumer problem using the following techniques:
- Wait and notify
- BlockingQueue
The producer-consumer problem is a multi-process synchronization problem where a producer process is trying to add an element to a fixed-size, shared buffer and a consumer process reads from that buffer.
Now the producer has to make sure that it does not add an element if the buffer is full, and should wait until an element is removed from it. Similarly, the consumer should not remove an element if the buffer is empty, and should wait until something is added to the buffer.
Using Wait and Notify
The code is pretty self-explanatory:
import java.util.Queue;

class Producer extends Thread {
    private int maxSize;
    private Queue<String> buffer;

    public Producer(Queue<String> buffer, int maxSize, String name) {
        super(name);
        this.buffer = buffer;
        this.maxSize = maxSize;
    }

    @Override
    public void run() {
        while (true) {
            try {
                synchronized (buffer) {
                    // The full-buffer check must happen while holding the lock,
                    // otherwise the size can change between the check and the wait.
                    while (buffer.size() == maxSize) {
                        System.out.println("Producer waiting for consumer to pick up an element");
                        buffer.wait();
                    }
                    String value = "Toptal";
                    System.out.println("Producing: " + value);
                    buffer.add(value);
                    buffer.notifyAll();
                }
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
import java.util.Queue;

class Consumer extends Thread {
    private Queue<String> buffer;
    private int maxSize;

    public Consumer(Queue<String> buffer, int maxSize, String name) {
        super(name);
        this.buffer = buffer;
        this.maxSize = maxSize;
    }

    @Override
    public void run() {
        while (true) {
            try {
                synchronized (buffer) {
                    // Check-and-wait under the same lock, for the same reason as in Producer.
                    while (buffer.isEmpty()) {
                        System.out.println("Waiting for producer to put something in the buffer");
                        buffer.wait();
                    }
                    System.out.println("Consuming: " + buffer.remove());
                    buffer.notifyAll();
                }
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
import java.util.LinkedList;
import java.util.Queue;

public class ProducerConsumer {
    public static void main(String args[]) {
        int size = 7;
        Queue<String> buffer = new LinkedList<>();
        Thread producer = new Producer(buffer, size, "I am the producer");
        Thread consumer = new Consumer(buffer, size, "Myself, consumer");
        consumer.start();
        producer.start();
    }
}
On running the main method, the result is something like this:
Waiting for producer to put something in the buffer
Producing: Toptal
Producing: Toptal
Consuming: Toptal
Consuming: Toptal
Waiting for producer to put something in the buffer
Producing: Toptal
Consuming: Toptal
Waiting for producer to put something in the buffer
Producing: Toptal
Consuming: Toptal
Waiting for producer to put something in the buffer
Using BlockingQueue
The code should look similar to the following:
import java.util.concurrent.BlockingQueue;

class Producer extends Thread {
    private int maxSize;
    private BlockingQueue<String> buffer;

    public Producer(BlockingQueue<String> buffer, int maxSize, String name) {
        super(name);
        this.buffer = buffer;
        this.maxSize = maxSize;
    }

    @Override
    public void run() {
        while (true) {
            try {
                String value = "Toptal";
                System.out.println("Producing: " + value);
                buffer.put(value);
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
import java.util.concurrent.BlockingQueue;

class Consumer extends Thread {
    private BlockingQueue<String> buffer;
    private int maxSize;

    public Consumer(BlockingQueue<String> buffer, int maxSize, String name) {
        super(name);
        this.buffer = buffer;
        this.maxSize = maxSize;
    }

    @Override
    public void run() {
        while (true) {
            try {
                System.out.println("Consuming: " + buffer.take());
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumer {
    public static void main(String args[]) {
        int size = 7;
        // Bound the queue to the buffer size so put() blocks when it is full
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>(size);
        Thread producer = new Producer(buffer, size, "I am the producer");
        Thread consumer = new Consumer(buffer, size, "Myself, consumer");
        consumer.start();
        producer.start();
    }
}
Note that the code is simplified to a great extent when we use a BlockingQueue, because all the synchronization overhead is taken care of by the put() and take() methods in the Producer and Consumer classes.
What’s the difference between a synchronized collection and a concurrent collection?
Though both are used for thread safety, the performance of a concurrent collection is better than that of a synchronized collection. This is because the latter acquires a lock on the full collection, so all reads and writes are stopped. A concurrent collection instead divides the whole collection into segments, and locks are acquired on a particular segment while other segments remain open for reads and writes.
For example, if multiple threads are reading and writing values from a hashmap, then one thing is for sure: We do want to use a concurrent or synchronized collection. But which kind depends upon our use case.
Synchronized collection: We use this when we want to iterate through the collection and be sure that nothing changes while iterating. Time is not a constraint here.
Concurrent collection: We use this when multiple threads are reading and writing to a collection, and apart from thread safety, we need to perform reads and writes as fast as we can.
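One visible behavioral difference is what happens when the collection is modified mid-iteration. The sketch below (key names are arbitrary) contrasts the fail-fast iterator of a synchronized wrapper around HashMap with the weakly consistent iterator of ConcurrentHashMap:

```java
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CollectionChoiceDemo {
    // Returns true if putting while iterating threw ConcurrentModificationException.
    public static boolean throwsDuringIteration(Map<String, Integer> map) {
        for (int i = 0; i < 10; i++) map.put("k" + i, i);
        try {
            for (String key : map.keySet()) {
                if (!key.startsWith("extra")) {
                    map.put("extra-" + key, 0); // structural change mid-iteration
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> sync = Collections.synchronizedMap(new HashMap<>());
        Map<String, Integer> conc = new ConcurrentHashMap<>();
        System.out.println(throwsDuringIteration(sync)); // true: fail-fast iterator
        System.out.println(throwsDuringIteration(conc)); // false: weakly consistent iterator
    }
}
```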
Why is it advisable to use a thread pool executor custom implementation, rather than a fixed or cached thread pool executor from a built-in service?
A thread pool serves the purpose of improving performance at thread creation time. It does this by reusing threads from the pool.
Out of the many types of thread pool provided by Java, most of the time we tend to use these three:
- Executors.newFixedThreadPool(...)
- Executors.newCachedThreadPool(...)
- A custom implementation using ThreadPoolExecutor
FixedThreadPool will always keep a fixed number of threads, even if they are unused. So even if our application at peak time needs N threads (where N is very large), then even at times of lowest traffic, we will still have N threads open.
CachedThreadPool starts with zero threads and can grow up to Integer.MAX_VALUE threads depending upon usage. At times of low traffic, it will keep only as many threads as the application needs at that time. But if the application demands low latency and is highly loaded, this may cause us to run out of memory, increasing latency as the OS starts paging to the hard drive.
A custom thread pool using ThreadPoolExecutor provides better control, as we can define the number of initial threads (core pool size) and the maximum number of threads the pool can create. Even the time-to-live for a reused thread can be controlled.
So because you are the owner of the application and you know the ins and outs of your app, it’s better to have control in your hands rather than leaving it to the underlying base implementations.
To hold a new task when the number of threads is greater than corePoolSize, a BlockingQueue is used. Arrays or Lists are not used because, unlike them, a BlockingQueue is thread-safe.
It also solves the producer-consumer problem: If the queue is full, the pool will not try to add new tasks to it, but will reject them via a RejectedExecutionHandler, where they can be caught and handled appropriately.
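A sketch of such a custom pool follows; the specific sizes, keep-alive, and queue capacity are illustrative choices, not recommendations:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    // Builds a pool with an explicit core size, max size, keep-alive,
    // a bounded task queue, and an abort policy for rejected tasks.
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                      // core pool size
                4,                      // maximum pool size
                30, TimeUnit.SECONDS,   // time-to-live for idle non-core threads
                new ArrayBlockingQueue<>(10),          // bounded task queue
                new ThreadPoolExecutor.AbortPolicy()); // reject with an exception when saturated
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newPool();
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```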
The volatile Keyword
Sometimes a variable is shared among a number of threads. When there is only one thread that can write to a variable and all others read from it, then volatile guarantees that all the threads reading the value of the variable will get the latest value. So when there are multiple ChangeListeners for a variable and only one thread performing write actions, this is the best choice.
For example, if there is one thread continuously updating the current score and many threads reading it (T1 for calculating the average, T2 for checking new records based on the current score, etc.), then the volatile keyword should be used when defining the currentScore variable.
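A minimal sketch of that scenario (runMatch and the 100-run limit are hypothetical): one writer updates a volatile score while a reader polls it, and the volatile guarantee is what lets the reader observe the final write and terminate.

```java
public class VolatileScoreDemo {
    // One writer thread updates currentScore; readers always see the latest value.
    static volatile int currentScore = 0;

    public static int runMatch() {
        Thread writer = new Thread(() -> {
            for (int run = 1; run <= 100; run++) {
                currentScore = run; // each volatile write becomes visible to readers
            }
        });
        Thread reader = new Thread(() -> {
            // A reader (e.g. a hypothetical average calculator) polls the score.
            while (currentScore < 100) {
                Thread.onSpinWait();
            }
        });
        writer.start();
        reader.start();
        try {
            writer.join();
            reader.join(); // terminates because the write of 100 is guaranteed visible
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return currentScore;
    }

    public static void main(String[] args) {
        System.out.println(runMatch()); // 100
    }
}
```

Without volatile, the reader's loop could legally spin forever on a stale cached value.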
The ThreadLocal Class
When we know that threads will use the same type of variable to perform a task, but each thread should hold its own copy of the variable, then either we can create a new local variable every time we create a thread, or we can mark a variable as ThreadLocal at the global level and pass it to the thread. This ensures that every thread has a value of the variable that is local to it.
For example:
public class ThreadlocalExample {
    public static class ThreadRunnable implements Runnable {
        private ThreadLocal<String> threadLocal = new ThreadLocal<>();

        @Override
        public void run() {
            threadLocal.set("demo" + Thread.currentThread().getName());
            System.out.println(threadLocal.get());
        }
    }

    public static void main(String[] args) {
        ThreadRunnable threadRunnable = new ThreadRunnable();
        Thread thread1 = new Thread(threadRunnable);
        Thread thread2 = new Thread(threadRunnable);
        // start() will call the run() method on each thread
        thread1.start();
        thread2.start();
    }
}
Why should you prefer coding to interfaces rather than implementations? Give examples.
Two main reasons:
- Interfaces are contracts, so by merely looking at the interfaces and the method declarations, one can get a feel for what the application developer’s intent is, and how objects interact.
- Loose coupling: For example, say a method exposed through a JAR returns a List. Internally the method uses an ArrayList, but to the client of the JAR, the implementation details are hidden; all the client knows is that it gets a List in return.
If in future versions of the JAR the implementation changes to a LinkedList, then no client-side change is needed, as the implementation details are hidden from the client.
Say this is the original implementation:
public interface Names {
    List<String> getNames();
}

public class AnimalNames implements Names {
    @Override
    public List<String> getNames() {
        List<String> l = new ArrayList<>();
        l.add("Zebra");
        l.add("Lion");
        return l;
    }
}
Now at any time in the future, the implementation of AnimalNames can be changed to use a LinkedList as shown below, and no client-side change will be needed:
public class AnimalNames implements Names {
    @Override
    public List<String> getNames() {
        List<String> l = new LinkedList<>();
        l.add("Zebra");
        l.add("Lion");
        return l;
    }
}
There are three major concepts involved with functional interfaces: anonymous classes, lambda expressions, and java.util.function.
Normally we might use an anonymous class if we know that:
- An interface has only one or two methods,
- The logic in the method(s) is not to be reused, and
- We do not want to create a new class every time for such small logic
For example, say we want to get all the qualifying teams for the World Cup. That is decided by the world rank of the team: If the worldRank of the team is less than or equal to 10, then the team has qualified.
public class Team {
    private String name;
    private Integer worldRank;

    public Integer getWorldRank() {
        return worldRank;
    }
}

public interface Qualifier {
    boolean isWorldCupQualifier(Team team);
}
Then somewhere in a class where we want to get all the qualifying teams, we would write:
public List<Team> getQualifyingTeamsForWorldCup(List<Team> teams, Qualifier qualifier) {
    List<Team> qualifiedTeams = new ArrayList<Team>();
    for (Team team : teams) {
        if (qualifier.isWorldCupQualifier(team)) {
            qualifiedTeams.add(team);
        }
    }
    return qualifiedTeams;
}
Now using an anonymous class, we would add the implementation of the function isWorldCupQualifier in the method argument, something like this:
getQualifyingTeamsForWorldCup(teams, new Qualifier() {
    public boolean isWorldCupQualifier(Team team) {
        if (team.getWorldRank() <= 10) {
            return true;
        }
        return false;
    }
});
As we can see, the boilerplate—using new Qualifier() and then overriding the function via an anonymous class—makes the code less readable and increases the number of lines.
But the above snippet can be written in a more elegant and readable way, thanks to lambdas and functional interfaces.
A functional interface is an interface with only one abstract method and any number of default implementations. It is usually denoted by the @FunctionalInterface annotation (though this is not compulsory) to get compile-time errors if someone tries to define more than one abstract method in the interface.
Since a functional interface has only one abstract method, whatever lambda expression is written as the second argument is mapped to that abstract method declaration.
Now the interface changes to:
@FunctionalInterface
public interface Qualifier {
    boolean isWorldCupQualifier(Team team);
}
Java provides some built-in functional interfaces in java.util.function. They can even be used to omit the declaration of the interface. Examples include Predicate and Supplier:
@FunctionalInterface
public interface Predicate<T> {
    boolean test(T t);
}

@FunctionalInterface
public interface Supplier<T> {
    T get();
}
Now in our use case, we can reduce our code even more by using the above Predicate interface in place of our Qualifier. The name Qualifier is replaced by Predicate<Team>, and the function boolean isWorldCupQualifier(Team team); is replaced by boolean test(Team t);.
The new code becomes:
public List<Team> getQualifyingTeamsForWorldCup(List<Team> teams, Predicate<Team> predicate) {
    List<Team> qualifiedTeams = new ArrayList<Team>();
    for (Team team : teams) {
        if (predicate.test(team)) {
            qualifiedTeams.add(team);
        }
    }
    return qualifiedTeams;
}
And somewhere in the code where we want to get all the qualifying teams, we can simply write:
getQualifyingTeamsForWorldCup(teams, (Team team) -> team.getWorldRank() <= 10);
So if we know that the functional interface we are declaring matches one already defined in java.util.function, then we can use that interface and reduce the boilerplate even more.
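As a small standalone sketch of that substitution (the names qualifies and defaultName are illustrative), Predicate stands in for the hand-rolled Qualifier, and Supplier shows another built-in functional interface in use:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

public class BuiltInFunctionalDemo {
    // Predicate<Integer> replaces the hand-rolled single-method Qualifier interface.
    static final Predicate<Integer> QUALIFIES = worldRank -> worldRank <= 10;

    // Supplier defers producing a value until get() is called.
    static final Supplier<String> DEFAULT_NAME = () -> "Unnamed Team";

    public static boolean qualifies(int worldRank) {
        return QUALIFIES.test(worldRank);
    }

    public static String defaultName() {
        return DEFAULT_NAME.get();
    }

    public static void main(String[] args) {
        System.out.println(qualifies(3));   // true
        System.out.println(qualifies(25));  // false
        System.out.println(defaultName()); // Unnamed Team
    }
}
```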
Before jumping into the answer, we should know the difference between a shallow copy and a deep copy.
Shallow copy: Any changes made to a cloned object will be reflected in the original object and vice-versa.
Deep copy: Any changes made to a cloned object will not be reflected in the original object nor vice-versa.
Shallow Copy Example
public class Player {
    Float battingAverage;
    Integer worldRank;

    public Float getBattingAverage() {
        return battingAverage;
    }

    public void setBattingAverage(Float battingAverage) {
        this.battingAverage = battingAverage;
    }

    public Integer getWorldRank() {
        return worldRank;
    }

    public void setWorldRank(Integer worldRank) {
        this.worldRank = worldRank;
    }

    @Override
    public String toString() {
        return "Player{" +
            "battingAverage=" + battingAverage +
            ", worldRank=" + worldRank +
            '}';
    }
}
import java.util.LinkedList;

public class Test {
    public static void main(String[] args) throws CloneNotSupportedException {
        LinkedList<Player> players1 = new LinkedList<>();
        Player p1 = new Player();
        p1.setBattingAverage(46.7f);
        p1.setWorldRank(4);
        Player p2 = new Player();
        p2.setBattingAverage(56.9f);
        p2.setWorldRank(1);
        players1.add(p1);
        players1.add(p2);
        LinkedList<Player> players2 = new LinkedList<>();
        for (Player p : players1) {
            players2.add(p);
        }
        System.out.println(players1);
        System.out.println(players2);
        players2.get(0).setWorldRank(5);
        System.out.println(players1);
        System.out.println(players2);
    }
}
The result:
[Player{battingAverage=46.7, worldRank=4}, Player{battingAverage=56.9, worldRank=1}]
[Player{battingAverage=46.7, worldRank=4}, Player{battingAverage=56.9, worldRank=1}]
[Player{battingAverage=46.7, worldRank=5}, Player{battingAverage=56.9, worldRank=1}]
[Player{battingAverage=46.7, worldRank=5}, Player{battingAverage=56.9, worldRank=1}]
This shows that changes to players2
affect players1
.
Deep Copy Example
To allow a deep copy—such that changes to the two lists are independent of each other—we implement Cloneable in Player and override the clone method:
public class Player implements Cloneable {
    Float battingAverage;
    Integer worldRank;

    public Float getBattingAverage() {
        return battingAverage;
    }

    public void setBattingAverage(Float battingAverage) {
        this.battingAverage = battingAverage;
    }

    public Integer getWorldRank() {
        return worldRank;
    }

    public void setWorldRank(Integer worldRank) {
        this.worldRank = worldRank;
    }

    @Override
    public String toString() {
        return "Player{" +
            "battingAverage=" + battingAverage +
            ", worldRank=" + worldRank +
            '}';
    }

    protected Object clone() throws CloneNotSupportedException {
        Player clone = (Player) super.clone();
        return clone;
    }
}
import java.util.LinkedList;

public class Test {
    public static void main(String[] args) throws CloneNotSupportedException {
        LinkedList<Player> players1 = new LinkedList<>();
        Player p1 = new Player();
        p1.setBattingAverage(46.7f);
        p1.setWorldRank(4);
        Player p2 = new Player();
        p2.setBattingAverage(56.9f);
        p2.setWorldRank(1);
        players1.add(p1);
        players1.add(p2);
        LinkedList<Player> players2 = new LinkedList<>();
        for (Player p : players1) {
            players2.add((Player) p.clone());
        }
        System.out.println(players1);
        System.out.println(players2);
        players2.get(0).setWorldRank(5);
        System.out.println(players1);
        System.out.println(players2);
    }
}
The result makes the lists’ independence clear:
[Player{battingAverage=46.7, worldRank=4}, Player{battingAverage=56.9, worldRank=1}]
[Player{battingAverage=46.7, worldRank=4}, Player{battingAverage=56.9, worldRank=1}]
[Player{battingAverage=46.7, worldRank=4}, Player{battingAverage=56.9, worldRank=1}]
[Player{battingAverage=46.7, worldRank=5}, Player{battingAverage=56.9, worldRank=1}]
These sample questions are intended as a starting point for your interview process. If you need additional help, explore our hiring resources—or let Toptal find the best developers, designers, marketing experts, product managers, project managers, and finance experts for you.
Doug Sparling
Doug is driven by a need to improve himself, his colleagues, and the products they build. He has experience with many web, back-end, and mobile platforms, most prominently those that are Java-based, such as JVM or Android. Doug's comfortable working at multiple levels at once, from twiddling bits on the wire or providing technical guidance to teams and C-suites alike.
Daniel Campos
Daniel is a full-stack software engineer with extensive experience designing and implementing large-scale web-based applications using Java and JavaScript. Lately, he's been working mainly in a microservices architecture leveraging the power of the cloud—particularly the AWS environment. Daniel is a driven individual, a team player, an enthusiastic learner, and, most importantly, a passionate professional.
Debadutta Panda
Debadutta is a seasoned professional with 17 years of expertise in Adobe Experience Manager and application development. With Agile certification and vast experience in banking and financial services, he excels in crafting robust architectures, leveraging Java, and optimizing core frameworks. Combining diligence and adaptability, Debadutta consistently drives towards organizational goals.