
Introduction to Python Microservices with Nameko

The microservices architectural pattern is an architectural style that is growing in popularity, given its flexibility and resilience. In this article, Toptal Freelance Python Developer Guilherme Caminha will focus on building a proof of concept microservices application in Python using Nameko, a microservices framework.


The microservices architectural pattern is an architectural style that is growing in popularity, given its flexibility and resilience. Together with technologies such as Kubernetes, it is now easier than ever to bootstrap an application using a microservices architecture.

According to a classic article from Martin Fowler’s blog, the Microservices architectural style can be summarized as:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery.

In other words, an application following a microservices architecture is composed of several independent and dynamic services that communicate with each other using a communication protocol. It is common to use HTTP (and REST), but as we’ll see, we can use other types of communication protocols such as RPC (Remote Procedure Call) over AMQP (Advanced Message Queuing Protocol).

The microservices pattern can be thought of as a specific case of SOA (service-oriented architecture). In SOA, however, it is common to use an ESB (enterprise service bus) to manage communication between services. ESBs are usually highly sophisticated and include functionalities for complex message routing and the application of business rules. In microservices, it is more common to employ an alternative approach: “smart endpoints and dumb pipes,” meaning that the services themselves should contain all the business logic and complexity (high cohesion), while the connection between the services should be as simple as possible (high decoupling). A service does not necessarily need to know which other services will communicate with it. This is a separation of concerns applied at the architectural level.

Another aspect of microservices is that there is no enforcement about which technologies should be used within each service. You should be able to write a service with any software stack that can communicate with the other services. Each service has its own lifecycle management as well. All of that means that in a company, it is possible to have teams work on separate services, with different technologies and even management methodologies. Each team will be concerned with business capabilities, helping build a more agile organization.

Python Microservices

Having these concepts in mind, in this article we will focus on building a proof of concept microservices application using Python. For that, we will use Nameko, a Python microservices framework. It has RPC over AMQP built in, allowing you to easily communicate between your services. It also has a simple interface for HTTP queries, which we’ll use in this tutorial. However, for writing microservices that expose an HTTP endpoint, it is recommended that you use another framework, such as Flask. To call Nameko methods over RPC using Flask, you can use flask_nameko, a wrapper built just for interoperating Flask with Nameko.

Setting the Basic Environment

Let’s start by running the simplest possible example, extracted from the Nameko website, and expand it for our purposes. First, you will need Docker installed. We will use Python 3 in our examples, so make sure you have it installed as well. Then, create a Python virtualenv and run $ pip install nameko.

To run Nameko, we need the RabbitMQ message broker. It will be responsible for the communication between our Nameko services. Don’t worry, though, as you don’t need to install one more dependency on your machine. With Docker, we can simply download a pre-configured image, run it, and when we’re done simply stop the container. No daemons, apt-get or dnf install.

Python Microservices with Nameko talking to a RabbitMQ broker

Start a RabbitMQ container by running $ docker run -p 5672:5672 --hostname nameko-rabbitmq rabbitmq:3 (you might need sudo to do that). This will start a Docker container using the most recent RabbitMQ 3.x image and expose it on the default port, 5672.

Hello World with Microservices

Go ahead and create a file called hello.py with the following content:

from nameko.rpc import rpc

class GreetingService:
    name = "greeting_service"

    @rpc
    def hello(self, name):
        return "Hello, {}!".format(name)

Nameko services are classes. These classes expose entry points, which are implemented as extensions. The built-in extensions include the ability to create entry points that represent RPC methods, event listeners, HTTP endpoints or timers. There are also community extensions that can be used to interact with PostgreSQL, Redis, etc. It is also possible to write your own extensions.

Let’s go ahead and run our example. If you have RabbitMQ running on the default port, simply run $ nameko run hello. It will find RabbitMQ and connect to it automatically. Then, to test our service, run $ nameko shell in another terminal. This will create an interactive shell which will connect to that same RabbitMQ instance. The great thing is, by using RPC over AMQP, Nameko implements automatic service discovery. When calling an RPC method, Nameko will try to find the corresponding running service.

Two Nameko services talking via RabbitMQ RPC

When running the Nameko shell, you will get a special object called n added to the namespace. This object allows for dispatching events and doing RPC calls. To do an RPC call to our service, run:

>>> n.rpc.greeting_service.hello(name='world')
'Hello, world!'

Concurrent Calls

These service classes are instantiated at the moment a call is made and destroyed after the call is completed. Therefore, they should be inherently stateless, meaning you should not try to keep any state in the object or class between calls. With the assumption that all services are stateless, Nameko is able to leverage concurrency by using eventlet greenthreads. The instantiated services are called “workers,” and there can be a configured maximum number of workers running at the same time.

To verify Nameko concurrency in practice, modify the source code by adding a sleep to the procedure call before returning the response:

from time import sleep

from nameko.rpc import rpc

class GreetingService:
    name = "greeting_service"

    @rpc
    def hello(self, name):
        sleep(5)
        return "Hello, {}!".format(name)

We are using sleep from the time module, which is not async-enabled. However, when running our services using nameko run, it will automatically monkey-patch blocking calls such as sleep(5) so that they yield control to other greenthreads instead of blocking.

Each procedure call is now expected to take around five seconds. However, what will be the behavior of the following snippet, when we run it from the nameko shell?

res = []
for i in range(5):
    hello_res = n.rpc.greeting_service.hello.call_async(name=str(i))
    res.append(hello_res)

for hello_res in res:
    print(hello_res.result())
Nameko provides a non-blocking call_async method for each RPC entry point, returning a proxy reply object that can then be queried for its result. The result method, when called on the reply proxy, will block until the response is returned.

As expected, this example runs in just around five seconds. Each worker will be blocked waiting for the sleep call to finish, but that does not stop another worker from starting. Replace this sleep call with a useful blocking I/O database call, for example, and you get an extremely fast concurrent service.

As explained earlier, Nameko creates workers when a method is called. The maximum number of workers is configurable. By default, that number is set to 10. You can test changing the range(5) in the above snippet to, for example, range(20). This will call the hello method 20 times, which should now take ten seconds to run:

>>> res = []
>>> for i in range(20):
...     hello_res = n.rpc.greeting_service.hello.call_async(name=str(i))
...     res.append(hello_res)
>>> for hello_res in res:
...     print(hello_res.result())
Hello, 0!
Hello, 1!
Hello, 2!
Hello, 3!
Hello, 4!
Hello, 5!
Hello, 6!
Hello, 7!
Hello, 8!
Hello, 9!
Hello, 10!
Hello, 11!
Hello, 12!
Hello, 13!
Hello, 14!
Hello, 15!
Hello, 16!
Hello, 17!
Hello, 18!
Hello, 19!

Now, suppose that you have too many (more than 10) concurrent users calling that hello method. Some of them will hang waiting more than the expected five seconds for the response. One solution is to increase the number of workers by overriding the default settings using, for example, a config file. However, if your server is already at its limit with those ten workers because the called method relies on some heavy database queries, increasing the number of workers could cause the response time to increase even more.
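To override that setting, you can pass nameko run a config file with the max_workers key (which defaults to 10). A minimal sketch, assuming the local RabbitMQ instance we started earlier:

```yaml
# config.yml — pass it with: nameko run --config config.yml hello
AMQP_URI: 'pyamqp://guest:guest@localhost'
max_workers: 20
```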

Scaling Our Service

A better solution is to use Nameko’s microservices capabilities. Until now, we have only used one server (your computer), running one instance of RabbitMQ, and one instance of the service. In a production environment, you will want to arbitrarily increase the number of nodes running the service that is getting too many calls. You can also build a RabbitMQ cluster if you want your message broker to be more reliable.

To simulate a service scaling, we can simply open another terminal and run the service as before, using $ nameko run hello. This will start another service instance with the potential to run ten more workers. Now, try running that snippet again with range(20). It should now take five seconds again to run. When more than one service instance is running, Nameko will round-robin the RPC requests among the available instances.
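The effect of that round-robin distribution can be pictured with a few lines of plain Python (a conceptual sketch only — in reality RabbitMQ performs the distribution, but the observable result is the same):

```python
from itertools import cycle

# Two running instances of the same service, as in the two-terminal experiment.
instances = cycle(["instance_1", "instance_2"])

# Twenty RPC calls are spread evenly across the available instances.
calls = [next(instances) for _ in range(20)]
print(calls.count("instance_1"), calls.count("instance_2"))  # 10 10
```

Since each instance can run ten workers, twenty concurrent calls now all start immediately, which is why the snippet drops back to around five seconds.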

Nameko is built to robustly handle those method calls in a cluster. To test that, try running the snippet, and before it finishes, go to one of the terminals running the Nameko service and press Ctrl+C twice. This shuts down the host without waiting for the workers to finish. Nameko will reallocate the calls to another available service instance.

In practice, you would be using Docker to containerize your services, as we will later, and an orchestration tool such as Kubernetes to manage your nodes running the service and other dependencies, such as the message broker. If done correctly, with Kubernetes, you would effectively transform your application into a robust distributed system, immune to unexpected peaks. Also, Kubernetes allows for zero-downtime deploys. Therefore, deploying a new version of a service will not affect the availability of your system.

It’s important to build services with some backward compatibility in mind, since in a production environment it can happen that several different versions of the same service are running at the same time, especially during deployment. If you use Kubernetes, during deployment it will only kill all the old version containers when there are enough running new containers.

For Nameko, having several different versions of the same service running at the same time is not a problem. Since it distributes the calls in a round-robin fashion, the calls might go through old or new versions. To test that, keep one terminal with our service running the old version, and edit the service module to look like:

from time import sleep

from nameko.rpc import rpc

class GreetingService:
    name = "greeting_service"

    @rpc
    def hello(self, name):
        sleep(5)
        return "Hello, {}! (version 2)".format(name)

If you run that service from another terminal, you will get the two versions running at the same time. Now, run our test snippet again and you will see both versions being shown:

>>> res = []
>>> for i in range(5):
...     hello_res = n.rpc.greeting_service.hello.call_async(name=str(i))
...     res.append(hello_res)
>>> for hello_res in res:
...     print(hello_res.result())
Hello, 0!
Hello, 1! (version 2)
Hello, 2!
Hello, 3! (version 2)
Hello, 4!

Working with Multiple Instances

Now we know how to effectively work with Nameko, and how scaling works. Let’s now take a step further and use one more tool from the Docker ecosystem: docker-compose. This will work if you’re deploying to a single server, which is definitely not ideal since you will not leverage many of the advantages of a microservices architecture. Again, if you want a more suitable infrastructure, you might use an orchestration tool such as Kubernetes to manage a distributed system of containers. So, go ahead and install docker-compose.

Again, all we have to do is deploy a RabbitMQ instance and Nameko will take care of the rest, given that all services can access that RabbitMQ instance. The full source code for this example is available in this GitHub repository.

Let’s build a simple travel application to test Nameko capabilities. That application allows registering airports and trips. Each airport is simply stored as the name of the airport, and the trip stores the ids for the origin and destination airports. The architecture of our system looks like the following:

Travel application illustration

Ideally, each microservice would have its own database instance. However, for simplicity, I have created a single Redis database for both Trips and Airports microservices to share. The Gateway microservice will receive HTTP requests via a simple REST-like API and use RPC to communicate with Airports and Trips.

Let’s begin with the Gateway microservice. Its structure is straightforward and should be very familiar to anyone coming from a framework like Flask. We basically define two resources, airports and trips, each with a GET and a POST endpoint:

import json

from nameko.rpc import RpcProxy
from nameko.web.handlers import http

class GatewayService:
    name = 'gateway'

    airports_rpc = RpcProxy('airports_service')
    trips_rpc = RpcProxy('trips_service')

    @http('GET', '/airport/<string:airport_id>')
    def get_airport(self, request, airport_id):
        airport = self.airports_rpc.get(airport_id)
        return json.dumps({'airport': airport})

    @http('POST', '/airport')
    def post_airport(self, request):
        data = json.loads(request.get_data(as_text=True))
        airport_id = self.airports_rpc.create(data['airport'])

        return airport_id

    @http('GET', '/trip/<string:trip_id>')
    def get_trip(self, request, trip_id):
        trip = self.trips_rpc.get(trip_id)
        return json.dumps({'trip': trip})

    @http('POST', '/trip')
    def post_trip(self, request):
        data = json.loads(request.get_data(as_text=True))
        trip_id = self.trips_rpc.create(data['airport_from'], data['airport_to'])

        return trip_id

Let’s take a look at the Airports service now. As expected, it exposes two RPC methods. The get method will simply query the Redis database and return the airport for the given id. The create method will generate a random id, store the airport information, and return the id:

import uuid

from nameko.rpc import rpc
from nameko_redis import Redis

class AirportsService:
    name = "airports_service"

    redis = Redis('development')

    @rpc
    def get(self, airport_id):
        airport = self.redis.get(airport_id)
        return airport

    @rpc
    def create(self, airport):
        airport_id = uuid.uuid4().hex
        self.redis.set(airport_id, airport)
        return airport_id

Notice how we are using the nameko_redis extension. Take a look at the community extensions list. Extensions are implemented in a way that employs dependency injection: Nameko takes care of instantiating the actual extension object that each worker will use.
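The idea behind that injection can be sketched in plain Python. This is illustrative only, not Nameko’s actual implementation: a provider declared at class level describes how to build the dependency, and the container swaps it for the real object on each freshly spawned worker (the InMemoryStore here is a hypothetical stand-in for the Redis client):

```python
class DependencyProvider:
    """Describes how to build the object a worker will receive."""
    def get_dependency(self):
        raise NotImplementedError

class InMemoryStore(DependencyProvider):
    """Stand-in for something like the nameko_redis Redis provider."""
    def get_dependency(self):
        return {}  # a plain dict plays the role of a Redis client here

class Container:
    """Spawns a fresh service 'worker' with its dependencies injected."""
    def __init__(self, service_cls):
        self.service_cls = service_cls

    def spawn_worker(self):
        worker = self.service_cls.__new__(self.service_cls)
        # Replace each provider declared on the class with a real dependency.
        for attr, value in vars(self.service_cls).items():
            if isinstance(value, DependencyProvider):
                setattr(worker, attr, value.get_dependency())
        return worker

class AirportsService:
    name = "airports_service"
    redis = InMemoryStore()  # declared as a provider, injected per worker

    def get(self, airport_id):
        return self.redis.get(airport_id)

    def create(self, airport_id, airport):
        self.redis[airport_id] = airport

worker = Container(AirportsService).spawn_worker()
worker.create("a1", "first_airport")
print(worker.get("a1"))  # first_airport
```

The service code only ever touches self.redis; where that object comes from is entirely the container’s concern, which is what keeps workers cheap to create and destroy.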

There is not much difference between the Airports and the Trips microservices. Here is how the Trips microservice would look:

import uuid

from nameko.rpc import rpc
from nameko_redis import Redis

class TripsService:
    name = "trips_service"

    redis = Redis('development')

    @rpc
    def get(self, trip_id):
        trip = self.redis.get(trip_id)
        return trip

    @rpc
    def create(self, airport_from_id, airport_to_id):
        trip_id = uuid.uuid4().hex
        self.redis.set(trip_id, {
            "from": airport_from_id,
            "to": airport_to_id
        })
        return trip_id

The Dockerfile for each microservice is also very straightforward. The only dependency is nameko, and in the case of the Airports and Trips services, there is a need to install nameko-redis as well. Those dependencies are given in the requirements.txt in each service. The Dockerfile for the Airports service looks like:

FROM python:3

RUN apt-get update && apt-get -y install netcat && apt-get clean


COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY config.yml ./

RUN chmod +x ./

CMD ["./"]

The only difference between that and the Dockerfiles for the other services is the copied source file, which should be changed accordingly.

The startup script takes care of waiting until RabbitMQ and, in the case of the Airports and Trips services, the Redis database are ready. The following snippet shows the content of the script for the Airports service. Again, for the other services, just change from airports to gateway or trips accordingly:


#!/bin/bash

until nc -z ${RABBIT_HOST} ${RABBIT_PORT}; do
    echo "$(date) - waiting for rabbitmq..."
    sleep 1
done

until nc -z ${REDIS_HOST} ${REDIS_PORT}; do
    echo "$(date) - waiting for redis..."
    sleep 1
done

nameko run --config config.yml airports
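The docker-compose.yml file ties everything together. A minimal sketch of what such a file could look like is below — the service names, build paths, and environment variables here are illustrative assumptions; refer to the GitHub repository for the actual file:

```yaml
version: '3'
services:
  rabbit:
    image: rabbitmq:3
  redis:
    image: redis
  gateway:
    build: ./gateway
    ports:
      - "8000:8000"
    environment:
      RABBIT_HOST: rabbit
      RABBIT_PORT: 5672
    depends_on:
      - rabbit
  airports:
    build: ./airports
    environment:
      RABBIT_HOST: rabbit
      RABBIT_PORT: 5672
      REDIS_HOST: redis
      REDIS_PORT: 6379
    depends_on:
      - rabbit
      - redis
  trips:
    build: ./trips
    environment:
      RABBIT_HOST: rabbit
      RABBIT_PORT: 5672
      REDIS_HOST: redis
      REDIS_PORT: 6379
    depends_on:
      - rabbit
      - redis
```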

Our services are now ready to run:

$ docker-compose up

Let’s test our system. Run the command:

$ curl -i -d "{\"airport\": \"first_airport\"}" localhost:8000/airport
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 32
Date: Sun, 27 May 2018 05:05:53 GMT

f2bddf0e506145f6ba0c28c247c54629
That last line is the generated id for our airport. To test if it is working, run:

$ curl localhost:8000/airport/f2bddf0e506145f6ba0c28c247c54629
{"airport": "first_airport"}

Great, now let’s add another airport:

$ curl -i -d "{\"airport\": \"second_airport\"}" localhost:8000/airport
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 32
Date: Sun, 27 May 2018 05:06:00 GMT

565000adcc774cfda8ca3a806baec6b5
Now we have two airports. That’s enough to form a trip, so let’s create one:

$ curl -i -d "{\"airport_from\": \"f2bddf0e506145f6ba0c28c247c54629\", \"airport_to\": \"565000adcc774cfda8ca3a806baec6b5\"}" localhost:8000/trip
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Content-Length: 32
Date: Sun, 27 May 2018 05:09:10 GMT

34ca60df07bc42e88501178c0b6b95e4
As before, that last line represents the trip ID. Let’s check if it was inserted correctly:

$ curl localhost:8000/trip/34ca60df07bc42e88501178c0b6b95e4
{"trip": "{'from': 'f2bddf0e506145f6ba0c28c247c54629', 'to': '565000adcc774cfda8ca3a806baec6b5'}"}


We have seen how Nameko works by creating a local running instance of RabbitMQ, connecting to it and performing several tests. Then, we applied the gained knowledge to create a simple system using a Microservices architecture.

Despite being extremely simple, our system is very close to what a production-ready deployment would look like. You would preferably use another framework to handle HTTP requests, such as Falcon or Flask. Both are great options and can easily be used to create other HTTP-based microservices, in case you want to break up your Gateway service, for example. Flask has the advantage of already having a plugin to interact with Nameko, but you can use nameko-proxy directly from any framework.

Nameko is also very easy to test. We haven’t covered testing here for simplicity, but do check out Nameko’s testing documentation.
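Because Nameko services are plain Python classes, a quick unit test can often get by with nothing but the standard library: instantiate the class by hand and swap the injected dependencies for mocks, which is essentially what Nameko’s worker_factory helper does for you. A minimal sketch, with the AirportsService re-declared without the Nameko decorators to keep the snippet self-contained:

```python
import uuid
from unittest import mock

# Re-declared without Nameko imports, so the sketch runs on its own.
class AirportsService:
    name = "airports_service"
    redis = None  # injected by the nameko_redis extension in production

    def get(self, airport_id):
        return self.redis.get(airport_id)

    def create(self, airport):
        airport_id = uuid.uuid4().hex
        self.redis.set(airport_id, airport)
        return airport_id

# Build a "worker" by hand, injecting a mock in place of the Redis client.
service = AirportsService()
service.redis = mock.Mock()
service.redis.get.return_value = "first_airport"

airport_id = service.create("first_airport")
service.redis.set.assert_called_once_with(airport_id, "first_airport")
assert service.get(airport_id) == "first_airport"
print("ok")  # ok
```

In real projects, prefer the framework’s own test utilities, which also cover entry points and extensions.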

With all the moving parts inside a microservices architecture, you want to ensure you’ve got a robust logging system. To build one, see Python Logging: An In-Depth Tutorial by fellow Toptaler and Python developer Son Nguyen Kim.

Understanding the basics

RabbitMQ is written in Erlang.

RabbitMQ is a message broker used to handle communication between systems in distributed computing.

AMQP usually uses TCP, as it is commonly expected to be reliable. Specifically, RabbitMQ is configured to use TCP by default.

Microservices is an architectural pattern that focuses on creating relatively small and uncoupled services to compose an application, rather than a so-called monolith.

Docker is a tool for deploying isolated, or containerized, applications. Docker containers are similar to virtual machines in a sense, but much more lightweight both in size and resource consumption.

Docker is used to clearly define how to install all prerequisites for an application and how it should run, allowing for easy testing and environment replication.

A microservice is a self-contained building block for a larger application, which usually runs on the web. It is self-contained in the sense that it ships with its own prerequisites and does not depend on other microservices to be deployed and run.