Shift March 28, 2018

Scaling cloud apps without losing your mind

Matthias Vervaet

Solution Architect

The software development industry is in a continuous battle to keep things simple. Simplicity means that fewer things can go wrong, that software is easier to understand, and that development takes less time. 

Our industry also has a favourite weapon of choice: abstraction. Recently, that same strategy has been applied to developing and deploying cloud applications and it is currently causing an industry-wide shift. 

If you are looking to improve productivity and customer experience, get ready for a long but illuminating journey from monoliths via microservices to serverless functions.

More articles like this? Download Shift

This article was first published in Shift, our annual report on the 10 next big digital trends. Download Shift for more interesting stuff!


1. Traditional monolithic architecture

Traditionally, cloud applications were developed as monoliths, built and deployed as a single block. This poses a challenge, since that block brings a set of dependencies to the server: libraries, frameworks, runtimes, drivers, the OS, and so on. Each of those dependencies needs to be kept up to date, and every update carries risk.

Developing a monolithic application poses the same problem. A developer has to install the exact same dependencies on their local machine to make sure the application works on the server.

All of that makes maintenance harder. What if a certain library is updated to a new version and this upgrade doesn't make it to the production server? Or what if the server still uses an old version of, say, Node.js? Such inconsistencies can easily cause bugs that are hard to track down.

2. Containerise dependencies

Luckily, these issues can be avoided with containerisation. A software container isolates the software and all of its dependencies and defines a standard way to 'run' or start the software. 

It is yet another example of abstraction: a server that runs containers is completely unaware of their contents. We make a distinction between a container and a container image: a container image is a single file that contains the clean state of an application (a blueprint of a house, so to speak), whereas a container is a running instance of that application (the actual house). 

All of this makes containers incredibly portable:

  • Developers can run the exact same setup as the production environment.
  • Multiple containers of the same image can be run on the same server.
  • Multiple versions of the same application can be run at the same time.
  • Developers can easily share container images across multiple servers or with other developers.

The most popular technology to containerise an application is Docker:

Docker facilitates creating or extending container images by using a simple descriptive file. It also has an excellent system to distribute or share container images. 

Nowadays, more and more vendors distribute official container images for their software, which makes using Docker even more convenient. 
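To give an idea of what that "simple descriptive file" looks like, here is a minimal sketch of a Dockerfile for a hypothetical Node.js service. The base image is a real official image; the application name, port and start command are illustrative assumptions.

```dockerfile
# Start from an official, vendor-provided base image
FROM node:18-alpine

# Copy the application into the image and install its dependencies
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .

# Define the standard way to 'run' the software
EXPOSE 3000
CMD ["node", "server.js"]
```

From this file, `docker build -t my-app .` produces a container image, and `docker run -p 3000:3000 my-app` starts a container from it, on any machine that has the Docker runtime.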

Major companies like Netflix, Uber and Twitter have long adopted Docker and they understand the key benefits of using it:

Docker images are relatively small and a Docker container runs with a minimal overhead on top of the operating system. This facilitates rapid delivery and reduces the time to deploy new application containers.

The only dependency required is the Docker runtime. An application and all of its dependencies are bundled in one or more containers, whatever the machine.

It is possible to track successive versions of a container image, inspect differences, or roll back to previous versions. A container image consists of several layers, and each of those layers is a container image itself. Since container images are typically created by extending an existing image, many of those layers are shared between images.

Docker reduces both the time spent on problems and the risk of problems with application dependencies.

3. Think in microservices

Containerisation solves typical problems encountered with monolithic applications, but it also allows you to take separation of concerns one step further. When implementing a new feature in a complex application, there's still a risk of ripple effects throughout your application if a codebase is coupled too tightly. Consequently, releasing a new feature can become hard, since the entire application needs to be retested completely. 

To solve this problem, an application can be split up into separate logical blocks or 'microservices'. Each of these logical blocks has its own responsibility, typically a single feature or even just an aspect of a feature. Dependencies between these blocks are minimised, so that the blocks are loosely coupled. Yet together they form a single application, much like bees forming a beehive. 

Microservices offer a lot of advantages:

  • They enforce a separation of concerns: each microservice has its own set of responsibilities.
  • Each microservice can use its own technology, framework or programming language. Developers can choose the best tool for the job.
  • Each microservice can have its own release cycle. This allows for frequent releases for new components, while stable components are released at a much lower frequency.
  • A more granular way of scaling is possible. Since microservices are loosely coupled and containerised, it's perfectly possible to use multiple container instances (and thus more computing power) for a demanding feature, while using only a few instances for a less critical feature.
  • It is easier to detect and isolate faults.

4. Orchestrate your infrastructure

Microservices allow for flexible setups, but the tradeoff is complexity. Managing a multitude of distributed services opens up a set of new challenges. How can we efficiently distribute our services over multiple servers? How can we automatically scale the number of containers or the number of servers? How can we detect a problem in a single container that is started automatically and might disappear automatically? How do we know which server is best suited to run a new container?


To overcome these issues, several so-called orchestrators have been developed. Orchestrators typically abstract the underlying infrastructure: you just have to tell an orchestrator to start a container and it will figure out which server is best placed to run it. Usually, orchestrators also have a built-in monitoring system, can automatically scale a solution, and are able to deploy new versions using rolling updates (no downtime). 

One of the best-known orchestrators today is Kubernetes, originally developed by Google. Because of tools like Kubernetes, hosting software in the cloud has become easy. Designated servers hosting a specific application have become a thing of the past: servers should be interchangeable members of a group. Your only concern is that there are enough of them to handle the workload, and you can use autoscaling to adjust their number up and down.
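To illustrate how an orchestrator abstracts the infrastructure, here is a sketch of a Kubernetes Deployment manifest. You declare how many container instances you want; Kubernetes decides which servers run them. The application name, image tag and port are illustrative assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # desired number of container instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0  # a container image, as built in section 2
          ports:
            - containerPort: 3000
```

Applying this with `kubectl apply -f deployment.yaml` hands the rest over to the orchestrator; scaling up is a matter of changing the replica count (e.g. `kubectl scale deployment my-app --replicas=10`) or letting an autoscaler do it for you.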

5. Forget about servers altogether

But things can get even better. We can think of microservices as decoupled components, but what if we decouple them even further and start using separate, bite-sized functions?

This concept is called 'serverless'. The term doesn't mean servers are no longer involved; it simply means that developers no longer have to think about them as much. 

A serverless function shares some of the characteristics of a microservice, and even looks very similar to one, but it differs in a number of ways:

  • It's short-lived.
  • It serves a single task.
  • It's very cost-efficient: You only pay for the time that your function(s) are running and don't have to run an app 24/7 anymore.

When using a serverless platform, such as AWS Lambda or Google Cloud Functions, all you need to do is upload a few lines of code to the cloud and hand over the management of that function to your cloud provider.

  1. A trigger is fired (this could be an HTTP call).
  2. The application (function) is started.
  3. The application (function) serves the request.
  4. The application (function) is killed.

The process is short-lived, and there is no need for ongoing management of the application. All you have to do is manage the application code and make sure it works correctly; monitoring, logging and scaling are all taken care of by the cloud platform.
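The lifecycle above can be sketched as a minimal Python function for a platform such as AWS Lambda. The event shape shown (an HTTP trigger with query parameters) and the greeting logic are illustrative assumptions, not a fixed contract.

```python
import json


def handler(event, context):
    """Entry point invoked by the platform when a trigger fires.

    The function is started on demand, serves a single request,
    and is torn down afterwards; no server management is involved.
    """
    # For an HTTP trigger, the request details arrive in the event object
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # The return value becomes the HTTP response
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You upload this code, point a trigger at it, and pay only for the milliseconds it actually runs; everything else is the provider's problem.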

Ready to go serverless? Choose wisely

The trend is clear: the unit of work is getting smaller and smaller. We've gone from monoliths to microservices to serverless functions. But does that mean everything will become serverless?

Serverless will take over some of the workload from microservices, but it won't replace them completely. It is mainly another step in breaking things down into ever smaller units. Serverless functions probably represent the best cloud computing has to offer, and with the rise of IoT devices, they will take centre stage. 

But serverless functions come at a cost: splitting an application down to the function level introduces overhead, and running a high-load application serverless is expensive compared to running it as microservices.

If you are looking to improve productivity and customer experience, you should look beyond monolithic applications and embrace microservices and/or serverless. Both can be introduced gradually into existing products. 

In our experience, that's also the best way to adopt them: start small, build knowledge and experience, measure the impact and go from there.
