A bit of history
For a small-scale web application, the backend architecture was pretty simple: you create a DB and store objects in it so that the information persists. For basic services:
- Store username and password
- For any content, store the information in the DB
- Basic CRUD operations for data stored in the DB
At this stage we have the basic backend architecture: APIs, a DB, and server-side software running on, let's say, Apache.
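The basic CRUD operations mentioned above can be sketched in a few lines. This is a minimal, illustrative example using an in-memory SQLite DB; the table, column, and function names are assumptions, not from any particular framework (and a real service would store a password hash, never the plain password).

```python
import sqlite3

# In-memory DB as a stand-in for the application's real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, password TEXT)")

def create_user(username, password):
    # Create: insert a row and return its generated id.
    cur = conn.execute("INSERT INTO users (username, password) VALUES (?, ?)",
                       (username, password))
    conn.commit()
    return cur.lastrowid

def read_user(user_id):
    # Read: fetch one row, or None if it does not exist.
    return conn.execute("SELECT username, password FROM users WHERE id = ?",
                        (user_id,)).fetchone()

def update_user(user_id, password):
    # Update: change a stored value in place.
    conn.execute("UPDATE users SET password = ? WHERE id = ?", (password, user_id))
    conn.commit()

def delete_user(user_id):
    # Delete: remove the row entirely.
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    conn.commit()
```

In this early architecture, all four operations talk to the single shared DB directly — a point that becomes important later.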
First steps
As we move to sites where the data requirements are huge and many interactions with the user are required, the "view" that the app user gets and the underlying "data" need isolation, to prevent data from getting out of sync or being overwritten.
To address this requirement comes the MVC architecture: it provides a "view", and a "controller" to access the DB (the "model").
Popular examples of this backend MVC architecture are Laravel (PHP) and Django/Flask (Python).
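The MVC split can be shown without any framework at all. This is a bare-bones sketch — the class and method names are illustrative assumptions, and the dict stands in for a real DB — but it shows the separation that frameworks like Django and Laravel enforce.

```python
class UserModel:
    """The 'model': owns the data and how it is stored."""
    def __init__(self):
        self._users = {}            # stand-in for a real DB table
    def save(self, user_id, name):
        self._users[user_id] = name
    def get(self, user_id):
        return self._users.get(user_id)

class UserView:
    """The 'view': knows only how to render data, not where it lives."""
    def render(self, name):
        return f"<h1>Hello, {name}</h1>"

class UserController:
    """The 'controller': mediates between view and model, so neither
    touches the other directly."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def show_user(self, user_id):
        name = self.model.get(user_id) or "unknown"
        return self.view.render(name)
```

Because the view never reads the DB directly, the stored data cannot get out of sync with what different views display — the isolation described above.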
The pitfalls - monolithic & multiple apps
With the MVC architecture, though data isolation is complete, we soon start running into issues as concurrency demands increase and operational issues demand quicker turnaround times.
This becomes especially true when the backend APIs are consumed not only by the browser but also simultaneously by mobile apps, whose requirements, in turn, can be entirely different.
Imagine a backend service providing API interfaces to clients running in browsers and in iOS and Android native apps, and suppose we need to change the DB to support a new feature. In that case, after the DB change, the entire software has to be tested again and redeployed.
Each time we want to enhance the backend service or fix a bug, the whole software has to be retested; there is a single DB, and all the software runs against it.
This is the MONOLITHIC architecture.
Evolution - Knowing microservices (era 2015)
To solve the above issues, the idea of managing a complex backend naturally evolves towards breaking the backend into small, self-contained services. Each service, or group of related services, works with a separate database. And this database can be changed, provided the API interface of the microservice is not affected.
Some examples might be
- Search as a microservice, probably operating over Elasticsearch. The team can keep evolving the service and rolling it out independently
- User authentication services. The team can, say, add a social login (e.g. Twitter) on top of the already existing Facebook OAuth and roll out the new service independently
- Billing services. Bug fixes or new payment gateway integration can happen independently.
- Content services, where we serve images or videos
Evolution - Managing microservices
As the number of microservices increases, to manage and harness the flexibility they offer, we need to bring in a few additional pieces of software. These are:
Managing the database of each microservice
- Each microservice has its own private database, meaning that other microservices of the same web application cannot access it directly, but only through that microservice's APIs
- In a monolithic architecture, reading from and writing to the DB is well defined by a proven methodology: ACID (atomicity, consistency, isolation, durability). In microservices, however, there are two challenges
- Firstly, keeping all the different DBs in sync when there is a business transaction. Secondly, each microservice might use a different programming language and a different kind of DB, e.g. one microservice in Java with MySQL and a second in NodeJS with MongoDB.
- To solve this issue, we can go for an event-driven architecture. The basic principle of this architecture is that a microservice publishes an event whenever something happens, and other microservices subscribe to these events; hence each microservice is responsible for its own database updates.
- This approach is more complex than ACID-based management of the DB. Moreover, it does not ensure atomicity. For example, a service might crash just before publishing an event but after completing its DB update. To solve this issue, additional solutions exist, e.g.
- A local-transaction-based approach that uses an event table
- Mining the DB transaction log (so that all events are published)
- Event sourcing - the most effective, but it has a learning curve. Here a sequence of state-changing events is stored, and an entity's current state is reconstructed by replaying the events.
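The event-sourcing idea above — store the state-changing events, rebuild current state by replaying them — fits in a short sketch. The account example, event names, and the in-memory list standing in for a durable append-only log are all illustrative assumptions.

```python
event_store = []   # in a real system: a durable, append-only event log

def append_event(entity_id, event_type, amount):
    # Events are only ever appended, never updated in place.
    event_store.append({"entity": entity_id, "type": event_type, "amount": amount})

def current_balance(entity_id):
    """Reconstruct an account's current state by replaying its events in order."""
    balance = 0
    for e in event_store:
        if e["entity"] != entity_id:
            continue
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance
```

Note there is no stored "balance" column at all: the event log is the single source of truth, which is what makes the pattern robust against the crash-before-publish problem described above.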
Managing microservices instances and communication
Microservices are generally deployed in the cloud, and the set of running service instances changes dynamically. To manage this, there has to be a 'service registry'. Next, to manage the communication among microservices, various inter-process communication (IPC) mechanisms have been used - the most popular being HTTP with JSON.
Netflix Ribbon is an IPC client that works along with Eureka (service registry) to discover and load balance requests.
Service discovery (via the 'service registry') can be implemented in two ways - client-side service discovery (e.g. Eureka, mentioned above) or server-side service discovery (e.g. ZooKeeper).
The difference between the two is that with client-side discovery, the client queries the service registry and then makes requests directly to an instance, while with server-side discovery, the client makes requests via a router; the router queries the service registry and then forwards the request to an available instance.
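Client-side discovery can be sketched in a few lines: the client asks the registry for the instances of a service and picks one itself (random choice here, as a crude stand-in for Ribbon-style load balancing). The registry API below is invented for illustration, not Eureka's actual interface.

```python
import random

class ServiceRegistry:
    """Illustrative in-process registry: service name -> list of instances."""
    def __init__(self):
        self._instances = {}
    def register(self, service, address):
        # Instances register themselves (and would heartbeat in a real system).
        self._instances.setdefault(service, []).append(address)
    def lookup(self, service):
        return self._instances.get(service, [])

def choose_instance(registry, service):
    """Client-side discovery: query the registry, then load balance locally."""
    instances = registry.lookup(service)
    if not instances:
        raise RuntimeError(f"no instances of {service!r} registered")
    return random.choice(instances)
```

In server-side discovery the same lookup-and-choose step would live inside the router instead of the client — that is the whole difference described above.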
API Gateway (intelligent routing)
The API Gateway serves as a single entry point for all the APIs of a specific web application. Mostly, it will also serve as a central authentication point for any request/response.
Functionally, it can be used to throttle incoming API requests, which is especially useful under load (say, when there are a million API requests per minute).
One popular example is Zuul - it provides a single URL for all the instances of a REST service and then load balances incoming requests across those instances.
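The throttling mentioned above is commonly done with a token bucket. This is a minimal sketch under assumed parameters — a real gateway like Zuul applies such a limiter per client or per route via filters, and the rates here are illustrative.

```python
import time

class TokenBucket:
    """Allow `rate` requests per second on average, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate                  # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # gateway would respond with HTTP 429 Too Many Requests
```

The bucket absorbs short bursts but enforces the average rate, which is exactly the behaviour you want in front of a million-requests-per-minute spike.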
Circuit breaker
Even in the most robust system, it might happen that one of the microservices goes down, perhaps due to server or load issues, or a software glitch. In such cases, the microservices architecture offers graceful degradation and limits the blast radius.
In plain words, even when one microservice goes down, the whole application should not go down. For example, say that on a promotion/discount day there is a huge load on an e-commerce website; in such cases we might decide that the user can still reach the landing page, even though billing may be taking time. Or, if the user cannot reach the landing page, we can show some other promotional video while the site loads in the background.
One popular example is Hystrix.
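The core of the circuit breaker pattern fits in a short sketch: after too many consecutive failures the breaker "opens" and calls go straight to a fallback instead of hammering the broken service. The thresholds and names are illustrative assumptions — Hystrix adds half-open probing, metrics, and thread isolation on top of this basic idea.

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout   # seconds the breaker stays open
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                return fallback()            # open: degrade gracefully, fail fast
            self.opened_at = None            # timeout elapsed: allow a retry
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()   # trip the breaker
            return fallback()
        self.failures = 0                    # success resets the failure count
        return result
```

The fallback is where the graceful degradation described above lives: a cached landing page, a promotional video, a "billing is slow, try again" message — anything but taking the whole application down.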
Deploying Microservices
Deploying a monolithic application is simple: you run multiple identical copies of the application on a few servers.
Deploying a microservice application is much more complex - inherently so, since one web application will have multiple microservices, each probably coded in a different programming language, and the DBs might differ too.
One approach is to deploy all the microservices of the web application on a single host. However, most likely due to resource and scaling issues, this approach is not popular.
The better way is to run each microservice in isolation on its own host. For example, Netflix uses Aminator to package each microservice as an EC2 AMI and run it on its own instance. CPU and memory usage is constrained to the host. This uses the virtual machine (VM) concept.
Since containers are lightweight compared to VMs, it's probably better to run each service instance in its own container (e.g. Docker); this provides the advantage of limiting CPU usage, container memory, and even the I/O.
Cluster managers like Kubernetes can be used for managing the containers.
Serverless deployment [TBC]
N.B.: The boundary between VMs and containers is getting thinner day by day. Companies like Boxfuse are enabling VMs to be as good as containers. In the future, probably everything will move to 'library operating systems', e.g. unikernels.
Evolution - Sidecar Design Pattern and Service Mesh Architecture (era 2018 / 2019)
As is clear from the above details, implementing a microservice as a POC might be a simple task, but making it highly scalable (cloud-based deployment), resilient (circuit breakers), secure (API gateways) and observable (tracing) is hugely challenging.
To help us manage this complexity, the service mesh architecture has been quite successful. And here come the sidecar and pods!
While deploying microservices, we were using containers. To monitor multiple containers and to manage them, we had Kubernetes.
Now we have the concept of the sidecar: it runs alongside the container and routes/proxies all incoming/outgoing traffic. Typically we will have a sidecar associated with each Kubernetes pod (one Kubernetes pod consists of a group of closely related containers). Envoy is the most popular sidecar proxy.
Service discovery is now managed by the container orchestration framework, e.g. Kubernetes.
Load balancing in the service mesh architecture evolves to enable blue-green deployment (staging gets promoted to prod, and prod demoted to staging, in a cyclic manner, ensuring that staging and prod are always in sync).
The circuit breaker pattern, authentication and authorization, and encryption are all taken care of by the service mesh.
The most popular implementation of the service mesh architecture is Istio. Linkerd is another popular implementation that is gaining a lot of traction.
Developers focus on the microservices, while the operations team focuses on the sidecars. The service mesh helps with this by providing a Control Plane and a Data Plane.


