Service Oriented Architecture is a software design pattern that allows you to distribute a system into smaller, more manageable applications. In this pattern, the big monolithic system is split into domain-based units called Services.

Each Service is independent from the others, and they communicate with each other through REST API calls.
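For example, a request from one Service to another can be a plain HTTP call. Here is a minimal PHP sketch of that idea; the billing-service host and the /invoices/123 endpoint are made-up names for illustration:

```php
<?php
// Minimal sketch of one Service calling another over REST.
// The host name and endpoint are hypothetical; in a real setup they
// would come from your configuration or service discovery.

$ch = curl_init('http://billing-service/invoices/123');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept: application/json']);
curl_setopt($ch, CURLOPT_TIMEOUT, 5); // fail fast if the other Service is down

$response = curl_exec($ch);

if ($response === false) {
    // Handle the failure however your Service prefers (retry, fallback, etc.)
    error_log('Call to billing-service failed: ' . curl_error($ch));
} else {
    $invoice = json_decode($response, true);
}

curl_close($ch);
```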

The common part here is that all of them share the same database server, where each Service has its own schema. This way, each Service can scale independently at the application layer while you maintain only one database server.
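To make that concrete, here is a small PHP sketch of how one Service could connect to its own schema on the shared server; the host, credentials and schema names are placeholders:

```php
<?php
// Sketch of the "one database server, one schema per Service" idea.
// Host, credentials and schema names are placeholders; each Service
// only ever connects to its own schema on the shared server.

$sharedDbHost  = 'db.internal';      // the single shared database server
$serviceSchema = 'orders_service';   // schema owned by this Service

$pdo = new PDO(
    "mysql:host={$sharedDbHost};dbname={$serviceSchema};charset=utf8mb4",
    'orders_user',
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// This Service reads and writes only its own tables.
$stmt   = $pdo->query('SELECT id, status FROM orders LIMIT 10');
$orders = $stmt->fetchAll(PDO::FETCH_ASSOC);
```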

There are many variants of this pattern; one is shown in the figure above, where users can make requests to each of these Services. If the called Service requires anything from another, it can make a call to it.

This pattern is recommended for decomposing a big monolithic system. For the sake of development costs, I recommend using the same stack for all Services; in my case I work with Nginx as the web server and PHP as the backend language.

It's also very simple to add a common Nginx ingress server to act as a load balancer and to block possible malicious attacks by filtering incoming requests, allowing only whitelisted IPs, special headers, etc. For the long run, I'd recommend putting these Services in Docker containers and using Kubernetes as the container orchestrator.
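As a rough idea of the ingress part, here is a minimal Nginx sketch; the allowed network and the upstream host names are examples, not a prescription:

```nginx
# Minimal sketch of a common Nginx ingress in front of the Services.
# The whitelisted network and the upstream host names are examples only.
server {
    listen 80;

    # Only accept requests from whitelisted IPs; drop everything else.
    allow 10.0.0.0/8;
    deny  all;

    # Route each path prefix to its Service.
    location /orders/ {
        proxy_pass http://orders-service/;
        proxy_set_header X-Forwarded-For $remote_addr;
    }

    location /billing/ {
        proxy_pass http://billing-service/;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```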

With Kubernetes it is fairly simple to implement and maintain a pool of Services across the whole CI/CD pipeline. Just imagine that this monolith got broken into 5 Services. For each one you will need a local environment where the developers work; a Dev environment where the work of each team is merged and unit, integration and functional tests are executed; a QA environment where the QA team tests every single use case to make sure everything keeps working; a Staging environment with the same number of replicas and the exact same data as Production, where a round of key testing is done; and finally the Production environment.
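One simple way (certainly not the only one) to keep those environments apart in a single cluster is a Kubernetes namespace per environment; the names below are only examples:

```yaml
# One possible layout: a Kubernetes namespace per environment.
# The names are examples; some teams use separate clusters instead.
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
```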

So if we consider one container for each of these Services, and assume you have no more than 3 replicas per Service in Staging and Production, plus the SQL proxy containers to connect to the DB, plus backup, logging and monitoring services, etc., then for these 5 Services you will most likely end up with a Kubernetes cluster running around 100 deployments.
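As a rough back-of-the-envelope count (the exact numbers depend on your setup): 5 Services × 3 replicas in Production is about 15 pods, another ~15 in Staging, a handful each in Dev and QA, plus SQL proxy sidecars, backup jobs, and logging and monitoring agents in every environment. It adds up to the order of 100 running workloads faster than you would expect.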

A typical Kubernetes cluster for a 5-Service workload with this configuration will require a virtual environment capable of providing 10 to 30 virtual cores and 20 to 40 GB of RAM, divided into at least 5 to 10 nodes.

A typical Kubernetes Engine cluster for a workload like the one above, with a Cloud SQL database of up to 100 GB and no high availability, may be in the range of US$500 to $1,000 per month on Google Cloud.

The benefit of using a cloud provider like Google Cloud for a Kubernetes cluster is that you can be up and running in no time, and you have tech support available around the clock.

Next Up: Microservices Architecture