5 Options for Deploying Microservices

Microservice applications can run in many ways, each with different tradeoffs and cost structures. What works for small applications spanning a few services will likely not suffice for large-scale systems.

  1. Single machine, multiple processes: buy or rent a server and run the microservices as processes.
  2. Multiple machines, multiple processes: the obvious next step is adding more servers and distributing the load, offering more scalability and availability.
  3. Containers: packaging the microservices inside a container makes it easier to deploy and run along with other services. It’s also the first step towards Kubernetes.
  4. Orchestrator: orchestrators such as Kubernetes or Nomad are complete platforms designed to run thousands of containers simultaneously.
  5. Serverless: serverless allows us to forget about processes, containers, and servers, and run code directly in the cloud.

Option 1: Single machine, multiple processes

At the most basic level, we can run a microservice application as multiple processes on a single machine. Each service listens on a different port and communicates over the loopback interface (a minimal sketch of this setup follows the pros and cons below).

Pros:

  • Lightweight: there is no overhead, as it’s just processes running on a server.
  • Convenience: it’s a great way to experience microservices without the learning curve that other tools have.
  • Easy troubleshooting: everything is in the same place, so tracking down a problem or reverting to a working configuration is straightforward (assuming continuous delivery is in place).
  • Fixed billing: we know how much we’ll have to pay each month.

Cons:

  • No scalability: once you max out the resources of the server, that’s it.
  • Single point of failure: if the server goes down, the application goes down with it.
  • Fragile deployment: we need custom deployment and monitoring scripts to ensure that services are installed and running correctly.
  • No resource limits: any microservice process can consume any amount of CPU or memory, potentially starving other services and leaving the application in a degraded state.
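
To make this setup concrete, here is a minimal sketch of two services talking over the loopback interface. The service names and ports are illustrative, and both run from a single file only for brevity; in a real deployment each service would be a separate process managed by something like systemd.

```python
# Two microservices as plain processes on one machine, talking over loopback.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

USERS_PORT, GATEWAY_PORT = 9001, 9000  # hypothetical port assignments

class UsersService(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = json.dumps({"users": ["ada", "grace"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

class GatewayService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Call the users service over the loopback interface.
        with urllib.request.urlopen(f"http://127.0.0.1:{USERS_PORT}/") as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    users = ThreadingHTTPServer(("127.0.0.1", USERS_PORT), UsersService)
    threading.Thread(target=users.serve_forever, daemon=True).start()
    # curl http://127.0.0.1:9000/ now returns the users service response.
    ThreadingHTTPServer(("127.0.0.1", GATEWAY_PORT), GatewayService).serve_forever()
```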

Option 2: Multiple machines and processes

This option is essentially an upgrade of option 1. When the application exceeds the capacity of a server, we can scale up (upgrade the server) or scale out (add more servers). In the case of microservices, scaling horizontally across two or more machines makes more sense, since we get improved availability as a bonus. And once we have a distributed setup, we can always scale up by upgrading servers.
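
In the simplest version of this setup, clients spread their requests over the available instances themselves. Here is a sketch of client-side round-robin; the addresses are placeholders, and real systems usually put a load balancer or service-discovery layer in front instead of hard-coding hosts:

```python
# Client-side round-robin over several instances of the same microservice.
import itertools
import urllib.request

# Hypothetical instances of the same service running on three machines.
INSTANCES = itertools.cycle([
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
])

def call_service(path: str) -> bytes:
    # Each call goes to the next instance in the rotation, spreading the load.
    base = next(INSTANCES)
    with urllib.request.urlopen(f"{base}{path}") as resp:
        return resp.read()
```

Naive rotation and hard-coded addresses only go so far, though. Distributing the application over several machines raises questions that a single server never forced us to answer: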

  • How do we correlate log files distributed among many servers?
  • How do we collect metrics from every machine and aggregate them sensibly?
  • How do we handle upgrades and downtime?
  • How do we handle spikes and drops in traffic?
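
The first of these questions is usually answered by tagging every log line with a correlation ID, so that entries shipped from many servers can be joined per request. A minimal sketch follows; the logger name, log format, and the X-Request-Id header are illustrative conventions rather than a standard:

```python
# Tag every log line with a correlation ID so logs collected from many
# servers can be grouped per request.
import logging
import uuid

def handle_request(headers: dict) -> None:
    # Reuse the caller's ID if present so the whole call chain shares it;
    # otherwise mint a new one at the edge of the system.
    request_id = headers.get("X-Request-Id", str(uuid.uuid4()))
    log = logging.LoggerAdapter(logging.getLogger("svc"), {"request_id": request_id})
    log.info("handling request")
    # ...and forward request_id in the headers of any downstream calls.

logging.basicConfig(
    level=logging.INFO,
    # Structured, machine-parseable lines make central aggregation easy.
    format='{"time": "%(asctime)s", "request_id": "%(request_id)s", "msg": "%(message)s"}',
)
handle_request({"X-Request-Id": "abc-123"})
```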

Option 3: Deploy microservices with containers

While running microservices directly as processes is very efficient, it comes at a cost:

  • The server must be meticulously maintained with the necessary dependencies and tools.
  • A runaway process can consume all the memory or CPU.
  • Deploying and monitoring the microservices is a brittle process.

Packaging each microservice in a container image addresses these problems:

  • Isolation: containerized processes are isolated from one another and from the OS. Each container has a private filesystem, so dependency conflicts are impossible (as long as you are not abusing volumes).
  • Concurrency: we can run multiple instances of the same container image without conflicts.
  • Less overhead: since there is no need to boot an entire OS, containers are much more lightweight than VMs.
  • No-install deployments: installing a container is just a matter of downloading and running the image. There is no installation step required.
  • Resource control: we can put CPU and memory limits on containers so they don’t destabilize the server (see the sketch after this list).
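
Here is what resource control can look like in practice: a sketch using the Docker SDK for Python (pip install docker). The image name, port, and limit values are placeholders:

```python
# Start a container with hard CPU and memory limits via the Docker SDK.
import docker

client = docker.from_env()
container = client.containers.run(
    "example/users-service:1.0",  # hypothetical image
    detach=True,
    ports={"8080/tcp": 8080},     # expose container port 8080 on host port 8080
    mem_limit="256m",             # the container cannot exceed 256 MB of RAM
    nano_cpus=500_000_000,        # 0.5 CPU, in units of 1e-9 CPUs
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
)
print(container.short_id)
```

The same limits are available as plain docker run flags (--memory and --cpus); the SDK is used here only to keep every example in one language.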

Containers on servers

This approach replaces processes with containers, since they give us greater flexibility and control. As with option 2, we can distribute the load across any number of machines.

Serverless containers

All the options described up to this point were based on servers. But software companies are not in the business of managing servers, which must be configured, patched, and upgraded; they are in the business of solving problems with code. So it shouldn’t be surprising that many companies prefer to avoid servers whenever possible. Serverless container services such as AWS Fargate and Google Cloud Run make that possible: we hand the provider a container image, and it provisions and scales the machines that run it.

Pros:

  • No servers: there is no need to maintain or patch servers.
  • Easy deployment: just build a container image and tell the service to use it.
  • Autoscaling: the cloud provider can provide more capacity when demand spikes or stop all containers when there is no traffic.

Cons:

  • Vendor lock-in: this is the big one. Moving away from a managed service is always challenging, as the cloud vendor provides and controls most of the infrastructure.
  • Limited resources: managed services impose CPU and memory limits that cannot be avoided.
  • Less control: we don’t have the same level of control we get with other options. You’re out of luck if you need functionality that is not provided by the managed service.

Option 4: Orchestrators

Orchestrators are platforms specialized in distributing container workloads over a group of servers. The most well-known orchestrator is Kubernetes, a Google-created open-source project maintained by the Cloud Native Computing Foundation.
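
To get a feel for the declarative model, here is a sketch that asks Kubernetes to keep three replicas of a hypothetical service running, using the official Python client (pip install kubernetes). Most teams would express the same thing as a YAML manifest applied with kubectl:

```python
# Declare a Deployment: Kubernetes keeps three replicas of the service alive.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, as kubectl does

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="users-service"),  # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=3,  # the orchestrator continuously enforces this count
        selector=client.V1LabelSelector(match_labels={"app": "users"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "users"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="users",
                        image="example/users-service:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

If a replica crashes or a node disappears, Kubernetes schedules a replacement without human intervention. That power, however, comes at a price: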

  • Complexity: orchestrators are known for their steep learning curve. It’s not uncommon to shoot oneself in the foot if not cautious. For simple applications, an orchestrator is overkill.
  • Administrative burden: maintaining a Kubernetes installation requires significant expertise. Fortunately, every decent cloud vendor offers managed clusters that take away all the administration work.
  • Skillset: Kubernetes development requires a specialized skillset. It can take weeks to understand all the moving parts and learn how to troubleshoot a failed deployment. Transitioning into Kubernetes can be slow and decrease productivity until the team is familiar with the tools.

Option 5: Deploy microservices as serverless functions

Serverless functions deviate from everything else we’ve discussed so far. Instead of servers, processes, or containers, we use the cloud to simply run code on demand. Serverless offerings like AWS Lambda and Google Cloud Functions handle all the infrastructure details required for scalable and highly available services, leaving us free to focus on coding (a minimal handler sketch follows the pros and cons below).

Pros:

  • Ease of use: we can deploy functions on the fly without compiling or building container images, which is great for trying things out and prototyping.
  • Easy to scale: you get (basically) infinite scalability. The cloud will provide enough resources to match demand.
  • Pay per use: you pay based on usage. If there is no demand, there’s no charge.

Cons:

  • Vendor lock-in: as with managed containers, you’re buying into the provider’s ecosystem. Migrating away from a vendor can be demanding.
  • Cold starts: infrequently-used functions might take a long time to start, because the cloud provider spins down the resources attached to idle functions.
  • Limited resources: each function has a memory and execution-time limit; functions cannot be long-running processes.
  • Limited runtimes: only a few languages and frameworks are supported. You might be forced to use a language that you’re not comfortable with.
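
As promised, here is a minimal handler sketch. It follows the AWS Lambda Python handler convention; the event fields assume the function sits behind an HTTP trigger such as API Gateway, and the logic is a placeholder:

```python
# A serverless function: the platform runs this on demand; there is no
# process, container, or server for us to manage. State must live outside,
# for example in a database.
import json

def handler(event, context):
    # With an API Gateway proxy integration, query parameters arrive here
    # (the field may be None when no parameters are passed).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```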

Conclusion

The best way to run a microservice application is determined by many factors: team size and skills, budget, expected scale, and how much operational work you are willing to take on. A single server running containers (or processes) is a fantastic starting point for experimenting or testing prototypes; as the application grows, you can move to multiple machines, serverless containers, an orchestrator, or serverless functions, trading simplicity for scalability and availability at each step.
