Microservices and Message Queues
While discussing microservices best practices, questions about message queues come up often. Specifically, a question I have been asked many times is whether it is OK for multiple microservices to access a shared message queue. This is a fair and important question. In general, sharing data among multiple microservices undermines their independent deployability and is therefore a bad idea. Intuitively, sharing message queues should be equally frowned upon. However, given the eventual, asynchronous character of many microservice-to-microservice invocations, inter-service communication could certainly benefit from queue-based message exchange. So, can and should we share queues between microservices or not?
The advice I have been giving so far is to step back from the implementation (the queue) and concentrate, in each case, on the business capability that the queue provides. By doing this we still implement the capability (which, admittedly, happens to use a queue), but the microservices in question never access the queue directly; instead, they simply invoke yet another microservice.
IMPORTANT: to be very clear, I am not advocating merely hiding a queue behind an HTTP API. That, to a large extent, is a waste of time, since the original problem was never the transport protocol to begin with. Simply substituting HTTP for, say, AMQP is definitely not the solution here. It is very important that the newly-minted microservice genuinely provides a real capability, even if that capability is more technical than what your subject-matter experts would like you to work on, on a regular day.
Let me give a couple of examples to clarify what I mean. In one recent, complex application, we avoided direct access to message queues by creating three capability-driven microservices:
- Publish-subscribe hub
- Job scheduler
- Batch job processor
While each of these microservices was backed by a message queue behind the scenes, the semantics of their APIs were not those of a message queue: we genuinely implemented three distinct capabilities (even if somewhat infrastructural ones), instead of just mechanically wrapping a message queue in HTTP.
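To make the distinction concrete, here is a minimal, in-process sketch in Python of what a capability-shaped publish-subscribe hub might look like (the names are hypothetical, and a real hub would of course sit behind a network API and be backed by something like RabbitMQ). The point is that callers deal only in topics, events and subscriptions; the queue is buried inside as an implementation detail.

```python
import queue
import threading
from collections import defaultdict

class PubSubHub:
    """Capability-level API: callers publish events to named topics and
    subscribe handlers to them. The queue underneath is never exposed."""

    def __init__(self):
        self._queue = queue.Queue()          # hidden infrastructure detail
        self._handlers = defaultdict(list)   # topic -> list of handlers
        worker = threading.Thread(target=self._dispatch, daemon=True)
        worker.start()

    def subscribe(self, topic, handler):
        """Register a callable to receive every event on a topic."""
        self._handlers[topic].append(handler)

    def publish(self, topic, event):
        """Hand an event to the hub; delivery happens asynchronously."""
        self._queue.put((topic, event))

    def _dispatch(self):
        # Background worker: drain the internal queue and fan events out.
        while True:
            topic, event = self._queue.get()
            for handler in self._handlers[topic]:
                handler(event)
            self._queue.task_done()
```

A consuming microservice would call `subscribe("orders.created", handler)` and never learn, or care, which queueing technology sits inside the hub.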
Real-Life Example of an API vs. Infrastructure
I was recently discussing a related topic with my good friend and well-known API expert Dave Goldberg, and we landed on an example that, I think, vividly illustrates when an API is much more than just an HTTP wrapper around underlying infrastructure. The example is not about queues, but the principles are identical and relevant.
Consider modern transactional email APIs such as Sendgrid, Postmark, Mailjet or Mandrill. While these APIs fundamentally just let you send email, they are most certainly not mere wrappers around the decades-old SMTP protocol. Rather, they put novel, optimized semantics on top of the basic functionality and add features that were never present in SMTP to begin with (e.g. status callbacks). Behind the scenes, all of these services obviously run massively scalable SMTP deployments, but conceptually they are transactional email APIs, not SMTP wrappers.
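The difference in semantics shows up in what a caller has to construct in each case. Below is a hedged Python sketch: the raw SMTP path really does use the standard library's message format, while the transactional payload shape is invented purely for illustration and does not match any particular vendor's actual API. SMTP speaks in MIME messages with fire-and-forget delivery; the capability-level API speaks in templates, variables and status callbacks, which are concepts SMTP simply has no notion of.

```python
import json
from email.message import EmailMessage

def build_smtp_message(sender, recipient, subject, body):
    """Raw SMTP semantics: construct a MIME message and hand it to a relay.
    Delivery status is essentially fire-and-forget."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg  # would be passed to smtplib.SMTP(...).send_message(msg)

def build_transactional_payload(sender, recipient, template_id,
                                variables, callback_url):
    """Capability-level semantics (hypothetical payload shape): templates,
    per-recipient variables, and a callback URL for delivery/bounce events."""
    return json.dumps({
        "from": sender,
        "to": recipient,
        "template_id": template_id,
        "variables": variables,
        "status_callback_url": callback_url,  # no SMTP equivalent exists
    })
```

The second function is not "SMTP over HTTP": its vocabulary (templates, callbacks) belongs to the email-sending capability, not to the transport underneath.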
When we recommend that the individual microservices in your architecture not be exposed to message queues directly, we are talking about an abstraction similar to what Sendgrid et al. have achieved: concentrating on a capability and hiding the low-level infrastructure (SMTP) as a non-essential implementation detail. Only a handful of your microservices should ever work directly with the likes of Kafka or RabbitMQ; most of your services should instead “talk” to the capabilities that those few services encapsulate.
I am sure that, as our understanding of microservices evolves, we may find a better answer to the original question, but from where things stand today, the technique described above seems to work. I hope it helps somebody else, just as it helped us.