AMQP (RabbitMQ) vs Kafka for asynchronous communication

Ankit Trehan
Feb 28, 2022


So you have decided to use asynchronous communication between your services/applications and now need to decide how to implement it. The two most common ways to implement async communication between microservices are RabbitMQ and Kafka. But what are they, and how do they differ from each other?

Let’s start with what exactly these terms mean:

AMQP and RabbitMQ

AMQP is the queueing protocol that RabbitMQ natively implements. According to amqp.org,

AMQP is an open standard for passing business messages between applications or organizations.

AMQP allows you to be platform agnostic: since it is an open standard, one can use any message broker that is AMQP compliant. AMQP passes messages over TCP/IP connections and only allows binary data to be sent across them. Some of the features AMQP offers are message queueing, reliability, and routing.

Although RabbitMQ uses AMQP natively for its protocol, it also supports STOMP, MQTT and HTTP protocols through plugins. It accepts messages from services called “producers” and forwards messages to “consumers”. RabbitMQ’s website likens it to a post office, where RabbitMQ is the post box which takes the message, the post office that routes the message and the letter carrier that will eventually deliver the message.

The major difference between RabbitMQ and the post office is that it doesn’t deal with paper, instead it accepts, stores, and forwards binary blobs of data ‒ messages.
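To make the post-office analogy concrete, here is a minimal produce-and-consume sketch using the pika Python client; the queue name and the localhost broker are my own assumptions for illustration:

```python
import pika

# Connect to a RabbitMQ broker assumed to be running on localhost.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# The "post box": declare a queue that holds messages until delivery.
channel.queue_declare(queue="letters")

# Producer side: hand a binary blob of data to the broker.
channel.basic_publish(exchange="", routing_key="letters", body=b"hello, world")

# Consumer side: the "letter carrier" delivers each message to this callback.
def on_message(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="letters", on_message_callback=on_message)
channel.start_consuming()  # blocks, delivering messages as they arrive
```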

Kafka

Kafka, according to its website, is an event streaming platform. To understand what that means, let's look at how Kafka's docs define event streaming:

Event streaming is the practice of capturing data in real-time from event sources like databases, sensors, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real-time as well as retrospectively; and routing the event streams to different destination technologies as needed.

So in other words, event streaming is the process of gathering data from many sources that produce events, and then storing/processing this data based on our needs. Kafka is event based and uses streams: a stream can be thought of as a huge pipeline of infinite data.

Kafka provides the ability to publish an event to a data stream, store these streams of events durably, process them, and finally subscribe to them. Under the hood, Kafka is a distributed system consisting of servers and clients communicating over the TCP protocol.
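As a rough sketch of publishing to and subscribing to an event stream, here is how it can look with the kafka-python client; the topic name and broker address are assumptions:

```python
from kafka import KafkaProducer, KafkaConsumer

# Publish an event to a stream (topic), assuming a broker on localhost:9092.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("sensor-readings", value=b'{"temp": 21.5}')
producer.flush()

# Subscribe to the same stream; events stay in the topic after being read.
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # start from the oldest retained event
)
for event in consumer:
    print(event.offset, event.value)
```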

Push vs Pull Based Messaging

RabbitMQ uses what is called a smart producer, which means the producer of the data decides when to send it. A prefetch limit is set on the consumer's end to keep the producer from overwhelming the consumer. Such a push-based system means the queue has an almost-FIFO structure; it is only "almost" because some messages can be processed faster than others, leading to an almost in-order queue.
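The prefetch limit is set on the consumer's channel; a minimal sketch with pika (the limit of 10 and the queue name are arbitrary illustrative choices):

```python
import pika

def process(body):
    print("processing", body)  # stand-in for real work

def handle(ch, method, properties, body):
    process(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="work")

# Allow at most 10 unacknowledged messages to be pushed to this consumer;
# the broker pauses deliveries until earlier messages are acked.
channel.basic_qos(prefetch_count=10)

channel.basic_consume(queue="work", on_message_callback=handle)
channel.start_consuming()  # blocks; the broker pushes messages to handle()
```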

Apache Kafka, on the other hand, uses a smart consumer, which means the consumer has to request the messages it wants from the stream. Kafka also lets each consumer control the offset from which it reads, so it can rewind or skip ahead in the stream. This means that all consumers can consume and process events at their own pace. One benefit of this pull system is that consumers can easily be added at any time, and the application can be scaled to implement new services without any changes to Kafka.
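A sketch of a pull-based consumer with kafka-python, polling at its own pace; the topic, group id, and broker address are illustrative:

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="billing",            # consumers in a group share the partitions
    auto_offset_reset="earliest",
    enable_auto_commit=False,
)

while True:
    # The consumer asks the broker for messages; nothing is pushed to it.
    records = consumer.poll(timeout_ms=1000, max_records=100)
    for batch in records.values():
        for record in batch:
            print(record.offset, record.value)
    consumer.commit()  # record our position so we can resume here later
```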

Queues vs Topics

RabbitMQ is built around a basic queue data structure: messages are published to an exchange, which routes them into queues, and consumers take messages from the head of a queue. Exchanges are how RabbitMQ routes messages, and there are different messaging patterns for different use cases: direct, topic, headers, and fanout.

  • In a direct exchange, messages are routed based on an exact match between the message's routing key and a queue's binding key (an example of this routing is described below).
  • The second type, the headers pattern, ignores the routing key and instead uses the message headers to decide where to send the message.
  • The third type, the topic pattern, routes using the routing key like the direct pattern, but it allows two wildcards: * (matches exactly one word) and # (matches any number of words). For example, a binding key of payments.* will match messages with routing keys payments.credit, payments.debit, etc.

  • Finally, in the fanout pattern, messages sent to a fanout exchange are broadcast to all queues and exchanges bound to it, regardless of routing key.

To learn more about these patterns and more, I would highly suggest Jack Vanlightly's blog post here.
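As a concrete sketch of the topic pattern, here is roughly how the payments.* binding above could look with pika; the exchange and queue names are my own illustrative choices:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# A topic exchange routes on the routing key, with * and # wildcards.
channel.exchange_declare(exchange="payments", exchange_type="topic")

channel.queue_declare(queue="all-payments")
channel.queue_bind(
    exchange="payments",
    queue="all-payments",
    routing_key="payments.*",  # matches payments.credit, payments.debit, ...
)

# This message's key matches the payments.* binding, so it lands on all-payments.
channel.basic_publish(exchange="payments", routing_key="payments.credit",
                      body=b'{"amount": 100}')
```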


RabbitMQ can route events by matching each message's routing key against queue binding keys (the direct messaging pattern). For example, with queues Q1 and Q2 bound to a direct exchange, any event with the key orange is put on Q1, whereas any event with the key black or green is put on Q2.
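A minimal sketch of that direct-exchange routing with pika, assuming a localhost broker and an illustrative exchange name:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="colors", exchange_type="direct")

# Bind Q1 to the key "orange", and Q2 to both "black" and "green".
channel.queue_declare(queue="Q1")
channel.queue_declare(queue="Q2")
channel.queue_bind(exchange="colors", queue="Q1", routing_key="orange")
channel.queue_bind(exchange="colors", queue="Q2", routing_key="black")
channel.queue_bind(exchange="colors", queue="Q2", routing_key="green")

# Routed to Q1; a key of "black" or "green" would go to Q2 instead.
channel.basic_publish(exchange="colors", routing_key="orange", body=b"event")
```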

Kafka uses topics. A topic can be described as a folder in a file system, where each event is a file in that folder. There can be zero, one, or multiple producers of events and zero, one, or multiple consumers. Events in Kafka aren't deleted when consumed, and you can set how long Kafka should retain your topics.

Each topic in Kafka can have multiple partitions; each partition can be looked at as a bucket. When producing events, an event's partition key determines which partition it is added to. For example, client 1 and client 2 can both add events to the same Kafka topic, with the events distributed across partitions based on their partition keys. Events with the same event key are always written to the same partition, and Kafka guarantees that any consumer of a given partition will consume the events from that partition in order.
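A sketch of key-based partitioning with the kafka-python producer; the topic name and keys are illustrative:

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Both "clients" publish to the same topic. Events that share a key are
# hashed to the same partition, so each user's events stay in order.
producer.send("orders", key=b"user-1", value=b"order created")  # client 1
producer.send("orders", key=b"user-2", value=b"order created")  # client 2
producer.send("orders", key=b"user-1", value=b"order shipped")  # same partition as user-1's first event
producer.flush()
```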

Consumer groups are how one scales a Kafka-based messaging system. An entire consumer group (consumer group 2, say) can be added later on for additional services if needed and can be subscribed to any partition in the topic.
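A sketch of adding that second consumer group with kafka-python; the group id and topic name are illustrative:

```python
from kafka import KafkaConsumer

# A brand-new service subscribes as its own consumer group; it gets its own
# copy of the stream without any change to producers or existing consumers.
analytics = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="consumer-group-2",
    auto_offset_reset="earliest",  # start from the oldest retained events
)
for event in analytics:
    print(event.partition, event.offset, event.value)
```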


Some other quick differences

Events in Kafka can be replayed since they are not deleted when they are consumed, whereas events in RabbitMQ cannot be replayed because they are deleted once they are consumed and acknowledged. Kafka can also process and transform the data in its streams (for example, with Kafka Streams) before consumers consume it, whereas RabbitMQ does not provide functionality to process data inside its queue.
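As an illustration of replay, a consumer can rewind to the beginning of a partition's retained log; a sketch with kafka-python, where the topic and partition are assumptions:

```python
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(bootstrap_servers="localhost:9092")

# Take explicit control of partition 0 of the topic instead of joining a group.
tp = TopicPartition("orders", 0)
consumer.assign([tp])

# Events are still in the log after being consumed, so we can rewind
# and replay the whole retained history.
consumer.seek_to_beginning(tp)
for record in consumer:
    print(record.offset, record.value)
```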

In RabbitMQ it is possible to specify message priorities and to consume messages based on the priority assigned to each message. Hence, RabbitMQ supports creating a priority queue, whereas there isn't such functionality in Kafka. Kafka offers much higher performance than message brokers like RabbitMQ: it can achieve high throughput with limited resources, a necessity for big data use cases. RabbitMQ is best for transactional data, such as order formation and placement, and user requests. Kafka works best with operational data like process operations, auditing and logging statistics, and system activity.
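A sketch of a RabbitMQ priority queue with pika; the queue name and priority levels are illustrative:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Declare a priority queue; messages may carry a priority from 0 to 10.
channel.queue_declare(queue="jobs", arguments={"x-max-priority": 10})

# Higher-priority messages are delivered ahead of lower-priority ones.
channel.basic_publish(exchange="", routing_key="jobs", body=b"urgent job",
                      properties=pika.BasicProperties(priority=9))
channel.basic_publish(exchange="", routing_key="jobs", body=b"routine job",
                      properties=pika.BasicProperties(priority=1))
```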

So what are their use cases?

Here are some of the most common use cases for both these systems. This list is in no way exhaustive but gives an idea of where these systems would be used.

RabbitMQ:

  • Legacy Systems: There might be times when you need to support legacy systems that use protocols like STOMP or MQTT, which RabbitMQ supports through plugins. You could also use a JMS plugin to communicate with JMS applications.
  • Complex Routing: As seen above, it is very easy to route RabbitMQ messages based on routing keys. If there is a requirement to route messages based on several criteria, RabbitMQ's direct and topic patterns can be used to achieve it.
  • Long Running Processes: RabbitMQ can be preferred in cases where there are long-running tasks, because there usually isn't a need for Kafka's strengths of storing, processing, and replaying event data. A queue of jobs that need to get done satisfies this use case.

Kafka:

  • Log Aggregation: Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption.
  • Stream Processing: Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing.
  • Event sourcing: Kafka can be used as an event store, which means that any changes in an application can be stored as events for later processing. This can help recover information that might otherwise be lost or corrupted during application run time.
  • High Activity: Kafka is preferred for high-volume data ingestion from IoT devices and other data points that consistently produce a lot of events.

Here is an interesting case study and blog post from DoorDash, who had to switch to a Kafka-esque system after they hit the limits of RabbitMQ's scalability. Overall, what is best for you? Like every other technology comparison, the answer is: it depends on what is best for your particular use case.

P.S.: I am always looking for feedback to improve and learn. Please leave comments if something can be improved upon or if something isn't accurate.

References/Further reading:

https://www.instaclustr.com/blog/rabbitmq-vs-kafka/
