The ‘Strict Order’ and ‘Exactly-Once’ conundrums

July 6, 2020

Message and event-based architectures have become increasingly popular in recent years, and the big cloud providers' managed services, which abstract the infrastructure complexities away from developers, have made messaging more accessible than ever. The likes of AWS Kinesis and GCP Pub/Sub significantly simplify operating a distributed messaging architecture.

This has also meant that the general understanding of these types of architectures has greatly improved. However, there are still a number of topics I continue to see a lot of confusion about, most notably strict order and exactly-once processing.

This article tries to explain the significance of these two concepts and why neither can be reduced to a simple property or addressed with a single obvious product or pattern.

Strict Order

‘Strict Order’ is one of the most poorly understood concepts I hear mentioned in the context of message architectures. I believe this is partly because message systems are often thought of as queues, and while they can be backed by queues, they don’t have to be.

The difference between message queueing and pub/sub is outside the scope of this article, but I refer to RabbitMQ vs. Kafka, an architect’s dilemma, which gives a good overview, and Martin Vesper’s excellent RabbitMQ introduction and Wunderlist case study.

The definition of a queue is easy to understand: ‘A list of data items, commands, etc., stored so as to be retrievable in a definite order, usually the order of insertion‘.
Let’s look at what this means in the context of a distributed application.

It is easy to assume ordering is simple when looking at a simple example:

  • A single publisher publishes a single message
  • The message service’s message broker ensures that message is delivered to the single subscriber
  • The subscriber processes that message

While this clearly ensures strict order of message processing (the order defined by the publisher), this architecture also has a significant downside: it cannot scale. It depends on a single publisher, a single message delivery service, a single subscriber and synchronous processing, because otherwise we can only guarantee that the first delivery of each message is in order (we’ll cover the challenges around re-delivery scenarios later in this article). This significantly limits possible throughput, so the architecture is not useful in many real-world scenarios.
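The single-publisher, single-subscriber pipeline above can be sketched with an in-process FIFO queue standing in for the message broker (a toy illustration, not a real broker):

```python
import queue

# A FIFO queue as a stand-in for the single message broker.
broker = queue.Queue()

# The publisher defines the order by enqueueing messages one at a time.
for i in range(1, 4):
    broker.put(f"message-{i}")

# A single subscriber draining the queue synchronously sees exactly
# the publisher's order -- but only because everything is serialised.
processed = []
while not broker.empty():
    processed.append(broker.get())

print(processed)  # ['message-1', 'message-2', 'message-3']
```

The moment we add a second subscriber or process asynchronously, this guarantee evaporates, which is exactly the scaling problem described above.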

‘Strict’ Order?

People say ‘strict’ order as if there were varying levels of ‘orderedness’. Most people will say that order is binary. However, where we maintain order, and how that impacts the overall solution architecture, is important to understand.

Scaling up

In many real-world scenarios, the ‘expensive’ processing happens on the subscription side. For example, payments may be published which subscribers need to process for Fraud Detection.

In this scenario, we can scale the above architecture by introducing multiple subscribers who can process messages in parallel:

This design can still ensure Publisher order: the order can be maintained in a queue or topic, and the message broker can even distribute the messages across all subscribers in order. However, it cannot guarantee that the subscribers will process the messages in order. The middle subscriber in the above diagram, for example, was quicker to process its message than the other two, resulting in messages being processed (and potentially passed on) out of order.

Subscribers could synchronise processing of these messages, but that would largely bring us back to the ‘single synchronous’ example in terms of throughput limitations as all subscribers must synchronise with one another.

More importantly, this shows that processing order is not a fundamental property of a message product, but a characteristic of the entire application.
Similar to the above, sometimes we require multiple publishers—for example, to support publish throughput or geographic distribution.

This design presents similar challenges. While the Message Service can manage the order of the messages it delivers to the subscriber, it cannot control synchronisation of the publishers. Publishers can synchronise amongst themselves, but more commonly they will publish a message attribute that the message service can use to define order; a timestamp or sequence number is frequently used for this. This still presents a number of challenges:

  • How do publishers ensure their timestamps or sequence numbers are synchronised across them? Clock drift, for example, can be an issue.
  • How long does the message service “wait” to ensure that it does not deliver a message ahead of one with an earlier timestamp or sequence arriving? The diagram above shows an example of a slow publisher causing the messages to be delivered to the subscriber out of order.

These are important considerations which, in my opinion, cannot be solved with a single design or by a single product. Each application will have to consider which trade-offs are acceptable.

Back to centralised architectures then?

The above suggests that it isn’t possible to have a distributed system where order is important that is also scalable, but that doesn’t mean we should stick to centralised architectures. When somebody claims ‘Strict Order’ is a requirement, we must ask further questions and understand what this means. Does order really matter? Where does it matter and when does it matter? Often ‘eventual order’ is good enough.

Imagine we are building a surveillance system that captures events from multiple sensors (cameras, door swipes, telephone calls) and enriches them before compiling a detailed log that allows authorities to retrace the exact steps of an individual (apologies to the more paranoid amongst you). Enrichment could include facial recognition. In order to scale the system, we must not rely on a single, synchronous, processing stream. Sensors may be slow or have high latency and processing may be slow, so this problem lends itself very well to a distributed, asynchronous, architecture like this:

In order to retrace somebody’s steps accurately, it’s important that events are stored such that they can be retrieved in order. How is that possible when everything is asynchronous? This could be quite simple: if each sensor publishes a timestamp along with the message payload, and that timestamp is persisted in the events store, it can be used to retrieve events in the order they happened. Our message service does not need to consider order at all. However, we can still retrieve events in order, post-processing. Hence order is a characteristic of the application as a whole, not the messaging product.
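The surveillance example above can be sketched in a few lines: events are persisted in whatever order the asynchronous pipeline delivers them, and order is reconstructed only at query time from the sensor-provided timestamps (the event shape here is illustrative):

```python
# Events are stored in arrival order, which may not be event order.
events_store = []

def ingest(event):
    # Delivery and processing order are irrelevant; we just persist
    # the payload together with the sensor-provided timestamp.
    events_store.append(event)

def retrace(subject):
    # Order is recovered at read time from the publisher timestamps,
    # so the message service never had to consider it.
    return sorted(
        (e for e in events_store if e["subject"] == subject),
        key=lambda e: e["ts"],
    )

# Events arrive out of order because sensor and enrichment latency varies.
ingest({"subject": "alice", "ts": 3, "sensor": "camera-2"})
ingest({"subject": "alice", "ts": 1, "sensor": "door-swipe"})
ingest({"subject": "alice", "ts": 2, "sensor": "camera-1"})

print([e["sensor"] for e in retrace("alice")])
# ['door-swipe', 'camera-1', 'camera-2']
```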

But we said publishing timestamps is problematic because of clock drift issues, I hear you say. Sure, but unless we are tracking super-humans, millisecond accuracy is more than sufficient, so clock drift is an acceptable risk.

This illustrates that it is not useful to specify ‘strict order’ as a hard requirement without taking the full context into account.

Nice, so order never matters for messaging?

Not quite, sometimes order of messages really is important. Let’s consider the following (overly simplified) example of a trade processor:

Let’s say the only validation we are doing is to ensure that the user has not exceeded their daily trade limit.

Why is order important in this use case? In most circumstances it isn’t, as each trade simply increases the customer’s running daily balance until the limit is breached. However, consider the following sequence for a customer with a trade limit of $1000 per day:

  1. trade booking with value of $500
  2. trade booking with value of $400
  3. cancel previous booking
  4. trade booking with value of $300

The above scenario is valid, because the cancelled trade means the limit has not been breached. However, if our system inadvertently processed message four before message three, the final trade would incorrectly be flagged as invalid.
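A toy validator makes the order sensitivity concrete: the same four messages, applied in two different orders, produce different outcomes (this is an illustration of the scenario, not a real trade processor):

```python
LIMIT = 1000  # daily trade limit for this customer

def validate(messages):
    total, outcomes = 0, []
    for kind, value in messages:
        if kind == "cancel":
            total -= value  # cancellation releases the booked amount
            outcomes.append("ok")
        else:
            accepted = total + value <= LIMIT
            if accepted:
                total += value
            outcomes.append("ok" if accepted else "rejected")
    return outcomes

in_order = [("book", 500), ("book", 400), ("cancel", 400), ("book", 300)]
swapped  = [("book", 500), ("book", 400), ("book", 300), ("cancel", 400)]

print(validate(in_order))  # ['ok', 'ok', 'ok', 'ok']
print(validate(swapped))   # ['ok', 'ok', 'rejected', 'ok']
```

In the swapped run the $300 booking is rejected, even though, had the cancellation been processed first, it would have been within the limit.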

One could argue that only eventual consistency matters here, since only the total sum is relevant and trades can be cancelled retrospectively. This is true in a lot of cases, but let’s assume that, in this scenario, we need to know as soon as possible whether the limit has been breached, before passing the trade to a settlement system.

Let’s assume that, at peak, we need to process 1000s of trades per second, in which case our system could look like this:

‘Customer limits’ is a persistent store of daily limits and running totals (a Redis cache for example) and Settlement is an external system to ours.

I already described at the start of this article how this architecture fundamentally cannot support ‘Strict Order’ across the entire application, while also supporting high availability and high throughput. Each trade processor should be able to operate independently and a single faulty or slow processor shouldn’t mean the entire system stops or fails. Instead, it should be stopped and replaced automatically.

Because we cannot ensure strict order over the entire set of messages, we must find other ways to guarantee we process the messages in our scenario in the correct order. In this case, when we said order was important, actually what we should have said is that order is important for a particular customer of our trading application. Our system is not impacted by different customers’ bookings being ordered out of sync, as long as we process all messages for a given customer in the order they were received.

This can be achieved by simple partitioning. If all of the bookings for a given customer are always captured by the same Trade Capture service and processed by the same Trade Processor service (or their replacements once they have failed), then our application sufficiently guarantees order. In extremis we could have a single Trade Capture service, Message Broker and Trade Processor per customer. In practice, we can group customers—for example by partitioning using the modulus of the customer id hash and the number of processors we wish to use. In Kafka, for example, we can align subscribers to specific topic partitions to ensure all messages within a partition are processed in order.
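The partitioning scheme described above can be sketched as follows. The hash function choice is an assumption for illustration (Kafka’s default partitioner uses murmur2; a stable stdlib hash like SHA-256 serves the same purpose here):

```python
import hashlib

NUM_PARTITIONS = 4  # i.e. the number of Trade Processors we wish to run

def partition_for(customer_id: str) -> int:
    # A stable hash of the customer id, modulo the partition count,
    # so all of a customer's bookings always land on the same partition
    # (and therefore the same Trade Processor).
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# The mapping is deterministic: every booking for a given customer
# routes to the same partition, preserving per-customer order.
print(partition_for("customer-42") == partition_for("customer-42"))  # True
```

Note that per-customer order is preserved without any coordination between partitions, which is what lets the processors scale independently.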


Exactly-once

I wanted to cover ‘Exactly-once’ in the same article as ‘Strict Order’ because they are both problematic for similar reasons. People will often say they need ‘Exactly-once Delivery’ when, in fact, what they mean is that they need ‘Exactly-once Processing’.

What is the difference? ‘Exactly-once delivery’ suggests that the onus is on the message broker to ensure a message is delivered only once. In fact, there are other ways, often better and simpler, to ensure a message is only processed once, which is the actual requirement.

Message brokers generally use an acknowledgement mechanism to track whether messages have successfully been delivered. Consider our first example above:

The message broker above does not deliver the second message until it has received an acknowledgement from the subscriber that it has processed the first. This is how message brokers typically guarantee delivery.

However, consider the concerns we highlighted in the ‘Strict Order’ section. Here we should recognise similar concerns:

  • Timeouts: What should the message broker do if it does not receive an acknowledgement from the subscriber? Typically, we can configure an `ack` timeout, after which the broker will redeliver the message, or in the case of multiple subscribers, it may be configured to deliver it to another subscriber.
    Set this timeout too high and we risk unacceptable waits, which in turn may cause messages to back up on the message broker until the system is unable to catch up.
    Set the timeout too low and the message broker may re-deliver a message that is still being processed. Therefore, whenever such a timeout is set, there cannot be an absolute exactly-once delivery guarantee.
  • Throughput: If a message broker needs to wait for a message to be acknowledged before it can deliver the next, it operates in a synchronous way. Typically, we can configure a broker to deliver a batch of messages at the same time and similarly a subscriber can batch acknowledge, which improves throughput but introduces extra complexity in failure scenarios. What if the subscriber fails to acknowledge messages in the batch? How does the Message Broker know which messages should be resent? The only option is for the message broker to redeliver all messages in the batch, which means that again we have violated the exactly-once delivery guarantee.

The above shows that, unless throughput and scalability are of no concern, which is hardly ever the case, a message broker can only guarantee at-least-once or at-most-once delivery, not exactly-once.
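The timeout failure mode can be shown with a toy broker loop: the subscriber processes a message, but its acknowledgement is lost, so after the timeout the broker must redeliver, and the subscriber observes the message twice. This is a deliberately simplified sketch, not a model of any specific broker:

```python
def run_broker(messages, lost_acks):
    """Simulate delivery where acks for messages in `lost_acks` never
    arrive before the timeout, forcing the broker to redeliver."""
    delivered = []
    for msg in messages:
        delivered.append(msg)
        if msg in lost_acks:
            # No ack before the timeout: the broker cannot distinguish
            # "subscriber crashed" from "ack was lost", so it redelivers.
            delivered.append(msg)
    return delivered

print(run_broker(["A", "B"], lost_acks={"A"}))  # ['A', 'A', 'B']
```

Message “A” was processed once but delivered twice: at-least-once, not exactly-once.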

How can we achieve exactly-once processing without exactly-once delivery?

While we recognise that it is not practical for a message broker to support exactly-once delivery, it is true that often we have scenarios where we require or desire exactly-once processing.

Consider again our trading application example above. When we send a trade to the settlement system, we want to ensure we don’t create a double booking. Often we would like these systems to be idempotent, thereby allowing us to send the same booking twice without the downstream system processing it twice, but sometimes this is not practical. For example, idempotency usually requires a lookup (to identify an already-seen identifier, for example), which may not be an option where latency or processing speed is critical.
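The idempotency lookup described above amounts to the downstream system remembering trade ids it has already booked and silently dropping repeats. A minimal sketch (the id set would in practice live in a shared store, which is where the extra lookup cost comes from):

```python
seen_ids = set()  # ids the settlement side has already booked
bookings = []

def settle(trade_id, amount):
    # The lookup that makes the system idempotent -- and that adds
    # latency to every single booking.
    if trade_id in seen_ids:
        return False  # duplicate delivery: ignore
    seen_ids.add(trade_id)
    bookings.append((trade_id, amount))
    return True

settle("t-1", 500)
settle("t-1", 500)  # redelivered by the broker: ignored
print(bookings)     # [('t-1', 500)]
```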

Fortunately, there are alternative patterns we can use to ensure a message is only processed once.

Let’s have a look at the following scenario in our trading application example:

The above example shows one of our Trade Processors failing after it has consumed a batch of three messages from the message broker. Even though the processor has passed message #7 to the settlement service, the message broker has no confirmation of this.

How can we recover from this while ensuring we process all messages only once? One way we can do this fairly easily is by asking the target system for the last message it has received. In this scenario, when the Trade Processor restarts it can go through the following sequence:

  • Identify as a processor of the ‘green’ partition
  • Ask the Settlement service for the last ‘green’ message it has processed, #7 in this case
  • Create a new ‘green’ subscription to the Message Broker, requesting for all messages after #7

How do we identify the last processed message?

Imagine the settlement instructions were delivered over a Kafka stream. We can publish the original message offset inside each message sent to the settlement service. When we then request the last published message from the settlement stream or partition, we can easily identify the offset to use for the new subscription from the trade capture stream.
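The recovery sequence can be sketched as follows. The stream and field names are illustrative (lists stand in for Kafka partitions), but the mechanism is the one described above: each downstream message embeds the source offset it was produced from, so a restarted processor asks the settlement stream for its last message and resubscribes from the next offset:

```python
# The source (trade capture) stream, keyed by offset.
source_stream = [
    {"offset": 6, "trade": "t-6"},
    {"offset": 7, "trade": "t-7"},
    {"offset": 8, "trade": "t-8"},
    {"offset": 9, "trade": "t-9"},
]

# What reached settlement before the crash; crucially, each entry
# embeds the originating source offset.
settlement_stream = [
    {"source_offset": 6, "trade": "t-6"},
    {"source_offset": 7, "trade": "t-7"},
]

def resume_from(settlement, source):
    # Ask the downstream system for the last message it processed,
    # then resubscribe to everything after that offset.
    last = settlement[-1]["source_offset"] if settlement else -1
    return [m for m in source if m["offset"] > last]

remaining = resume_from(settlement_stream, source_stream)
print([m["trade"] for m in remaining])  # ['t-8', 't-9']
```

Message #7, which was settled but never acknowledged to the broker, is correctly skipped on restart: exactly-once processing without exactly-once delivery.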


Strict Order and Exactly-once are terms that are often used in the context of distributed architectures. People sometimes treat them as requirements of a particular messaging product when comparing, for example, Kafka with RabbitMQ. Hopefully this article has shown that we need more context when reviewing requirements for ordering and processing guarantees.

