Bloated Rabbit – Part 1

November 30, 2015

Can a rabbit with a 128GB stomach swallow 128GB of carrots?

I’m working with a client at the moment, and they seem to be finding that no, it can’t. I want to figure out why not, so I’m going to have to find out a bit about Erlang memory management.

RabbitMQ messages are stored as Erlang binaries. Binaries larger than 64 bytes are stored in their own pool, shared across the whole Erlang VM. We will just focus on these large binaries, because that’s all that’s relevant in this case.

Now, you could just Google for it and read everyone else’s plaintive cries on the subject… Allow me to explain it here anyway; hopefully it will be useful when we analyse RabbitMQ performance.

An Erlang large binary is accessed through a reference, and the reference is stored in process memory. At some point after the process no longer needs it, garbage collection for that process will run and release the reference, which decrements the binary’s reference count; when the count reaches zero, the memory is released. The general complaint is that some processes never experience much memory pressure, and references to large binaries do not contribute to that pressure, so it’s not unusual for large binaries to accumulate until all of memory is consumed.
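A toy model of why this happens (in Python, since the point is language-agnostic; every size and threshold here is invented for illustration): the process heap only sees the small references, so heap pressure builds up far more slowly than the shared binary pool it is pinning.

```python
# Toy model: a process heap holds 8-byte references to 4 KB shared
# binaries; GC only fires on process-heap pressure, which the shared
# pool does not contribute to.

HEAP_GC_THRESHOLD = 64 * 1024   # pretend GC runs when the heap hits 64 KB
REF_SIZE = 8                    # a reference is tiny...
BINARY_SIZE = 4 * 1024          # ...but the binary it pins is not

heap_bytes = 0
pool_bytes = 0
while heap_bytes + REF_SIZE <= HEAP_GC_THRESHOLD:
    heap_bytes += REF_SIZE      # reference lands on the process heap
    pool_bytes += BINARY_SIZE   # payload lands in the shared pool

# By the time GC would finally run, the pool holds 512x the heap size.
print(pool_bytes // heap_bytes)  # -> 512
```

So a process can look entirely innocent to the GC while holding gigabytes of binaries alive, which is exactly the complaint above.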

OK, so that’s interesting, but it’s not my problem – my message binaries are all referenced. I want to know how having a lot of large binaries affects the cost of allocation. I’m going to assume that my binaries are all 4KB, and that there are about 2M of them. What does that look like in Erlang’s memory management data structures?

Erlang has a memory allocation hierarchy: it has containers (Erlang’s documentation calls them carriers), and inside them, it has blocks. Containers default to 128KB, so there are at least 64K containers to hold our messages. Free blocks are stored in a red-black tree sorted by size, so an allocation is a search in the tree for the best-fit block. When there is no block large enough, a new container is created and the block is allocated from that. Allocation is therefore basically O(log(N)), where N is the number of free blocks.
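The best-fit lookup can be sketched with a sorted free list standing in for the red-black tree; the cost is the same O(log N) search, plus a fallback to a fresh container when nothing fits. The block sizes below are invented, and this deliberately ignores details like coalescing of adjacent free blocks.

```python
import bisect

CONTAINER_SIZE = 128 * 1024  # a new container is created when no block fits

# Free blocks kept sorted by size, standing in for the red-black tree.
free_blocks = [512, 2048, 4096, 8192, 65536]

def alloc(size):
    """Best fit: smallest free block >= size, else carve a new container."""
    i = bisect.bisect_left(free_blocks, size)     # O(log N) search
    if i < len(free_blocks):
        block = free_blocks.pop(i)
        leftover = block - size
        if leftover:
            bisect.insort(free_blocks, leftover)  # return the remainder
        return block
    # No free block is large enough: allocate a fresh container.
    bisect.insort(free_blocks, CONTAINER_SIZE - size)
    return CONTAINER_SIZE

print(alloc(4096))    # exact fit -> 4096
print(alloc(100000))  # nothing fits -> new 128 KB container -> 131072
```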

A lower bound for N is the number of containers – so log(N) is at least 16, which doesn’t sound terrible. That would be an extremely high price for allocation in general, but for RabbitMQ, it’s a cost per message, so to matter it needs to be slow compared to network IO.
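Spelling out that arithmetic (taking “about 2M” as 2^21 messages so the numbers come out even):

```python
import math

MESSAGES = 2**21             # "about 2M" 4 KB message binaries
MSG_SIZE = 4 * 1024          # 4 KB per binary
CONTAINER = 128 * 1024       # default container size

total = MESSAGES * MSG_SIZE         # total bytes of message binaries
containers = total // CONTAINER     # at least this many containers
depth = math.log2(containers)       # best-fit tree search depth

print(total // 2**30)  # -> 8 (GB of binaries)
print(containers)      # -> 65536 containers
print(depth)           # -> 16.0 comparisons per lookup
```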

Perhaps fragmentation means N is much higher. I think I’ll have to measure that. Erlang seems to have a way to do exactly that: the instrument module, although it’s deemed experimental.

First we need to stop RabbitMQ and start it again with the required options:

$ sudo /etc/init.d/rabbitmq-server stop
 * Stopping message broker rabbitmq-server [ OK ] 
$ sudo RABBITMQ_SERVER_START_ARGS="+Mim true +Mis true" \
    /etc/init.d/rabbitmq-server start
 * Starting message broker rabbitmq-server

Now I can connect to the REPL and try some things out:

erl -setcookie $(sudo cat /var/lib/rabbitmq/.erlang.cookie) \
    -sname test@localhost \
    -remsh rabbit@mrclumsy
Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Eshell V5.10.4 (abort with ^G)
(rabbit@mrclumsy)1>  instrument:memory_status(allocators).


That seems to be working, and we can see at least a bit of information about the binary allocator.

Blocks refers to the number of blocks allocated, while sizes refers to the bytes of memory allocated. In each case the three values are: current, peak since the last call, and all-time peak.

That doesn’t shed any light in this case.

(rabbit@mrclumsy)10> instrument:holes(instrument:sort(instrument:memory_data())).

That prints some numbers, most of them the number 8. Rather anticlimactic. I guess I could check whether this output changes visibly as I send messages to RabbitMQ.
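My reading of the documentation is that instrument:holes/1 prints the unused gap between the end of each allocated block and the start of the next. A minimal model of that computation (the addresses are invented):

```python
# Each allocation is (address, size), sorted by address; a "hole" is the
# gap between the end of one block and the start of the next.
allocations = [(0, 4096), (4104, 4096), (8208, 4096), (12312, 65536)]

holes = [
    nxt_addr - (addr + size)
    for (addr, size), (nxt_addr, _) in zip(allocations, allocations[1:])
]
print(holes)  # -> [8, 8, 8]
```

If that reading is right, a wall of 8s would just mean tiny alignment-sized gaps between adjacent blocks, i.e. very little fragmentation.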

The other thread of this is that I’ve hacked on the PerfTest tool in the RabbitMQ Java client. Here’s the fork. All it does is add random variance to the message size. This seems to have the major impact on RabbitMQ memory usage that I was expecting, but I haven’t been able to verify what is happening via the instrumentation above.

Not sure exactly what this means at this stage; there seems to be a bug in PerfTest that means the first consumer doesn’t keep up with the producer (additional consumers do).



