I wrote a short script to display the results of the 2004 US presidential elections as a bar chart, shown below (click for full-size image, which will make more sense):

The width of each state indicates the number of electoral college votes it has; the height the extent to which votes for one party exceeded votes for the other. The script is in Python and uses Cairo for drawing. The most challenging part of writing the script was…

…placing the labels that indicate the states.

You can’t just place the labels above/below the states, because some of the states are too narrow; the labels would overlap each other. So some labels need to be moved a little to one side or the other for them all to fit. It took me a little while to find a neat, satisfying algorithm for solving the problem. I’m sure it’s well known – it’s too simple not to be, and I’m sure I’m not the first with this problem – but it was fun to find.

Formally, we have some minimum distance between labels *d*, as well as smallest and largest positions they can be in (*x*_{min} and *x*_{max}). Each label wants to be in a position *a*_{i}, so we want to choose each *x*_{i} such that *x*_{i+1} ≥ *x*_{i} + *d*, but |*a*_{i} – *x*_{i}| is small. The first requirement is strict and formally defined; the second is a fuzzier goodness factor.

My solution looks something like this:

```python
offsets = [s.desired_x - i * d for i, s in enumerate(states)]
max_offset = x_max - d * (len(states) - 1)
for i, s in enumerate(states):
    new_offset = (max(offsets[:i+1]) + min(offsets[i:])) / 2.0
    s.label_x = i * d + max(x_min, min(max_offset, new_offset))
```

How does it work?

Define *a*‘_{i} = *a*_{i} – *i**d*, and similarly *x*‘_{i} = *x*_{i} – *i**d*. Note that *x*_{i+1} ≥ *x*_{i} + *d* if and only if *x*‘_{i+1} ≥ *x*‘_{i}, in other words, *x*‘_{i} has to be a monotonically nondecreasing sequence. Note also that |*a*‘_{i} – *x*‘_{i}| = |*a*_{i} – *x*_{i}|. So we’ve reduced the problem to finding a nondecreasing sequence that’s “close” in some sense to an arbitrary sequence. We make two non-decreasing sequences – “smallest point ahead of me” and “largest point behind me” – and average the two, which is guaranteed to yield another non-decreasing sequence. We then clamp this sequence between suitable min/max bounds, which again is guaranteed to yield a non-decreasing sequence. So it’s not hard to prove that labels never overlap.
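The same averaging-and-clamping idea can be packaged as a standalone function; here is a minimal runnable sketch (the function name and toy data are mine, not from the original script):

```python
def place_labels(desired, d, x_min, x_max):
    """Place labels at least d apart within [x_min, x_max], near `desired`.

    Works in offset coordinates o_i = a_i - i*d, where the spacing
    constraint x_{i+1} >= x_i + d becomes "offsets are non-decreasing".
    """
    n = len(desired)
    offsets = [a - i * d for i, a in enumerate(desired)]
    max_offset = x_max - d * (n - 1)
    result = []
    for i in range(n):
        # Average two non-decreasing sequences -- "largest offset behind
        # me" and "smallest offset ahead of me" -- then clamp to bounds.
        new_offset = (max(offsets[:i + 1]) + min(offsets[i:])) / 2.0
        result.append(i * d + max(x_min, min(max_offset, new_offset)))
    return result
```

For instance, with three labels bunched near 0 and one at 5, `place_labels([0.0, 0.1, 0.2, 5.0], 1.0, 0.0, 10.0)` fans the first three out to 0, 1, 2 and leaves the last at its desired position, 5.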

How does it do on the goodness factor? As can be seen from the example, not too badly. Where there is plenty of room, the *a*‘_{i} sequence is monotonically increasing, and so the two sequences we average cling to it; the result is that *x*‘_{i} = *a*‘_{i}. Where labels are bunched together, they are spread out evenly, centered on the middle of the bunch, except at the ends, where they fan out into the space available.

It’s not perfect; look in particular at New Jersey, which is making more room for its neighbours than it needs to. I may yet be able to improve on this. But it’s pleasing when you replace a nastily hacked-up thirty-line successive approximation solution with a five-line solution that runs faster and produces more aesthetically pleasing results.

## 8 Comments

Untested python code for cvxopt:

```python
d = 0.1
a = arange(n) + uniform(-d, d, n)

x = variable(n, 'x')
c_xlow = ( xmin <= x )
c_xup = ( x <= xmax )
c_d = [ x[i+1] - x[i] >= d for i in range(n-1) ]

# l_1 norm version
t = variable(n, 't')
c_tlow = ( -t <= a - x )
c_tup = ( a - x <= t )
lp_1 = op(t, [ c_xlow, c_xup ] + c_d + [ c_tlow, c_tup ])

# l_infinity norm version
t_inf = variable(1, 'tinf')
c_tinflow = ( -t_inf <= a - x )
c_tinfup = ( a - x <= t_inf )
lp_inf = op(t_inf, [ c_xlow, c_xup ] + c_d + [ c_tinflow, c_tinfup ])

lp_1.solve()
print x.value(), t.value()
```

Should have put code tags around formulas and code in the previous comments.

Probably overkill, but you can try to formulate this as a linear programming problem.

For the *l*_{1} norm:

min Σ_{i} *t*_{i}
s.t. *x*_{min} ≤ *x*_{i} ≤ *x*_{max}
*x*_{i+1} – *x*_{i} ≥ *d*
–*t*_{i} ≤ *a*_{i} – *x*_{i} ≤ *t*_{i}

For the *l*_{∞} norm:

min *t*
s.t. *x*_{min} ≤ *x*_{i} ≤ *x*_{max}
*x*_{i+1} – *x*_{i} ≥ *d*
–*t* ≤ *a*_{i} – *x*_{i} ≤ *t*

The *l*_{2} norm would give a quadratic programming problem.
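As a quick illustration of what those constraints and the *l*_{1} objective amount to (the function names and toy numbers here are illustrative, not from the formulation above):

```python
def feasible(x, d, x_min, x_max):
    # The LP's hard constraints: spacing x_{i+1} - x_i >= d,
    # plus the range bounds on the first and last label.
    return (all(x[i + 1] - x[i] >= d for i in range(len(x) - 1))
            and x[0] >= x_min and x[-1] <= x_max)

def l1_cost(x, a):
    # Minimising sum_i t_i subject to -t_i <= a_i - x_i <= t_i
    # is the same as minimising sum_i |a_i - x_i|.
    return sum(abs(ai - xi) for ai, xi in zip(a, x))

a = [0.0, 0.1, 5.0]   # desired positions, two bunched together
x = [0.0, 1.0, 5.0]   # a feasible placement with d = 1
```

Here `feasible(x, 1.0, 0.0, 10.0)` holds, and the *l*_{1} cost is 0.9 — the distance the middle label had to move.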

You can solve linear programming (and other) problems in python using for example cvxopt (http://www.ee.ucla.edu/~vandenbe/cvxopt/).

Yes, that seems a plausible approach. It occurred to me to cast this as a quadratic programming problem, but I hadn’t realised that such a convenient library for it existed – thanks for the pointer.

```python
sol = cvxopt.solvers.qp(
    # P: identity (minimise the l_2 distance from the desired positions)
    cvxopt.base.matrix([[1.0*int(i == j) for j in range(len(xpos))] for i in range(len(xpos))]),
    # q: the negated desired positions
    cvxopt.base.matrix([[-x for x in xpos]]),
    # G: bound and spacing constraints, one column per label
    cvxopt.base.matrix([[0.0 for j in range(i)] + [-1.0, 1.0] + [0.0 for j in range(len(xpos)-i-1)] for i in range(len(xpos))]),
    # h: right-hand sides -xmin, -lwidth (spacing), xmax
    cvxopt.base.matrix([0.0 - xmin] + [-lwidth for i in range(len(xpos)-1)] + [xmax]))
```

Hi Paul,

Specifying this problem as sparse will be much faster for anything but toy problems:

```python
q = matrix(xpos)
G = (spmatrix(-1, range(n), range(n), (n+1, n)) +
     spmatrix( 1, range(1, n+1), range(n), (n+1, n)))
h = matrix([0.0 - xmin] + [-lwidth for i in range(len(q)-1)] + [xmax])
res = solvers.qp(P, q, G, h)
```

Best

Joachim

You could also write your own KKT solver (§8.7 in the CVXOPT documentation). Essentially you would solve the positive definite system

(I + G′ diag(d) G) dx = r

where ‘d’ is a given positive scaling vector. That’s a tridiagonal system, which can be solved very efficiently; the next release of CVXOPT will include the banded solvers in LAPACK suited for something like this.

Joachim
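To sketch why a tridiagonal system is so cheap to solve (this is a generic Thomas-algorithm illustration in pure Python, not CVXOPT’s actual implementation):

```python
def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: O(n) solve of a tridiagonal system.

    `lower` and `upper` have length n - 1; no pivoting, so this
    assumes the system is well-conditioned (e.g. positive definite).
    """
    n = len(diag)
    c = [0.0] * (n - 1)   # modified superdiagonal
    g = [0.0] * n         # modified right-hand side
    c[0] = upper[0] / diag[0]
    g[0] = rhs[0] / diag[0]
    # Forward elimination: one short row operation per equation.
    for i in range(1, n):
        m = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / m
        g[i] = (rhs[i] - lower[i - 1] * g[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = g[-1]
    for i in range(n - 2, -1, -1):
        x[i] = g[i] - c[i] * x[i + 1]
    return x
```

For example, the system with diagonal (2, 3, 2), off-diagonals of ones, and right-hand side (3, 5, 3) has solution (1, 1, 1) — two linear passes, no matrix factorisation needed.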

Now I have three solutions:

I think the new algorithm produces the same optimal solution defined by the quadratic programming algorithm, but my current proof of it is so ugly that it’s too much work to set it out here and I don’t trust it. I think I’m on the tail of a more elegant proof now – I’ll make a new blog entry if I find it. In any case, this algorithm is much faster than the CVXOPT-based solution and produces results that look indistinguishable, without the artifacts of the first algorithm.

Fast algorithm: (image)

CVXOPT-based algorithm: (image)

New fast and perhaps optimal algorithm: (image)