So, to add more… well, ‘fuel to the fire’ is the wrong term—more sticks to the pile, perhaps? More monads to the stack? Here we go:

If you DuckDuckGo “are promises monads” the first nine answers are:

- Promises are the monad of asynchronous programming
- Javascript—why are Promises monads?
- JavaScript Promises are just like monads and I can explain…
- JavaScript Promises in two minutes—they’re just like monads…
- Javascript pictures: why are Promises monads?
- Javascript: Promises and monads—breakthrough technologies
- Functional JavaScript—functors, monads, and Promises
- No, a promise is not a monad
- A proof that Promises/A is a monad

Now, the top answer to ‘Javascript—why are promises monads?’ is that ‘they aren’t monads’, and ‘Javascript pictures: why are promises monads?’ does not come to a firm conclusion. So let’s chalk that up as 6–2 in favour of Promises being monads!

The argument that promises are not monads is usually stated in terms of the violation of one or more of the ‘monad laws’, but I want to make a more intuitive argument that should hopefully clarify where monads are and are not applicable.

But first—what are Promises? And what is a monad?

A Promise is simply delayed execution. You have a function that takes some time, and you want to run some more code only once the answer becomes available. In Javascript, this would be written something like this:

```javascript
function pr1() {
  return new Promise((resolve, reject) => {
    // do something long-running to get an answer...
    if (answer === null) {
      reject("oh no that didn't work");
    } else {
      resolve(answer);
    }
  });
}

function pr2(answer) {
  return new Promise((resolve, reject) => {
    // do something else that is long-running and call
    // reject or resolve as appropriate...
  });
}
```

And then you can chain them together like this:

```javascript
pr1().then(pr2).catch(function(failureMessage) { … });
```

Here the function passed to *catch()* handles a failure from either *pr1* or *pr2*.
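To make that concrete, here is a runnable sketch with hypothetical stand-ins for *pr1* and *pr2*: the long-running work is replaced by a timer, and *pr2* is made to fail so that the shared *catch()* fires.

```javascript
// Stand-in for pr1: the "long-running work" is just a short timer.
function pr1() {
  return new Promise(resolve => setTimeout(() => resolve(41), 10));
}

// Stand-in for pr2: always fails, so the chain's catch() will fire.
function pr2(answer) {
  return new Promise((resolve, reject) => reject("pr2 failed on " + answer));
}

pr1()
  .then(pr2)
  .then(() => console.log("never reached"))
  .catch(msg => console.log("caught:", msg)); // caught: pr2 failed on 41
```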

So, what is a monad? A monad represents a chainable operation, or the result of a chainable operation. In this case *pr1* and *pr2* do return chainable operations, so we can see where the Promise=monad idea comes from. Indeed, wherever you see something of the form *foo().glue(bar).glue(baz).glue(boo)*, the thought ‘oh, there goes a monad’ immediately arises.
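The shape is easy to fake up. Here is a toy ‘Maybe’-style chain in JavaScript (all names invented for illustration): each step either produces a value or *null*, and *glue()* short-circuits as soon as any step fails.

```javascript
// A toy chainable wrapper: glue() applies fn unless a previous step
// already failed (produced null), in which case the failure is passed on.
const maybe = value => ({
  glue: fn => (value === null ? maybe(null) : maybe(fn(value))),
  get: () => value,
});

// halve even numbers; fail (null) on odd ones
const half = x => (x % 2 === 0 ? x / 2 : null);

console.log(maybe(8).glue(half).glue(half).glue(half).get()); // 1
console.log(maybe(6).glue(half).glue(half).glue(half).get()); // null
```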

But is that all? Indeed not. A monad is a more subtle thing than just a chainable operation, and usually only Haskellers are interested in whether something is or isn’t a monad.

This is because only Haskellers (and F-sharpers and a few others) have special syntax for monadic operations, which allows an imperative style of coding in a functional language. This is hugely important in these languages: when you are dealing in chains of such operations you don’t want to have to write them in a functional style. A side-effect is that you can make vastly different types of chains with the same syntax—each different monad represents a different way to glue together operations, and so gives a different meaning to the same syntax.

Imagine if you could declare that a function in your golang program runs in the ‘exception’ monad, and be saved from writing *if err != nil { return nil, err }* in between every useful line of code! The reason imperative languages don’t have monads is that there is no pressing need for them, but that doesn’t mean they wouldn’t benefit from special (or even normal) syntax for expressing chains of monadic operations. Other benefits of monads are tiny compared to this simple but powerful one: having syntax to express your chains of operations succinctly.
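To sketch what that buys you, here is a hand-rolled ‘exception monad’ in JavaScript (names and representation invented for illustration): each step returns either a success or a failure, and *bind* threads the error check that would otherwise be repeated between every line.

```javascript
// Each operation returns { ok: value } on success or { err: message }.
const ok = value => ({ ok: value });
const fail = message => ({ err: message });

// bind() is the glue: skip the next operation if we have already failed.
const bind = (result, fn) => (result.err !== undefined ? result : fn(result.ok));

const parse = s => (isNaN(Number(s)) ? fail("not a number") : ok(Number(s)));
const recip = n => (n === 0 ? fail("division by zero") : ok(1 / n));

console.log(bind(parse("4"), recip));    // { ok: 0.25 }
console.log(bind(parse("zero"), recip)); // { err: 'not a number' }
```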

But, you might ask, who needs special syntax for expressing a chain of operations? Chains of operations are trivially representable in hundreds of ways. True, but some chains are not so simple. Consider making a cheese and bacon toasted sandwich when your sandwich toaster isn’t working (or doesn’t exist); you might try:

- Slice loaf
- Toast slices of bread
- Fry bacon
- Slice cheese
- Assemble sandwich
- Microwave sandwich

Or, in Haskell’s monadic do notation:

```haskell
do breads <- replicateM 2 (slice loaf)
   [toast1, toast2] <- mapM toast breads
   friedBacon <- fry bacon
   cheeseSlices <- replicateM 4 (slice cheese)
   sandwich <- assemble ([toast1] ++ cheeseSlices ++ [friedBacon, toast2])
   microwave sandwich
```

You can see how being able to name each output and feed them in as inputs in whichever later stage is appropriate is nice. Trying to do that with lambdas or the like would be painful, even for a short function like this.
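For comparison, here is the same chain written with nested Promise lambdas in JavaScript; the kitchen functions are trivial made-up stubs, just enough to make the shape runnable.

```javascript
// Made-up stand-ins for the kitchen steps; each just resolves to a label.
const sliceLoaf   = () => Promise.resolve(["bread1", "bread2"]);
const toastAll    = bs => Promise.resolve(bs.map(b => "toasted " + b));
const fry         = x  => Promise.resolve("fried " + x);
const sliceCheese = () => Promise.resolve(["cheese1", "cheese2"]);
const assemble    = parts => Promise.resolve(parts);

// To keep the toast in scope for the final assemble, each later stage
// has to nest inside the closures of all the earlier ones: the pyramid
// that do notation flattens away.
sliceLoaf().then(breads =>
  toastAll(breads).then(toasts =>
    fry("bacon").then(friedBacon =>
      sliceCheese().then(cheeseSlices =>
        assemble([toasts[0], ...cheeseSlices, friedBacon, toasts[1]])
          .then(console.log)))));
```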

And look! Here we have a ‘fry’ function and a ‘toast’ function, both of which take time! Like promises! Could this be the way in which monads are a useful abstraction for promises? Yes! Yes! Let us express our promises in monadic do notation and ride off into the sunset on our monadic horses!

But, but, but! The devil is in the details. Yes, this monadic do notation does express what we want really nicely, but the fact that behind it lies a monad ties our hands somewhat. The gluing operation of a promises monad would have to look like this:

```javascript
function bind(promise, operation) {
  // wait for this promise's result before the next operation can begin
  return operation(promise.resolve());
}
```

And that would mean that each promise gets resolved before the next operation can start; you have to wait for the toaster to finish before you can start frying the bacon!
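A quick timer sketch of the difference (the delays are invented stand-ins for toasting and frying):

```javascript
// delay() stands in for a slow kitchen step.
const delay = (ms, label) =>
  new Promise(resolve => setTimeout(() => resolve(label), ms));

// Bound sequentially: frying cannot start until toasting has finished,
// so the total time is roughly the sum of the two delays.
const sequential = delay(50, "toast").then(t =>
  delay(50, "bacon").then(b => [t, b]));

// Started independently: both timers run at once and we only wait for
// the slower of the two.
const toastP = delay(50, "toast");
const baconP = delay(50, "bacon");
const parallel = Promise.all([toastP, baconP]);

sequential.then(r => console.log("sequential:", r));
parallel.then(r => console.log("parallel:", r));
```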

Surely there is some way round this? The problem is that the monad abstraction does not allow for the idea of an input that you aren’t using yet; the toast has to be there for the fry operation even though it won’t be needed until the assemble function, yes, even (as I understand it) in a lazy language like Haskell! If you want this, you have to go for a richer abstraction like arrows.

So, while monadic do notation does express promises very well, monads do not allow us to use what is important about promises as fully as we want. This is the intuitive reason why, in this case, what seems like a minor violation of the monad laws completely ruins the point of the monad.
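For the curious, the law usually cited is left identity: wrapping a value and then binding a function should be the same as just applying the function. Treating *Promise.resolve* as the wrap and *then* as the bind (the usual framing, not anything official), here is a minimal sketch of the violation:

```javascript
// Left identity: resolve(x).then(f) should behave exactly like f(x).
// Take x to be itself a promise: then() insists on unwrapping it first,
// so f can never be handed the promise as a plain value.
const q = Promise.resolve(42);
const f = x => Promise.resolve(typeof x);

Promise.resolve(q).then(f).then(console.log); // "number": q was unwrapped
f(q).then(console.log);                       // "object": f saw the promise itself
```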

Now I’ve written this, I cannot help wondering if just adding a parallel operation is all that’s needed. Certainly, our example is helped:

```haskell
do [ [toast1, toast2], friedBacon, cheeseSlices ] <- inParallel
     [ do breads <- replicateM 2 (slice loaf)
          mapM toast breads
     , fry bacon
     , replicateM 4 (slice cheese)
     ]
   sandwich <- assemble ([toast1] ++ cheeseSlices ++ [friedBacon, toast2])
   microwave sandwich
```

Not bad—but not great. This is essentially Haskell’s ‘MonadParallel’ abstraction (although if I used that, my ‘inParallel’ function would have the counterintuitive name ‘sequence’!). It would be better with special syntax that let us put the outputs of the parallel operations alongside the functions that produce them: here we have to match up *toast1*/*toast2*, *friedBacon* and *cheeseSlices* to the three operations that produce them by counting, unlike in the non-parallel case where they line up. It also feels like more exciting networks, where different parallel streams want to swap information, could not be expressed this way. And yes, you might still have to argue about whether adding an extra *resolve()* counts as an unacceptable violation of equivalence and hence of the monad laws.
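In JavaScript terms, *Promise.all* plays much the role of the hypothetical ‘inParallel’ above, with the same by-position matching of outputs to operations (the kitchen stubs below are made up):

```javascript
// Made-up kitchen stubs, just enough to run the shape.
const sliceLoaf   = () => Promise.resolve(["bread1", "bread2"]);
const toastAll    = bs => Promise.resolve(bs.map(b => "toasted " + b));
const fry         = x  => Promise.resolve("fried " + x);
const sliceCheese = () => Promise.resolve(["cheese1", "cheese2"]);
const assemble    = parts => Promise.resolve(parts);

// Promise.all takes the list of operations and yields the list of their
// outputs; as in the Haskell sketch, you match them up by position.
async function makeSandwich() {
  const [toasts, friedBacon, cheeseSlices] = await Promise.all([
    sliceLoaf().then(toastAll),
    fry("bacon"),
    sliceCheese(),
  ]);
  return assemble([toasts[0], ...cheeseSlices, friedBacon, toasts[1]]);
}

makeSandwich().then(console.log);
```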

So, in conclusion—no, Promises are not in any useful way (normal, non-parallel) monads. But check out arrows!