I’m sure everyone is sick of the monad story already, but I think I found yet another way to approach it that might help understand the big picture.
Perfect compositionality is the holy grail of software engineering, and enforcing referential transparency at every level of abstraction is a huge step towards that goal. This is all fine as long as we only have to deal with pure transformations, but what about side effects? The essence of the Haskell solution to this problem is to step back a bit, encapsulate effects in some abstraction, and combine these effectful entities as if they were ordinary values. This is basically a form of metaprogramming, since after assembling a big program from small ones using combinators, we need to interpret the resulting structure to make it come to life.
One can come up with all kinds of silly analogies, but nothing beats simply looking at those encapsulated computations as generic black boxes. In order to build a big box from several small ones, all we have to do is connect them. The Haskell base libraries offer us a bunch of type classes that provide various sorts of plumbing combinators to describe the composition of boxes in a disciplined way. To date, the Typeclassopedia is probably the most accessible resource for explaining them to the uninitiated.
While preparing notes for my FP class, I realised that I have never seen the relationships between these classes explained on a single diagram, so I prepared one:
The arrows always point towards the specialisation. The top of the stack is Functor, which only allows us to combine a single effectful entity with pure functions. If we add parallel composition of effects (i.e. the ability to execute n boxes in parallel, collect their outputs and feed them to a single function of n arguments), we get Applicative. Orthogonally, if our boxes also have an input port we can attach to, we can define sequential composition, which is the responsibility of the Category class.
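To make the three modes of combination concrete, here is a small sketch using familiar instances (Maybe as the effectful box, plain functions as the category); the names one, two and three are just for illustration:

```haskell
import Control.Category ((>>>))

-- Functor: a single box, post-processed by a pure function
one :: Maybe Int
one = fmap (+ 1) (Just 41)

-- Applicative: two boxes run "in parallel", their outputs fed to (+)
two :: Maybe Int
two = (+) <$> Just 20 <*> Just 22

-- Category: sequential composition through the input port
three :: Int -> Int
three = (+ 1) >>> (* 2)
```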
The Arrow class is known to specialise both Applicative and Category, but is there any other common specialisation that doesn’t fit in Arrow? I’ve never seen the answer spelled out, so I’ll say it here: no, Arrow is strictly the intersection of Applicative and Category. If c is a category and c a is an applicative functor for any a, then it is easy to show that c is also an arrow. All we have to do is provide a minimal definition of the Arrow interface, which is the arr combinator plus any one of the other four (this is technically not true due to the way the default implementations are defined in the Arrow class, but it’s always possible to express them in terms of each other). As it turns out, the input-splitting combinator &&& is the easiest to express given the applicative interface, so we can use the following definition:
arr f = fmap f id
(&&&) = liftA2 (,)
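For the record, here is a self-contained sketch showing that the two definitions above typecheck in general (the names arrG and fanoutG are mine, to avoid clashing with the real class methods):

```haskell
import Control.Applicative (liftA2)
import Control.Category (Category, id)
import Prelude hiding (id)

-- arr f = fmap f id: start from the identity arrow and map over its output
arrG :: (Category c, Functor (c a)) => (a -> b) -> c a b
arrG f = fmap f id

-- (&&&) = liftA2 (,): run two boxes on the same input, pair the outputs
fanoutG :: Applicative (c a) => c a b -> c a d -> c a (b, d)
fanoutG = liftA2 (,)
```

Specialised to the function arrow (->), arrG is just the identity on functions and fanoutG splits the input between the two constituents.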
With Arrow, we can arrange our effectful computations in a static acyclic network. The next step on the ladder of expressiveness is ArrowChoice, which allows us to branch according to the input of the box, but the branches still have a static structure. This restriction is lifted by the Monad (or the equivalent ArrowApply) interface, which grants us the ability to manipulate arrows as first class entities during runtime, i.e. within the interpretation phase.
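A tiny illustration of the extra power, using the list monad as the effect: the shape of the rest of the computation depends on a value that only exists at run time, something no static ArrowChoice network can express:

```haskell
import Control.Monad (replicateM)

-- The first result decides how many further draws are performed:
-- the structure of the continuation is computed, not laid out in advance.
dynamic :: [[Int]]
dynamic = [1, 2] >>= \n -> replicateM n [0, 1]
```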
Nowadays we can hear more and more voices saying that monads receive too much attention compared to all the other patterns mentioned above. I understand the sentiment, and even sympathise with it, but I would still say monads are special. When we program in mainstream imperative languages, we rarely have the opportunity to venture beyond the limits of ArrowChoice. Haskell can be an excellent imperative language precisely because monads allow it to eliminate the boundary between compilation and execution, hence be strictly more expressive than our usual toolset. The IO monad is dark magic, no doubt. Nevertheless, the aforementioned voices are absolutely right in saying that we shouldn’t use the monadic interface when we don’t need its expressive power. Restricting ourselves to the more general interface makes our code more reusable and typically also easier to implement efficiently (as an extreme example, infinite streams have a much more efficient direct Applicative instance than the default one derived from the monad operations).
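For the curious, here is a minimal sketch of the stream example: the direct instance pairs up elements positionally (much like ZipList), with no monadic detour; the names Stream, takeS and nats are mine:

```haskell
-- An infinite stream with a direct, "zippy" Applicative instance.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Applicative Stream where
  pure x = let s = Cons x s in s                 -- constant stream
  Cons f fs <*> Cons x xs = Cons (f x) (fs <*> xs)

takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons x xs) = x : takeS (n - 1) xs

nats :: Stream Integer
nats = Cons 0 (fmap (+ 1) nats)
```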
Back to the drawing, we can see that there are two orthogonal traits to consider besides the main hierarchy. When building a graph of effects, we often want feedback loops, i.e. cycles in the graph. Cycles only make sense when we have sequential composition and the ability to branch, so there’s no point in extending anything more general than arrows with the ability to form loops. They are provided by the ArrowLoop class and also the MonadFix class.
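A tiny MonadFix sketch, using Maybe as the effect: the output is fed back as the input of the same computation, so the effect runs once while the pure value is tied into a loop (the name ones is mine):

```haskell
import Control.Monad.Fix (mfix)

-- The recursively defined result is an infinite list of ones;
-- the Maybe effect itself is not repeated.
ones :: Maybe [Int]
ones = mfix (\xs -> Just (1 : xs))
```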
The other aspect is the ability to combine the internals of two boxes instead of just enclosing them both in a bigger box. If the structure of the side effect is a monoid, then we have an extra means to merge boxes. Such a combination only makes sense in the presence of parallel composition, so it can only be defined for applicative functors. Depending on which step of the ladder we’re standing on, the class is called Alternative, ArrowPlus or MonadPlus, but it’s really the same thing everywhere.
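A quick illustration at the Applicative level (the names are mine): empty is the unit of the monoid and &lt;|&gt; the binary operation, which for Maybe means "first success" and for lists means concatenating all the results:

```haskell
import Control.Applicative (Alternative (..))

firstSuccess :: Maybe Int
firstSuccess = empty <|> Nothing <|> Just 1 <|> Just 2

bothBranches :: [Int]
bothBranches = [1, 2] <|> [3]
```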
The last two paragraphs suggest that the type class hierarchy could use a lot more refactoring than the oft-mentioned issue that Applicative is not a superclass of Monad. As it turns out, the Arrow class is completely superfluous, and we should probably have a single unified class for monoidal effects as well as one for feedback loops. I’m not sure where the branching added by ArrowChoice sits; it may well apply to any Category, since it’s a fourth kind of combination besides parallel, sequential, and monoidal: separation in time.
And we haven’t even touched on the question of commutativity...
Is it possible to provide an Applicative instance for every Arrow?
MonadPlus has laws relating it to Monad which Alternative does not have, so the two are not exactly equivalent. Unfortunately, nobody can decide what those laws should be, and there is a proposal to split MonadPlus into three different classes. I wonder where ArrowPlus falls regarding laws.
http://www.haskell.org/haskellwiki/MonadPlus_reform_proposal
@Sjoerd:
I managed to come up with this. I haven't verified any laws, but the types are correct.
fmap f a = arr f . a
pure = arr . const
f <*> g = (arr.uncurry) ($) <<< f &&& g
WrappedArrow can be used to provide an Applicative interface to any arrow, and it contains basically the same code as the solution in Jake’s comment.
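For completeness, here is how WrappedArrow can be used at the simplest possible arrow, plain functions (the name pairUp is mine):

```haskell
import Control.Applicative (WrappedArrow (..), liftA2)

-- WrapArrow lifts any arrow into an Applicative; unwrapArrow takes it back.
pairUp :: Int -> (Int, Int)
pairUp = unwrapArrow (liftA2 (,) (WrapArrow (+ 1)) (WrapArrow (* 2)))
```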
As for the MonadPlus laws, now I remember seeing them long ago, but forgot the specifics. Thanks for the pointer!
It might be interesting to do an implementation of the arrow syntax with the applicative and category classes. It might even be faster too, as it doesn't require all those tuples.
Tuple fusion for arrows? That would be nice indeed, because it could be used for any arrow, not just CCAs.
ReplyDeleteSince Arrow is Applicative applied to the target type of category arrows, I wonder if ArrowChoice is Applicative applied to the source type of category arrows.
No, because Applicative doesn’t allow us to make a run-time choice between effects, while that’s exactly the contribution of ArrowChoice. Also, the two being duals of each other, the former acts on products, while the latter acts on sums.
ReplyDeleteTo be clear, I meant something like:
cat (a -> b) x -> cat a x -> cat b x
Yes, I understood. Note that a combinator with that signature would have to be able to undo function application – how else could it supply inputs to the two constituents? ArrowChoice is a different story, since it’s about multiplexing.
These are some fantastic insights. Would it be OK if I include some of this material in an upcoming second edition of the Typeclassopedia? Or would you consider writing something about it to contribute yourself?
It’s probably the best solution if you simply reuse ideas worth reusing. This post was not intended to be a tutorial, after all, and it skims over some details, e.g. the possible further division of the classes or the issue with the alternative MonadPlus laws mentioned above. Also, equating applicative functors with parallel composition requires quite a bit of handwaving (since effects from the two boxes are not necessarily independent; this is closely related to the issue of commutativity), even if I think that’s the essence of the Applicative class.
By the way, I do plan to write a follow-up post to clarify the applicative-parallel-commutative part of the story.