2010/04/10

Programming with effects – the story so far

I’m sure everyone is sick of the monad story already, but I think I found yet another way to approach it that might help understand the big picture.

Perfect compositionality is the holy grail of software engineering, and enforcing referential transparency at every level of abstraction is a huge step towards that goal. This is all fine as long as we only have to deal with pure transformations, but what about side effects? The essence of the Haskell solution to this problem is to step back a bit, encapsulate effects in some abstraction, and combine these effectful entities as if they were ordinary values. This is basically a form of metaprogramming, since after assembling a big program from small ones using combinators, we need to interpret the resulting structure to make it come to life.

One can come up with all kinds of silly analogies, but nothing beats simply looking at those encapsulated computations as generic black boxes. In order to build a big box from several small ones, all we have to do is connect them. The Haskell base libraries offer us a bunch of type classes that provide various sorts of plumbing combinators to describe the composition of boxes in a disciplined way. To date, the Typeclassopedia is probably the most accessible resource explaining them to the uninitiated.

While preparing notes for my FP class, I realised that I had never seen the relationships between these classes explained on a single diagram, so I prepared one:

[Diagram: the hierarchy of effect type classes, with arrows pointing towards the specialisations]

The arrows always point towards the specialisation. The top of the stack is Functor, which only allows us to combine a single effectful entity with pure functions. If we add parallel composition of effects (i.e. the ability to execute n boxes in parallel, collect their outputs and feed them to a single function of n arguments), we get Applicative. Orthogonally, if our boxes also have an input port we can attach to, we can define sequential composition, which is the responsibility of the Category class.
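
To make this concrete, here is a minimal sketch using plain functions and Maybe as the boxes; any instances of the respective classes would do:

import Control.Applicative (liftA2)

-- Functor: a single box whose output is post-processed by a pure function
onePlus :: Maybe Int
onePlus = fmap (+1) (Just 41)              -- Just 42

-- Applicative: two boxes run in parallel, their outputs merged by a pure function
both :: Maybe Int
both = liftA2 (+) (Just 20) (Just 22)      -- Just 42

-- Category: sequential composition through input ports; for plain functions
-- (.) coincides with the composition in Control.Category
pipeline :: Int -> Int
pipeline = (*2) . (+1)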

The Arrow class is known to specialise both Applicative and Category, but is there any other common specialisation that doesn’t fit in Arrow (yeah, I originally had the question the other way around: does Arrow have any instances that cannot fit in either of those parents)? I’ve never seen the answer spelled out, so I’ll say it here: no, Arrow is strictly the intersection of Applicative and Category. If c is a category and c a is an applicative functor for every a, then it is easy to show that c is also an arrow. All we have to do is provide a minimal definition of the Arrow interface, which is the arr combinator plus any one of the other four (this is technically not true due to the way the default implementations are defined in the Arrow class, but it’s always possible to express them in terms of each other). As it turns out, the input-splitting combinator &&& is the easiest to express given the applicative interface, so we can use the following definition:

arr f = fmap f id      -- lift a pure function by mapping it over the identity arrow
(&&&) = liftA2 (,)     -- run both arrows on the same input and pair their results
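
To see that these two really give us the whole interface, here is a sketch of how first follows from them; the primed names are mine, since the real class methods are already taken:

import Prelude hiding (id, (.))
import Control.Applicative
import Control.Category

arr' :: (Category c, Functor (c a)) => (a -> b) -> c a b
arr' f = fmap f id

fanout :: Applicative (c a) => c a b -> c a b' -> c a (b, b')
fanout = liftA2 (,)

-- first routes the second component past the box unchanged
first' :: (Category c, Applicative (c (a, d))) => c a b -> c (a, d) (b, d)
first' f = fanout (f . arr' fst) (arr' snd)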


With Arrow, we can arrange our effectful computations in a static acyclic network. The next step on the ladder of expressiveness is ArrowChoice, which allows us to branch according to the input of the box, but the branches still have a static structure. This restriction is lifted by the Monad (or the equivalent ArrowApply) interface, which grants us the ability to manipulate arrows as first class entities during runtime, i.e. within the interpretation phase.
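
The difference is easy to see in types; a small sketch contrasting the static branching of ArrowChoice with the run-time power of Monad (the names select and collapse are mine):

import Control.Arrow
import Control.Monad (join)

-- ArrowChoice: both alternatives are assembled in advance; the input
-- merely selects which pre-built box is executed
select :: ArrowChoice c => c a r -> c b r -> c (Either a b) r
select = (|||)

-- Monad: the computation to run next is itself produced at run time,
-- which is exactly what join (and ArrowApply's app) gives us
collapse :: Monad m => m (m a) -> m a
collapse = join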

Nowadays we can hear more and more voices saying that monads receive too much attention compared to all the other patterns mentioned above. I understand the sentiment, and even sympathise with it, but I would still say monads are special. When we program in mainstream imperative languages, we rarely have the opportunity to venture beyond the limits of ArrowChoice. Haskell can be an excellent imperative language precisely because monads allow it to eliminate the boundary between compilation and execution, and hence to be strictly more expressive than our usual toolset. The IO monad is dark magic, no doubt. Nevertheless, the aforementioned voices are absolutely right in saying that we shouldn’t use the monadic interface when we don’t need its expressive power. Restricting ourselves to the more general interface makes our code more reusable and typically also easier to implement efficiently (as an extreme example, infinite streams have a much more efficient direct Applicative instance than the default one derived from the monad operations).
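
To illustrate the stream example, here is a sketch of the direct instance: it zips the two streams pointwise with constant work per element, while a version derived from the (diagonal) monad instance via ap would do linear work per element walking down the diagonal:

data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

-- the direct instance combines the two streams pointwise
instance Applicative Stream where
  pure x = Cons x (pure x)
  Cons f fs <*> Cons x xs = Cons (f x) (fs <*> xs)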

Back to the drawing, we can see that there are two orthogonal traits to consider besides the main hierarchy. When building a graph of effects, we often want feedback loops, i.e. cycles in the graph. Cycles only make sense when we have sequential composition and the ability to branch, so there’s no point in extending anything more general than arrows with the ability to form loops. They are provided by the ArrowLoop class and also the MonadFix class.
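
For reference, the two classes in the base libraries look like this, with the fed-back component visible in both:

-- from Control.Arrow: the d component is fed back around the loop
class Arrow a => ArrowLoop a where
  loop :: a (b, d) (c, d) -> a b c

-- from Control.Monad.Fix: the result is fed back as the argument
class Monad m => MonadFix m where
  mfix :: (a -> m a) -> m a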

The other aspect is the ability to combine the internals of two boxes instead of just enclosing them both in a bigger box. If the structure of the side effect is a monoid, then we have an extra means to merge boxes. Such a combination only makes sense in the presence of parallel composition, so it can only be defined for applicative functors. Depending on which step of the ladder we’re standing on, the class is called Alternative, ArrowPlus or MonadPlus, but it’s really the same thing everywhere.
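
Indeed, lining up the class declarations from the base libraries shows the same monoid at three steps of the ladder:

class Applicative f => Alternative f where
  empty :: f a                      -- the identity of the monoid
  (<|>) :: f a -> f a -> f a        -- merging two boxes

class ArrowZero a => ArrowPlus a where
  (<+>) :: a b c -> a b c -> a b c  -- the identity, zeroArrow, comes from ArrowZero

class Monad m => MonadPlus m where
  mzero :: m a
  mplus :: m a -> m a -> m a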

The last two paragraphs suggest that the type class hierarchy could use a lot more refactoring than just fixing the oft-mentioned issue that Applicative is not a superclass of Monad. As it turns out, the Arrow class is completely superfluous, and we should probably have a single unified class for monoidal effects as well as one for feedback loops. I’m not sure where the branching added by ArrowChoice sits; it may well apply to any Category, since it’s a fourth kind of combination besides parallel, sequential, and monoidal: separation in time.

And we haven’t even touched on the question of commutativity...

2010/03/26

Moving to GitHub

I just moved some of my Haskell code over to GitHub. For the time being, you’ll find everything related to Elerea, DoW and Hemkay there. More to come later.

2010/02/14

Behind the dungeons

I finally managed to polish Dungeons of Wor to the degree that it felt worth releasing. The main motivation for creating it was to get acquainted with functional reactive programming (FRP) from the programmer’s side as opposed to the library implementor’s side. If I had to pass a verdict right away, I would definitely say it was a pleasant experience overall, despite the fact that I had to figure out everything from zero.

The core idea of FRP is to work with time-varying quantities -- both continuous functions and discrete events -- as first-class entities, which opens up a whole new perspective on interactive systems. There are essentially two main schools today: one focuses on the quantities themselves, the other on the transformers that consume and produce such quantities. The former is sometimes called classic FRP (CFRP), while the latter is arrow-based FRP (AFRP).
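
In rough, conceptual types (illustrative only, not any particular library’s representation):

type Time = Double

-- CFRP: the time-varying quantities themselves are first class
type Behavior a = Time -> a       -- a value continuously varying in time
type Event a = [(Time, a)]        -- discrete occurrences tagged with times

-- AFRP: the transformers between quantities are first class (cf. Yampa's SF)
type SF a b = Behavior a -> Behavior b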

Since I was mainly interested in putting this paradigm into practice, I was disappointed to learn that most of the libraries available were not ready for actual use. Yampa was the only one that didn’t break down immediately when I tried doing something with it, but I found the arrow style and the pre-determined switching mechanisms too constraining, and I wanted an applicative style library to play with. For this reason I created Elerea from scratch; it started out as a quick hack, but I managed to iron out a lot of its creases over the past few months, and it reached a state where it was ready for the challenge. The details can be found in the paper I presented at WFLP 2010.

Elerea provides an obsolete interface and three experimental ones, but the latter group is actually much more dependable and well-behaved in general. I used the Simple version to program the game, because it has clean semantics. In a way, it goes against FRP, since it only provides an interface to work with discrete streams as opposed to continuous behaviours; however, the more complex problem of managing the lifetimes of entities is orthogonal to that choice. Since I was mostly concerned with the latter problem, I chose a task that can be easily expressed as a succession of states: an old-school arcade game.

In essence, the library provides streams (called signals) that are not only isomorphic to functions over natural numbers, but also support the monad interface just like functions do (as in the reader monad). The only construct offered beyond the monadic combinators is a memory element for introducing state. The bind operation corresponds to sampling, and it allows us to describe dynamic data-flow networks without the need for additional constructs. This is precisely the difference between monads and arrows: arrows can be used to describe the static parts of the system, but we need something more when we have a dynamic system (in the case of Yampa this role is played by the various switching combinators).
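
As a toy model of this idea (my sketch, not Elerea’s actual implementation), we can represent a signal as a function over sample indices; bind samples the inner signal at the same index, exactly as in the reader monad:

newtype Sig a = Sig { at :: Int -> a }

instance Functor Sig where
  fmap f (Sig s) = Sig (f . s)

instance Applicative Sig where
  pure = Sig . const
  Sig f <*> Sig x = Sig (\t -> f t (x t))

instance Monad Sig where
  Sig x >>= f = Sig (\t -> at (f (x t)) t)  -- bind is sampling

-- the memory element: emit an initial value, then the previous sample
delaySig :: a -> Sig a -> Sig a
delaySig x0 (Sig s) = Sig (\t -> if t <= 0 then x0 else s (t - 1))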

Monads, those cute fluffy things

When it comes to practice, the monadic interface is both a blessing and a curse. It is really powerful, and it allowed me to express all kinds of dynamic structures (especially point-wise behaviours and dynamic collections) with relative ease. It was really useful not to have to rely on a pre-determined set of switching constructs, because I often found that I wanted switchers that were slightly different in individual cases, and I could easily implement them as needed -- and later I managed to unify those instances.

However, it can get confusing at times, because contrary to my expectations, I often needed to create signals with several layers (i.e. values of type Signal (SignalGen [Signal (SignalGen a)]) and worse), and I had to write essentially boilerplate code to break down these structures or convert one into another.

For instance, the combinator I used to maintain dynamic collections is the following (you can find its source in the Utils module of the game code):

collection :: Signal [Signal a] -> Signal (a -> Bool) -> SignalGen (Signal [a])


The first argument is a source of signals, which contains a list of signals to add to the collection at any instant. The second argument is a keep-alive condition, which is also time-varying. Whenever the current function carried by this signal yields False on any of the signals currently maintained in the collection, the signal in question is thrown away. The collection signal hides the signals living inside it, and only exposes their output.

According to the rules of the game, each new enemy is spawned upon the death of a previous one. There is a function called spawnEnemy that takes the current state of an enemy and turns it into a list of new enemies. This function is simply applied to the list of current enemy states using concatMap within the spawnEnemies function, whose type is therefore the following:

spawnEnemies :: [Actor] -> [Signal (SignalGen Actor)]


We get a two-layered structure inside the list, because the generators depend on the current position of the player (to avoid spawning creatures too close to it). We have to take the resulting list and turn it into something accepted by collection. There is only one way to turn a signal generator into a signal (this is a library primitive that’s essentially a monadic join across two types of stream):

generator :: Signal (SignalGen a) -> SignalGen (Signal a)


Now the line that creates the source signal for the enemy collection is the following:

spawnedEnemies <- generator (sequence <$> (sequence . spawnEnemies =<< enemies'))


The enemies' signal carries the previous state of the enemies. First we bind this signal to sequence . spawnEnemies, where sequence has the following type:

sequence (inner) :: [Signal (SignalGen Actor)] -> Signal [SignalGen Actor]


The outer call to sequence pushes the list layer even further inside the structure:

fmap sequence (outer) :: Signal [SignalGen Actor] -> Signal (SignalGen [Actor])


This will give us something we can apply generator to, and get exactly the structure we want.
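
Putting it all together, the types of the intermediate stages line up as follows:

-- enemies'                                            :: Signal [Actor]
-- sequence . spawnEnemies =<< enemies'                :: Signal [SignalGen Actor]
-- sequence <$> (sequence . spawnEnemies =<< enemies') :: Signal (SignalGen [Actor])
-- generator (sequence <$> ...)                        :: SignalGen (Signal [Actor])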

Now this is quite easy to follow in hindsight. However, while the game was being developed and dependencies were added and removed, the types kept changing, and I had to follow the changes. I’d say that was more trouble than it should have been. It would have helped a lot if I could have easily displayed the inferred types of non-top-level expressions.

At the same time, all this power made playing with various approaches to dynamism possible, and I don’t know how I would have solved all the little problems without it. In general, I’d say it provides a great platform to experiment with abstractions (like the switcher and collection combinators) that should make FRP in the large possible in the long run.

Issues with delays

When we create a dynamic data-flow network, in theory we have to be careful to create only well-formed loops, i.e. loops that contain at least one delay element to break them. In practice, however, I never found this to be a real problem. While there is no protection against instantaneous loops in the library, there is a very simple convention to avoid them: every forward reference must point to a delayed signal. Even though this should be trivial, I’ve never seen a similar rule spelt out like this. I got the inspiration from Daniel Bünzli’s React library, which doesn’t allow undelayed forward references, because its fixed point combinators inevitably add delays. In the case of Elerea these references are made possible by the MonadFix instance of signal generators, which cannot be forced to contain a delay (that would violate the laws of mfix anyway), but in exchange it is more powerful, since any structure can be fed back, not just a single signal. And there is even syntactic sugar (mdo and recursive do expressions) to make its usage trivial. But if someone feels that the compiler should be able to check loops for us anyway, Neil Sculthorpe’s latest paper shows a way to achieve that.
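
For instance, a well-formed feedback loop following this convention might look like the sketch below, assuming Elerea’s Simple interface is in scope with delay :: a -> Signal a -> SignalGen (Signal a):

{-# LANGUAGE RecursiveDo #-}

counter :: SignalGen (Signal Int)
counter = mdo
  c <- delay 0 (fmap (+1) c)  -- the forward reference to c points through a delay
  return c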

Unfortunately, there is a much more fundamental problem: the possibility of off-by-one errors. When we have events that trigger each other in a circular dependency, we have to be careful about when to take the current value of a signal as opposed to the previous one. Note that this has nothing to do with whether our system is conceptually continuous-time or not (just swap ‘current’ and ‘previous’ for ‘post-switch’ and ‘pre-switch’ and you still have the same issue); it seems to be part of the essential complexity of FRP applications in general. This problem is highly application specific, and I’ve yet to see it addressed in any way, be it just a set of guidelines or something more rigorous, so pointers are kindly welcome.

Dealing with code evolution

Dungeons of Wor being a very small application, I could only gather some early impressions. It was certainly easy not to fall into the trap of maintaining a world state, as a lot of the local state could be hidden inside individual signals. Both Elerea and Yampa seem to do a good job when it comes to avoiding monolithic designs. When it comes to moving around all the information necessary to perform some calculation, using Elerea is not much different from functional programming in general. One can even nest definitions the same way as usual, using local declarations.

If possible, it helps a lot to express state changes in terms of hints calculated in other parts of the program and merged at a single point. For instance, we don’t need to pass the score to the level logic; instead, we create events that cause the score to be changed (increased or decreased depending on whether the player killed something or fired a shot). This makes it easier to introduce new entities.
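
As a sketch of this pattern (assuming Elerea’s transfer accumulator with type a -> (t -> a -> a) -> Signal t -> SignalGen (Signal a)): subsystems emit score deltas, and a single node merges and accumulates them:

-- deltas carries the merged score changes coming from the rest of the program
score :: Signal Int -> SignalGen (Signal Int)
score deltas = transfer 0 (+) deltas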

Elerea limitations

There are a few sore points in Elerea. First, the current implementation depends heavily on the garbage collector to remove unreferenced signals. Unfortunately, a signal doesn’t just occupy memory: it keeps actively calculating and refreshing its output until it is collected. This is quite spectacular when we run the game in ghci: we can easily tell when garbage collection happens, because the monsters suddenly pick up speed... I wonder if this problem could be solved while keeping the library a shallow embedding on top of Haskell. Yampa doesn’t have this problem, since it only advances signal functions that are attached to the network. Again, this is a consequence of the extra power monads give us: it’s much more difficult to track our signals.

Also, Elerea doesn’t make it possible to stop a stateful signal and restart it later, because we can’t even tell which parts of the network are supposed to be stopped. This is no problem in Yampa, since a signal function encapsulates state in a pure way, and it can be freely passed around. The funny consequence is that it’s pretty much impossible to implement pause functionality without introducing a new primitive in the library. But I’m not yet sure what this primitive should do, especially how it could work without violating referential transparency, so the limitation stays. The root of the problem is again the monadic interface, since it hides the signal connections and only exposes the output of the subnetworks. Consequently, we have no means to tell just a part of the network to stop and resume later.

The third problem I ran into is another simple thing: one-time switching. If I want a signal that behaves one way until an event fires and another way for the rest of its lifetime, I cannot express it in a way that allows the event causing the switch to be garbage collected. The library gives a solution to the problem of start times, but there’s no way to specify the end of life for a signal, so it seems that I’ll have to add a switching primitive to the library. I still don’t know what the signature of this primitive should be, but it’s most likely going to be very similar to Yampa’s switch.

All these problems are somewhat related, since they are all about stopping and possibly resuming parts of the network. I wonder if there is an elegant way to unify them.

Last words, for now at least

This post focused very much on the difficulties, but my conclusion is that FRP has great potential. Most of the coding was straightforward, it was easy to change the program when necessary, it was also easy to achieve a nice degree of locality, and even though I’m still in the process of figuring out this whole thing, it didn’t take too long to get a program of this size working. We need more experiments, and I can only encourage everyone to play with FRP libraries and create larger applications.