2013/09/30

A few reasons why functional reactive programming matters


Inspired by Stephen Blackheath’s recent post on his experience with Sodium, I’d like to add a few more thoughts about the role of FRP in application development. It’s especially interesting to think about what benefits it brings to the table for someone who’s mostly used to working in mainstream languages.

Note that the following discussion is going to be mostly abstract. My goal here is to convey a general intuition about FRP as a paradigm – at least my interpretation of it – rather than to talk about specific systems.

As it turns out, the most important advantages I found while working with Elerea (which happens to be very similar to Sodium in spirit) are independent of the way we formulate FRP, for instance whether the chosen approach is based on arrows or monads. For the sake of this discussion, we can work with a fuzzy definition of FRP: a system for composing time-varying quantities with plain functions.

There are two primary kinds of time-varying quantities, often referred to as events and behaviours. In FRP parlance, an ‘event’ typically refers to a series of occurrences, and that’s how I’m going to use the term as well. A ‘behaviour’ is in essence a function of time (either discrete or continuous), and we can think of it as the whole lifetime of a variable in an imperative language. As a simple example, all the clicks of a mouse button form an event, while the mouse position is typically best modelled as a behaviour. Programs are built as complex events formed from simple ones through composition, using functions like the following (examples taken from the Sodium API):

merge :: Event a → Event a → Event a
snapshotWith :: (a → b → c) → Event a → Behaviour b → Event c
filterE :: (a → Bool) → Event a → Event a

The merge function takes two events and returns a new event that contains the union of all their occurrences. The snapshotWith function takes an event e and a behaviour b, and returns an event whose occurrences are simultaneous with e’s, but the values they contain are the result of applying a binary function to the values yielded by e and b at the time of those occurrences. As a last example, filterE derives a new event by throwing away some occurrences of an existing one. With some imagination, it’s hopefully clear that it’s possible to fully implement the logic of an interactive application in this manner.
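To pin down the semantics, here is a toy denotational model of these three combinators: events as time-ordered lists of occurrences and behaviours as functions of time. This is only an illustration of the meaning, not how Sodium or any real FRP system is implemented:

```haskell
import Data.List (sortOn)

type Time = Double
type Event a = [(Time, a)]       -- time-ordered occurrences
type Behaviour a = Time -> a     -- a value at every point in time

-- union of all occurrences of the two input events
merge :: Event a -> Event a -> Event a
merge e1 e2 = sortOn fst (e1 ++ e2)

-- occurrences simultaneous with e's, values combined with b sampled there
snapshotWith :: (a -> b -> c) -> Event a -> Behaviour b -> Event c
snapshotWith f e b = [(t, f x (b t)) | (t, x) <- e]

-- keep only the occurrences whose values satisfy the predicate
filterE :: (a -> Bool) -> Event a -> Event a
filterE p = filter (p . snd)
```

For instance, snapshotting mouse clicks against the mouse position behaviour yields an event of click positions, exactly the scenario mentioned above.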

The basics out of the way, let’s see what solutions FRP offers to certain real-life problems! To make the term ‘solution’ concrete and to provide context for this discussion, let’s assume some code is read and modified by several programmers over a long period of time, and we’d like to prevent certain mistakes by construction, or at least make the right solution the most obvious choice. FRP is by no means a silver bullet, but it can guide the programmer in the right direction where OOP offers no particular help.

Complex event handling


Whenever reactive programming is mentioned, it is usually in connection with event handling in some way. FRP systems offer a principled approach to specifying complex rules for transforming and combining events. In order to live up to the term ‘functional’, such a system must be referentially transparent. This criterion has a few implications, like being free of glitches (i.e. being transactional: not allowing inconsistent state to be observed). Achieving this requires sophisticated dependency and lifecycle management.

As an abstract example, consider the following event propagation structure: A → C, B → C, (A, C) → D. Every time either A or B fires an event, it causes an update in C. At the same time, events coming from A also trigger a change in D, but we need to sample the state of C to compute this change. However, using the state of C before A’s effect on it is performed would lead to erroneous results, since it’s inconsistent.

Let’s now assume that our events are consumed by simple callbacks, and D is not implemented yet. A programmer new to the code is tasked with creating it, and they somehow figure out that they need to listen to A and sample the state of C to be able to perform the necessary update. However, they don’t realise that C’s state is partly affected by A due to some indirection in the code.

What happens? Depending on the order the callbacks are invoked, we might or might not get the desired result. If we’re unlucky, D’s behaviour will happen to be correct, and we don’t find out about our reliance on random ordering for a long time. Then a year or two down the line something in our system changes, event listeners get registered in a different order, D breaks, and a pleasant debugging session awaits us. By the way, we were not involved in implementing either C or D...

There are two aspects of FRP that can make it easier to get the solution right from the beginning: explicit dependencies and explicit delay policies.

One of the core properties of functional programs is the fact that definitions need to explicitly mention all the entities necessary to derive each value. In effect, FRP forces us to define time-varying quantities (i.e. the states of the objects in the system) by describing the clear connection between them and all the entities in the world that can possibly affect them. Some of these entities can be extensible slots, but we still have to define an explicit policy to deal with them. Our hypothetical programmer would have been much more likely to find out early on that C was affected by A had the system been formulated in this manner. They only need to look at the definition as opposed to scouring the code for all the possible callers, which generally doesn’t occur to people in practice even if their IDE makes it easy.

The other half of the story is delays. A mature FRP system would offer pairs of combinators like the following:

hold :: Event a → Behaviour a
holdDelayed :: Event a → Behaviour a

These combinators both convert events into behaviours by simply storing the last occurrence at any time. The difference between them is a conceptually infinitesimal delay that affects sampling. When taking snapshots of these behaviours at the exact moments of the original occurrences, the one constructed with hold already yields the new value, while the one constructed with holdDelayed still exposes the old value. The merit of FRP is making the logic very explicit and obvious, and especially making the programmer think about which one they need in a particular case.
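In the toy model of events as time-ordered occurrence lists, the two variants differ only in whether an occurrence at the sampling instant is visible yet. Note that I added an explicit initial value as the first argument, which the simplified signatures above omit (real APIs such as Sodium’s take one too):

```haskell
type Time = Double
type Behaviour a = Time -> a

-- Toy semantics only; assumes the occurrence list is time-ordered.
-- An occurrence at the sampling instant is already visible:
hold :: a -> [(Time, a)] -> Behaviour a
hold x0 e t = last (x0 : [x | (t', x) <- e, t' <= t])

-- ...while here it only becomes visible infinitesimally later:
holdDelayed :: a -> [(Time, a)] -> Behaviour a
holdDelayed x0 e t = last (x0 : [x | (t', x) <- e, t' < t])
```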

Avoiding the observation of inconsistent state


While transactionality was already mentioned above, I’d like to bring it up again as a separate point.

When using some FRP system, we’re basically describing a multi-rate dynamic data-flow network. Any incoming event causes a series of changes that propagates through the network, and the system makes sure that we never observe the state of the system before this propagation completes. For instance, Elerea achieves this by double buffering every state variable and updating the whole network in two phases. The nice part is that this consistency is enforced by construction at the language level as opposed to requiring explicit book-keeping.
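To make the double buffering idea tangible, here is a minimal sketch of a two-phase update. The Node record and all the names are my own; the real Elerea machinery is considerably more involved:

```haskell
import Data.IORef

-- A node holds its visible value, a buffer for the upcoming one, and a
-- computation that derives the next value from other nodes' current values.
data Node a = Node { current :: IORef a, next :: IORef a, compute :: IO a }

-- Phase 1 only reads 'current' values; phase 2 commits every 'next' value.
-- No computation can observe a half-updated network, whatever the order.
step :: [Node a] -> IO ()
step ns = do
  mapM_ (\n -> compute n >>= writeIORef (next n)) ns
  mapM_ (\n -> readIORef (next n) >>= writeIORef (current n)) ns
```

In the earlier example, a node for D computing from C would see C’s pre-step value even though C also changes in the same step, regardless of node ordering.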

Conflict handling by construction


When an object exposes a way to manipulate its state, a potential source of conflicts opens. As the code evolves, there can be more and more different entities in the system that might want to exercise control over this object. In a traditional OOP setting we need extra effort to avoid situations where separate parts of the system are stepping on each other’s toes: best practices, design patterns, frameworks etc. In contrast, FRP forces us to make a decision about resolving conflicts on a much more elementary level, simply because we have to think about all the possible external effects on an object when describing its state transitions. There’s no other way to implement its logic!

As a real-life example, let’s consider the GUI of a complex application. Assuming that we keep adding features to this program, we regularly get user feedback about certain flows being broken, e.g. an unforeseen combination of windows somehow interfering with each other’s functionality. It is tempting to fix such issues on an individual basis by trying to detect the offending situation and e.g. controlling the visibility of some visual element in this special case. OOP by itself offers no safeguards to stop us from doing the wrong thing and adding the extra rule without considering others that try to manipulate the same element. FRP, on the other hand, makes us have a good look at all the existing effects and integrate a new rule in a way that plays nice with everyone else.

In short, FRP forces us to assign a recognisable identity to every writer of a given channel (state-changing method), and to define explicit rules to resolve their requests to affect some state.

Encapsulating state (transformation) at a small scale in a reusable manner


One of the strengths of functional programming is its ability to create small-scale abstractions at virtually no cost. For instance, single loops can be factored into general recursion patterns and specific logic in the form of a non-recursive state transforming function. FRP brings this capability into the world of mutable objects, making it possible to define transformers like rising edge detectors in a light-weight way without introducing new state variables elsewhere.

I have a quick and dirty example from our LambdaCube samples. In order to produce the videos on the blog, we added a way to save screenshots of the running application. Depending on the use case, sometimes it was convenient to have a button that I’d have to keep pressed to save the images, at other times a toggle logic worked better. The first case is handled by the following code:

capture = capturePress

The capturePress signal reflects the state of the chosen key, while capture specifies whether capturing is taking place at the moment. Well, this was trivial. To make the toggle logic work, we need two state variables: one to detect edges (when the key goes from the non-pressed to the pressed state), and the other to remember whether we’re saving screenshots or not. In the final code, these states can be hidden in two reusable signal transformers:

capture <- toggle =<< risingEdge =<< capturePress

In this case we need monadic bind as opposed to plain function application because Elerea requires all stateful entities to be created in a monadic context that manages their lifecycle. It’s nice that the change is local; we don’t need to introduce a new variable or a new object type. The desired state transformation can be expressed by changing a single line and using generic building blocks.
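For illustration, here are toy versions of the two transformers in a lists-as-streams model. These sketches are my own; the real Elerea combinators live in a monadic context and keep their state hidden:

```haskell
-- True exactly when the input goes from False to True
risingEdge :: [Bool] -> [Bool]
risingEdge presses = zipWith edge (False : presses) presses
  where edge prev cur = cur && not prev

-- flip an internal on/off state at every True input, expose the state
toggle :: [Bool] -> [Bool]
toggle = tail . scanl flipOn False
  where flipOn st True  = not st
        flipOn st False = st

-- the composition from the example: capture toggles on each key press
capture :: [Bool] -> [Bool]
capture = toggle . risingEdge
```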

The bottom line


I believe FRP has high potential in application development, but we need more real-life data to have a clear picture of the advantages and the downsides. FRP has some desirable properties that should make it easier for programmers to do the right thing when attacking a problem, but we don’t know yet how it works at scale. There’s a trade-off between putting too many constraints over the way programs can be structured versus letting the programmers shoot themselves in the foot (often in subtle ways that might take a long time to realise), and the stricter end of the spectrum is of course the less popular one. In the end, I’m not expecting FRP to get big – not in its more rigorous forms at least –, but I still recommend studying it for the valuable lessons that can make our software better, no matter what approach we use.

2011/06/22

Renewed LambdaCube, Bullet bindings and Stunts example

After many long hours of labour, I’m happy to announce the new release of the LambdaCube 3D engine on Hackage! Almost two years passed since the previous one, and the project spent the majority of that time in dormancy. The most important changes are the following (mostly straight from the latest HCAR):
  • removed dependency on the high-level OpenGL bindings: from now on, the library only builds on OpenGLRaw, and the OpenGL specific code is limited to the GL render system module, which we plan to move into a separate package;
  • switched to the vect library (from Vec), which was created for 3D applications from the get-go;
  • dropped fgl dependency for scene graph handling, and switched to a simpler and much more efficient solution based on bytestring-trie;
  • simplified support for procedurally created content through vector vertex buffers, which subsumes user-supplied loaders for any format as a special case;
  • introduced the LCM (LambdaCube Monad) abstraction, which hides the management of the world state and generally simplifies the engine code;
  • more efficient scene management with frustum culling;
  • added support for light sources;
  • identified the public interface and documented it (the rest is only exposed for the sake of library implementers).

Besides the engine, we updated the bindings for the Bullet physics library as well, which covers a big portion of the existing functionality in its present state. This is a raw binding that exposes a C style interface.

We also set out to create a complex example that people can immediately start playing with. This is a modern remake of the classic racing game Stunts, available on Hackage under the same name. The example serves two primary purposes: it is a test case for the 3D engine and the physics binding as well as a starting point for future users of the library. We made sure that everything compiles and works fine on the three major platforms supported by GHC. The quickest way to see the example in action is to try the Windows executable, which is also handled fine by Wine. Fortunately, the compilation process is less painful than one would expect (even on Windows!), so those who want to start hacking on it right away should feel encouraged to do so; we provided a starter guide to make the first steps easier. For the even lazier ones, here’s a taste of the example:



Plans for the future

One of the original design goals was to create an engine that can handle Ogre3D content out of the box. Unfortunately, this proved to be a bad decision, since we ended up with a lot of fixed-function legacy right away, and the implementation in its current form is rather wasteful of resources due to its naivety, especially with respect to the choice of data structures.

The next version will definitely get rid of all the Ogre3D specific functionality. Our plan is to create a small and efficient core package that allows the programmer to easily set up the rendering pipeline and manage mesh hierarchies. We are targeting the OpenGL 3.3 core profile, so the engine will build exclusively on the programmable pipeline. The current idea is to define a simple DSL for describing pipelines that’s similar to GPipe in its basic philosophy, but not embedded in Haskell, which allows us to have full control over its structure. The engine would provide an API to build pipelines (allocating buffer objects and managing shader programs) out of these descriptions and to feed them with content (textures and primitive streams resulting from the flattening of the scene), while taking care of caching and optimising context switches. All this functionality would be wrapped in a declarative interface, naturally.

While it would be possible to create separate packages to support existing formats (e.g. Ogre3D or COLLADA), we have something much more exciting in mind: integration with Blender. Blender can load content in several formats and present it in a basic form, reducing everything to raw vertex and index buffers, and the data can be accessed from Haskell code through Thrift. It is also possible to wrap LambdaCube in a rendering plugin that would display content inside Blender, thereby giving us an integrated content authoring solution. Some of this actually works already, but it’s not ready yet for public consumption.

By the way, Csaba will be present at this year’s CamHac, so if you’re one of the attendees, feel free to bombard him with questions and ideas!

2011/04/02

Elerea documentation rewritten

I noticed that there have been some misunderstandings about the capabilities and other characteristics of Elerea, so I sat down today and updated the documentation. Also, I deprecated the variant with the automatic delays, because it’s a difficult beast to handle. The new version is already live on Hackage. I still need to update the examples so they don’t use the legacy interfaces, but they shouldn’t look very different after the rewrite.

If you have any advice or comment regarding the renewed docs, don’t hesitate to share it with me.

2010/04/10

Programming with effects – the story so far

I’m sure everyone is sick of the monad story already, but I think I found yet another way to approach it that might help understand the big picture.

Perfect compositionality is the holy grail of software engineering, and enforcing referential transparency at every level of abstraction is a huge step towards that goal. This is all fine as long as we only have to deal with pure transformations, but what about side effects? The essence of the Haskell solution to this problem is to step back a bit, encapsulate effects in some abstraction, and combine these effectful entities as if they were ordinary values. This is basically a form of metaprogramming, since after assembling a big program from small ones using combinators, we need to interpret the resulting structure to make it come to life.

One can come up with all kinds of silly analogies, but nothing beats simply looking at those encapsulated computations as generic black boxes. In order to build a big box from several small ones, all we have to do is connect them. The Haskell base libraries offer us a bunch of type classes that provide various sorts of plumbing combinators to describe the composition of boxes in a disciplined way. To date, probably the Typeclassopedia is the most accessible resource to explain them to the uninitiated.

While preparing notes for my FP class, I realised that I have never seen the relationships between these classes explained on a single diagram, so I prepared one:


The arrows always point towards the specialisation. The top of the stack is Functor, which only allows us to combine a single effectful entity with pure functions. If we add parallel composition of effects (i.e. the ability to execute n boxes in parallel, collect their outputs and feed them to a single function of n arguments), we get Applicative. Orthogonally, if our boxes also have an input port we can attach to, we can define sequential composition, which is the responsibility of the Category class.
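The two directions can be seen in miniature with the most familiar instances, using Maybe for parallel composition of effects and plain functions for sequential composition:

```haskell
import Control.Applicative (liftA2)

-- Applicative: run two effectful boxes "side by side" and combine their
-- outputs with a pure function (the effect here is possible failure)
parallel :: Maybe Int
parallel = liftA2 (+) (Just 1) (Just 2)

-- Category: plug one box's output port into the next box's input port
-- (functions are the simplest category)
sequential :: Int -> Int
sequential = (* 2) . (+ 1)
```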

The Arrow class is known to specialise both Applicative and Category, but is there any common specialisation of those two that doesn’t fit in Arrow? I’ve never seen the answer spelled out, so I’ll say it here: no, Arrow is precisely the intersection of Applicative and Category. If c is a category and c a is an applicative functor for any a, then it is easy to show that c is also an arrow. All we have to do is provide a minimal definition of the Arrow interface, which is the arr combinator plus any one of the other four (this is technically not true due to the way the default implementations are defined in the Arrow class, but it’s always possible to express them in terms of each other). As it turns out, the input-splitting combinator &&& is the easiest to express given the applicative interface, so we can use the following definition:

arr f = fmap f id
(&&&) = liftA2 (,)
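As a quick sanity check, we can specialise these two definitions to ordinary functions, which form both a Category and an Applicative in their result type (the primed names are mine, to avoid clashing with the real Arrow methods):

```haskell
import Control.Applicative (liftA2)

-- For functions, fmap is (.), so arr' f = f . id = f: lifting a pure
-- function into the function category is the identity, as expected.
arr' :: (a -> b) -> (a -> b)
arr' f = fmap f id

-- liftA2 works pointwise for functions: feed both the same input.
fanout :: (a -> b) -> (a -> c) -> (a -> (b, c))
fanout = liftA2 (,)
```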


With Arrow, we can arrange our effectful computations in a static acyclic network. The next step on the ladder of expressiveness is ArrowChoice, which allows us to branch according to the input of the box, but the branches still have a static structure. This restriction is lifted by the Monad (or the equivalent ArrowApply) interface, which grants us the ability to manipulate arrows as first class entities during runtime, i.e. within the interpretation phase.

Nowadays we can hear more and more voices saying that monads receive too much attention compared to all the other patterns mentioned above. I understand the sentiment, and even sympathise with it, but I would still say monads are special. When we program in mainstream imperative languages, we rarely have the opportunity to venture beyond the limits of ArrowChoice. Haskell can be an excellent imperative language precisely because monads allow it to eliminate the boundary between compilation and execution, hence be strictly more expressive than our usual toolset. The IO monad is dark magic, no doubt. Nevertheless, the aforementioned voices are absolutely right in saying that we shouldn’t use the monadic interface when we don’t need its expressive power. Restricting ourselves to the more general interface makes our code more reusable and typically also easier to implement efficiently (as an extreme example, infinite streams have a much more efficient direct Applicative instance than the default one derived from the monad operations).

Back to the drawing, we can see that there are two orthogonal traits to consider besides the main hierarchy. When building a graph of effects, we often want feedback loops, i.e. cycles in the graph. Cycles only make sense when we have sequential composition and the ability to branch, so there’s no point in extending anything more general than arrows with the ability to form loops. They are provided by the ArrowLoop class and also the MonadFix class.

The other aspect is the ability to combine the internals of two boxes instead of just enclosing them both in a bigger box. If the structure of the side effect is a monoid, then we have an extra means to merge boxes. Such a combination only makes sense in the presence of parallel composition, so it can only be defined for applicative functors. Depending on which step of the ladder we’re standing on, the class is called Alternative, ArrowPlus or MonadPlus, but it’s really the same thing everywhere.
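For example, on the Applicative step of the ladder with Maybe as the effect, this merge is the familiar ‘first success wins’ monoid, with Nothing as its unit:

```haskell
import Control.Applicative ((<|>))

-- merging the effects themselves rather than wrapping them in a bigger box
firstSuccess :: Maybe Int
firstSuccess = Nothing <|> Just 2 <|> Just 3
```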

The last two paragraphs suggest that the type class hierarchy could use a lot more refactoring than the oft-mentioned issue that Applicative is not a superclass of Monad. As it turns out, the Arrow class is completely superfluous, and we should probably have a single unified class for monoidal effects as well as one for feedback loops. I’m not sure where the branching added by ArrowChoice sits; it might well apply to any Category, since it’s a fourth kind of combination besides parallel, sequential, and monoidal: separation in time.

And we haven’t even touched on the question of commutativity...

2010/03/26

Moving to GitHub

I just moved some of my Haskell code over to GitHub. For the time being, you’ll find there everything related to Elerea, DoW and Hemkay. More to come later.

2010/02/14

Behind the dungeons

Finally I managed to polish Dungeons of Wor to the degree that it felt worth releasing. The main motivation for creating it was to get acquainted with functional reactive programming (FRP) from the programmer’s side as opposed to the library implementor’s side. If I had to pass a verdict right away, I would definitely say it was a pleasant experience overall despite the fact that I had to figure out everything from zero.

The core idea of FRP is to work with time-varying quantities -- both continuous functions and discrete events -- as first-class entities, which opens up a whole new perspective to look at interactive systems from. There are essentially two main schools today: focusing on the quantities themselves or on the transformers that consume and produce such quantities. The former is sometimes called classic FRP (CFRP), while the latter is arrow-based FRP (AFRP).

Since I was mainly interested in putting this paradigm into practice, I was disappointed to learn that most of the libraries available were not ready for actual use. Yampa was the only one that didn’t break down immediately when I tried doing something with it, but I felt the arrow style and the pre-determined switching mechanisms too constraining, and I wanted an applicative style library to play with. For this reason I created Elerea from scratch, which started out as a quick hack, but I managed to iron out a lot of its creases over the past few months and reached a state where it was ready for the challenge. The details can be found in the paper I presented at WFLP 2010.

Elerea provides an obsolete interface and three experimental ones, but the latter group is actually much more dependable and well-behaved in general. I used the Simple version to program the game, because it has clean semantics. In a way, it goes against FRP, since it only provides an interface to work with discrete streams as opposed to continuous behaviours, but the more complex problem of managing the lifetimes of the entities is an orthogonal issue. Since I was mostly concerned about the latter problem, I chose a task that can be easily expressed as a succession of states: an old-school arcade game.

In essence, the library provides streams (called signals) that are not only isomorphic to functions over natural numbers, but also support the monad interface just like functions do (as in the reader monad). The only construct offered over the monadic combinators is a memory element to be able to introduce state. The bind operation corresponds to sampling, and it allows us to describe dynamic data-flow networks without the need for additional constructs. This is precisely the difference between monads and arrows: arrows can be used to describe the static parts of the system, but we need something more when we have a dynamic system (in the case of Yampa this role is played by the various switching combinators).
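A toy model makes this concrete: if we pretend a signal is just a function over sampling times, bind samples the outer signal and then keeps sampling whatever signal it got back. This flat sketch is mine and deliberately ignores SignalGen, start times and lifecycles:

```haskell
-- streams as functions over natural sampling times
newtype Signal a = Signal { at :: Int -> a }

instance Functor Signal where
  fmap f (Signal g) = Signal (f . g)

instance Applicative Signal where
  pure x = Signal (const x)
  Signal f <*> Signal g = Signal (\t -> f t (g t))

-- bind is sampling: pick a signal based on the current sample, sample it
instance Monad Signal where
  Signal g >>= k = Signal (\t -> at (k (g t)) t)

-- the single stateful construct: a memory element (unit delay)
delay :: a -> Signal a -> Signal a
delay x0 (Signal g) = Signal (\t -> if t == 0 then x0 else g (t - 1))
```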

Monads, those cute fluffy things

When it comes to practice, the monadic interface is both a blessing and a curse. It is really powerful, and it allowed me to express all kinds of dynamic structures (especially point-wise behaviours and dynamic collections) with relative ease. It was really useful not to have to rely on a pre-determined set of switching constructs, because I often found that I wanted switchers that are slightly different in individual cases, and I could easily implement them as needed -- and later I managed to unify those instances.

However, it can get confusing at times, because contrary to my expectations, I often needed to create signals with several layers (i.e. values of type Signal (SignalGen [Signal (SignalGen a)]) and worse), and I had to write basically boilerplate code to break down these structures or convert one into the other.

For instance, the combinator I used to maintain dynamic collections is the following (you can find its source in the Utils module of the game code):

collection :: Signal [Signal a] -> Signal (a -> Bool) -> SignalGen (Signal [a])


The first argument is a source of signals, which contains a list of signals to add to the collection at any instant. The second argument is a keep-alive condition, which is also time-varying. Whenever the current function carried by this signal yields False on any of the signals currently maintained in the collection, the signal in question is thrown away. The collection signal hides the signals living inside it, and only exposes their output.

According to the rules of the game, each new enemy is spawned upon the death of a previous one. There is a function called spawnEnemy that takes the current state of an enemy and turns it into a list of new enemies. This function is simply applied to the list of current enemy states using concatMap within the spawnEnemies function, whose type is therefore the following:

spawnEnemies :: [Actor] -> [Signal (SignalGen Actor)]


We get a two-layered structure inside the list, because the generators depend on the current position of the player (to avoid spawning creatures too close to it). We have to take the resulting list and turn it into something accepted by collection. There is only one way to turn a signal generator into a signal (this is a library primitive that’s essentially a monadic join across two types of stream):

generator :: Signal (SignalGen a) -> SignalGen (Signal a)


Now the line that creates the source signal for the enemy collection is the following:

spawnedEnemies <- generator (sequence <$> (sequence . spawnEnemies =<< enemies'))


The enemies' signal carries the previous state of the enemies. First we bind this signal to sequence . spawnEnemies, where sequence has the following type:

sequence (inner) :: [Signal (SignalGen Actor)] -> Signal [SignalGen Actor]


The outer call to sequence pushes the list layer even further inside the structure:

fmap sequence (outer) :: Signal [SignalGen Actor] -> Signal (SignalGen [Actor])


This will give us something we can apply generator to, and get exactly the structure we want.

Now this is quite easy to follow in hindsight. However, while the game was developed and dependencies were being added and removed, the types kept changing, and I had to follow the changes. I’d say that was more trouble than it should have been. It would have helped a lot if I could easily display the inferred types of non-top-level expressions.

At the same time, all this power made playing with various approaches to dynamism possible, and I don’t know how I would have solved all the little problems without it. In general, I’d say it provides a great platform to experiment with abstractions (like the switcher and collection combinators) that should make FRP in the large possible in the long run.

Issues with delays

When we create a dynamic data-flow network, in theory we have to be careful to create only well-formed loops, i.e. no loops without at least one delay element to break them. In practice, however, I never found it to be a real problem. While there is no protection against instantaneous loops in the library, there is a very simple convention to avoid them: every forward reference must point to a delayed signal. Even though this should be trivial, I’ve never seen any similar rule spelt out like this. I got the inspiration from Daniel Bünzli’s React library, which doesn’t allow undelayed forward references, because its fixed point combinators inevitably add delays. In the case of Elerea these references are made possible by the MonadFix instance of signal generators, which cannot be forced to contain a delay (which would violate the laws for mfix anyway), but in exchange is more powerful, since any structure can be fed back, not just a single signal. And there is even syntactic sugar (mdo and recursive do expressions) to make its usage trivial. But if someone feels that the compiler should be able to check loops for us anyway, Neil Sculthorpe’s latest paper shows a way to achieve that.
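In the lists-as-streams model the convention is easy to demonstrate: a self-referential definition is well-formed exactly when the forward reference goes through a delay (this toy sketch of mine stands in for Elerea’s mdo-based recursion):

```haskell
-- a unit delay over list streams: prepend the initial value
delay :: a -> [a] -> [a]
delay = (:)

-- a counter defined in terms of its own delayed output; laziness makes
-- this productive because the self-reference is behind a delay
counter :: [Int]
counter = map (+ 1) (delay 0 counter)
```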

Unfortunately, there is a much more fundamental problem: the possibility of off-by-one errors. When we have events that trigger each other in a circular dependency, we have to be careful when to take the current value of some signal as opposed to the previous one. Note that this has nothing to do with whether our system is conceptually continuous-time or not (just swap ‘current’ and ‘previous’ to ‘post-switch’ and ‘pre-switch’ and you still have the same issue), it seems to be part of the essential complexity of FRP applications in general. This problem is highly application specific, and I’ve yet to see it addressed in any way, be it just a set of guidelines or something more rigorous, so pointers are kindly welcome.

Dealing with code evolution

Dungeons of Wor being a very small application, I could only gather some early impressions. It was certainly easy not to fall into the trap of maintaining a world state, as a lot of the local state could be hidden inside individual signals. Both Elerea and Yampa seem to do a good job when it comes to avoiding monolithic designs. When it comes to moving around all the information necessary to perform some calculation, using Elerea is not much different from functional programming in general. One can even nest definitions the same way as usual, using local declarations.

If possible, it helps a lot to express state changes in terms of hints calculated in other parts of the program and merged at a single point. For instance, we don’t need to pass the score to the level logic; instead, we create events that cause the score to be changed (increased or decreased depending on whether the player killed something or fired a shot). This makes it easier to introduce new entities.
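
As a sketch of the idea, with events modelled as plain lists of occurrence batches (a toy stand-in, not Elerea’s real types, and the hint events are made up for illustration): each subsystem emits score deltas, and a single fold merges them into the score.

```haskell
-- Toy model: an event is a list of occurrence batches, one per step.
type Event a = [[a]]

-- Hypothetical hints produced by independent parts of the game logic.
killRewards, shotCosts :: Event Int
killRewards = [[100], [],    [100, 100], []]
shotCosts   = [[],    [-10], [-10],      []]

-- Union of the occurrences of both events, step by step.
mergeE :: Event a -> Event a -> Event a
mergeE = zipWith (++)

-- The score is maintained at a single point by folding in the deltas;
-- new entities just contribute another event to the merge.
score :: [Int]
score = scanl (\s deltas -> s + sum deltas) 0 (mergeE killRewards shotCosts)

main :: IO ()
main = print score  -- prints [0,100,90,280,280]
```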

Elerea limitations

There are a few sore points in Elerea. First, the current implementation highly depends on the garbage collector to remove unreferenced signals. Unfortunately, a signal doesn’t only occupy memory but keeps actively calculating and refreshing its output until that happens. This is quite spectacular when we run the game in ghci: we can easily tell when garbage collection happens, because the monsters suddenly pick up speed... I wonder if this problem could be solved while keeping the library a shallow embedding on top of Haskell. Yampa doesn’t have this problem, since it only advances signal functions that are attached to the network. Again, this is a consequence of the extra power monads give us: it’s much more difficult to track our signals.

Also, Elerea doesn’t make it possible to stop a stateful signal and restart it later, because we can’t even tell which parts of a network are supposed to be stopped. This is no problem in Yampa, since a signal function encompasses state in a pure way, and it can be freely passed around. The funny consequence is that it’s pretty much impossible to implement a pause functionality without introducing a new primitive in the library. But I’m not yet sure what this primitive should do, especially how it could work without violating referential transparency, so the limitation stays. The root of the problem is again the monadic interface, since it hides the signal connections and only exposes the output of the subnetworks. Consequently, we have no means to tell just a part of the network to stop and resume later.

The third problem I ran into is another simple thing: one-time switching. If I want a signal that behaves one way, then after an event fires it behaves in another way for the rest of its lifetime, I cannot express it in a way that the event causing the switch can be garbage collected. The library gives a solution to the problem of start times, but there’s no way to specify the end of life for a signal, so it seems that I’ll have to add a switching primitive to the library. I still don’t know what the signature of this primitive should be, but it’s most likely going to be very similar to Yampa’s switch.

All these problems are somewhat related, since they are all about stopping and possibly resuming parts of the network. I wonder if there is an elegant way to unify them.

Last words, for now at least

This post focused very much on the difficulties, but my conclusion is that FRP has great potential. Most of the coding was straightforward, it was easy to change the program when necessary, it was also easy to achieve a nice degree of locality, and even though I’m still in the process of figuring this whole thing out, it didn’t take too long to get a program of this size working. We need more experiments, and I can only encourage everyone to play with FRP libraries and create larger applications.

2009/11/19

LambdaCube and Bullet on Hackage at last

Yesterday we sat down with Csaba to finally clean up the code a little bit and wrap it into, ehm, cabbages. Since this is basically an early snapshot of a rather volatile codebase, there is no documentation beyond a simple start-up guide. In its present state, it is mainly aimed at fanatics, and we made it available in the hope of attracting early feedback.

So what is there on offer? As a first step, I recommend installing lambdacube-examples, which works out of the box. We tried it on Linux as well as a freshly installed Haskell Platform on XP. It gives you two executables: lambdacube-basic, a rather psychedelic experience that shows off some basic features, and lambdacube-cameratrack, a simple non-interactive example. Note the readme in lambdacube-engine, which tells you how to make resources (models, textures, scripts etc.) accessible to your program.

The other half of the deal is the Bullet binding, which contains a little C wrapper around the C++ API, so it doesn’t rely on the rather limited C interface that the library provides. Note that there is an example that requires OpenGL, which also uses unsafeCoerce to convert from CFloat (exposed by the Bullet binding) to GLfloat (which is actually a newtype wrapper around CFloat), because the performance of realToFrac is beyond obscene. We could use some advice on how to deal with this problem in a less fragile way.
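
For what it’s worth, on newer GHCs the type-checked way to express such a representation-preserving conversion is Data.Coerce. A sketch with a stand-in newtype (GLfloat' here is only an illustration, not the real OpenGL type):

```haskell
import Data.Coerce (coerce)
import Foreign.C.Types (CFloat)

-- Stand-in for OpenGL's GLfloat, which is likewise a newtype over CFloat.
newtype GLfloat' = GLfloat' CFloat deriving (Eq, Show)

-- Zero-cost like unsafeCoerce, but rejected by the compiler if the
-- two representations ever stop matching.
toGL :: CFloat -> GLfloat'
toGL = coerce

main :: IO ()
main = print (toGL 1.5)  -- prints GLfloat' 1.5
```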

Anyway, if you managed to install the binding, you can also try lambdacube-bullet, which shows how to use the two libraries together. Note that meshes can be turned into collision shapes. In this case, the same shape is used for rendering and dynamics, so the simulation is rather slow. If we used a less detailed convex mesh, animation would be smooth even with a hundred objects interacting.

Finally, if you checked the screenshots on the LambdaCube wiki page, you could see that we also have some code to load the content of Tile Racer to some extent. Unfortunately, we cannot distribute the content, and it needs some preprocessing (which requires a working Ogre3D installation) in order for our program to be able to deal with it, so you’ll have to do some manual work to get it working. The track loader can be found in the rather unorganised obsolete branch of the source repository.

That’s it for now. Have fun with it, but prepare for some bumps.

2009/09/28

An eventful month in a few words

I haven’t written for a long while, because the past few weeks were packed with excitement. First my hp2any work had to be finalised for Google, and I haven’t been able to touch it since, thanks to all the events that followed. After a brief holiday I went to ICFP and spent the whole week in Edinburgh. Besides the conference, my personal package included the Haskell Symposium and CUFP. It was a great experience, especially because I could meet all these weird people. I spent the next two weeks back in Budapest and went to two FP-BUD meetings during that time, among all the less interesting day-to-day duties. The icing on the cake was last week, which I had the pleasure of spending in New York thanks to Jane Street and our JSSP participation with Csaba. After two days of freely roaming the city we went to the meeting that Yaron already blogged about, then I spent the rest of the week in Newark and South Orange at IFL, and gave a talk about Elerea on the first day. Yes, it was a lot of FP fun in such a short time. And by now I’ve even recovered from the rather exhausting nine-hour flight from JFK to BUD...

The main reason for this post is to complement Yaron’s and show you some pictures (sorry in advance for the quality) of the JSSP meeting. The trip was a real treat, but I’m not going to recall the days before the meeting. Suffice it to say, New York is quite a shock for someone who has spent all their life in Europe. We were basically walking around Manhattan for two days and tried to absorb as much of it as possible – which is more fun to actually do than to be told about.

The meeting itself started off with a welcome speech.

The essence of it was a shopping list of mostly FP-related issues (language features as well as practices) that gave a general idea of what using FP in the wild looks like. The whole talk was supported by a single slide with the list on it, but I forgot to take a photo of it. That is a pity, because it would be wonderful flamewar material. ;) Thankfully, it was in stark contrast with the bulk of IFL, so there was no risk of being exposed to a one-sided view over the week.

Afterwards, the first session was about Yaron’s favourite projects, and I can only agree with him seeing all the work that went into them. The very first talk was about Ocamlviz, given by Guillaume Von Tokarski:

It is kind of embarrassing to me, because it really makes my hp2any work pale in comparison. Just look at their project page.

The next one was about the Moby compiler, this time given by Shriram Krishnamurthi:

I had already encountered this project at ICFP, and even from that crop it stood out to me with all the real-life results they have in making maths accessible to kids. Danny Yoo gave the same talk at IFL a few days later.

After lunch, the next session started with Bertrand Desmons presenting his graphics library.

Even if the work is not finished yet, it still looked fine to me considering the time frame of the summer project, especially knowing how much struggle we had to go through ourselves. Speaking of which, the next talk was about LambdaCube. Amidst all the excitement I forgot to take a picture, so here’s one from the rehearsal:

I plan to write more about the project in a later post. Csaba, being the perfectionist he is, was not really satisfied with his own performance, but I wouldn’t be so harsh. We started the summer with a pile of hacks and ended it with an extensible base and a heap of features implemented.

The last project talk was given by Duru Türkoğlu about his work on the computational geometry library.

I have to admit that I had a hard time following parts of this talk, not being familiar with the area or the depths of SaSML. In that sense, it was a warm-up for the conference days... I had the impression that the emphasis in this project was more on the research than the implementation, so it was slightly different from the rest in its nature. But this only supports the observation that this year’s JSSP projects covered a wide area, and were diverse in more than just their implementation languages.

After the talks we had a demo session, where everyone could wander around and have a word with the creators.

Here’s a screenshot of LambdaCube interacting with the Bullet physics engine:

Yes, we piled up some cars just for fun. You will soon be able to do it yourself, as the new version of the engine as well as the renewed Bullet binding is due to be released shortly. I’m not sure why, but the program on the photo is called ‘Lambda-Cube GLFW UnsafeFRP Example 1’, so beware.

After the demo session, as a penultimate treat before the dinner, Chris Okasaki told us some fun ‘insider facts’ about his book.

Besides the Slashdot anecdote mentioned by Yaron, I liked the part where he recalled how his creative crisis, combined with outside pressure, was resolved during an inspirational walk home. Moments of enlightenment are always nice to hear about, especially since I have so few of them myself. And even if I normally hate sappy happy endings, sometimes I am willing to make an exception. ;)

The dinner at the end put the crown on the day as we often say in Hungary. We had a chat, ate a lot of good food and ended the day with a walk in the most colourful part of the city. All thanks to Jane Street for making this happen!

2009/08/14

hp2any overview online

At last I created a HaskellWiki entry for hp2any. It’s easier to maintain than the Google Code wiki, so I’ll be using this page for the project from now on.

As for the code, there is also a minor update that makes path handling more robust and Windows friendly. However, I still didn’t manage to work around the sharing violation issue that makes live profiling impossible under Windows.

2009/08/11

hp2any on Hackage

As the title says. Since the final deadline is very close, I uploaded the current versions of the hp2any packages. There are three of them: hp2any-core, hp2any-graph, and hp2any-manager. That’s also the order in which they depend on each other, so e.g. if you don’t want the history manager (because, say, you are a Windows user and don’t have the GTK bindings installed), you can get just the other two.

The graph package contains a directory called ‘test’ that includes a little example to get started with. The manager should be straightforward to use for the most part; the only thing to remember is that you can zoom graphs with the mouse wheel. Nevertheless, a little readme is included in its package.

There hasn’t been much change since the last update. The biggest issue was the fact that GLFW got broken due to the recent OpenGL update, so I switched to GLUT in the grapher. I also spent some time on cleaning up the code and the Haddock comments for the library interfaces. Everything should be functional, but please do exercise the code a bit and share your comments. :)

2009/08/02

More profiling goodies

Time for another update! Most of the changes happened behind the scenes. I wasted a few days because of my naive assumption that the text rendering process presented in the previous post actually works. It seems to, but apparently it causes some GTK calls to hang randomly. And I do mean randomly. The error doesn’t even seem to be related to text rendering per se, but it stops occurring as soon as I remove the call to drawPixbuf. This is something to look into later, but there’s no point pursuing it at the moment.

The changes on the surface are not really spectacular, but they might be useful. After giving up on my text rendering ambitions I added cost centre lists next to the graphs, and also the ability to toggle accumulated and separate views. The lists can be sorted according to the total cost for each cost centre, and as we hover over the graph, the cost centre under the cursor is highlighted on the list. See for yourself:



The extra options can be found under the fourth button next to the left arrow in the graph headers.

As for library interfaces, one major change is the removal of most of the network protocol. My original idea was to create a remote controllable graphing application, but I ended up factoring the graph rendering routines into a library, which removed the need for serialising various commands. The other change is the introduction of the Stats module, which makes it possible to efficiently query the heap profile in various ways. In particular, you can get the maximum individual cost, the maximum total cost (at a given moment) and the integral of any cost centre over any time range, and you can also extract the samples collected during a given time interval. All this requires a preprocessing phase, which can be performed in nearly linear time.
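
The preprocessing trick can be sketched as a prefix-sum computation. This is a simplified, hypothetical illustration of the idea behind such range queries, not the actual Stats API, and it tracks a single cost centre only:

```haskell
-- Samples of one cost centre: (time, cost) pairs, assumed sorted by time.
type Sample = (Double, Int)

-- Running totals, computed once in linear time.
prefixSums :: [Sample] -> [(Double, Int)]
prefixSums = scanl1 (\(_, acc) (t, c) -> (t, acc + c))

-- Total cost in the half-open interval (t1, t2], read off the
-- precomputed sums.  A real implementation would binary-search for the
-- endpoints instead of scanning the whole list.
rangeCost :: [(Double, Int)] -> Double -> Double -> Int
rangeCost sums t1 t2 = sumAt t2 - sumAt t1
  where sumAt t = last (0 : [acc | (t', acc) <- sums, t' <= t])

main :: IO ()
main = print (rangeCost (prefixSums [(0, 5), (1, 3), (2, 2)]) 0 2)  -- prints 5
```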

From now on, I’ll be concentrating on the interface of the history manager, and after adding some features I’ll also write a tutorial/manual for the applications.

2009/07/25

Pango font rendering on an OpenGL canvas

Nowadays I’m working on the interface of the profile history manager, and I ran into the problem mentioned in the title. The problem is actually that the Gtk2Hs interface seems to have more degrees of freedom than the implementation, and even if it seemingly lets us combine various mechanisms in creative ways, most of these combinations lead to a segfault. In particular, using any incarnation of drawLayout in an OpenGL drawing area suffers from this problem.

After some sweat and tears and frantic search for solutions I decided to use Cairo for rendering. This also has the advantage that the result of the rendering can be cached in memory, so the display can be refreshed very efficiently if the text is static. All we have to do is prepare an image for later rendering.

First we need a Cairo context. As part of my goal was to preferably use the same font as the rest of the interface, I also set it to a sans serif typeface:

cctx <- cairoCreateContext Nothing
fontDesc <- contextGetFontDescription cctx
fontDescriptionSetFamily fontDesc "Sans"
contextSetFontDescription cctx fontDesc


The next step towards consistency is to set the resolution to match that of the screen. This is very important to crazy people like me, who had to change their DPI setting.

screenGetDefault >>= \s -> case s of
  Nothing  -> return ()
  Just scr -> do
    res <- Gtk.get scr screenResolution
    cairoContextSetResolution cctx res


Okay, the context is sufficiently prepared for our purposes, so we can create the layout:

txt <- layoutText cctx "Hello world!"


Next, we need a pixel buffer that can hold the final render. We can easily find out its dimensions by asking for the pixel extents of the layout, and it’s probably better to use the logical extents that also include some padding:

Rectangle _ _ txtWidth txtHeight <- snd <$> layoutGetPixelExtents txt
textSurface <- createImageSurface FormatARGB32 txtWidth txtHeight
textSurfacePixbuf <- pixbufFromImageSurface textSurface


And everything’s ready for rendering, so let’s do it:

renderWith textSurface $ showLayout txt


From this moment the rendering of the text is available in the pixbuf. Rendering it is a piece of cake:

withGLDrawingArea glCanvas $ \glw -> do
  -- various opengl commands
  gc <- gcNew glw
  drawPixbuf glw gc textSurfacePixbuf 0 0 textX textY txtWidth txtHeight RgbDitherNone 0 0
  -- more opengl commands
  glDrawableSwapBuffers glw


When we know that we won’t need the image any more, we can free up the surface:

surfaceFinish textSurface


For more consistency we could look up the actual rendering settings (antialiasing, hinting) and fonts of the current theme and set everything before rendering. That’s left as an exercise for the reader. I’m too tired. ;)

2009/07/17

Introducing the heap profile manager

Well, well, it’s been almost two weeks since the last release, so I decided to show you the beginnings of the heap profile viewing application. After thinking about the alternatives, I decided to follow in the footsteps of others who make core tools and use gtk2hs for the interface. Since I had a working grapher in OpenGL, it was also straightforward to keep it. I started out as a complete beginner in gtk2hs last week, and it proved to be one of the smoothest library learning experiences so far. Even getting several OpenGL drawing areas to work simultaneously went without a hitch. The UI started out as something more complex, but after some experimentation I arrived at the present design:


You can load several graphs in each column (note that multiple selection is enabled in the file open dialog), and move them left and right between columns as you wish. Columns can also be created and destroyed at will. This initial version doesn’t have any more interaction, e.g. you cannot zoom in on areas you’re interested in. Consequently, graphs with an occasional high spike can be unpleasant to look at, as illustrated by the example on the right.

The next step is obviously to add the features that make comparison easier, like displaying scales, displaying identical cost centres with identical colours, adding zooming and panning, comparing a given cost centre from separate runs on the same graph and so on. Also, the profile loader could probably be sped up a bit by not building a map from times to samples if we convert it into a list anyway.

If all this works okay, I can also add some export plugins (e.g. ps and svg) to finally retire hp2ps.

Oh, the button next to the left arrow doesn’t do anything yet.

2009/07/06

Remote profiling at your fingertips

Yes, it’s time for another code release after a long hiatus. You can grab the source from the repository, and it should even be able to compile without any major hiccup. The grapher is now capable of connecting to a profile relay server that can broadcast the heap profile of its associated process on the fly. Stopping either the process, the server or a grapher client is handled gracefully. Thanks to decoupling the grapher and the process in the remote case, it is now possible to attach an observer to a program that was started earlier. Here’s the proof:


It’s hopefully obvious that the clients were attached at increasing times from top to bottom, and I flipped the mode of the middle one just for the fun of it.

Unfortunately, even though the code to achieve all this is quite small (hey, thanks Haskell!), getting it to work at a reasonable level of stability took longer than expected. Therefore, instead of adding remote control features to the grapher right away, I’ll start preparation for the history manager, otherwise I might not be able to get it working before the final GSoC deadline.

2009/07/03

Playing and learning

My profiling project is currently in the state where I can’t really show any pictures or code, since I’m fighting with various edge cases that one can bump into while writing a multithreaded distributed graphical application. In short, remote graphing has already been working since the beginning of the week, but there are stability issues that need to be ironed out before the next release, and it’s taking longer than I anticipated...

Until then, let me bring another program to your attention. It is a Bloxorz clone in Haskell, available through cabal in the bloxorz package. It is more limited than the original, and only features three levels, but the essence is there. This rendition was made by a student of mine, Viktor Devecseri, for a ‘practical functional programming’ class I’ve been teaching for one and a half years. This is an optional class available to students of electrical engineering and informatics (at our university, the latter involves a fair amount of both CS and IT), who don’t necessarily have any non-mainstream programming background.

The point of the class is basically to give students a wider perspective by exposing them to functional programming. The primary language of the course is Haskell, but I also talk about Hume, and last time I dedicated a lesson to Timber too, as the official name of the course is Embedded Functional Programming for historical reasons. We even used Hume to control Mindstorms NXT robots and a Tmote Sky sensor the first time this class was held, but this proved to be a bit too complicated due to the language’s immature toolset, so the topic slowly shifted towards general practical FP. While one might argue that Haskell is not the most practical language, it is definitely among the few languages that are both useful for real-life purposes and capable of broadening one’s mind to a great extent.

I always take the time to ask everyone about their learning experience, what they found easy or hard while working on their assignment. Since students can have rather different past experiences, I let everyone choose the topic on their own, the only constraint being that the program should be interactive in some way. In other words, I don’t want to see pure data grinding, because there is another class with that focus, and it’s probably more rewarding for newcomers to see something move and respond right away.

While talking to Viktor, he remarked that the approach being ‘not object oriented enough’ (I hope I’m not misquoting him too much here) was somewhat inconvenient. Afterwards, we had a little conversation about FP taking a completely different approach to modularisation than OO. Nevertheless, even with a few years of experience with functional languages, I still found it hard to respond to questions pertaining to concrete problems. I believe it would help a lot if there was some kind of tutorial on designing real-life applications in the style of Why Functional Programming Matters. Is there such a guide somewhere, or does it exist only in the brains of experienced people, scattered across the web? My dear readers, please help me out!

2009/06/25

Short-term hp2any plans

I haven’t written for a while, because I was distracted by other activities for a few days. I have no code or pretty pictures to show this time, just a few words on what I intend to do next.

This week’s milestone is a working network protocol for the grapher that can also be used by other tools, e.g. to monitor the heap of a remote application. Next week will be mainly about improving this feature and implementing some more functionality in the grapher that will hopefully make it a more useful tool. The rest of July shall be dedicated to the history manager.

2009/06/13

You can draw your own graphs now!

Sorry, no pictures this time, there’s nothing new to show. However, you can play with the grapher now, so go grab the source from the repository. The details of the update are in the commits. I first wanted to polish the grapher a bit more before this release, but I got held up by more duties than I had expected, and I figured it’s only for everyone’s benefit to start exercising the code as early as possible. It also gives you something to provide feedback on.

There are two main features I’ll add to this grapher: zooming and panning, and a socket based remote control interface that allows other programs to use it as a display. It might even become a generic grapher that’s attached to the profiler output through some minimal glue code that also uses the remote interface. The same glue code could actually be used to make profiling output available over the network to anyone interested, sharing the protocol of the grapher.

It’s also time to start thinking about the history manager (I’m reluctant to call it a framework, however fancy it would sound ;). At this point, the most important question is which windowing toolkit to use. I have already collected some vague ideas on the project wiki, and I’d need some advice on this.

2009/06/07

More colourful graphs

In order to make it easier to compare cost centres, I added a cumulative view that works the same way as hp2ps. Generating appropriate colours is tricky when you don’t know the data in advance, but the result is at least satisfactory for the time being. Hovering over an area causes the name of the respective cost centre to be displayed in the top left corner, as illustrated by the following screenshots:



The vertical magnification of the graph depends on the last 50 samples, so a huge spike will squeeze the diagram only for a limited amount of time, as you can see on the screenshots too.

Of course, the line graphs are also available, but the two can’t be toggled during a single run yet, so here’s a picture from another one (the lines are broken at places, because the Gnome screenshotter can’t seem to keep up with the animation):



What’s next? The first thing to do is to optimise graphing, because everything is recalculated for each frame in the current snapshot, and it doesn’t take long for the visualiser to stress the CPU more than the program we’re profiling... Pushing everything into display lists should take care of that. The next step is to add the means to stop the animation and look around in the graph, and also the option to display only the last portion while animating the graph.

When the grapher works fine as a standalone application, I’ll start extending it with remote control capabilities as outlined on the project wiki.

2009/06/03

The first graphs

Well, after a few days of other distractions I got back to hp2any and implemented a simplistic graphing utility on top of the core library. Here is the first test run featuring a little game I’m working on in my similarly little spare time:



Oops, of course I forgot about the evils of laziness right away... I removed some debug text, but still calculated the values that would have been displayed, which were therefore never evaluated and held on to various big structures from every past frame. The situation is much better after adding an exclamation mark at the right spot:
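
The classic shape of the fix, sketched on a hypothetical accumulator rather than the actual grapher code: force the running value at each step so the thunks, and everything they capture, can be dropped.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Without the bang, 'acc' builds a chain of (+) thunks that retains
-- every intermediate value; with it, each step is evaluated eagerly
-- and the fold runs in constant space.
sumCosts :: [Int] -> Int
sumCosts = go 0
  where
    go !acc []       = acc
    go !acc (c : cs) = go (acc + c) cs

main :: IO ()
main = print (sumCosts [1 .. 1000000])
```

The same effect can be had with `seq` or `$!` on older compilers; the point is simply that the accumulator is forced before the next step.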



There’s no public code yet, but I hope to get it in a much better shape in a few days.

2009/05/28

Read your profiles!

I uploaded the first version of the hp2any core library that is capable of handling heap profiles both during and after execution. The essence of the interface is this group of functions:

type ProfileReader = IO Profile
type ProfileSink = SinkInput -> IO ()

readProfile :: FilePath -> IO Profile
profile :: CreateProcess -> IO (ProfileReader,ProcessHandle)
profileCallback :: CreateProcess -> ProfileSink -> IO ProcessHandle


The readProfile function can parse an .hp file from an earlier run and build an easy-to-query structure from it. The profile function takes a process to run and returns an IO action that lets the caller look at a snapshot of the profile at any given moment. For those who want to manage profiling data on their own, the profileCallback function is provided, which takes a callback function that’s called whenever some new piece of information is available. Its input has the following form:

data SinkInput = SinkSample Time ProfileSample
               | SinkId CostCentreId CostCentreName
               | SinkStop


There are three possibilities: a snapshot that lists active cost centres with their associated costs, an association between a numeric id (created by the library, used in the samples) and the name of the cost centre, and an indication that no more data should be expected. Note that SinkInput is an instance of Show, therefore print can be used as a callback function.
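
As an illustration, here is a hypothetical callback (not part of hp2any itself) that tallies the total cost per cost centre in an IORef, using the types shown above; the types are repeated so the sketch stands on its own:

```haskell
import Data.IORef (newIORef, modifyIORef', readIORef)
import qualified Data.Map as M

type Time = Double
type Cost = Int
type CostCentreId = Int
type CostCentreName = String
type ProfileSample = [(CostCentreId, Cost)]

data SinkInput = SinkSample Time ProfileSample
               | SinkId CostCentreId CostCentreName
               | SinkStop

-- Returns the sink itself plus an action to read the totals so far.
mkTallySink :: IO (SinkInput -> IO (), IO (M.Map CostCentreId Cost))
mkTallySink = do
  ref <- newIORef M.empty
  let sink (SinkSample _ samp) =
        modifyIORef' ref $ \m ->
          foldr (\(cc, c) -> M.insertWith (+) cc c) m samp
      sink _ = return ()  -- ids and the stop marker are ignored here
  return (sink, readIORef ref)

main :: IO ()
main = do
  (sink, result) <- mkTallySink
  sink (SinkSample 0.0 [(1, 5), (2, 3)])
  sink (SinkSample 0.1 [(1, 2)])
  sink SinkStop
  result >>= print  -- prints fromList [(1,7),(2,3)]
```

A sink like this could be passed straight to profileCallback in place of print.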

For the time being, the following functions are provided to query the Profile structure:

type CostCentreId = Int
type CostCentreName = String
type Time = Double
type Cost = Int
type ProfileSample = [(CostCentreId,Cost)]

costCentreName :: Profile -> CostCentreId -> Maybe CostCentreName
costCentreNames :: Profile -> [(CostCentreId,CostCentreName)]
toList :: Profile -> [(Time,ProfileSample)]
intervalToList :: Profile -> Time -> Time -> [(Time,ProfileSample)]
profileLength :: Profile -> Time


Caveat: it’s all preliminary and makes no attempt at being efficient, but it seems to work fine at first glance. Do play with it and shower me with your comments. :)