2013/09/30

A few reasons why functional reactive programming matters


Inspired by Stephen Blackheath’s recent post on his experience with Sodium, I’d like to add a few more thoughts about the role of FRP in application development. It’s especially interesting to think about what benefits it brings to the table for someone who’s mostly used to working in mainstream languages.

Note that the following discussion is going to be mostly abstract. My goal here is to convey a general intuition about FRP as a paradigm – at least my interpretation of it – rather than to talk about specific systems.

As it turns out, the most important advantages I found while working with Elerea (which happens to be very similar to Sodium in spirit) are independent of the way we formulate FRP, for instance whether the chosen approach is based on arrows or monads. For the sake of this discussion, we can work with a fuzzy definition of FRP: a system for composing time-varying quantities with plain functions.

There are two primary kinds of time-varying quantities, often referred to as events and behaviours. In FRP parlance, an ‘event’ typically refers to a series of occurrences, and that’s how I’m going to use the term as well. A ‘behaviour’ is in essence a function of time (either discrete or continuous), and we can think of it as the whole lifetime of a variable in an imperative language. As a simple example, all the clicks of a mouse button form an event, while the mouse position is typically best modelled as a behaviour. Programs are built by composing complex events and behaviours from simple ones, using functions like the following (examples taken from the Sodium API):

merge :: Event a → Event a → Event a
snapshotWith :: (a → b → c) → Event a → Behaviour b → Event c
filterE :: (a → Bool) → Event a → Event a

The merge function takes two events and returns a new event that contains the union of all their occurrences. The snapshotWith function takes an event e and a behaviour b, and returns an event whose occurrences are simultaneous with e’s, but the values they contain are the result of applying a binary function to the values yielded by e and b at the time of those occurrences. As a last example, filterE derives a new event by throwing away some occurrences of an existing one. With some imagination, it’s hopefully clear that it’s possible to fully implement the logic of an interactive application in this manner.
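
To make the composition style more concrete, here is a small sketch built only from the three signatures above (Click and Rect are made-up types for the sake of the example, not part of the Sodium API):

data Click = Click { clickPos :: (Int, Int) }
data Rect  = Rect  { contains :: (Int, Int) → Bool }

-- Clicks of either mouse button that land inside the current window
-- rectangle, paired with the rectangle in effect at the moment of the click.
clicksInWindow :: Event Click → Event Click → Behaviour Rect → Event (Click, Rect)
clicksInWindow leftClicks rightClicks windowRect = inside
  where
    allClicks = merge leftClicks rightClicks
    withRect  = snapshotWith (,) allClicks windowRect
    inside    = filterE (\(c, r) → contains r (clickPos c)) withRect

Note that nothing is registered or mutated here; the resulting event is just a value defined in terms of its inputs.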

With the basics out of the way, let’s see what solutions FRP offers to certain real-life problems! To make the term ‘solution’ concrete and to provide context for this discussion, let’s assume some code is read and modified by several programmers over a long period of time, and we’d like to prevent certain mistakes by construction, or at least make the right solution the most obvious choice. FRP is by no means a silver bullet, but it can guide the programmer in the right direction where OOP offers no particular help.

Complex event handling


Whenever reactive programming is mentioned, it is usually in the context of event handling in some way. FRP systems offer a principled way to specify complex rules for transforming and combining events. In order to live up to the term ‘functional’, such a system must be referentially transparent. This criterion has a few implications, like being free of glitches (i.e. being transactional: not allowing inconsistent state to be observed). Achieving this requires sophisticated dependency and lifecycle management.

As an abstract example, consider the following event propagation structure: A → C, B → C, (A, C) → D. Every time either A or B fires an event, it causes an update in C. At the same time, events coming from A also trigger a change in D, but we need to sample the state of C to compute this change. However, using the state of C from before A’s effect on it has been applied would lead to erroneous results, since that state is inconsistent.

Let’s now assume that our events are consumed by simple callbacks, and D is not implemented yet. A programmer new to the code is tasked with creating it, and they somehow figure out that they need to listen to A and sample the state of C to be able to perform the necessary update. However, they don’t realise that C’s state is partly affected by A due to some indirection in the code.

What happens? Depending on the order in which the callbacks are invoked, we might or might not get the desired result. If we’re unlucky, D’s behaviour will happen to be correct, and we won’t find out about our reliance on accidental ordering for a long time. Then a year or two down the line something in our system changes, event listeners get registered in a different order, D breaks, and a pleasant debugging session awaits us. By the way, we were not involved in implementing either C or D...

There are two aspects of FRP that can make it easier to get the solution right from the beginning: explicit dependencies and explicit delay policies.

One of the core properties of functional programs is that definitions must explicitly mention all the entities needed to derive each value. In effect, FRP forces us to define time-varying quantities (i.e. the states of the objects in the system) by describing the clear connection between them and all the entities in the world that can possibly affect them. Some of these entities can be extensible slots, but we still have to define an explicit policy to deal with them. Our hypothetical programmer would have been much more likely to find out early on that C was affected by A had the system been formulated in this manner. They would only need to look at the definition, as opposed to scouring the code for all the possible callers, which generally doesn’t occur to people in practice even if their IDE makes it easy.
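
In FRP terms, C’s definition has to name everything that feeds into it, so the dependency is impossible to miss. As a sketch (aEvents, bEvents and the Update type are placeholders for whatever A and B emit):

-- every change of C is triggered by an occurrence of A or B, and this is
-- visible right in the definition rather than buried in the callers
cChanges :: Event Update
cChanges = merge aEvents bEvents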

The other half of the story is delays. A mature FRP system would offer pairs of combinators like the following:

hold :: Event a → Behaviour a
holdDelayed :: Event a → Behaviour a

These combinators both convert events into behaviours by simply storing the last occurrence at any time. The difference between them is a conceptually infinitesimal delay that affects sampling: when taking snapshots of these behaviours at the exact moments of the original occurrences, the one constructed with hold already yields the new value, while the one constructed with holdDelayed still exposes the old value. The merit of FRP is making this logic explicit and obvious, and especially making the programmer think about which one they need in a particular case.
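
Returning to the A/B/C/D example, the two variants let us state D’s intended semantics directly (again only a sketch; combineAD and the value types are placeholders):

c  = hold        cChanges                -- already includes A’s latest effect
c' = holdDelayed cChanges                -- still shows C’s state from just before

d  = snapshotWith combineAD aEvents c    -- D sees the updated C
d' = snapshotWith combineAD aEvents c'   -- D sees the pre-update C

Whichever variant we pick, the choice is written down in the definition of D instead of being an accident of callback registration order.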

Avoiding the observation of inconsistent state


While transactionality was already mentioned above, I’d like to bring it up again as a separate point.

When using some FRP system, we’re basically describing a multi-rate dynamic data-flow network. Any incoming event causes a series of changes that propagates through the network, and the system makes sure that we never observe the state of the system before this propagation completes. For instance, Elerea achieves this by double buffering every state variable and updating the whole network in two phases. The nice part is that this consistency is enforced by construction at the language level as opposed to requiring explicit book-keeping.
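
To give a rough idea of the mechanism (a toy illustration of two-phase updating in general, not Elerea’s actual code), each state variable keeps a front and a back copy: new values are computed from the consistent front copies first, and only then are all of them committed at once:

import Data.IORef

-- one double-buffered state variable of the network
data Node a = Node { current :: IORef a, pending :: IORef a }

-- phase 1: compute the new value from the old, still consistent one
computePhase :: Node a → (a → a) → IO ()
computePhase node step = readIORef (current node) >>= writeIORef (pending node) . step

-- phase 2: once every node has its new value, make them all visible together
commitPhase :: Node a → IO ()
commitPhase node = readIORef (pending node) >>= writeIORef (current node)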

Conflict handling by construction


When an object exposes a way to manipulate its state, it opens a potential source of conflicts. As the code evolves, more and more entities in the system might want to exercise control over this object. In a traditional OOP setting we need extra effort to avoid situations where separate parts of the system step on each other’s toes: best practices, design patterns, frameworks etc. In contrast, FRP forces us to make a decision about resolving conflicts at a much more elementary level, simply because we have to think about all the possible external effects on an object when describing its state transitions. There’s no other way to implement its logic!

As a real-life example, let’s consider the GUI of a complex application. Assuming that we keep adding features to this program, we regularly get user feedback about certain flows being broken, e.g. an unforeseen combination of windows somehow interfering with each other’s functionality. It is tempting to fix such issues on an individual basis by trying to detect the offending situation and, say, controlling the visibility of some visual element in that special case. OOP by itself offers no safeguards to stop us from doing the wrong thing and adding the extra rule without considering the others that try to manipulate the same element. FRP, on the other hand, makes us take a good look at all the existing effects and integrate a new rule in a way that plays nice with everyone else.

In short, FRP forces us to assign a recognisable identity to every writer of a given channel (state-changing method), and to define explicit rules to resolve their requests to affect some state.
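
As a sketch of what this means in the GUI example (showRequests and hideRequests are hypothetical events fired by various parts of the UI, modalErrorShown is a Behaviour Bool, and the usual Functor instance for events is assumed), the visibility of a dialog becomes a single definition that names every writer along with the policy for resolving their requests:

dialogVisible :: Behaviour Bool
dialogVisible = hold (merge shows hides)
  where
    shows = True  <$ showRequests
    -- the policy is local and explicit: hide requests arriving while a modal
    -- error is on screen are simply dropped
    hides = False <$ filterE not (snapshotWith (\_ modalUp → modalUp)
                                               hideRequests modalErrorShown)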

Encapsulating state (transformation) at a small scale in a reusable manner


One of the strengths of functional programming is its ability to create small-scale abstractions at virtually no cost. For instance, single loops can be factored into general recursion patterns and specific logic in the form of a non-recursive state transforming function. FRP brings this capability into the world of mutable objects, making it possible to define transformers like rising edge detectors in a light-weight way without introducing new state variables elsewhere.

I have a quick and dirty example from our LambdaCube samples. In order to produce the videos on the blog, we added a way to save screenshots of the running application. Depending on the use case, sometimes it was convenient to have a button that I’d have to keep pressed to save the images, while at other times a toggle logic was a better fit. The first case is handled by the following code:

capture = capturePress

The capturePress signal reflects the state of the chosen key, while capture specifies whether capturing is taking place at the moment. Well, this was trivial. To make the toggle logic work, we need two state variables: one to detect edges (when the key goes from the non-pressed to the pressed state), and the other to remember whether we’re saving screenshots or not. In the final code, each of these states is hidden inside one of two reusable signal transformers:

capture <- toggle =<< risingEdge =<< capturePress

In this case we need monadic bind as opposed to plain function application because Elerea requires all stateful entities to be created in a monadic context that manages their lifecycle. It’s nice that the change is local; we don’t need to introduce a new variable or a new object type. The desired state transformation can be expressed by changing a single line and using generic building blocks.
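
For reference, these two transformers could look roughly like this in an Elerea-style setting (a sketch assuming the delay and transfer combinators of Elerea’s simple interface; the actual definitions may differ):

-- True exactly when the input goes from False to True
risingEdge :: Signal Bool → SignalGen (Signal Bool)
risingEdge s = do
  prev <- delay False s                    -- the input one step earlier
  return ((&&) <$> s <*> (not <$> prev))

-- flip an internal flag on every True sample of the input
toggle :: Signal Bool → SignalGen (Signal Bool)
toggle s = transfer False (\press state → if press then not state else state) s

The point is not the exact code, but the fact that all the state each transformer needs lives entirely inside it.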

The bottom line


I believe FRP has high potential in application development, but we need more real-life data to get a clear picture of its advantages and downsides. FRP has some desirable properties that should make it easier for programmers to do the right thing when attacking a problem, but we don’t know yet how it works at scale. There’s a trade-off between putting too many constraints on the way programs can be structured and letting programmers shoot themselves in the foot (often in subtle ways that might take a long time to notice), and the stricter end of the spectrum is of course the less popular one. In the end, I’m not expecting FRP to get big – not in its more rigorous forms at least – but I still recommend studying it for the valuable lessons that can make our software better, no matter what approach we use.