`Traversable` is a fun type class. It lies at a crossroads where many basic Haskell concepts meet, and it can be presented in multiple ways that provide complementary intuitions. In this post, `Traversable` will be described from a slightly unusual point of view, or at least one that is not put into the foreground all that often. We will suspend for a moment the picture of walking across a container while using an effectful function, and instead start by considering what can be done with effectful functions.

Let’s begin with a familiar sight: functions of type `a -> F b`.

There are quite a few overlapping ways of talking about functions with such a type. If `F` is a `Functor`, we can say the function produces a functorial context; if it is an `Applicative`, we (also) say it produces an effect; and if it is a `Monad`, we (also) call it a Kleisli arrow. Kleisli arrows are the functions we use with `(>>=)`. Kleisli arrows for a specific `Monad` form a category, with `return` as identity and the fish operator, `(<=<)`, as composition. If we pick `join` as the fundamental `Monad` operation, `(<=<)` can be defined in terms of it as:
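
A sketch of that definition (it agrees with the `(<=<)` from `Control.Monad` once `(>>=)` is rewritten with `join` and `fmap`):

```haskell
import Control.Monad (join)

(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
g <=< f = join . fmap g . f
```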

The category laws, then, become an alternative presentation of the monad laws:
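
Explicitly, with `f`, `g`, and `h` standing for arbitrary Kleisli arrows:

```
return <=< f = f                   -- left identity
f <=< return = f                   -- right identity
(h <=< g) <=< f = h <=< (g <=< f)  -- associativity
```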

All of that is very well-known. Something less often noted, though, is that there is an interesting category for `a -> F b` functions even if `F` is not a `Monad`. Getting to it is amusingly easy: we just have to take the Kleisli category operators and erase the monad-specific parts from their definitions. In the case of `(<=<)`, that means removing the `join` (and, for type bookkeeping purposes, slipping in a `Compose` in its place):

```
(<%<) :: (Functor f, Functor g) =>
         (b -> g c) -> (a -> f b) -> (a -> Compose f g c)
g <%< f = Compose . fmap g . f
```

While `(<=<)` creates two monadic layers and merges them, `(<%<)` creates two functorial layers and leaves both in place. Note that doing away with `join` means the `Functor`s introduced by the functions being composed can differ, and so the category we are setting up has *all* functions that fit `Functor f => a -> f b` as arrows. That is unlike what we have with `(<=<)` and the corresponding Kleisli categories, which only concern a single specific monad.

As for `return`, not relying on `Monad` means we need a different identity. Given the freedom to pick any `Functor` mentioned just above, it makes perfect sense to replace bringing a value into a `Monad` in a boring way by bringing a value into the boring `Functor` *par excellence*, `Identity`:
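
For concreteness, here is a self-contained rendering of the identity arrow (the `Identity` newtype matches the one in `Data.Functor.Identity`; `idArrow` is a made-up name used only for illustration):

```haskell
newtype Identity a = Identity { runIdentity :: a }

-- The identity arrow of our category: bring a value into the
-- boring functor without doing anything else to it.
idArrow :: a -> Identity a
idArrow = Identity
```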

With `(<%<)` as composition and `Identity` as identity, we can state the following category laws:
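
A sketch of the laws, writing `~` for equality up to the `Identity` and `Compose` bookkeeping discussed next:

```
Identity <%< f ~ f                 -- left identity
f <%< Identity ~ f                 -- right identity
(h <%< g) <%< f ~ h <%< (g <%< f)  -- associativity
```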

Why didn’t I write them as equalities? Once the definition of `(<%<)` is substituted, it becomes clear that they do not hold literally as equalities: the left-hand sides of the identity laws will have a stray `Identity`, and the uses of `Compose` on either side of the associativity law will be associated differently. Since `Identity` and `Compose` are essentially bookkeeping boilerplate, however, it would be entirely reasonable to ignore such differences. If we do that, it becomes clear that the laws do hold. All in all, we have a category, even though we can’t go all the way and shape it into a `Category` instance, not only due to the trivialities we chose to overlook, but also because of how each `a -> F b` function introduces a functorial layer `F` in a way that is not reflected in the target object `b`.

The first thing to do after figuring out we have a category in our hands is looking for functors involving it.^{1} One of the simplest paths towards one is considering a way to, given some `Functor` `T`, change the source and target objects in an `a -> F b` function to `T a` and `T b` (that is precisely what `fmap` does with regular functions). This would give an endofunctor whose arrow mapping has a signature shaped like `(a -> F b) -> (T a -> F (T b))`.

This signature shape, however, should ring a bell:

```
class (Functor t, Foldable t) => Traversable t where
    traverse :: Applicative f => (a -> f b) -> t a -> f (t b)
    -- etc.
```

If `traverse` were the arrow mapping of our endofunctor, the relevant functor laws would be:
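
That is, stated with the composition operator we defined above (a sketch):

```
traverse Identity = Identity
traverse (g <%< f) = traverse g <%< traverse f
```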

Substituting the definition of `(<%<)` reveals these are the identity and composition laws of `Traversable`:

```
traverse Identity = Identity
traverse (Compose . fmap g . f) = Compose . fmap (traverse g) . traverse f
```

There it is: a `Traversable` instance is an endofunctor for a category made of arbitrary context-producing functions.^{2}

Is it really, though? You may have noticed I have glossed over something quite glaring: if `(<%<)` only involved `Functor` constraints, where does the `Applicative` in `traverse` come from?

Let’s pretend we have just invented the `Traversable` class by building it around the aforementioned endofunctor. At this point, there is no reason for using anything more restrictive than `Functor` in the signature of its arrow mapping:
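
The tentative signature, with the weaker constraint:

```
traverse :: Functor f => (a -> f b) -> t a -> f (t b)
```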

The natural thing to do now is trying to write `traverse` for various choices of `t`. Let’s try it for one of the simplest `Functor`s around: the pair functor, `(,) e` – values with something extra attached:

```
instance Traversable ((,) e) where
    -- traverse :: Functor f => (a -> f b) -> (e, a) -> f (e, b)
    traverse f (e, x) = ((,) e) <$> f x
```

Simple enough: apply the function to the contained value, and then shift the extra stuff into the functorial context with `fmap`. The resulting `traverse` follows the functor laws just fine.

If we try to do it for different functors, though, we quickly run into trouble. `Maybe` looks simple enough…

```
instance Traversable Maybe where
    -- traverse :: Functor f => (a -> f b) -> Maybe a -> f (Maybe b)
    traverse f (Just x) = Just <$> f x
    traverse f Nothing = -- ex nihilo
```

… but the `Nothing` case stumps us: there is no value that can be supplied to `f`, which means the functorial context would have to be created out of nothing.

For another example, consider what we might do with a homogeneous pair type (or, if you will, a vector of length two):

```
data Duo a = Duo a a

instance Functor Duo where
    fmap f (Duo x y) = Duo (f x) (f y)

instance Traversable Duo where
    -- traverse :: Functor f => (a -> f b) -> Duo a -> f (Duo b)
    traverse f (Duo x y) = -- dilemma
```

Here, we seemingly have to choose between applying `f` to `x` or to `y`, and then using `fmap (\z -> Duo z z)` on the result. No matter the choice, though, discarding one of the values means the functor laws will be broken. A lawful implementation would require somehow combining the functorial values `f x` and `f y`.

As luck would have it, though, there is a type class which provides ways both to create a functorial context out of nothing and to combine functorial values: `Applicative`. `pure` solves the first problem; `(<*>)`, the second:

```
instance Traversable Maybe where
    -- traverse :: Applicative f => (a -> f b) -> Maybe a -> f (Maybe b)
    traverse f (Just x) = Just <$> f x
    traverse f Nothing = pure Nothing

instance Traversable Duo where
    -- traverse :: Applicative f => (a -> f b) -> Duo a -> f (Duo b)
    traverse f (Duo x y) = Duo <$> f x <*> f y
```

Shifting to the terminology of containers for a moment, we can describe the matter by saying that the version of `traverse` with the `Functor` constraint can only handle containers that hold exactly one value. Once the constraint is strengthened to `Applicative`, however, we have the means to deal with containers that may hold zero or many values. This is a very general solution: there are instances of `Traversable` for the `Identity`, `Const`, `Sum`, and `Product` functors, which suffice to encode any algebraic data type.^{3} That explains why the `DeriveTraversable` GHC extension exists. (Note, though, that `Traversable` instances in general aren’t unique.)

It must be noted that our reconstruction does not reflect how `Traversable` was discovered, as the idea of using it to walk across containers holding an arbitrary number of values was there from the start. That being so, `Applicative` plays an essential role in the usual presentations of `Traversable`. To illustrate that, I will now paraphrase Definition 3.3 in Jaskelioff and Rypacek’s *An Investigation of the Laws of Traversals*. It is formulated not in terms of `traverse`, but of `sequenceA :: Applicative f => t (f a) -> f (t a)`:

`sequenceA` is characterised as a natural transformation in the category of applicative functors which “respects the monoidal structure of applicative functor composition”. It is worth taking a few moments to unpack that:

- The category of applicative functors has what the `Data.Traversable` documentation calls “applicative transformations” as arrows – functions of general type `(Applicative f, Applicative g) => f a -> g a` which preserve `pure` and `(<*>)`.
- `sequenceA` is a natural transformation in the aforementioned category of applicative functors. The two functors it maps between amount to the two ways of composing an applicative functor with the relevant traversable functor. The naturality law of `Traversable`, `t . sequenceA = sequenceA . fmap t` for any applicative transformation `t`, captures that fact (which, thanks to parametricity, is a given in Haskell).
- Applicative functors form a monoid, with `Identity` as unit and functor composition as multiplication. `sequenceA` preserves these monoidal operations, and the identity and composition laws of `Traversable` express that:
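
In terms of `sequenceA`, those two laws can be sketched as:

```
sequenceA . fmap Identity = Identity
sequenceA . fmap Compose = Compose . fmap sequenceA . sequenceA
```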

All of that seems only accidentally related to what we have done up to this point. However, if `sequenceA` is taken as the starting point, `traverse` can be defined in terms of it as `traverse f = sequenceA . fmap f`.

Crucially, the opposite path is also possible. It follows from parametricity^{4} that `traverse f = traverse id . fmap f`, which allows us to start from `traverse`, define `sequenceA = traverse id`, and continue as before. At this point, our narrative merges with the traditional account of `Traversable`.

In the previous section, we saw how using `Applicative` rather than `Functor` in the type of `traverse` made it possible to handle containers which don’t necessarily hold just one value. It is not a coincidence that, in *lens*, this is precisely the difference between `Traversal` and `Lens`:

```
type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
```

A `Lens` targets exactly one value. A `Traversal` might reach zero, one or many targets, which requires a strengthening of the constraint. Van Laarhoven (i.e. *lens*-style) `Traversal`s and `Lens`es can be seen as a straightforward generalisation of the `traverse`-as-arrow-mapping view we have been discussing here, one in which the, so to say, functoriality of the container isn’t necessarily reflected at type level in a direct way.

Early on, we noted that `(<%<)` gave us a category that cannot be expressed as a Haskell `Category` because its composition is too quirky. We have a general-purpose class that is often a good fit for things that look like functions, arrows and/or `Category` instances but don’t compose in conventional ways: `Profunctor`. And sure enough, *profunctors* defines a profunctor called `Star`…

```
-- | Lift a 'Functor' into a 'Profunctor' (forwards).
newtype Star f d c = Star { runStar :: d -> f c }
```

… which corresponds to the arrows of the category we presented in the first section. It should come as no surprise that `Star` is an instance of a class called `Traversing`…

```
-- Abridged definition.
class (Choice p, Strong p) => Traversing p where
    traverse' :: Traversable f => p a b -> p (f a) (f b)
    wander :: (forall f. Applicative f => (a -> f b) -> s -> f t)
           -> p a b -> p s t

instance Applicative m => Traversing (Star m) where
    traverse' (Star m) = Star (traverse m)
    wander f (Star amb) = Star (f amb)
```

… which is a profunctor-oriented generalisation of `Traversable`.

Amusingly, it turns out there is a baroque way of expressing `(<%<)` composition with the *profunctors* vocabulary. `Data.Profunctor.Composition` gives us a notion of profunctor composition:
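
A standalone rendering of the `Procompose` type, written as a GADT so the snippet compiles on its own (it mirrors the shape of the one in *profunctors*):

```haskell
{-# LANGUAGE GADTs #-}

-- Pair a p-arrow and a q-arrow whose endpoints meet at some
-- existentially hidden middle type x.
data Procompose p q d c where
    Procompose :: p x c -> q d x -> Procompose p q d c
```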

`Procompose` simply pairs two profunctorial values with matching extremities. That is unlike `Category` composition, `(.) :: Category cat => cat b c -> cat a b -> cat a c`, which welds two arrows^{5} into one. The difference is rather like that between combining functorial layers at type level with `Compose` and fusing monadic layers with `join`.^{6}

Among a handful of other interesting things, `Data.Profunctor.Composition` offers a *lens*-style isomorphism, `stars`, which gives us a rather lyrical encoding of `(<%<)`:

```
GHCi> import Data.Profunctor
GHCi> import Data.Profunctor.Composition
GHCi> import Data.Profunctor.Traversing
GHCi> import Data.Functor.Compose
GHCi> import Control.Lens
GHCi> f = Star $ \x -> print x *> pure x
GHCi> g = Star $ \x -> [0..x]
GHCi> getCompose $ runStar (traverse' (view stars (g `Procompose` f))) [0..2]
0
1
2
[[0,0,0],[0,0,1],[0,0,2],[0,1,0],[0,1,1],[0,1,2]]
```

If you feel like playing with that, note that `Data.Profunctor.Sieve` offers a more compact (though prosaic) spelling:

```
GHCi> import Data.Profunctor.Sieve
GHCi> :t sieve
sieve :: Sieve p f => p a b -> a -> f b
GHCi> getCompose $ traverse (sieve (g `Procompose` f)) [0..2]
0
1
2
[[0,0,0],[0,0,1],[0,0,2],[0,1,0],[0,1,1],[0,1,2]]
```

- The already mentioned *An Investigation of the Laws of Traversals*, by Mauro Jaskelioff and Ondrej Rypacek, is a fine entry point to the ways of formulating `Traversable`. It also touches upon some important matters I didn’t explore here, such as how the notion of container `Traversable` mobilises can be made precise, or the implications of the `Traversable` laws. I plan to discuss some aspects of these issues in a follow-up post.
- Will Fancher’s *Profunctors, Arrows, & Static Analysis* is a good applied introduction to profunctors. In its final sections, it demonstrates some use cases for the `Traversing` class mentioned here.
- The explanation of profunctor composition in this post is intentionally cursory. If you want to dig deeper, Dan Piponi’s *Profunctors in Haskell* can be a starting point. (N.B.: Wherever you see “cofunctor” there, read “contravariant functor” instead.) Another option is going to Bartosz Milewski’s blog and searching for “profunctor” (most of the results will be relevant).

1. For why that is a good idea, see Gabriel Gonzalez’s *The functor design pattern*.↩
2. A more proper derivation for the results in this section can be found in this Stack Overflow answer, which I didn’t transcribe here to avoid boring you.↩
3. Suffice, that is, with the help of the trivial data types, `()` (unit) and `Void`. As an arbitrary example, `Maybe` can be encoded using this functor toolkit as `Sum (Const ()) Identity`.↩
4. The property is an immediate consequence of the free theorem for `traverse`. Cf. this Stack Overflow answer by Rein Heinrichs.↩
5. I mean “arrows” in the general sense, and not necessarily `Arrow`s as in `Control.Arrow`!↩
6. This is not merely a loose analogy. For details, see Bartosz Milewski’s *Monoids on Steroids*, and in particular its section about `Arrow`s.↩

Post licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The primeval example of a fold in Haskell is the right fold of a list, `foldr :: (a -> b -> b) -> b -> [a] -> b`.

One way of picturing what the first two arguments of `foldr` are for is seeing them as replacements for the list constructors: the `b` argument is an initial value corresponding to the empty list, while the binary function incorporates each element prepended through `(:)` into the result of the fold.

```
data [a] = [] | a : [a]

foldr (+) 0 [ 1 ,  2 ,  3  ]
foldr (+) 0 ( 1 : (2 : (3 : [])) )
            ( 1 + (2 + (3 + 0 )) )
```

By applying this strategy to other data structures, we can get analogous folds for them.

```
-- This is foldr; I have flipped the arguments for cosmetic reasons.
data [a] = [] | (:) a [a]
foldList :: b -> (a -> b -> b) -> [a] -> b

-- Does this one look familiar?
data Maybe a = Nothing | Just a
foldMaybe :: b -> (a -> b) -> Maybe a -> b

-- This is not the definition in Data.List.NonEmpty; the differences
-- between them, however, are superficial.
data NEList a = NEList a (Maybe (NEList a))
foldNEList :: (a -> Maybe b -> b) -> NEList a -> b

-- A binary tree like the one in Diagrams.TwoD.Layout.Tree (and in
-- many other places).
data BTree a = Empty | BNode a (BTree a) (BTree a)
foldBTree :: b -> (a -> b -> b -> b) -> BTree a -> b
```

It would make sense to capture this pattern into an abstraction. At first glance, however, it is not obvious how to do so. Though we know intuitively what the folds above have in common, their type signatures have lots of superficial differences between them. Our immediate goal, then, will be simplifying things by getting rid of these differences.^{1}

We will sketch the simplification using the tangible and familiar example of lists. Let’s return to the type of `foldr`. With the cosmetic flip I had applied previously, it becomes `foldList :: b -> (a -> b -> b) -> [a] -> b`.

The annoying irregularities among the types of the folds in the previous section had to do with the number of arguments other than the data structure (one per constructor) and the types of said arguments (dependent on the shape of each constructor). Though we cannot entirely suppress these differences, we have a few tricks that make it possible to disguise them rather well. The number of extra arguments, for instance, can always be reduced to just one by pairing them up: `foldList :: (b, a -> b -> b) -> [a] -> b`.

The first argument is now a pair. We continue by making its two halves more like each other by converting them into unary functions: the first component acquires a dummy `()` argument, while the second one is uncurried, giving `foldList :: (() -> b, (a, b) -> b) -> [a] -> b`.

We now have a pair of unary functions with result type `b`. A pair of functions with the same result type, however, is equivalent to a single function from `Either` one of the argument types, which gives us `foldList :: (Either () (a, b) -> b) -> [a] -> b`. (If you are sceptical about that, you might want to work out the isomorphism – that is, the pair of conversion functions – that witnesses this fact.)
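
For the sceptical, a sketch of that isomorphism (the helper names are made up for illustration):

```haskell
-- A pair of functions into e is the same thing as one function
-- out of the Either of their argument types.
pairToEither :: (c -> e, d -> e) -> (Either c d -> e)
pairToEither (f, g) = either f g

eitherToPair :: (Either c d -> e) -> (c -> e, d -> e)
eitherToPair h = (h . Left, h . Right)
```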

At this point, the only extra argument of the fold is a unary function with result type `b`. We have condensed the peculiarities of the original arguments at a single place (the argument of said function), which makes the overall shape of the signature a lot simpler. Since it can be awkward to work with anonymous nestings of `Either` and pairs, we will replace `Either () (a, b)` with an equivalent type equipped with suggestive names:
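
That type is `ListF`, with `Nil` playing the role of `Left ()` and `Cons` that of `Right (a, b)`:

```haskell
data ListF a b = Nil | Cons a b
```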

That leaves us with `foldList :: (ListF a b -> b) -> [a] -> b`.

The most important fact about `ListF` is that it mirrors the shape of the list type except for one crucial difference: *it is not recursive*. An `[a]` value built with `(:)` has another `[a]` in itself, but a `ListF a b` built with `Cons` does not contain another `ListF a b`. To put it in another way, `ListF` is the outcome of taking away the recursive nesting in the list data type and filling the resulting hole with a placeholder type, the `b` in our signatures, that corresponds to the result of the fold. This strategy can be used to obtain a `ListF` analogue for any other data structure. You might, for instance, try it with the `BTree a` type shown in the first section.

We have just learned that the list `foldr` can be expressed using this signature: `foldList :: (ListF a b -> b) -> [a] -> b`.

We might figure out a `foldList` implementation with this signature in a mechanical way, by throwing all of the tricks from the previous section at `Data.List.foldr` until we squeeze out something with the right type. It is far more illuminating, however, to start from scratch. If we go down that route, the first question that arises is how to apply a `ListF a b -> b` function to an `[a]`. It is clear that the list must somehow be converted to a `ListF a b`, so that the function can be applied to it.

```
foldList :: (ListF a b -> b) -> [a] -> b
foldList f = f . something
-- foldList f xs = f (something xs)
-- something :: [a] -> ListF a b
```

We can get part of the way there by recalling how `ListF` mirrors the shape of the list type. That being so, going from `[a]` to `ListF a [a]` is just a question of matching the corresponding constructors.^{2}

```
project :: [a] -> ListF a [a]
project = \case
    [] -> Nil
    x:xs -> Cons x xs

foldList :: (ListF a b -> b) -> [a] -> b
foldList f = f . something . project
-- something :: ListF a [a] -> ListF a b
```

`project` witnesses a simple fact: `ListF a b` is just `[a]` with a `b` placeholder in the tail position, where there would otherwise be a nested `[a]`; if we plug the placeholder with `[a]` itself, we get something equivalent to the `[a]` list type we began with.

We now need to go from `ListF a [a]` to `ListF a b`; in other words, we have to change the `[a]` inside `ListF` into a `b`. And sure enough, we do have a function from `[a]` to `b`… the fold itself! To conveniently reach inside `ListF a b`, we set up a `Functor` instance:

```
instance Functor (ListF a) where
    fmap f = \case
        Nil -> Nil
        Cons x y -> Cons x (f y)

foldList :: (ListF a b -> b) -> [a] -> b
foldList f = f . fmap (foldList f) . project
```

And there it is, the list fold. First, `project` exposes the list (or, more precisely, its first constructor) to our machinery; then, the tail of the list (if there is one – what happens if there isn’t?) is recursively folded through the `Functor` instance of `ListF`; finally, the overall result is obtained by applying `f` to the resulting `ListF a b`.

```
-- A simple demonstration of foldList in action.
f :: Num a => ListF a a -> a
f = \case { Nil -> 0; Cons x y -> x + y }
foldList f [1, 2, 3]
-- Let's try and evaluate this by hand.
foldList f (1 : 2 : 3 : [])
f . fmap (foldList f) . project $ (1 : 2 : 3 : [])
f . fmap (foldList f) $ Cons 1 (2 : 3 : [])
f $ Cons 1 (foldList f (2 : 3 : []))
f $ Cons 1 (f . fmap (foldList f) $ project (2 : 3 : []))
f $ Cons 1 (f . fmap (foldList f) $ Cons 2 (3 : []))
f $ Cons 1 (f $ Cons 2 (foldList f (3 : [])))
f $ Cons 1 (f $ Cons 2 (f . fmap (foldList f) . project $ (3 : [])))
f $ Cons 1 (f $ Cons 2 (f . fmap (foldList f) $ Cons 3 []))
f $ Cons 1 (f $ Cons 2 (f $ Cons 3 (foldList f [])))
f $ Cons 1 (f $ Cons 2 (f $ Cons 3 (f . fmap (foldList f) . project $ [])))
f $ Cons 1 (f $ Cons 2 (f $ Cons 3 (f . fmap (foldList f) $ Nil)))
f $ Cons 1 (f $ Cons 2 (f $ Cons 3 (f $ Nil)))
f $ Cons 1 (f $ Cons 2 (f $ Cons 3 0))
f $ Cons 1 (f $ Cons 2 3)
f $ Cons 1 5
6
```

One interesting thing about our definition of `foldList` is that all the list-specific details are tucked within the implementations of `project`, `fmap` for `ListF`, and `f` (whatever it is). That being so, if we look only at the implementation and not at the signature, we find no outward signs of anything related to lists. No outward signs, that is, except for the name we gave the function. That’s easy enough to solve, though: it is just a question of inventing a new one:
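
Renaming `foldList` (the definition is unchanged; the signature, for now, is still the list-specific one; the supporting definitions are repeated so the snippet stands on its own):

```haskell
{-# LANGUAGE LambdaCase #-}

data ListF a b = Nil | Cons a b

instance Functor (ListF a) where
    fmap f = \case
        Nil      -> Nil
        Cons x y -> Cons x (f y)

project :: [a] -> ListF a [a]
project = \case
    []   -> Nil
    x:xs -> Cons x xs

cata :: (ListF a b -> b) -> [a] -> b
cata f = f . fmap (cata f) . project
```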

`cata` is short for *catamorphism*, the fancy name given to ordinary folds in the relevant theory. There is a function called `cata` in *recursion-schemes*. Its implementation, `cata f = c where c = f . fmap c . project`, is the same as ours, almost down to the last character. Its type signature, however, is much more general: `cata :: Recursive t => (Base t b -> b) -> t -> b`.

It involves, in no particular order:

- `b`, the type of the result of the fold;
- `t`, the type of the data structure being folded. In our example, `t` would be `[a]`; or, as GHC would put it, `t ~ [a]`;
- `Base`, a type family that generalises what we did with `[a]` and `ListF` by assigning *base functors* to data types. We can read `Base t` as “the base functor of `t`”; in our example, we have `Base [a] ~ ListF a`;
- `Recursive`, a type class whose minimal definition consists of `project`, with the type of `project` now being `t -> Base t t`.

The base functor is supposed to be a `Functor`, so that we can use `fmap` on it. That is enforced through a `Functor (Base t)` constraint in the definition of the `Recursive` class. Note, however, that there is no such restriction on `t` itself: it doesn’t need to be a polymorphic type, or even to involve a type constructor.

In summary, once we managed to concentrate the surface complexity in the signature of `foldr` at a single place, the `ListF a b -> b` function, an opportunity to generalise it revealed itself. Incidentally, that function, and more generally any `Base t b -> b` function that can be given to `cata`, is referred to as an *algebra*. In this context, the term “algebra” is meant in a precise technical sense; still, we can motivate it with a legitimate recourse to intuition. In basic school algebra, we use certain rules to get simpler expressions out of more complicated ones, such as *ax* + *bx* = (*a* + *b*)*x*. Similarly, a `Base t b -> b` algebra boils down to a set of rules that tell us what to do to get a `b` result out of the `Base t b` we are given at each step of the fold.

The `cata` function we ended up with in the previous section, `cata f = f . fmap (cata f) . project`, is perfectly good for practical purposes: it allows us to fold anything that we can give a `Base` functor and a corresponding `project`. Not only that, the implementation of `cata` is very elegant. And yet, a second look at its signature suggests that there might be an even simpler way of expressing `cata`. The signature uses both `t` and `Base t b`. As we have seen in the `ListF` example, these two types are very similar (their shapes match except for recursiveness), and so using both in the same signature amounts to encoding the same information in two different ways – perhaps unnecessarily so.

In the implementation of `cata`, it is specifically `project` that links `t` and `Base t b`, as it translates the constructors from one type to the other.

Now, let’s look at what happens once we repeatedly expand the definition of `cata`:

```
c = cata f
p = project
c
f . fmap c . p
f . fmap (f . fmap c . p) . p
f . fmap (f . fmap (f . fmap c . p) . p) . p
f . fmap ( . . . f . fmap c . p . . . ) . p
```

This continues indefinitely. The fold terminates when, at some point, `fmap c` does nothing (in the case of `ListF`, that happens when we get to a `Nil`). Note, however, that even at that point we can carry on expanding the definition, merrily introducing do-nothing operations for as long as we want.

At the right side of the expanded expression, we have a chain of increasingly deep `fmap`-ped applications of `project`:^{3}

If we could factor that out into a separate function, it would change a `t` data structure into something equivalent to it, but built with the `Base t` constructors:

```
GHCi> :{
GHCi| fmap (fmap (fmap project))
GHCi| . fmap (fmap project) . fmap project . project
GHCi| $ 1 : 2 : 3 : []
GHCi| :}
Cons 1 (Cons 2 (Cons 3 Nil))
```

We would then be able to regard this conversion as a preliminary, relatively uninteresting step that precedes the application of a slimmed-down `cata` which uses neither `project` nor the `t` type.^{4}

Defining `omniProject` seems simple once we notice the self-similarity in the chain of `project`s:

```
omniProject = . . . fmap (fmap project) . fmap project . project
omniProject = fmap (fmap ( . . . project) . project) . project
omniProject = fmap omniProject . project
```

Guess what happens next:

```
GHCi> omniProject = fmap omniProject . project

<interactive>:502:16: error:
    • Occurs check: cannot construct the infinite type: b ~ Base t b
      Expected type: t -> b
        Actual type: t -> Base t b
    • In the expression: fmap omniProject . project
      In an equation for ‘omniProject’:
          omniProject = fmap omniProject . project
    • Relevant bindings include
        omniProject :: t -> b (bound at <interactive>:502:1)
```

GHCi complains about an “infinite type”, and that is entirely appropriate. Every `fmap`-ped `project` changes the type of the result by introducing a new layer of `Base t`. That being so, the type of `omniProject` would be `t -> Base t (Base t (Base t ...))`, nested without end, which is clearly a problem, as we don’t have a type that encodes an infinite nesting of type constructors. There is a simple way of solving that, though: we *make up* the type we want!
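
The standard fixed-point newtype (this matches the `Fix` used throughout the recursion-schemes ecosystem):

```haskell
newtype Fix f = Fix { unfix :: f (Fix f) }
```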

If we read `Fix f` as “infinite nesting of `f`”, the right-hand side of the `newtype` definition just above reads “an infinite nesting of `f` contains an `f` of infinite nestings of `f`”, which is an entirely reasonable encoding of such a thing.^{5}

All we need to make our tentative definition of `omniProject` legal Haskell is wrapping the whole thing in a `Fix`. The recursive `fmap`-ping will ensure `Fix` is applied at all levels:
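
A self-contained sketch of the definition, with minimal stand-ins for the *recursion-schemes* machinery (the real `Recursive` class also carries a `Functor (Base t)` superclass, folded here into the signature instead):

```haskell
{-# LANGUAGE TypeFamilies, FlexibleContexts, FlexibleInstances #-}

newtype Fix f = Fix { unfix :: f (Fix f) }

-- Minimal stand-ins for Base and Recursive.
type family Base t :: * -> *

class Recursive t where
    project :: t -> Base t t

omniProject :: (Recursive t, Functor (Base t)) => t -> Fix (Base t)
omniProject = Fix . fmap omniProject . project

-- The list instance, as in the article:
data ListF a b = Nil | Cons a b

instance Functor (ListF a) where
    fmap _ Nil        = Nil
    fmap f (Cons x y) = Cons x (f y)

type instance Base [a] = ListF a

instance Recursive [a] where
    project []     = Nil
    project (x:xs) = Cons x xs
```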

Another glance at the definition of `cata` shows that `omniProject` is just `cata` using `Fix` as the algebra: `omniProject = cata Fix`.

That being so, `cata Fix` will change anything with a `Recursive` instance into its `Fix`-wearing form:

```
GHCi> cata Fix [0..9]
Fix (Cons 0 (Fix (Cons 1 (Fix (Cons 2 (Fix (Cons 3 (Fix (Cons 4 (
Fix (Cons 5 (Fix (Cons 6 (Fix (Cons 7 (Fix (Cons 8 (Fix (Cons 9 (
Fix Nil))))))))))))))))))))
```

Defining a `Fix`-style structure from scratch, without relying on a `Recursive` instance, is just a question of introducing `Fix` in the appropriate places. For extra convenience, you might want to define “smart constructors” like these two:^{6}

```
nil :: Fix (ListF a)
nil = Fix Nil

cons :: a -> Fix (ListF a) -> Fix (ListF a)
cons x xs = Fix (Cons x xs)
```

Before we jumped into this `Fix` rabbit hole, we were trying to find a `leanCata` function such that `cata f = leanCata f . omniProject`.

We can now easily define `leanCata` by mirroring what we have done for `omniProject`: first, we get rid of the `Fix` wrapper; then, we fill in the other half of the definition of `cata` that we left behind when we extracted `omniProject` – that is, the repeated application of `f`:
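
A self-contained sketch of that definition, together with the `Fix` and `ListF` pieces it needs (the type signature is left to be worked out below, so we let GHC infer it here):

```haskell
newtype Fix f = Fix { unfix :: f (Fix f) }

data ListF a b = Nil | Cons a b

instance Functor (ListF a) where
    fmap _ Nil        = Nil
    fmap f (Cons x y) = Cons x (f y)

-- Unwrap one Fix layer, fold the rest recursively, apply the algebra.
leanCata f = f . fmap (leanCata f) . unfix
```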

(It is possible to prove that this *must* be the definition of `leanCata` using the definitions of `cata` and `omniProject` and the `cata f = leanCata f . omniProject` specification. You might want to work it out yourself; alternatively, you can find the derivation in an appendix at the end of this article.)

What should be the type of `leanCata`? `unfix` calls for a `Fix f`, and `fmap` demands this `f` to be a `Functor`. As the definition doesn’t use `cata` or `project`, there is no need to involve `Base` or `Recursive`. That being so, we get `leanCata :: Functor f => (f a -> a) -> Fix f -> a`.

This is how you will usually see `cata` being defined in other texts about the subject.^{7}

Similarly to what we have seen for `omniProject`, the implementation of `leanCata` looks a lot like the `cata` we began with, except that it has `unfix` where `project` used to be. And sure enough, *recursion-schemes* defines…
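
… a `Recursive` instance for `Fix f` itself, with `unfix` as `project`. Sketched here with minimal stand-ins for `Base` and `Recursive` so the snippet compiles on its own (the real class also has a `Functor (Base t)` superclass):

```haskell
{-# LANGUAGE TypeFamilies #-}

newtype Fix f = Fix { unfix :: f (Fix f) }

-- Minimal stand-ins for the recursion-schemes classes.
type family Base t :: * -> *

class Recursive t where
    project :: t -> Base t t

-- The instances recursion-schemes provides for Fix: the base functor
-- of Fix f is f itself, and project is just unfix.
type instance Base (Fix f) = f

instance Recursive (Fix f) where
    project = unfix
```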

… so that its `cata` also works as `leanCata`:

```
GHCi> foo = 1 `cons` (2 `cons` (3 `cons` nil))
GHCi> foo
Fix (Cons 1 (Fix (Cons 2 (Fix (Cons 3 (Fix Nil))))))
GHCi> cata (\case {Nil -> 1; Cons x y -> x * y}) foo
6
```

In the end, we did manage to get a tidier `cata`. Crucially, we now also have a clear picture of folding, the fundamental way of consuming a data structure recursively. On the one hand, any fold can be expressed in terms of an algebra for the base functor of the structure being folded, by means of a simple function, `cata`. On the other hand, the relationship between data structures and their base functors is made precise through `Fix`, which introduces recursion into functors in a way that captures the essence of the recursiveness of data types.

To wrap things up, here a few more questions for you to ponder:

- Does the data structure that we get by using `Maybe` as a base functor correspond to anything familiar? Use `cata` to write a fold that does something interesting with it.
- What could possibly be the base functor of a non-recursive data structure?
- Find *two* base functors that give rise to non-empty lists. One of them corresponds directly to the `NEList` definition given at the beginning of this article.
- As we have discussed, `omniProject`/`cata Fix` can be used to losslessly convert a data structure to the corresponding `Fix`-encoded form. Write the other half of the isomorphism for lists; that is, the function that changes a `Fix (ListF a)` back into an `[a]`.

When it comes to recursion schemes, there is a lot more to play with than just the fundamental catamorphism that we discussed here. In particular, *recursion-schemes* offers all sorts of specialised folds (and *un*folds), often with richly decorated type signatures meant to express more directly some particular kind of recursive (or *co*recursive) algorithm. But that’s a story for another time. For now, I will just make a final observation about unfolds.

Intuitively, an unfold is the opposite of a fold – while a fold consumes a data structure to produce a result, an unfold generates a data structure from a seed. In recursion schemes parlance, the intuition is made precise by the notion of *anamorphism*, a counterpart (technically, a *dual*) to the catamorphism. Still, if we have a look at `unfoldr`

in `Data.List`

, the exact manner in which it is opposite to `foldr`

is not immediately obvious from its signature.

One way of clarifying that is considering the first argument of `unfoldr`

from the same perspective that we used to uncover `ListF`

early in this article.
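To make that perspective concrete, here is a sketch: the `b -> Maybe (a, b)` argument of `unfoldr` is, up to the isomorphism between `Maybe (a, b)` and `ListF a b`, precisely a coalgebra for the list base functor, which is what the anamorphism consumes. (`toListF` and `countdown` are illustrative names, not part of any library.)

```haskell
{-# LANGUAGE DeriveFunctor #-}

import Data.List (unfoldr)

newtype Fix f = Fix { unfix :: f (Fix f) }

data ListF a b = Nil | Cons a b
    deriving Functor

-- The anamorphism: the dual of cata, growing a structure from a seed.
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

-- unfoldr's first argument has type b -> Maybe (a, b);
-- Maybe (a, b) is ListF a b in disguise:
toListF :: Maybe (a, b) -> ListF a b
toListF Nothing       = Nil
toListF (Just (x, s)) = Cons x s

-- A seed-consuming coalgebra, in both guises.
countdown :: Int -> Maybe (Int, Int)
countdown n = if n <= 0 then Nothing else Just (n, n - 1)

countdownF :: Int -> ListF Int Int
countdownF = toListF . countdown
```

With `countdown`, `unfoldr` generates `[n, n-1 .. 1]`; feeding `countdownF` to `ana` grows the same list in its `Fix (ListF Int)` encoding.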

- *Understanding F-Algebras*, by Bartosz Milewski, covers similar ground to this article from an explicitly categorical perspective. A good follow-up read for sharpening your picture of the key concepts we have discussed here.

- *An Introduction to Recursion Schemes*, by Patrick Thompson, is the first in a series of three articles that present some common recursion schemes at a gentle pace. You will note that examples involving syntax trees and simplifying expressions are a running theme across these articles. That is in line with what we said about the word “algebra” at the end of the section about `cata`.

- *Practical Recursion Schemes*, by Jared Tobin, offers a faster-paced demonstration of basic recursion schemes. Unlike the other articles in this list, it explores the machinery of the *recursion-schemes* library that we have dealt with here.

- *Functional Programming With Bananas, Lenses, Envelopes and Barbed Wire*, by Erik Meijer, Maarten Fokkinga and Ross Paterson, is a classic paper about recursion schemes, the one which popularised concepts such as catamorphism and anamorphism. If you plan to go through it, you may find this key to its notation by Edward Z. Yang useful.

This is the derivation mentioned in the middle of the section about `Fix`

. We begin from our specification for `leanCata`

:

Take the left-hand side and substitute the definition of `cata`

:

Substitute the right-hand side of the `leanCata`

specification:

By the second functor law:

`unfix . Fix = id`

, so we can slip it in like this:

Substituting the definition of `omniProject`

:

Substituting this back into the specification:

Assuming sensible `Recursive`

and `Base`

instances for `t`

, `t`

and `Fix (Base t)`

should be isomorphic (that is, losslessly interconvertible) types, with `omniProject`

performing one of the two relevant conversions. As a consequence, `omniProject`

is surjective (that is, it is possible to obtain every `Fix (Base t)`

value through it). That being so, we can “cancel out” the `omniProject`

s at the right end of both sides of the equation above. The definition of `leanCata`

follows immediately.

1. By the way, it is worth emphasising that the `Foldable` class from *base* is not the abstraction we are looking for. One way of seeing why is placing the signature of `foldBTree` side by side with that of `Foldable.foldr`.↩

2. In what follows, I will use the `LambdaCase` extension liberally, so that I have fewer boring variable names to make up. If you haven’t seen it yet, all you need to know is that… … is the same as: ↩

3. While that is clear to the naked eye, it can be shown more rigorously by applying the second functor law, that is: ↩

4. This is in some ways similar to how `(>>= f) = join . fmap f` can be read as a factoring of `(>>=)` into a preliminary step (`fmap f`) followed by the quintessential monadic operation (`join`).↩

5. The name `Fix` comes from “fixed point”, the mathematical term used to describe a value which is left unchanged by some function. In this case, if we have an infinite nesting of the `f` type constructor, it doesn’t make any difference if we apply `f` to it one more time.↩

6. As suggested by Jared Tobin’s *Practical Recursion Schemes* article, which is in the further reading list at the end of this post.↩

7. The names in said texts tend to be different, though. Common picks include `μ` for the `Fix` type constructor, `In` for the `Fix` value constructor, `out` for `unfix`, and `⦇f⦈` for `leanCata f` (using the famed banana brackets).↩

Post licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The global project consists of a `stack.yaml`

file and an associated `.stack-work`

directory, which are kept in `~/.stack/global-project`

and are used by stack whenever there is no other `stack.yaml`

lying around. The `stack.yaml`

of the global project specifies a resolver, just like any other `stack.yaml`

. If said resolver is a snapshot you use elsewhere, you get access to all packages you have installed from that snapshot with zero configuration.

```
$ pwd
/home/duplode
$ ls -lrt | grep stack.yaml
$ stack ghci
Configuring GHCi with the following packages:
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/duplode/.ghci
Loaded GHCi configuration from /tmp/ghci22741/ghci-script
GHCi> import Control.Lens
GHCi> (1,2) ^. _1
1
```

By the way, this also holds for the stack-powered Intero Emacs mode, which makes it possible to simply open a new `*.hs`

file anywhere and immediately start hacking away.

What about packages you didn’t install beforehand? They are no problem, thanks to the `--package`

option of `stack ghci`

, which allows installing snapshot packages at a whim.

```
$ stack ghci --package fmlist
fmlist-0.9: download
fmlist-0.9: configure
fmlist-0.9: build
fmlist-0.9: copy/register
Configuring GHCi with the following packages:
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/duplode/.ghci
Loaded GHCi configuration from /tmp/ghci22828/ghci-script
GHCi> import qualified Data.FMList as FM
GHCi> FM.foldMapA (\x -> show <$> [0..x]) [0..3]
["0000","0001","0002","0003","0010","0011","0012","0013","0020","0021",
"0022","0023","0100","0101","0102","0103","0110","0111","0112","0113",
"0120","0121","0122","0123"]
```

One caveat is that `--package`

won’t install packages outside of the snapshot on its own, so you have to add them to the `extra-deps`

field of the global project’s `stack.yaml`

beforehand, just like you would do for an actual project. If you need several non-Stackage packages, you may find it convenient to create a throwaway project for the sole purpose of letting `stack solver`

figure out the necessary `extra-deps`

for you.

```
$ mkdir throwaway
$ stack new throwaway --resolver lts-7.14 # Same resolver of the global project.
# ...
Writing configuration to file: throwaway/stack.yaml
All done.
$ cd throwaway
$ vi throwaway.cabal # Let's add reactive-banana to the dependencies.
$ stack solver
# ...
Successfully determined a build plan with 2 external dependencies.
The following changes will be made to stack.yaml:
* Dependencies to be added
extra-deps:
- pqueue-1.3.2
- reactive-banana-1.1.0.1
To automatically update stack.yaml, rerun with '--update-config'
$ vi ~/.stack/global-project/stack.yaml # Add the packages to the extra-deps.
$ cd ..
$ rm -rf throwaway/
$ stack ghci --package reactive-banana
pqueue-1.3.2: configure
pqueue-1.3.2: build
pqueue-1.3.2: copy/register
reactive-banana-1.1.0.1: configure
reactive-banana-1.1.0.1: build
reactive-banana-1.1.0.1: copy/register
Completed 2 action(s).
Configuring GHCi with the following packages:
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/duplode/.ghci
Loaded GHCi configuration from /tmp/ghci23103/ghci-script
GHCi> import Reactive.Banana
GHCi> :t stepper
stepper :: MonadMoment m => a -> Event a -> m (Behavior a)
```

Support for running `stack solver`

directly with the global project is on the horizon.

There are also interesting possibilities if you need to compile your throwaway code. That might be useful, for instance, if you ever feel like testing a hypothesis with a criterion benchmark. While there is a `stack ghc`

command, if you don’t need GHC profiles then taking advantage of `--ghci-options`

to enable `-fobject-code`

for `stack ghci`

can be a more pleasant alternative.

```
$ stack ghci --ghci-options "-O2 -fobject-code"
Configuring GHCi with the following packages:
GHCi, version 8.0.1: http://www.haskell.org/ghc/ :? for help
Loaded GHCi configuration from /home/duplode/.ghci
Loaded GHCi configuration from /tmp/ghci23628/ghci-script
GHCi> :l Foo.hs
[1 of 1] Compiling Foo ( Foo.hs, /home/duplode/.stack/global-project/.stack-work/odir/Foo.o )
Ok, modules loaded: Foo (/home/duplode/.stack/global-project/.stack-work/odir/Foo.o).
GHCi> :main
A random number for you: 2045528912275320075
```

A nice little thing about this approach is that the build artifacts are kept in the global project’s `.stack-work`

, which means they won’t pollute whichever other directory you happen to be at. `-fobject-code`

means you can’t write definitions directly on the GHCi prompt; however, that is not much of a nuisance, given that you are compiling the code anyway, and that the source file is just a `:!vim Foo.hs`

away.

While in these notes I have focused on seat-of-the-pants experimentation, stack also provides tools for running Haskell code with minimal configuration in a more controlled manner. I specially recommend having a look at the *script interpreter* section of the stack User Guide.


The first decision to make when migrating a project is which Stackage snapshot to pick. It had been a while since I last updated my project, and building it with the latest versions of all its dependencies would require a few adjustments. That being so, I chose to migrate to stack before any further patches. Since one of the main dependencies was `diagrams`

1.2, I went for `lts-2.19`

, the most recent LTS snapshot with that version of `diagrams`

^{1}.

`$ stack init --resolver lts-2.19`

`stack init`

creates a `stack.yaml`

file based on an existing cabal file in the current directory. The `--resolver`

option can be used to pick a specific snapshot.

One complicating factor in the conversion to stack was that two of the extra dependencies, `threepenny-gui-0.5.0.0`

(one major version behind the current one) and `zip-conduit`

, wouldn’t build with the LTS snapshot plus current Hackage without version bumps in their cabal files. Fortunately, stack deals very well with situations like this, in which minor changes to some dependency are needed. I simply forked the dependencies on GitHub, pushed the version bumps to my forks and referenced the commits in the *remote* GitHub repository in `stack.yaml`

. A typical entry for a Git commit in the `packages`

section looks like this:

```
- location:
git: https://github.com/duplode/zip-conduit
commit: 1eefc8bd91d5f38b760bce1fb8dd16d6e05a671d
extra-dep: true
```

Keeping customised dependencies in public remote repositories is an excellent solution. It enables users to build the package without further intervention, and it spares developers from clumsily bundling the source trees of the dependencies with the project, or from waiting for a pull request to be accepted upstream and reach Hackage.

With the two tricky extra dependencies being offloaded to Git repositories, the next step was using `stack solver`

to figure out the rest of them:

```
$ stack solver --modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still
need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- parsec-permutation-0.1.2.0
- websockets-snap-0.9.2.0
Updated /home/duplode/Development/stunts/diagrams/stack.yaml
```

Here is the final `stack.yaml`

:

```
flags:
stunts-cartography:
repldump2carto: true
packages:
- '.'
- location:
git: https://github.com/duplode/zip-conduit
commit: 1eefc8bd91d5f38b760bce1fb8dd16d6e05a671d
extra-dep: true
- location:
git: https://github.com/duplode/threepenny-gui
commit: 2dd88e893f09e8e31378f542a9cd253cc009a2c5
extra-dep: true
extra-deps:
- parsec-permutation-0.1.2.0
- websockets-snap-0.9.2.0
resolver: lts-2.19
```

`repldump2carto`

is a flag defined in the cabal file. It is used to build a secondary executable. Beyond demonstrating how the `flags`

section of `stack.yaml`

works, I added it because `stack ghci`

expects all possible build targets to have been built ^{2}.

As I have GHC 7.10.1 from my Linux distribution and the LTS 2.19 snapshot is made for GHC 7.8.4, I needed `stack setup`

as an additional step. That command locally installs (in `~/.stack`

) the GHC version required by the chosen snapshot.

That pretty much concludes the migration. All that is left is demonstrating: `stack build`

to compile the project…

```
$ stack build
JuicyPixels-3.2.5.2: configure
Boolean-0.2.3: download
# etc. (Note how deps from Git are handled seamlessly.)
threepenny-gui-0.5.0.0: configure
threepenny-gui-0.5.0.0: build
threepenny-gui-0.5.0.0: install
zip-conduit-0.2.2.2: configure
zip-conduit-0.2.2.2: build
zip-conduit-0.2.2.2: install
# etc.
stunts-cartography-0.4.0.3: configure
stunts-cartography-0.4.0.3: build
stunts-cartography-0.4.0.3: install
Completed all 64 actions.
```

… `stack ghci`

to play with it in GHCi…

```
$ stack ghci
Configuring GHCi with the following packages: stunts-cartography
GHCi, version 7.8.4: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
-- etc.
Ok, modules loaded: GameState, Annotation, Types.Diagrams, Pics,
Pics.MM, Annotation.Flipbook, Annotation.LapTrace,
Annotation.LapTrace.Vec, Annotation.LapTrace.Parser.Simple,
Annotation.Parser, Types.CartoM, Parameters, Composition, Track,
Util.Misc, Pics.Palette, Output, Util.ByteString, Util.ZipConduit,
Replay, Paths, Util.Reactive.Threepenny, Util.Threepenny.Alertify,
Widgets.BoundedInput.
*GameState> :l src/Viewer.hs -- The Main module.
-- etc.
*Main> :main
Welcome to Stunts Cartography.
Open your web browser and navigate to localhost:10000 to begin.
Listening on http://127.0.0.1:10000/
[27/Jul/2015:00:55:11 -0300] Server.httpServe: START, binding to
[http://127.0.0.1:10000/]
```

… and looking at the build output in the depths of `.stack-work`

:

```
$ .stack-work/dist/x86_64-linux/Cabal-1.18.1.5/build/sc-trk-viewer/sc-trk-viewer
Welcome to Stunts Cartography 0.4.0.3.
Open your web browser and navigate to localhost:10000 to begin.
Listening on http://127.0.0.1:10000/
[26/Jul/2015:20:02:54 -0300] Server.httpServe: START, binding to
[http://127.0.0.1:10000/]
```

With the upcoming stack 0.2 it will be possible to use `stack build --copy-bins --local-bin-path <path>`

to copy any executables built as part of the project to a path. If the `--local-bin-path`

option is omitted, the default is `~/.local/bin`

. (In fact, you can already copy executables to `~/.local/bin`

with stack 0.1.2 through `stack install`

. However, I don’t want to overemphasise that command, as `stack install`

not being equivalent to `cabal install`

can cause some confusion.)

Hopefully this report will give you an idea of what to expect when migrating your projects to stack. Some details may appear a little strange, given how familiar cabal-install workflows are, and some features are still being shaped. All in all, however, stack works very well already: it definitely makes setting up reliable builds easier. The stack repository at GitHub, and specially the wiki therein, offers lots of helpful information, in case you need further details and usage tips.

1. As a broader point, it just seems polite to, when possible, pick an LTS snapshot rather than a nightly for a public project. It is more likely that those interested in building your project already have a specific LTS rather than an arbitrary nightly.↩

2. That being so, a more natural arrangement would be treating `repldump2carto` as a full-blown subproject by giving it its own cabal file and adding it to the `packages` section. I would then be able to load only the main project in GHCi with `stack ghci stunts-cartography`.↩


Sandboxes are exceptionally helpful not just for working on long-term Haskell projects, but also for casual experiments. While playing around, we tend to install all sorts of packages in a carefree way, which greatly increases the risk of entering cabal hell. While vanilla cabal-install sandboxes prevent such a disaster, using them systematically for experiments means that, unless you are meticulous, you will end up either with dozens of .hs files in a single sandbox or with dozens of copies of the libraries strewn across your home directory. And no one likes to be meticulous while playing around. In that context, stack, the recently released alternative to cabal-install, can prevent trouble with installing packages in a way far more manageable than ad-hoc sandboxes. In this post, I will suggest a few ways of using stack that may be convenient for experiments. I have been using stack for only a few days, so suggestions are most welcome!

I won’t dwell on the motivation and philosophy behind stack ^{1}. Suffice it to say that, at least in the less exotic workflows, there is a centralised package database somewhere in `~/.stack`

with packages pulled from a Stackage snapshot (and therefore known to be compatible with each other), which is supplemented by a per-project database (that is, just like cabal sandboxes) for packages not in Stackage (from Hackage or anywhere else). As that sounds like a great way to avoid headaches, we will stick to this arrangement, with only minor adjustments.

Once you have installed stack ^{2}, you can create a new environment for experiments with `stack new`

:

```
$ mkdir -p Development/haskell/playground
$ cd Development/haskell/playground
$ stack new --prefer-nightly
```

The `--prefer-nightly`

option makes stack use a nightly snapshot of Stackage, as opposed to a long-term support one. As we are just playing around, it makes sense to pick the most recent packages available from the nightly instead of the LTS. (Moreover, I use Arch Linux, which already has GHC 7.10 and `base`

4.8, while the current LTS snapshot assumes `base`

4.7.) If this is the first time you use stack, it will pick the latest nightly; otherwise it will default to whatever nightly you already have in `~/.stack`

.

`stack new`

creates a neat default project structure for you ^{3}:

```
$ ls -R
.:
app LICENSE new-template.cabal Setup.hs src stack.yaml test
./app:
Main.hs
./src:
Lib.hs
./test:
Spec.hs
```

Of particular interest is the `stack.yaml`

file, which holds the settings for the local stack environment. We will talk more about it soon.

As for the default `new-template.cabal`

file, you can use its `build-depends`

section to keep track of what you are installing. That will make `stack build`

(the command which builds the current project without installing it) download and install any dependencies you add to the cabal file automatically. Besides that, having the installed packages noted down may prove useful in case you need to reproduce your configuration elsewhere ^{4}. If your experiments become a real project, you can clean up the `build-depends`

without losing track of the packages you installed for testing purposes by moving their entries to a second cabal file, kept in a subdirectory:

```
$ mkdir xp
$ cp new-template.cabal xp/xp.cabal
$ cp LICENSE xp # Too lazy to delete the lines from the cabal file.
$ cd xp
$ vi Dummy.hs # module Dummy where <END OF FILE>
$ vi xp.cabal # Adjust accordingly, and list your extra deps.
```

You also need to tell stack about this fake subproject. All it takes is adding an entry for the subdirectory in `stack.yaml`

:
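The entry in question would presumably look like this (the resolver line reflects the nightly used in this post; only the `- 'xp'` line is new relative to the generated file):

```
flags: {}
packages:
- '.'
- 'xp'
resolver: nightly-2015-07-19
```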

With the initial setup done, we use `stack build`

to compile the projects:

```
$ stack build
new-template-0.1.0.0: configure
new-template-0.1.0.0: build
fmlist-0.9: download
fmlist-0.9: configure
fmlist-0.9: build
new-template-0.1.0.0: install
fmlist-0.9: install
xp-0.1.0.0: configure
xp-0.1.0.0: build
xp-0.1.0.0: install
Completed all 3 actions.
```

In this test run, I added `fmlist`

as a dependency of the fake package `xp`

, and so it was automatically installed by stack. The output of `stack build`

goes to a `.stack-work`

subdirectory.

With the packages built, we can use GHCi in the stack environment with `stack ghci`

. It loads the library source files of the current project by default:

```
$ stack ghci
Configuring GHCi with the following packages: new-template, xp
GHCi, version 7.10.1: http://www.haskell.org/ghc/ :? for help
[1 of 2] Compiling Lib (
/home/duplode/Development/haskell/playground/src/Lib.hs, interpreted )
[2 of 2] Compiling Dummy (
/home/duplode/Development/haskell/playground/xp/Dummy.hs, interpreted )
Ok, modules loaded: Dummy, Lib.
*Lib> import qualified Data.FMList as F -- Which we have just installed.
*Lib F> -- We can also load executables specified in the cabal file.
*Lib F> :l Main
[1 of 2] Compiling Lib (
/home/duplode/Development/haskell/playground/src/Lib.hs, interpreted )
[2 of 2] Compiling Main (
/home/duplode/Development/haskell/playground/app/Main.hs, interpreted )
Ok, modules loaded: Lib, Main.
*Main F>
```

Dependencies not in Stackage have to be specified in `stack.yaml`

as well as in the cabal files, so that stack can manage them too. Alternative sources of packages include source trees in subdirectories of the project, Hackage and remote Git repositories ^{5}:

```
flags: {}
packages:
- '.'
- 'xp'
- location: deps/acme-missiles-0.3 # Sources in a subdirectory.
extra-dep: true # Mark as dep, i.e. not part of the project proper.
extra-deps:
- acme-safe-0.1.0.0 # From Hackage.
- acme-dont-1.1 # Also from Hackage, dependency of acme-safe.
resolver: nightly-2015-07-19
```

`stack build`

will then install the extra dependencies to `.stack-work/install`

. You can use `stack solver`

to chase the indirect dependencies introduced by them. For instance, this is its output after commenting the `acme-dont`

line in the `stack.yaml`

just above:

```
$ stack solver --no-modify-stack-yaml
This command is not guaranteed to give you a perfect build plan
It's possible that even with the changes generated below, you will still
need to do some manual tweaking
Asking cabal to calculate a build plan, please wait
extra-deps:
- acme-dont-1.1
```

To conclude this tour, once you get bored of the initial Stackage snapshot all it takes to switch it is changing the `resolver`

field in `stack.yaml`

(with nightlies, that amounts to changing the date at the end of the snapshot name). That will cause all dependencies to be downloaded and built from the chosen snapshot when `stack build`

is next ran. As of now, the previous snapshot will remain in `~/.stack`

unless you go there and delete it manually; however, a command for removing unused snapshots is in the plans.

I have not tested the sketch of a workflow presented here extensively, yet what I have seen was enough to convince me stack can provide a pleasant experience for casual experiments as well as full-fledged projects. Happy hacking!

**Update:** There is now a follow-up post about the other side of the coin, Migrating a Project to stack.

1. For that, see Why is stack not cabal?, written by a member of its development team.↩

2. For installation guidance, see the GitHub project wiki. Installing stack is easy, and there are many ways to do it (I simply got it from Hackage with `cabal install stack`).↩

3. To create an environment for an existing project, with its own structure and cabal file, you would use `stack init` instead.↩

4. In any case, you can also use `stack exec -- ghc-pkg list` to see all packages installed from the snapshot you are currently using. That, however, will be far messier than the `build-depends` list, as it will include indirect dependencies as well.↩

5. For the latter, see the project wiki.↩


`Applicative`

class are not pretty to look at.
```
pure id <*> v = v -- identity
pure f <*> pure x = pure (f x) -- homomorphism
u <*> pure y = pure ($ y) <*> u -- interchange
pure (.) <*> u <*> v <*> w = u <*> (v <*> w) -- composition
```

Monad laws, in comparison, not only look less odd to begin with but can also be stated in a much more elegant way in terms of Kleisli composition `(<=<)`

. Shouldn’t there be an analogous nice presentation for `Applicative`

as well? That became a persistent question in my mind while I was studying applicative functors many moons ago. After finding surprisingly little commentary on this issue, I decided to try figuring it out by myself. ^{1}

Let’s cast our eye over `Applicative`

:

If our inspiration for reformulating `Applicative`

is Kleisli composition, the only sensible plan is to look for a category in which the `t (a -> b)`

functions-in-a-context from the type of `(<*>)`

are the arrows, just like `a -> t b`

functions are arrows in a Kleisli category. Here is one way to state that plan in Haskell terms:

```
class Applicative t => Starry t where
idA :: t (a -> a)
(.*) :: t (b -> c) -> t (a -> b) -> t (a -> c)
infixl 4 .*
-- The Applicative constraint is wishful thinking:
-- When you wish upon a star...
```

The laws of `Starry`

are the category laws for the `t (a -> b)`

arrows:

```
idA .* v = v -- left identity
u .* idA = u -- right identity
u .* v .* w = u .* (v .* w) -- associativity
```

The question, then, is whether it is possible to reconstruct `Applicative`

and its laws from `Starry`

. The answer is a resounding yes! The proof is in this manuscript, which I have not transcribed here as it is a little too long for a leisurely post like this one ^{2}. The argument is set in motion by establishing that `pure`

is an arrow mapping of a functor from **Hask** to a `Starry`

category, and that both `(<*>)`

and `(.*)`

are arrow mappings of functors in the opposite direction. That leads to several naturality properties of those functors, from which the `Applicative`

laws can be obtained. Along the way, we also get definitions for the `Starry`

methods in terms of the `Applicative`

ones…
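Spelling those out in code, the definitions presumably take the following shape (shown here as plain functions rather than class methods, for brevity: `pure id` plays identity, and composition is lifted `(.)`):

```haskell
-- The identity arrow: a do-nothing function in a pure context.
idA :: Applicative t => t (a -> a)
idA = pure id

-- Composition of static arrows: lift (.) and apply it to both.
(.*) :: Applicative t => t (b -> c) -> t (a -> b) -> t (a -> c)
g .* f = pure (.) <*> g <*> f

infixl 4 .*
```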

… and vice-versa:

Also interesting is how the property relating `fmap`

and `(<*>)`

…

… now tells us that a `Functor`

results from composing the `pure`

functor with the `(<*>)`

functor. That becomes more transparent if we write it point-free:

In order to ensure `Starry`

is equivalent to `Applicative`

we still need to prove the converse, that is, obtain the `Starry`

laws from the `Applicative`

laws plus the definitions of `idA`

and `(.*)`

just above. That is not difficult; all it takes is substituting the definitions in the `Starry`

laws and:

For left identity, noticing that

`(id .) = id`

.For right identity, applying the interchange law and noticing that

`($ id) . (.)`

is`id`

in a better disguise.For associativity, using the laws to move all

`(.)`

to the left of the`(<*>)`

and then verifying that the resulting messes of dots in both sides are equivalent.

As a tiny example, here is the `Starry`

instance of `Maybe`

…
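The elided instance presumably amounts to the following sketch (the class is repeated so that the snippet stands alone): composition only happens when both functions are actually there.

```haskell
class Applicative t => Starry t where
    idA :: t (a -> a)
    (.*) :: t (b -> c) -> t (a -> b) -> t (a -> c)

infixl 4 .*

-- A sketch of the Maybe instance: Nothing is absorbing,
-- and two Justs compose their contents.
instance Starry Maybe where
    idA = Just id
    Just g .* Just f = Just (g . f)
    _      .* _      = Nothing
```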

… and the verification of the laws for it:

```
-- Left identity:
idA .* u = u
Just id .* u = u
-- u = Nothing
Just id .* Nothing = Nothing
Nothing = Nothing
-- u = Just f
Just id .* Just f = Just f
Just (id . f) = Just f
Just f = Just f
-- Right identity:
u .* idA = u
u .* Just id = u
-- u = Nothing
Nothing .* Just id = Nothing
Nothing = Nothing
-- u = Just g
Just g .* Just id = Just g
Just (g . id) = Just g
Just g = Just g
-- Associativity:
u .* v .* w = u .* (v .* w)
-- If any of u, v and w are Nothing, both sides will be Nothing.
Just h .* Just g .* Just f = Just h .* (Just g .* Just f)
Just (h . g) .* Just f = Just h .* (Just (g . f))
Just (h . g . f) = Just (h . (g . f))
Just (h . g . f) = Just (h . g . f)
```

It works just as intended:

```
GHCi> Just (2*) .* Just (subtract 3) .* Just (*4) <*> Just 5
Just 34
GHCi> Just (2*) .* Nothing .* Just (*4) <*> Just 5
Nothing
```

I do not think there will be many opportunities to use the `Starry`

methods in practice. We are comfortable enough with applicative style, through which we see most `t (a -> b)`

arrows as intermediates generated on demand, rather than truly meaningful values. Furthermore, the `Starry`

laws are not really easier to prove (though they are certainly easier to remember!). Still, it was an interesting exercise to do, and it eases my mind to know that there is a neat presentation of the `Applicative`

laws that I can relate to.

This post is Literate Haskell, in case you wish to play with `Starry`

in GHCi (here is the raw .lhs file ).

```
instance Starry Maybe where
instance Starry [] where
instance Starry ((->) a) where
instance Starry IO where
```

As for proper implementations in libraries, the closest I found was `Data.Semigroupoid.Static`

, which lives in Edward Kmett’s `semigroupoids`

package. *“Static arrows”* is the actual technical term for the `t (a -> b)`

arrows. The module provides…

… which uses the definitions shown here for `idA`

and `(.*)`

as `id`

and `(.)`

of its `Category`

instance.

1. There is a reasonably well-known alternative formulation of `Applicative`: the `Monoidal` class as featured in this post by Edward Z. Yang. While the laws in this formulation are much easier to grasp, `Monoidal` feels a little alien from the perspective of a Haskeller, as it shifts the focus from function shuffling to tuple shuffling.↩

2. Please excuse some oddities in the manuscript, such as off-kilter terminology and weird conventions (e.g. consistently naming arguments as `w <*> v <*> u` rather than `u <*> v <*> w` in applicative style). The most baffling choice was using `id` rather than `()` as the throwaway argument to `const`. I guess I did that because `($ ())` looks bad in handwriting.↩


`fmap`

is saying that it only changes the values in a container, and not its structure. Leaving behind the functors-as-containers metaphor, we can convey the same idea by saying that `fmap` leaves the context of the values in a `Functor` unchanged. But what, exactly, is the “context” or “structure” being preserved? “It depends on the functor”, though correct, is not an entirely satisfactory answer. The functor laws, after all, are highly abstract, and make no mention of anything a programmer would be inclined to call “structure” (say, the skeleton of a list); and yet the preservation we alluded to follows from them. After struggling a bit with this question, I realised that the incompatibility is only apparent. This post shows how the tension can be resolved.

A correct, if rather cruel, answer to “Why does `fmap`

leaves the context of the values in a `Functor`

unchanged. But what, exactly, is the “context” or “structure” being preserved? “It depends on the functor”, though correct, is not an entirely satisfactory answer. The functor laws, after all, are highly abstract, and make no mention of anything a programmer would be inclined to call “structure” (say, the skeleton of a list); and yet the preservation we alluded to follows from them. After struggling a bit with this question, I realised that the incompatibility is only apparent. This post shows how the tension can be resolved through the mediation of A correct, if rather cruel, answer to “Why does `fmap`

preserve structure?” would be “By definition, you silly!” To see what would be meant by that, let’s have a look at the functor laws.

`fmap`

is a mapping of functions that takes identity to identity, and composed functions to the corresponding composed functions. Identity and composition make up the structure, in the mathematical sense, of a category. In category theory, a functor is a mapping between categories that preserves category structure. Therefore, the functor laws ensure that Haskell `Functor`

s are indeed functors; more precisely, functors from **Hask** to **Hask**, **Hask** being the category with Haskell types as objects and Haskell functions as arrows.^{1}

That functors preserve category structure is evident. However, our question is not directly about “structure” in the mathematical sense, but about the looser sense it has in programmer parlance. In what follows, our goal will be to clarify this casual meaning.

As an initial, fuzzy characterisation, we can say that, given a functorial value, the `Functor`

context is everything in it other than the wrapped values. Starting from that, a straightforward way of showing why `fmap`

preserves context involves *parametric polymorphism*; more specifically, the preservation is ensured by the wild generality of the types in the signature of `fmap`

.

We will look at `fmap`

as a function of one argument which converts a plain `a -> b`

function into a function which operates on functorial values. The key fact is that there is very little we can do with the `a -> b`

function when defining `fmap`

. Composition is not an option, as choosing a function other than `id`

to compose it with would require knowledge about the `a`

and `b`

types. The only thing that can be done is applying the function to any `a`

values we can retrieve from the `t a`

functorial value. Since the context of a `t a`

value, whatever it is, does not include the `a`

values, it follows that changes to the context cannot depend on the `a -> b`

function. Given that `fmap`

takes no other arguments, any changes in the context must happen for any `a -> b`

arguments uniformly. The first functor law, however, says that `fmap id = id`

, and so there is one argument, `id`

, which leads to no changes in the context. Therefore, `fmap`

never changes the context.

The informal argument above can be made precise through a proper type theory treatment of parametricity. Philip Wadler’s *Theorems for free!* is a well-known example of such work. However, a type theory approach, while entirely appropriate, would have us take concrete Haskell types for granted and only incidentally conclude they are functors; in contrast, our problem begins with functors. For that reason, we will follow a different path and look at the issue from a primarily categorical point of view.

In the spirit of category theory, we will now focus not on the types but on the functions between them. After all, given functional purity any interesting properties of a Haskell value can be verified with suitable functions. Let’s start with a few concrete examples of how the context of a `Functor`

can be probed with functions.

The length of a list is perhaps the most obvious example of a structural property. It depends only on the list skeleton, and not at all on the values in it. The type of `length`

, with a fully polymorphic element type which is not mentioned by the result type, reflects such an independence. An obvious consequence is that `fmap`

, which only affects the list elements, cannot change the length of a list. We can state that like this:
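In pointful style, for any function `f` and any list `xs`, that is:

```
length (fmap f xs) = length xs
```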

Or, in a more categorical fashion:
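That is, as an equation between functions:

```
length . fmap f = length
```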

Our second example of a structure-probing function will be `reverse`

:
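Its type is fully polymorphic in the element type:

```
reverse :: [a] -> [a]
```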

While the result value of `reverse`

obviously depends on the list elements, `reverse`

cannot actually modify the elements, given that the function is fully polymorphic on the element type. `fmap`

applied to a list after reversing it will thus affect the same element values as were there before the reversal; they will only have been rearranged. In other words, `fmap`

*commutes* with `reverse`

:
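As an equation between functions:

```
fmap f . reverse = reverse . fmap f
```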

Our final example will be `listToMaybe`

from `Data.Maybe`

:
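Its type is:

```
listToMaybe :: [a] -> Maybe a
```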

Operationally, `listToMaybe`

is a safe version of `head`

, which returns `Nothing`

when given an empty list. Again, the function is fully polymorphic in the element type, and so the value of the first element cannot be affected by it. The scenario is very similar to what we have seen for `reverse`

, and an analogous property holds, with the only difference being that `fmap`

is instantiated at a different `Functor`

at each side of the equation:
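On the left-hand side `fmap` is the `Maybe` one; on the right-hand side, the list one:

```
fmap f . listToMaybe = listToMaybe . fmap f
```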

Earlier we said that the `Functor`

context consists of everything but the wrapped values. Our examples illustrate how parametric polymorphism makes it possible to keep that general idea while putting functions rather than values under the spotlight. The context is all that can be probed with functions fully polymorphic on the type parameter of the `Functor`

; or, taking the abstraction further, the context *is* the collection of functions fully polymorphic on the type parameter of the `Functor`

. We have now done away with the fuzziness of our preliminary, value-centric definition. The next step is clarifying how that definition relates to `fmap`

.

By identifying the `Functor`

context with polymorphic functions, we can also state the context-preserving trait of `fmap`

through commutativity equations like those shown in the above examples. For an arbitrary context-probing function `r`

, the equation is:
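With `r` being a polymorphic function of type `t a -> u a`, for `Functor`s `t` and `u`:

```
fmap f . r = r . fmap f
```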

The equations for `reverse`

and `listToMaybe`

clearly have that shape. `length`

does not seem to fit at first sight, but that can be easily solved by lifting it to a constant functor such as the one provided by `Control.Applicative`

.

```
import Control.Applicative (Const (..))

lengthC :: [a] -> Const Int a
lengthC = Const . length
-- length = getConst . lengthC

-- For constant functors, fmap f = id regardless of f, and so:
fmap f . lengthC = lengthC . fmap f
```

A similar trick can be done with the `Identity`

functor to make functions in which the type parameter of the `Functor`

appears bare, such as `Just :: a -> Maybe a`

, fit our scheme.

It turns out that there is a category theory concept that captures the commutativity property we are interested in. A *natural transformation* is a translation between functors which preserves arrows being mapped through them. For Haskell `Functor`

s, that amounts to preserving functions being mapped via `fmap`

. We can display the relation through a diagram:
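Sketched in ASCII, the naturality square for a polymorphic function `r` from `Functor` `t` to `Functor` `u` looks like this:

```
           r
   t a ---------> u a
    |              |
    | fmap f       | fmap f
    v              v
   t b ---------> u b
           r
```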

The naturality condition matches our commutativity property. Indeed, *polymorphic functions are natural transformations between Haskell Functors*. The proof of this appealing result is not trivial, and requires some theoretical work, just like in the case of the closely related results about parametricity we alluded to earlier. In any case, all it takes to go from “natural transformations preserve

`fmap`

” to “`fmap`

preserves natural transformations” is tilting our heads while looking at the diagram above!

Given how we identified `Functor`

contexts, polymorphic functions and natural transformations, we can finally give a precise answer to our question. The context consists of natural transformations between functors, and therefore `fmap`

preserves it.

Earlier on, we said that we would not be directly concerned with structure in the sense mathematicians use the word, but only with the fuzzy Haskell concept that sometimes goes by the same name. To wrap things up, we will now illustrate that the two senses are not worlds apart. Let’s have another look at the second functor law, which states that `fmap`

preserves composition:
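That is:

```
fmap (g . f) = fmap g . fmap f
```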

Structure, in the mathematical sense, refers to some collection of interesting operations and distinguished elements. In this example, the relevant operation is function composition, which is part of the structure of the **Hask** category. Besides that, however, we are now able to note the uncanny resemblance between the shapes of the law, which says that it does not matter whether we compose `f`

and `g`

before applying `fmap`

, and of the commutativity properties we used to characterise functorial contexts. The upshot is that by identifying context and structure of a `Functor`

with polymorphic functions, we retain much of the spirit of the mathematical usage of structure. The interesting operations, in our case, are the polymorphic functions with which the context is probed. Perhaps it even makes sense to keep talking of structure of a `Functor`

even after dropping the container metaphor.

Speaking of the second law, we will, just for kicks, use it to show how to turn things around and look at `fmap`

as a natural transformation between `Functor`

s. In order to do so, we have to recall that `(.)`

is `fmap`

for the function functor:

```
-- First, we rewrite the second law in a more suggestive form:
fmap (g . f) = fmap g . fmap f
fmap (((.) g) f) = (.) (fmap g) (fmap f)
fmap . (.) g = ((.) . fmap) g . fmap
-- Next, some synonyms to indicate the Functors fmap leads to.
-- fmap from identity to t
fmap_t :: (Functor t) => (->) a b -> (->) (t a) (t b)
fmap_t = fmap
-- fmap from identity to ((->) a)
fmap_fun :: (b -> c) -> ((->) a b -> (->) a c)
fmap_fun = (.)
-- fmap from identity to the composite functor ((->) (t a)) . t
fmap_fun_t :: (Functor t)
=> (b -> c) -> ((->) (t a) (t b) -> (->) (t a) (t c))
fmap_fun_t = fmap_fun . fmap_t
-- The second law then becomes:
fmap_t . fmap_fun g = fmap_fun_t g . fmap_t
-- That, however, shows fmap_t is a natural transformation:
fmap . fmap g = fmap g . fmap
```

By fixing `t`

and `a`

in the signature of `fmap_t`

above, we get one functor on either side of the outer function arrow: `((->) a)`

on the left and `((->) (t a)) . t`

on the right. `fmap`

is a natural transformation between these two functors.

- In *The Holy Trinity*, Robert Harper comments on the deep connection between logic, type theory and category theory that allows us to shift seamlessly between the categorical and the type theoretical perspectives, as we have done here.
- *You Could Have Defined Natural Transformations* by Dan Piponi is a very clear introduction to natural transformations in a Haskell context.
- We have already mentioned Philip Wadler’s *Theorems for free!*, which is a reasonably accessible introduction to the *free theorems*. *Free theorems* are results about functions that, thanks to parametric polymorphism, can be deduced from the type of the function alone. Given suitable generalisations, free theorems and naturality conditions provide two parallel ways of reaching the same results about Haskell functions.
- *Free Theorems Involving Type Constructor Classes* is a functional pearl by Janis Voigtländer that illustrates how free theorem generation can be generalised to types parametric on type constructors and type classes.
- For an explicitly categorical perspective on parametricity, a good place to start if you are willing to dig into theory is the section on parametricity in *Some Aspects of Categories in Computer Science* by Philip J. Scott.

A category theory primer would be too big a detour for this post. If the category theory concepts I just mentioned are new to you, I suggest the following gentle introductions for Haskellers, which have very different approaches: the Haskell Wikibook chapter on category theory, and Gabriel Gonzalez’s posts The category design pattern and The functor design pattern.


`lens`

library are its astonishing breadth and generality. And yet, the whole edifice is built around van Laarhoven lenses, which are a simple and elegant concept. In this hands-on exposition, I will show how the `Lens`

type can be understood without prerequisites other than a passing acquaintance with Haskell functors. Encouraging sound intuition in an accessible manner can go a long way towards making `lens`

and lenses less intimidating.
Dramatis personæ:

I will define a toy data type so that we have something concrete to play with, as well as a starting point for working out generalisations.
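The exact fields do not matter much; any record with an `Int` field named `bar` will support everything below. Here is one plausible minimal definition (the `baz` field is just illustrative padding):

```haskell
data Foo = Foo { bar :: Int, baz :: [Double] }
    deriving (Show)
```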

The record definition gets us a function for accessing the `bar`

field.

As for the setter, we have to define it ourselves, unless we feel like mucking around with record update syntax.
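A definition along these lines does the job, hiding the record update behind an ordinary function:

```
setBar :: Foo -> Int -> Foo
setBar x y = x { bar = y }
```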

Armed with a proper getter and setter pair, we can easily flip the sign of the `bar`

inside a `Foo`

.
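For instance (the helper name here is of my choosing):

```
negateBar :: Foo -> Foo
negateBar x = setBar x (negate (bar x))
```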

We can make it even easier by defining a modifier function for `bar`

.
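Such a modifier takes a function on `Int` and applies it to the field:

```
modifyBar :: (Int -> Int) -> Foo -> Foo
modifyBar k x = setBar x . k . bar $ x
```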

`setBar`

can be recovered from `modifyBar`

by using `const`

to discard the original value and put the new one in its place.
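In code (with a primed name of my choosing, to keep the recovered function apart from the original setter):

```
setBar' :: Foo -> Int -> Foo
setBar' x y = modifyBar (const y) x
```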

If our data type had several fields, defining a modifier for each of them would amount to quite a lot of boilerplate. We could minimise it by, starting from our `modifyBar`

definition, abstracting from the specific getter and setter for `bar`

. Here, things begin to pick up steam. I will define a general `modify`

function, which, given an appropriate getter-setter pair, can deal with any field of any data type.

```
modify :: (s -> a) -> (s -> a -> s) -> (a -> a) -> s -> s
modify getter setter k x = setter x . k . getter $ x
```

It is trivial to recover `modifyBar`

; when we do so, `s`

becomes `Foo`

and `a`

becomes `Int`

.
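Spelled out (again with a primed name, to avoid clashing with the direct definition):

```
modifyBar' :: (Int -> Int) -> Foo -> Foo
modifyBar' = modify bar setBar
```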

The next step of generalisation is the one leap of faith I will ask of you in the way towards lenses. I will introduce a variant of `modify`

in which the modifying function, rather than being a plain `a -> a`

function, returns a functorial value. Defining it only takes an extra `fmap`

.

```
modifyF :: Functor f => (s -> a) -> (s -> a -> s)
-> (a -> f a) -> s -> f s
modifyF getter setter k x = fmap (setter x) . k . getter $ x
```

And here is its specialisation for `bar`

.
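Specialising `s` to `Foo` and `a` to `Int` gives:

```
modifyBarF :: Functor f => (Int -> f Int) -> Foo -> f Foo
modifyBarF = modifyF bar setBar
```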

Why on Earth would we want to do that? For one, it allows for some nifty tricks depending on the functor we choose. Let’s try it with lists. Specialising the `modifyF`

type would give:
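With `f` fixed to the list functor:

```
modifyF :: (s -> a) -> (s -> a -> s) -> (a -> [a]) -> s -> [s]
```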

Providing the getter and the setter would result in a `(a -> [a]) -> s -> [s]`

function. Can you guess what it would do?

As the types suggest, we get a function which modifies the field in multiple ways and collects the results.
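A quick sketch for the `bar` field (the helper name is mine):

```
modifyBarL :: (Int -> [Int]) -> Foo -> [Foo]
modifyBarL = modifyF bar setBar

-- Given some foo :: Foo with bar foo == 7,
-- modifyBarL (\n -> [n - 1, n + 1]) foo
-- produces two copies of foo, with bar set to 6 and 8 respectively.
```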

I claimed that moving from `modify`

to `modifyF`

was a generalisation. Indeed, we can recover `modify`

by bringing `Identity`

, the dummy functor, into play.

```
newtype Identity a = Identity { runIdentity :: a }
instance Functor Identity where
fmap f (Identity x) = Identity (f x)
```

```
modify' :: (s -> a) -> (s -> a -> s) -> (a -> a) -> s -> s
modify' getter setter k =
runIdentity . modifyF getter setter (Identity . k)
```

We wrap the field value with `Identity`

value after applying `k`

and unwrap the final result after applying the setter. Since `Identity`

does nothing interesting to the wrapped values, the overall result boils down to our original `modify`

. If you have found this definition confusing, I suggest that you, as an exercise, rewrite it in pointful style and substitute the definition of `modifyF`

.

We managed to get `modify`

back with little trouble, which is rather interesting. However, what is truly surprising is that we can reconstruct not only the modifier but also the getter! To pull that off, we will use `Const`

, which is a very quaint functor.

```
newtype Const a b = Const { getConst :: a }
instance Functor (Const a) where
fmap _ (Const y) = Const y
```

If functors were really containers, `Const`

would be an Acme product. A `Const a b`

value does not contain anything of type `b`

; what it does contain is an `a`

value that we cannot even modify, given that `fmap f`

is `id`

regardless of what `f`

is. As a consequence, if, given a field of type `a`

, we pick `Const a`

as the functor to use with `modifyF`

and use the modifying function to wrap the field value with `Const`

, then the value will not be affected by the setter, and we will be able to recover it later. That suffices for recovering the getter.

```
get :: (s -> a) -> (s -> a -> s) -> s -> a
get getter setter = getConst . modifyF getter setter Const
getBar :: Foo -> Int
getBar = get bar setBar
```

Given a getter and a setter, `modifyF`

gets us a corresponding functorial modifier. From it, by choosing the appropriate functors, we can recover the getter and a plain modifier; the latter, in turn, allows us to recover the setter. We can highlight the correspondence by redefining once more the recovered getters and modifiers, this time in terms of the functorial modifier.

```
modify'' :: ((a -> Identity a) -> s -> Identity s) -> (a -> a) -> s -> s
modify'' modifier k = runIdentity . modifier (Identity . k)
modifyBar'' :: (Int -> Int) -> Foo -> Foo
modifyBar'' = modify'' modifyBarF
set :: ((a -> Identity a) -> s -> Identity s) -> s -> a -> s
set modifier x y = modify'' modifier (const y) x
setBar'' :: Foo -> Int -> Foo
setBar'' = set modifyBarF
get' :: ((a -> Const a a) -> s -> Const a s) -> (s -> a)
get' modifier = getConst . modifier Const
getBar' :: Foo -> Int
getBar' = get' modifyBarF
```

The bottom line is that given `modifyBarF`

we can get by without `modifyBar`

, `setBar`

and `bar`

, as `modify''`

, `set`

and `get'`

allow us to reconstruct them whenever necessary. While our first version of `get`

was, in effect, just a specialised `const`

with a wacky implementation, `get'`

is genuinely useful because it cuts the number of separate field manipulation functions we have to deal with by a third.

Even after all of the work so far we can still generalise further! Let’s have a second look at `modifyF`

.

```
modifyF :: Functor f => (s -> a) -> (s -> a -> s)
-> (a -> f a) -> s -> f s
modifyF getter setter k x = fmap (setter x) . k . getter $ x
```

The type of `setter`

is `(s -> a -> s)`

; however, nothing in the implementation forces the first argument and the result to have the same type. Furthermore, with a different signature `k`

could have a more general type, `(a -> f b)`

, as long as the type of `setter`

was adjusted accordingly. We can thus give `modifyF`

a more general type.

```
modifyGenF :: Functor f => (s -> a) -> (s -> b -> t)
-> (a -> f b) -> s -> f t
modifyGenF getter setter k x = fmap (setter x) . k . getter $ x
```

For the sake of completeness, here are the generalised recovery functions. `get`

is not included because the generalisation does not affect it.

```
modifyGen :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
modifyGen modifier k = runIdentity . modifier (Identity . k)
setGen :: ((a -> Identity b) -> s -> Identity t) -> s -> b -> t
setGen modifier x y = modifyGen modifier (const y) x
```

By now, it is clear that our getters and setters need not be ways to manipulate fields in a record. In a broader sense, a getter is anything that produces a value from another; in other words, any function can be a getter. By the same token, any binary function can be a setter, as all that is required is that it combines one value with another producing a third; the initial and final values do not even need to have the same type.^{1} That is a long way from the toy data type we started with!

If we look at `modifyGenF`

as a function of two arguments, its result type becomes:
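Namely:

```
Functor f => (a -> f b) -> s -> f t
```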

Now, let’s take a peek at Control.Lens.Lens:
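There we find (eliding the documentation around it):

```
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
```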

It is the same type! We have reached our destination.^{2} A lens is what we might have called a generalised functorial modifier; furthermore, sans implementation details we have that:

- The `lens` function is `modifyGenF`;
- `modifyF` is `lens` specialised to produce simple lenses;^{3}
- `modifyBarF` is a lens with type `Lens Foo Foo Int Int`;
- `(^.)` is flipped `get'`;
- `set` is `setGen`;
- `over` is `modifyGen` further generalised.^{4}

`lens`

uses type synonyms liberally, so those correspondences are not immediately obvious from the signatures in the documentation. Digging a little deeper, however, shows that

`ASetter`

is merely
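As of recent `lens` versions:

```
type ASetter s t a b = (a -> Identity b) -> s -> Identity t
```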

Analogously, we have
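For the getter side:

```
type Getting r s a = (a -> Const r a) -> s -> Const r s
```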

Behind the plethora of type synonyms - `ASetter`

, `Getting`

, `Fold`

, `Traversal`

, `Prism`

, `Iso`

and so forth - there are different choices of functors,^{5} which make it possible to capture many different concepts as variations on lenses. The variations may be more general or less general than lenses; occasionally they are neither, as the overlap is just partial. The fact that we can express so much through parametrization of functors is key to the extraordinary breadth of `lens`

.

This exposition is primarily concerned with building lenses, and so very little was said about how to use them. In any case, we have seen enough to understand why lenses are also known as functional references. By unifying getters and setters, lenses provide a completely general vocabulary to point at parts of a whole.

Finally, a few words about composition of lenses are unavoidable. One of the great things about lenses is that they are just functions; even better, they are functions with signatures tidy enough for them to compose cleanly with `(.)`

. That makes it possible to compose lenses independently of whether you intend to get, set or modify their targets. Here is a quick demonstration using the tuple lenses from `lens`

.

```
GHCi> :m
GHCi> :m +Control.Lens
GHCi> ((1,2),(3,4)) ^. _1 . _2
2
GHCi> set (_1 . _2) 0 ((1,2),(3,4))
((1,0),(3,4))
```

A perennial topic in discussions about `lens`

is the order of composition of lenses. They are often said to compose backwards; that is, backwards with respect to composition of record accessors and similar getters. For instance, the getter corresponding to the `_1 . _2`

lens is `snd . fst`

. The claim that lenses compose backwards, or in the “wrong order”, however, is only defensible when talking about style, and not about semantics. That becomes clear after placing the signatures of a getter and its corresponding lens side by side.

```
GHCi> :t fst
fst :: (a, b) -> a
GHCi> :t _1 :: Lens' (a, b) a
_1 :: Lens' (a, b) a
      :: Functor f => (a -> f a) -> (a, b) -> f (a, b)
```

The getter takes a value of the source type and produces a value of the target type. The lens, however, takes a function from the target type and produces a function from the source type. Therefore, it is no surprise that the order of composition differs, and the order for lenses is entirely natural. That ties in closely to what we have seen while implementing lenses. While we can squeeze lenses until they give back getters, it is much easier to think of them as generalised modifiers.

We are not quite as free when it comes to pairing getters and setters. Beyond the obvious need for getter and setter to start from values of the same type, they should behave sanely when composed. In particular, the following should hold:

```
get' modifier (setGen modifier y x) ≡ y
setGen modifier (get' modifier x) x ≡ x
setGen modifier z (setGen modifier y x) ≡ setGen modifier z x
```

“What about the `forall`?” you might ask. Are we cheating? Not quite. The `forall` is there to control how `f` is specialised when lens combinators are used. The underlying issue does not affect our reasoning here. If you are into type system subtleties, there were a few interesting comments about it in the reddit thread for this post.

`Lens' s a` or `Lens s s a a`, as opposed to `Lens s t a b`.

Yes, even further; from taking modifying functions to taking modifying profunctors. The difference need not worry us now.

And in some cases of profunctors to replace the function type constructor.
