For some reason I need to run Haskell programs on a RHEL5 machine. Sadly, the system packages are too old to support the latest binary release, and I don't have root privileges to install any software, so I have to build it from source.

Here I want to share my experience of building ghc-7.8 from source on a rather old operating system.

I also want to say thanks to Tim Docker, who posted an article without which this post might not have been possible.

Basically, you first need a working ghc binary before you can build a newer one yourself. Please refer to the GHC Wiki for details.

We will begin with ghc-6.8.3, then bootstrap ghc-6.10, ghc-7.0, ghc-7.4, and finally ghc-7.8.

We need all the files above. For items marked with `(latest)`, just downloading the latest version instead should also work.

I assume you have set the environment variable `SOURCES` (in your `.bashrc`) to the directory containing all your tarball files (it is recommended to use an absolute path here). This directory will also be our working directory for building everything.

**ghc**

If any of the following links do not work, go to this download page and download them accordingly.

**gcc**

Choose your preferred GCC mirror site on this page, and download the corresponding version from the directory `releases/gcc-x.y.z`. All links above are based on the ConcertPass.com mirror server.

binutils-2.25.tar.bz2 (latest)

Checksums:

```
$ (cd $SOURCES; sha256sum *)
22defc65cfa3ef2a3395faaea75d6331c6e62ea5dfacfed3e2ec17b08c882923  binutils-2.25.tar.bz2
97ed664694b02b4d58ac2cafe443d02a388f9cb3645e7778843b5086a5fec040  gcc-4.4.3.tar.bz2
2020c98295856aa13fda0f2f3a4794490757fc24bcca918d52cc8b4917b972dd  gcc-4.9.2.tar.bz2
d66a8e52572f4ff819fe5c4e34c6dd1e84a7763e25c3fadcc222453c0bd8534d  ghc-6.10.4-src.tar.bz2
07d06efa9222638c80b72ceda957d3c7bcbdc2665a9738dd6ef6c0751a05e8ed  ghc-6.8.3-x86_64-unknown-linux.tar.bz2
156169c28dab837922260a0fbfcc873c679940d805a736dc78aeb1b60c13ccd9  ghc-7.0.3-src.tar.bz2
f2ee1289a33cc70539287129841acc7eaf16112bb60c59b5a6ee91887bfd836d  ghc-7.4.2-src.tar.bz2
59e3bd514a1820cc1c03e1808282205c0b8518369acae12645ceaf839e6f114b  ghc-7.8.4-src.tar.bz2
```

Note: I can only confirm that I got a working ghc-7.8.4 by following these steps in this specific order. It is likely that these early versions of ghc have trouble with the latest gcc and binutils.

In order to get a clean installation, I isolate each package by using a different prefix directory. It is still okay to share the same prefix directory if you want; the major difference is that if you share a prefix directory between packages, old files might not be overwritten, so they remain there and become junk.

```
$ cd $SOURCES
$ tar -xf ghc-6.8.3-x86_64-unknown-linux.tar.bz2
$ cd ghc-6.8.3
$ mkdir prefix
$ ./configure --prefix $SOURCES/ghc-6.8.3/prefix
$ make install
```

From now on, whenever you see a `make` command without an explicit goal, feel free to pass `-j N` to `make` to speed up compilation (where `N` is usually the number of available CPU cores plus one). If something fails, just rerun `make` (maybe with a smaller `N`); do a `make clean` only if you keep getting the same error message across different runs.

```
$ cd $SOURCES
$ tar -xf ghc-6.10.4-src.tar.bz2
$ cd ghc-6.10.4
$ mkdir prefix
$ PATH="$SOURCES/ghc-6.8.3/prefix/bin:$PATH" ./configure --prefix $SOURCES/ghc-6.10.4/prefix
$ PATH="$SOURCES/ghc-6.8.3/prefix/bin:$PATH" make
$ make install
```

Now you need to install a newer version of gcc. The gcc shipped with RHEL5 is gcc-4.1.2, which is too old to compile ghc-7.0.

The reason why we do not use the latest version of gcc is that it cannot compile ghc-7.0 either…

Copy a script from gcc-4.9.2 to gcc-4.4.3:
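Assuming the script in question is `contrib/download_prerequisites` (present in gcc-4.9.2 but not in gcc-4.4.3, and edited below to disable the `GRAPHITE_LOOP_OPT` downloads), the commands are along these lines:

```
$ cd $SOURCES
$ tar -xf gcc-4.4.3.tar.bz2
$ tar -xf gcc-4.9.2.tar.bz2
$ cp gcc-4.9.2/contrib/download_prerequisites gcc-4.4.3/contrib/
```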

And edit the one under gcc-4.4.3 (feel free to replace `$EDITOR` with whatever text editor you like):
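Again assuming the script is `contrib/download_prerequisites`:

```
$ $EDITOR $SOURCES/gcc-4.4.3/contrib/download_prerequisites
```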

Find the line `GRAPHITE_LOOP_OPT=yes` and comment it out:

`# GRAPHITE_LOOP_OPT=yes`

Go into the gcc-4.4.3 directory; it's time to retrieve some dependencies:
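Mirroring the gcc-4.9.2 step later in the post, this is presumably:

```
$ cd $SOURCES/gcc-4.4.3
$ ./contrib/download_prerequisites
```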

Now create another directory under gcc-4.4.3: to compile gcc, your build directory needs to be different from the source directory. Then proceed with the usual routine; remember to set the prefix and enable only the necessary languages. (P.S. `--enable-languages=c,c++` is necessary because bootstrapping newer versions of gcc requires both `gcc` and `g++`.)

```
$ mkdir prefix
$ mkdir gcc-build
$ cd gcc-build
$ ../configure --prefix $SOURCES/gcc-4.4.3/prefix --enable-languages=c,c++
$ make
$ make install
```

And now gcc-4.4.3 should be ready, so we can set it up permanently. Put the following line at the bottom of your `~/.bashrc` file:

```
# if you want to unset $SOURCES afterwards,
# remember to replace it with the real location.
export PATH="$SOURCES/gcc-4.4.3/prefix/bin:$PATH"
```

Now verify:

```
$ source ~/.bashrc
$ gcc --version
gcc (GCC) 4.4.3
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```

```
$ cd $SOURCES
$ tar -xf ghc-7.0.3-src.tar.bz2
$ cd ghc-7.0.3
$ mkdir prefix
$ PATH="$SOURCES/ghc-6.10.4/prefix/bin:$PATH" ./configure --prefix $SOURCES/ghc-7.0.3/prefix
$ PATH="$SOURCES/ghc-6.10.4/prefix/bin:$PATH" make
$ make install
```

```
$ cd $SOURCES
$ tar -xf ghc-7.4.2-src.tar.bz2
$ cd ghc-7.4.2
$ mkdir prefix
$ PATH="$SOURCES/ghc-7.0.3/prefix/bin:$PATH" ./configure --prefix $SOURCES/ghc-7.4.2/prefix
$ PATH="$SOURCES/ghc-7.0.3/prefix/bin:$PATH" make
$ make install
```

This step might not be necessary. But some packages from Hackage (for example cipher-aes) would fail to build because of the outdated binutils. So before we get to 7.8.4, I want to make sure we have a newer toolchain, which might also let us benefit from newer optimization techniques.

First we build the latest version of gcc from source.

```
$ source ~/.bashrc
$ cd $SOURCES
# should have been extracted when we were compiling gcc-4.4.3
$ cd gcc-4.9.2
$ ./contrib/download_prerequisites
$ mkdir prefix
$ mkdir gcc-build
$ cd gcc-build
$ ../configure --prefix $SOURCES/gcc-4.9.2/prefix --enable-languages=c,c++
$ make
$ make install
```

Then we can change the last few lines added to our `~/.bashrc` to:
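Presumably this replaces the gcc-4.4.3 entry with gcc-4.9.2, matching the cleaned-up `.bashrc` shown at the end of the post:

```
# if you want to unset $SOURCES afterwards,
# remember to replace it with the real location.
export PATH="$SOURCES/gcc-4.9.2/prefix/bin:$PATH"
```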

Then we build binutils:

```
$ source ~/.bashrc
$ cd $SOURCES
$ tar -xf binutils-2.25.tar.bz2
$ cd binutils-2.25
$ mkdir prefix
$ ./configure --prefix $SOURCES/binutils-2.25/prefix
$ make
$ make install
```

And add it to `PATH` by appending these lines to your `~/.bashrc`:
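Matching the cleaned-up `.bashrc` shown at the end of the post, the appended line is presumably:

```
export PATH="$SOURCES/binutils-2.25/prefix/bin:$PATH"
```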

And don't forget to source your `bashrc`:
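As before:

```
$ source ~/.bashrc
```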

```
$ cd $SOURCES
$ tar -xf ghc-7.8.4-src.tar.bz2
$ cd ghc-7.8.4
$ mkdir prefix
$ PATH="$SOURCES/ghc-7.4.2/prefix/bin:$PATH" ./configure --prefix $SOURCES/ghc-7.8.4/prefix
$ PATH="$SOURCES/ghc-7.4.2/prefix/bin:$PATH" make
$ make install
```

**`.bashrc`**

By now you might have a quite messy `$PATH`. Don't worry, here we will clean it up. The last few lines in your `.bashrc` will look like:

```
export SOURCES="${HOME}/sources"
# if you want to unset $SOURCES afterwards,
# remember to replace it with the real location.
export PATH="$SOURCES/gcc-4.9.2/prefix/bin:$PATH"
export PATH="$SOURCES/binutils-2.25/prefix/bin:$PATH"
```

Here we add one line for ghc-7.8.4, and replace `$SOURCES` with the corresponding value (here I assume `SOURCES="${HOME}/sources"`, but you need to change it accordingly).

```
#export SOURCES="${HOME}/sources"
# if you want to unset $SOURCES afterwards,
# remember to replace it with the real location.
export PATH="${HOME}/sources/gcc-4.9.2/prefix/bin:$PATH"
export PATH="${HOME}/sources/binutils-2.25/prefix/bin:$PATH"
export PATH="${HOME}/sources/ghc-7.8.4/prefix/bin:$PATH"
```

Now everything is done. Just log out and log back in, and you should have ghc-7.8.4 ready.

```
# login
$ ld --version
GNU ld (GNU Binutils) 2.25
Copyright (C) 2014 Free Software Foundation, Inc.
This program is free software; you may redistribute it under the terms of
the GNU General Public License version 3 or (at your option) a later version.
This program has absolutely no warranty.
$ gcc --version
gcc (GCC) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.8.4
```

I hope this article might be helpful in case you want to do something similar. I like the idea of bootstrapping, but sometimes it is painful to walk through the process, especially when the working binary version is too old and you need to bootstrap multiple times.

Lesson learned: backward compatibility is not always reliable. If you want some old software to work, preferring tools from roughly the same era as that software over the latest versions usually helps.

I've been using xmonad for a while. One annoyance I found when enabling EWMH (see the resources in the last section of this post if you don't know what EWMH does) is that pidgin (and maybe some other programs as well) will try to request window focus whenever xmonad starts or restarts, resulting in your focus moving to the workspace that pidgin belongs to. Today I spent some time investigating this issue and would like to share my workaround in this post.

The problem arises immediately after xmonad starts or restarts. I guess there might be some interaction that notifies running X11 programs that xmonad is up, and some of these programs respond to this notification by sending requests to xmonad. The solution is simple: **we make xmonad ignore focus-requesting messages when it has just started**.

This can be done by modifying the xmonad state to include the time when it was started, and wrapping the EWMH event handler to check whether a message arrives “too early” before performing the action. By “too early” I mean that the time difference between xmonad's startup and the moment the message is received is below a threshold (just leave a few seconds as the threshold to allow your xmonad startup script to be fully executed).

First we use `XState` to make xmonad record its startup time. This can be done easily using `XMonad.Util.ExtensibleState` from xmonad-contrib.

What we need is a new data type that we can put into `XState`. And the data we need is just the startup time:

```
{-# LANGUAGE DeriveDataTypeable #-}
import XMonad.Core
import qualified XMonad.Util.ExtensibleState as XS
import Data.Time.Clock
import Data.Typeable
import Data.Time.Calendar

newtype StartupTime =
    StartupTime UTCTime
    deriving Typeable

instance ExtensionClass StartupTime where
    initialValue = StartupTime $ UTCTime d dt
      where
        d = fromGregorian 1970 1 1
        dt = secondsToDiffTime 0
```

We don't have to deal with `extensionType`, as it defaults to `StateExtension`. Of course we want the startup time updated after an xmonad restart; in fact we will never use `initialValue`, since we update it immediately after xmonad starts. But we need to put some value here to make the compiler happy. We could have put an `undefined` or `error "msg"` here, but I chose the safest way.

Next, we modify `startupHook` to record the startup time:

```
import Control.Applicative

myStartupHook :: X ()
myStartupHook = do
    -- <your original startupHook, can leave this part blank>
    StartupTime <$> liftIO getCurrentTime >>= XS.put
```

EWMH is enabled by passing your config to the `ewmh` function, which adds a few more hooks to do its job. We make our own `myEwmh` to take care of the event handler:

```
import Data.Monoid
import XMonad.Hooks.EwmhDesktops

myEwmh :: XConfig a -> XConfig a
myEwmh c = c { startupHook     = startupHook c <> ewmhDesktopsStartup
             , handleEventHook = handleEventHook c <> myEwmhDesktopsEventHook
             , logHook         = logHook c <> ewmhDesktopsLogHook
             }
```

And finally we wrap `ewmhDesktopsEventHook` in our `myEwmhDesktopsEventHook` to add an extra guard: if the request is sent right after xmonad starts (less than 5 seconds in my code), the request gets ignored:

```
myEwmhDesktopsEventHook :: Event -> X All
myEwmhDesktopsEventHook e@(ClientMessageEvent {ev_message_type = mt}) = do
    a_aw <- getAtom "_NET_ACTIVE_WINDOW"
    curTime <- liftIO getCurrentTime
    StartupTime startupTime <- XS.get
    if mt == a_aw
       && curTime `diffUTCTime` startupTime <= 5.0
      then return (All True)
      else ewmhDesktopsEventHook e
myEwmhDesktopsEventHook e = ewmhDesktopsEventHook e
```

Now you can recompile xmonad and enjoy the change.

**About my config**

My xmonad config is on GitHub, and the changes related to this workaround are visualized here.

**About EWMH**

Googling `EWMH` would just give you a “tl;dr” specification which is not useful for most users. Simply put, `EWMH` allows your X11 programs to request window focus. One of my old posts and its response provide a slightly simpler explanation.

In the previous post, we've discussed a little bit about comonads. But these abstract concepts are useless without practical usage.

Forget about comonads and Conway's Game of Life for a while. Today I want to show you an interesting example, which will give you some idea of what it means to say “the value of one place depends on the values of its neighborhood”. This example will be connected to the concept of comonads in a future post.

This example simulates a simple wave propagation by ASCII art.

The “world” is represented by a list of characters, each of which has a special meaning, namely:

- `<space>`: just the medium (air, water, etc.)
- `>`: a wave moving right.
- `<`: a wave moving left.
- `X`: two waves moving in opposite directions meeting each other.
- `*`: a wave source which will disappear at the next moment, producing waves in both directions.

The simulation of wave propagation will be achieved by continuously printing the next generation of the “world”.

For example, if we are given the string `"* > * * < **<"`, the output would be the following:

```
* > * * < **<
> >< > < >< <<X>
> <> X <> <<< >>
X >< >< X<< >>
< > <> <> <<X >
< X X X<< >
< < >< ><<X >
...
```

I believe it's easy to see the pattern, and you can write a function to describe the rule. At first glance you might think the state of a fixed position in the “world” depends on its own previous state and the previous states of its neighbors. But it turns out the previous state of the cell itself isn't necessary; we just leave it as an argument (simply because I think it looks better).

```
waveRule :: Char -> Char -> Char -> Char
waveRule _ l r
  | fromL && fromR = 'X'
  | fromL = '>'
  | fromR = '<'
  | otherwise = ' '
  where
    fromL = l `elem` ">*X"
    fromR = r `elem` "<*X"
```

(This part is not about zippers or comonads, feel free to skip it) It is not hard to come up with a solution involving only basic list manipulations. I think it would be a good exercise. My solution can be found here.
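Along those lines, here is one possible list-based sketch of a single step (my own, not necessarily the linked solution): compute each cell from its old neighbors by zipping the world against shifted copies of itself, padding with the medium character. With a fixed-width list, waves simply drift out of view at the boundaries.

```
-- A list-based step function: cell i is computed from its old left
-- neighbor (via the left-shifted copy) and its old right neighbor (via
-- the right-shifted copy), with ' ' (the medium) padding both boundaries.
waveRule :: Char -> Char -> Char -> Char
waveRule _ l r
  | fromL && fromR = 'X'
  | fromL = '>'
  | fromR = '<'
  | otherwise = ' '
  where
    fromL = l `elem` ">*X"
    fromR = r `elem` "<*X"

step :: String -> String
step w = zipWith3 waveRule w (' ' : w) (tail w ++ " ")
```

Iterating `step` and printing each generation animates the world within the fixed view.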

The output should be:

```
* > * * < **<
> >< > < >< <<X>
> <> X <> <<< >
X >< >< X<<
< > <> <> <<X
< X X X<< >
< < >< ><<X >
< <> <X< > >
< < X<<> > >
< < <<X > >
< < <<< > > >
< <<< > > >
< <<< > > >
< <<< > >
<<< > >
<<< > >
<<< >
<< >
< >
```

Now suppose this 2D world is infinite on both directions, and we introduce the obvious coordinate system into this world. We will no longer see the whole world, but only a portion of it.

Now we are given two coordinates, and we can only observe the world in between these coordinates, can we still work out the problem?

It turns out pretty difficult to reuse the previous approach because:

Lists can only be infinite in one direction while we need a world representation that can be infinite in both directions so that we are allowed to theoretically view the world in between two arbitrary coordinates.

Given a world and its previous generation, it is hard to find the “old cell state” or “old neighboring cell states” unless we introduce something like coordinates to establish the correspondence between cells.

We don't know where to start generating the next iteration as the world can be infinite in both directions. We can't simply walk through it from left to right, which might not terminate.

I'd recommend using a list zipper to overcome these problems.

Zippers are a way to efficiently walk back and forth or update values in certain data structures like lists and trees. Those data structures can usually be traversed in a deterministic way.

A zipper of a certain data structure usually consists of two parts: a stack of data contexts (each data context is an incomplete data structure with a “hole” in it), and a value that can fill a “hole”.

Here we are only interested in list zippers. But there are plenty of useful tutorials about zippers like this one from LYAH.

To explain what we just said about zippers, we take a random list `[1,2,3]` (you should recall that `[1,2,3]` is just shorthand for `1:2:3:[]`) and traverse it to demonstrate list zippers.

| Stack | Focus | Zipper = (Stack, Focus) |
|---|---|---|
| `[]` | `[1,2,3]` | `([],[1,2,3])` |
| `[1:<hole>]` | `[2,3]` | `([1],[2,3])` |
| `[2:<hole>, 1:<hole>]` | `[3]` | `([2,1],[3])` |
| `[3:<hole>, 2:<hole>, 1:<hole>]` | `[]` | `([3,2,1],[])` |

A list zipper changes as we walk through the data structure; the table above shows how the list zipper changes as we walk through a list from left to right. Note that since the data context for a list is always something like `(<value>:<hole>)`, we can simply represent it as `<value>`. That is why a list zipper is usually represented as a pair of two lists, or to be more precise, a stack and a list.

The data context stack makes it possible to traverse backwards. Whenever we want to do so, we pop one data context from the stack and fill in its hole using the current focus. For example, to go backwards when the list zipper is `([2,1],[3])`, we pop the data context to get `2:<hole>`, fill in the hole with our current focus, namely `[3]`, and we end up with `([1],2:[3])`, whose focus has changed from `[3]` to `[2,3]`.

We can also change the value at the focus efficiently. For example, when the list zipper is `([2,1],[3])`, we can modify the focus to `[4,5,6]` and then keep going backwards to recover the list. We end up with `1:2:[4,5,6]`, and as you can see, the part we were focusing on (namely `[3]`) is now replaced by `[4,5,6]`.

With this introduction to zippers, I can now explain how list zippers solve our problem.

List zippers can be infinite in both directions by using a simple trick: make the context stack infinite. It is an important observation that the stack in a list zipper is actually the reversed left part of the list, and the focus the right part. By making both the reversed left part and the right part infinite, we end up with a list zipper that is infinite in both directions.

It's quite easy to find the “old cell state” and “old neighboring cell states” given a list zipper. The old cell is the `head` of the current focus, and the cells right next to it are the top of the stack and the second element of the current focus, respectively. Therefore, for any given list zipper, we can yield the next cell state of the `head` of the current focusing list.

We don't need to worry about where to start generating the next world: given a list zipper, we know how to iteratively move the focus to the left or to the right. So as long as we pin one position to the origin point of the world, we can take steps based on the original zipper, moving either left or right, to focus on the coordinate in question. And a list zipper contains sufficient information to calculate the next cell state in question.

First let's begin with the zipper implementation. Since the world can never be empty, it is totally safe to break the focusing data (`[a]`) into its components (`(a,[a])`). By rearranging the order of arguments (`([a],[a])` … `([a],(a,[a]))` … `LZipper a [a] [a]`) we have our final version of `LZipper` here:

```
{-# LANGUAGE DeriveFunctor #-}

-- | LZipper <current> <left (reversed)> <right>
data LZipper a = LZipper a [a] [a]
    deriving (Show, Functor)
```

Here the old focus would be `<current>:<right>`, but we break the focusing list apart to make it look nicer: a list zipper in our case now consists of three parts, the current focus `<current>`, everything to the left of it `<left (reversed)>`, and everything to the right of it `<right>`.

With the list zipper definition given, it's easy to define basic operations:

```
-- | shift left and right
zipperMoveL, zipperMoveR :: LZipper a -> LZipper a
zipperMoveL (LZipper a (x:xs') ys) = LZipper x xs' (a:ys)
zipperMoveL _ = error "Invalid move"
zipperMoveR (LZipper a xs (y:ys')) = LZipper y (a:xs) ys'
zipperMoveR _ = error "Invalid move"
-- | get the current focusing element
current :: LZipper a -> a
current (LZipper v _ _) = v
```

To initialize the world we need to convert from a list of cells to a zipper which represents the infinite world. This can be achieved by padding the list to make it infinite in both directions:

```
-- | initial world to a zipper
rangeToZipper :: a -> [a] -> LZipper a
rangeToZipper v wd = case wd of
    []   -> LZipper v pad pad
    x:xs -> LZipper x pad (xs ++ pad)
  where
    pad = repeat v
```

And to view a portion of the infinite world, we take as arguments two coordinates and a zipper (assumed to focus on the origin), move the zipper to the position specified by one of the coordinates, and then repeatedly extract the value at the focus while moving the zipper toward the other coordinate.

```
-- | a view range (coordinates), a zipper to a portion of the world
zipperToRange :: (Int, Int) -> LZipper a -> [a]
zipperToRange (i,j) zp = fmap current zippers
  where
    zippers = take (j - i + 1) (iterate zipperMoveR startZ)
    startZ = zipperMoveFocus i zp

zipperMoveFocus :: Int -> LZipper a -> LZipper a
zipperMoveFocus t z
  | t > 0 = zipperMoveFocus (t-1) (zipperMoveR z)
  | t < 0 = zipperMoveFocus (t+1) (zipperMoveL z)
  | otherwise = z
```
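For example, viewing five cells centered on a lone wave source:

```
λ> zipperToRange (-2,2) (rangeToZipper ' ' "*")
"  *  "
```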

We modify the `waveRule` function above so that it produces the next cell state from a zipper. The nice thing about our zipper is that both neighboring old cell states can easily be found by pattern matching on the arguments.

```
waveRule :: LZipper Char -> Char
waveRule (LZipper _ (l:_) (r:_))
  | fromL && fromR = 'X'
  | fromL = '>'
  | fromR = '<'
  | otherwise = ' '
  where
    fromL = l `elem` ">*X"
    fromR = r `elem` "<*X"
waveRule _ = error "null zipper"
```

And then we rush to complete the main function, assuming `nextGen :: LZipper Char -> LZipper Char`, a function that takes a zipper and produces the zipper of the next generation, has been implemented for us.

```
nextGen :: LZipper Char -> LZipper Char
nextGen = undefined

main :: IO ()
main = mapM_ (putStrLn . zipperToRange (-20,40))
             (take 20 (iterate nextGen startZ))
  where
    startStr = "* > * * < **<"
    startZ = rangeToZipper ' ' startStr
```

In the code above, we take 20 generations and view the world within the range `(-20,40)`.

The only thing missing in our implementation is the `nextGen` function; this is also where the magic happens. Let's implement it step by step.

By taking its type signature into account, we can write down the shape of the body:

```
nextGen :: LZipper Char -> LZipper Char
nextGen z = LZipper c' ls' rs'
  where
    c' = undefined
    ls' = undefined
    rs' = undefined
```

And it's not hard to figure out what `c'` is – the new cell state corresponding to the current focus:

```
nextGen :: LZipper Char -> LZipper Char
nextGen z = LZipper c' ls' rs'
  where
    c' = waveRule z
    ls' = undefined
    rs' = undefined
```

To figure out `ls'`, we first try to figure out its first element, namely `l'`:

```
nextGen :: LZipper Char -> LZipper Char
nextGen z = LZipper c' ls' rs'
  where
    c' = waveRule z
    l' = undefined
    ls' = l' : undefined
    rs' = undefined
```

Since `l'` corresponds to the direct left neighbor of the current focus, we can simply move the zipper to calculate its new state:

```
nextGen :: LZipper Char -> LZipper Char
nextGen z = LZipper c' ls' rs'
  where
    c' = waveRule z
    l' = waveRule . zipperMoveL $ z
    ls' = l' : undefined
    rs' = undefined
```

Comparing `c'` and `l'`, we can find the pattern:

```
nextGen :: LZipper Char -> LZipper Char
nextGen z = LZipper c' ls' rs'
  where
    c' = waveRule z
    l' = waveRule . zipperMoveL $ z
    ls' = [ waveRule . zipperMoveL $ z
          , waveRule . zipperMoveL . zipperMoveL $ z
          , ... ]
    rs' = undefined
```

And the same pattern holds for `rs'`: we just keep moving the zipper to the left or to the right, and produce new states by applying `waveRule` to it. So we end up with:

```
nextGen :: LZipper Char -> LZipper Char
nextGen z = LZipper c' ls' rs'
  where
    c' = waveRule z
    ls' = map waveRule . tail $ iterate zipperMoveL z
    rs' = map waveRule . tail $ iterate zipperMoveR z
```

Now the whole program should be complete. If you run it, you will get something like this:

```
* > * * < **<
< > >< > < >< <<X>
< > <> X <> <<< >>
< X >< >< X<< >>
< < > <> <> <<X >>
< < X X X<< > >>
< < < >< ><<X > >>
< < < <> <X< > > >>
< < < < X<<> > > >>
< < < < <<X > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
< < < < <<< > > > > >>
```

Let's call it a day here. In the next part we'll go back to comonads and their relationship with zippers. And hopefully we will finally see the implementation of Conway's Game of Life.

You can find my complete code from gist.

(Updated on Aug 22, 2014: better title, added link)

Let's go beyond monads; today I want to show you how to play with comonads. I don't understand the theory behind comonads, but programming with them is quite simple. Basically, I think **it's just an algebraic structure dealing with data context.**

In other words, if you find yourself dealing with some recursive data structure, and in that data structure, the value of one place depends on the value of its neighborhoods (some “data context”, not sure if I'm using the right terminology), you probably want to take a look at comonad.

To do something interesting with comonads, the first thing that came to my mind was the Game of Life. It's a simple “game” that involves no players. At the beginning, a world is represented as a 2D array of booleans. Then this world evolves following some simple rules.

Looking at these rules, you will find that the next state of a given cell in the world is merely determined by the previous states of itself and its neighborhoods. So I feel Game of Life would be a perfect example for comonad.

Recall what is a monad in Haskell code:
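Given the remarks below (a `Functor` constraint, and `join` in place of `(>>=)`), the definition is presumably along these lines (hiding `Prelude`'s `Monad` so the sketch stands on its own):

```
import Prelude hiding (Monad, return)

-- a monad defined via return and join rather than (>>=),
-- with Functor as an explicit superclass
class Functor m => Monad m where
    return :: a -> m a
    join   :: m (m a) -> m a
```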

I lied here, because to define a `Monad`, Haskell does not require you to make it an instance of `Functor` in the first place. But trust me, this requirement will be enforced in the future.

In addition, I don't define `(>>=)` but instead a `join`. Since `m >>= f = join (fmap f m)`, and `m` should also be an instance of `Functor`, this monad definition should be almost equivalent to the one found in the base library.

Whenever we “co-” something, it means to flip every `->` into `<-`. So we know what a comonad would look like:

```
class Functor w => Comonad w where
    extract   :: w a -> a
    duplicate :: w a -> w (w a)
    extend    :: (w a -> b) -> w a -> w b
    (=>>)     :: w a -> (w a -> b) -> w b
```

Here you can see that by flipping the type signatures of `return` and `join`, we get `extract` and `duplicate` as results. It's also handy to have `extend` and `(=>>)`, which can both be defined in terms of `fmap` and `duplicate`, just like the `(=<<)` and `(>>=)` functions for monads. Let's try to figure out the implementation of `extend`.

The first step is to bind the arguments of `extend` to some names (here `f` and `w1`), so we can list all the tools available:

```
extract   :: forall a . w a -> a
duplicate :: forall a . w a -> w (w a)
fmap      :: forall a b . (a -> b) -> w a -> w b
f  :: w a -> b
w1 :: w a
-- goal:
extend f w1 :: w b
```

It's tempting to try `f w1 :: b`, but it is a dead end. Instead, because `f` “removes a layer of `w`”, we can try to add another layer before using `f`:

```
duplicate w1 :: w (w a)
fmap :: (a' -> b') -> w a' -> w b'
-- where a' = w a, b' = b
fmap f (duplicate w1) :: w b
```

Therefore `extend f w1 = fmap f (duplicate w1)`, or more point-freely: `extend f = fmap f . duplicate`.

In my opinion, when trying to understand monads, it's more straightforward to begin by understanding `fmap`, `return` and `join`, but in practice `(>>=)` and `(=<<)` come in handy. And the same thing happens with comonads: `extract` and `duplicate` are easy, and `extend` and `(=>>)` come in handy.

Recall that in the intro section I said comonads deal with data context. More precisely, (IMHO) a comonad is a type constructor that adds to a data structure the ability to focus on one particular value and the ability to move that focus.

`extract` means to extract the value at the current focus; this can be seen from its type signature `extract :: w a -> a`. `duplicate` might not be that straightforward to understand: it means to replace every value in the data structure with its corresponding context.

To make an analogy, assume you are inside a big building where maps indicate the spot you are currently standing at. If this building were a comonad, and the focus were on you, I could use `extract` to directly find you. In addition, there are some places with maps. These maps provide not only the whole structure of the building, but also the place where each map itself is located. So if you are inside the building near a map, you can take a picture of the map and send it to me, and then I can locate you using `extract`.

Now suppose there is a building without maps. What `duplicate` does is just like drawing a map for each place inside the building and putting the maps in their right places. So `w` is the building “comonad”, `a` is a single place inside the building, and `duplicate :: w a -> w (w a)` draws the maps and puts them in their corresponding places.

And just like `(>>=)` for monads, `(=>>)` is handier to use for comonads. Looking at its signature `(=>>) :: w a -> (w a -> b) -> w b`, the most interesting part is its function argument, of type `w a -> b`. This argument is exactly where rules come into play: “given data with a focus (`w a`), determine what the next value of the current focus will be (`b`).”

I'd like to call it a day here. In the future posts, we'll see these functions in action.

Since I'll be busy for the next few weeks, I can only devote a little brain power to easy programming tasks in my spare time. Therefore, these days I've started working on exercism.

It's a free website that only requires a GitHub account. It offers simple programming problems with test suites. You can solve a problem and submit your source code to the website so that other users can look at your code and comment on it.

The main goal is to improve the readability of your code through comments from others and iterative refinements of your own.

In a word, it's an excellent website to learn things.

My account is here, feel free to give me comments.

See you there.

In Semantic editor combinators, Conal has shown us a simple but powerful trick of walking into a portion of data and then modifying it.

When I saw this simple idea, I persuaded myself to take it for granted and tried it out. Even though it works well in practice, I'd like to explain to myself what's happening, which is the motivation for this article.

I'll implement `first`, `second`, `result` and other combinators step by step, and come up with a way of walking into user-defined `data`, which is easy but not presented in Conal's post.

First we need to think about what the types of `first` and `second` would be:

- When we reach the portion of data, we need to modify it, so we need a function `f :: a -> b`.
- We also need a pair of data `(x,y) :: (c,d)` as this editor's input.
- For `first`, we want type `c = a`, and the return value should be of type `(b,d)`.
- For `second`, we want type `d = a`, and the return value should be of type `(c,b)`.

Putting these together, we end up with the type signatures for `first`

and `second`

:
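Presumably something like:

```
first  :: (a -> b) -> (a,d) -> (b,d)
second :: (a -> b) -> (c,a) -> (c,b)
```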

Introduce `f`

and `(x,y)`

as the two arguments, and the implementation is straightforward:

```
first :: (a -> b) -> (a,d) -> (b,d)
first f (x,y) = (f x, y)
second :: (a -> b) -> (c,a) -> (c,b)
second f (x,y) = (x, f y)
```

Let's bring up GHCi for some tests:

```
λ> first (const 1) ("foo","bar") -- (1, "bar")
(1,"bar")
λ> second (* 20) (Just 1, 1) -- (Just 1, 20)
(Just 1,20)
```

There are two ways of editing a function: we can either modify the return value (the `result`

combinator), or modify the argument (the `argument`

combinator).

`result`

combinator

Let's think about the type of `result`

:

- How to modify a function is given by `f :: a -> b`
- The function we want to edit is `g :: c -> d`
- We want to modify the return value of `g`, therefore `d = a`.
- Then the resulting function should be of type `c -> b`

We know that `->`

is right-associative, (e.g. `p -> q -> r`

and `p -> (q -> r)`

are equivalent) so we can also think `result`

as a function that accepts 3 arguments, and its type signature becomes:

Introduce `f`, `g`, and `x` for these three arguments, and we want a value of type `b`.

- To produce a value of `b`, we might need `f :: a -> b`.
- To produce a value of `a`, we might need `g :: c -> a`.
- `x` is of type `c`.

Chaining these facts together, we arrive at the implementation of `result`

:
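A sketch of the pointful definition the derivation leads to:

```haskell
-- apply g to x, then apply f to the result
result f g x = f (g x)
```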

This is exactly the definition of `(.)`

, function composition.

So we end up with:
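A sketch, with the signature derived above and `(.)` as the implementation:

```haskell
result :: (a -> b) -> (c -> a) -> c -> b
result = (.)  -- post-compose the mutator onto the function being edited
```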

Test it:
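A session consistent with the definition above might look like:

```
λ> result (* 2) (+ 1) 10
22
λ> result show (+ 1) 10
"11"
```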

`argument`

combinator

Using the same approach, let's implement the `argument`

combinator:

- `f :: a -> b` to modify the argument
- `g :: c -> d`, the function to be modified
- `b = c`, so that the types of `f` and `g` match
- the resulting function should be of type `a -> d`.

Write down the type signature, introduce `f`

, `g`

, `x`

, the goal is to produce a value of type `d`

- To produce a value of `d`, we might need `g :: b -> d`.
- To produce a value of `b`, we might need `f :: a -> b`.
- `x` is of type `a`.

Therefore the definition of `argument`

is:
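A sketch (the transformations that follow start from this form):

```haskell
argument :: (a -> b) -> (b -> d) -> a -> d
argument f g x = g (f x)  -- modify the argument with f before g sees it
```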

To simplify this, we do some transformations:

```
g (f x)
> (g . f) x -- definition of (.)
> (.) g f x -- infix operator
> ((.) g f) x -- property of curried function
> ((flip (.)) f g) x -- flip
> flip (.) f g x -- rule of function application
```

Finally:

```
argument :: (a -> b) -> (b -> d) -> a -> d
-- point-free style of: argument f g x = flip (.) f g x
argument = flip (.)
```

Test it:
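A session sketch:

```
λ> argument (+ 1) (* 2) 10
22
λ> argument show length 10
2
```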

For now we have `first`

and `second`

to work with pairs, and `argument`

and `result`

with functions, but sometimes we want to go deeper inside data, say:

- Goal 1: we want
`2`

to be`5`

in`(7,(2,1))`

- Goal 2: we want
`a+b`

to be`(a+b)*2`

in`\a -> (a+1 ,\b -> (a+b, 1))`

Here I try to give some explanation of how to do this.

To modify `d`

in `(a,(c,d))`

, that means we need to first focus on `(c,d)`

, this can be achieved by `second`

, let’s pass a const function `const "Focus"`

to `second`

, so we can see which portion of the data we are focusing on:
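A session sketch:

```
λ> second (const "Focus") (7,(2,1))
(7,"Focus")
```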

We now know that the `(2,1)`

is passed to `toFocus`

, we can use a lambda expression to capture it:

Let's do something more interesting:

The observation is: inside the body of lambda expression `\x -> ...`

, we can do something to `(2,1)`

. That means we can apply semantic editor combinators on it!

Now assume we are dealing with `(2,1)`

, to reach the goal, we apply `const 5`

to its `fst`

part:
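A session sketch:

```
λ> first (const 5) (2,1)
(5,1)
```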

Now replace the body of `\x -> ...`

with what we want:
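A session sketch:

```
λ> second (\x -> first (const 5) x) (7,(2,1))
(7,(5,1))
```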

Done!

Before hurrying on to Goal 2, I'll show you some transformations:

```
second (\x -> first (const 5) x) d1
> second (first (const 5)) d1 -- eta reduction / point-free
> (second (first (const 5))) d1 -- rule of function application
> ((second . first) (const 5)) d1 -- definition of (.)
```

I don’t simplify it to `(second . first) (const 5) d1`

on purpose, and I’m going to show you why.

It might not be straightforward at first glance, since function composition is usually read “right to left”; that is, `(g . f)` means “first apply `f`, then on its result apply `g`”. But here, `second . first` can be read as “walk to the `snd` part, and then the `fst` part of it”.

My explanation is: **the function does compose from right to left, but what’s “carrying” with the composition is not the data**.

The observation is: no matter how many functions are composed together, the resulting function just takes one argument:

```
λ> let u = undefined
λ> :t u . u
u . u :: a -> c
λ> :t u . u . u . u
u . u . u . u :: a -> c
λ> :t u . u . u . u . u
u . u . u . u . u :: a -> c
```

So what is the argument taken by this resulting function? In our Goal 1, it is `const 5`, the “how we modify it” part. So although `(second . first) (const 5) d1` looks simple, to understand it, what you really need is: `((second . first) (const 5)) d1`

The observation is: **functions are composed from right to left and the mutator is buried more deeply as the composition goes**.

To see this, let's just pick up combinators randomly to form a chain of function composition, and check its type:

```
λ> :t first (const 5)
first (const 5) :: Num b => (a, d) -> (b, d)
λ> :t (second . first) (const 5)
(second . first) (const 5) :: Num b => (c, (a, d)) -> (c, (b, d))
λ> :t (result . second . first) (const 5)
(result . second . first) (const 5)
:: Num b => (c -> (c1, (a, d))) -> c -> (c1, (b, d))
λ> :t (second . result . second . first) (const 5)
(second . result . second . first) (const 5)
:: Num b => (c, c1 -> (c2, (a, d))) -> (c, c1 -> (c2, (b, d)))
λ> :t (first . second . result . second . first) (const 5)
(first . second . result . second . first) (const 5)
:: Num b =>
((c, c1 -> (c2, (a, d1))), d) -> ((c, c1 -> (c2, (b, d1))), d)
```

Reading the composition from right to left, **we are constructing / defining the “shape” of the data this combinator would like to work on**. In contrast, reading from left to right, **we are deconstructing the data until we reach the position that we want to modify**. These two views do not conflict with each other.

Therefore, IMHO, when we are thinking about how to walk into the interesting portion, it's helpful to build up the combinator from left to right. But to answer the question of why it works, we should instead read from right to left.

In addition, here is another interesting observation:

```
(<comb1> . <comb2>) (<comb3> . <mutator>) d1
<=> (<comb1> . <comb2> . <comb3>) <mutator> d1
```

The shape of a semantic editor combinator is always: `{comb} {mut} {dat}`

So if we see a combinator appear as the head of the `(.)`

chain in `{mut}`

, we can move it to the tail of the `(.)`

chain in `{comb}`

, and the other way around also holds true.

Recall Goal 2: we want `a+b`

to be `(a+b)*2`

in `\a -> (a+1, \b -> (a+b, 1))`

Based on the previous understanding, we need to:

- walk into `(a+1, \b -> (a+b, 1))`, using `result`
- walk into `\b -> (a+b, 1)`, using `second`
- walk into `(a+b, 1)`, using `result`
- walk into `(a+b)`, using `first`
- apply `(* 2)`

Try it out:

```
λ> let d2 = \a -> (a+1, \b -> (a+b, 1))
λ> let m2 = (result . second . result . first) (*2) d2
λ> (snd (d2 10)) 20
(30,1)
λ> (snd (m2 10)) 20
(60,1)
λ> fst ((snd (d2 10)) 20)
30
λ> fst ((snd (m2 10)) 20)
60
λ> (fst . ($ 20) . snd . ($ 10)) d2
30
λ> (fst . ($ 20) . snd . ($ 10)) m2
60
```

The last two expressions just rewrite the two right before them. Do you notice the symmetry between `fst . ($ 20) . snd . ($ 10)`

and `result . second . result . first`

?

I believe this is not an accident, but explaining it might be beyond my reach; this is what I have for now:

With the help of Lambdabot:

```
(fst . ($ 20) . snd . ($ 10)) (result . second . result . first) (*2) d2
> fst (snd (result (second (result (first 10)))) 20) (2 *) d2
```

I guess when `fst`

and `first`

meet together, or when `($ 10)`

and `result`

meet together, etc. they will be somehow “cancelled”. If someone knows how to finish this explanation, please let me know.

Conal has shown us `element`

, which walks into every element of a list.

It’s easy to work out its type signature:

- how to modify:
`f :: a -> b`

- data to be modified:
`[a]`

- resulting data:
`[b]`

and this is exactly `map`

.
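As the text says, this is just `map` under another name:

```haskell
-- element applies the modification to every element of a list
element :: (a -> b) -> [a] -> [b]
element = map
```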

Here I show you a few extra combinators:

*inHead* modifies the head of a list:

- how to modify: `f :: a -> a` (a list can only contain elements of the same type)
- data to be modified: `[a]`
- resulting data: `[a]`
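A sketch of one possible implementation (leaving an empty list untouched):

```haskell
inHead :: (a -> a) -> [a] -> [a]
inHead _ []     = []       -- nothing to modify in an empty list
inHead f (x:xs) = f x : xs
```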

*inTail* modifies the tail of a list:

- how to modify:
`f :: [a] -> [a]`

- data to be modified:
`[a]`

- resulting data:
`[a]`
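A sketch of one possible implementation:

```haskell
inTail :: ([a] -> [a]) -> [a] -> [a]
inTail _ []     = []
inTail f (x:xs) = x : f xs  -- keep the head, edit the rest
```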

*inPos* modifies the element at a given position:

- we first need the position:
`Int`

- how to modify:
`f :: a -> a`

- data to be modified:
`[a]`

- resulting data:
`[a]`

```
inPos :: Int -> (a -> a) -> [a] -> [a]
inPos n f xs
  | null xs   = xs
  | n < 0     = xs
  | n == 0    = let (y:ys) = xs in f y : ys
  | otherwise = let (y:ys) = xs in y : inPos (n - 1) f ys
```

*inSome* is like `inPos`

, but modifies multiple positions:

- we first need the indices:
`[Int]`

- how to modify:
`f :: a -> a`

- data to be modified:
`[a]`

- resulting data:
`[a]`
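One possible implementation, reusing `inPos` (repeated here so the sketch is self-contained):

```haskell
inPos :: Int -> (a -> a) -> [a] -> [a]
inPos n f xs
  | null xs || n < 0 = xs
  | n == 0           = let (y:ys) = xs in f y : ys
  | otherwise        = let (y:ys) = xs in y : inPos (n - 1) f ys

-- apply the mutator at every listed index
inSome :: [Int] -> (a -> a) -> [a] -> [a]
inSome ns f xs = foldr (\n acc -> inPos n f acc) xs ns
```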

Some examples:

```
λ> let d1 = ("foo",[1..10])
λ> (second . element) (+ 1) d1
("foo",[2,3,4,5,6,7,8,9,10,11])
λ> (first . element) ord d1
([102,111,111],[1,2,3,4,5,6,7,8,9,10])
λ> (second . inTail) (take 2) d1
("foo",[1,2,3])
λ> (second . inHead) (* 200) d1
("foo",[200,2,3,4,5,6,7,8,9,10])
λ> (second . inPos 3) (* 200) d1
("foo",[1,2,3,800,5,6,7,8,9,10])
λ> let d2 = replicate 3 (replicate 4 0)
λ> (inPos 1 . inPos 2) (const 255) d2
[[0,0,0,0],[0,0,255,0],[0,0,0,0]]
```

Suppose a user has defined a binary tree:
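A sketch of the definition implied by the combinators below (`Node` is a leaf carrying a value; `Tree` carries left and right subtrees plus a value of its own):

```haskell
{-# LANGUAGE DeriveFunctor #-}

data BinTree a
  = Node a                          -- leaf
  | Tree (BinTree a) (BinTree a) a  -- left subtree, right subtree, value
  deriving (Eq, Show, Functor)
```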

To write a combinator is easy, just follow the pattern `<combinator> <mutator> <data> = <modified-data>`

:

```
treeV :: (a -> a) -> BinTree a -> BinTree a
treeV f (Node x) = Node (f x)
treeV f (Tree l r x) = Tree l r (f x)
treeElement :: (a -> b) -> BinTree a -> BinTree b
treeElement = fmap
treeLeft :: (BinTree a -> BinTree a) -> BinTree a -> BinTree a
treeLeft f (Tree l r e) = Tree (f l) r e
treeLeft f r = r
treeRight :: (BinTree a -> BinTree a) -> BinTree a -> BinTree a
treeRight f (Tree l r e) = Tree l (f r) e
treeRight f r = r
treeNonLeaf :: (a -> a) -> BinTree a -> BinTree a
treeNonLeaf f (Tree l r e) = Tree (treeNonLeaf f l) (treeNonLeaf f r) (f e)
treeNonLeaf f r = r
treeLeaf :: (a -> a) -> BinTree a -> BinTree a
treeLeaf f (Node x) = Node (f x)
treeLeaf f (Tree l r x) = Tree (treeLeaf f l) (treeLeaf f r) x
```

Here `treeElement`

walks into the data part of each node to do the modification. Since `treeElement = fmap`

, we can generalize `element = fmap`

, to make it work for not only lists and `BinTree`

s, but also any other Functors.

`treeLeft`

and `treeRight`

walk into the left subtree and right subtree, respectively. But to target the value of a `BinTree`

, we need `treeV`

, which takes care of walking into the data field of a tree.

`treeNonLeaf`

and `treeLeaf`

are just like `treeElement`

but work only on non-leaf nodes and leaf nodes, respectively.

See them in action:

```
λ> let d1 = Tree (Tree (Node (1,1)) (Node (1,2)) (2,2)) (Node (3,4)) (5,6)
λ> (treeElement . first) even d1 -- for each node, test whether the `fst` part is even
Tree (Tree (Node (False,1)) (Node (False,2)) (True,2)) (Node (False,4)) (False,6)
λ> (treeLeft . treeV . first) (* 4) d1 -- walk into the left subtree, multiply the `fst` part of its value by 4
Tree (Tree (Node (1,1)) (Node (1,2)) (8,2)) (Node (3,4)) (5,6)
λ> (treeLeft . treeRight . treeV . first) (* 4) d1 -- walk into the right subtree of the left subtree, ..
Tree (Tree (Node (1,1)) (Node (4,2)) (2,2)) (Node (3,4)) (5,6)
λ> (treeNonLeaf . first) (const 100) d1 -- for each value of the `fst` part of each non-leaf node
Tree (Tree (Node (1,1)) (Node (1,2)) (100,2)) (Node (3,4)) (100,6)
λ> (treeLeaf . second) (const 100) d1 -- for each value of the `snd` part of each leaf node
Tree (Tree (Node (1,100)) (Node (1,100)) (2,2)) (Node (3,100)) (5,6)
```

The semantic editor combinators can also be chained together.

Suppose we have some combinators:

```
λ> let c13 = (inPos 1) (const 3)
λ> let c05 = (inPos 0) (const 5)
λ> let c06 = (inPos 0) (const 6)
λ> let cf1 = first (const 1)
```

For a given list, we want to apply `c13`

and `c05`

:
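For example (a self-contained sketch, repeating `inPos` and the two editors defined above):

```haskell
inPos :: Int -> (a -> a) -> [a] -> [a]
inPos n f xs
  | null xs || n < 0 = xs
  | n == 0           = let (y:ys) = xs in f y : ys
  | otherwise        = let (y:ys) = xs in y : inPos (n - 1) f ys

c13, c05 :: [Int] -> [Int]
c13 = inPos 1 (const 3)
c05 = inPos 0 (const 5)

-- apply c05 first, then c13
applied :: [Int]
applied = (c13 . c05) (replicate 5 0)  -- [5,3,0,0,0]
```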

So we can use function composition to compose semantic editors:

```
λ> let d2 = (10,replicate 5 0)
λ> (cf1 . (second c13)) d2
(1,[0,3,0,0,0])
λ> (cf1 . (second c13) . (second c05)) d2
(1,[5,3,0,0,0])
λ> (cf1 . (second (c13 . c05))) d2
(1,[5,3,0,0,0])
```

The last example says “apply `cf1`, (go back to the top level and then) walk into the `snd` part, apply `c05` and then `c13`”. I think this “distributive property” makes things extremely flexible.

Since the function composition happens from right to left, if some combinators try to modify the same field, the “leftmost” one takes effect:
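A self-contained sketch of the clash (repeating `inPos` and the two editors defined above):

```haskell
inPos :: Int -> (a -> a) -> [a] -> [a]
inPos n f xs
  | null xs || n < 0 = xs
  | n == 0           = let (y:ys) = xs in f y : ys
  | otherwise        = let (y:ys) = xs in y : inPos (n - 1) f ys

c05, c06 :: [Int] -> [Int]
c05 = inPos 0 (const 5)
c06 = inPos 0 (const 6)

-- both editors target position 0; c05 runs first, then c06 overwrites it
clash :: [Int]
clash = (c06 . c05) (replicate 5 0)  -- [6,0,0,0,0]
```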

This article was originally posted on Code Review, and I think it can also make a good post here.

I find sometimes it is useful to capture the notion of invertible functions.

The idea is that if two functions `f :: a -> b`

and `g :: b -> a`

are the inverse function of each other, and if there is another function `h :: b -> b`

, then `h`

can also work on values of type `a`

.

Moreover, if `f'`

and `g'`

are another pair of functions that are the inverse function of each other, `(f,g)`

and `(f',g')`

can actually be composed to `(f' . f, g . g')`

and the invertibility still holds.

The following is my attempt to implement this in Haskell, and I'm wondering if an existing library can do the same thing (or something even more general) for me. Advice and comments about my code are also appreciated.

First I use records to store two functions:

```
data Invertible a b = Invertible
{ into :: a -> b
, back :: b -> a
}
```

`into`

means “convert a into b” while `back`

means “convert b back to a”.

And then few helper functions:

```
selfInv :: (a -> a) -> Invertible a a
selfInv f = Invertible f f
flipInv :: Invertible a b -> Invertible b a
flipInv (Invertible f g) = Invertible g f
borrow :: Invertible a b -> (b -> b) -> a -> a
borrow (Invertible fIn fOut) g = fOut . g . fIn
liftInv :: (Functor f) => Invertible a b -> Invertible (f a) (f b)
liftInv (Invertible a b) = Invertible (fmap a) (fmap b)
```

In the above code `borrow`

will use the pair of functions to make its last argument `g`

available to values of type `a`

. And changing `borrow f`

to `borrow (flipInv f)`

will make `g`

available to values of type `b`

. Therefore `borrow`

captures my initial idea of making a function of type `b -> b`

available for values of `a`

if `a`

and `b`

can be converted to each other.

In addition, `Invertible`

forms a monoid-like structure, I use `rappend`

and `rempty`

to suggest a similarity between it and `Monoid`

:

```
rempty :: Invertible a a
rempty = selfInv id

rappend :: Invertible a b
        -> Invertible b c
        -> Invertible a c
(Invertible f1 g1) `rappend` (Invertible f2 g2) =
    Invertible (f2 . f1) (g1 . g2)
```

Here I have two examples to demonstrate that `Invertible`

might be useful.

It is natural that `Invertible`

can be used under scenario of symmetric encryption. `Invertible (encrypt key) (decrypt key)`

might be one instance if:

To simplify a little, I make an example of Caesar cipher and assume that plain text contains only uppercase letters:

```
-- constructor should be invisible from outside
newtype OnlyUpper a = OnlyUpper
  { getOU :: [a]
  } deriving (Eq, Ord, Show, Functor)

ouAsList :: Invertible (OnlyUpper a) [a]
ouAsList = Invertible getOU OnlyUpper

onlyUpper :: String -> OnlyUpper Char
onlyUpper = OnlyUpper . filter isAsciiUpper

upperAsOrd :: Invertible Char Int
upperAsOrd = Invertible ord' chr'
  where
    ord' x = ord x - ord 'A'
    chr' x = chr (x + ord 'A')
```

And Caesar Cipher is basically doing some modular arithmetic:

```
modShift :: Int -> Int -> Invertible Int Int
modShift base offset = Invertible f g
  where
    f x = (x + offset) `mod` base
    g y = (y + (base - offset)) `mod` base

caesarShift :: Invertible Int Int
caesarShift = modShift 26 4

caesarCipher :: Invertible (OnlyUpper Char) (OnlyUpper Char)
caesarCipher = liftInv (upperAsOrd                   -- Char <-> Int
                        `rappend` caesarShift        -- Int  <-> Int
                        `rappend` flipInv upperAsOrd) -- Int  <-> Char
```

One way to use `Invertible`

is just using its `into`

and `back`

as `encrypt`

and `decrypt`

, and `Invertible`

also gives you the power of manipulating encrypted data as if it were plain text:

```
exampleCaesar :: IO ()
exampleCaesar = do
  let encF = into caesarCipher
      decF = back caesarCipher
      encrypted = encF (onlyUpper "THEQUICKBROWNFOX")
      decrypted = decF encrypted
      encrypted' = borrow (flipInv caesarCipher `rappend` ouAsList)
                          (++ "JUMPSOVERTHELAZYDOG") encrypted
      decrypted' = decF encrypted'
  print encrypted
  -- OnlyUpper {getOU = "XLIUYMGOFVSARJSB"}
  print decrypted
  -- OnlyUpper {getOU = "THEQUICKBROWNFOX"}
  print encrypted'
  -- OnlyUpper {getOU = "XLIUYMGOFVSARJSBNYQTWSZIVXLIPEDCHSK"}
  print decrypted'
  -- OnlyUpper {getOU = "THEQUICKBROWNFOXJUMPSOVERTHELAZYDOG"}
```

Sometimes it’s convenient to write some code that manipulates matrices using `Invertible`

.

Say there is a list of type `[Int]`

in which `0`

stands for an empty cell, and we want every non-zero element to move to its leftmost possible position while preserving the order:

```
-- needs: import Data.Function (on)
compactLeft :: [Int] -> [Int]
compactLeft xs = nonZeros ++ replicate (((-) `on` length) xs nonZeros) 0
  where nonZeros = filter (/= 0) xs
```

Now consider 2D matrices, we want to “gravitize” the matrix so that every non-zero element in it falls to {left,right,up,down}-most possible position while preserving the order.

```
data Dir = DU | DD | DL | DR deriving (Eq, Ord, Enum, Show, Bounded)

-- needs: import Data.List (transpose)
gravitizeMat :: Dir -> [[Int]] -> [[Int]]
gravitizeMat dir = borrow invertible (map compactLeft)
  where
    mirrorI   = selfInv (map reverse)
    diagonalI = selfInv transpose
    invertible = case dir of
      DL -> rempty
      DR -> mirrorI
      DU -> diagonalI
      DD -> diagonalI `rappend` mirrorI
```

here `Invertible`

comes into play by the observation that `transpose`

and `map reverse`

are both invertible (moreover, each is its own inverse), so we can transform matrices and pretend the problem is only “gravitize to the left”.

Here is one example:

```
print2DMat :: (Show a) => [[a]] -> IO ()
print2DMat mat = do
  putStrLn "Matrix: ["
  mapM_ print mat
  putStrLn "]"

exampleMatGravitize :: IO ()
exampleMatGravitize = do
  let mat = [ [1,0,2,0]
            , [0,3,4,0]
            , [0,0,0,5]
            ]
  print2DMat mat
  let showExample d = do
        putStrLn $ "Direction: " ++ show d
        print2DMat $ gravitizeMat d mat
  mapM_ showExample [minBound .. maxBound]
```

And the result will be:

```
Matrix: [
[1,0,2,0]
[0,3,4,0]
[0,0,0,5]
]
Direction: DU
Matrix: [
[1,3,2,5]
[0,0,4,0]
[0,0,0,0]
]
Direction: DD
Matrix: [
[0,0,0,0]
[0,0,2,0]
[1,3,4,5]
]
Direction: DL
Matrix: [
[1,2,0,0]
[3,4,0,0]
[5,0,0,0]
]
Direction: DR
Matrix: [
[0,0,1,2]
[0,0,3,4]
[0,0,0,5]
]
```

You can find my complete code in the gist.

Today I'm going to play with Alternatives.

(spoiler: I failed to understand what `some`

and `many`

do, and I guess that would be an essential part of getting `Alternative`

. So there's not much of use in this article.)

Just as my other type-tetris attempts, this is not a tutorial about `Alternative`

; I'll just provide some examples and intuitions gained through playing with it.

I happened to hear of this typeclass during a quick scan of the typeclassopedia, and there are only a few functions related to it, which I thought wouldn't take much time to try. That is the motivation for this article.

As always, **bold sentences** stands for my intuitions.

`Alternative`

is a typeclass that lives in `Control.Applicative`

, most of the time, I import this package to use functions like `<$>`

, `<*>`

and data type `ZipList`

, but no more.

First function in this class is called `empty`

, so I guess **Alternative is a typeclass that allows containing nothing**. This would explain why there are so many

`Applicative`

s but I can only see `[]`

and `Maybe`

being instances of this typeclass. The document says `Alternative`

is “a monoid on applicative functors”. So we can make an analogy between `(mempty, mappend)`

from `Monoid`

and `(empty, <|>)`

from `Alternative`

. Let's try it out:

```
v1 :: [String]
v1 = empty <|> empty <|> empty
v2 :: [String]
v2 = v1 <|> pure "foo" <|> pure "bar"
v3 :: Maybe Int
v3 = pure 10 <|> pure 20
v4 :: Maybe Char
v4 = empty <|> pure 'A' <|> undefined
v5 :: Maybe ()
v5 = empty <|> empty <|> empty
```

Let's bring up GHCi:
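A session sketch, given the definitions above:

```
λ> v1
[]
λ> v2
["foo","bar"]
λ> v3
Just 10
λ> v4
Just 'A'
λ> v5
Nothing
```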

`empty`

works like what we expected, and `<|>`

for lists seems straightforward. But when it comes to `Maybe`

, we find that only the first non-`Nothing`

one takes effect, if any.

`some`

and `many`

The type for `some`

and `many`

are identical, so just fill in some instances I come up with on the fly:
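The definitions were presumably along these lines (a hypothetical session; `v6` is the name used below):

```
λ> let v6 = some (Just 1)
λ> v6
^CInterrupted.
```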

But when I try this out on GHCi, something is going wrong:

I gave `v6`

some time to run but it didn't terminate, so I cancelled it by hand.

By looking at the document, I find some clue: `some`

and `many`

seem to look for some solutions to the following equations:

```
some v = (:) <$> v <*> many v
many v = some v <|> pure []
-- will rewriting to break the mutual recursion help?
some v = (:) <$> v <*> (some v <|> pure [])
many v = ((:) <$> v <*> many v) <|> pure []
```

I can't go any further from here, but after doing some research I found the following links that might help; for now I just leave these two functions as mysteries. (Maybe it's just something like `fix`

, interesting, but IMHO not useful.)

Related links:

`optional`

For the rest of the story, I'll just type them directly into GHCi.

The last function about `Alternative`

is `optional`

. Let's check its type and feed it some instances:

```
λ> :t optional
optional :: Alternative f => f a -> f (Maybe a)
λ> optional Nothing
Just Nothing
λ> optional $ Just 1
Just (Just 1)
λ> optional []
[Nothing]
λ> optional [1,2,3]
[Just 1,Just 2,Just 3,Nothing]
```

Wait a minute, does `f a -> f (Maybe a)`

look familiar to you? It looks like a unary function of type `a -> Maybe a`

under some contexts. I think the simplest expression that matches this type would be `(Just <$>)`

. Let's do the same thing on it.

```
λ> :t (Just <$>)
(Just <$>) :: Functor f => f a -> f (Maybe a)
λ> (Just <$>) Nothing
Nothing
λ> (Just <$>) (Just 1)
Just (Just 1)
λ> (Just <$>) []
[]
λ> (Just <$>) [1,2,3]
[Just 1,Just 2,Just 3]
```

Comparing the output of `optional`

and `(Just <$>)`

, we find that `optional`

will attach an `empty`

to the end of this monoid. And that `empty`

would be the “none” part from `optional`

's description: “One or none”. In addition, we've seen `<|>`

is the binary operation for this monoid, so we can have a guess:

`optional v = (Just <$> v) <|> pure Nothing`

And this turns out to be the exact implementation.

Not much was done in this post; I have to admit that type-tetris is not always the best way. In hindsight, I didn't know how `Alternative`

would be used, so there was little I could rely on when trying to figure out `many`

and `some`

.

Anyway, if you happened to read all the way through this article, sorry for wasting your time and thanks for your patience.

This article is not a free monad tutorial but some ideas about what the free monad does, from the perspective of a Haskell newbie.

Some days ago I came across Free monad by reading this Stack Overflow question.

I'm not sure exactly what it is, although I gained some intuitions by reading the question and answers and by playing with the library.

In the rest of this article, I'd like to share my experience about how I explore functions listed here and some intuitions gained from this process. These intuitions are marked as **bold sentences**.

`Free f a`

Import the related library; you should make sure you've installed the free package.

The first thing is to make some values of type `Free f a`

.

The easiest constructor is `Pure`

. In order to fit the type `Free f a`

, I choose to use a list as `f`

and `Int`

as `a`

, because when I looked around the document, `f`

is most likely to be an instance of `Functor`

:
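Presumably (the name `v1` is used below, and `Pure 10` is consistent with the later results built from it):

```
import Control.Monad.Free

v1 :: Free [] Int
v1 = Pure 10
```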

The next thing is about how to use the other constructor: `Free`

. Its type signature is `Free :: f (Free f a) -> Free f a`

. It might not be that friendly, so I simplify it a little with `f = [], a = Int`

. Then it becomes: `Free :: [Free [] Int] -> Free [] Int`

. Go back to the original signature, we can say that `Free`

constructor takes some value of `Free f a`

wrapped in another `f`

, then produces `Free f a`

. By using `Free`

, **the outside functor is somehow “ignored” from its type signature.** Moreover, the part `f (Free f a)`

suggests **an exact match of the inside and outside functors.**

Now we already have a `v1 :: Free [] Int`

, and by using `Free`

, we should end up with some values of type `Free [] a`

:
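One possibility (hypothetical values, chosen so that later results such as `retract v3 = [10,10,10]` come out right):

```
v2 :: Free [] Int
v2 = Free [Pure 10, Pure 10]

v3 :: Free [] Int
v3 = Free [v2, Pure 10]
```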

Call GHCi to see the values:

Of course a free monad should be a monad; let's check the documentation for clues. In the instance list of `data Free f a`

, we find this line:
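The line in question is the `Monad` instance:

```
instance Functor f => Monad (Free f)
```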

This is enough for us to write some functions that can be used as the second argument of our old friend `>>=`

:

```
foo1 :: Int -> Free [] Int
foo1 x = Free [Pure $ x*2, Pure $ x+20]
foo2 :: Int -> Free [] String
foo2 x = Free [Pure $ show x]
v4 :: Free [] String
v4 = v1 >>= foo1 >>= foo1 >>= foo1 >>= foo2
```

And the GHCi output is:

```
λ> v4
Free [Free [Free [Free [Pure "80"]
,Free [Pure "60"]]
,Free [Free [Pure "80"]
,Free [Pure "60"]]]
,Free [Free [Free [Pure "120"]
,Free [Pure "80"]]
,Free [Free [Pure "100"]
,Free [Pure "70"]]]]
λ> :{
| let foo1 = \x -> [x*2,x+20]
| foo2 = \x -> [show x]
| in [10] >>= foo1 >>= foo1 >>= foo1 >>= foo2
| :}
["80","60","80","60","120","80","100","70"]
```

You can observe some similarity between the list monad and the `Free []`

monad. The intuition is **the Free [] monad seems to be a list monad but without the concatenation**.

`retract`

and `liftF`

The document says “`retract`

is the left inverse of `lift`

and `liftF`

”. So we explore `liftF`

first.

`liftF`

is “A version of lift that can be used with just a Functor for f.” I'm not sure about this, but we can get started from another clue: the type signature `liftF :: (Functor f, MonadFree f m) => f a -> m a`

.

We choose to stick to our simplification: `f = []`

, so we need to find a suitable `m`

that is an instance of `MonadFree [] m`

.

And this one looks promising:
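Namely this instance from the free package's documentation:

```
instance Functor f => MonadFree f (Free f)
```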

So we can let `m = Free f`

, which gives us the simplified type signature: `liftF :: [a] -> Free [] a`

. We can guess that **liftF lifts functors into free monads**. To confirm this, let's try lifting some functors:

```
v7 :: Free [] Int
v7 = liftF [1,2,3,4,8,8,8]
v8 :: Free Maybe String
v8 = liftF $ Just "Foo"
v9 :: Free ((->) Int) Int
v9 = liftF (+ 10)
```

I don't know if there is an easy way of observing `v9`

, for now we just print `v7`

and `v8`

:
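The output should be along these lines (`liftF` wraps each element in `Pure` under one layer of `Free`):

```
λ> v7
Free [Pure 1,Pure 2,Pure 3,Pure 4,Pure 8,Pure 8,Pure 8]
λ> v8
Free (Just (Pure "Foo"))
```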

Now we know we can lift any functors, so **we can construct Free f a not only from Pure and Free, but also directly from functors.**

Now think about simplified type signature of `retract :: Free [] a -> [a]`

. **It is the reverse of liftF**. Let's try it in code:

```
v10 :: [Int]
v10 = retract v3
v11 :: Maybe String
v11 = retract v8
v12 :: [String]
v12 = retract v4
v13 :: Int -> Int
v13 = retract v9
```

Call GHCi for results, now we can observe `v9`

indirectly by using function `v13`

:

```
λ> v10
[10,10,10]
λ> v11
Just "Foo"
λ> v12
["80","60","80","60","120","80","100","70"]
λ> map v13 [1..5]
[11,12,13,14,15]
```

Therefore **retract converts a free monad back to a functor**. (I suspect this conclusion is somehow wrong, but for now I cannot tell what exactly is wrong.)

`iter`

and `iterM`

Use the same trick we've played before, let `f = []`

:
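Specializing the signatures from the documentation (`iter :: Functor f => (f a -> a) -> Free f a -> a` and `iterM :: (Monad m, Functor f) => (f (m a) -> m a) -> Free f a -> m a`) gives:

```
iter  :: ([a] -> a) -> Free [] a -> a
iterM :: Monad m => ([m a] -> m a) -> Free [] a -> m a
```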

I guess: `iter`

converts a function who does some sorts of reduction on functors into another function who does the same thing but on free monads, and `iterM`

is the monadic version of `iter`

.

Let's have a try:

```
v14 :: Int
v14 = iter sum v3
foo3 :: [IO Int] -> IO Int
foo3 ms = sum `fmap` sequence ms
v15 :: IO Int
v15 = iterM foo3 v7
```

And look at GHCi:

Maybe I can say that `iter`

and `iterM`

lift reductions (for functors) into free monads.

`hoistFree`

The type signature contains `forall`

, but I think it's fine to just ignore that part, as we are just trying to get things working:
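The signature, quoting the free package's documentation:

```
hoistFree :: Functor g => (forall a. f a -> g a) -> Free f b -> Free g b
-- squinting past the forall: (f a -> g a) -> Free f b -> Free g b
```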

I guess `hoistFree`

lift a conversion between functors into a conversion between free monads.

I know list and `Maybe`

are good friends:

```
import Data.Maybe
v16 :: Maybe String
v16 = Just "Foo"
v17 :: [String]
v17 = retract $ hoistFree maybeToList $ liftF v16
```

Look at GHCi, we should observe the conversion from `Maybe String`

to `[String]`

:
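Presumably something like:

```
λ> v16
Just "Foo"
λ> v17
["Foo"]
```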

`_Pure`

and `_Free`

The type signatures of these two functions are really long, but don't be afraid, we still have some clue:

- If you look at
`Choice`

, we find a simple instance:`Choice (->)`

- Pick up an applicative, I choose
`m = []`

- Pick up a functor, I choose
`f = Maybe`

- Ignore
`forall`

parts

We end up with:

```
_Pure :: ( a -> [ a])
-> ( Free Maybe a -> [Free Maybe a])
_Free :: ((Maybe (Free Maybe a)) -> [Maybe (Free Maybe a)])
-> ( (Free Maybe a) -> [ Free Maybe a ])
```

I really can't figure out what these functions do, but I can still use them by feeding them values of suitable types.

So for the following part of this section, I provide code and output without explanation.

```
v18 :: [Free Maybe String]
v18 = _Pure (:[]) (liftF $ Just "Haskell")
v19 :: [String]
v19 = mapMaybe retract v18
-- f = Maybe
-- p = (->)
-- m = IO
-- _Free :: (Maybe (Free Maybe a) -> IO (Maybe (Free Maybe a)))
-- -> ( Free Maybe a -> IO (Free Maybe a) )
v20 :: IO (Free Maybe Int)
v20 = _Free return (liftF $ Just 123456)
v21 :: IO (Maybe Int)
v21 = retract `fmap` v20
```

GHCi:

```
λ> v18
[Free (Just (Pure "Haskell"))]
λ> v19
["Haskell"]
λ> v20
Free (Just (Pure 123456))
λ> v21
Just 123456
```

`wrap`

method

This is a method defined for `MonadFree`

s, the document says **“Add a layer.”**, let's see it in action:

```
v22 :: Free Maybe Int
v22 = liftF $ Just 10
v23 :: Free Maybe Int
v23 = wrap (Just v22)
v24 :: Free Maybe Int
v24 = wrap (Just v23)
```

GHCi:

```
λ> v22
Free (Just (Pure 10))
λ> v23
Free (Just (Free (Just (Pure 10))))
λ> v24
Free (Just (Free (Just (Free (Just (Pure 10))))))
```

This is the first time I attempted “type-tetris”. Although I still don't know what a free monad is, I did gain some intuitions by using the functions provided. So let's call it a day.

As promised, here is a simple tutorial about adding tag support to your Hakyll blog.

Few things before we start:

My Hakyll version used for this article is

`4.4.3.2`

, but other versions shouldn't differ much. If this article looks too verbose, you can just look at

**bold sentences**. All the changes made below are contained in this archive. You can download it, run it, and skip the rest of this article.

Let's start from scratch to keep it as simple as possible.

So we initialize a new Hakyll website.

```
# initialize the website under dir `mondo`
$ hakyll-init mondo
$ cd mondo
# compile the code necessary,
# in order to see the website.
$ ghc site
```

To create tags, we should first learn how to add tags to our posts. This step is easy: as the documentation for tags describes, the metadata should contain a line beginning with `tags:`

followed with a comma-separated list.

Now, let's modify posts from `mondo/posts`

:

`mondo/posts/2012-08-12-spqr.markdown`

:

```
---
title: S.P.Q.R.
tags: foo, bar1, bar2
---
```

`mondo/posts/2012-10-07-rosa-rosa-rosam.markdown`:

```
---
title: Rosa Rosa Rosam
author: Ovidius
tags: bar1
---
```

`mondo/posts/2012-11-28-carpe-diem.markdown`:

```
---
title: Carpe Diem
tags: bar2, foo
---
```

`mondo/posts/2012-12-07-tu-quoque.markdown`:

```
---
title: Tu Quoque
author: Julius
tags: bar1, bar2
---
```

Now that we've assigned tags to the posts, the next thing is to make them accessible from the Haskell code.

Before we go through all the posts and generate pages, we should build the tags using `buildTags`.

Unfortunately this function is not well-documented. A short explanation: `buildTags pattern makeId` finds all tags from the posts captured by `pattern`, converts each tag to a corresponding `Identifier` using `makeId`, and returns a value of type `Tags`.
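For reference, the type of `buildTags` (roughly, as of Hakyll 4.x) is:

```
buildTags :: MonadMetadata m => Pattern -> (String -> Identifier) -> m Tags
```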

In the `site.hs` file, **find these lines**:

```
match "posts/*" $ do
    route $ setExtension "html"
    compile $ pandocCompiler
        >>= loadAndApplyTemplate "templates/post.html" postCtx
        >>= loadAndApplyTemplate "templates/default.html" postCtx
        >>= relativizeUrls
```

Insert the following code in front of it:
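The code to insert is the `buildTags` call (the same line that appears in the final code region later in this article):

```
-- build up tags
tags <- buildTags "posts/*" (fromCapture "tags/*.html")
```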

The code region above says: find all tags by searching the metadata of the posts matched by the pattern `posts/*`; the corresponding path for each tag will be of the form `tags/*.html`. (e.g. for the tag `foo`, you can generate a list of all posts containing tag `foo` under the URL `{your-website}/tags/foo.html`.)

After the tags are generated, we need to tell the post generator to include the corresponding tag information for each post. This is done by modifying `postCtx`. (For now, you don't have to understand the details of `postCtx` if you just want to set up tags.)

Put the following definition somewhere in your `site.hs`; I chose to put it right after the definition of `postCtx`:

```
postCtxWithTags :: Tags -> Context String
postCtxWithTags tags = tagsField "tags" tags `mappend` postCtx
```

Then change every occurrence of `postCtx` inside the code region mentioned above to `postCtxWithTags tags`.

After this change, **the code region should look like**:

```
-- build up tags
tags <- buildTags "posts/*" (fromCapture "tags/*.html")

match "posts/*" $ do
    route $ setExtension "html"
    compile $ pandocCompiler
        >>= loadAndApplyTemplate "templates/post.html" (postCtxWithTags tags)
        >>= loadAndApplyTemplate "templates/default.html" (postCtxWithTags tags)
        >>= relativizeUrls
```

Now we need to add some changes to our templates, to make tags visible.

I think the following changes are self-explanatory even if you know nothing about how templates work, so let's go through them quickly.

**Modify your templates/post.html to make it look like**:

```
<div class="info">
    Posted on $date$
    $if(author)$
        by $author$
    $endif$
</div>
<div class="info">
    $if(tags)$
        Tags: $tags$
    $endif$
</div>
$body$
```

Now we create the template for tag pages, which lists all posts containing the corresponding tag. Since we already have a template for listing posts (`templates/post-list.html`), we can simply reuse it.

This is done by **creating a new file, templates/tag.html, with the following content**:
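A minimal `templates/tag.html` that simply reuses the post list could look like this (a sketch; `$partial(...)$` is Hakyll's template-inclusion syntax):

```
$partial("templates/post-list.html")$
```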

This is the final step: we generate the tag pages based on the templates we've written.

**Put the following code somewhere after we build up the tags.** I chose to place it right after the `tags <- buildTags` line:

```
tagsRules tags $ \tag pattern -> do
    let title = "Posts tagged \"" ++ tag ++ "\""
    route idRoute
    compile $ do
        posts <- recentFirst =<< loadAll pattern
        let ctx = constField "title" title
                  `mappend` listField "posts" (postCtxWithTags tags) (return posts)
                  `mappend` defaultContext
        makeItem ""
            >>= loadAndApplyTemplate "templates/tag.html" ctx
            >>= loadAndApplyTemplate "templates/default.html" ctx
            >>= relativizeUrls
```

Now it's done: recompile `site.hs` and have fun!

A few screenshots after adding tags:

I also provide the final version of the `mondo` directory here.

Thanks to the source code of variadic.me, which was a big help.