While monads are a very useful abstraction for handling side effects, there’s a class of effects they can’t address. For those, you need a related abstraction called comonads.
All programs must have side effects to be useful; there must be some way for information to get into and out of a program. There are two ways for a program to acquire information to work on: synchronous or asynchronous, polling or event driven. This distinction shows up at every level, down to the machine language level, where a CPU can either read an IO register periodically or set up a chunk of code to be run when an interrupt signal is received. These two methods of handling input are related to each other by being duals. When I first heard about the concept of duality, I thought of it as something being the inverse of something else. Or maybe the opposite. But that’s not the case. Duality is more like the two sides of a coin. A great paper on the duality of IO by Erik Meijer is here. I highly recommend reading it. Also in this interview, Erik mentions the very provocative idea that all programming is a layering of effects. Since he’s forgotten more than I’ll ever know about programming, I tend to accept that statement. You would do well to read or watch that and ponder the implications.
For any program, the first question you have to answer is ‘How am I going to get data to work on? Sync or Async?’. If you’re using a framework or container, that question may be answered for you and you don’t have to worry about it. But if all you’ve got is your language, that’s your first question and the top level side effect that encapsulates everything else. It seems to me that the default choice is to be synchronous. That is, for a program to start executing and to query the outside world for input. Whenever there’s a need for async input, it’s most common to use a framework and not worry about it. You just write a function that has a defined signature and register it with the framework. Synchronous IO can be very nicely handled with the IO monad, as it is in Haskell. As the above links point out, async is the dual of sync and so can be expressed using the dual of the IO monad. To signify the dual of a concept, it’s common to prepend ‘co’ to the front. This gives rise to the name ‘comonad’.
Every monad has an associated comonad which is its dual. What that means is: for every monad (a triple of data type, result function and bind function that obeys the monad laws), there exists a triple of a data type, an extract function and an extend function that obeys the comonad laws. There doesn’t seem to be an established standard for naming extract and extend, but those names do the best job I’ve found of conveying what the functions do, so that’s what I used.
What are those functions, you ask? Well, a common phrase you hear when looking at the duals of various concepts is “it’s the same thing, except the arrows go in the opposite direction”. If you look at the type signature of the result function for a monad, you’ll see something like this:

result: a -> m a

This is interpreted to mean that result is a function that takes any value and returns a monadic value that somehow wraps or contains the given value. (Check out the archives for the earliest post on this blog where I talk about that in detail.)
To get the type signature for extract, which is the dual of result, you simply reverse the arrow:

extract: a <- w a

or, written the usual way around:

extract: w a -> a
The ‘w’ seems to be the accepted way to signify a comonadic value, I guess because it’s an upside down ‘m’. Who says computer scientists have no sense of humor? Regardless, extract is a function that takes a comonadic value (a covalue) and extracts the wrapped value from it, which it returns. So if you think of covalues as embedding a value into a context, extract throws away the context, leaving only the value.
A similar exercise can be done with bind:

bind: m a -> (a -> m b) -> m b

Reversing the arrows gives:

extend: w a <- (a <- w b) <- w b

extend: w b -> (w b -> a) -> w a
This means that extend is a function that takes two arguments: the first is a covalue, and the second is a function that accepts a covalue and returns an unwrapped value. extend then returns a covalue.
Of course, not just any two functions will do for extend. They have to satisfy the following laws:

1. (= (extend cv extract) cv)
2. (= (extract (extend cv f)) (f cv))
3. (= (extend (extend cv g) f) (extend cv (fn [cx] (f (extend cx g)))))
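To see the laws in action, here’s a minimal sketch of my own (not code from the library mentioned below): the identity comonad, where the context is nothing beyond the value itself, with the three laws checked on a sample covalue:

```clojure
(ns comonad-laws
  (:refer-clojure :exclude [extend]))

;; A minimal Comonad protocol, using the names from this post.
(defprotocol Comonad
  (extract [wv])
  (extend [wv f]))

;; The identity comonad: the 'context' is nothing but the value itself.
(deftype ident [value]
  Comonad
  (extract [_] value)
  (extend [this f] (ident. (f this))))

(def cv (ident. 5))
(defn f [w] (* 2 (extract w)))
(defn g [w] (+ 1 (extract w)))

;; Law 1: extending with extract changes nothing.
(extract (extend cv extract))                          ;=> 5
;; Law 2: extracting after extending with f is just (f cv).
(= (extract (extend cv f)) (f cv))                     ;=> true
;; Law 3: nested extends associate.
(= (extract (extend (extend cv g) f))
   (extract (extend cv (fn [cx] (f (extend cx g)))))) ;=> true
```

Since deftype instances compare by identity, the checks extract the wrapped values rather than comparing covalues directly.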
It’s no good to just talk about an abstraction in the abstract. For the real-world programmer, there has to be an application. Every monad has a corresponding comonad and vice versa. It’s not always easy to figure out the correct dual for all monads or comonads, but they’re there.
The unifying idea of comonads is that you have values that are contained in some kind of context, giving a covalue. Covalues can be transformed, or extended, to produce new covalues, and to produce the new values wrapped in the new covalue, you have access to the entire context. This is different from monads, where you can transform a monadic value into another monadic value, because with monads the function doing the transformation is only given the values wrapped by the monadic context, not the context itself.

The second unifying idea is that you can always extract a value from a covalue, but there is no universal way to put a value into a covalue. Whereas with monads, you can always put a value into a monadic context, but there is no universal way to get values out.
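To make that asymmetry concrete, here’s a tiny sketch of my own using plain vectors: the monadic direction always has a trivial constructor, while the comonadic direction does not.

```clojure
;; Monadic direction: any value can always be put into a context.
(defn result [x] [x])
(result 42)   ;=> [42]

;; Comonadic direction: there is no universal (co-result v) that builds
;; a covalue from a bare value. For a vector-with-focus context, say,
;; there's no canonical choice of which index gets the focus or what the
;; surrounding cells should hold. Extraction, on the other hand, always
;; works once you already have a covalue in hand.
```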
I’ve created the start of a comonads library for Clojure here.
The simplest comonad to look at is a vector:
(deftype covect [index ^clojure.lang.PersistentVector vec-value]
  clojure.lang.IDeref
  (deref [_] [index vec-value])

  Comonad
  (extract [_] (get vec-value index))
  (extend [_ f]
    (covect. index
             (into [] (map #(f (covect. % vec-value))
                           (range (count vec-value)))))))

(defn left [wv]
  (let [[i v] @wv]
    (if (> i 0)
      (covect. (dec i) v)
      wv)))

(defn right [wv]
  (let [[i v] @wv]
    (if (< i (dec (count v)))
      (covect. (inc i) v)
      wv)))
Here we see clearly how a comonad works. There is a context, which is a vector of values, and one of those values has the focus of attention, the ‘index’. The extract function just returns the value that currently has the focus, and the left and right functions change the focus. The extend function is a little more interesting. It maps a function f over the values contained in the vector, by focusing on each value in turn and passing this temporary covalue to f. f then has access to the entire vector to use in deriving a new value for the place that currently has the focus. extend then returns a new covalue containing the values produced by f, with the focus in the original location. The first thing that comes to my mind when I look at these semantics is ‘1-dimensional cellular automata’.
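As a quick check of those semantics, here’s a self-contained sketch of my own (repeating the covect type from above so it runs on its own) that uses extend to run one step of a 1-dimensional cellular automaton, rule 90, where each cell becomes the XOR of its two neighbors:

```clojure
(ns covect-demo
  (:refer-clojure :exclude [extend]))

;; Repeated from the library sketch above so this snippet stands alone.
(defprotocol Comonad
  (extract [wv])
  (extend [wv f]))

(deftype covect [index vec-value]
  clojure.lang.IDeref
  (deref [_] [index vec-value])
  Comonad
  (extract [_] (get vec-value index))
  (extend [_ f]
    (covect. index
             (mapv #(f (covect. % vec-value))
                   (range (count vec-value))))))

;; Rule 90: a cell's next state is the XOR of its two neighbors;
;; cells off the edge count as false.
(defn rule-90 [wv]
  (let [[i v] @wv
        cell #(get v % false)]
    (not= (cell (dec i)) (cell (inc i)))))

(second @(extend (covect. 0 [false false true false false]) rule-90))
;;=> [false true false true false]
```

Note that rule-90 gets handed a covalue focused on each cell in turn, and uses the deref to see the whole vector — exactly the “function with access to the entire context” shape described above.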
My comonads library also defines a comonad for 2-dimensional arrays of primitive values, or matrices. With that comonad and some included helper functions, Conway’s Game of Life is easy to write.
(defn life-step [co-b]
  (condp = (->> (apply neighbor-vals @co-b)
                (filter identity)
                count)
    2 (extract co-b)
    3 true
    false))

(defn prn-life [co-b]
  (let [[_ _ a] @co-b]
    (doseq [x (range (alength a))
            y (range (alength (aget a 1)))
            :let [_ (when (= y 0) (println "."))]]
      (print (if (aget a x y) "* " "  ")))))

(prn-life (-> initial-position
              (extend life-step)
              (extend life-step)
              (extend life-step)
              (extend life-step)))
This should print the ending state, given an initial state of the world where initial-position is a square matrix of booleans.
I started this post talking about IO, so let’s revisit that. If the IO monad is for synchronous IO, then the OI comonad is for async IO. A covalue would be a context that is a source of events, which you then extend with functions that can extract an event and do things with it. By using nested calls to extend, it looks to me like you can build up a sequence of functionality that gets applied to events as they arrive asynchronously. Which I think is the essence of Functional Reactive Programming. I think that’s what they’re talking about in this paper, but I haven’t been able to read the whole thing yet.
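As a rough illustration of that idea (my own sketch, not code from the paper), a non-empty sequence of already-arrived events forms a comonad: extract reads the event at the current point in time, and extend runs a handler over every suffix of the stream.

```clojure
;; Stream comonad over non-empty sequences: extract is the head,
;; extend applies f to each suffix (each "point in time").
;; Named s-extract/s-extend to avoid shadowing clojure.core/extend.
(defn s-extract [s] (first s))

(defn s-extend [s f]
  (map f (take-while seq (iterate rest s))))

;; e.g. summing the events visible from each point onward:
(s-extend [1 2 3] #(reduce + %))
;;=> (6 5 3)
```

Nesting calls to s-extend then layers handlers over the stream, each one seeing the stream of results produced by the layer beneath it.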
To read more about comonads, I’d suggest you start with the original paper by Richard Kieburtz “Codata and Comonads in Haskell”.
This post represents my current understanding of comonads and I’m just beginning to explore them, so I could easily have something wrong. If you see something that needs correcting, please leave a comment and I’ll fix it as quick as I can.

Jim Duey, 02 February 2013