Arrowed Core.logic

After a long hiatus, I’m back to blogging. And I again turn my attention to core.logic. If you haven’t already, you should definitely go read at least the first part of my earlier post for an understanding of how core.logic works and some definitions.

Pretty Pictures

The simplest goal in core.logic:

(def q (lvar))
(== q :a)

unifies the logic variable q with the keyword :a. If you were to represent this graphically, it would look like:

[diagram: the search tree for unifying q with :a]
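Running the goal in stock core.logic confirms the picture; the run* form creates the fresh lvar for you:

;; assumes (use 'clojure.core.logic)
(run* [q]
  (== q :a))
;=> (:a)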

If we use ‘all’ to combine multiple unifications in sequence:

(all
  (== q :a)
  (== q :b))

it looks like this:

[diagram: the search tree for the two unifications in sequence]

(incidentally, this goal will fail since we’re trying to unify the same lvar with two different literal values, which is impossible)
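Running it shows the failure; the conjunction produces no answers:

(run* [q]
  (== q :a)
  (== q :b))
;=> ()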

And the ‘conde’ combinator would look like:

(conde
   [(== q :tea)]
   [(== q :party)])

[diagram: the search tree for the conde goal, one branch per clause]
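Here the two clauses are explored as alternatives, so both answers come back:

(run* [q]
  (conde
   [(== q :tea)]
   [(== q :party)]))
;=> (:tea :party)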

Finally, you can define functions to encapsulate goals for easy reuse and naming:

(defno tea [q]
   (== q :tea))

(defno party [q]
   (== q :party))

(defno both [q]
   (conde
      [(tea q)]
      [(party q)]))

[diagram: the search tree for the both goal, built from tea and party]
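(defno comes from my fork; in stock core.logic a plain defn works the same way for these examples:)

(defn tea [q] (== q :tea))
(defn party [q] (== q :party))

(defn both [q]
  (conde
   [(tea q)]
   [(party q)]))

(run* [q] (both q))
;=> (:tea :party)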

And where you have functions, you have recursion:

(defno nevero [q]
   (conde
      [fail]
      [(nevero q)]))

[diagram: the search tree for nevero, with a recursive edge back to itself]

We’ll come back to nevero shortly, but first …

The Key Point, thus far

What is not obvious is that I did not draw those diagrams by hand. They were generated programmatically by analyzing the individual expressions. This kind of static analysis is impossible with standard core.logic/miniKanren since it uses a monad at its core. It does not build the search tree all at once and then walk it. Instead, as each function is called, a portion of the search tree is generated, which is then walked depth-first until the leaves are reached for each possible answer. Because of the way monads work, you can’t know what the search tree is going to look like until you call the function that generates and walks it.

If you squint a little, generating the search tree is like compiling a program and walking the tree is like running it. So with the standard monadic core.logic, the compilation and running phases are intertwined. You can’t have one without the other. So how was I able to break them apart?

That requires a flashback.

The Backstory

When I first started to learn Clojure and Functional Programming, I kept hearing about these cool things called monads that people loved but were hard to understand. So I started looking for information and very quickly stumbled on a paper called “Generalizing Monads to Arrows” by John Hughes. In my naivety, I thought “If monads are good, then arrows must be better!” so I started reading it. It took me five attempts, and actually implementing a stream arrow in Clojure, before I made it through the paper.

I’ll save a full explanation of arrows for a future post. But for now, the key thing is that all monads can automatically and trivially be converted to a corresponding arrow. So in that sense, all monads are arrows. But monads have an easier notation, so if you don’t need the capabilities an arrow gives you, stick with the monad.

And what are those capabilities, you ask? The one of interest here is that arrows allow you to attach metadata to functions, which lets you do static analysis as you compose those functions.
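To make that concrete, here’s a toy sketch of the idea (illustrative only, not the code from my fork): represent an arrow as a map pairing a function with a description of itself, and have composition combine the descriptions without calling anything.

;; An "arrow" as a map pairing behavior with inspectable structure.
(defn arr [name f]
  {:name name :fn f})

;; Compose arrows left to right, merging their static descriptions.
(defn seq-arr [& arrows]
  {:name (mapv :name arrows)
   :fn   (apply comp (reverse (map :fn arrows)))})

(def pipeline (seq-arr (arr :inc inc) (arr :double #(* 2 %))))

(:name pipeline)   ;=> [:inc :double]  (analyzable without running anything)
((:fn pipeline) 3) ;=> 8

A monadic composition would hand you only the final opaque function; the arrow keeps the structure alongside the behavior, and that structure is what the static analysis walks.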

So several months ago, I was lying in bed pondering arrows and it occurred to me that anywhere I could use a monad, I could use an arrow. Which led me to ask “What if I replaced the monad in core.logic with an arrow?” and the light went on.

A divergence

The bane of logic programmers everywhere is divergence. This happens when the search tree has a branch or branches which are effectively infinite because a cycle exists, much like the nevero tree above. In that case, the search descends the left branch, sees the failure, backtracks to follow the right branch, which spawns another level of the search tree, ad infinitum. And since you can’t predict these recursive calls ahead of time, you keep going around in circles and never get an answer.
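Concretely, asking for even a single answer from nevero never returns (don’t evaluate this at the REPL unless you’re prepared to interrupt it):

(run 1 [q] (nevero q))  ; diverges, never produces an answer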

Divergence takes many forms and a general solution is impossible (see “halting problem”). There are techniques to deal with divergence, but they require the programmer to be aware of implementation details of their language or to use non-relational operators. Take a look at Will Byrd’s PhD thesis for more information.

But with an arrow instead of a monad, you can statically analyze the tree and catch those recursive calls. Not only that, you can apply optimizations, much like an optimizing compiler does, to improve performance and eliminate some forms of divergence altogether.

Simple optimization

The simplest technique to prevent divergence is to follow the ‘Byrd rule’. In the following expression, the call to the nevero function comes before the call to unify q with :a.

(all
   (nevero q)
   (== q :a))

[diagram: the search tree with the nevero branch ahead of the unification]

The unification expression is never reached, because the search gets stuck in the call to nevero first. Since the order of clauses shouldn’t matter, the Byrd rule is that all non-recursive expressions should be written before recursive ones. That way, if any of the non-recursive expressions fail, the divergent expression is never even attempted. A simple optimization is to enforce the Byrd rule automatically, which produces a tree like:

[diagram: the reordered search tree, with the unification first]
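In source form, the automatically reordered goal is equivalent to writing:

(all
   (== q :a)
   (nevero q))

The unification now runs first, and if it ever fails, the recursive call is never attempted.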

Bad, bad, bad

Here’s a contrived example:

(declare top)
(defno bottom [q]
   (top q))

(defno middle [q]
   (bottom q))

(defno top [q]
   (middle q))

(defno a [q]
  (== q :a))

(defno b [q]
  (== q :b))

(defno ab [q]
  (conde
   [(nevero q) (b q)]
   [(nevero q)]
   [(a q)]
   [(conde
     [fail]
     [fail])]))

(defno bad [q]
  (conde
   [(top q) (top q)]
   [(ab q)]))

[diagram: the search tree for bad, unification nodes omitted]

I’ve left out the unification nodes since they clutter the graph and don’t add to the discussion. As you look at that tree, notice that there are only two leaf nodes that could produce an answer: the empty one and the one labeled ‘a’.

An easy optimization to do would be to eliminate all the ‘fail’ leaves since they will never produce an answer:

[diagram: the tree with all the fail leaves eliminated]

Notice that the blank node on the far right that had two fail leaves below it also got eliminated. That leaves you with some leaves (ha!) that only recurse to themselves: the ones labelled ‘nevero’. They’ll never produce an answer, so they can be eliminated as well.

[diagram: the tree with the self-recursive nevero leaves eliminated]

So, now you’re left with a tree that has some divergence, but not as much.
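For the curious, here’s a minimal sketch of the fail-pruning pass. It assumes a toy nested-vector representation, where a node is [op & children] and :fail is a failing leaf; the fork’s real data structures are richer than this.

(defn prune-fails
  "Remove all :fail leaves, then any node left with no live children."
  [tree]
  (if (vector? tree)
    (let [[op & children] tree
          kept (->> children
                    (map prune-fails)   ; prune the subtrees first
                    (remove #{:fail}))] ; drop failing leaves
      (if (empty? kept)
        :fail                           ; a node with no live children fails too
        (into [op] kept)))
    tree))

(prune-fails [:conde [:conde :fail :fail] [:all :a]])
;=> [:conde [:all :a]]

The self-recursive nevero leaves could be removed by a similar pass that looks for calls whose only reachable leaves are themselves.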

If bad were defined like this instead:

(defno bad [q]
  (conde
   [(top q) (top q) fail]
   [(ab q)]))

then even those calls to top could be eliminated, and you end up with:

[diagram: the remaining tree after the calls to top are eliminated]

Table, please.

Another technique is to mark some nodes as tabled, which is sort of like memoization for logic programs. Here’s a place where tabling would help.

(defno arco [x y]
  (conde
   [(== x :a) (== y :b)]
   [(== x :b) (== y :a)]
   [(== x :b) (== y :d)]))

(defno patho [x y]
  (conde
   [(arco x y)]
   [(fresh [z]
           (arco x z)
           (patho z y))]))

(def q (lvar))
(def o (lvar))
(patho q o)

[diagram: the search tree for patho, which recurses on itself]

This expression would diverge. But if you table patho, instead of trying to walk the tree every time it was called, it would check its cache and return a cached answer. Instead of requiring the programmer to mark certain functions as tabled, the optimizer could automatically table those functions, and only those functions, with the potential for divergence.
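For comparison, stock core.logic already supports opt-in tabling through its tabled macro; the difference here is that the optimizer could apply it for you:

(def patho
  (tabled [x y]
    (conde
     [(arco x y)]
     [(fresh [z]
        (arco x z)
        (patho z y))])))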

Walkabout

If you could simply ignore all the recursive calls, you could use the simplest monad possible, the list monad, to walk the tree with backtracking, because you’d know you’d never have recursion or divergence. Since that’s not possible in the general case, standard miniKanren uses a sophisticated interleaving scheme to reach leaves even in the face of some kinds of divergence. But with these optimizations, I think we can do better.

After you’ve optimized your tree, you start walking it to find the answers. Every time you come to a recursive call, you create a thunk that preserves your position in the search tree and save it off. After you’ve hit all the leaves in your initial walk, you can go back to the deferred thunks and continue the search for more answers. Simple!
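Here’s a minimal sketch of that walk. It assumes an already-optimized pure disjunction tree, where a node is a vector of children and a leaf is either an answer or a no-argument thunk standing in for a recursive call; this is a hypothetical representation, not the fork’s actual one.

(defn walk-tree [tree]
  (loop [frontier [tree], answers [], deferred []]
    (if-let [[node & more] (seq frontier)]
      (cond
        (vector? node) (recur (concat node more) answers deferred) ; descend into the node
        (fn? node)     (recur more answers (conj deferred node))   ; defer the recursive call
        :else          (recur more (conj answers node) deferred))  ; a leaf is an answer
      ;; initial walk finished; resume the saved thunks for more answers
      (lazy-cat answers (mapcat #(walk-tree (%)) deferred)))))

(take 2 (walk-tree [:a [#(vector :b)]]))
;=> (:a :b)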

Let’s be fair

I also believe (though I haven’t proved it) that with these optimizations, you get fair conjunction for free. See Christophe Grand’s blog for details.

Just the beginning

These simple optimizations have triggered a whole series of other thoughts on how to optimize logic programs to eliminate useless search paths and/or cache results for higher performance. So there are a lot of new things to explore with this as a starting point.

The code for all this is in my fork of core.logic here. I cut out a majority of core.logic to eliminate sources of complexity for me, so this is only a proof of concept and not intended to be used. But I really hope someone will take on the task of adding it to core.logic. Also, this code won’t run yet, since it has some dependencies I’ve not open sourced but will in a day or two as time permits. If you do dive into this code, I’ll warn you now: it involves an arrow that wraps a list monad, which is built using a state monad combined with a continuation monad. Proceed at your own risk. I’m going to take the next several posts to explain the internals of this code, because implementing it really stretched my abilities.

Jim Duey 05 October 2013