This is probably going to be a long one, but I don’t feel like breaking it up. Some time ago, I started thinking about making core.logic multi-threaded. I finally got to do it and here’s what happened.
Way back in the dawn of Clojure time, I ended up getting “The Reasoned Schemer” by mistake. (long story) Based on the other Little Schemer books and from what I could understand, I knew there was something great in there. But I couldn’t figure it out. So I set out to port miniKanren (the software described in the book) from Scheme to Clojure. I knew a little Scheme, I was still learning Clojure, and the miniKanren source is extremely dense. It’s very elegant and concise, but doesn’t lend itself to casual study.
After a bit, I got a rudimentary version working, put it on github and mentioned it in the forums. Sometime later, David Nolen came along and did a much more professional job of implementing miniKanren. Where mine was just for learning, you can actually use David’s for production software.
There are a couple of core pieces to miniKanren: the unifier, a lazy stream implementation and a variant of the sequence monad. For a variety of reasons (mostly performance, I believe), David chose not to use an actual monad in his implementation and instead used protocols to great effect.
Combining these three components in the proper way, you get an engine that searches a space shaped like a tree.
(For more information about the following, read “The Reasoned Schemer”)
This tree is built up using a few simple functions. The most basic operation is ==, named ‘unify’. It establishes that two things can be substituted for each other. The result of an == expression is called a goal.
fresh allows for the introduction of logic variables and composes a series of goals into a single goal.
(def g (fresh [q x z] (== q false) (== x 10) (== "howdy" z)))
all has the effect of composing a series of goals into a single goal without creating logic variables.
(def g (all goal1 goal2 goal3))
Now, when you unify two things, you establish a relationship between them. A collection of these relationships is called a ‘substitution’. A goal is actually a function that takes a substitution as a parameter and adds one or more relationships to it.
So you have a goal, what do you do with it? Well, if you call it with an empty substitution, you’ll get back a substitution that contains relationships. Then you can query that substitution to see what value a particular logic variable might have. This is called reification and is done with the run* macro.
(run* [q] (== q 6)) => [6]
You’ll notice that run* actually returned a list instead of just a single number. And that’s where some of the magic comes in. There’s another macro, conde, that lets you specify multiple relationships for a single logic variable.
(run* [q] (conde [(== q 6)] [(== 8 q)])) => [6 8]
conde lets you specify alternatives, so that goals may return one or more substitutions. Each one could have different relationships for the same logic variables.
However, you can’t add a relationship to a substitution that violates a relationship already in there. If you try, that goal fails and produces no results.
So a goal is a function that takes a substitution and returns a list that may contain 0, 1 or many substitutions. Does that remind you of anything?
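To make that shape concrete, here is a toy illustration with plain maps standing in for substitutions. The `unify-goal` helper is a made-up name for this sketch, not part of core.logic:

```clojure
;; A substitution is modeled as a plain map; a goal is a function from a
;; substitution to a seq of zero or more substitutions.
(defn unify-goal [k v]
  (fn [s]
    (cond
      (not (contains? s k)) [(assoc s k v)] ; add a new relationship
      (= (get s k) v)       [s]             ; already consistent
      :else                 [])))           ; conflict: the goal fails

((unify-goal :q 6) {})     ;=> [{:q 6}]
((unify-goal :q 6) {:q 7}) ;=> []
```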
If you were going to build a goal by hand, the easiest way would be to use Clojure’s ‘for’ statement.
(defn g [s] (for [s1 (goal1 s) s2 (goal2 s1) s3 (goal3 s2)] s3))
This could be stated more succinctly as:
(with-monad seq-m (def g (m-chain [goal1 goal2 goal3])))
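To see why the sequence monad fits, here is a hand-rolled sketch of its bind and chain operations over made-up goals on plain maps (not core.logic’s actual types):

```clojure
;; For the sequence monad, m-bind is essentially mapcat; chaining a list
;; of goals just folds bind over them.
(defn seq-bind [mv goal] (mapcat goal mv))

(defn chain [goals]
  (fn [s] (reduce seq-bind [s] goals)))

;; Two trivial goals: one adds a single binding, the other two alternatives.
(def g1 (fn [s] [(assoc s :x 1)]))
(def g2 (fn [s] [(assoc s :y 2) (assoc s :y 3)]))

((chain [g1 g2]) {}) ;=> ({:x 1, :y 2} {:x 1, :y 3})
```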
So, multiple goals can be composed sequentially to make a single goal, conde is used to introduce alternatives and goals can fail. When run* comes along and passes an empty substitution to a goal, the net effect is to traverse a tree where each node of the tree is created by a conde goal and the substitutions at the leaves hold the answers to your query. Where goals fail, branches of the tree are not followed any further. You can visualize it something like this.
In the tree diagram, the s’s represent answer substitutions and the x’s represent failures.
When using the sequence monad to implement miniKanren, the search traverses the tree in a linear fashion: starting at the root and descending to a leaf, then passing that result back up before descending another branch and getting another leaf. All results from the leaves migrate towards the root, with the lists being unpacked and repacked at each node.
Here’s the key insight about this whole process. Searching each branch of the tree is totally independent from searching all other branches.
That means you could do your searching in parallel. The problem with that is how to get the answers from the leaves to one place. Another problem is how to know when you’ve reached a leaf.
For the first, you could use the bane of functional programmers everywhere: shared mutable state and side effects. If there were a single place where you could stick the substitution whenever you hit a leaf, you’d be in business. Clojure’s atom datatype fits the bill nicely.
The second problem is actually open to a very elegant solution. What if you used a monad other than seq-m? The continuation monad is exactly what you want. With cont-m, a goal is a function that takes a substitution as a parameter. But instead of returning a list of substitutions, it returns a function that accepts a continuation function. When this function is called with a continuation, it calls the continuation with the new substitutions. A continuation is simply a goal that moves the search closer to the leaves. When a leaf is reached, the continuation is just a function that adds the substitution to the atom where the answers are being collected.
(def answers (atom []))

(defn leaf [s] (swap! answers conj s))
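Fleshing that out into a self-contained sketch (plain maps for substitutions; `cg` and `cont-bind` are made-up names for this illustration, not core.logic’s API):

```clojure
;; A continuation-style goal takes a substitution (a map) and returns a
;; function that feeds the extended substitution to a continuation.
(def answers (atom []))

(defn cg [k v]
  (fn [s] (fn [c] (c (assoc s k v)))))

;; Bind sequences two goals by nesting continuations.
(defn cont-bind [g1 g2]
  (fn [s] (fn [c] ((g1 s) (fn [s'] ((g2 s') c))))))

(defn leaf [s] (swap! answers conj s)) ; the leaf just collects answers

(((cont-bind (cg :x 1) (cg :y 2)) {}) leaf)
@answers ;=> [{:x 1, :y 2}]
```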
So how do you implement such a beast? Starting with the vanilla core.logic code, the first step is to rewrite the miniKanren core using the sequence monad.
(defmonad logic-m
  [m-result (fn [v] [v])
   m-bind   (fn [mv goal]
              (->> mv
                   (remove nil?)
                   (mapcat goal)))])

(defn succeed [a]
  (with-monad logic-m
    (m-result a)))

(defn fail [a]
  (with-monad logic-m
    (m-result nil)))

(defmacro == [u v]
  `(fn [a#]
     (if-let [b# (unify a# ~u ~v)]
       (succeed b#)
       (fail nil))))

(defmacro fresh [[& lvars] & goals]
  (let [lvars (mapcat (fn [v] `(~v (lvar '~v))) lvars)]
    `(with-monad logic-m
       (let [~@lvars]
         (clojure.algo.monads/m-chain (list ~@goals))))))

(defmacro all
  ([] `clojure.core.logic/s#)
  ([& goals]
   `(with-monad logic-m
      (clojure.algo.monads/m-chain (list ~@goals)))))

(defmacro conde [& clauses]
  (let [clauses (->> clauses
                     (map vec)
                     (into []))]
    `(with-monad logic-m
       (fn [s#]
         (->> ~clauses
              (map clojure.algo.monads/m-chain)
              (mapcat #(% s#)))))))

(defmacro run* [[x] & goals]
  `(doall
    (let [~x (lvar '~x)
          xs# ((all ~@goals) empty-s)]
      (->> xs#
           (remove nil?)
           (map #(-reify % ~x))))))

(defmacro run-nc* [& goals]
  `(binding [*occurs-check* false]
     (run* ~@goals)))
That’s a whole lot of code, but the main take away is that everything is written in terms of the logic-m monad, which is a variant of the sequence monad.
Then delete everything else. :)
Once you’ve done that, it’s a relatively easy task to change to a different monad, cont-m being the first example.
(defmonad logic-m
  [m-result (fn [v]
              (fn [c] (c v)))
   m-bind   (fn [mv goal]
              (fn [c]
                (mv (fn [a]
                      (when a
                        ((goal a) c))))))])

(defmacro conde [& clauses]
  (let [clauses (->> clauses
                     (map vec)
                     (into []))]
    `(with-monad logic-m
       (fn [s#]
         (fn [c#]
           (doseq [clause# ~clauses]
             (((clojure.algo.monads/m-chain clause#) s#) c#)))))))

(defmacro run* [[x] & goals]
  `(doall
    (let [xs# (atom [])
          leaf# (fn [s#] (swap! xs# conj s#))
          ~x (lvar '~x)
          solver# ((all ~@goals) empty-s)]
      (solver# leaf#)
      (->> xs#
           (deref)
           (remove nil?)
           (map #(-reify % ~x))))))
Behold the power of monads!! You only need to change the monad, conde and run*. Everything else just works.
You can visualize the effect of this change on the search process like this.
You can modify that monad to enable a lazy search through the tree, but I’ll leave that for another time.
So now we can make it multithreaded. Given David Nolen’s forkjoin.clj found here, it’s a breeze.
(def tasks (atom []))

(defmacro conde [& clauses]
  (let [clauses (->> clauses
                     (map #(cons 'all %))
                     (into []))]
    `(fn [s#]
       (fn [c#]
         (let [[clause# & clauses#] ~clauses
               fs# (doall (map #(->> ((% s#) c#) task fork) clauses#))]
           (doseq [f# fs#]
             (swap! tasks conj f#))
           ((clause# s#) c#))))))

(defmacro run* [[x] & goals]
  `(doall
    (let [xs# (atom [])
          leaf# (fn [s#] (swap! xs# conj s#))
          ~x (lvar '~x)
          solver# ((all ~@goals) empty-s)
          task# (task (solver# leaf#))]
      (invoke thread-pool task#)
      (join task#)
      ;; wait for all tasks to finish
      (loop [ts# @tasks]
        (when (seq ts#)
          (doseq [t# ts#]
            (join t#)
            (swap! tasks subvec 1))
          (recur @tasks)))
      (->> xs#
           (deref)
           (map #(-reify % ~x))))))
The monad itself doesn’t change, only conde and run*.
So execution starts at the root and moves towards the leaves. At each conde node, tasks are forked to handle the other branches while the current task continues on the first branch. Tasks are added to the tasks atom for later reference.
run* is very similar to the cont-m version, except instead of calling the solver function directly, it forks that off to a task. Then it joins on each of the tasks in the tasks atom in turn. When all tasks have completed, the answer substitutions are in xs#.
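The fork-and-collect pattern can be sketched with plain futures instead of fork/join, using toy continuation-style goals over maps (the names below are made up for this sketch, not core.logic’s):

```clojure
;; Each conde-like node runs its first branch on the current thread and
;; forks the rest; answers land in one shared atom, forked tasks in another.
(def answers (atom []))
(def tasks (atom []))

(defn leaf [s] (swap! answers conj s))

(defn cg [k v]                          ; toy continuation-style goal
  (fn [s] (fn [c] (c (assoc s k v)))))

(defn conde-like [goals s c]
  (doseq [g (rest goals)]
    (swap! tasks conj (future ((g s) c))))
  (((first goals) s) c))

(conde-like [(cg :q 1) (cg :q 2) (cg :q 3)] {} leaf)
(doseq [t @tasks] @t)       ; join the forked branches
(sort-by :q @answers) ;=> ({:q 1} {:q 2} {:q 3})
```

Note the sort: with real parallelism, the order in which answers arrive is nondeterministic.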
The final version can be found on my fork of core.logic here on the “fork-join” branch.
A final optimization removed the need to use the monad at all: inlining the m-result and m-bind functions resulted in a measurable speedup. Currently, only one invocation of run* can be executed at a time because of tasks being global. That could be changed. And a better way needs to be found to wait for all tasks to finish.
Also, restoring tabling would be a good thing to do. And conda/condu don’t really fit the parallel paradigm since their semantics cause search to be bottlenecked at those points, but for completeness it might be good to have them back.
Beyond that, some interesting things to investigate would be the use of different monads. For instance, you could use a monad that distributed the search over machines in a cluster, a la Cascalog. And there’s another monad I’ve been thinking of which could yield something useful. Maybe I’ll get a chance to write that up in the not too distant future.

Jim Duey
26 March 2012