Merging lists
Let's imagine that a thread is just a list of values:
type Thread a = [a]

Each a represents one atomic step in our thread.
Now let's assume we have two such threads and we need to schedule them. The simplest way to schedule them would be to interleave them:
zip :: Thread a -> Thread a -> Thread a
zip []     ys     = ys
zip xs     []     = xs
zip (x:xs) (y:ys) = x:y:zip xs ys

However, I can think of two obvious problems with this approach. First, it is not associative:
(xs `zip` ys) `zip` zs /= xs `zip` (ys `zip` zs)
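For instance, unfolding the definition by hand on some small threads:

([1] `zip` [2]) `zip` [3,4] = [1,2] `zip` [3,4] = [1,3,2,4]
[1] `zip` ([2] `zip` [3,4]) = [1] `zip` [2,3,4] = [1,2,3,4]

The grouping changes the schedule.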
Second, it assumes that all thread actions have equal priority, which probably isn't the case.

We can fix this by switching to cooperative threads, which can either provide a value, yield "left", or yield "right":
data Step a = YieldL | Step a | YieldR

type Thread a = [Step a]

Now we can merge threads in a way that respects their yields:
(<>) :: Thread a -> Thread a -> Thread a
[]          <> ys          = []
(YieldL:xs) <> ys          = YieldL:(xs <> ys)
(Step a:xs) <> ys          = Step a:(xs <> ys)
(YieldR:xs) <> (YieldL:ys) = (xs <> ys)
-- From this point onward, xs = YieldR:xs'
xs          <> (Step a:ys) = Step a:(xs <> ys)
xs          <> (YieldR:ys) = YieldR:(xs <> ys)
xs          <> []          = []

Interestingly, this new operation is associative:
(xs <> ys) <> zs = xs <> (ys <> zs)

It also has an empty thread which acts like an identity:
mempty = YieldR:YieldL:mempty

mempty <> xs = xs
xs <> mempty = xs
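Here is the whole merge as a self-contained sketch you can load into GHCi (assuming a recent GHC; the local (<>) and mempty shadow the Prelude's Semigroup and Monoid methods, so those are hidden, and deriving Show is only there for printing):

import Prelude hiding ((<>), mempty)

data Step a = YieldL | Step a | YieldR deriving Show

type Thread a = [Step a]

(<>) :: Thread a -> Thread a -> Thread a
[]          <> ys          = []
(YieldL:xs) <> ys          = YieldL:(xs <> ys)
(Step a:xs) <> ys          = Step a:(xs <> ys)
(YieldR:xs) <> (YieldL:ys) = xs <> ys
xs          <> (Step a:ys) = Step a:(xs <> ys)
xs          <> (YieldR:ys) = YieldR:(xs <> ys)
xs          <> []          = []

mempty :: Thread a
mempty = YieldR:YieldL:mempty

main :: IO ()
main = do
    -- The left thread steps, yields right, the right thread steps and
    -- yields left, and the trailing Step 'd' is dropped because the left
    -- thread ends without yielding again:
    print ([Step 'a', YieldR, Step 'b'] <> [Step 'c', YieldL, Step 'd'])
    -- [Step 'a',Step 'c',Step 'b']

    -- mempty behaves as the identity:
    print (mempty <> [YieldR, Step 'x'])   -- [YieldR,Step 'x']
    print ([YieldR, Step 'x'] <> mempty)   -- [YieldR,Step 'x']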
Proxies
While working on Proxys, I discovered they had certain nice mathematical properties:
return <-< g = return
(respond  >=> f) <-< g = respond  >=> (f <-< g)
(lift . k >=> f) <-< g = lift . k >=> (f <-< g)
(request  >=> f) <-< (respond >=> g) = (f <-< g)
-- For the following equations, f = request >=> f'
f <-< (lift . k >=> g) = lift . k >=> (f <-< g)
f <-< (request  >=> g) = request  >=> (f <-< g)
f <-< return = return

Now, where have I seen that before? Why, these are identical to the equations for the above list merge, except we need to make the following substitutions to make the analogy complete:
(>=>)    ->  (++)
return   ->  []
respond  ->  [YieldL]
request  ->  [YieldR]
lift . k ->  [], [Step a], [Step a, Step a'] ...
(<-<)    ->  (<>)
mempty = YieldR:YieldL:mempty
mempty = [YieldR] ++ [YieldL] ++ mempty
idT    = request >=> respond >=> idT

... and it works!
This is what I mean when I say that Proxy composition is just merging lists of Kleisli arrows.
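To see the correspondence one clause at a time, take the second merge equation, rewrite the cons as a concatenation, and apply the substitutions (renaming xs and ys to f and g):

(YieldL:xs) <> ys = YieldL:(xs <> ys)
([YieldL] ++ xs) <> ys = [YieldL] ++ (xs <> ys)
(respond >=> f) <-< g = respond >=> (f <-< g)

... which is exactly the second Proxy equation above.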
Conclusions
I was a little bit skeptical at first when I had to give Proxys an extra input parameter to get them to be composable. However, the surprising connection to lists of Kleisli arrows convinced me that the Kleisli arrow is the true currency of concurrency.
I don't understand this:
(<>) :: Thread a -> Thread a -> Thread a
[] <> ys = []
Surely an empty thread merged with a non-empty thread should result in a non-empty thread.
Sorry for the delayed reply as I am on vacation. In this model of threading, all transfers of control are explicit rather than implicit. If the left thread has control and it terminates before it can relinquish control left or right, then the entire system cannot proceed.
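For example, with the merge defined in the post:

[Step 1] <> [Step 2] = [Step 1]

The right thread's step is discarded because the left thread terminated while it still had control, without ever yielding right.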
This seems like a weird behavior at first until you ask yourself what should happen in the following two scenarios:
(t1 <> []) <> t2
t1 <> ([] <> t2)
Your proposal would not be associative, because in the first example t1 would gain control when the middle thread died, whereas in the second example t2 would gain control when the middle thread died.
Associativity is a useful property because it means we can refactor out arbitrary groups of threads. For example, if I had:
(t1 <> t2) <> t3
... but I then wanted to separate out t2 and t3 into their own component, I'd only be able to do that if I could re-associate the parentheses. However, if you rely on implicit transfer of control upon termination, then those kinds of refactorings are no longer safe.
I hope that explains the behavior. For an associative model it is the only sane behavior.
Nice post! After reading up on Arrows, it seemed that they should capture at least some aspects of concurrency due to the splitting / joining of computation. Did you ever develop your ideas any further?
I think what you discuss is only a subset of concurrency problems, where you have some "central" agent/component which acts as a scheduler, merger, or the like. This central agent then somehow has a "global" view of the actions happening concurrently. This, however, I think is hard to generalize well to communication that involves multiple agents, especially asynchronous message passing.
I actually have developed these ideas further in order to solve exactly the problem you describe: distilling concurrent programs with multiple agents into pure equivalents.
You can read the latest summary of my progress here:
https://groups.google.com/forum/?fromgroups#!topic/haskell-pipes/7mDzAD9Al-8
There I describe the beginnings of an approach inspired by model-view-controller. The key ideas are:
* Combine all effectful inputs into a single input
* Combine all outputs into a single output
* Create a pure model that interacts with one input and output
* The model implements both `Arrow` and `ArrowChoice`
* `Arrow` corresponds (roughly) to parallelism
* `ArrowChoice` corresponds (roughly) to concurrency
For that particular example I used push-based pipes, which implement both `Arrow` and `ArrowChoice`, as the model.
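As a rough standalone illustration of that Arrow/ArrowChoice distinction, here is a toy sketch using the plain function instances from Control.Arrow (not the push-based pipes model from the post):

import Control.Arrow ((***), (|||))
import Data.Char (toUpper)

-- (***) runs two independent components side by side over a combined
-- input (the "parallel" flavor) ...
sideBySide :: (Int, String) -> (Int, Int)
sideBySide = (+ 1) *** length

-- ... while (|||) dispatches on whichever of two event types arrives
-- (the "choice" flavor, roughly the concurrent one).
oneOfTwo :: Either Int String -> String
oneOfTwo = (show . (+ 1)) ||| map toUpper

main :: IO ()
main = do
    print (sideBySide (1, "ab"))        -- (2,2)
    putStrLn (oneOfTwo (Left 41))       -- 42
    putStrLn (oneOfTwo (Right "done"))  -- DONE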
If you read that post in more detail, it describes how I took a pre-existing concurrent system and transformed it into an equivalent pure model, which you can refer to in order to understand how this idea works for a real project in practice.
Sounds super interesting, will definitely investigate this. Thanks a lot for the response.
You're welcome!
nice