Haskell programmers care about the correctness of their software and they specify correctness conditions in the form of equations that their code must satisfy. They can then verify the correctness of these equations using equational reasoning to prove that the abstractions they build are sound. To an outsider this might seem like a futile, academic exercise: proving the correctness of small abstractions is difficult, so what hope do we have to prove larger abstractions correct? This post explains how to do precisely that: scale proofs to large and complex abstractions.
Purely functional programming uses composition to scale programs, meaning that:
- We build small components that we can verify correct in isolation
- We compose smaller components into larger components
If you saw "components" and thought "functions", think again! We can compose things that do not even remotely resemble functions, such as proofs! In fact, Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:
- We build small proofs that we can verify correct in isolation
- We compose smaller proofs into larger proofs
The following sections illustrate in detail how this works in practice, using `Monoid`s as the running example. We will prove the `Monoid` laws for simple types and work our way up to proving the `Monoid` laws for much more complex types. Along the way we'll learn how to keep the proof complexity flat as the types grow in size.
Monoids
Haskell's Prelude provides the following `Monoid` type class:

class Monoid m where
    mempty  :: m
    mappend :: m -> m -> m

-- An infix operator equivalent to `mappend`
(<>) :: Monoid m => m -> m -> m
x <> y = mappend x y
... and all `Monoid` instances must obey the following three laws:

mempty <> x = x                  -- Left identity
x <> mempty = x                  -- Right identity
(x <> y) <> z = x <> (y <> z)    -- Associativity
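As an aside, before proving these laws by hand we can also spot-check them mechanically. Here is a minimal sketch using the QuickCheck library (an assumption of mine; the original proofs do not depend on it), specialized to lists of `Int`s:

import Data.Monoid ((<>))
import Test.QuickCheck

-- Each property is one Monoid law, at the concrete type [Int]
prop_leftIdentity :: [Int] -> Bool
prop_leftIdentity x = mempty <> x == x

prop_rightIdentity :: [Int] -> Bool
prop_rightIdentity x = x <> mempty == x

prop_associativity :: [Int] -> [Int] -> [Int] -> Bool
prop_associativity x y z = (x <> y) <> z == x <> (y <> z)

main :: IO ()
main = do
    quickCheck prop_leftIdentity
    quickCheck prop_rightIdentity
    quickCheck prop_associativity

Testing only gains confidence for one type at a time, though; the rest of this post is about proofs that hold for all types at once.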
For example, `Int`s form a `Monoid`:
-- See "Appendix A" for some caveats
instance Monoid Int where
    mempty  = 0
    mappend = (+)
... and the `Monoid` laws for `Int`s are just the laws of addition:
0 + x = x
x + 0 = x
(x + y) + z = x + (y + z)
Now we can use `(<>)` and `mempty` instead of `(+)` and `0`:
>>> 4 <> 2
6
>>> 5 <> mempty <> 5
10
This appears useless at first glance. We already have `(+)` and `0`, so why are we using the `Monoid` operations?
Extending Monoids
Well, what if I want to combine things other than `Int`s, like pairs of `Int`s? I want to be able to write code like this:
>>> (1, 2) <> (3, 4)
(4, 6)
Well, that seems mildly interesting. Let's try to define a `Monoid` instance for pairs of `Int`s:
instance Monoid (Int, Int) where
    mempty = (0, 0)
    mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
Now my wish is true and I can "add" binary tuples together using `(<>)` and `mempty`:
>>> (1, 2) <> (3, 4)
(4, 6)
>>> (1, 2) <> (3, mempty) <> (mempty, 4)
(4, 6)
>>> (1, 2) <> mempty <> (3, 4)
(4, 6)
However, I still haven't proven that this new `Monoid` instance obeys the `Monoid` laws. Fortunately, this is a very simple proof.

I'll begin with the first `Monoid` law, which requires that:
mempty <> x = x
We will begin from the left-hand side of the equation and try to arrive at the right-hand side by substituting equals-for-equals (a.k.a. "equational reasoning"):
-- Left-hand side of the equation
mempty <> x
-- x <> y = mappend x y
= mappend mempty x
-- `mempty = (0, 0)`
= mappend (0, 0) x
-- Define: x = (xL, xR), since `x` is a tuple
= mappend (0, 0) (xL, xR)
-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (0 + xL, 0 + xR)
-- 0 + x = x
= (xL, xR)
-- x = (xL, xR)
= x
The proof for the second `Monoid` law is symmetric:

-- Left-hand side of the equation
x <> mempty
-- x <> y = mappend x y
= mappend x mempty
-- mempty = (0, 0)
= mappend x (0, 0)
-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (0, 0)
-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= (xL + 0, xR + 0)
-- x + 0 = x
= (xL, xR)
-- x = (xL, xR)
= x
The third `Monoid` law requires that `(<>)` is associative:
(x <> y) <> z = x <> (y <> z)
Again I'll begin from the left side of the equation:
-- Left-hand side
(x <> y) <> z
-- x <> y = mappend x y
= mappend (mappend x y) z
-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)
-- mappend (x1, y1) (x2 , y2) = (x1 + x2, y1 + y2)
= mappend (xL + yL, xR + yR) (zL, zR)
-- mappend (x1, y1) (x2 , y2) = (x1 + x2, y1 + y2)
= ((xL + yL) + zL, (xR + yR) + zR)
-- (x + y) + z = x + (y + z)
= (xL + (yL + zL), xR + (yR + zR))
-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (yL + zL, yR + zR)
-- mappend (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))
-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)
-- x <> y = mappend x y
= x <> (y <> z)
That completes the proof of the three `Monoid` laws, but I'm not satisfied with these proofs.
Generalizing proofs
I don't like the above proofs because they are disposable, meaning that I cannot reuse them to prove other properties of interest. I'm a programmer, so I loathe busy work and unnecessary repetition, both for code and proofs. I would like to find a way to generalize the above proofs so that I can use them in more places.
We improve proof reuse in the same way that we improve code reuse. To see why, consider the following `sort` function:
sort :: [Int] -> [Int]
This `sort` function is disposable because it only works on `Int`s. For example, I cannot use the above function to sort a list of `Double`s.

Fortunately, programming languages with generics let us generalize `sort` by parametrizing it on the element type of the list:
sort :: Ord a => [a] -> [a]
That type says that we can call `sort` on any list of `a`s, so long as the type `a` implements the `Ord` type class (a comparison interface). This works because `sort` doesn't really care whether or not the elements are `Int`s; `sort` only cares if they are comparable.
Similarly, we can make the proof more "generic". If we inspect the proof closely, we will notice that we don't really care whether or not the tuple contains `Int`s. The only `Int`-specific properties we use in our proof are:
0 + x = x
x + 0 = x
(x + y) + z = x + (y + z)
However, these properties hold true for all `Monoid`s, not just `Int`s. Therefore, we can generalize our `Monoid` instance for tuples by parametrizing it on the type of each field of the tuple:
instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)
    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
The above `Monoid` instance says that we can combine tuples so long as we can combine their individual fields. Our original `Monoid` instance was just a special case of this instance where both the `a` and `b` types are `Int`s.
Note: The `mempty` and `mappend` on the left-hand side of each equation are for tuples. The `mempty`s and `mappend`s on the right-hand side of each equation are for the types `a` and `b`. Haskell overloads type class methods like `mempty` and `mappend` to work on any type that implements the `Monoid` type class, and the compiler distinguishes them by their inferred types.
We can similarly generalize our original proofs, too, by just replacing the `Int`-specific parts with their more general `Monoid` counterparts.
Here is the generalized proof of the left identity law:
-- Left-hand side of the equation
mempty <> x
-- x <> y = mappend x y
= mappend mempty x
-- `mempty = (mempty, mempty)`
= mappend (mempty, mempty) x
-- Define: x = (xL, xR), since `x` is a tuple
= mappend (mempty, mempty) (xL, xR)
-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend mempty xL, mappend mempty xR)
-- Monoid law: mappend mempty x = x
= (xL, xR)
-- x = (xL, xR)
= x
... the right identity law:
-- Left-hand side of the equation
x <> mempty
-- x <> y = mappend x y
= mappend x mempty
-- mempty = (mempty, mempty)
= mappend x (mempty, mempty)
-- Define: x = (xL, xR), since `x` is a tuple
= mappend (xL, xR) (mempty, mempty)
-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= (mappend xL mempty, mappend xR mempty)
-- Monoid law: mappend x mempty = x
= (xL, xR)
-- x = (xL, xR)
= x
... and the associativity law:
-- Left-hand side
(x <> y) <> z
-- x <> y = mappend x y
= mappend (mappend x y) z
-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend (mappend (xL, xR) (yL, yR)) (zL, zR)
-- mappend (x1, y1) (x2 , y2) = (mappend x1 x2, mappend y1 y2)
= mappend (mappend xL yL, mappend xR yR) (zL, zR)
-- mappend (x1, y1) (x2 , y2) = (mappend x1 x2, mappend y1 y2)
= (mappend (mappend xL yL) zL, mappend (mappend xR yR) zR)
-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= (mappend xL (mappend yL zL), mappend xR (mappend yR zR))
-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend yL zL, mappend yR zR)
-- mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
= mappend (xL, xR) (mappend (yL, yR) (zL, zR))
-- x = (xL, xR)
-- y = (yL, yR)
-- z = (zL, zR)
= mappend x (mappend y z)
-- x <> y = mappend x y
= x <> (y <> z)
This more general `Monoid` instance lets us stick any `Monoid`s inside the tuple fields and we can still combine the tuples. For example, lists form a `Monoid`:
-- Exercise: Prove the monoid laws for lists
instance Monoid [a] where
    mempty  = []
    mappend = (++)
... so we can stick lists inside the right field of each tuple and still combine them:
>>> (1, [2, 3]) <> (4, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> (4, mempty) <> (mempty, [5, 6])
(5, [2, 3, 5, 6])
>>> (1, [2, 3]) <> mempty <> (4, [5, 6])
(5, [2, 3, 5, 6])
Why, we can even stick yet another tuple inside the right field and still combine them:
>>> (1, (2, 3)) <> (4, (5, 6))
(5, (7, 9))
We can try even more exotic permutations and everything still "just works":
>>> ((1, [2, 3]), ([4, 5], 6)) <> ((7, [8, 9]), ([10, 11], 12))
((8, [2, 3, 8, 9]), ([4, 5, 10, 11], 18))
This is our first example of a "scalable proof". We began from three primitive building blocks:
- `Int` is a `Monoid`
- `[a]` is a `Monoid`
- `(a, b)` is a `Monoid` if `a` is a `Monoid` and `b` is a `Monoid`
... and we connected those three building blocks to assemble a variety of new `Monoid` instances. No matter how many tuples we nest, the result is still a `Monoid` and obeys the `Monoid` laws. We don't need to re-prove the `Monoid` laws every time we assemble a new permutation of these building blocks.

However, these building blocks are still pretty limited. What other useful things can we combine to build new `Monoid`s?
IO
We're so used to thinking of `Monoid`s as data, so let's define a new `Monoid` instance for something entirely un-data-like:
-- See "Appendix A" for some caveats
instance Monoid b => Monoid (IO b) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)
The above instance says: "If `b` is a `Monoid`, then an `IO` action that returns a `b` is also a `Monoid`". Let's test this using the `getLine` function from the Prelude:
-- Read one line of input from stdin
getLine :: IO String
`String` is a `Monoid`, since a `String` is just a list of characters, so we should be able to `mappend` multiple `getLine` statements together. Let's see what happens:
>>> getLine -- Reads one line of input
Hello<Enter>
"Hello"
>>> getLine <> getLine
ABC<Enter>
DEF<Enter>
"ABCDEF"
>>> getLine <> getLine <> getLine
1<Enter>
23<Enter>
456<Enter>
"123456"
Neat! When we combine multiple commands we combine their effects and their results.
Of course, we don't have to limit ourselves to reading strings. We can use `readLn` from the Prelude to read in anything that implements the `Read` type class:
-- Parse a `Read`able value from one line of stdin
readLn :: Read a => IO a
All we have to do is tell the compiler which type `a` we intend to `Read` by providing a type signature:
>>> readLn :: IO (Int, Int)
(1, 2)<Enter>
(1,2)
>>> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(4,6)
>>> readLn <> readLn <> readLn :: IO (Int, Int)
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)
This works because:
- `Int` is a `Monoid`
- Therefore, `(Int, Int)` is a `Monoid`
- Therefore, `IO (Int, Int)` is a `Monoid`
Or let's flip things around and nest `IO` actions inside of a tuple:
>>> let ios = (getLine, readLn) :: (IO String, IO (Int, Int))
>>> let (getLines, readLns) = ios <> ios <> ios
>>> getLines
1<Enter>
23<Enter>
456<Enter>
"123456"
>>> readLns
(1,2)<Enter>
(3,4)<Enter>
(5,6)<Enter>
(9,12)
We can very easily reason that the type `(IO String, IO (Int, Int))` obeys the `Monoid` laws because:
- `String` is a `Monoid`
- If `String` is a `Monoid`, then `IO String` is also a `Monoid`
- `Int` is a `Monoid`
- If `Int` is a `Monoid`, then `(Int, Int)` is also a `Monoid`
- If `(Int, Int)` is a `Monoid`, then `IO (Int, Int)` is also a `Monoid`
- If `IO String` is a `Monoid` and `IO (Int, Int)` is a `Monoid`, then `(IO String, IO (Int, Int))` is also a `Monoid`
However, we don't really have to reason about this at all. The compiler will automatically assemble the correct `Monoid` instance for us. The only thing we need to verify is that the primitive `Monoid` instances obey the `Monoid` laws, and then we can trust that any larger `Monoid` instance the compiler derives will also obey the `Monoid` laws.
The Unit Monoid
The Haskell Prelude also provides the `putStrLn` function, which echoes a `String` to standard output with a newline:
putStrLn :: String -> IO ()
Is `putStrLn` combinable? There's only one way to find out!
>>> putStrLn "Hello" <> putStrLn "World"
Hello
World
Interesting, but why does that work? Well, let's look at the types of the commands we are combining:
putStrLn "Hello" :: IO ()
putStrLn "World" :: IO ()
Well, we said that `IO b` is a `Monoid` if `b` is a `Monoid`, and `b` in this case is `()` (pronounced "unit"), which you can think of as an "empty tuple". Therefore, `()` must form a `Monoid` of some sort, and if we dig into `Data.Monoid`, we will discover the following `Monoid` instance:
-- Exercise: Prove the monoid laws for `()`
instance Monoid () where
    mempty = ()
    mappend () () = ()
This says that empty tuples form a trivial `Monoid`, since there's only one possible value (ignoring bottom) for an empty tuple: `()`. Therefore, we can derive that `IO ()` is a `Monoid` because `()` is a `Monoid`.
Functions
Alright, so we can combine `putStrLn "Hello"` with `putStrLn "World"`, but can we combine naked `putStrLn` functions?
>>> (putStrLn <> putStrLn) "Hello"
Hello
Hello
Woah, how does that work?
We never wrote a `Monoid` instance for the type `String -> IO ()`, yet somehow the compiler magically accepted the above code and produced a sensible result.

This works because of the following `Monoid` instance for functions:
instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty
    mappend f g = \a -> mappend (f a) (g a)
This says: "If b
is a Monoid
, then any function that returns a b
is also a Monoid
".
The compiler then deduced that:
- `()` is a `Monoid`
- If `()` is a `Monoid`, then `IO ()` is also a `Monoid`
- If `IO ()` is a `Monoid`, then `String -> IO ()` is also a `Monoid`
The compiler is a trusted friend, deducing `Monoid` instances we never knew existed.
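In fact, we can replay the compiler's reasoning by hand. The following derivation just unfolds the two instances above, step by step, to show why `(putStrLn <> putStrLn) "Hello"` prints twice:

(putStrLn <> putStrLn) "Hello"

-- mappend f g = \a -> mappend (f a) (g a)
= mappend (putStrLn "Hello") (putStrLn "Hello")

-- `mappend` for `IO ()` runs both actions, then combines their results
= do () <- putStrLn "Hello"
     () <- putStrLn "Hello"
     return (mappend () ())

-- mappend () () = ()
= putStrLn "Hello" >> putStrLn "Hello"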
Monoid plugins
Now we have enough building blocks to assemble a non-trivial example. Let's build a key logger with a `Monoid`-based plugin system.

The central scaffold of our program is a simple `main` loop that echoes characters from standard input to standard output:
main = do
    hSetEcho stdin False
    forever $ do
        c <- getChar
        putChar c
However, we would like to intercept key strokes for nefarious purposes, so we will slightly modify this program to install a handler at the beginning of the program that we will invoke on every incoming character:
install :: IO (Char -> IO ())
install = ???

main = do
    hSetEcho stdin False
    handleChar <- install
    forever $ do
        c <- getChar
        handleChar c
        putChar c
Notice that the type of `install` is exactly the correct type to be a `Monoid`:
- `()` is a `Monoid`
- Therefore, `IO ()` is also a `Monoid`
- Therefore, `Char -> IO ()` is also a `Monoid`
- Therefore, `IO (Char -> IO ())` is also a `Monoid`
Therefore, we can combine key logging plugins together using `Monoid` operations. Here is one such example:

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    hSetBuffering handle NoBuffering
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c
Now, every key stroke will be recorded to both `file1.txt` and `file2.txt`. Let's confirm that this works as expected:
$ ./logger
Test<Enter>
ABC<Enter>
42<Enter>
<Ctrl-C>
$ cat file1.txt
Test
ABC
42
$ cat file2.txt
Test
ABC
42
Try writing your own `Plugin`s and mixing them in with `(<>)` to see what happens. "Appendix C" contains the complete code for this section so you can experiment with your own `Plugin`s.
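For instance, here is a minimal sketch of one more plugin (a hypothetical example of mine, not from the appendix): it echoes every keystroke to standard error, and it mixes in with `logTo` using the same `(<>)`:

-- A hypothetical extra plugin: echo every keystroke to stderr
logToStderr :: Plugin
logToStderr = return (hPutChar stderr)

-- Mix it in alongside the file loggers:
-- handleChar <- logTo "file1.txt" <> logToStderr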
Applicatives
Notice that I never actually proved the `Monoid` laws for the following two `Monoid` instances:
instance Monoid b => Monoid (a -> b) where
    mempty = \_ -> mempty
    mappend f g = \a -> mappend (f a) (g a)

instance Monoid b => Monoid (IO b) where
    mempty = return mempty

    mappend io1 io2 = do
        a1 <- io1
        a2 <- io2
        return (mappend a1 a2)
The reason why is that they are both special cases of a more general pattern. We can detect the pattern if we rewrite both of them to use the `pure` and `liftA2` functions from `Control.Applicative`:
import Control.Applicative (pure, liftA2)

instance Monoid b => Monoid (a -> b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

instance Monoid b => Monoid (IO b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
This works because both `IO` and functions implement the following `Applicative` interface:
class Functor f => Applicative f where
    pure  :: a -> f a
    (<*>) :: f (a -> b) -> f a -> f b

-- Lift a binary function over the functor `f`
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2 f x y = (pure f <*> x) <*> y
... and all `Applicative` instances must obey several `Applicative` laws:
pure id <*> v = v

((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)

pure f <*> pure x = pure (f x)

u <*> pure y = pure (\f -> f y) <*> u
These laws may seem a bit ad hoc, but this paper explains that you can reorganize the `Applicative` class into the following equivalent type class:
class Functor f => Monoidal f where
    unit :: f ()
    (#)  :: f a -> f b -> f (a, b)
Then the corresponding laws become much more symmetric:
fmap snd (unit # x) = x                      -- Left identity

fmap fst (x # unit) = x                      -- Right identity

fmap assoc ((x # y) # z) = x # (y # z)       -- Associativity
  where
    assoc ((a, b), c) = (a, (b, c))

fmap (f *** g) (x # y) = fmap f x # fmap g y -- Naturality
  where
    (f *** g) (a, b) = (f a, g b)
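For reference, the two formulations are interdefinable. Here is a sketch of the translation in both directions (standard material, spelled out for concreteness):

-- From `Applicative` to `Monoidal`:
unit  = pure ()
x # y = liftA2 (,) x y

-- From `Monoidal` back to `Applicative`:
pure a  = fmap (\() -> a) unit
f <*> x = fmap (\(g, a) -> g a) (f # x)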
I personally prefer the `Monoidal` formulation, but you go to war with the army you have, so we will use the `Applicative` type class for this post.
All `Applicative`s possess a very powerful property: they can all automatically lift `Monoid` operations using the following instance:
instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
This says: "If f
is an Applicative
and b
is a Monoid
, then f b
is also a Monoid
." In other words, we can automatically extend any existing Monoid
with some new feature f
and get back a new Monoid
.
Note: The above instance is bad Haskell because it overlaps with other type class instances. In practice we have to duplicate the above code once for each `Applicative`. Also, for some `Applicative`s we may want a different `Monoid` instance.
We can prove that the above instance obeys the `Monoid` laws without knowing anything about `f` and `b`, other than the fact that `f` obeys the `Applicative` laws and `b` obeys the `Monoid` laws. These proofs are a little long, so I've included them in Appendix B.
Both `IO` and functions implement the `Applicative` type class:
instance Applicative IO where
    pure = return

    iof <*> iox = do
        f <- iof
        x <- iox
        return (f x)

instance Applicative ((->) a) where
    pure x = \_ -> x

    kf <*> kx = \a ->
        let f = kf a
            x = kx a
        in  f x
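As a quick sanity check, specializing `liftA2 mappend` to the function `Applicative` above recovers exactly the hand-written function `Monoid` from earlier:

liftA2 mappend f g

-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> f) <*> g

-- pure x = \_ -> x
-- kf <*> kx = \a -> (kf a) (kx a)
= (\a -> mappend (f a)) <*> g

-- kf <*> kx = \a -> (kf a) (kx a)
= \a -> mappend (f a) (g a)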
This means that we can kill two birds with one stone. Every time we prove the `Applicative` laws for some functor `F`:
instance Applicative F where ...
... we automatically prove that the following `Monoid` instance is correct for free:
instance Monoid b => Monoid (F b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
In the interest of brevity, I will skip the proofs of the `Applicative` laws, but I may cover them in a subsequent post.

The beauty of `Applicative` `Functor`s is that every new `Applicative` instance we discover adds a new building block to our `Monoid` toolbox, and Haskell programmers have already discovered lots of `Applicative` `Functor`s.
Revisiting tuples
One of the very first `Monoid` instances we wrote was:
instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty = (mempty, mempty)
    mappend (x1, y1) (x2, y2) = (mappend x1 x2, mappend y1 y2)
Check this out:
instance (Monoid a, Monoid b) => Monoid (a, b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
This `Monoid` instance is yet another special case of the `Applicative` pattern we just covered!

This works because of the following `Applicative` instance in `Control.Applicative`:
instance Monoid a => Applicative ((,) a) where
    pure b = (mempty, b)
    (a1, f) <*> (a2, x) = (mappend a1 a2, f x)
This instance obeys the `Applicative` laws (proof omitted), so our `Monoid` instance for tuples is automatically correct, too.
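Here is a small illustration of this `Applicative` at work (a GHCi session of mine, with `Control.Applicative` in scope): the first components combine monoidally while the function in the second component gets applied:

>>> ("a", (+ 1)) <*> ("b", 2)
("ab",3)
>>> liftA2 mappend ("foo", [1]) ("bar", [2])
("foobar",[1,2])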
Composing applicatives
In the very first section I wrote:
Haskell programmers prove large-scale properties exactly the same way we build large-scale programs:
- We build small proofs that we can verify correct in isolation
- We compose smaller proofs into larger proofs
I don't like to use the word compose lightly. In the context of category theory, compose has a very rigorous meaning, indicating composition of morphisms in some category. This final section will show that we can actually compose `Monoid` proofs in a very rigorous sense of the word.
We can define a category of `Monoid` proofs:
- Objects are types and their associated `Monoid` proofs
- Morphisms are `Applicative` `Functor`s
- The identity morphism is the `Identity` applicative
- The composition operation is composition of `Applicative` `Functor`s
- The category laws are isomorphisms instead of equalities
So in our `Plugin` example, we began from the proof that `()` was a `Monoid` and then composed three `Applicative` morphisms to prove that `Plugin` was a `Monoid`. I will use the following diagram to illustrate this:
+-----------------------+
| |
| Legend: * = Object |
| |
| v |
| | = Morphism |
| v |
| |
+-----------------------+
* `()` is a `Monoid`
v
| IO
v
* `IO ()` is a `Monoid`
v
| ((->) Char)
v
* `Char -> IO ()` is a `Monoid`
v
| IO
v
* `IO (Char -> IO ())` (i.e. `Plugin`) is a `Monoid`
Therefore, we were literally composing proofs together.
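This composition of `Applicative`s is not just a figure of speech: it is witnessed by a concrete type, defined in `Data.Functor.Compose` (from the `transformers` package, and in later GHCs in `base`):

newtype Compose f g a = Compose { getCompose :: f (g a) }

instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure = Compose . pure . pure
    Compose f <*> Compose x = Compose (liftA2 (<*>) f x)

Each `Applicative` layer we wrap around a `Monoid` corresponds to one arrow in the diagram above.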
Conclusion
You can equationally reason at scale by decomposing larger proofs into smaller reusable proofs, the same way we decompose programs into smaller and more reusable components. There is no limit to how many proofs you can compose together, and therefore there is no limit to how complex of a program you can tame using equational reasoning.
This post only gave one example of composing proofs within Haskell. The more you learn the language, the more examples of composable proofs you will encounter. Another common example is automatically deriving `Monad` proofs by composing monad transformers.
As you learn Haskell, you will discover that the hard part is not proving things. Rather, the challenge is learning how to decompose proofs into smaller proofs and you can cultivate this skill by studying category theory and abstract algebra. These mathematical disciplines teach you how to extract common and reusable proofs and patterns from what appears to be disposable and idiosyncratic code.
Appendix A - Missing Monoid instances

These `Monoid` instances from this post do not actually appear in the Haskell standard library:
instance Monoid b => Monoid (IO b)
instance Monoid Int
The first instance was recently proposed here on the Glasgow Haskell Users mailing list. However, in the short term you can work around it by writing your own `Monoid` instances by hand, just by inserting a sufficient number of `pure`s and `liftA2`s.
For example, suppose we wanted to provide a `Monoid` instance for `Plugin`. We would just newtype `Plugin` and write:

newtype Plugin = Plugin { install :: IO (Char -> IO ()) }

instance Monoid Plugin where
    mempty = Plugin (pure (pure (pure mempty)))

    mappend (Plugin p1) (Plugin p2) =
        Plugin (liftA2 (liftA2 (liftA2 mappend)) p1 p2)
This is what the compiler would have derived for us.
Alternatively, you could define an orphan `Monoid` instance for `IO`, but this is generally frowned upon.
There is no default `Monoid` instance for `Int` because there are actually two possible instances to choose from:
-- Alternative #1
instance Monoid Int where
    mempty  = 0
    mappend = (+)

-- Alternative #2
instance Monoid Int where
    mempty  = 1
    mappend = (*)
So instead, `Data.Monoid` sidesteps the issue by providing two newtypes to distinguish which instance we prefer:
newtype Sum a = Sum { getSum :: a }
instance Num a => Monoid (Sum a)

newtype Product a = Product { getProduct :: a }
instance Num a => Monoid (Product a)
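The two wrappers select the two instances explicitly:

>>> getSum (Sum 4 <> Sum 2)
6
>>> getProduct (Product 4 <> Product 2)
8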
An even better solution is to use a semiring, which allows two `Monoid` instances to coexist for the same type. You can think of Haskell's `Num` class as an approximation of the semiring class:
class Num a where
    fromInteger :: Integer -> a

    (+) :: a -> a -> a
    (*) :: a -> a -> a

    -- ... and other operations unrelated to semirings
Note that we can also lift the `Num` class over the `Applicative` class, exactly the same way we lifted the `Monoid` class. Here's the code:
instance (Applicative f, Num a) => Num (f a) where
    fromInteger n = pure (fromInteger n)

    (+) = liftA2 (+)
    (*) = liftA2 (*)
    (-) = liftA2 (-)

    negate = fmap negate
    abs    = fmap abs
    signum = fmap signum
This lifting guarantees that if `a` obeys the semiring laws, then so will `f a`. Of course, you will have to specialize the above instance to every concrete `Applicative` because otherwise you will get overlapping instances.
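For instance, here is a sketch of that instance specialized to functions (the `pure`, `fmap`, and `liftA2` below are the function instances); numeric literals and arithmetic then work pointwise:

import Control.Applicative (pure, liftA2)

instance Num b => Num (a -> b) where
    fromInteger = pure . fromInteger

    (+) = liftA2 (+)
    (*) = liftA2 (*)
    (-) = liftA2 (-)

    negate = fmap negate
    abs    = fmap abs
    signum = fmap signum

-- >>> (id + (* 2)) 3
-- 9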
Appendix B
These are the proofs to establish that the following `Monoid` instance obeys the `Monoid` laws:
instance (Applicative f, Monoid b) => Monoid (f b) where
    mempty  = pure mempty
    mappend = liftA2 mappend
... meaning that if `f` obeys the `Applicative` laws and `b` obeys the `Monoid` laws, then `f b` also obeys the `Monoid` laws.
Proof of the left identity law:
mempty <> x
-- x <> y = mappend x y
= mappend mempty x
-- mappend = liftA2 mappend
= liftA2 mappend mempty x
-- mempty = pure mempty
= liftA2 mappend (pure mempty) x
-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> pure mempty) <*> x
-- Applicative law: pure f <*> pure x = pure (f x)
= pure (mappend mempty) <*> x
-- Eta conversion
= pure (\a -> mappend mempty a) <*> x
-- mappend mempty x = x
= pure (\a -> a) <*> x
-- id = \x -> x
= pure id <*> x
-- Applicative law: pure id <*> v = v
= x
Proof of the right identity law:
x <> mempty
-- x <> y = mappend x y
= mappend x mempty
-- mappend = liftA2 mappend
= liftA2 mappend x mempty
-- mempty = pure mempty
= liftA2 mappend x (pure mempty)
-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> x) <*> pure mempty
-- Applicative law: u <*> pure y = pure (\f -> f y) <*> u
= pure (\f -> f mempty) <*> (pure mappend <*> x)
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> pure (\f -> f mempty)) <*> pure mappend) <*> x
-- Applicative law: pure f <*> pure x = pure (f x)
= (pure ((.) (\f -> f mempty)) <*> pure mappend) <*> x
-- Applicative law : pure f <*> pure x = pure (f x)
= pure ((.) (\f -> f mempty) mappend) <*> x
-- `(.) f g` is just prefix notation for `f . g`
= pure ((\f -> f mempty) . mappend) <*> x
-- f . g = \x -> f (g x)
= pure (\x -> (\f -> f mempty) (mappend x)) <*> x
-- Apply the lambda
= pure (\x -> mappend x mempty) <*> x
-- Monoid law: mappend x mempty = x
= pure (\x -> x) <*> x
-- id = \x -> x
= pure id <*> x
-- Applicative law: pure id <*> v = v
= x
Proof of the associativity law:
(x <> y) <> z
-- x <> y = mappend x y
= mappend (mappend x y) z
-- mappend = liftA2 mappend
= liftA2 mappend (liftA2 mappend x y) z
-- liftA2 f x y = (pure f <*> x) <*> y
= (pure mappend <*> ((pure mappend <*> x) <*> y)) <*> z
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> pure mappend) <*> (pure mappend <*> x)) <*> y) <*> z
-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> (pure mappend <*> x)) <*> y) <*> z
where
f = (.) mappend
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((((pure (.) <*> pure f) <*> pure mappend) <*> x) <*> y) <*> z
where
f = (.) mappend
-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> y) <*> z
where
f = (.) ((.) mappend)
-- Applicative law: pure f <*> pure x = pure (f x)
= ((pure f <*> x) <*> y) <*> z
where
f = (.) ((.) mappend) mappend
-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
where
f = ((.) mappend) . mappend
-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
where
f x = (((.) mappend) . mappend) x
-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
where
f x = (.) mappend (mappend x)
-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
where
f x = mappend . (mappend x)
-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
where
f x y = (mappend . (mappend x)) y
-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
where
f x y = mappend (mappend x y)
-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
where
f x y z = mappend (mappend x y) z
-- Monoid law: mappend (mappend x y) z = mappend x (mappend y z)
= ((pure f <*> x) <*> y) <*> z
where
f x y z = mappend x (mappend y z)
-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
where
f x y z = (mappend x . mappend y) z
-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
where
f x y = mappend x . mappend y
-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
where
f x y = (.) (mappend x) (mappend y)
-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
where
f x y = (((.) . mappend) x) (mappend y)
-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
where
f x y = ((((.) . mappend) x) . mappend) y
-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
where
f x = (((.) . mappend) x) . mappend
-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
where
f x = (.) (((.) . mappend) x) mappend
-- Lambda abstraction
= ((pure f <*> x) <*> y) <*> z
where
f x = (\k -> k mappend) ((.) (((.) . mappend) x))
-- (f . g) x = f (g x)
= ((pure f <*> x) <*> y) <*> z
where
f x = (\k -> k mappend) (((.) . ((.) . mappend)) x)
-- Eta conversion
= ((pure f <*> x) <*> y) <*> z
where
f = (\k -> k mappend) . ((.) . ((.) . mappend))
-- (.) f g = f . g
= ((pure f <*> x) <*> y) <*> z
where
f = (.) (\k -> k mappend) ((.) . ((.) . mappend))
-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure g <*> pure f) <*> x) <*> y) <*> z
where
g = (.) (\k -> k mappend)
f = (.) . ((.) . mappend)
-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (\k -> k mappend)) <*> pure f) <*> x) <*> y) <*> z
where
f = (.) . ((.) . mappend)
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (\k -> k mappend) <*> (pure f <*> x)) <*> y) <*> z
where
f = (.) . ((.) . mappend)
-- u <*> pure y = pure (\k -> k y) <*> u
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
where
f = (.) . ((.) . mappend)
-- (.) f g = f . g
= (((pure f <*> x) <*> pure mappend) <*> y) <*> z
where
f = (.) (.) ((.) . mappend)
-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure g <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
where
g = (.) (.)
f = (.) . mappend
-- Applicative law: pure f <*> pure x = pure (f x)
= (((((pure (.) <*> pure (.)) <*> pure f) <*> x) <*> pure mappend) <*> y) <*> z
where
f = (.) . mappend
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (((pure (.) <*> (pure f <*> x)) <*> pure mappend) <*> y) <*> z
where
f = (.) . mappend
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
where
f = (.) . mappend
-- (.) f g = f . g
= ((pure f <*> x) <*> (pure mappend <*> y)) <*> z
where
f = (.) (.) mappend
-- Applicative law: pure f <*> pure x = pure (f x)
= (((pure f <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z
where
f = (.) (.)
-- Applicative law: pure f <*> pure x = pure (f x)
= ((((pure (.) <*> pure (.)) <*> pure mappend) <*> x) <*> (pure mappend <*> y)) <*> z
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= ((pure (.) <*> (pure mappend <*> x)) <*> (pure mappend <*> y)) <*> z
-- Applicative law: ((pure (.) <*> u) <*> v) <*> w = u <*> (v <*> w)
= (pure mappend <*> x) <*> ((pure mappend <*> y) <*> z)
-- liftA2 f x y = (pure f <*> x) <*> y
= liftA2 mappend x (liftA2 mappend y z)
-- mappend = liftA2 mappend
= mappend x (mappend y z)
-- x <> y = mappend x y
= x <> (y <> z)
Appendix C: Monoid key logging

Here is the complete program for a key logger with a `Monoid`-based plugin system:
import Control.Applicative (pure, liftA2)
import Control.Monad (forever)
import Data.Monoid
import System.IO

instance Monoid b => Monoid (IO b) where
    mempty  = pure mempty
    mappend = liftA2 mappend

type Plugin = IO (Char -> IO ())

logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- openFile filePath WriteMode
    hSetBuffering handle NoBuffering
    return (hPutChar handle)

main = do
    hSetEcho stdin False
    handleChar <- logTo "file1.txt" <> logTo "file2.txt"
    forever $ do
        c <- getChar
        handleChar c
        putChar c
Comments

I got empty logfiles using the program in Appendix C. Adding "hSetBuffering handle NoBuffering" to logTo resolved the problem... not sure if there's a better solution?

I fixed it.

Yes, there is a better solution, which is to use the `Managed` applicative instead of the `IO` applicative for that particular part. This is an `Applicative` that my `mvc` library defines, but I just split it out into a separate library today because this use case for it comes up a lot:
http://www.haskellforall.com/2014/08/managed-100-monad-for-managed-resources.html
Using the `Managed` Applicative the new type of `Plugin` would be:
type Plugin = Managed (Char -> IO ())
... and the implementation of the logger would be:
logTo :: FilePath -> Plugin
logTo filePath = do
    handle <- managed (withFile filePath WriteMode)
    return (hPutChar handle)
`withFile` will then properly tear down the file when you interrupt the program, so you don't need to worry about disabling buffering.
But functions /are/ proofs.

Thank you, Gabriel, for another great post. When I try to understand some [relatively small] piece of code I really miss some "automated rewriting system" specifically for Haskell and equational reasoning (yes, I know that there are Coq, Agda, Isabelle and so on). Have you ever thought about such a system (maybe it even exists somewhere but I don't really know)?

Yes, I've really wanted such a rewriting system myself. I'm not aware of one, though.

Thank you very much for another excellent article. The interface for Applicative comes from replacing the explicit types and constructors () and (,) in Monoidal with things that those types and constructors can be passed in to, to recover Monoidal. Applicative then takes the weird step of replacing liftA2 in the interface with function application lifted into the Monoidal:

(<*>) = liftA2

From the category theory side of things, Applicative makes more sense if you consider it in terms of pure and liftA2.
That should have been:

(<*>) = liftA2 ($)
Yeah, I agree that `liftA2` is probably more fundamental. It certainly is much easier to write `Applicative` instances in terms of `liftA2`.

I just used monoids to compose functions for the first time. So thank you for the super helpful mind extension. Also, I've read some of your posts here and on reddit about using categories for program architecture. While this seems to be appropriate for library design (like your pipes library), I'm struggling to see where this fits into classic enterprise problems. Things like User, PurchaseOrder, LineItem, and Account aren't really monoids or applicatives. Just wondering if you'd had any thoughts about this type of problem.

Yeah, I have thought about more mundane data types like that. You can always turn any type into a `Monoid` just by wrapping it with `Maybe` and then defining `mappend` to take the first `Just` result (or `Nothing` if neither of them are `Just`) and `mempty` is `Nothing`. This is sort of like the monoid of last resort, and it already exists in `Data.Monoid` as the `First` monoid. It also exists for `Maybe`, too, as the `Alternative`/`MonadPlus` instances for `Maybe`, which also do the same thing.

That particular monoid is useful for combining a bunch of partially-completed records into a new record. However, you lose information; specifically, you no longer record in the types whether or not a field is defined. There is a variation on this monoid that does record that information in the types so that you can see at a glance from the inferred type which fields are complete and which fields are still incomplete, but it's more difficult to implement because it requires advanced type extensions. However, I'll give a rough sketch of what the types would look like. You'd have records whose types would indicate which fields are defined or undefined:
-- First field defined
record1 :: Record True False
-- Second field defined
record2 :: Record False True
... and then you would have something analogous to `mappend` that combines records together and deduces which fields are still defined:
mappend :: Record a1 b1 -> Record a2 b2 -> Record (a1 || a2) (b1 || b2)
... and something analogous to `mempty` of type:
mempty :: Record False False
So if you combined the above two records, the compiler would deduce that the resulting type was fully defined:
mappend record1 record2 :: Record True True
You can actually make that valid Haskell using the `DataKinds` and `TypeFamilies` extension, so it's totally possible to implement. The hard part is making it work recursively over nested records or tuples of defined values the way that the `Monoid` instances worked in this post.
So it's possible in theory and it gets much better in a dependently typed language like Idris.