Lambdalog

Sean Seefried's programming blog

12 Dec 2014

File system snapshots make build scripts easy

or, how Docker can relieve the pain of developing long running build scripts

I think I’ve found a pretty compelling use case for Docker. But before you think that this is yet another blog post parroting the virtues of Docker, I’d like to make clear that this post is really about the virtues of treating your file system as a persistent data structure. Thus, the insights of this post are equally applicable to other copy-on-write filesystems such as btrfs and ZFS.

The problem

Let’s start with the problem I was trying to solve. I was developing a long running build script that consisted of numerous steps.

  • It took 1-2 hours to run.
  • It downloaded many fairly large files from the Internet. (One exceeded 300M.)
  • Later stages depended heavily on libraries built in earlier stages.

But the most salient feature was that it took a long time to run.

Filesystems are inherently stateful

We typically interact with filesystems in a stateful way. We might add, delete or move a file. We might change a file’s permissions or its access times. In isolation most actions can be undone. e.g. you can move a file back to its original location after having moved it somewhere else. What we don’t typically do is take a snapshot and later revert to that state. This post will suggest that making more use of this feature can be a great boon to developing long running build scripts.

Snapshots using union mounts

Docker uses a union filesystem called AUFS. A union filesystem implements what is known as a union mount. As the name suggests, this means that the files and directories of separate filesystems are layered on top of each other, forming a single coherent filesystem. This is done in a hierarchical manner. If a file appears in two filesystems, the one further up the hierarchy is the one presented. (The version of the file further down the hierarchy is there, unchanged, but invisible.)

Docker calls each filesystem in the union mount a layer. The upshot of using this technology is that it implements snapshots as a side effect. Each snapshot is simply a union mount of all the layers up to a certain point in the hierarchy.
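To make this concrete, here is roughly what a two-layer union mount looks like when done by hand with AUFS (the paths are illustrative; Docker manages all of this for you):

# Writes go to the upper branch; the lower branch stays untouched.
mkdir -p /tmp/lower /tmp/upper /tmp/union
mount -t aufs -o br=/tmp/upper=rw:/tmp/lower=ro none /tmp/union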

Snapshots for build scripts

Snapshots make developing a long-running build script a dream. The general idea is to break the script up into smaller scripts (which I like to call scriptlets) and run each one individually, snapshotting the filesystem after each one is run. (Docker does this automatically.) If a scriptlet fails, you simply go back to the last snapshot (still in its pristine state!) and try again. Once you have completed your build script you have very high assurance that the script works and can now be distributed to others.

Contrast this with what would happen if you weren’t using snapshots. Except for those among us with monk-like patience, no one is going to run their build script from scratch when it fails an hour and a half into building. Naturally, we’ll try our best to put the system back into the state it was in before we tried to build the component that failed last time. e.g. we might delete a directory or run a make clean.

However, we might not have perfect understanding of the component we’re trying to build. It might have a complicated Makefile that puts files in places on the file system which we are unaware of. The only way to be truly sure is to revert to a snapshot.

Using Docker for snapshotted build scripts

In this section I’ll cover how I used Docker to implement a build script for a GHC 7.8.3 ARM cross compiler. Docker was pretty good for this task, but not perfect. I did some things that might look wasteful or inelegant but were necessary in order to keep the total time developing the script to a minimum. The build script can be found here.

Building with a Dockerfile

Docker reads from a file called Dockerfile to build images. A Dockerfile contains a small vocabulary of commands to specify what actions should be performed. A complete reference can be found here. The main ones used in my script are WORKDIR, ADD, and RUN. The ADD command is particularly useful because it allows you to add files that are external to the current Docker image into the image’s filesystem before running them. You can see the many scriptlets that make up the build script here.

Design

1. ADD scriptlets just before you RUN them.

If you ADD all the scriptlets too early in the Dockerfile you may run into the following problem: your script fails, you go back to modify the scriptlet and you run docker build . again. But you find that Docker starts rebuilding from the point where the scriptlets were first added! This wastes a lot of time and defeats the purpose of using snapshots.

The reason this happens is because of how Docker tracks its intermediate images (snapshots). As Docker steps through the Dockerfile it compares the current command with an intermediate image to see if there is a match. However, in the case of the ADD command the contents of the files being put into the image are also examined. This makes sense. If the files have changed with respect to an existing intermediate image, Docker has no choice but to build a new image from that point onwards. There’s just no way it can know that those changes don’t affect the build. Even if they wouldn’t, it must be conservative.
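A minimal sketch of the pattern (the base image and scriptlet names here are illustrative, not the ones from my repository):

FROM ubuntu:12.04
WORKDIR /build

# Each scriptlet is ADDed immediately before the RUN that needs it.
# Editing build-stage2.sh now invalidates only the snapshots from that
# point onwards; the stage 1 snapshots remain valid.
ADD build-stage1.sh /build/build-stage1.sh
RUN bash /build/build-stage1.sh

ADD build-stage2.sh /build/build-stage2.sh
RUN bash /build/build-stage2.sh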

Also, beware using RUN commands that would cause different changes to the filesystem each time they are run. In this case Docker will find the intermediate image and use it, but this will be the wrong thing for it to do. RUN commands must cause the same change to the filesystem each time they are run. As an example, I ensured that in my scriptlets I always downloaded a known version of a file with a specific MD5 checksum.
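For instance, a download scriptlet can be made deterministic along these lines (the URL and checksum are placeholders):

#!/bin/bash
# Fetch a pinned release and verify its checksum so that this scriptlet
# changes the filesystem identically on every run.
URL=http://example.com/sources/ncurses-5.9.tar.gz
MD5=d41d8cd98f00b204e9800998ecf8427e   # placeholder checksum
wget -c "$URL"
echo "$MD5  ncurses-5.9.tar.gz" | md5sum -c - || exit 1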

A more detailed explanation of Docker’s build cache can be found here.

2. Don’t use the ENV command to set environment variables. Use a scriptlet.

It may seem tempting to use the ENV command to set up all the environment variables you need for your build script. However, it does not perform variable substitution the way a shell would. e.g. ENV BASE=$HOME/base will set BASE to the literal value $HOME/base, which is probably not what you want.
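To illustrate (a two-line fragment, not from my actual Dockerfile):

ENV BASE=$HOME/base
RUN echo "$BASE"    # prints the literal string $HOME/base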

Instead I used the ADD command to add a file called set-env.sh. This file is included in each subsequent scriptlet with:

THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source $THIS_DIR/set-env.sh
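The contents of set-env.sh are just ordinary exports, along these lines (the variables shown are illustrative):

export BASE=$HOME/base
export PREFIX=$BASE/prefix
export PATH=$PREFIX/bin:$PATH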

What if you don’t get set-env.sh right the first time around? Since it’s added so early in the Dockerfile, doesn’t this mean that modifying it would invalidate all subsequent snapshots?

Yes, and this leads to some inelegance. While developing the script I discovered that I’d missed adding a useful environment variable in set-env.sh. The solution was to create a new file set-env-1.sh containing:

THIS_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
source $THIS_DIR/set-env.sh

# When config.sub isn't present, default CONFIG_SUB_SRC to the
# ncurses source directory (if it wasn't already set).
if ! [ -e "$CONFIG_SUB_SRC/config.sub" ] ; then
    CONFIG_SUB_SRC=${CONFIG_SUB_SRC:-$NCURSES_SRC}
fi

I then included this file in all subsequent scriptlets. Now that I have finished the build script I could go back and fix this up but, in a sense, it would defeat the purpose. I would have to run the build script from scratch to see if this change worked.

Drawbacks

The one major drawback to this approach is that the resulting image is larger than it needs to be. This is especially true in my case because I remove a large number of files at the end of the build. Those files are still present in a lower layer of the union mount, so the final image is bloated by at least their total size.

However, there is a work-around. I did not publish this image to the Docker Hub Registry. Instead, I:

  • used docker export to export the contents as a tar archive.
  • created a new Dockerfile that simply added the contents of this tar archive.

The resulting image was as small as it could be.
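Sketched as commands (the image and file names are illustrative):

# Run a throwaway container from the final image and export its filesystem.
docker run --name ghc-build ghc-arm-cross true
docker export ghc-build > rootfs.tar

# Then build from a Dockerfile containing just:
#   FROM scratch
#   ADD rootfs.tar /
docker build -t ghc-arm-cross:flat .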

Conclusion

The advantage of this approach is two-fold:

  • it keeps development time to a minimum. No longer do you have to sit through builds of sub-components that you already know succeed. You can focus on the bits that are still giving you grief.

  • it is great for maintaining a build script. There is a chance that the odd RUN command changes its behaviour over time (even though it shouldn’t). The build may fail, but at least you don’t have to go back to the beginning once you’ve fixed the Dockerfile.

Also, as I alluded to earlier in the post, Docker merely makes writing these build scripts easier. With the right tooling the same could be accomplished on any filesystem that provides snapshots.

Happy building!

Tagged as: Docker, snapshots, union mount, union filesystem, Haskell, GHC.

27 Jun 2011

Generic dot products

Companion slides

This is the first in a series of posts about program derivation. In particular, I am attempting to derive a matrix multiplication algorithm that runs efficiently on parallel architectures such as GPUs.

As I mentioned in an earlier post, I’ve been contributing to the Accelerate project. The Accelerate EDSL defines various parallel primitives such as map, fold, and scan (and many more).

The scan primitive (also known as all-prefix-sums) is quite famous because it is useful in a wide range of parallel algorithms and, at first glance, one could be forgiven for thinking it is not amenable to parallelisation. However, a well-known work-efficient algorithm for scan, popularised by Guy Blelloch, performs only \(O(n)\) work.

The algorithm is undeniably clever. Looking at it, it is not at all obvious how one might have gone about developing it oneself. A recent series of posts by Conal Elliott set out to redress this situation by attempting to derive the algorithm from a high level specification. His success has inspired me to follow a similar process to derive a work efficient matrix multiplication algorithm.

The process I am following is roughly as follows:

  • generalise the concept of matrix multiplication to data structures other than lists or arrays.

  • develop a generic implementation that relies, as far as possible, on reusable algebraic machinery in type classes such as Functor, Applicative, Foldable and Traversable.

  • use this generic implementation as a specification to derive an efficient algorithm. To call it a hunch that the underlying data structure is going to be tree-like is an understatement.

This post is a preamble. It develops a generic dot product implementation that will serve as a specification for the derivation of an efficient algorithm in a later post.

Background

In order to understand this post I highly recommend that you read Conor McBride and Ross Paterson’s paper: Applicative programming with effects. A basic grasp of linear algebra would also be helpful.

What is a dot product?

In mathematics the dot product is usually defined on vectors. Given two vectors of length \(n\), \((a_1, \dots, a_n)\) and \((b_1, \dots, b_n)\) the dot product is defined as:

\(a_1 b_1 + \dots + a_n b_n\)

or without the use of the pernicious “\(\dots\)”:

\(\sum_{i=1}^{n}{a_i b_i}\)

The implementation for lists is fairly straightforward.

dot :: Num a => [a] -> [a] -> a
dot xs ys = foldl (+) 0 (zipWith (*) xs ys)

This definition will work just fine on two lists of different length, owing to the definition of zipWith.

zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith f [] _ = []
zipWith f _ [] = []
zipWith f (x:xs) (y:ys) = f x y : zipWith f xs ys
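For example, the longer list is silently truncated:

dot [1,2,3] [4,5,6,7]  -- == 32; the trailing 7 is ignored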

This is fine for lists but will become problematic later when we look at other data structures.

There is no reason that this definition shouldn’t be extended to other data structures such as \(n\)-dimensional arrays or even trees. Let’s look at how we might define dot products on trees.

Dot product on Trees

We define trees as follows (it does not really matter whether only the leaves contain numbers or whether branch nodes do too):

data Tree a = Leaf a | Branch (Tree a) (Tree a)

For the sake of succinctness, I will represent trees using nested pairs denoted with curly braces. e.g. Branch (Leaf 1) (Leaf 2) becomes {1,2}, Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)) becomes {1,{2,3}}.

What should be the dot product of {1,{2,3}} and {4,{5,6}}? A reasonable answer would be 1 * 4 + 2 * 5 + 3 * 6 == 32. For each leaf in the first tree find the corresponding leaf in the second tree, multiply them together and then sum all the results together.

This definition relies on the two trees having the same shape. To see why, let’s try to define a function in the style of zipWith for trees. Unfortunately, this is problematic.

zipWithT f (Leaf a) (Leaf b)           = Leaf (f a b)
zipWithT f (Branch s t) (Branch s' t') = Branch (zipWithT f s s')
                                                (zipWithT f t t')
zipWithT f (Leaf a) (Branch s' t')     = {- ? -} undefined
zipWithT f (Branch s t) (Leaf b)       = {- ? -} undefined

There’s a problem with the last two cases. While I won’t go so far as to say that there is no definition we could provide, it’s clear that there are a number of choices that could be taken. In each case one needs to take an arbitrary element from the Branch argument and apply function f to it and the Leaf argument.

Even if there is a definition that makes reasonable sense can we say whether it’s possible to provide a zipWith-like definition for an arbitrary data structure?

An alternative is to modify our data structures to contain types that represent the shape of the data structure. We can then define dot such that it must take two arguments of exactly the same shape.

Data structures with shapes

I’ll illustrate this approach with vectors first, before moving on to trees. Vectors are just lists with their length encoded in their type.

Vectors

First, we add some essentials to the top of our module.

{-# LANGUAGE GADTs, EmptyDataDecls, FlexibleInstances, DeriveFunctor, DeriveFoldable #-}
{-# LANGUAGE ScopedTypeVariables, FlexibleContexts, UndecidableInstances #-}

Now we define two new data types, Z and S, representing Peano numbers. Both data types are empty since we will never be using their values.

data Z
data S n

Now vectors.

infixr 5 `Cons`
data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

If you haven’t seen these data types before it’s worth noting that you can define total (vs partial) versions of head and tail. Trying to take the head of an empty vector simply doesn’t type check.

headVec :: Vec (S n) a -> a
headVec (Cons x _) = x

tailVec :: Vec (S n) a -> Vec n a
tailVec (Cons _ xs) = xs

We can now define zipWithV.

zipWithV :: (a -> b -> c) -> Vec n a -> Vec n b -> Vec n c
zipWithV f Nil Nil = Nil
zipWithV f (Cons x xs) (Cons y ys) = f x y `Cons` zipWithV f xs ys

Unfortunately, GHC’s type checker does not detect that a case such as the one below is impossible. (In fact, if your warnings are turned up high enough GHC will warn that two patterns are missing in the definition above.)

-- Although this pattern match is impossible GHC's type checker
-- won't complain
zipWithV f (Cons x xs) Nil = {- something -} undefined

Trees

The length of a tree is not a meaningful enough representation. Instead we represent its shape as nested tuples of the unit type, ().

data Tree sh a where
  Leaf   :: a -> Tree () a
  Branch :: Tree m a -> Tree n a -> Tree (m,n) a

For example:

{1,{2,3}} :: Tree ((),((),())) Integer

The new definition of zipWithT only differs in its type.

zipWithT :: (a -> b -> c) -> Tree sh a -> Tree sh b -> Tree sh c
zipWithT f (Leaf a)     (Leaf b)       = Leaf (f a b)
zipWithT f (Branch s t) (Branch s' t') = Branch (zipWithT f s s')
                                                (zipWithT f t t')

Now finish off the definitions:

foldlT :: (a -> b -> a) -> a -> Tree sh b -> a
foldlT f z (Leaf a)     = f z a
foldlT f z (Branch s t) = foldlT f (foldlT f z s) t

dotT :: Num a => Tree sh a -> Tree sh a -> a
dotT t1 t2 = foldlT (+) 0 (zipWithT (*) t1 t2)
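Revisiting the two trees from before:

dotT (Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3)))
     (Branch (Leaf 4) (Branch (Leaf 5) (Leaf 6)))
-- == 1 * 4 + 2 * 5 + 3 * 6 == 32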

Generalising to arbitrary data structures

Any seasoned Haskell veteran knows the utility of type classes such as Functor, Applicative, and Foldable. We have now seen that a dot product is essentially a zipWith followed by a fold. (It makes little difference whether it’s a left or a right fold.)

zipWith is really just liftA2 (found in module Control.Applicative) at the ZipList data structure. This leads us to the following definition:

dot :: (Num a, Foldable f, Applicative f) => f a -> f a -> a
dot x y = foldl (+) 0 (liftA2 (*) x y)

This function requires instances for Functor, Foldable and Applicative. Given that instances for the first two type classes are both easy to write (and in some cases derivable using Haskell’s deriving syntax), I will only discuss Applicative instances in this post. (The Functor and Foldable instances for vectors and shape-encoded trees are left as an exercise for the reader.)

One might reasonably wonder, must the two arguments to dot have the same shape as before? It turns out that, yes, they do and for similar reasons. I’ll demonstrate the point by looking at how to define Applicative instances for lists, vectors and trees.

Lists

The default Applicative instance for lists is unsuitable for a generic dot product. However, the Applicative instance on its wrapper type ZipList is adequate but has an unsatisfying definition for pure (to say the least).

instance Applicative ZipList where
  pure x = ZipList (repeat x)
  ZipList fs <*> ZipList xs = ZipList (zipWith id fs xs)

Of course, this is necessary for lists since we can’t guarantee that the two lists being combined have the same length. How else would you define pure so that the following works for a list xs of arbitrary length?

(+) <$> (pure 1) <*> (ZipList xs)

The definition of pure is much more satisfying for vectors.

Vectors

Obviously we want a definition of pure similar to the one for lists (ZipList). But we don’t want to produce an infinite structure, just a vector of the appropriate length.

Defining the Applicative instance for vectors leads us to an interesting observation which holds true in general. For any data structure which encodes its own shape:

  1. You need one instance of Applicative for each constructor of the data type.
  2. The instance heads must mirror the types of the constructors.

In the code below there are two instances and each instance head closely mirrors the data constructor’s type. e.g. Cons :: a -> Vec n a -> Vec (S n) a mirrors instance Applicative (Vec n) => Applicative (Vec (S n)).

instance Applicative (Vec Z) where
  pure _                            = Nil
  Nil <*> Nil                       = Nil

instance Applicative (Vec n) => Applicative (Vec (S n)) where
  pure a                            = a `Cons` pure a
  (fa `Cons` fas) <*> (a `Cons` as) = fa a `Cons` (fas <*> as)

That’s it. Function pure will produce a vector of just the right length.
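For example (writing the result in Cons form, since no Show instance was given):

pure 7 :: Vec (S (S Z)) Int
-- == 7 `Cons` 7 `Cons` Nil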

Trees

Unlike the case for lists, it’s hard to define an Applicative instance for non-shape-encoded trees. Let’s have a look.

instance Applicative Tree where
  pure a = Leaf a
  (Leaf fa) <*> (Leaf b) = Leaf (fa b)
  (Branch fa fb) <*> (Branch a b) = Branch (fa <*> a) (fb <*> b)
  (Leaf fa) <*> (Branch a b) = {- ? -} undefined
  (Branch fa fb) <*> (Leaf a) = {- ? -} undefined

This problem has been noticed before on the Haskell-beginners mailing list. The response is interesting because it mentions the “unpleasant property of returning infinite tree[s]”; the same problem we had with lists!

With shape-encoded trees this is not a problem. Function pure produces a tree of the appropriate shape. Also, note how the head of the second instance mirrors the definition of the Branch constructor (:: Tree m a -> Tree n a -> Tree (m,n) a).

instance Applicative (Tree ()) where
  pure a                          = Leaf a
  Leaf fa <*> Leaf a              = Leaf (fa a)

instance (Applicative (Tree m), Applicative (Tree n))
         => Applicative (Tree (m,n)) where
  pure a                          = Branch (pure a) (pure a)
  (Branch fs ft) <*> (Branch s t) = Branch (fs <*> s) (ft <*> t)

Arbitrary binary associative operators

Phew, that’s it. We now have an implementation for dot that will work on an arbitrary data structure as long as one can define Functor, Foldable and Applicative instances. We have also learned that it is a good idea to encode the data structure’s shape in its type so that Applicative instances can be defined. (This will be important later on when we want to take the transpose of generic matrices, but I’m getting ahead of myself.)

But what if you want to use binary associative operators other than addition and multiplication for the dot product? This is easy using Haskell’s Monoid type class, and it plays nicely with the Foldable type class. In fact, it allows us to omit any mention of identity elements using the method fold :: (Foldable t, Monoid m) => t m -> m. We define an even more generic dot product as follows:

dotGen :: (Foldable f, Applicative f, Monoid p, Monoid s)
       => (a -> p, p -> a) -> (a -> s, s -> a) -> f a -> f a -> a
dotGen (pinject, pproject) (sinject, sproject) x y =
   sproject . fold . fmap (sinject . pproject) $ liftA2 mappend px py
  where
    px = fmap pinject x
    py = fmap pinject y

This function takes two pairs of functions for injecting into and projecting from monoids. We can then define our original dot function using the existing Sum and Product wrapper types.

dot :: (Num a, Foldable f, Applicative f) => f a -> f a -> a
dot = dotGen (Product, getProduct) (Sum, getSum)
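As a sanity check, here is one possible answer to the earlier exercise for vectors, together with the dot product of the running example (a sketch; with older versions of GHC you may need to import foldr from Data.Foldable):

instance Functor (Vec n) where
  fmap _ Nil         = Nil
  fmap f (Cons x xs) = f x `Cons` fmap f xs

instance Foldable (Vec n) where
  foldr _ z Nil         = z
  foldr f z (Cons x xs) = f x (foldr f z xs)

v1, v2 :: Vec (S (S (S Z))) Integer
v1 = 1 `Cons` 2 `Cons` 3 `Cons` Nil
v2 = 4 `Cons` 5 `Cons` 6 `Cons` Nil

-- dot v1 v2 == 32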

In the next episode…

In my next post we will consider generic matrix multiplication. This operation is defined over arbitrary collections of collections of numbers and, naturally, makes use of our generic dot product. Until then, adios.

Slides

On 17 Nov 2011 I gave a talk at fp-syd about this work. You can find the slides here.

Tagged as: Haskell, type classes, dot product, matrix multiplication, program derivation.

22 Nov 2011

Haskell GADTs in Scala

This is an updated version of an earlier post. Owing to a comment by Jed Wesley-Smith I restructured this post somewhat to introduce two techniques for programming with GADTs in Scala. Thanks also go to Tony Morris.

First we’ll start with a fairly canonical example of why GADTs are useful in Haskell.

{-# LANGUAGE GADTs #-}
module Exp where

data Exp a where
  LitInt  :: Int                        -> Exp Int
  LitBool :: Bool                       -> Exp Bool
  Add     :: Exp Int -> Exp Int         -> Exp Int
  Mul     :: Exp Int -> Exp Int         -> Exp Int
  Cond    :: Exp Bool -> Exp a -> Exp a -> Exp a
  EqE     :: Eq a => Exp a -> Exp a     -> Exp Bool

eval :: Exp a -> a
eval e = case e of
  LitInt i       -> i
  LitBool b      -> b
  Add e e'       -> eval e + eval e'
  Mul e e'       -> eval e * eval e'
  Cond b thn els -> if eval b then eval thn else eval els
  EqE e e'       -> eval e == eval e'

Here we have defined a data structure that represents the abstract syntax tree (AST) of a very simple arithmetic language. Notice that it ensures terms are well-typed. For instance something like the following just doesn’t type check.

LitInt 1 `Add` LitBool True -- this expression does not type check

I have also provided a function eval that evaluates terms in this language.
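For example:

eval (Cond (EqE (LitInt 1) (LitInt 2)) (LitInt 10) (Add (LitInt 1) (LitInt 2)))
-- == 3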

In Scala it is quite possible to define data structures which have the same properties as a GADT declaration in Haskell. You can do this with case classes as follows.

abstract class Exp[A] 

case class LitInt(i: Int)                                     extends Exp[Int]
case class LitBool(b: Boolean)                                extends Exp[Boolean]
case class Add(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int]
case class Mul(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int]
case class Cond[A](b: Exp[Boolean], thn: Exp[A], els: Exp[A]) extends Exp[A]
case class Eq[A](e1: Exp[A], e2: Exp[A])                      extends Exp[Boolean]

But how do we implement eval? You might think that the following code would work. I mean, it looks like the Haskell version, right?

abstract class Exp[A] {
  def eval = this match {
    case LitInt(i)       => i
    case LitBool(b)      => b
    case Add(e1, e2)     => e1.eval + e2.eval
    case Mul(e1, e2)     => e1.eval * e2.eval
    case Cond(b,thn,els) => if ( b.eval ) { thn.eval } else { els.eval }
    case Eq(e1,e2)       => e1.eval == e2.eval
  }

}

case class LitInt(i: Int)                                     extends Exp[Int]
case class LitBool(b: Boolean)                                extends Exp[Boolean]
case class Add(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int]
case class Mul(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int]
case class Cond[A](b: Exp[Boolean], thn: Exp[A], els: Exp[A]) extends Exp[A]
case class Eq[A](e1: Exp[A], e2: Exp[A])                      extends Exp[Boolean]

Unfortunately for us, this doesn’t work. The Scala compiler is unable to instantiate the type Exp[A] to a more specific one (such as Exp[Int], which LitInt extends):

3: constructor cannot be instantiated to expected type;
  found   : FailedExp.LitInt
  required: FailedExp.Exp[A]
    case LitInt(i)       => i
        ^

There are two solutions to this problem.

Solution 1: The object-oriented way

You must write eval the object-oriented way. The definition of eval gets spread over each of the sub-classes of Exp[A].

abstract class Exp[A] {
  def eval: A
}

case class LitInt(i: Int)                                     extends Exp[Int] {
  def eval = i
}

case class LitBool(b: Boolean)                                extends Exp[Boolean] {
  def eval = b
}

case class Add(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int] {
  def eval = e1.eval + e2.eval
}
case class Mul(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int] {
  def eval = e1.eval * e2.eval
}
case class Cond[A](b: Exp[Boolean], thn: Exp[A], els: Exp[A]) extends Exp[A] {
  def eval = if ( b.eval ) { thn.eval } else { els.eval }
}
case class Eq[A](e1: Exp[A], e2: Exp[A])                      extends Exp[Boolean] {
  def eval = e1.eval == e2.eval
}

Solution 2: The functional Haskell-like way

Personally I don’t like the OO style as much as the Haskell-like style. However, it turns out that you can program in that style by using a companion object.

object Exp {
  def evalAny[A](e: Exp[A]): A = e match {
    case LitInt(i)         => i
    case LitBool(b)        => b
    case Add(e1, e2)       => e1.eval + e2.eval
    case Mul(e1, e2)       => e1.eval * e2.eval
    case Cond(b, thn, els) => if (b.eval) { thn.eval } else { els.eval }
    case Eq(e1, e2)        => e1.eval == e2.eval
  }
}

abstract class Exp[A] {
  def eval: A = Exp.evalAny(this)
}

case class LitInt(i: Int)                                     extends Exp[Int]
case class LitBool(b: Boolean)                                extends Exp[Boolean]
case class Add(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int]
case class Mul(e1: Exp[Int], e2: Exp[Int])                    extends Exp[Int]
case class Cond[A](b: Exp[Boolean], thn: Exp[A], els: Exp[A]) extends Exp[A]
case class Eq[A](e1: Exp[A], e2: Exp[A])                      extends Exp[Boolean]

Ah, much better. But why does this work when the previous style doesn’t? The problem with the first attempt is that the constructors are not polymorphic. In Haskell-speak the type is:

LitInt :: Int -> Exp Int

not

LitInt :: Int -> Exp a

The second solution is subtly different. Method evalAny is polymorphic, but its type is instantiated at the type of the value it is applied to. For instance, when evalAny is applied to LitInt(42) it equates type variable A with Int. It can then correctly deduce that it does indeed take a value of Exp[Int] and produce a value of Int.
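A quick smoke test, assuming the definitions from Solution 2 are in scope:

object Main extends App {
  val e: Exp[Int] =
    Cond(Eq(LitInt(1), LitInt(1)), Add(LitInt(2), LitInt(3)), LitInt(0))
  println(e.eval)   // prints 5
}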

Tagged as: Haskell, Scala, GADTs.

More posts…