Stop. Put away your email client that is half-way through writing me about how "Google is filled with the world's best engineers," and that "anything they build is, by definition, not built by amateurs." I don't want to hear it.

Let's just get this out of the way. Full disclosure: I used to work at Google. It was the first (but unfortunately, not the last) place I ever used protobuffers. All of the problems I want to talk about today exist inside of Google's codebase; it's not just a matter of "using protobuffers wrong" or some such nonsense like that.

By far, the biggest problem with protobuffers is their terrible type-system. Fans of Java should feel right at home with protobuffers, but unfortunately, literally nobody considers Java to have a well-designed type-system. The dynamic typing guys complain about it being too stifling, while the static typing guys like me complain about it being too stifling without giving you any of the things you actually want in a type-system. Lose lose.

The ad-hoc-ness and the built-by-amateurs-itude go hand-in-hand. So much of the protobuffer spec feels bolted on as an afterthought that it clearly *was* bolted on as an afterthought. Many of its restrictions will make you stop, scratch your head and ask "wat?" But these are just symptoms of the deeper answer, which is this:

Protobuffers were obviously built by amateurs because they offer *bad solutions to widely-known and already-solved problems.*

Protobuffers offer several "features", but none of them seem to work with one another. For example, look at the list of orthogonal-yet-constrained typing features that I found by skimming the documentation.

- `oneof` fields can't be `repeated`.
- `map<k,v>` fields have dedicated syntax for their keys and values, but this isn't used for any other types.
- Despite `map` fields being able to be parameterized, no user-defined types can be. This means you'll be stuck hand-rolling your own specializations of common data structures.
- `map` fields cannot be `repeated`.
- `map` keys *can* be `string`s, but *can not* be `bytes`. They also can't be `enum`s, even though `enum`s are considered to be equivalent to integers everywhere else in the protobuffer spec.
- `map` values cannot be other `map`s.

This insane list of restrictions is the result of unprincipled design choices and bolting on features after the fact. For example, `oneof` fields can't be `repeated` because rather than resulting in a coproduct type, the code generator will give you a product of mutually-exclusive optional fields. Such a transformation is only valid for a singular field (and, as we'll see later, not even then).
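In Haskell terms (my sketch, with field names invented for illustration), the difference between the two encodings looks like this:

```haskell
-- A true coproduct: a value is exactly one of the cases, by construction.
data Auth = ByToken String | ByPassword String

-- What the code generator emits instead: a product of mutually-exclusive
-- optional fields, whose exclusivity is enforced only by setter convention.
data AuthMessage = AuthMessage
  { byToken    :: Maybe String
  , byPassword :: Maybe String
  }

-- Nothing in the types stops an "impossible" value from existing:
bogus :: AuthMessage
bogus = AuthMessage (Just "tok") (Just "hunter2")
```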

The restriction behind `map` fields being unable to be `repeated` is related, but shows off a different limitation of the type-system. Behind the scenes, a `map<k,v>` is desugared into something spiritually similar to `repeated Pair<k,v>`. And because `repeated` is a magical language keyword rather than a type in its own right, it doesn't compose with itself.
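To see why this matters, here's a sketch in Haskell, where the ordinary list type plays the role of `repeated`. Because it's a first-class type constructor rather than a keyword, it composes freely:

```haskell
-- `repeated` as a first-class type constructor
type Repeated a = [a]

-- map<k,v> desugared, per the description above
type Map k v = Repeated (k, v)

-- A keyword can't be applied to itself, but a type constructor can:
type Nested a = Repeated (Repeated a)

example :: Nested Int
example = [[1, 2], [3]]
```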

Your guess is as good as mine for why an `enum` can't be used as a `map` key.

What's so frustrating about all of this is that a little understanding of how modern type-systems work would be enough to *drastically simplify* the protobuffer spec and simultaneously *remove all of the arbitrary restrictions.*

The solution is as follows:

- Make all fields in a message `required`. This makes messages *product types*.
- Promote `oneof` fields to instead be standalone data types. These are *coproduct types*.
- Give the ability to parameterize product and coproduct types by other types.

That's it! These three features are all you need in order to define any possible piece of data. With these simpler pieces, we can re-implement the rest of the protobuffer spec in terms of them.

For example, we can rebuild `optional` fields:

```
product Unit {
  // no fields
}

coproduct Optional<t> {
  t    value = 0;
  Unit unset = 1;
}
```

Building `repeated` fields is simple too:

```
coproduct List<t> {
  Unit empty = 0;
  Pair<t, List<t>> cons = 1;
}
```

Of course, the actual serialization logic is allowed to do something smarter than pushing linked-lists across the network---after all, implementations and semantics don't need to align one-to-one.
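For instance (a sketch), the `List<t>` coproduct above is isomorphic to an ordinary list, so a serializer is free to transmit it as a flat, length-prefixed array while the schema keeps the inductive definition:

```haskell
-- The coproduct from above, transcribed into Haskell
data List t = Empty | Cons t (List t)

-- The isomorphism a serializer can exploit:
toFlat :: List t -> [t]
toFlat Empty      = []
toFlat (Cons x r) = x : toFlat r

fromFlat :: [t] -> List t
fromFlat = foldr Cons Empty
```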

In the vein of Java, protobuffers make the distinction between *scalar* types and *message* types. Scalars correspond more-or-less to machine primitives---things like `int32`, `bool` and `string`. Messages, on the other hand, are everything else. All library- and user-defined types are messages.

The two varieties of types have completely different semantics, of course.

Fields with scalar types are always present. Even if you don't set them. Did I mention that (at least in proto3^{1}) all protobuffers can be zero-initialized with absolutely no data in them? Scalar fields get false-y values---`uint32` is initialized to `0` for example, and `string` is initialized as `""`.

It's impossible to differentiate a field that was missing in a protobuffer from one that was assigned to the default value. Presumably this decision is in place in order to allow for an optimization of not needing to send default scalar values over the wire. Presumably---though the encoding guide makes no mention of this optimization being performed, so your guess is as good as mine.

As we'll see when we discuss protobuffers' claim to being god's gift to backwards- and forwards-compatible APIs, this inability to distinguish between unset and default values is a nightmare. Especially if indeed it's a design decision made in order to save one bit (set or not) per field.
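For contrast, a tiny Haskell sketch of what a type-system that *does* track presence gives you: with `Maybe`, "never set" and "deliberately set to the default" are distinct, inspectable values:

```haskell
-- "unset" and "explicitly zero" are different values, so the receiver
-- can tell them apart:
timeout :: Maybe Int
timeout = Nothing   -- the sender never set a timeout

timeout' :: Maybe Int
timeout' = Just 0   -- the sender explicitly asked for a zero timeout
```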

Contrast this behavior against message types. While scalar fields are dumb, the behavior for message fields is outright *insane.* Internally, message fields are either there or they're not---but their behavior is crazy. Some pseudocode for their accessor is worth a thousand words. Pretend this is Java or something similar:

```
private Foo m_foo;

public Foo foo {
  // only if `foo` is used as an expression
  get {
    if (m_foo != null)
      return m_foo;
    else
      return new Foo();
  }

  // instead if `foo` is used as an lvalue
  mutable get {
    if (m_foo == null)
      m_foo = new Foo();
    return m_foo;
  }
}
```

The idea is that if the `foo` field is unset, you'll see a default-initialized copy whenever you ask for it, but won't actually modify its container. But if you modify `foo`, it will modify its parent as well! All of this just to avoid using a `Maybe Foo` type and the associated "headaches" of the nuance behind needing to figure out what an unset value should mean.

This behavior is especially egregious, because it breaks a law! We'd expect the assignment `msg.foo = msg.foo;` to be a no-op. Instead the implementation will actually silently change `msg` to have a zero-initialized copy of `foo` if it previously didn't have one.

Unlike scalar fields, at least it's possible to detect if a message field is unset. Language bindings for protobuffers offer something along the lines of a generated `bool has_foo()` method. In the frequent case of copying a message field from one proto to another, iff it was present, you'll need to write the following code:

```
if (src.has_foo()) {
  dst.set_foo(src.foo());
}
```

Notice that, at least in statically-typed languages, this pattern *cannot be abstracted* due to the nominal relationship between the methods `foo()`, `set_foo()` and `has_foo()`. Because all of these functions are their own *identifiers*, we have no means of programmatically generating them, save for a preprocessor macro:

```
#define COPY_IFF_SET(src, dst, field) \
  if (src.has_##field()) { \
    dst.set_##field(src.field()); \
  }
```

(but preprocessor macros are verboten by the Google style guide.)

If instead all optional fields were implemented as `Maybe`s, you'd get abstract-able, referentially transparent call-sites for free.
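As a sketch of what that buys you: with `Maybe` fields, copy-if-set is a single ordinary function rather than a macro, reusable across every field of every message:

```haskell
import Control.Applicative ((<|>))

-- Copy the source field if present, otherwise keep the destination's value.
-- No code generation, no per-field has_/set_ pairs.
copyIfSet :: Maybe a -> Maybe a -> Maybe a
copyIfSet src dst = src <|> dst
```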

To change tack, let's talk about another questionable decision. While you can define `oneof` fields in protobuffers, their semantics are *not* of coproduct types! Rookie mistake my dudes! What you get instead is an optional field for each case of the `oneof`, and magic code in the setters that will just unset any other case if this one is set.

At first glance, this seems like it should be semantically equivalent to having a proper union type. But instead it is an accursed, unutterable source of bugs! When this behavior teams up with the law-breaking implementation of `msg.foo = msg.foo;`, it allows this benign-looking assignment to silently delete arbitrary amounts of data!

What this means at the end of the day is that `oneof` fields do not form law-abiding `Prism`s, nor do messages form law-abiding `Lens`es. Which is to say good luck trying to write bug-free, non-trivial manipulations of protobuffers. It is *literally impossible to write generic, bug-free, polymorphic code over protobuffers.*

That's not the sort of thing anybody likes to hear, let alone those of us who have grown to love parametric polymorphism---which gives us the *exact opposite promise.*

One of the frequently cited killer features of protobuffers is their "hassle-free ability to write backwards- and forwards-compatible APIs." This is the claim that has been pulled over your eyes to blind you from the truth.

What protobuffers are is *permissive.* They manage to not shit the bed when receiving messages from the past or from the future because they make absolutely no promises about what your data will look like. Everything is optional! But if you need it anyway, protobuffers will happily cook up and serve you something that typechecks, regardless of whether or not it's meaningful.

This means that protobuffers achieve their promised time-traveling compatibility guarantees by *silently doing the wrong thing by default.* Of course, the cautious programmer can (and should) write code that performs sanity checks on received protobuffers. But if at every use-site you need to write defensive checks ensuring your data is sane, maybe that just means your deserialization step was too permissive. All you've managed to do is decentralize sanity-checking logic from a well-defined boundary and push the responsibility of doing it throughout your entire codebase.
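A sketch of the alternative (all names hypothetical): validate once at the deserialization boundary, and hand the rest of the program a type that simply cannot represent the bad states:

```haskell
-- Wire format: everything defaultable, nothing trusted.
data RawUser = RawUser { rawName :: String, rawAge :: Int }

-- Application format: invariants hold by construction.
data User = User { userName :: String, userAge :: Int }
  deriving (Eq, Show)

-- One sanity check at the boundary, instead of one at every use-site.
parseUser :: RawUser -> Maybe User
parseUser (RawUser n a)
  | null n || a < 0 = Nothing
  | otherwise       = Just (User n a)
```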

One possible argument here is that protobuffers will hold onto any information present in a message that they don't understand. In principle this means that it's nondestructive to route a message through an intermediary that doesn't understand this version of its schema. Surely that's a win, isn't it?

Granted, on paper it's a cool feature. But I've never once seen an application that will actually preserve that property. With the one exception of routing software, nothing wants to inspect only some bits of a message and then forward it on unchanged. The vast majority of programs that operate on protobuffers will decode one, transform it into another, and send it somewhere else. Alas, these transformations are bespoke and coded by hand. And hand-coded transformations from one protobuffer to another don't preserve unknown fields between the two, because it's literally meaningless.

This pervasive attitude towards protobuffers always being compatible rears its head in other ugly ways. Style guides for protobuffers actively advocate against DRY and suggest inlining definitions whenever possible. The reasoning behind this is that it allows you to evolve messages separately if these definitions diverge in the future. To emphasize that point, the suggestion is to fly in the face of 60 years' worth of good programming practice just in case *maybe* one day in the future you need to change something.

At the root of the problem is that Google conflates the meaning of data with its physical representation. When you're at Google scale, this sort of thing probably makes sense. After all, they have an internal tool that allows you to compare the finances behind programmer hours vs network utilization vs the cost to store *x* bytes vs all sorts of other things. Unlike most companies in the tech space, paying engineers is one of Google's smallest expenses. Financially it makes sense for them to waste programmers' time in order to shave off a few bytes.

Outside of the top five tech companies, none of us is within five orders of magnitude of being Google scale. Your startup *cannot afford* to waste engineer hours on shaving off bytes. But shaving off bytes and wasting programmers' time in the process is exactly what protobuffers are optimized for.

Let's face it. You are not Google scale and you never will be. Stop cargo-culting technology just because "Google uses it" and therefore "it's an industry best-practice."

If it were possible to restrict protobuffer usage to network-boundaries I wouldn't be nearly as hard on it as a technology. Unfortunately, while there are a few solutions in principle, none of them is good enough to actually be used in real software.

Protobuffers correspond to the data you want to send over the wire, which is often *related* but not *identical* to the actual data the application would like to work with. This puts us in the uncomfortable position of needing to choose between one of three bad alternatives:

- Maintain a separate type that describes the data you actually want, and ensure that the two evolve simultaneously.
- Pack rich data into the wire format for application use.
- Derive rich information every time you need it from a terse wire format.

Option 1 is clearly the "right" solution, but it's untenable with protobuffers. The language isn't powerful enough to encode types that can perform double-duty as both wire and application formats. Which means you'd need to write a completely separate datatype, evolve it synchronously with the protobuffer, and *explicitly write serialization code between the two.* Seeing as most people seem to use protobuffers in order to not write serialization code, this is obviously never going to happen.

Instead, code that uses protobuffers allows them to proliferate throughout the codebase. True story, my main project at Google was a compiler that took "programs" written in one variety of protobuffer, and spit out an equivalent "program" in another. Both the input and output formats were expressive enough that maintaining proper parallel C++ versions of them could never possibly work. As a result, my code was unable to take advantage of any of the rich techniques we've discovered for writing compilers, because protobuffer data (and resulting code-gen) is simply too rigid to do anything interesting.

The result is that a thing that could have been 50 lines of recursion schemes was instead 10,000 lines of ad-hoc buffer-shuffling. The code I wanted to write was literally impossible when constrained by having protobuffers in the mix.

While this is an anecdote, it's not in isolation. By virtue of their rigid code-generation, manifestations of protobuffers in languages are never idiomatic, nor can they be made to be---short of rewriting the code-generator.

But even then, you still have the problem of needing to embed a shitty type-system into the targeted language. Because most of protobuffers' features are ill-conceived, these unsavory properties leak into our codebases. It means we're forced to not only implement, but also use these bad ideas in any project which hopes to interface with protobuffers.

While it's easy to implement inane things out of a solid foundation, going the other direction is challenging at best and the dark path of Eldritch madness at worst.

In short, abandon all hope ye who introduce protobuffers into your projects.

To this day, there's a raging debate inside Google itself about proto2 and whether fields should ever be marked as `required`. Manifestos exist with both the titles "`optional` considered harmful" *and* "`required` considered harmful." Good luck sorting that out.↩

You can find a very early pre-release of it here.

A common misperception of free monads is that they allow for analysis of a program expressed with them. This is not true, and in fact, monads are too liberal of an abstraction to allow for inspection in general.

In order to see why, consider the following monadic expression:

```
getLine
  >>= \str -> if str == "backdoor"
                then launchNukes
                else pure ()
```

The problem here is that bind is expressed via a continuation, and we're unable to look inside that continuation without calling the function. So we're stuck. We can't determine if the above program will ever call `launchNukes` unless we just happen to call the lambda with the exact string `"backdoor"`.

So, in general, we're unable to statically inspect monads. We can *run* them (not necessarily in the `IO` monad) and see what happens, but getting a free monad to help with this is equivalent to mocking the exact problem domain. But, even though we can't do it in general, it seems like we should be able to do it in certain cases. Consider the following monadic expression:

```
getLine
  >>= \_ -> launchNukes
```

In this case, the computation doesn't actually care about the result of `getLine`, so in theory we can just call the continuation with `undefined` and find that yes indeed this expression will call `launchNukes`.

Notice that we *can't* use this strategy in the first expression we looked at, because that one scrutinized the result of `getLine`, and branched depending on it. If we tried passing `undefined` to it, it would crash with an error when we tried to force the final value of the monad (although this might be preferable to actually launching nukes.)

This example of `launchNukes` is admittedly rather silly. My original motivation for investigating this is in the context of ecstasy, in which users can query and manipulate disparate pieces of data. For example, if we wanted to write a physics simulator where each object may or may not have any of a `position :: V2 Double`, a `velocity :: V2 Double` and a `hasPhysics :: Bool`, we could write the following piece of code to update the positions of any entities that have a velocity and are, in fact, affected by physics:

```
emap $ do
  p <- query position
  v <- query velocity
  h <- query hasPhysics
  guard h
  pure unchanged
    { position = Set $ p + v ^* timeDelta
    }
```

Because objects are not required to have all of the possible data, mapping this function will intentionally fail for any of the following reasons:

- the object did not have a `position` field
- the object did not have a `velocity` field
- the object did not have a `hasPhysics` field
- the object had a `hasPhysics` field whose value was `False`

Without being able to statically analyze this monadic code, our only recourse is to attempt to run it over every object in the universe, and be happy when we succeed. While such an approach works, it's terribly inefficient if the universe is large but any of the `position`, `velocity` or `hasPhysics` fields is sparse.

What would be significantly more efficient for large worlds with sparse data would be to compute the intersection of objects who have all three of `position`, `velocity` and `hasPhysics`, and then run the computation only over those objects. Free applicatives (which *are* amenable to static analysis) are no good here, since our `guard h` line really-and-truly is necessarily monadic.

Any such static analysis of this monad would be purely an optimization, which suggests we don't need it to be *perfect;* all that we are asking for is for it to be better than nothing. A best-effort approach in the spirit of our earlier "just pass `undefined` around and hope it doesn't crash" would be sufficient. If we can be convinced it won't actually crash.

What we'd *really* like to be able to do is count every occurrence of `query` in this monad before it branches based on the result of an earlier `query`. Which is to say we'd like to pass `undefined` around, do as much static analysis as we can, and then somehow `fail` our analysis *just before* Haskell would crash due to evaluating an `undefined`.

I've been playing around with this conceptual approach for some time, but could never seem to get it to work. Laziness can sure be one hell of a bastard when you're trying to pervert Haskell's execution order.

However, last week Foner et al. dropped a bomb of a paper, Keep Your Laziness in Check, which describes a novel approach for observing evaluations of thunks in Haskell. The gist of the technique is to use `unsafePerformIO` to build an `IORef`, and then set its value at the same time you force the thunk. If you (unsafely) read from the `IORef` and see that it hasn't been set, then nobody has forced your value yet.

We can use a similar approach to accomplish our optimization goals. For the crux of the approach, consider the following `verify` function that will evaluate a pure thunk and return `empty` if it instead found a bottom:

```
verify :: Alternative f => a -> f a
verify a = unsafePerformIO $
  catch
    (let !_ = a
      in pure $ pure a)
    (\(_ :: SomeException) -> pure empty)
```

The bang pattern `!_ = a` tells Haskell to `seq` `a`, which reduces it to WHNF; if that WHNF is bottom, the resulting exception will be caught inside of the `catch`. `unsafePerformIO` is necessary here, because exceptions can only be caught in `IO`.

Using this approach, if we're very careful, we can tear down a free monad by following its continuations using bottom, and doing the `verify` trick to stop exactly when we need to.

I call such a thing `prospect`, and you can find it on github. The name comes from the fact that this can lead to gold, but carries with it the intrinsic dangers of plumbing into the depths, such as cave-ins, blackened lungs, or the worse things that dwell in the darkness.

The primary export of `prospect` is the titular function `prospect :: Free f a -> (Maybe a, [f ()])`, which tears down a free monad, tells you whether or not it has a pure return value, and spits out as many `f` constructors as it could before the computation branched. If you got a `Just` back, it means it found every constructor, but there are no other guarantees.

Huge shoutouts to Vikrem who was a very patient sounding-board for all of my crazy ideas, and to kcsongor who suggested that I pay a lot more attention to where I'm being strict.

You've heard of the type system, which makes sure your terms are sane. Maybe you're also aware of the kind system, whose job it is to make sure your types are reasonable! But did you know Haskell has an even more obscure system than these? It's called the role system, and its purpose in life is to prevent you from shooting yourself in the foot when dealing with *coercions.*

Coercions and roles have been around since 2014, but there's been surprisingly little discussion about them in the blogosphere. In short, if two types have the same representation at runtime, then it should be safe to coerce a value of one into a value of the other. The role system is used to describe under what circumstances such a coercion is legal.

To illustrate the point, let's talk about newtypes. Consider the following:

`newtype AnInt = AnInt Int`

The promise of a newtype in Haskell is that it is zero-overhead; at runtime, `AnInt` is exactly identical to `Int`. Newtypes are often used for adding type-safety; it's nice if you have a `newtype Temperature = Temperature Int` and a `newtype Money = Money Int` because the extra type wrappers ensure you can't accidentally add the weather to your bank account, even if at the end of the day they *are* both just integers.

`AnInt` and `Int` are not *literally* the same type, but they don't actually differ at runtime. This property is known as being *representationally equal.* If two types are representationally equal, we should be able to do the equivalent of C++'s `reinterpret_cast` and just pretend like a value of one is in fact a value of the other. Since these types correspond exactly at runtime, this is usually a safe thing to do.

If `AnInt` and `Int` are the same type at runtime, it means we should be able to `coerce :: AnInt -> Int` (and backwards) freely between the two types without any problems. Morally, this `coerce` function is just `id`, because we're not actually doing any work to the value.

Consider now the slightly more interesting type:

`newtype Identity a = Identity a`

Again, because `Identity` is a newtype, we should expect `Identity a` to be *representationally equal* to `a`. Since this is true, we expect that `Identity AnInt` also be representationally equal to `Identity Int`, via `Identity AnInt --> AnInt --> Int --> Identity Int`. And thus, we should be able to `coerce :: Identity AnInt -> Identity Int`. We can see that `Identity a` preserves the coercion relationship of its type parameter `a`, and this property is known as `a` having role `representational`.

More generally, if the type parameter `a` in `F a` has role `representational`, then `F X` is representationally equal to `F Y` whenever `X` is representationally equal to `Y`. This works whether `F` be a `data` or a `newtype`.

However, not all type parameters have role `representational`! Consider `Data.Map.Map k v`, which has keys `k` and values `v`. Because `Map` is implemented as a balanced tree, it uses the `Ord k` instance to figure out where to store a kv-pair.

One of the reasons we write newtypes is to give a different typeclass instance than the underlying type has. For example, `newtype ZipList a = ZipList [a]` has a different `Applicative` instance than `[]` does. In general, we have no reason to expect that a newtype and its underlying type have instances that agree with one another.

Which leads us to a problem. Because a value of `Map k v` is a balanced tree which depends on the `Ord k` instance, we can't simply swap in `SomeOtherK` and expect everything to work hunky-dory. They have different `Ord` instances, and things would go screwy at runtime. All of this is to say that we **do not** want to be able to `coerce :: Map AnInt v -> Map Int v` because it's likely to crash at runtime.

However, it is still fine to `coerce :: Map k AnInt -> Map k Int`, because the values don't have this implicit dependency on any typeclass instances. There are no invariants to maintain on the `v` parameter, and so we are free to `coerce` to our hearts' content.

The role system is what describes this difference between the `k` and `v` type parameters of `Data.Map.Map`. While `v` is still role `representational`, `k` has role `nominal`.

`nominal` coercions of the form `coerce :: a -> b` are allowed iff you already have a proof that `a ~ b`, which is to say that `a` and `b` are literally the same type.

There's also a third role, `phantom`, which, you guessed it, is given to phantom type parameters (eg. the `a` in `data Const x a = Const x`.) Because phantom types are by-definition not referenced in the data definition of a type, we are always free to coerce a `phantom` type to any other type.

All of this cashes out in the form of `Data.Coerce`'s `coerce :: Coercible a b => a -> b`. GHC will automatically provide instances of `Coercible a b` whenever `a` and `b` are representationally coercible. That means you get these instances (and all of their symmetries):

- Given `a ~ b`, `Coercible a b`
- If `NT` is a newtype over `T`, `Coercible NT T`
- If `p` in `F p` has role `phantom`, `Coercible (F a) (F b)`
- If `r` in `F r` has role `representational`, `Coercible a b => Coercible (F a) (F b)`
- If `n` in `F n` has role `nominal`, `(a ~ b) => Coercible (F a) (F b)`

GHC is pretty clever, and has a role-inference mechanism. It works by knowing that `(->)` has two `representational` roles, that `(~)` has two `nominal` roles, and propagates from there. Every type parameter is assumed to have role `phantom` until it is used, whence it gets upgraded to the more restrictive role corresponding to the position it was used in. For example, if `a` is used in `a ~ Bool`, `a` gets role `nominal` since both of `(~)`'s parameters have `nominal` roles.

GADTs are syntactic sugar on top of `(~)`, so expect GADTs to have `nominal` role type parameters. Furthermore, any parameters of a `type family` that are scrutinized will also have role `nominal` (the motivated reader will be able to find an interesting implementation of `unsafeCoerce :: forall a b. a -> b` if this were not the case.)

This inference mechanism will give you the most permissive roles that don't *obviously* destroy the type system, but sometimes it's necessary to explicitly give a role annotation, like in the `Data.Map.Map` example. Role annotations can be given by eg. `type role Map nominal representational` after turning on `{-# LANGUAGE RoleAnnotations #-}`. It's worth pointing out that you can only give *less* permissive roles than GHC has inferred; there's no fighting with it on this one.

At the end of the day, why is any of this stuff useful? Besides being nice to know, custom role annotations can provide type-safety ala `Data.Map.Map`. But we can also get asymptotic performance gains out of `coerce`:

If `f :: Coercible a b => a -> b` (common if `f` is a newtype un/wrapper, or a composition of such), then the *O*(*n*) `fmap f` is equivalent to the *O*(1) `coerce` in most cases^{1}. In fact, mpickering has written a GHC source plugin that will tell you if you can apply this transformation. Cool!

Assuming `f`'s type parameter `a` does not have role `nominal`.↩

Before diving into what I've been changing recently, it's probably a good idea to quickly talk inside baseball about how ecstasy works. The basic idea is this: you define a "world" as higher-kinded data (HKD), corresponding to the components you care about. The library instantiates your HKD world in different ways to form a *structure-of-arrays* corresponding to the high-efficiency storage of the ECS, and to form *just a structure* corresponding to an actual entity.

This machinery is built via the `Component`

type family:

```
type family Component (s :: StorageType)
                      (c :: ComponentType)
                      (a :: *) :: *
```

Using `DataKinds`, `Component` is parameterized by three types. `s :: StorageType` describes how the library wants to use this component---possibly in the "structure-of-arrays" format consumed by the library, or as an entity structure, to be used by the application programmer. `s` is left polymorphic when defining the HKD.

The `c :: ComponentType` parameter is used to indicate the *semantics* of the field; some options include "each entity may or may not have this field" or "at most one entity may have this field." The former might be used for something like `position`, while the latter could be `focusedOnByTheCamera`.
.

Finally, `a :: *` is the actual type you want the field to have.

Having data is a great first step, but it's currently just an opaque blob to the library. This is where GHC.Generics comes in---given an (auto-derivable) `Generic` instance for our world, we can use `GHC.Generics` to automatically derive more specialized machinery for ourselves.
to automatically further derive more specialized machinery for ourselves.

As an example, assume our world looked like this (absent the `Component` trickery):

```
data World f = World
{ position :: f (V2 Double)
, graphics :: f Graphics
}
```

we can use `GHC.Generics` to automatically generate the equivalent of a function:

```
getEntity :: Int -> World Data.IntMap.IntMap -> World Maybe
getEntity ent storage =
  World (Data.IntMap.lookup ent $ position storage)
        (Data.IntMap.lookup ent $ graphics storage)
```

which converts from a structure-of-arrays representation to a structure-of-maybes. The actual technique behind implementing these generic functions is out of scope for today's topic, but I've written on it previously.

For its part, `ecstasy` exposes the `SystemT` monad, which at its heart is just a glorified `Control.Monad.Trans.State.StateT (Int, World 'Storage)`. The `Int` keeps track of the next ID to give out for a newly created entity.
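If the monad really is just state over `(Int, world)`, entity creation is nothing fancy. A hypothetical sketch (`nextEntity` and this stub `World` are my names, not the library's API):

```haskell
import Control.Monad.Trans.State.Strict (State, evalState, get, put)

-- a stand-in for ecstasy's storage world
data World = World

-- hand out the next entity ID and bump the counter
nextEntity :: State (Int, World) Int
nextEntity = do
  (next, w) <- get
  put (next + 1, w)
  pure next
```

Running `evalState (sequence [nextEntity, nextEntity, nextEntity]) (0, World)` hands out the IDs `[0, 1, 2]` in order.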

To a rough approximation, this is all of the interesting stuff inside of `ecstasy`. So armed with this knowledge, we're ready to tackle some of the problems that have been unearthed recently.

My original test for `ecstasy` was a small platformer -- a genre not known for the sheer number of entities all interacting at once. As a result, `ecstasy` performed terribly, but I didn't notice because I hadn't benchmarked or stress-tested it whatsoever. But that's OK; I wrote it to scratch an itch while hanging out in a Thai airport, and I've never claimed to write titanium-grade software :)

But in my RTS, the library was obviously struggling after allocating only 100 dudes. The thing was leaking memory like crazy, which turned out to be because I was using lazy state and containers. Oopsie daisies! Replacing `Control.Monad.Trans.State` and `Data.IntMap` with their strict versions cleared it up.

Honestly I'm not sure why the lazy versions are the default, but I guess that's the world we live in. **SANDY'S HOT PRO TIPS**: don't use lazy maps or state unless you've really thought about it.
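A quick illustration of the failure mode, under the assumption that a map is updated many times between reads: with `Data.IntMap.Lazy`, each `adjust` stores an unevaluated thunk rather than a result, so the heap fills with chains of pending `(+1)`s until something finally forces the value; the strict API evaluates as it goes.

```haskell
import           Data.List          (foldl')
import qualified Data.IntMap.Lazy   as Lazy
import qualified Data.IntMap.Strict as Strict

-- Both compute the same number, but the lazy version builds a
-- 100000-deep chain of (+1) thunks at key 0 before the final
-- lookup forces it; the strict version keeps the value evaluated
-- the whole way through.
lazyCounter, strictCounter :: Int
lazyCounter =
  Lazy.findWithDefault 0 0 $
    foldl' (\m _ -> Lazy.adjust (+1) 0 m) (Lazy.singleton 0 0) [1 .. 100000 :: Int]
strictCounter =
  Strict.findWithDefault 0 0 $
    foldl' (\m _ -> Strict.adjust (+1) 0 m) (Strict.singleton 0 0) [1 .. 100000 :: Int]
```

Note that `Data.IntMap` re-exports the *lazy* flavour, which is exactly the trap described above.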

While working on my RTS, I realized that I was going to need fast spatial queries to answer questions like "is there anyone around that I should attack?" The result was some sort of Frankenstein bastard child of a quadtree and a reverse index, built to answer both "where am I?" and "who's nearby?"

This worked well for the queries I asked of it, but posed a problem: in order to maintain its indices, my datastructure needed to be the source of truth on who was where. Having a `position` component wasn't going to cut it anymore, since the ECS was no longer responsible for this data. I briefly considered trying to write a shim to keep the two datasources in sync, but it felt simultaneously like an ad-hoc hack and a maintenance nightmare, so I gave up and removed the component.

Unfortunately, all was not well. I added some monadic getters and setters to help shuffle the position information around, but oh god, this became a garbage fire. Updates that used to be atomic now required extra calls to get and set the bastard, and everything was miserable.

I realized what I really wanted was the capability for `ecstasy` to be *aware* of components without necessarily being the *owner* of them. Which is to say, components whose reads and writes invisibly dispatch out to some other monadic system.

OK, great, I knew what I wanted. Unfortunately, the implementation was not so straightforward. The problem was the functions I wanted:

```
vget :: Ent -> m (Maybe a)
vset :: Ent -> Update a -> m ()
```

had this troublesome `m` parameter, and there was no clear place to put it. The monad to dispatch virtual calls to is a property of the interpretation of the data (actually running the sucker), not the data itself.

As a result, it wasn't clear where to actually keep the `m` type parameter. For example, assuming we want `position` to be virtual in our world:

```
data World s = World
  { position :: Component s 'Virtual (V2 Double)
  }
```

Somehow, after unifying `s ~ 'Storage`, we want this to come out as:

```
data World 'Storage = World
  { position :: ( Ent -> m (Maybe (V2 Double))      -- vget
                , Ent -> Update (V2 Double) -> m () -- vset
                )
  }
```

But where do we get the `m` from? There's no obvious place.

We could add it as a mandatory parameter on `World`, but that forces an implementation detail on people who don't need any virtual fields.

We *could* existentialize it, and then `unsafeCoerce` it back, but... well, I stopped following that line of thought pretty quickly.

My first solution to this problem was to add a `Symbol` to the `Virtual` component-type token, indicating the "name" of this component, and then using a typeclass instance to actually connect the two:

```
data World s = World
  { position :: Component s ('Virtual "position") (V2 Double)
  }

-- we put the monad here: `m`
instance VirtualAccess "position" m (V2 Double) where
  vget = ...
  vset = ...
```

While it *worked*, this was obviously a hack and my inner muse of library design was so offended that I spent another few days looking for a better solution. Thankfully, I came up with one.

The solution is one I had already skirted around, but failed to notice. This monad is a property only of the interpretation of the data, which is to say it really only matters when we're building the world *storage*. Which means we can do some janky dependency-injection stuff and hide it inside of the storage-type token.

Which is to say, given a world of the form:

```
data World s = World
  { position :: Component s 'Virtual (V2 Double)
  }
```

we could just pass in the appropriate monad when instantiating the world for its storage. Pseudocode:

```
data World (Storage m) = World
  { position :: Component (Storage m) 'Virtual (V2 Double)
  }
```

All of a sudden, the `Component` type family now has access to `m`, and so it can expand into the `vget`/`vset` pair in a type-safe way. And the best part is that this is completely invisible to the user, who never needs to care about our clever implementation details.

Spectacular! I updated all of the code generated via `GHC.Generics` to run in `m` so it could take advantage of this virtual dispatch, and shipped a new version of `ecstasy`.

While all of this virtual stuff worked, it didn't work particularly quickly. I noticed some significant performance regressions in my RTS upon upgrading to the new version. What was up? I dug in with the profiler and saw that my `GHC.Generics`-derived code was no longer being inlined. HKD was performing more terribly than I thought!

All of my `INLINE` pragmas were still intact, so I wasn't super sure what was going on. I canvassed #ghc on freenode, and the ever-helpful glguy had this to say:

> generics can't optimize away when that optimization relies on GHC applying Monad laws to do it

Oh. Lame. That's why my performance had gone to shit!

I'm not sure if this is true, but my understanding is that the problem was that my monad was polymorphic, and thus the inliner wasn't getting a chance to fire. glguy pointed me towards the aptly-named `confusing` lens combinator, whose documentation reads:

> Fuse a `Traversal` by reassociating all of the `<*>` operations to the left and fusing all of the `fmap` calls into one. This is particularly useful when constructing a `Traversal` using operations from `GHC.Generics`...
>
> `confusing` exploits the Yoneda lemma to merge their separate uses of `fmap` into a single `fmap` and it further exploits an interesting property of the right Kan lift (or Curried) to left associate all of the uses of `<*>` to make it possible to fuse together more `fmap`s.
>
> This is particularly effective when the choice of functor `f` is unknown at compile time or when the `Traversal` in the above description is recursive or complex enough to prevent inlining.

That sounds *exactly* like the problem I was having, doesn't it? The actual `confusing` combinator itself was no help in this situation, so I dug in and looked at its implementation. It essentially lifts your `m`-specific actions into `Curried (Yoneda m) (Yoneda m)` (don't ask me!), and then lowers it at the very end. My (shaky) understanding is this:

`Yoneda f` is a functor even when `f` itself is not, which means we have a free functor instance, which itself means that `fmap` on `Yoneda f` can't just lift `fmap` from `f`. This is cool if `fmap`ing over `f` is expensive -- `Yoneda` just fuses all `fmap`s into a single one that gets performed when you lower yourself out of it. Essentially it's an encoding that reduces an *O*(*n*) cost of doing *n* `fmap`s down to *O*(1).
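To see where that fusion comes from, here is `Yoneda` written out -- this matches the standard definition from the `kan-extensions` package, reproduced as a self-contained sketch:

```haskell
{-# LANGUAGE RankNTypes #-}

-- | A value of 'f a' that has abstracted over the fmap you will
-- eventually apply to it.
newtype Yoneda f a = Yoneda { runYoneda :: forall b. (a -> b) -> f b }

liftYoneda :: Functor f => f a -> Yoneda f a
liftYoneda fa = Yoneda $ \f -> fmap f fa

-- the single, real fmap on 'f' happens here, at lowering time
lowerYoneda :: Yoneda f a -> f a
lowerYoneda (Yoneda k) = k id

-- note: no 'Functor f' constraint! fmap merely composes functions,
-- which is why n fmaps cost only one fmap on the underlying f.
instance Functor (Yoneda f) where
  fmap f (Yoneda k) = Yoneda $ \g -> k (g . f)
```

For example, `lowerYoneda (fmap (+1) (fmap (*2) (liftYoneda xs)))` performs one `fmap ((+1) . (*2))` over `xs` rather than two traversals.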

`Curried f f` similarly has a free `Applicative` instance, which, he says waving his hands furiously, is where the `<*>` improvements come from.

So I did a small amount of work to run all of my `GHC.Generics` code in `Curried (Yoneda m) (Yoneda m)` rather than in `m` directly, and looked at my perf graphs. While I was successful in optimizing away my `GHC.Generics` code, I was also successful in merely pushing all of the time and allocations out of it and into `Yoneda.fmap`. Curiously, this function isn't marked as `INLINE`, which I suspect is why the inliner is giving up (the isomorphic `Functor` instance for `Codensity` *is* marked as `INLINE`, so I am *very hesitantly* rallying the hubris to suggest this is a bug in an Ed Kmett library).

Despite the fact that I've been saying "we want to run virtual monadic actions" throughout this post, I've really meant "we want to run virtual applicative actions." Which is why I thought I could get away with using `Curried (Yoneda m) (Yoneda m)` to solve my optimization problems for me.

So instead I turned to `Codensity`, which legend tells can significantly improve the performance of free *monads* by way of the same mystical category-theoretical encodings. Lo and behold, moving all of my monadic actions into `Codensity m` was in fact enough to get the inliner running again, and as a result, to get our HKD once more to be less terrible.

If you're curious how `Codensity` and friends work their magic, glguy pointed me to a tutorial he wrote explaining the technique. Go give it a read if you're feeling plucky and adventurous.

> Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.
>
> -- Ian Malcolm, Jurassic Park

Designing an abstraction or library often feels wonderfully unconstrained; it is the task of the engineer (or logician) to create something from nothing. With experience and training, we begin to be able to consider and make trade-offs: efficiency vs simplicity-of-implementation vs ease-of-use vs preventing our users from doing the wrong thing, among many other considerations. Undeniably, however, there seems to be a strong element of "taste" that goes into design as well; two engineers with the same background, task, and sensibilities will still come up with two different interfaces to the same abstraction.

The tool of denotational design aims to help us nail down exactly what is this "taste" thing. Denotational design gives us the ability to look at designs and ask ourselves whether or not they are *correct.*

However, it's important to recognize that having a tool to help us design doesn't need to take the *fun* out of the endeavor. Like any instrument, it's up to the craftsman to know when and how to apply it.

This essay closely works through Conal Elliott's fantastic paper Denotational design with type class morphisms.

Consider the example of `Data.Map.Map`. At its essence, the interface is given by the following "core" pieces of functionality:

```
empty  :: Map k v
insert :: k -> v -> Map k v -> Map k v
lookup :: k -> Map k v -> Maybe v
union  :: Map k v -> Map k v -> Map k v
```

With the laws:

```
-- get back what you put in
lookup k (insert k v m) = Just v
-- keys replace one another
insert k b (insert k a m) = insert k b m
-- empty is an identity for union
union empty m = m
union m empty = m
-- union is just repeated inserts
insert k v m = union (insert k v empty) m
```

These laws correspond with our intuitions behind what a `Map` is and, furthermore, capture exactly the semantics we'd like. Although it might seem silly to explicitly write out such "obvious" laws, it is the laws that give your abstraction meaning.

Consider instead the example:

```
empathy :: r -> f -> X r f -> X r f
fear :: e -> X e m -> Either () m
taste :: X o i -> X o i -> X o i
zoo :: X z x
```

It might take you some time to notice that this `X` thing is just the result of me randomly renaming identifiers in `Map`. The names are valuable to us only because they suggest meanings to us. Despite this, performing the same substitutions on the `Map` laws would still capture the semantics we want. The implication is clear: names are helpful, but laws are invaluable.

Our quick investigation into the value of laws has shown us one example of how to assert meaning on our abstractions. We will now take a more in-depth look at another way of doing so.

Let us consider the concept of a "meaning functor." We can think of the term `μ(Map k v)` as "the meaning of `Map k v`." `μ(Map k v)` asks not how `Map k v` is implemented, but instead, how we should think about it. What metaphor should we use to think about a `Map`? The *μ*(⋅) operator, like any functor, will map types to types, and functions to functions.

We can encode this mapping as a function, and the partiality with `Maybe`:

`μ(Map k v) = k -> Maybe v`

With the meaning of our type nailed down, we can now also provide meanings for our primitive operations on `Map`s:

`μ(empty) = \k -> Nothing`

An empty map is one which assigns `Nothing` to everything.

`μ(lookup k m) = μ(m) k`

Looking up a key in the map is just giving back the value at that key.

```
μ(insert k' v m) = \k ->
  if k == k'
    then Just v
    else μ(m) k
```

If the key we ask for is the one we inserted, give back the value associated with it.

```
μ(union m1 m2) = \k ->
  case μ(m1) k of
    Just v  -> Just v
    Nothing -> μ(m2) k
```

Attempt a lookup in a union by looking in the left map first.
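These four meanings transcribe directly into runnable Haskell. Treating the denotation itself as a (naive, inefficient) implementation lets us test our intuitions against the laws from earlier; the `M`-suffixed names are mine, used only to avoid clashing with the Prelude:

```haskell
-- the meaning of a Map, taken literally as a type
type MapMeaning k v = k -> Maybe v

-- an empty map assigns Nothing to everything
emptyM :: MapMeaning k v
emptyM = \_ -> Nothing

-- looking up a key is just applying the function
lookupM :: k -> MapMeaning k v -> Maybe v
lookupM k m = m k

-- insertion shadows the old value at exactly one key
insertM :: Eq k => k -> v -> MapMeaning k v -> MapMeaning k v
insertM k' v m = \k -> if k == k' then Just v else m k

-- union tries the left map first
unionM :: MapMeaning k v -> MapMeaning k v -> MapMeaning k v
unionM m1 m2 = \k -> case m1 k of
  Just v  -> Just v
  Nothing -> m2 k
```

With these in hand, the first law checks out concretely: `lookupM 1 (insertM 1 "a" emptyM)` gives `Just "a"`.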

Looking at these definitions of meaning, it's clear that they capture an intuitive (if perhaps naive) meaning and implementation of a `Map`. Regardless of our eventual implementation of `Map`, *μ*(⋅) is a functor that transforms it into the same "structure" (whatever that means) over *functions.*

Herein lies the core principle of denotational design: for any type `X` designed in this way, `X` *must be isomorphic* to `μ(X)`; literally no observational test (i.e. you're not allowed to run a profiler on the executed code) should be able to differentiate one from the other.

This is not to say that it's necessary that `X = μ(X)`. Performance or other engineering concerns may dissuade us from equating the two -- after all, it would be insane if `Map` were actually implemented as a big chain of nested if-blocks. All we're saying is that nothing in the implementation is allowed to break our suspension of belief that we are actually working with `μ(Map)`. Believe it or not, this is a desirable property; we all have a lot more familiarity with functions and other fundamental types than we do with the rest of the (possibly weird corners of the) ecosystem.

The condition that `X ≅ μ(X)` is much more constraining than it might seem at first glance. For example, it means that all instances of our typeclasses must agree between `X` and `μ(X)` -- otherwise we'd be able to differentiate the two.

Our `Map` has some obvious primitives for building a `Monoid`, so let's do that:

```
instance Monoid (Map k v) where
  mempty  = empty
  mappend = union
```

While this is indeed a `Monoid`, it looks like we're already in trouble. The `Monoid` instance definition for `μ(Map)`, after specializing to our types, instead looks like this:

`instance Monoid v => Monoid (k -> Maybe v) where`

There's absolutely no way that these two instances could be the same. Darn. Something's gone wrong along the way, suggesting that `μ(Map)` isn't in fact a denotation of `Map`. Don't panic; this kind of thing happens. We're left with an intriguing question: is it our meaning functor that's wrong, or the original API itself?

Our instances of `Monoid Map` and `Monoid μ(Map)` do not agree, leading us to the conclusion that `μ(Map)` *cannot be* the denotation for `Map`. We are left with the uneasy knowledge that at least one of them is incorrect, but without further information, we are unable to do better.

A property of denotations is that their instances of typeclasses are always homomorphisms, which is to say that they are *structure preserving.* Even if you are not necessarily familiar with the word, you will recognize the concept when you see it. It's a pattern that often comes up when writing instances over polymorphic datastructures.

For example, let's look at the `Functor` instance for a pair of type `(a, b)`:

```
instance Functor ((,) a) where
  fmap f (a, b) = (a, f b)
```

This is a common pattern; unwrap your datatype, apply what you've got anywhere you can, and package it all up again in the same shape. It's this "same shape" part that makes the thing structure preserving.

The principle to which we must adhere can be expressed with a pithy phrase: *the meaning of the instance is the instance of the meaning.* This is true for any meaning functor which is truly a denotation. What this means, for our hypothetical type `μ(X)`, is that all of our instances must be of this form:

```
instance Functor μ(X) where
  μ(fmap f x) = fmap f μ(x)

instance Applicative μ(X) where
  μ(pure x)  = pure x
  μ(f <*> x) = μ(f) <*> μ(x)
```

and so on.

Having such a principle gives us an easy test for whether or not our meaning functor is correct; if any of our instances do not reduce down to this form, we know our meaning must be incorrect. Let's take a look at our implementation of `mempty`:

```
μ(mempty) = \k -> Nothing
          = \k -> mempty
          = const mempty
          = mempty   -- (1)
```

At (1), we can collapse our `const mempty` with `mempty` because that is the definition of the `Monoid ((->) a)` instance. So far, our meaning is looking like a true denotation. Let's also look at `mappend`:

```
μ(mappend m1 m2) = \k ->
  case μ(m1) k of
    Just v  -> Just v
    Nothing -> μ(m2) k
```

It's not immediately clear how to wrestle this into a homomorphism, so let's work from the other direction and see if we can meet in the middle:

```
mappend μ(m1) μ(m2)
  = mappend (\k -> v1) (\k -> v2)
  = \k -> mappend v1 v2
  = \k ->
      case v1 of   -- (2)
        z@(Just a) ->
          case v2 of
            Just b  -> Just $ mappend a b
            Nothing -> z
        Nothing -> v2
```

At (2) we inline the definition of `mappend` for `Maybe`.

That's as far as we can go, and, thankfully, that's far enough to see that our instances do not line up. While `mappend` for `μ(Map)` is left-biased, the one for our denotation may not be.

We're left with the conclusion that our meaning functor *μ*(⋅) must be wrong; either the representation of `μ(Map)` is incorrect, or our meaning `μ(mappend)` is. Fortunately, we are free to change either in order to make them agree. Because we're sure that the left-bias in `mappend` is indeed the semantics we want, we must change the representation.

Fortunately, this is an easy fix; `Data.Monoid` provides the `First` newtype wrapper, which provides the left-biased monoid instance we want. Substituting it in gives us:

`μ(Map k v) = k -> First v`

Subsequent analysis of this revised definition of `μ(Map)` reveals that it does indeed satisfy the homomorphism requirement. This is left as an exercise to the reader.
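As a quick sanity check of the key step (not the full exercise), we can confirm that the monoid on `k -> First v` really is left-biased, exactly matching `μ(union)`. The two example "meanings" `m1` and `m2` below are my own:

```haskell
import Data.Monoid (First (..))

-- m1 is only defined at key 0; m2 is defined everywhere
m1, m2 :: Int -> First String
m1 k = First $ if k == 0 then Just "left" else Nothing
m2 _ = First $ Just "right"
```

Here `getFirst ((m1 <> m2) 0)` is `Just "left"` (the left map wins where both are defined), while `getFirst ((m1 <> m2) 1)` is `Just "right"` (we fall through to the right map) -- the left-biased union semantics, obtained for free from the function and `First` instances.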

We have now derived a denotation behind `Map`, one with a sensible `Monoid` instance. This gives rise to a further question -- which other instances should we provide for `Map`?

`Map` is obviously a `Functor`, but is it an `Applicative`? There are certainly *implementations* of `Applicative (Map k)`, but it's unclear which is the one we should provide. To make the discussion concrete: what should be the semantics behind `pure 17`? Your intuition probably suggests we should get a singleton `Map` with a value of 17, but what should its key be? There's no obvious choice, unless we ensure `k` is a `Monoid`.

Another alternative is that we return a `Map` in which *every* key maps to 17. This is the implementation suggested by the `Applicative` homomorphism of `μ(Map)`, but it doesn't agree with our intuition. Alternatively, we could follow in the footsteps of `Data.Map.Map`, whose solution to this predicament is to sit on the fence and not provide any `Applicative` instance whatsoever.

Sitting on the fence is not a very satisfying solution, however. `Applicative` is a particularly useful class, and having access to it would greatly leverage the Haskell ecosystem in terms of what we can do with our `Map`. As a general rule of thumb, any type which *can* be an instance of the standard classes *should* be, even if it requires a little finagling to make happen.

We find ourselves at an impasse, and so we can instead turn to other tweaks in our meaning functor, crossing our fingers that they will elicit inspiration.

Given the `Compose` type from `Data.Functor.Compose`, we can re-evaluate our choices once more (as we will see, this is a common theme in denotational design):

```
data Compose f g a = Compose
  { getCompose :: f (g a)
  }
```

`Compose` is a fantastic tool when building new types that are composites of others. For example, consider the meaning `μ(Map k v) = k -> First v`. If we'd like to `fmap` over the `v` here, we'll need to perform two of them:

```
f             :: v -> w
fmap (fmap f) :: μ(Map k v) -> μ(Map k w)
```

Although it seems minor, this is in fact quite a large inconvenience. Not only does it require us to `fmap` through two layers of functors; more egregiously, it allows us to use a *single* `fmap` to break the abstraction. Consider the case of `fmap (const 5)` -- mapping over only the outer layer transforms a `μ(Map k v)` into a `k -> Int`, which is obviously *not* the meaning of a `Map` anymore. Yikes.

We instead can re-redefine `μ(Map k v)`:

`μ(Map k v) = Compose ((->) k) First v`

Presented in this form, we are exposed to another interpretation of what our type means. `μ(Map)` is a composition of some sort of *mapping-ness* (`(->) k`) and of *partiality* (`First`). The mapping-ness is obviously crucial to the underlying concept, but it's harder to justify the partiality. One interpretation is that we use the `Nothing` value to indicate there was no corresponding key, but another is that we use `Nothing` as a *default value*.

When viewed as a default, a few minutes' pondering reveals that a partial map (`k -> Maybe v`) is just a special case of a total map (`k -> v`) in which the value itself is partial. Maybe -- if you'll excuse the pun -- partiality is completely orthogonal to the semantics we want to express.

As our final (and ultimately correct) attempt, we define

`μ(Map k v) = k -> v`

From here, the problem of "what typeclasses should this thing have?" becomes quite trivial -- we should provide equivalent instances for all of those of `k -> v`. The question of what our `Applicative` instance should do is resolved: the same thing arrows do.
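Spelled out, the arrow instances our `Map` must agree with are just the reader instances; here they are standalone, under hypothetical names (`pureFn`/`apFn`) so as not to clash with the Prelude:

```haskell
-- pure builds a total map that ignores its key
pureFn :: v -> (k -> v)
pureFn v = \_ -> v

-- <*> applies the function stored at a key to the
-- argument stored at that same key
apFn :: (k -> (a -> b)) -> (k -> a) -> (k -> b)
apFn f x = \k -> f k (x k)
```

So `pure 17` denotes the map sending *every* key to 17, as argued above, and zipping two maps with `<*>` combines them pointwise, key by key.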

A point worth stressing here is that just because the *meaning* of `Map k v` is `k -> v`, it doesn't mean our *representation* must be. For example, we could conceive of implementing `Map` as the following:

```
data Map k v = Map
  { mapDefVal :: v
  , mapTree   :: BalancedTree k v
  }

lookup :: Ord k => Map k v -> k -> v
lookup m = fromMaybe (mapDefVal m) . treeLookup (mapTree m)
```

Such an implementation gives us all of the asymptotics of a tree-based map, but the denotations of (and therefore the *intuitions* behind) functions.

Hopefully this worked example has given you some insight into how the process of denotational design works. Guess at a denotation and then ruthlessly refine it until you get something that captures the real essence of what you're trying to model. It's a spectacularly rewarding experience to find an elegant solution to a half-baked idea, and your users will thank you to boot.

As of yesterday, I have typeclass resolution working. The algorithm to desugar constraints into dictionaries hasn't been discussed much. Since it's rather involved, and quite interesting, I thought it might make a good topic for a blog post.

Our journey begins having just implemented Algorithm W aka Hindley-Milner. This is pretty well described in the literature, and there exist several implementations of it in Haskell, so we will not dally here. Algorithm W cashes out in a function of the type:

`infer :: SymTable VName Type -> Exp VName -> TI Type`

where `SymTable VName` is a mapping from identifiers in scope to their types, `Exp VName` is an expression we want to infer, and `TI` is our type-inference monad. As a monad, `TI` gives us the ability to generate fresh type variables, and to unify types as we go. `Type` represents an unqualified type, which is to say it can be used to describe the types `a` and `Int`, but not `Eq a => a`. We will be implementing qualified types in this blog post.

`infer` is implemented as a catamorphism, which generates a fresh type variable for every node in the expression tree, looks up free variables in the `SymTable`, and attempts to unify as it goes.

The most obvious thing we need to do in order to introduce constraints to our typechecker is to be able to represent them, so we define two types:

```
infixr 0 :=>
data Qual t = (:=>)
  { qualPreds  :: [Pred]
  , unqualType :: t
  } deriving (Eq, Ord, Functor, Traversable, Foldable)

data Pred = IsInst
  { predCName :: TName
  , predInst  :: Type
  } deriving (Eq, Ord)
```

Cool. A `Qual Type` is now a qualified type, and we can represent `Eq a => a` via `[IsInst "Eq" "a"] :=> "a"` (assuming `OverloadedStrings` is turned on). With this out of the way, we'll update the type of `infer` so its symbol table is over `Qual Type`s, and make it return a list of `Pred`s:

`infer :: SymTable VName (Qual Type) -> Exp VName -> TI ([Pred], Type)`

We update the algebra behind our `infer` catamorphism so that it adds any `Pred`s necessary when instantiating types:

```
infer sym (V a) =
  case lookupSym a sym of
    Nothing -> throwE $ "unbound variable: '" <> show a <> "'"
    Just sigma -> do
      (ps :=> t) <- instantiate a sigma
      pure (ps, t)
```

and can patch any other cases which might generate `Pred`s. At the end of our cata, we'll have a big list of constraints necessary for the expression to typecheck.

As a first step, we'll just write the type-checking part necessary to implement this feature. Which is to say, we'll need a system for discharging constraints at the type-level, without necessarily doing any work towards code generation.

Without the discharging step, for example, our algorithm will typecheck `(==) (1 :: Int)` as `Eq Int => Int -> Bool`, rather than `Int -> Bool` (since it knows `Eq Int`).

Discharging is a pretty easy algorithm. For each `Pred`, see if it matches the instance head of any instance you have in scope; if so, recursively discharge all of the instance's context. If you are unable to find any matching instances, just keep the `Pred`. For example, given the instances:

```
instance Eq Int
instance (Eq a, Eq b) => Eq (a, b)
```

and the `Pred` `IsInst "Eq" ("Int", "c")`, our discharge algorithm will look like this:

```
discharging: Eq (Int, c)
  try: Eq Int    --> does not match
  try: Eq (a, b) --> matches
  remove `Eq (Int, c)` pred
  match types:
    a ~ Int
    b ~ c
  discharge: Eq Int
  discharge: Eq c

discharging: Eq Int
  try: Eq Int --> matches
  remove `Eq Int` pred

discharging: Eq c
  try: Eq Int    --> does not match
  try: Eq (a, b) --> does not match
  keep `Eq c` pred
```

We can implement this in Haskell as:

```
match    :: Pred -> Pred -> TI (Maybe Subst)
getInsts :: ClassEnv -> [Qual Pred]

discharge :: ClassEnv -> Pred -> TI (Subst, [Pred])
discharge cenv p = do
  -- find matching instances and return their contexts
  matchingInstances <-
    for (getInsts cenv) $ \(qs :=> t) -> do
      -- the alternative here is to prevent emitting kind
      -- errors if we compare this 'Pred' against a
      -- differently-kinded instance.
      res <- (fmap (qs,) <$> match t p) <|> pure Nothing
      pure $ First res
  case getFirst $ mconcat matchingInstances of
    Just (qs, subst) ->
      -- match types in the context, then discharge it
      let qs' = sub subst qs
       in fmap mconcat $ traverse (discharge cenv) qs'
    Nothing ->
      -- unable to discharge
      pure (mempty, pure p)
```

Great! This works as expected, and if we want to only write a type-checker, this is sufficient. However, we don't want to only write a type-checker; we also want to generate code capable of using these instances!

We can start by walking through the transformation in Haskell, and then generalizing from there into an actual algorithm. Starting from a class definition:

```
class Functor f where
  fmap :: (a -> b) -> f a -> f b
```

we will generate a dictionary type for this class:

```
data @Functor f = @Functor
  { @fmap :: (a -> b) -> f a -> f b
  }
```

(I'm using the `@` signs here because these things are essentially type applications. That being said, there will be no type applications in this post, so the `@` should always be understood to be machinery generated by the compiler for dictionary support.)

Such a definition will give us the following terms:

```
@Functor :: ((a -> b) -> f a -> f b) -> @Functor f
@fmap :: @Functor f -> (a -> b) -> f a -> f b
```

Notice that `@fmap` is just `fmap` but with an explicit dictionary (`@Functor f`) being passed in place of the `Functor f` constraint.

From here, in order to actually construct one of these dictionaries, we can simply inline the instance's methods:

```
instance Functor Maybe where
  fmap = \f m -> case m of { Just x -> Just (f x); Nothing -> Nothing }

-- becomes

@Functor@Maybe :: @Functor Maybe
@Functor@Maybe = @Functor
  { @fmap = \f m -> case m of { Just x -> Just (f x); Nothing -> Nothing }
  }
```

Now we need to look at how these dictionaries actually get used. It's clear that every `fmap` in our expression tree should be replaced with `@fmap d` for some `d`. If the type of `d` is monomorphic, we can simply substitute the dictionary we have:

```
x :: Maybe Int
x = fmap (+5) (Just 10)
-- becomes
x :: Maybe Int
x = @fmap @Functor@Maybe (+5) (Just 10)
```

but what happens if the type `f` is polymorphic? There's no dictionary we can reference statically, so we'll need to take it as a parameter:

```
y :: Functor f => f Int -> f Int
y = \z -> fmap (+5) z
-- becomes
y :: @Functor f -> f Int -> f Int
y = \d -> \z -> @fmap d (+5) z
```

A reasonable question is: when should we insert these lambdas to bind the dictionaries? This stumped me for a while, but the answer is whenever you get to a binding group -- which is to say, whenever your expression is bound by a `let`, or whenever you finish processing a top-level definition.

One potential gotcha is what should happen in the case of instances with their own contexts, for example `instance (Eq a, Eq b) => Eq (a, b)`. Well, the same rules apply; since `a` and `b` are polymorphic constraints, we'll need to parameterize our `@Eq@(,)` dictionary by the dictionaries witnessing `Eq a` and `Eq b`:

```
instance (Eq a, Eq b) => Eq (a, b) where
  (==) = \ab1 ab2 -> (==) (fst ab1) (fst ab2)
                  && (==) (snd ab1) (snd ab2)

-- becomes

@Eq@(,) :: @Eq a -> @Eq b -> @Eq (a, b)
@Eq@(,) = \d1 -> \d2 ->
  @Eq
    { (@==) = \ab1 ab2 -> (@==) d1 (fst ab1) (fst ab2)
                       && (@==) d2 (snd ab1) (snd ab2)
    }
```

Super-class constraints behave similarly.
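All of this `@`-machinery can be emulated today in source Haskell by passing records around by hand. Here is the `Eq` pipeline from above as a runnable sketch, with my `EqDict` standing in for the compiler-generated `@Eq`:

```haskell
-- a hand-rolled dictionary for the Eq class
newtype EqDict a = EqDict { dEq :: a -> a -> Bool }

-- the analogue of @Eq@Int: a ground instance is just a value
eqInt :: EqDict Int
eqInt = EqDict (==)

-- the analogue of @Eq@(,): an instance with a context becomes
-- a function from the context's dictionaries to the new one
eqPair :: EqDict a -> EqDict b -> EqDict (a, b)
eqPair da db = EqDict $ \(a1, b1) (a2, b2) ->
  dEq da a1 a2 && dEq db b1 b2
```

Calling `dEq (eqPair eqInt eqInt)` is exactly the shape the desugarer has to produce whenever it sees `(==)` used at type `(Int, Int)`.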

So with all of the theory under our belts, how do we actually go about implementing this? The path forward isn't as straightforward as we might like; while we're type-checking we need to desugar terms with constraints on them, but the result of that desugaring depends on the eventual type these terms receive.

For example, if we see `(==)` in our expression tree, we want to replace it with `(@==) d`, where `d` might be `@Eq@Int`, or it might be `@Eq@(,) d1 d2`, or it might just stay as `d`! And the only way we'll know what's what is *after* we've performed the dischargement of our constraints.

As usual, the solution is to slap more monads into the mix:

```
infer
    :: SymTable VName (Qual Type)
    -> Exp VName
    -> TI ( [Pred]
          , Type
          , Reader (Pred -> Exp VName)
                   (Exp VName)
          )
```

Our `infer` catamorphism now returns an additional `Reader (Pred -> Exp VName) (Exp VName)` -- which is to say, an expression that has access to which expression it should substitute for each of its `Pred`s. We will use this mapping to assign dictionaries to `Pred`s, allowing us to fill in the dictionary terms once we've figured them out.

We're in the home stretch; now all we need to do is have `discharge` build that map from `Pred`s into their dictionaries and we're good to go.

```
getDictTerm        :: Pred -> Exp VName
getDictTypeForPred :: Pred -> Type

-- DSL-level function application
(:@) :: Exp VName -> Exp VName -> Exp VName

discharge
    :: ClassEnv
    -> Pred
    -> TI ( Subst
          , [Pred]
          , Map Pred (Exp VName)
          , [Assump Type]
          , [Exp VName]
          )
discharge cenv p = do
  matchingInstances <-
    for (getInsts cenv) $ \(qs :=> t) -> do
      res <- (fmap (qs, t, ) <$> match t p) <|> pure Nothing
      pure $ First res

  case getFirst $ mconcat matchingInstances of
    Just (qs, t, subst) -> do
      -- discharge all constraints on this instance
      (subst', qs', mapPreds, assumps, subDicts)
          <- fmap mconcat
           . traverse (discharge cenv)
           $ sub subst qs
      let dictTerm = getDictTerm t
          myDict   = foldl (:@) dictTerm subDicts
      pure ( subst'
           , qs'
           , mapPreds <> M.singleton p myDict
           , assumps
             -- this is just in a list so we can use 'mconcat' to
             -- collapse our traversal
           , [myDict]
           )

    Nothing -> do
      -- unable to discharge, so assume the existence of a new
      -- variable with the correct type
      param <- newVName "d"
      pure ( mempty
           , [p]
           , M.singleton p param
           , [MkAssump param $ getDictTypeForPred p]
           , [param]
           )
```

The logic of `discharge` is largely the same, except we have a little more logic being driven by its new type. In addition to our previous substitution and new predicates, we now also return a map from `Pred`s to their dictionaries, a list of `Assump`s (more on this in a second), and the resulting dictionary witnessing this discharged `Pred`.

If we were successful in finding a matching instance, we discharge each of its constraints, and fold the resulting dictionaries into ours. The more interesting logic is what happens if we are unable to discharge a constraint. In that case, we create a new variable of the necessary type, give that as our resulting dictionary, and emit it as an `Assump`. `Assump`s are used to denote the creation of a new variable in scope (they are also used for binding pattern matches).

The result of our new `discharge` function is that we have a map from every `Pred` we saw to the resulting dictionary for that instance, along with a list of generated variables. We can build our final expression tree by running the `Reader (Pred -> Exp VName)`, looking up the `Pred`s in our dictionary map. Finally, for every assumption we were left with, we fold our resulting term in a lambda which binds that assumption.
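As a toy model of this deferred substitution (the `Pred` and `Exp` types below are illustrative stand-ins, not the compiler's real AST), we can build an expression inside a `Reader` whose environment is the `Pred`-to-dictionary mapping, and run it only once discharging has decided what each `Pred` resolves to:

```haskell
import Control.Monad.Trans.Reader (Reader, asks, runReader)

-- Illustrative stand-ins for the compiler's types.
data Pred = EqInt | EqPair deriving (Eq, Show)
data Exp  = V String | Exp :@ Exp deriving (Eq, Show)

-- During inference we don't yet know which dictionary term 'EqInt'
-- will resolve to, so we emit an expression that asks the
-- environment for it.
inferEq :: Reader (Pred -> Exp) Exp
inferEq = do
  d <- asks ($ EqInt)
  pure (V "@==" :@ d)

-- Once discharging has built the mapping, run the Reader to fill
-- in the dictionary holes.
dicts :: Pred -> Exp
dicts EqInt  = V "@Eq@Int"
dicts EqPair = (V "@Eq@(,)" :@ V "@Eq@Int") :@ V "@Eq@Int"

main :: IO ()
main = print (runReader inferEq dicts)
```

Running this fills the hole left during inference with the dictionary chosen later, printing `V "@==" :@ V "@Eq@Int"`.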

Very cool! If you're interested in more of the nitty-gritty details behind compiling Haskell98, feel free to SMASH THAT STAR BUTTON on GitHub.

One of the biggest concerns over the HKD technique was that it breaks automated deriving of instances. This is not entirely true; it just requires turning on `{-# LANGUAGE StandaloneDeriving #-}` and then using one of two approaches.

The simplest method is that we can simply derive all of our instances only for the types we expect to use:

```
deriving instance Eq (Person' Identity)
deriving instance Eq (Person' Maybe)
deriving instance Ord (Person' Identity)
deriving instance Ord (Person' Maybe)
```

Admittedly it's kind of a shit solution, but technically it does work.

An alternative approach is to automatically lift these instances from `f a` over the `HKD f a` type family. The construction is a little more involved than I want to get into today, but thankfully it's available as library code from the spiffy `one-liner` package.

After adding `one-liner` as a dependency, we can lift our instances over a polymorphic `f` using the `Constraints` type-synonym:

`deriving instance (Constraints (Person' f) Eq) => Eq (Person' f)`

Easy!

The other big concern was over whether we pay performance costs for getting so many cool things for free.

For the most part, if you mark all of your generic type-class methods as `INLINE` and turn on `-O2`, you're not going to pay any runtime cost for using the HKD technique.

Don't believe me? I can prove it, at least for our free lenses.

Let's fire up the `inspection-testing` package, which allows us to write core-level equalities that we'd like the compiler to prove for us. The equality we want to show is that the core generated for using our free lenses is exactly what would be generated by using hand-written lenses.

We can do this by adding some front-matter to our module:

```
{-# LANGUAGE TemplateHaskell #-}
{-# OPTIONS_GHC -O -fplugin Test.Inspection.Plugin #-}

import Test.Inspection
```

This installs the `inspection-testing` compiler plugin, which is responsible for doing the work for us. Next, we'll define our lenses:

```
freeName :: Lens' (Person' Identity) String
Person (LensFor freeName) _ = getLenses

handName :: Lens' (Person' Identity) String
handName a2fb s = a2fb (pName s) <&> \b -> s { pName = b }
```

and finally, we can write the equalities we'd like GHC to prove for us. This is done in two steps -- writing top-level left- and right-hand sides of the equality, and then writing a TemplateHaskell splice to generate the proof.

```
viewLhs, viewRhs :: Person' Identity -> String
viewLhs = view freeName
viewRhs = view handName

inspect $ 'viewLhs === 'viewRhs
```

Compiling this dumps some new information into our terminal:

```
src/Main.hs:34:1: viewLhs === viewRhs passed.
inspection testing successful
expected successes: 1
```

We can write an analogous equality to ensure that the generated setter code is equivalent:

```
setLhs, setRhs :: String -> Person' Identity -> Person' Identity
setLhs y = freeName .~ y
setRhs y = handName .~ y

inspect $ 'setLhs === 'setRhs
```

And upon compiling this:

```
src/Main.hs:34:1: viewLhs === viewRhs passed.
src/Main.hs:35:1: setLhs === setRhs passed.
inspection testing successful
expected successes: 2
```

Cool! Just to satisfy your curiosity, the actual lenses themselves aren't equivalent:

`inspect $ 'freeName === 'handName` results in a big core dump showing that `freeName` is a gross disgusting chain of `fmap`s and that `handName` is pretty and elegant. And the module fails to compile, which is neat -- it means we can write these proofs inline and the compiler will keep us honest if we ever break them.

But what's cool here is that even though our lenses do *not* result in equivalent code, actually using them does -- which means that under most circumstances, we won't be paying to use them.

`* -> *`, and subsequently wrapping each of its fields by this parameter. The example we used previously was transforming this type:
```
data Person = Person
  { pName :: String
  , pAge  :: Int
  } deriving (Generic)
```

into its HKD representation:

```
data Person' f = Person
  { pName :: HKD f String
  , pAge  :: HKD f Int
  } deriving (Generic)
```

Recall that `HKD` is a type family given by

```
type family HKD f a where
  HKD Identity a = a
  HKD f        a = f a
```

which is responsible for stripping out an `Identity` wrapper. This means we can recreate our original `Person` type via `type Person = Person' Identity`, and use it in all the same places we used to be able to.

Our previous exploration of the topic unearthed some rather trivial applications of this approach; we generated a function `validate :: f Maybe -> Maybe (f Identity)`, which can roughly be described as a "type-level `sequence`." In fact, in the comments, Syrak pointed out that we can implement this function in a less round-about way via `gtraverse id`.

So, how about we do something a little more interesting today? Let's generate lenses for arbitrary product types.

In my opinion, one of the biggest advantages of the HKD approach is that it answers the question "where can we put this stuff we've generated?" Generating lenses generically is pretty trivial (once you have wrapped your head around the mind-boggling types), but the harder part is where to put them. The `lens` package uses TemplateHaskell to generate new top-level bindings so it has somewhere to put the lenses. But we have HKD.

Recall our `Person'` type:

```
data Person' f = Person
  { pName :: HKD f String
  , pAge  :: HKD f Int
  } deriving (Generic)
```

By substituting `f ~ Lens' (Person' Identity)`, we'll have `pName :: Lens' (Person' Identity) String`, which is exactly the type we need. All of a sudden it looks like we have an answer to "where should we put it": inside our original structure itself. If we can generate a record of type `Person' (Lens' (Person' Identity))`, destructuring such a thing will give us the lenses we want, allowing us to name them when we do the destructuring. Cool!

Unfortunately, we're unable to partially apply type-synonyms, so we'll need to introduce a new type constructor that we *can* partially apply. Enter `LensFor`:

```
data LensFor s a = LensFor
  { getLensFor :: Lens' s a
  }
```

The next step is to *think really hard* about what our lens-providing type-class should look like. At the risk of sounding like a scratched CD in a walkman, I consider the design of the typeclass to be by far the hardest part of this approach. So we'll work through the derivation together:

I always begin with my "template" generic-deriving class:

```
class GLenses i o where
  glenses :: i p -> o p
```

where `p` is a mysterious existentialized type parameter "reserved for future use" by the `GHC.Generics` interface. Recall that `i` is the incoming type for the transformation (*not* the original `Person'` type), and `o` is correspondingly the output type of the transformation.

Since lenses don't depend on a particular "input" record -- they should be able to be generated *ex nihilo* -- we can drop the `i p` parameter from `glenses`. Furthermore, since eventually our lenses are going to depend on our "original" type (the `Person'` in our desired `LensFor (Person' Identity)`), we'll need another parameter in our typeclass to track that. Let's call it `z`.

```
class GLenses z i o where
  glenses :: o p
```

As far as methods go, `glenses` is pretty unsatisfactory right now; it leaves most of its type parameters ambiguous. No good. We can resolve this issue by realizing that we're going to need to actually provide lenses at the end of the day, and because `GHC.Generics` doesn't give us any such functionality, we'll need to write it ourselves. Which implies we're going to need to do structural induction as we traverse our generic `Rep`resentation.

The trick here is that in order to provide a lens, we're going to need to have a lens to give. So we'll add a `Lens'` to our `glenses` signature -- but what type should it have? At the end of the day, we want to provide a `Lens' (z Identity) a` where `a` is the type of the field we're trying to get. Since we always want a lens starting from `z Identity`, that pins down one side of our lens parameter.

```
class GLenses z i o where
  glenses :: Lens' (z Identity) _ -> o p
```

We still have the notion of an `i`nput to our `glenses`, which we want to transform into our `o`utput. And that's what tears it; if we have a lens from our original type to where we currently are in our Generic traversal, we can transform that into a Generic structure which contains the lenses we want.

```
class GLenses z i o where
  glenses :: Lens' (z Identity) (i p) -> o p
```

Don't worry if you're not entirely sure about the reasoning here; I wasn't either until I worked through the actual implementation. It took a few iterations to get right. Like I said, figuring out what this method should look like is by far the hardest part. Hopefully going through the rest of the exercise will help convince us that we got our interface correct.

With our typeclass pinned down, we're ready to begin our implementation. We start, as always, with the base case, which here is "what should happen if we have a `K1` type?" Recall that a `K1` corresponds to the end of our generic structural induction, which is to say, this is a type that isn't ours. It's the `HKD f String` in `pName :: HKD f String` from our example.

So, if we have an `a` wrapped in a `K1`, we want to instead produce a `LensFor (z Identity) a` wrapped in the same.

```
instance GLenses z (K1 _x a)
                   (K1 _x (LensFor (z Identity) a)) where
  glenses l = K1                            -- [3]
            $ LensFor                       -- [2]
            $ \f -> l $ fmap K1 . f . unK1  -- [1]
  {-# INLINE glenses #-}
```

Egads, there's a lot going on here. Let's work through it together. In [1], we transform the lens we were given (`l`) so that it will burrow through a `K1` constructor -- essentially turning it from a `Lens' (z Identity) (K1 _x a)` into a `Lens' (z Identity) a`. At [2], we wrap our generated lens in the `LensFor` constructor, and then in [3] we wrap our generated lens back in the `GHC.Generics` machinery so we can transform it back into our HKD representation later.

And now for our induction. The general idea here is that we're going to need to transform the lens we got into a new lens that focuses down through our generic structure as we traverse it. We can look at the `M1` case because it's babby's first instance when compared to `K1`:

```
instance (GLenses z i o)
    => GLenses z (M1 _a _b i) (M1 _a _b o) where
  glenses l = M1 $ glenses $ \f -> l $ fmap M1 . f . unM1
  {-# INLINE glenses #-}
```

Here we're saying we can lift a `GLenses z i o` over an `M1` constructor by calling `glenses` with an updated lens that will burrow through the `M1`-ness. This transformation is completely analogous to the one we did for `K1`. Once we have our generated lenses, we need to re-wrap the structure in an `M1` constructor so we can transform it back into our HKD representation.

The product case looks a little trickier, but it's only because `GHC.Generics` doesn't provide us with any useful un/wrapping combinators for the `(:*:)` constructor.

```
instance (GLenses z i o, GLenses z i' o')
    => GLenses z (i :*: i') (o :*: o') where
  glenses l = glenses (\f -> l (\(a :*: b) -> fmap (:*: b) $ f a))
          :*: glenses (\f -> l (\(a :*: b) -> fmap (a :*:) $ f b))
  {-# INLINE glenses #-}
```

We finish it off with the trivial instances for `V1` and `U1`:

```
instance GLenses z V1 V1 where
  glenses l = undefined

instance GLenses z U1 U1 where
  glenses l = U1
```

And voila! Our induction is complete. Notice that we *did not* write an instance for `(:+:)` (coproducts), because lenses are not defined for coproduct types. This is fine for our `Person'` case, which has no coproducts, but types that do will simply be unable to find a `GLenses` instance, and will fail to compile. No harm, no foul.

With this out of the way, we need to write our final interface, which will use all of the generic machinery and provide nice access to it. We're going to need to call `glenses` (obviously), and pass in a `Lens' (z Identity) (Rep (z Identity))` in order to get the whole thing running. Then, once everything is built, we'll need to call `to` to turn our generic representation back into the HKD representation.

But how can we get a `Lens' (z Identity) (Rep (z Identity))`? Well, we know that `GHC.Generics` gives us an isomorphism between a type and its `Rep`, as witnessed by `to` and `from`. We further know that every `Iso` is indeed a `Lens`, and so the lens we want is just `iso from to`. Our function, then, is "simply":

```
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications    #-}

getLenses
    :: forall z
     . ( Generic (z Identity)
       , Generic (z (LensFor (z Identity)))
       , GLenses z (Rep (z Identity))
                   (Rep (z (LensFor (z Identity))))
       )
    => z (LensFor (z Identity))
getLenses = to $ glenses @z $ iso from to
```

where I just wrote the `z (LensFor (z Identity))` part of the type signature, and copy-pasted constraints from the error messages until the compiler was happy.

OK, so let's take it for a spin, shall we? We can get our lenses thusly:

`Person (LensFor lName) (LensFor lAge) = getLenses`

Yay! Finally we can ask GHCi for their types, which is a surprisingly satisfying experience:

```
> :t lName
lName :: Lens' (Person' Identity) String
```

Pretty sweet, ne? Now that `getLenses` has been implemented generically, it can become library code that will work for any product-type we can throw at it. Which means free lenses without TemplateHaskell for any types we define in the HKD form.
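To see the whole thing run end-to-end, here is a condensed, dependency-free assembly of the machinery from this post -- with a hand-rolled van Laarhoven `Lens'` and `view` standing in for the `lens` package, and `iso from to` inlined as a lambda. Everything else is the code from above, minus the `V1`/`U1` instances:

```haskell
{-# LANGUAGE DeriveGeneric         #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE RankNTypes            #-}
{-# LANGUAGE ScopedTypeVariables   #-}
{-# LANGUAGE TypeApplications      #-}
{-# LANGUAGE TypeFamilies          #-}
{-# LANGUAGE TypeOperators         #-}

import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity)
import GHC.Generics

-- A minimal lens type, instead of depending on 'lens'.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

view :: Lens' s a -> s -> a
view l = getConst . l Const

type family HKD f a where
  HKD Identity a = a
  HKD f        a = f a

data Person' f = Person
  { pName :: HKD f String
  , pAge  :: HKD f Int
  } deriving (Generic)

data LensFor s a = LensFor { getLensFor :: Lens' s a }

class GLenses z i o where
  glenses :: Lens' (z Identity) (i p) -> o p

instance GLenses z (K1 _x a)
                   (K1 _x (LensFor (z Identity) a)) where
  glenses l = K1 (LensFor (\f -> l (fmap K1 . f . unK1)))

instance GLenses z i o
    => GLenses z (M1 _a _b i) (M1 _a _b o) where
  glenses l = M1 (glenses (\f -> l (fmap M1 . f . unM1)))

instance (GLenses z i o, GLenses z i' o')
    => GLenses z (i :*: i') (o :*: o') where
  glenses l = glenses (\f -> l (\(a :*: b) -> fmap (:*: b) (f a)))
          :*: glenses (\f -> l (\(a :*: b) -> fmap (a :*:) (f b)))

getLenses
  :: forall z
   . ( Generic (z Identity)
     , Generic (z (LensFor (z Identity)))
     , GLenses z (Rep (z Identity))
                 (Rep (z (LensFor (z Identity))))
     )
  => z (LensFor (z Identity))
getLenses = to (glenses @z (\f s -> to <$> f (from s)))

lName :: Lens' (Person' Identity) String
lAge  :: Lens' (Person' Identity) Int
Person (LensFor lName) (LensFor lAge) = getLenses

main :: IO ()
main = do
  let p = Person "alice" 30 :: Person' Identity
  putStrLn (view lName p)  -- alice
  print    (view lAge  p)  -- 30
```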

This HKD pattern is useful enough that I've begun implementing literally all of my "data" (as opposed to "control") types as higher-kinded data. With an extra type synonym `type X = X' Identity` and `{-# LANGUAGE TypeSynonymInstances #-}`, nobody will ever know the difference, except that it affords me the ability to use all of this stuff in the future should I want to.

As Conal says, all of this stuff might not necessarily be "for free" but at the very least, it's "already paid for."

More shoutouts to Travis Athougies, whose sweet library `beam` uses this approach to generate lenses for working with SQL tables. I consulted the `beam` source more than a couple of times in writing this post. Thanks again, Travis!

`sequence` over data-types; and automatically track dependencies for usages of record fields. As for this post, we'll look at how to build type-level sequencing, and investigate some other uses in subsequent ones. For our examples, let's define the following (completely arbitrary) data-type:

```
data Person = Person
  { pName :: String
  , pAge  :: Int
  } deriving (Generic)
```

That's cool and all, I guess. For purposes of discussion, let's imagine that we want to let the user fill in a `Person` via a web-form or something. Which is to say, it's possible they'll screw up filling in some piece of information without necessarily invalidating the rest of the datastructure. If they successfully filled in the entire structure, we'd like to get a `Person` out.

One way of modeling this would be with a second datatype:

```
data MaybePerson = MaybePerson
  { mpName :: Maybe String
  , mpAge  :: Maybe Int
  } deriving (Generic)
```

and a function:

```
validate :: MaybePerson -> Maybe Person
validate (MaybePerson name age) =
  Person <$> name <*> age
```

This works, but it's annoying to write by hand, since it's completely mechanical. Furthermore, having duplicated this effort means we'll need to use our brains in the future to make sure all three definitions stay in sync. Wouldn't it be cool if the compiler could help with this?

SURPRISE! IT CAN! And that's what I want to talk about today.

Notice that we can describe both `Person` and `MaybePerson` with the following higher-kinded data (henceforth "**HKD**") definition:

```
data Person' f = Person
  { pName :: f String
  , pAge  :: f Int
  } deriving (Generic)
```

Here we've parameterized `Person'` over something `f` (of kind `* -> *`), which allows us to do the following in order to get our original types back:

```
type Person = Person' Identity
type MaybePerson = Person' Maybe
```

While this works, it's kind of annoying in the `Person` case, since now all of our data is wrapped up inside of an `Identity`:

```
> :t pName @Identity
pName :: Person -> Identity String
> :t runIdentity . pName
runIdentity . pName :: Person -> String
```

We can fix this annoyance trivially, after which we will look at why defining `Person'` as such is actually useful. To get rid of the `Identity`s, we can use a type family (a function at the type-level) that erases them:

```
{-# LANGUAGE TypeFamilies #-}

-- "Higher-Kinded Data"
type family HKD f a where
  HKD Identity a = a
  HKD f        a = f a

data Person' f = Person
  { pName :: HKD f String
  , pAge  :: HKD f Int
  } deriving (Generic)
```

Using the `HKD` type family means that GHC will automatically erase any `Identity` wrappers in our representations:

```
> :t pName @Identity
pName :: Person -> String
> :t pName @Maybe
pName :: Person' Maybe -> Maybe String
```

and with that, the higher-kinded version of `Person` can be used as a drop-in replacement for our original one. The obvious question is: what have we bought ourselves with all of this work? Let's look back at `validate` to help us answer this question. Compare our old implementation:

```
validate :: MaybePerson -> Maybe Person
validate (MaybePerson name age) =
  Person <$> name <*> age
```

with how we can now rewrite it with our new machinery:

```
validate :: Person' Maybe -> Maybe Person
validate (Person name age) =
  Person <$> name <*> age
```

Not a very interesting change, is it? But the intrigue lies in how little needed to change. As you can see, only our type and pattern match needed to change from our original implementation. What's neat here is that we have now consolidated `Person` and `MaybePerson` into the same representation, and therefore they are no longer related only in a nominal sense.

We can write a version of `validate` that will work for any higher-kinded datatype. The secret is to turn to `GHC.Generics`. If you're unfamiliar with them, they provide an isomorphism from a regular Haskell datatype to a generic representation that can be structurally manipulated by a clever programmer (ie: us). By providing code for what to do for constant types, products and coproducts, we can get GHC to write type-independent code for us. It's a really neat technique that will tickle your toes if you haven't seen it before.
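For instance, that isomorphism is witnessed by `from` and `to`. A quick self-contained check (using the `Person` from above) that they really do round-trip:

```haskell
{-# LANGUAGE DeriveGeneric #-}

import GHC.Generics

data Person = Person
  { pName :: String
  , pAge  :: Int
  } deriving (Generic, Eq, Show)

-- 'from' converts a value to its generic representation; 'to'
-- converts it back. Together they witness the isomorphism.
roundTrip :: Person -> Person
roundTrip = to . from

main :: IO ()
main = print (roundTrip (Person "alice" 30))
-- Person {pName = "alice", pAge = 30}
```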

To start with, we need to define a typeclass that will be the workhorse of our transformation. In my experience, this is always the hardest part -- the types of these generic-transforming functions are exceptionally abstract and in my opinion, very hard to reason about. I came up with this:

```
{-# LANGUAGE MultiParamTypeClasses #-}

class GValidate i o where
  gvalidate :: i p -> Maybe (o p)
```

I only have "soft-and-slow" rules for reasoning about what your typeclass should look like, but in general you're going to need both an `i`nput and an `o`utput parameter. They both need to be of kind `* -> *`, and then be passed this existentialized `p`, for dark, unholy reasons known not by humankind. I then have a little checklist I walk through to help me wrap my head around this nightmarish hellscape, which we'll walk through in a later installment of the series.

Anyway, with our typeclass in hand, it's now just a matter of writing out instances of our typeclass for the various `GHC.Generics` types. We can start with the base case, which is that we should be able to validate a `Maybe k`:

```
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE TypeOperators     #-}

instance GValidate (K1 a (Maybe k)) (K1 a k) where
  -- gvalidate :: K1 a (Maybe k) -> Maybe (K1 a k)
  gvalidate (K1 k) = K1 <$> k
  {-# INLINE gvalidate #-}
```

`K1` represents a "constant type", which is to say that it's where our structural recursion conks out. In our `Person'` example, it's the `pName :: HKD f String` bit.

Most of the time, once you have the base case in place, the rest is to just mechanically provide instances for the other types. Unless you need to access metadata about the original type anywhere, these instances will almost always be trivial homomorphisms.

We can start with products -- if we have `GValidate i o` and `GValidate i' o'`, we should be able to run them in parallel:

```
instance (GValidate i o, GValidate i' o')
    => GValidate (i :*: i') (o :*: o') where
  gvalidate (l :*: r) = (:*:)
                    <$> gvalidate l
                    <*> gvalidate r
  {-# INLINE gvalidate #-}
```

Where `K1` referred directly to the selectors of our `Person'`, `(:*:)` corresponds roughly to the `,` piece of syntax we separate our record fields with.

We can define a similar instance of `GValidate` for coproducts (corresponding to a `|` in a data definition):

```
instance (GValidate i o, GValidate i' o')
    => GValidate (i :+: i') (o :+: o') where
  gvalidate (L1 l) = L1 <$> gvalidate l
  gvalidate (R1 r) = R1 <$> gvalidate r
  {-# INLINE gvalidate #-}
```

Furthermore, if we don't care about looking at metadata, we can simply lift a `GValidate i o` over the metadata constructor:

```
instance GValidate i o
    => GValidate (M1 _a _b i) (M1 _a' _b' o) where
  gvalidate (M1 x) = M1 <$> gvalidate x
  {-# INLINE gvalidate #-}
```

Just for kicks, we can provide the following trivial instances, for uninhabited types (`V1`) and for constructors without any parameters (`U1`):

```
instance GValidate V1 V1 where
  gvalidate = undefined
  {-# INLINE gvalidate #-}

instance GValidate U1 U1 where
  gvalidate U1 = Just U1
  {-# INLINE gvalidate #-}
```

The use of `undefined` here is safe, since it can only be called with a value of `V1`. Fortunately for us, `V1` is uninhabited, so this can never happen, and thus we're morally correct in our usage of `undefined`.

Without further ado, now that we have all of this machinery out of the way, we can finally write our generic version of `validate`:

```
{-# LANGUAGE FlexibleContexts #-}

validate
    :: ( Generic (f Maybe)
       , Generic (f Identity)
       , GValidate (Rep (f Maybe))
                   (Rep (f Identity))
       )
    => f Maybe
    -> Maybe (f Identity)
validate = fmap to . gvalidate . from
```

I always get a goofy smile when the signature for my function is longer than the actual implementation; it means we've hired the compiler to write code for us. What's neat about `validate` here is that it doesn't have any mention of `Person'`; this function will work for *any* type defined as higher-kinded data. Spiffy.
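Assembling the snippets above into one self-contained module (nothing new here, just the pieces from this post glued together, minus the coproduct and `V1`/`U1` instances), we can take it for a spin:

```haskell
{-# LANGUAGE DeriveGeneric         #-}
{-# LANGUAGE FlexibleContexts      #-}
{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies          #-}
{-# LANGUAGE TypeOperators         #-}

import Data.Functor.Identity (Identity)
import GHC.Generics

type family HKD f a where
  HKD Identity a = a
  HKD f        a = f a

data Person' f = Person
  { pName :: HKD f String
  , pAge  :: HKD f Int
  } deriving (Generic)

class GValidate i o where
  gvalidate :: i p -> Maybe (o p)

instance GValidate (K1 a (Maybe k)) (K1 a k) where
  gvalidate (K1 k) = K1 <$> k

instance (GValidate i o, GValidate i' o')
    => GValidate (i :*: i') (o :*: o') where
  gvalidate (l :*: r) = (:*:) <$> gvalidate l <*> gvalidate r

instance GValidate i o
    => GValidate (M1 _a _b i) (M1 _a' _b' o) where
  gvalidate (M1 x) = M1 <$> gvalidate x

validate
  :: ( Generic (f Maybe)
     , Generic (f Identity)
     , GValidate (Rep (f Maybe)) (Rep (f Identity))
     )
  => f Maybe
  -> Maybe (f Identity)
validate = fmap to . gvalidate . from

main :: IO ()
main = do
  -- a fully filled-in form validates successfully
  print (pName <$> validate (Person (Just "alice") (Just 30) :: Person' Maybe))
  -- Just "alice"

  -- a missing field makes the whole thing fail
  print (pAge <$> validate (Person (Just "bob") Nothing :: Person' Maybe))
  -- Nothing
```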

That's all for today, folks. We've been introduced to the idea of higher-kinded data, seen how it's completely equivalent with a datatype defined in a more traditional fashion, and also caught a glimmer of what kind of things are possible with this approach. This is where we stop for today, but in the next post we'll look at how we can use the HKD approach to generate lenses without resorting to TemplateHaskell.

Happy higher-kinding!

Big shoutouts to Travis Athougies from whom I originally learned this technique, and to Ariel Weingarten and Fintan Halpenny for proofreading earlier versions of this post.
