Last week, I talked about how to obtain generalisations of the notion of monad using lax 2-functors. You can find a summary over at my blog.

# HoTT for lazy functional programmers (8/5/2015)

After having to endure some talks about Homotopy Type Theory, some members of our group wanted to know what this is all about. I tried to give an introduction which reflects my own views on HoTT and which may be relevant for functional programmers. I was able to assume that people had seen dependent types and that they had some idea about propositions as types. Nice.

When using propositions as types, but also when programming with dependent types in general, the need for an equality type arises. That is, given $a, b : A$ we have a type $a =_A b$, the type of proofs that $a$ is equal to $b$. The canonical way to prove an equality is reflexivity: $\mathsf{refl} : a =_A a$. This is called an *introduction rule*, while the previous rule is a *formation rule*. Now we know how to make elements of $a =_A b$, but how can we define functions out of $a =_A b$? This is explained by the elimination principle: to define a function $f : (a\ b : A) \to a =_A b \to P\ a\ b$, it is enough to explain what it is doing for refl, that is $f\ a\ a\ \mathsf{refl} \equiv m\ a$ where $m : (a : A) \to P\ a\ a$. As an example I can prove symmetry: $\mathsf{sym} : (a\ b : A) \to a =_A b \to b =_A a$ with $\mathsf{sym}\ a\ a\ \mathsf{refl} \equiv \mathsf{refl}$. I am using $\equiv$ here to denote *definitional equality*, the equality used by the type checker, which can be decided by normalisation. This is only the non-dependent elimination principle; there is also a dependent one, which tells us how to define dependent functions $f : (a\ b : A) \to (p : a =_A b) \to P\ a\ b\ p$. Graham wondered why I am presenting the elimination principle much more informally than the formation and introduction rules. Indeed, I could have just given the elimination constant instead: $J : (P : (a\ b : A) \to a =_A b \to \mathsf{Type}) \to ((a : A) \to P\ a\ a\ \mathsf{refl}) \to (a\ b : A) \to (p : a =_A b) \to P\ a\ b\ p$ with the computation rule $J\ P\ m\ a\ a\ \mathsf{refl} \equiv m\ a$. However, this looks rather scary without any additional explanation.

Ok, now there are some questions about equality. The first one is: what is the equality of functions? Can we prove *functional extensionality*, that is $((x : A) \to f\ x = g\ x) \to f = g$? The answer is **no** for vanilla Intensional Type Theory. We see this as follows: without any assumptions (i.e. in the empty context) a term of a given type has to be built with an introduction rule, because all occurrences of the elimination rule can be eliminated by normalisation.
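These rules can be written down directly in Agda; the following is a minimal sketch (the definitions of J and sym by pattern matching are my own rendering, not taken from the talk):

```agda
-- The identity type: one parameter a, indexed by the second argument.
data _≡_ {A : Set} (a : A) : A → Set where
  refl : a ≡ a

-- The eliminator J: to define a function out of a ≡ b,
-- it suffices to say what it does on refl.
J : {A : Set} (P : (a b : A) → a ≡ b → Set)
  → ((a : A) → P a a refl)
  → (a b : A) (p : a ≡ b) → P a b p
J P m a .a refl = m a

-- Symmetry, proved by pattern matching on refl:
sym : {A : Set} (a b : A) → a ≡ b → b ≡ a
sym a .a refl = refl
```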
That is, in particular, a closed proof of an equality has to be of the form $\mathsf{refl}$, and it can only prove an equality $a = b$ where $a$ and $b$ have the same normal form, hence $a \equiv b$. That is, in the empty context we can only prove equality of terms which normalise to the same value. Now consider $f, g : \mathbb{N} \to \mathbb{N}$ where $f\ n \equiv n + 0$ and $g\ n \equiv 0 + n$. We can prove $(n : \mathbb{N}) \to f\ n = g\ n$ using induction (that is, the elimination principle for $\mathbb{N}$). However, we cannot prove $f = g$, because to show this, since we have no assumptions, the proof would have to be refl, which requires that $n+0 \equiv 0+n$. But depending on whether we define addition by recursion over the first or the second argument, only one of them reduces to $n$, and hence they are not convertible.

At this point many people give up, and some even view this lack of extensionality as a feature. I don't agree with this, because extensionality is an essential feature if you want to build abstractions. Moreover, it is a bit strange that on the one hand we have no observation which enables us to distinguish extensionally equal functions, but on the other hand we also can't prove them equal. This is a fundamental incompleteness of Intensional Type Theory (ITT).

People working in ITT overcome this defect by using *setoids*. A setoid is a structure containing a type and an equivalence relation $\sim$ on it. We can define the function setoid, which simply uses extensional equality as the equivalence relation, and whenever we compare elements of a setoid we use the relation $\sim$ instead of equality. However, we may end up in setoid hell! We now have to lift all operations to setoids: e.g. apart from the ordinary List type there is an extension of lists to setoids, which also explains how the equality of the underlying type is lifted to the equality of lists over that type. For operations on types with more than one argument there is also a choice of which arguments to lift to setoids and which not. For families (dependent types) the overhead is even worse. How can we avoid setoid hell? Actually, wouldn't it be better if we lifted **everything** to setoids?
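The counterexample can be sketched in Agda, using the standard library's natural numbers and propositional equality (the names f, g and ext are mine):

```agda
open import Data.Nat
open import Relation.Binary.PropositionalEquality

f g : ℕ → ℕ
f n = n + 0   -- needs induction to reduce to n
g n = 0 + n   -- reduces to n definitionally

-- Pointwise equality is provable by induction on n:
ext : (n : ℕ) → f n ≡ g n
ext zero    = refl
ext (suc n) = cong suc (ext n)

-- But f ≡ g itself is not provable without an extensionality axiom.
```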
And indeed there is a model of type theory in setoids! This was the idea behind my 1999 LICS paper, which was further developed into *Observational Type Theory* with Conor McBride and Wouter Swierstra. However, there is one caveat: to make sure that we actually get a model we have to restrict the codomain of $\sim$ to types with at most one element. These are called **propositions**.

Ok, so far so good. Another question we can ask about equality is whether it has to be propositional, that is, whether we can prove *uniqueness of identity proofs (UIP)*: $(p\ q : a = b) \to p = q$. It was an open question whether this can be proven from J. Martin Hofmann and Thomas Streicher settled the question by showing that there is a model of Type Theory where UIP doesn't hold: the *groupoid* model. The groupoid model is a variation of the setoid model where the condition that the relation is propositional is replaced by demanding that the proofs of reflexivity, transitivity and symmetry satisfy the laws of a groupoid, that is, a category where every morphism has a chosen inverse. The setoid model clearly validates UIP.

For a while we didn't have an issue with UIP. But then came Vladimir and started to talk about Homotopy Type Theory. At first that seemed very exotic – what was Homotopy Theory again? But then I realized that in our quest for extensionality we had missed one question: **what is an extensional equality on types?** Vladimir gave the answer, which in simplified form is: isomorphic types are equal (here by an isomorphism of types I mean having functions in both directions which are inverse to each other). This is the **univalence principle** – and indeed we cannot observe any difference between isomorphic types. There is a snag: the univalence principle is inconsistent with UIP. Why? Consider **Bool**: there are two isomorphisms of **Bool** with itself, the identity and negation. But if we identify these two, which UIP forces us to do, we also have to identify true and false – which is bad news.
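The two self-isomorphisms of Bool can be sketched in Agda as follows (the Iso record is a hypothetical simplification of mine which omits the proofs that the two functions are mutually inverse):

```agda
open import Data.Bool using (Bool; true; false; not)
open import Function using (id)

-- A hypothetical, simplified notion of isomorphism
-- (the inverse laws are omitted for brevity).
record Iso (A B : Set) : Set where
  field
    to   : A → B
    from : B → A

idIso : Iso Bool Bool
idIso = record { to = id ; from = id }

notIso : Iso Bool Bool
notIso = record { to = not ; from = not }

-- Under univalence these yield two distinct proofs of Bool = Bool;
-- UIP would identify them, and hence identify true and false.
```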
Now our simple way to eliminate extensionality using the setoid model is out of the window. And the groupoid model has only limited success too, because – I forgot to mention – in the groupoid model the equality of equality proofs is forced to be propositional. Hence the groupoid model only models a simplified version of HoTT where equality of equality proofs is trivial – this is called the 1-truncated theory. To capture all of HoTT we need to move to weak ω-groupoids, and to write down exactly what they are turns out to be quite hard. One approach is to use Kan complexes in simplicial sets, but these have the disadvantage that they require classical logic, which makes them less useful for our purposes.

# Conor’s talk on linear dependent types (1/5/2015)

Conor was talking about a new approach to integrating linearity and dependent types. Attached are the smartboard images (ignore the first one :-).

# Mixed induction/coinduction in Agda (13/2/2015)

I gave an introduction to coinduction in Agda using the ∞ operator. Basically the idea is that ∞ A is the type of delayed computations of type A. Using this we can define the type of streams, where the tail is a delayed computation:

```agda
data _ω (A : Set) : Set where
  _∷_ : A → ∞ (A ω) → A ω
```
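For instance, the two destructors of the stream type can be defined as follows (a standard sketch: head needs no forcing, while tail uses the force operator ♭ on the delayed tail):

```agda
head : {A : Set} → A ω → A
head (a ∷ as) = a

tail : {A : Set} → A ω → A ω
tail (a ∷ as) = ♭ as
```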

To create stream programs we use two operations: **delay** (♯_ : A → ∞ A), which creates delayed computations, and **force** (♭ : ∞ A → A), which evaluates a previously delayed computation. Clearly we have ♭ (♯ x) ≡ x. We can use ♯ to produce the stream of numbers starting from a given number:

```agda
from : ℕ → ℕ ω
from n = n ∷ ♯ (from (suc n))
```

Another example, where we need to use both ♯ and ♭, is map for streams:

```agda
map : {A B : Set} → (A → B) → A ω → B ω
map f (a ∷ as) = (f a) ∷ ♯ (map f (♭ as))
```

Here we use ♯ to delay the recursive stream computation, and within this we use ♭ to force the computation of the input stream. We frequently see this sort of force/delay combination.

Using this simple extension we can write simple coinductive programs, and a trivial extension of Agda's termination checker handles this situation: apart from guards on the left using pattern matching, we can also have guards on the right using ♯. We can also define predicates coinductively: a standard example is bisimilarity of streams.

```agda
data _≈_ {A : Set} : A ω → A ω → Set where
  _∷_ : {a a' : A} → a ≡ a' →
        {as as' : ∞ (A ω)} → ∞ ((♭ as) ≈ (♭ as')) →
        (a ∷ as) ≈ (a' ∷ as')
```

Note that we use ∞ for the infinite recursive proof that the tails are equal. It is easy to show reflexivity:

```agda
refl≈ : {A : Set} → (as : A ω) → as ≈ as
refl≈ (a ∷ as) = refl ∷ ♯ (refl≈ (♭ as))
```
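Symmetry and transitivity go through in the same corecursive style; here is my own sketch, assuming sym and trans for propositional equality from the standard library:

```agda
sym≈ : {A : Set} {as bs : A ω} → as ≈ bs → bs ≈ as
sym≈ (p ∷ ps) = sym p ∷ ♯ (sym≈ (♭ ps))

trans≈ : {A : Set} {as bs cs : A ω} → as ≈ bs → bs ≈ cs → as ≈ cs
trans≈ (p ∷ ps) (q ∷ qs) = trans p q ∷ ♯ (trans≈ (♭ ps) (♭ qs))
```

In each case the corecursive call sits directly under ♯, so the productivity checker accepts it.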

A more interesting example is the following simple fact about from and map:

```agda
lem : (n : ℕ) → map suc (from n) ≈ from (suc n)
lem n = refl ∷ ♯ (lem (suc n))
```

It isn't always so easy, because we may have to generalise our theorem so that the corecursion goes through. However, this approach compares favourably with the usual definition of bisimilarity as the greatest bisimulation, which is defined as a relation that is stable under the destructors, here head and tail. Thinking extensionally, bisimilarity is the equality for streams; or, more precisely, ≈ is an equivalence in the sense of HoTT.

The nice thing about using ∞ is that it is easy to define types which use both induction and coinduction. A standard example is stream processors, a representation of functions on streams:

```agda
data SP (A B : Set) : Set where
  get : (A → SP A B) → SP A B
  put : B → ∞ (SP A B) → SP A B
```

We can only read a finite amount of data before outputting; the latter, however, can happen infinitely often. This is reflected by using ∞ in the type of the second constructor but not in the first. We can interpret a stream processor as a function on streams:

```agda
⟦_⟧ : {A B : Set} → SP A B → A ω → B ω
⟦ get f ⟧ (a ∷ as) = ⟦ f a ⟧ (♭ as)
⟦ put b sp ⟧ as    = b ∷ ♯ (⟦ ♭ sp ⟧ as)
```
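As a sanity check, here is the identity stream processor, which reads one element and immediately writes it back (my own example, not from the talk):

```agda
idSP : {A : Set} → SP A A
idSP = get (λ a → put a (♯ idSP))

-- ⟦ idSP ⟧ as should be bisimilar to as; the corecursive call
-- is guarded by ♯, so productivity checking succeeds.
```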

This function uses a mixture of structural recursion and guarded corecursion: the first line is structurally recursive and the second is guarded. While stream processors can also be understood as a nested coinductive/inductive type, the Agda approach seems more intuitive. An example are the composition operators, input driven:

```agda
_>>>l_ : {A B C : Set} → SP A B → SP B C → SP A C
(put b sp) >>>l (get f)     = (♭ sp) >>>l f b
(get f)    >>>l (get g)     = get (λ a → (f a) >>>l (get g))
(put b sp) >>>l (put c sp') = put c (♯ ((put b sp) >>>l ♭ sp'))
(get f)    >>>l (put c sp)  = get (λ a → (f a) >>>l (put c sp))
```

or output driven:

```agda
_>>>r_ : {A B C : Set} → SP A B → SP B C → SP A C
(put b sp) >>>r (get f)     = (♭ sp) >>>r f b
(get f)    >>>r (get g)     = get (λ a → (f a) >>>r (get g))
(put b sp) >>>r (put c sp') = put c (♯ ((put b sp) >>>r ♭ sp'))
(get f)    >>>r (put c sp)  = put c (♯ (get f >>>r ♭ sp))
```

The only difference is in the last line, when both output and input are possible. I leave it as an exercise to derive both operators using only categorical combinators (fold, unfold).

Using this very naive approach it is easy to represent types of the form $\nu X.\mu Y.F\ X\ Y$ (like SP above), but it turns out to be impossible to implement types of the form $\mu X.\nu Y.F\ X\ Y$. I think this problem could be solved by mixing size-change termination with parity games. See our short paper for a more detailed description of the problem.