After having to endure some talks about Homotopy Type Theory, some members of our group wanted to know what this is all about. I tried to give an introduction which reflects my own views on HoTT and which may be relevant for functional programmers. I was able to assume that people had seen dependent types and that they had some idea about propositions as types. Nice.

When using propositions as types, but also when programming with dependent types in general, the need for an equality type arises. That is, given $a, b : A$ we have a type $a =_A b$, which is the type of proofs that a is equal to b. The canonical way to prove an equality is reflexivity: $\mathsf{refl} : a =_A a$. This is called an introduction rule, while the previous rule is a formation rule. Now we know how to make elements of $a =_A b$, but how can we define functions out of $a =_A b$? This is explained by the elimination principle. To define a function $f : (a =_A b) \to C\,a\,b$, it is enough to explain what it is doing for refl, that is to give $c : C\,a\,a$, where $f\,\mathsf{refl} \equiv c$. As an example I can prove symmetry: $\mathsf{sym} : (a =_A b) \to (b =_A a)$ with $\mathsf{sym}\,\mathsf{refl} \equiv \mathsf{refl}$. I am using $\equiv$ here to denote definitional equality; this is the equality used by the type checker, which can be determined by normalisation. This is only the non-dependent elimination principle; there is also a dependent one which tells us how to define dependent functions $f : (p : a =_A b) \to C\,a\,b\,p$. Graham wondered why I am presenting the elimination principle much more informally than the formation and introduction rules. Indeed, I could have just given the elimination constant instead: $J : (C : (a\,b : A) \to (a =_A b) \to \mathsf{Type}) \to ((a : A) \to C\,a\,a\,\mathsf{refl}) \to (p : a =_A b) \to C\,a\,b\,p$, with the computation rule $J\,C\,c\,\mathsf{refl} \equiv c\,a$. However, this looks rather scary without any additional explanation.

Ok, now there were some questions about equality. The first one is: what is the equality of functions? Can we prove functional extensionality, that is $(\forall x.\ f\,x = g\,x) \to f = g$? The answer is no for vanilla Intensional Type Theory. We see this as follows: without any assumptions (i.e. in the empty context) a term of a given type has to be built with an introduction rule, because all occurrences of the elimination rule can be eliminated by normalisation.
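The rules above can be sketched in Lean 4 (my own rendering; the names `MyEq`, `J` and `sym` are mine, and `MyEq` just duplicates Lean's built-in equality to make the rules explicit):

```lean
-- Formation and introduction: for a, b : A there is a type MyEq a b,
-- and the only canonical proof is refl.
inductive MyEq {A : Type} : A → A → Type where
  | refl : (a : A) → MyEq a a

-- The (dependent) eliminator J: to define a function out of MyEq
-- it suffices to say what it does on refl.
def J {A : Type} (C : (a b : A) → MyEq a b → Type)
    (c : (a : A) → C a a (.refl a)) :
    (a b : A) → (p : MyEq a b) → C a b p
  | _, _, .refl a => c a

-- Symmetry defined via J; by the computation rule, J C c a a (refl a)
-- reduces to c a definitionally.
def sym {A : Type} (a b : A) (p : MyEq a b) : MyEq b a :=
  J (fun a b _ => MyEq b a) (fun a => .refl a) a b p
```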
That is, in particular, a closed proof of an equality has to be of the form $\mathsf{refl}$, and it can only prove an equality of the form $a =_A b$ where $a$ and $b$ normalise to the same value and hence $a \equiv b$. So in the empty context we can only prove equality of terms which normalise to the same value. Now consider $f, g : \mathbb{N} \to \mathbb{N}$ where $f\,n = n + 0$ and $g\,n = 0 + n$. We can prove $\forall n.\ f\,n = g\,n$ using induction (that is, the elimination principle for $\mathbb{N}$). However, we cannot prove $f = g$, because to show this, since we have no assumptions, we would need that $f \equiv g$, which requires that $n+0 \equiv 0+n$. But depending on whether we define addition by recursion over the first or the second argument, only one of them reduces to $n$, and hence they are not convertible.

At this point many people give up, and some even view this lack of extensionality as a feature. I don't agree with this, because extensionality is an essential feature if you want to build abstractions. Moreover, it is a bit strange that on the one hand we have no observation which enables us to distinguish extensionally equal functions, but on the other hand we also can't prove them equal. This is a fundamental incompleteness of Intensional Type Theory (ITT).

People working in ITT overcome this defect by using setoids. A setoid is a structure containing a type and an equivalence relation. We can define the function setoid, which simply uses extensional equality as the equivalence relation. And whenever we compare elements of a setoid we use the special relation $\sim$ instead of equality. However, we may end up in setoid hell! We now have to lift all operations to setoids; e.g. apart from the ordinary List type there is an extension of lists to setoids, which also explains how the equality of the underlying type is lifted to the equality of lists over that type. For operations on types with more than one argument there is also a choice of which arguments to lift to setoids and which not. For families (dependent types) the overhead is even worse. How can we avoid setoid hell? Actually, wouldn't it be better if we lifted everything to setoids?
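To make the counterexample concrete, here is a sketch in Lean 4 (the names `f`, `g` and `ext` are mine). In Lean, addition recurses on the second argument, so `n + 0` reduces to `n` while `0 + n` does not:

```lean
def f (n : Nat) : Nat := n + 0   -- reduces to n by the definition of +
def g (n : Nat) : Nat := 0 + n   -- stuck: does not reduce to n

-- Pointwise equality is provable by induction on n ...
theorem ext : ∀ n, f n = g n := by
  intro n
  induction n with
  | zero => rfl
  | succ k ih => exact congrArg Nat.succ ih

-- ... but `theorem eq : f = g := rfl` is rejected, since f and g are
-- not convertible. (Lean itself happens to prove functional
-- extensionality from its quotient types, so `funext ext : f = g`
-- does go through there, unlike in vanilla ITT.)
```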
And indeed there is a model of Type Theory in setoids! This was the idea behind my 1999 LICS paper, which was further developed into Observational Type Theory with Conor McBride and Wouter Swierstra. However, there is one caveat: to make sure that we actually get a model, we have to restrict the codomain of the equality type to types which have at most one element. These are called propositions.

Ok, so far so good. Another question we can ask about equality is whether it has to be propositional, that is whether we can prove uniqueness of identity proofs (UIP): $(p\,q : a =_A b) \to p = q$. It was an open question whether this can be proven from $J$. Martin Hofmann and Thomas Streicher settled the question by showing that there is a model of Type Theory where UIP doesn't hold: the groupoid model. The groupoid model is a variation of the setoid model where the condition that equality is propositional is replaced by demanding that the proofs of reflexivity, transitivity and symmetry satisfy the laws of a groupoid, that is a category where every morphism has a chosen inverse. The setoid model clearly validates UIP.

For a while we didn't have an issue with UIP. But then came Vladimir and started to talk about Homotopy Type Theory. At first that seemed very exotic – what was Homotopy Theory again? But then I realised that in our quest for extensionality we had missed one question: what is an extensional equality on types? Vladimir gave the answer, which in simplified form is: isomorphic types are equal (here by an isomorphism of types I mean having functions in both directions which are inverse to each other). This is the univalence principle. And indeed, we cannot observe any difference between isomorphic types. There is a snag: the univalence principle is inconsistent with UIP. Why? Consider Bool: there are two isomorphisms of Bool with itself, the identity and negation. But if we identify these two, which UIP forces us to do, we also have to identify true and false – which is bad news.
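The conflict can be sketched in Lean 4 (`Iso`, `idIso` and `negIso` are my names; this only exhibits the statement of UIP and the two isomorphisms, it does not state univalence itself):

```lean
-- UIP says any two proofs of the same equality are equal.
def UIP : Prop := ∀ {A : Type} {a b : A} (p q : a = b), p = q

-- An isomorphism: maps in both directions that are mutually inverse.
structure Iso (A B : Type) where
  to     : A → B
  inv    : B → A
  to_inv : ∀ b, to (inv b) = b
  inv_to : ∀ a, inv (to a) = a

-- Two distinct isomorphisms of Bool with itself:
def idIso : Iso Bool Bool :=
  ⟨id, id, fun _ => rfl, fun _ => rfl⟩

def negIso : Iso Bool Bool :=
  ⟨Bool.not, Bool.not,
   fun b => by cases b <;> rfl,
   fun b => by cases b <;> rfl⟩

-- Univalence would turn both into proofs of Bool = Bool; UIP would then
-- identify these proofs, and hence identify true and false.
```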
Now our simple way to eliminate extensionality using the setoid model is out of the window. And the groupoid model has only limited success too because, as I forgot to mention, in the groupoid model we need to force equality of equality proofs to be propositional. Hence the groupoid model only models a simplified version of HoTT where equality of equality proofs is trivial – this is called the 1-truncated theory. To capture all of HoTT we need to move to weak ω-groupoids. To write down exactly what they are turns out to be quite hard. One approach is to use Kan complexes in simplicial sets, but they have the disadvantage that they require classical logic, which makes them less useful for our purposes.
Conor was talking about a new approach to integrating linearity and dependent types. Attached are the smartboard images (ignore the first one :-).