From a14dbe47126a3cacdafe127f435690ffd690c198 Mon Sep 17 00:00:00 2001
From: owo-bot

We have designed a binary operator system in Aya which happens to be (we didn't copy!) very similar to Rhombus (a.k.a. Racket 2) and Swift 5.7. TL;DR: it supports making any identifier a custom operator, with precedences specified by a partial ordering. Left and right associativity are supported.

The precedence and associativity information is bound to a name, not a definition. This means we can import a name from another module with changes to its name, associativity, and precedence. Importing with renaming is an established feature, but changing associativity and precedence is not that popular (though it is already implemented in Agda). Here are some code examples (implementations are omitted for simplicity). With imports, it looks like this:

Specifying operator precedences with a partial ordering is much better than with a number. In Haskell, if we already have

In the future, we plan to support mixfix operators as in Agda (the current framework can support mixfix easily, but abusing mixfix notations can harm readability).

Aya is now moving away from univalent type theory. Note that this does not mean we are moving away from cubical type theory -- we are trying to adapt an extensional version of cubical type theory, called XTT, which is a cubical approach towards observational equality (the idea is due to Altenkirch and McBride): the equality type

In the case of cubical, this is automatic, due to how path types are defined. The reference for XTT is the paper A Cubical Language for Bishop Sets by Sterling, Angiuli, and Gratzer. This paper has a previous version with a universe hierarchy, Cubical Syntax for Reflection-Free Extensional Equality, by the same authors.

We plan to use XTT as the basis for Aya's type theory. We will change the following in v0.30 Aya: Yes, the last two items indicate a major change in the implementation of Aya, which is essentially a rewrite of the type checker.
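To illustrate the shape of equality defined by recursion on a closed universe, here is a small Haskell sketch. Everything here (the `Code` universe, `eqAt`, `sample`) is made up for illustration and is not Aya's or XTT's actual definition; in particular, Haskell cannot quantify over all function arguments, so the function-type case only samples finitely many inputs.

```haskell
-- A tiny closed universe of type codes (hypothetical, for illustration).
data Code = CNat | CFun Code Code

data Val = VNat Integer | VFun (Val -> Val)

-- Observational equality, defined by recursion on the type code:
-- at CNat it is numeric equality, at CFun it is pointwise equality,
-- which is what makes function extensionality hold by definition.
-- We can only sample finitely many arguments here, so the function
-- case is an approximation of "for all x".
eqAt :: Code -> Val -> Val -> Bool
eqAt CNat (VNat a) (VNat b) = a == b
eqAt (CFun dom cod) (VFun f) (VFun g) =
  all (\x -> eqAt cod (f x) (g x)) (sample dom)
eqAt _ _ _ = False

-- Finitely many sample inhabitants of each code, for the demo only.
sample :: Code -> [Val]
sample CNat = map VNat [0 .. 5]
sample (CFun _ c) = [VFun (const v) | v <- sample c]
```

Note how the definition never compares two functions directly; it only ever compares their results at a smaller code, mirroring the type-family-by-recursion definition.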
We took this chance to revisit a couple of old issues and fix them. Currently, we have succeeded in extracting a Java module for the syntax definition from the type checker module, which will benefit third-party libraries that want to deal with serialized Aya terms. We will not adapt the following features from XTT:

The development is still in a private work-in-progress repository, which we will open-source and port to the main repo once we can compile this website with the new type checker, which implies complete support for inductive types except for the positivity checker. We will also have to rewrite some guides about higher inductive types, and use some quotient type examples instead. After that, we will start considering support for classes with extensions, and try to formalize some mathematics and do some real-world programming with Aya, partially bootstrapping the type checker. Stay tuned!

We want a class system with the following basic capabilities: To add more flexibility, we want the following feature. Suppose we have a class

Suppose the syntax for creating an instance of a class is

This is called anonymous class extension, already implemented in the Arend language. As syntactic sugar, we may write

We further want definitional projection: This concludes the basic features of the class system. To implement this, it may seem that we need access to types in the normalizer, which makes it very heavy (in contrast to the lightweight normalizer you can have for plain MLTT). A uniform implementation of this definitional projection requires definitional equality to commute with substitution; say, we may have

We want the above to be equal to

Here's a trick: whenever we see

By that, we will never have

This should implement the definitional projection feature without even modifying the MLTT normalizer. The idea of this feature comes from the treatment of extension types inspired by cooltt; see the relevant post.

Binary operators in Aya
-- Left-associative
def infixl + (x y : Nat) : Nat => {??}
-- Left-associative, binds tighter than +
def infixl * (x y : Nat) : Nat => {??} tighter +
-- Prefix operator
def fixl ! (x : Nat) : Nat => {??}
-- Postfix operator
def fixr ? (x : Nat) : Nat => {??}
The tighter keyword works like this: when there is an expression like a * b + c, which may mean either (a * b) + c or a * (b + c), we will put the tighter operator in the parentheses. If we find that the two operators share the same priority, Aya will report an error.

open import Primitives using (
  invol as fixl ~ tighter =, \/, /\,
  intervalMin as infix /\ tighter \/,
  intervalMax as infix \/,
)
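As a rough model of how a tighter-based partial order might be consulted (the representation and names below are my own, not Aya's), disambiguating a * b + c amounts to a reachability query in the DAG of tighter declarations:

```haskell
type Op = String

-- Edges (x, y): operator x binds tighter than y. A plain list is a
-- toy stand-in for whatever data structure the real checker uses.
type Tighter = [(Op, Op)]

-- Transitive reachability in the "tighter" relation.
-- (Assumes the declarations form a DAG; cycles would be rejected.)
tighterThan :: Tighter -> Op -> Op -> Bool
tighterThan edges x y =
  (x, y) `elem` edges ||
  any (\(a, b) -> a == x && tighterThan edges b y) edges

-- For `a <x> b <y> c`, return the operator to parenthesize first,
-- or Nothing when the two are unrelated (an ambiguity error).
groupFirst :: Tighter -> Op -> Op -> Maybe Op
groupFirst es x y
  | tighterThan es x y = Just x
  | tighterThan es y x = Just y
  | otherwise          = Nothing
```

Note that adding a new operator only requires stating its relation to the operators it actually interacts with, not picking a number relative to everything else.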
infix 3 + and infix 4 *, and we hope to add a new operator with precedence higher than + but lower than *, it's going to be impossible. Agda introduced floating-point precedence levels to address the issue, but I think it does not solve the essential problem: I have to look up the numbers (of existing operator precedences) every time I write a new operator.

Moving away from univalent type theory
a =_A b is no longer defined uniformly for all types A, but is instead defined by assuming a closed (inductive-recursive) universe and defining a type family (A : Type) -> A -> A -> Type by casing on what A is. For function types, we can define it as pointwise equality, which makes function extensionality true by definition.

The Partial type to represent partial elements.

The Prop universe, while the XTT paper said they intend to add it. We encourage users to embrace the axiom of propositional resizing, which makes not just Props impredicative, but all h-props (i.e., types that are provably props) as well.

Class extension with definitional projection
extends in Java).

Anonymous extensions
Precat for precategories (written in pseudocode):

class Precat
| Ob : Type
| Hom : Ob -> Ob -> Type
| Hom-set (A B : Ob) : isSet (Hom A B)
| id (A : Ob) : Hom A A
| ...
new Precat { Ob := .., Hom := .., ... }. I want the following:

Precat is the type of all instances of the class Precat.

Precat { Ob := Group } is the type of all instances of the class Precat whose Ob field is (definitionally) Group.

Precat { Ob := Group, Hom := GroupHom } is the type of all instances of the class Precat whose Ob field is Group and whose Hom field is GroupHom.

Precat { Ob := Group } as Precat Group, where the application is ordered the same as the fields in the class definition.

Definitional projection
If A : Precat Group, then A.Ob is definitionally equal to Group.

If A : Precat Group GroupHom, then A.Hom is definitionally equal to GroupHom.

Implementation
Group as well. Without access to contexts, it seems really hard!

A : Precat Group, we elaborate it into (the idea is similar to an η-expansion):

A ==> new Precat
  { Ob := Group
  , Hom := A.Hom
  , Hom-set := A.Hom-set
  , id := A.id
  , ...
  }
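The trick can be sketched with a toy term language in Haskell (my own simplification; Aya's core terms are different). Fields pinned by the type are inlined, other fields become projections of the variable, and projection from the resulting literal is plain lookup, so no types are needed at normalization time:

```haskell
import Data.Maybe (fromMaybe)

-- Toy core terms: variables, projections, and class instance literals.
data Term = Var String | Proj Term String | New [(String, Term)]
  deriving (Eq, Show)

-- Elaborate a variable x of a class type into an instance literal:
-- fields the type pins definitionally (Just t) are inlined, the
-- others are projected from x, like an η-expansion.
etaExpand :: String -> [(String, Maybe Term)] -> Term
etaExpand x fields =
  New [ (f, fromMaybe (Proj (Var x) f) mt) | (f, mt) <- fields ]

-- Projection from an instance literal is then mere lookup, so a
-- pinned field reduces to its value without consulting any types.
proj :: Term -> String -> Term
proj (New fs) f = fromMaybe (Proj (New fs) f) (lookup f fs)
proj t f = Proj t f
```

With this, a variable A : Precat Group never appears bare in the core: projecting Ob out of its expansion immediately yields Group.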
A.Ob in the source language, because it always gets elaborated into Group directly. In case we partially know about A from the type, we elaborate that type information right into the core term. So we never even get a chance to touch a bare A (one not being projected) in the core language, and anything of a class type is always in an introduction form.

(xs ++ (ys ++ zs))
and ((xs ++ ys) ++ zs), the left-hand side has type Vec (xs.size ++ (ys.size ++ zs.size)) A, and the right-hand side has type Vec ((xs.size ++ ys.size) ++ zs.size) A.

So the equality type is heterogeneous, and I introduce a type Vec (+-assoc i) A for it, where +-assoc is the associativity proof.

So this should type check, right? But pattern unification fails! I've left the two sides of +-assoc implicit, so I'm supposed to infer which numbers' associativity I care about, using pattern unification.
Then pattern unification fails, because the constraints are generated from cubical boundaries, where the "interval" variable is substituted by its endpoints. So we have this type (the Path here is called PathP in Agda):
Γ ⊢ Path (fn i => Vec (+-assoc i) Nat) vecA vecB
Note that the type of +-assoc is Fn (_0 _1 _2 : Nat) → ((_0 + _1) + _2) = (_0 + (_1 + _2)).
So elaboration inserts metavariables:
Γ ⊢ Path (fn i => Vec (+-assoc {?} {?} {?} i) Nat) vecA vecB
Where these metavariables have the following scope:
Γ , i : I ⊢ ? : Nat
Note that the i : I binding is in scope. So the metavariables, together with their spines, become:
Γ ⊢ Path (fn i => Vec (+-assoc {?a Γ i} {?b Γ i} {?c Γ i} i) Nat) vecA vecB
Then, we get the following tycking problems, according to the rules of PathP:
vecA : Vec (+-assoc {?a Γ 0} {?b Γ 0} {?c Γ 0} 0) Nat
vecB : Vec (+-assoc {?a Γ 1} {?b Γ 1} {?c Γ 1} 1) Nat
Look at the spines of all of these metavariables: none of them is in the pattern fragment. So no equality constraint can be solved as a pattern, because each one is an equality after a substitution!

This can be solved by further extending the algorithm with pruning, or by a constraint system with a "lax" mode of solving metas when the equations rely essentially on non-pattern equations, but I feel this defeats the point of finding the most general solution, which I used to believe to be the purpose of pattern unification...
Right now Aya will try to prune these non-pattern arguments out and try to solve them. This obviously generates non-unique solutions, but I think it will be useful in practice.
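For reference, the pattern-fragment condition the post appeals to can be sketched as follows (a toy check of my own; real implementations also track scoping): a meta's spine qualifies only if it is a list of distinct variables, which is exactly what breaks once the interval variable is substituted by an endpoint.

```haskell
import Data.List (nub)

-- Toy spine entries: either a bound variable or a constant.
data Arg = AVar String | AConst Integer
  deriving (Eq, Show)

-- A metavariable application ?m args is in the pattern fragment iff
-- the spine is a list of distinct variables. After substituting the
-- interval variable i by an endpoint, the spine contains a constant,
-- so the constraint falls outside the fragment.
isPatternSpine :: [Arg] -> Bool
isPatternSpine args = all isVar args && vars == nub vars
  where
    isVar (AVar _) = True
    isVar _        = False
    vars = [v | AVar v <- args]
```

In the example above, the spine ?a Γ i is a pattern, but the boundary constraints only ever see ?a Γ 0 and ?a Γ 1, which are not.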
In Agda, the following code is in the library:
++-assoc : ∀ {m n k} (xs : Vec A m) (ys : Vec A n) (zs : Vec A k) →
  PathP (λ i → Vec A (+-assoc m n k (~ i)))
        ((xs ++ ys) ++ zs) (xs ++ ys ++ zs)
++-assoc {m = zero} [] ys zs = refl
++-assoc {m = suc m} (x ∷ xs) ys zs i = x ∷ ++-assoc xs ys zs i
However, if we replace the m with _, Agda will fail with the following error:
Failed to solve the following constraints:
  _41 (xs = (x ∷ xs)) (ys = ys) (zs = zs) = x ∷ ++-assoc xs ys zs i1
    : Vec A
      (+-assoc (_m_39 (xs = (x ∷ xs)) (ys = ys) (zs = zs) (i = i1)) n k
       (~ i1))
    (blocked on any(_41, _57))
  _40 (xs = (x ∷ xs)) (ys = ys) (zs = zs) = x ∷ ++-assoc xs ys zs i0
    : Vec A
      (+-assoc (_m_39 (xs = (x ∷ xs)) (ys = ys) (zs = zs) (i = i0)) n k
       (~ i0))
    (blocked on any(_40, _57))
  +-assoc (_m_39 (xs = xs) (ys = ys) (zs = zs) (i = i)) n k (~ i)
    = _n_49
    : ℕ
    (blocked on _n_49)
  +-assoc (_m_39 (xs = (x ∷ xs)) (ys = ys) (zs = zs) (i = i)) n k
    (~ i)
    = ℕ.suc _n_49
    : ℕ
    (blocked on _m_39)
  _40 (xs = []) (ys = ys) (zs = zs)
    = _41 (xs = []) (ys = ys) (zs = zs)
    : _x.A_43
    (blocked on any(_40, _41))
  _x.A_43
    = Vec A
      (+-assoc (_m_39 (xs = []) (ys = ys) (zs = zs) (i = i)) n k (~ i))
    : Type
    (blocked on _x.A_43)
  _m_39 (i = i0) = m : ℕ (blocked on _m_39)
  _m_39 (i = i1) + (n + k) = m + (n + k) : ℕ (blocked on _m_39)
In Aya, this will raise the following warning:
 6 │ def ++-assoc-type (xs : Vec n A) (ys : Vec m A) (zs : Vec o A)
 7 │   => Path (fn i => Vec (+-assoc i) A)
 8 │        (xs ++ (ys ++ zs))
   │        ╰──────────────╯ ?a n A m o xs ys zs 0 >= n, ?b n A m o xs ys zs 0 >= m,
                             ?c n A m o xs ys zs 0 >= o
 9 │        ((xs ++ ys) ++ zs)
   │        ╰──────────────╯ ?a n A m o xs ys zs 1 >= n, ?b n A m o xs ys zs 1 >= m,
                             ?c n A m o xs ys zs 1 >= o

Info: Solving equation(s) with not very general solution(s)
The inline equations are the type checking problems that Aya solved with not-so-general solutions.

Conor McBride told me pattern unification is a good algorithm, but the problem of interest might not be what we think it is. It is good for undergraduate induction, i.e. when the object being inducted on is a variable and the motive of the induction is a pattern. This is an enlightening perspective! But now that we have more problems, I think we might want to extend it. Just think about how many people use --lossy-unification in Agda.
Throughout this blog post, I will use the term Prop to mean the type of propositions, which does not have to be strict, but has the property that it cannot eliminate into Type.
A long time ago I wrote a PASE question regarding definitional irrelevance. An important advantage of Prop, in my opinion, is that it is more convenient to make impredicative. Mathematicians want impredicativity for various reasons, which I will not explain in detail here.
Now I want to point out several reasons to avoid Prop and impredicativity based on Prop. Note that I'm not asking you to get rid of impredicativity in general!
If you're a type theorist and you surf the web, you probably have seen Jon Sterling complaining about inductive Props from a semantic point of view (I think there is something in the second XTT paper, but I've heard more of it from various discord servers).
Prop
There is another related PASE question regarding termination. You don't have to read it, I'll paraphrase the example.
It makes sense to define the following type:
data BrouwerTree
  = Leaf Bool
  | Branch (Nat -> BrouwerTree)
and have the following structural induction:
left :: BrouwerTree -> Bool
left (Leaf b) = b
left (Branch xs) = left (xs 0)
Note that in the clause for left (Branch xs), the recursive call left (xs 0) is considered smaller. This assumption is called the 'predicative assumption', and it cannot be made for Prop because Prop is impredicative. The famous W-type also relies on this assumption.
A counterexample with Prop looks like this (since we need to talk about Props, we switch from Haskell to Agda syntax):
data Bad : Prop where
  b : ((P : Prop) → P → P) → Bad

bad : Bad
bad = b (λ P p → p)

no-bad : Bad → ⊥
no-bad (b x) = no-bad (x Bad bad)

very-bad : ⊥
very-bad = no-bad bad
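A Haskell analogue of this counterexample (using rank-2 polymorphism to play the role of the impredicative Prop; the names are mine) makes the broken descent observable. Haskell has no termination checker, so instead of proving false the recursion just never reaches a base case; adding a fuel parameter lets us watch that happen:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The impredicative "Bad": its constructor stores a polymorphic
-- function that can be instantiated at Bad itself.
newtype Bad = B (forall p. p -> p)

bad :: Bad
bad = B id

-- The "smaller" recursive argument x bad is not smaller at all:
-- with fuel we can watch the descent never reach a base case.
noBadFuel :: Int -> Bad -> Maybe ()
noBadFuel 0 _       = Nothing            -- out of fuel: no base case found
noBadFuel n (B x) = noBadFuel (n - 1) (x bad)
```

No matter how much fuel we supply, the recursion never terminates on its own, which is exactly what the predicative assumption would wrongly certify as structural descent.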
Notice that the no-bad (b x) clause uses the recursive call no-bad (x Bad bad), which is only valid under the predicative assumption. So the predicative assumption actually proves false for Prop, and for Prop we need to patch the termination checker to ban this rule. It just doesn't feel right!
Coq and Lean have a similar problem, but they generate eliminators for inductive definitions, so they can generate the correct eliminator for Prop instead of patching the termination checker. In Coq, the eliminator for Bad is
Bad_ind : forall P : Prop,
  ((forall p : Prop, p -> p) -> P) ->
  Bad -> P
Note that there are no recursive arguments, so no recursion is allowed, and there is no need to patch the termination checker in Coq. If you're using a termination checker, you want to get rid of the impredicativity of Prop! This eliminates the need for a universe-based irrelevance.
We may use axioms to get impredicativity. Suppose we define Prop := Σ (A : Type) (isProp A) (since we no longer have it in the language); then there are two different axioms that imply impredicativity of Prop:

Every A : Prop is either ⊤ or ⊥, which further implies that Prop ≅ Bool, which implies resizing.

If we think the right way of doing math is to work with classical axioms, why on earth are we forging a theorem into the language?
Index unification and forced patterns in Aya

Aya implements a version of the index unification algorithm that allows omission of obvious patterns. Here's an example. Consider the famous "sized-vector" Vec (n : Nat) (A : Type)
definition, and we can perform some pattern matching:
len : ∀ {A} -> (n : Nat) -> Vec n A -> Nat
len a vnil = 0
len a (vcons _ x) = suc (len _ x)
This code may seem obviously correct, but why would I write about it if it's so simple? 😉 Let's run the type checking in our head, clause by clause and pattern by pattern.
The first pattern, a, is a valid pattern for Nat. This means we will substitute the codomain of the pattern matching with [a/n], where n is the corresponding name in the telescope and a is the term corresponding to the pattern.

The second pattern, vnil, is a pattern for Vec zero A. However, the expected type is Vec a A, which does not match the type of the pattern.

So here is the problem! The well-typed version of the program is actually:
len : ∀ {A} -> (n : Nat) -> Vec n A -> Nat
len zero vnil = 0
len (suc a) (vcons _ x) = suc (len a x)
However, isn't it obvious that the first pattern in the first clause must be zero? It would be nice if the type checker could figure this out by itself. In fact, both Agda and Idris can do this! In Agda, the feature is called "dotted patterns" in the documentation and "inaccessible patterns" in the paper. I prefer calling them "forced patterns", because the patterns are actually accessible (in the sense that the bindings in the patterns are used) and do not use the Agda dot syntax.
Forced patterns are not easy to implement. The simplest pattern type checking algorithm can be quite straightforward: we check the type of the pattern, add the bindings to the context so we can type the rest of the telescope, and check the body of the clause. With forced patterns, we will need to change the existing well-typed variable patterns into constructor patterns, so the algorithm becomes stateful.
In Aya, I introduced the concept of "meta patteriables", which is a playful reference to the "meta variables" used in unification during conversion checking.
Related PR: #198
When we see a variable pattern, we transform it into a MetaPat, which is a "unification variable" pattern that can be "solved" into another pattern. A reference to a MetaPat is converted into a special meta variable that holds a mutable reference to the MetaPat (this can be replaced by a mutable map in the type checking state if you need purity, but I prefer mutable references for implementation simplicity).
When we are type checking a pattern of type D a, for D an indexed inductive family, and the expected type is D b where b is the special meta variable, we claim that b is solved to a, and the MetaPat that corresponds to b will be transformed into a when we finalize the type checking results.
There are two more cases to deal with:

If a MetaPat is not "solved", we just let it be a variable pattern.

If a MetaPat is "solved" more than once, we must make sure the solutions are identical.

Note that a MetaPat may contain bindings, but these bindings are already from the current context, so we do not need to add them again to the context.
Now, let's run the new algorithm:
len : ∀ {A} -> (n : Nat) -> Vec n A -> Nat
len a vnil = 0
len a (vcons _ x) = suc (len _ x)
The first pattern, a, is a valid pattern for Nat, so we generate a MetaPat(a) and substitute the codomain with MetaPatRef(a), e.g. Vec MetaPatRef(a) A -> Nat.

The second pattern, vnil, is a pattern for Vec zero A. The expected type is Vec MetaPatRef(a) A, and we solve MetaPat(a) to zero.

Since a is solved to zero, we generate the well-typed clause len zero vnil = 0, which is exactly what we need.

Thanks for reading!
Aya blogs

See the sidebar 👈 for the list of blog posts.

Note that some posts are written before some breaking syntax changes. The code examples may not work with the latest version of Aya.

Haskell or Agda style extensions

In Haskell, you can do {-# LANGUAGE TypeFamilies #-}
, and similarly in Agda you can {-# OPTIONS --two-levels #-}. These "pragmas" can also be specified via command line arguments. Since Haskell is too weak and even basic features need extensions, I'll avoid talking about it and stick to Agda.
One special pragma is to ensure that no known inconsistent flag or combination of flags is turned on -- --safe
. Let's discuss it later.
The current status of Agda libraries, that having separate cubical, HoTT library, and standard library, implementing the basic features individually, is a significant evidence that Agda is barely a programming language, but a collection of programming languages that share a lot in common and have good interoperability. Each flag that enables a certain language feature makes Agda a different language, and it is difficult in general to make two different language source-to-source compatible (see Kotlin-Java, Scala-Java, etc).
It is good to keep your language evolving like Agda (adding new features aggressively), and indeed Agda is the proof assistant with the richest set of language features I've known so far. However, this also negatively impacts Agda's reputation to some extent, that people say it's an experiment in type theory. Well, maybe it's not a negative impact, but it prevents big customers (such as Mathematicians looking for a tool to formalize math) from choosing the language. At least, we don't want this to happen to our language.
So, we will not introduce any "feature" flags, and will have only one base library. Aya will be one language, its features are its features. If we decide on removing a feature, then we remove it from the language (not going to keep it as an optional flag). If we decide on adding a feature, we add it and it should be available without any options.
It should still be encouraged to add some fancy, experimental features, but I think they should stay in branches or forks and will be either enlisted to the language or abandoned eventually.
However, the "parameters" part is not as bad. For example, it is very easy to allow type-in-type in the type checker -- we just disable the level solver. This is useful when the level solver prevents us from experimenting something classical using our language features but unfortunately the level solver is just unhappy with something minor. We can also like tweak the conversion checking algorithm we use, like we can use a simpler one that only solves first-order equations or we can enable the full-blown pattern unification algorithm. Verbosity levels, can also be seen as such parameter, and it's extremely useful for debugging the compiler. So we can apply that.
To be honest, it's hard to decide on a semantic of the word "safe", and relate that to the Agda pragma --safe
. To me, it means "logical consistency", and if we can set --safe
as the last argument of an Agda file, it should be guaranteed by Agda that it cannot provide you a proof of false. There are many related discussion in the Agda issue tracker that talks 'bout how should --safe
behave. Sometimes it fits my guess (for logical consistency), sometimes it implies more stuffs.
Anyway, a "logical consistency" flag seems useful, and will probably appear in Aya.
For disabling or enabling some checks, if we disable a check that is required to be consistent, then it should break --safe
. I think we will of course enable all of these checks by default, so exploiting the disabledness of a check can lead to inconsistency eventually. So, we can use an "unsafe" flag to ensure that our language is only unsafe when we want it to be. It is quite meaningful as well to have an "unsafe" mode, from a real-world programming perspective.
We'll have one language, with some flags that tweak the parameters of some algorithms (which are harmless), some flags for disabling some checks (which will lead to an error at the end of tycking), and an unsafe flag that enables a set of features such as sorry and suppresses the errors from disabled checks.

Speaking of the base library design, I have some vague ideas in mind. I'd like it to be split into three parts (not sure if we're going to make them three modules inside one stdlib or three standalone libraries):
Then, we can use these libraries on-demand.
Elaboration of the "extension" type

Aya uses the so-called "extension" type (probably first appeared here) as a generalized version of the path type.
Instead of using the conventional path type, as in Cubical Agda:
PathP (λ i → A i) a b for a : A 0 and b : A 1

λ i → a : PathP (λ i → A i) (a 0) (a 1) for a : A i

p i : A i for p : PathP (λ i → A i) a b

p 0 = a and p 1 = b
This type looks good, but it does not scale to higher dimensions. Consider, for example, the type of a square with four faces specified (from Agda's cubical library):
Square :
  {a₀₀ a₀₁ : A} (a₀₋ : a₀₀ ≡ a₀₁)
  {a₁₀ a₁₁ : A} (a₁₋ : a₁₀ ≡ a₁₁)
  (a₋₀ : a₀₀ ≡ a₁₀) (a₋₁ : a₀₁ ≡ a₁₁)
  → Type _
Square a₀₋ a₁₋ a₋₀ a₋₁ = PathP (λ i → a₋₀ i ≡ a₋₁ i) a₀₋ a₁₋
It gets even worse when the type is heterogeneous:
SquareP :
  (A : I → I → Type ℓ)
  {a₀₀ : A i0 i0} {a₀₁ : A i0 i1} (a₀₋ : PathP (λ j → A i0 j) a₀₀ a₀₁)
  {a₁₀ : A i1 i0} {a₁₁ : A i1 i1} (a₁₋ : PathP (λ j → A i1 j) a₁₀ a₁₁)
  (a₋₀ : PathP (λ i → A i i0) a₀₀ a₁₀) (a₋₁ : PathP (λ i → A i i1) a₀₁ a₁₁)
  → Type ℓ
SquareP A a₀₋ a₁₋ a₋₀ a₋₁ = PathP (λ i → PathP (λ j → A i j) (a₋₀ i) (a₋₁ i)) a₀₋ a₁₋
We have decided to use a partial element to represent these faces, so we can freely add or delete a face without having to explicitly write down all the faces for generality. This leads to the following syntax:
-------- ↓ type ↓ the "i = 0" end is b
[| i |] (A i) {| i := a | ~ i := b |}
-- ^ interval ^ the "i = 1" end is a
The above type is equivalent to PathP (λ i → A i) a b. We may use this to simplify the type signature of path concatenation:
def concat {A : Type}
  (p : [| i |] A {| |})
  (q : [| i |] A {| ~ i := p 1 |})
  : [| i |] A {| ~ i := p 0 | i := q 1 |}
It has fewer parameters than the conventional version:
def concat {A : Type}
  {a b c : A}
  (p : Path A a b)
  (q : Path A b c)
  : Path A a c
Now, how to implement this type? We have decided to overload lambdas and expressions as Cubical Agda did, but we have encountered several problems. Here's the story, in chronological order.
Below, we use "type checking" and we actually mean "elaboration".
Principle: do not annotate the terms (including variable references) with types, because this would harm efficiency and complicate any code that generates terms (it would now have to generate the types as well -- pain!).
Problem: reduction of path application is type-directed; for example, p 1 reduces according to the type of p.
Solution: annotate the path applications instead. Every time we do type checking & we get a term of path type, we "η-expand" it into a normal lambda expression with a path application inside. This secures the reduction of path applications.
New Problem: we expand too much. When we want to check a term against a path type, the term has actually been η-expanded and has a Π-type. So we have to manually write path lambdas everywhere: given p : Path A a b, only λ i → p i is a valid term of type Path A a b, not p (which is internally a lambda).
Lesson: we need to preserve the types somehow, generate path applications only when necessary.
New Solution: when checking something against a path type, we directly apply the boundary checks, instead of trying to invoke synthesis and unify the types. This eliminates a lot of the λ i → p i problems.
New Problem: this is incompatible with implicit arguments. Consider the following problem:
idp : {a : A} -> Path A a a
λ i → idp i : {a : A} -> I -> A
idp : Path Nat zero zero
The new solution will try to apply the boundary before inserting the implicit arguments, which leads to type-incorrect terms.
Lesson: we probably should not change the bidirectional type checking algorithm too much.
New Solution: the type information is known in the bidirectional type checking anyway, so we only generate path applications during the type checking of application terms.
This has worked so far, with some unsolved problems (yet to be discussed). For example: is
p : [| i |] A {| |}
an instance of type [| i |] A {| i := a |}
? If you have any thoughts, feel free to reach out :)
The implementation has been updated to partially solve some of the above problems. Essentially, we need to do one thing: coercive subtyping. Since the type checking already respects the type (that is, it does not change the type), it remains to insert an η-expansion when subtyping is invoked. We also need to store the boundary information in the path application term to keep the normalization algorithm simple.
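To illustrate the shape of this coercion step, here is a minimal sketch in Haskell with made-up Term/Type representations (not Aya's actual internals): when subtyping demands a function out of a term whose type is a path, we insert the η-expansion; every other term passes through unchanged.

```haskell
-- Hypothetical term/type syntax, just to show where the coercion fires.
data Term
  = Var String
  | Lam String Term       -- ordinary lambda
  | PApp Term Term        -- path application; Aya also stores boundaries here
  deriving (Eq, Show)

data Type
  = Base String
  | Fun Type Type         -- function (Pi) type
  | Path Type Term Term   -- Path A a b
  deriving (Eq, Show)

-- When a term of path type is used where a function type is expected,
-- wrap it as a lambda over a path application; otherwise leave it alone.
coerce :: Term -> Type -> Type -> Term
coerce t (Path _ _ _) (Fun _ _) = Lam "i" (PApp t (Var "i"))
coerce t _actual _expected      = t
```

For p : Path A a b used at I → A this yields λ i → p i, so the η-expansion boilerplate from earlier is inserted by the elaborator instead of written by the user.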
Carlo Angiuli told me that in cooltt, the path type is decoded (in the sense of the universe à la Tarski el
operator) into a Π-type that returns a cubical subtype, and since el
is not required to be injective, this should be fine. At first, I was worried about the fibrancy of the path type, because a Π-type into a subtype is not fibrant, but it turns out that this is unrelated. We don't talk about the fibrancy of the types, but only the fibrancy of the type codes.
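As I understand the idea (the notation below is mine, not cooltt's concrete syntax), the decoding sends the code of a path type to a Π-type over the interval whose codomain is a cubical subtype pinning the endpoints:

```latex
\mathrm{el}\,(\mathsf{path}\;A\;a\;b)
  \;=\; \prod_{i \,:\, \mathbb{I}}
        \left\{\, \mathrm{el}\,(A\,i) \;\middle|\; i = 0 \hookrightarrow a,\; i = 1 \hookrightarrow b \,\right\}
```

Since el need not be injective, the path code and the corresponding Π code may decode to the same semantic type without being equal as codes.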
So for example, set truncation from HoTT looks like this:
inductive SetTrunc (A : Type)
+| mk : A -> SetTrunc A
+| trunc : isSet (SetTrunc A)
The trunc
constructor is elaborated to cubical syntax by flattening the type and attaching the partial element on the return type to the constructor, something like this:
trunc : Π (a b : SetTrunc A)
+ -> (p q : a = b)
+ -> (j i : I) -> SetTrunc A
+ { i = 1 -> a
+ ; i = 0 -> b
+ ; j = 1 -> q @ i
+ ; j = 0 -> p @ i
+ }
We are currently working on the so-called IApplyConfluence
problem for recursive higher inductive types like SetTrunc
, see this question, which I'm still wrapping my head around at the moment. More details will be posted later.
The Aya compiler generates styled (e.g. with colors and text attributes) code snippets for many targets, like HTML, LaTeX, etc., and it's tempting to use the same tool but for different languages. This is what the fake literate mode is for. Right now it only targets LaTeX, but it can be extended to other languages. Let me know if you want support for other backends. To start, install the latest version of Aya, put the following code in a file named Then, run the following command to generate literate output, where you replace Then it will print the following output: You may add The following code provides a quick macro to include the generated code: Use Great. I expect you to know something about GHCi and algebraic data types. This is an Aya tutorial for Haskell programmers. If you find a bug, open an issue on GitHub! Aya has a REPL that works similarly to GHCi. You can start it by running If you're using the jar with java, use the following instead: In the REPL, you can use To work multiline, use the pair Aya supports pretty-printing of any terms, including ✨lambdas✨. Note that Aya does not automatically support generic lambdas, so typing Aya supports Read project-tutorial, it is very short. It is recommended to practice the following with an Aya project in VSCode, see vscode-tutorial. About modules: Ok, let's write some code! Natural numbers. In Haskell: In Aya (we replaced the keyword We don't enforce capitalization of constructors.
The constructors need to be qualified (like Bonus: if you define a data type that looks like Functions are defined with In Aya (remember the numeric literal thing?): There are plenty of differences. Let's go through them one by one. The We do not use a number to denote precedence, but a partial order. This allows arbitrary insertion of new precedence levels into previously defined ones. Say you want You also have The parameters and the return type are separated using These names can be used for one-liner function bodies: Aya supports a painless version of the section syntax, where the top-level does not need parentheses. See the following REPL output (the underscored names are internally generated variable names. If you have an idea on how to make them better, open an issue and let's discuss!). When we only need to pattern match on a subset of the parameters, we can use the In Haskell: In Aya: Observations: Type constructors are like It is recommended to play with it in the REPL to get a feel of it. There is a famous example of dependent types in Haskell -- the sized vector type: In Aya, we have a better syntax: Imagine how much work this is in Haskell. There is one more bonus: in Aya, you may modify the definition of It says we not only compute With This makes Aya is a programming language and an interactive proof assistant designed for type-directed programming and formalizing math. The type system of Aya has the following highlights: At this stage of development, we recommend using the nightly version of Aya. Go to GitHub Release, there will be plenty of files. It's updated per-commit in the Check the section below that fits your platform. After the installation, run Aya is available for Windows, Linux, and macOS, as listed below.
Here's a hands-on script I wrote to (re)install Aya to If it's the first time you install Aya, you may want to do (or replace If you want to use Aya in your GitHub Actions workflow, you can use aya-prover/setup-aya like The step above will install the latest version of Aya to Very cool! Now you can try the prebuilt jars (much smaller and platform-independent) or build Aya from source. We will always be using the latest release of Java. Note that we do not stay on LTS releases. Download the jar version of cli (for using the command line) and lsp (for using VSCode) and run it with Clone the repository. Then, run build with Gradle supports short-handed task names, so you can run An Aya project consists of a directory with a To build a project, run This is pretty much the same theorem! Now, suppose we need to show a propositional equality between two records. This means we have to show they're memberwise equal. One record has a member To have function extensionality as a theorem, you will come across two distinct type theories: observational type theory and cubical type theory. Aya chose the latter. Here's the proof of function extensionality in Aya: This is because Aya has a "cubical" equality type that is not inductively defined. An equality And to show that We may also prove the action-on-path theorem, commonly known as It is tempting to use the below definition: However, this definition is not well-typed: They are not the same! Fortunately, we can prove that they are propositionally equal.
We need to show that natural number addition is associative, which is the key lemma of this propositional equality: It is harder to prove because in the induction step, one needs to show that So if we have We may then use the following type signature: This is an uninteresting quotient type that is basically What's interesting about this type is that its elimination implies function extensionality: The Go to GitHub Releases, click the latest successful run, scroll down to the bottom of the page, download the "aya-prover-vscode-extension", and unzip it. Then, follow VSCode docs to install the extension. It remains to configure the Aya language server. There are two ways to use the server. First, open settings, search for "Aya path", you should see a text box. Then, you have a choice: Then, open a directory that is an Aya project (see project-tutorial). Open any The rest of the features should be quite discoverable for regular programmers, such as hovering a red or a yellow wavy line to see the error message, etc. Please create issues and discuss ideas on how to improve the error reports.
Fake literate mode
hello.flcl
:keyword: data where;
+symbol: ≃;
+data: Int;
+constructor: zero succ;
+------
+data Int where
+ zero : Int
+ succ : Int ≃ Int
<AYA>
with either java -jar <path-to-aya.jar>
or aya
depending on your installation:<AYA> --fake-literate hello.flcl
\AyaKeyword{data}\hspace{0.5em}\AyaData{Int}\hspace{0.5em}\AyaKeyword{where}~\\
+\hspace{1.0em}\AyaConstructor{zero}\hspace{0.5em}:\hspace{0.5em}\AyaData{Int}~\\
+\hspace{1.0em}\AyaConstructor{succ}\hspace{0.5em}:\hspace{0.5em}\AyaData{Int}\hspace{0.5em}≃\hspace{0.5em}\AyaData{Int}
-o hello.tex
to let it write to a file instead of printing to the console. With minimal configurations such as below, you can compile it with any LaTeX toolchain:\usepackage{newunicodechar}
+\newunicodechar{≃}{\ensuremath{\mathrel{\simeq}}}
+
+\usepackage{xcolor}
+
+% Aya highlighting
+\definecolor{AyaFn}{HTML}{00627a}
+\definecolor{AyaConstructor}{HTML}{067d17}
+\definecolor{AyaStruct}{HTML}{00627a}
+\definecolor{AyaGeneralized}{HTML}{00627a}
+\definecolor{AyaData}{HTML}{00627a}
+\definecolor{AyaPrimitive}{HTML}{00627a}
+\definecolor{AyaKeyword}{HTML}{0033b3}
+\definecolor{AyaComment}{HTML}{8c8c8c}
+\definecolor{AyaField}{HTML}{871094}
+\newcommand\AyaFn[1]{\textcolor{AyaFn}{#1}}
+\newcommand\AyaConstructor[1]{\textcolor{AyaConstructor}{#1}}
+\newcommand\AyaCall[1]{#1}
+\newcommand\AyaStruct[1]{\textcolor{AyaStruct}{#1}}
+\newcommand\AyaGeneralized[1]{\textcolor{AyaGeneralized}{#1}}
+\newcommand\AyaData[1]{\textcolor{AyaData}{#1}}
+\newcommand\AyaPrimitive[1]{\textcolor{AyaPrimitive}{#1}}
+\newcommand\AyaKeyword[1]{\textcolor{AyaKeyword}{#1}}
+\newcommand\AyaLocalVar[1]{\textit{#1}}
+\newcommand\AyaComment[1]{\textit{\textcolor{AyaComment}{#1}}}
+\newcommand\AyaField[1]{\textcolor{AyaField}{#1}}
\newcommand{\includeFlcl}[1]{{
+\vspace{0.15cm}
+\RaggedRight
+% https://tex.stackexchange.com/a/35936/145304
+\setlength\parindent{0pt}
+\setlength{\leftskip}{1cm}
+\input{#1}
+
+\setlength{\leftskip}{0cm}
+\vspace{0.15cm}
+}}
\includeFlcl{hello}
to include the generated code in your document.Working with the REPL
aya -i
in your terminal, and you can start typing definitions or expressions.aya -i
java --enable-preview -jar cli-fatjar.jar -i
:l
to load a file, :q
to quit, and :?
to get help. Use :t
to show the type. Since Aya is dependently typed, you can toggle normalization levels by :normalize
followed by NF
, WHNF
, or NULL
(don't normalize).:{
and :}
-- same as GHCi.\x => x
would not work. You need to specify the type of x
, like \(x : Int) => x
.fn
as an alias to \
instead of λ
, similar to Coq and Lean (but not Agda). This is because users (especially mathematicians) are likely to use λ
as a variable name. Similarly, we used Fn
over Pi
or Π
for the same reason.Working with projects
::
, not .
.import X
) are qualified by default, use open import X
to unqualify. This is short for import X
followed by open X
.open import X using (x)
(this only imports x
from X
) you may also use open import X hiding (x)
to import everything except x
from X
.open import X using (x as y)
and the meaning is obvious.public open
.Programming in Aya
data Nat = Zero | Suc Nat
data
with inductive
because we want to use it as a package name):Nat::zero
) to access. As you may expect, Nat
automatically becomes a module, so we can use open
and public open
to unqualify the constructors.Nat
, then you can use numeric literals.def
, followed by pattern matching. Consider this natural number addition in Haskell (intentionally not called +
to avoid name clash with Prelude):(<+>) :: Nat -> Nat -> Nat
+Zero <+> n = n
+Suc m <+> n = Suc (m <+> n)
+
+infixl 6 <+>
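The Haskell fixity above can be exercised on its own; this standalone snippet (same definitions as the block above) shows that infixl makes the operator associate to the left:

```haskell
data Nat = Zero | Suc Nat deriving (Eq, Show)

(<+>) :: Nat -> Nat -> Nat
Zero  <+> n = n
Suc m <+> n = Suc (m <+> n)
infixl 6 <+>

-- With infixl, `a <+> b <+> c` parses as `(a <+> b) <+> c`.
three :: Nat
three = Suc Zero <+> Suc Zero <+> Suc Zero
```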
infixl
declares <+>
to be a left-associative infix operator. Other options include infix
, infixr
, fixl
, and fixr
. Without it, the function will work the same as a normal function. Unlike Haskell, we do not distinguish "operator" names from "function" names.<+>
to have a lower precedence than <*>
, you can do:def infixl <+> Nat Nat : Nat
+/// .... omitted
+looser <*>
tighter
, with the obvious meaning.:
. The parameter types can be written directly, without ->
. Aya allows naming the parameters like this:def oh (x : Nat) : Nat
> 1 <+>
+suc
+
+> <+> 1
+λ _7 ⇒ _7 <+> 1
+
+> 1 <+> 1
+suc 1
+
+> 2 <+>
+λ _5 ⇒ suc (suc _5)
+
+> <+> 2
+λ _7 ⇒ _7 <+> 2
elim
keyword:Type-level programming
id :: a -> a
+id x = x
Type
, we also have Set
, and ISet
. For now, don't use the others.{F : Type -> Type}
(and yes, the ->
denotes function types, works for both values and types), very obvious. Definition of Maybe
in Aya:{-# LANGUAGE GADTs #-}
+{-# LANGUAGE DataKinds #-}
+-- Maybe you need more, I don't remember exactly
+
+data Vec :: Nat -> Type -> Type where
+ Nil :: Vec Zero a
+ (:<) :: a -> Vec n a -> Vec (Suc n) a
+infixr :<
<+>
to be:overlap def infixl <+> Nat Nat : Nat
+| 0, n => n
+| n, 0 => n
+| suc m, n => suc (m <+> n)
0 + n = n
, but when the first parameter is neither 0
nor suc
, we may inspect the second parameter and look for other potential computations. This is completely useless at runtime, but very good for type checking. For instance, we may want a Vec
of size n
, and what we have is some Vec
of size n + 0
. Then having n + 0
to directly reduce to n
is very useful, otherwise we will need to write a conversion function that does nothing but changes the type, or use unsafeCoerce
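To see why a stuck n + 0 hurts, here is roughly the Haskell counterpart (a sketch with my own Add type family, not from any particular library): append's result index is Add m n, and Vec (Add n 'Zero) a is a different type from Vec n a until you rewrite along a proof.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}
data Nat = Zero | Suc Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  (:<) :: a -> Vec n a -> Vec ('Suc n) a
infixr 5 :<

type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Zero    n = n
  Add ('Suc m) n = 'Suc (Add m n)

append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil       ys = ys
append (x :< xs) ys = x :< append xs ys

-- `Add n 'Zero` is stuck for an abstract n, so GHC rejects this
-- without a proof term that rewrites the index (or unsafeCoerce):
--
-- appendNil :: Vec n a -> Vec n a
-- appendNil xs = append xs Nil   -- error: Add n 'Zero is not n

vlength :: Vec n a -> Int
vlength Nil       = 0
vlength (_ :< xs) = 1 + vlength xs
```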
.n + 0 = n
, we may add the following clause to ++
:| xs, nil => xs
++
compute on more cases too.The Aya Prover
funExt
and quotients are available without axioms (like Agda, redtt, and Arend but not higher-dimensional),Install Aya
main
branch, but the release date displayed is very old and is an issue of GitHub itself.aya --help
for general instructions and aya -i
to start an interactive REPL. If you chose the jlink version, the bin
folder contains the executable scripts.Download from GitHub Release
Platform   Architecture   JLink   Native
Windows    x64            zip     exe
Linux      x64            zip     exe
macOS      x64            zip     exe
Windows    aarch64        zip     Unavailable yet
Linux      aarch64        zip     Unavailable yet
macOS      aarch64        zip     Unavailable yet
$AYA_PREFIX
(define the variable somewhere or replace with your preferred prefix, e.g. /opt/aya
) on Linux x64:#!/bin/bash
+sudo mkdir -p ${AYA_PREFIX:-/tmp}
+sudo chown $USER ${AYA_PREFIX:-/tmp}
+rm -rf ${AYA_PREFIX:-/tmp}/*
+cd ${AYA_PREFIX:-/tmp}
+wget https://github.com/aya-prover/aya-dev/releases/download/nightly-build/aya-prover_jlink_linux-x64.zip
+unzip aya-prover_jlink_linux-x64.zip
+rm aya-prover_jlink_linux-x64.zip
+cd -
~/.bashrc
with your shell's rc file):echo 'export PATH="$AYA_PREFIX/bin:$PATH"' >> ~/.bashrc
+source ~/.bashrc
Use Aya in GitHub Actions
- name: Setup Aya
+ uses: aya-prover/setup-aya@latest
+ with:
+ version: 'nightly-build'
PATH
. You can find the complete example here.If you already have Java runtime...
Prebuilt binary
java --enable-preview -jar [file name].jar
.Build from source
./gradlew
followed by a task name. If you have problems downloading dependencies (like you are in China), check out how to let gradle use a proxy.# build Aya and its language server as applications to \`ide-lsp/build/image/current\`
+# the image is usable in Java-free environments
+./gradlew jlinkAya --rerun-tasks
+# build Aya and its language server as executable
+# jars to <project>/build/libs/<project>-<version>-fat.jar
+./gradlew fatJar
+# build a platform-dependent installer for Aya and its language
+# server with the jlink artifacts to ide-lsp/build/jpackage
+# requires https://wixtoolset.org/releases on Windows
+./gradlew jpackage
+# run tests and generate coverage report to build/reports
+./gradlew testCodeCoverageReport
+# (Windows only) show the coverage report in your default browser
+./gradlew showCCR
./gradlew fJ
to invoke fatJar
, tCCR
to invoke testCodeCoverageReport
, and so on.Aya Package
aya.json
file (project metadata) and a src
directory for source code. Here's a sample aya.json
:{
+ "ayaVersion": "0.31",
+ // ^ The version of Aya you are using -- for compatibility checks
+ "name": "<project name>",
+ "version": "<project version>",
+ "group": "<project group>",
+ // ^ The group is used to distinguish different projects with the same modules
+
+ "dependency": {
+ "<name of dependency>": {
+ "file": "<directory to your dependency>"
+ },
+ // We plan to support other sources of dependencies,
+ // but we do not have money to
+ // host a package repository for now.
+ }
+}
aya --make <parent dir of aya.json>
(incremental). For force-rebuilding, replace --make
with --remake
. For jar users, run java --enable-preview -jar cli-fatjar.jar --make <parent dir of aya.json>
\ _0 ⇒ not (not 1)
, and the other has id
. This time, you cannot cheat by changing the goal type. You post the question on some mailing list and people are telling you that the alternative version of the theorem you have shown does not imply the original, unless "function extensionality" is a theorem in your type theory.Cubical
a = b
for a, b : A
is really just a function I → A
where I
is a special type that has two closed instances 0
and 1
, but there is not a pattern matching operation that distinguishes them. Then, for f : I -> A
, the corresponding equality type is f 0 = f 1
. By this definition, we can "prove" reflexivity by creating a constant function:f = g
, it suffices to construct a function q : I -> (A -> B)
such that q 0 = f
and q 1 = g
. This is true for the proof above:(fn i a => p a i) 0
+= fn a => p a 0      -- β-reduce
+= fn a => f a        -- since p a : f a = g a
+= f                  -- η-reduce
cong
, but renamed to pmap
to avoid a potential future naming clash:overlap def ++-assoc (xs : Vec n A) (ys : Vec m A) (zs : Vec o A)
+ : (xs ++ ys) ++ zs = xs ++ (ys ++ zs) elim xs
+| nil => refl
+| x :< xs => pmap (x :<) (++-assoc xs ys zs)
cast (pmap (\ _0 ⇒ Vec 1 A) +-assoc)
is equivalent to the identity function in order to use the induction hypothesis. For the record, here's the proof:X : A = B
and a : A
, b : B
, then Path (\i => X i) a b
expresses the heterogeneous equality between a
and b
nicely.Bool
but saying its two values are equal, so it's really just Unit
. The type of line
will be translated into I → Interval
together with the judgmental equality that line 0
is left
and line 1
is right
. Every time we do pattern matching, we need to make sure it preserves these judgmental equalities.succ
operator's first three clauses are straightforward, and the last one is a proof that succ (neg 0)
equals succ (pos 0)
, as we should preserve the judgmental equality in the type of zro
. We need to do the same for abs
.So you are using VSCode
lsp-fatjar.jar
file path there. Make sure you have a java
executable in the Path (recommended) or in java.home
key in the settings json.aya-lsp
(or aya-lsp.bat
if you are on Windows) file path there, which is under the bin
folder of the jlink distribution. In this case, you don't need to have a java
executable in the Path..aya
file, you should see some basic highlight (keywords, comments, etc.). Wait for VSCode to activate the extension, and hit Ctrl+L Ctrl+L
to load the file. At this point, you should see advanced highlight (type names, constructors, etc.), with clickable definitions.aJxy{j56_L
zLQcd%;`&~HJsDrJW_a4>d&hA{Nt%hyNLF?&qFj~s+^=YLS&kL
tzY@L_F
znjt%f)7FgGCH%U2%r<$m;hG7%frnO4*
Cxz1l_%IQ-v1?C?Bp7%)F*OJ0z@yqEz+=WM=ei*ZAH
zzJ8H5?>a4seuL@4^zBx9ybKd#iB%8H59d?OUdVf!acjRSr8nu%NZwVCDI3byABa}{
zPQ!S+Y2vq;JukGy$P9
!>=)6>(KIHsWo&ikq@7K~44&+u;-l&
9cpqY4e@b{IL0yDOmNms)K5{s;cHyXVgXYf%=Oy
zyfhxujM7vzp9N8Yynw1eP2rYsZv-A`jP%lS(YivuqL3&TR4CdM?SoE0muc7Oc<4Og
z*6Et)`sgO=R_gB2z0ITb4qy~9W|%(AB32t)gR{lSaB_Wt{#}K>zG!`i0oGu`aKdoK
z{oGq%yyBOe+*B#1%NpNo)a#OIBBv8?kUV_QZd~u$XygAJ%1g$3HTfF0mlgYn(&M
zlY;ZBnl#;wJ==pZZ!@+nvut?@_wkXfxrGHrYSFVxz;f|0T*u|H4XO127ZvKKt?R|h
zsR}Vz3F2^Tfbl3EE>mAwZ?qDe?K-;^$B|Va%u7T&YPUOCea1UR(j>Z$h6L23DWCE7
ztY*u3s$|BXaRUY3C&?W}1%HcrSwL-3W(5J
vKPpBs