From rae at cs.brynmawr.edu Sat Jan 6 17:23:34 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sat, 6 Jan 2018 17:23:34 -0500 Subject: [TYPES] Statement of structural induction Message-ID: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Does anyone know a place I could see a statement of how structural induction works, spelled out in full detail? I don't need a "canonical" (i.e. citable) reference, per se, but a place I can enhance my own understanding of structural induction. My problem is that I had been blithely assuming that structural induction was reducible to natural-number induction, where the natural number was the size (number of nodes in the tree) of the structure. But this is clearly not true, as both Coq and Agda accept this (written in Coq notation): Inductive ty := | Ax : ty | Suc : ty -> ty | Mk : (nat -> ty) -> ty. Fixpoint f (t : ty) : nat := match t with | Ax => 1 | Suc t' => 1 + f t' | Mk x => 1 + (f (x 2)) end. Note that there's no good way (that I see) of denoting the size of such a structure. And yet, Coq feels confident that my f will terminate. Indeed, I certainly don't have a counterexample to this, and I don't believe there is one. So, I'm looking for a description of structural induction that explains how the function above is terminating in a rigorous way. Bonus points if it also shows how non-strictly-positive recursive occurrences of a type in its own definition cause structural induction to fail. Thanks! Richard -=-=-=-=-=-=-=-=-=-=- Richard A. Eisenberg, PhD Asst. Prof. 
of Computer Science Bryn Mawr College Bryn Mawr, PA, USA cs.brynmawr.edu/~rae From burak.emir at gmail.com Sat Jan 6 18:33:30 2018 From: burak.emir at gmail.com (Burak Emir) Date: Sun, 7 Jan 2018 00:33:30 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: Hello, are you looking for Noetherian (well-founded) induction? There is an article here, which should provide a good starting point: https://en.m.wikipedia.org/wiki/Well-founded_relation - Burak On Jan 6, 2018 11:39 PM, "Richard Eisenberg" wrote: > [...] From sherman at csail.mit.edu Sat Jan 6 19:44:53 2018 From: sherman at csail.mit.edu (Ben Sherman) Date: Sat, 6 Jan 2018 16:44:53 -0800 Subject: [TYPES] Statement of structural induction In-Reply-To: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: <9F1B9847-3D43-4FFD-BD53-5CEB6D51D505@csail.mit.edu> Hi Richard, You may be interested in the following papers: http://www.cse.chalmers.se/~peterd/papers/Wellorderings.pdf Peter Dybjer Representing inductively defined sets by wellorderings in Martin-Löf's type theory 1996 http://www.cse.chalmers.se/~peterd/papers/ESSLLI94.pdf Thierry Coquand and Peter Dybjer Inductive definitions and type theory - an introduction 1997 In general, the sizes of structures can be represented as ordinals, and termination arguments can be described as demonstrating that recursive calls always occur on strictly smaller ordinals. The finite ordinals correspond to the natural number sizes that you refer to. Your type [ty] is exactly what is known as "Brouwer ordinals," which serve to represent countable ordinals (though I'm not sure of their exact relation to countable ordinals). There is a close relationship between Brouwer ordinals and W-types, which with a single set of judgments/axioms can encode likely all of the inductive datatypes that you're thinking of (though schemes like induction-recursion require more complicated types than W-types, and I believe also more complicated ordinals than Brouwer ordinals).
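For concreteness, the W-type encoding alluded to above can be sketched as follows (Coq notation; this is a sketch only, and the names W, tag, and arity are illustrative rather than taken from Coq's standard library):

```coq
(* W-type: well-founded trees in which a node labeled a has (B a)-many
   children. *)
Inductive W (A : Type) (B : A -> Type) : Type :=
| sup : forall a : A, (B a -> W A B) -> W A B.

(* One label per constructor of ty, with the corresponding branching arity:
   Ax has no children, Suc has one, Mk has nat-many. *)
Inductive tag := TAx | TSuc | TMk.

Definition arity (c : tag) : Type :=
  match c with
  | TAx => Empty_set
  | TSuc => unit
  | TMk => nat
  end.

(* ty is (up to the usual caveats about function extensionality)
   equivalent to the W-type over these labels and arities. *)
Definition ty' := W tag arity.
```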
Ben > On Jan 6, 2018, at 2:23 PM, Richard Eisenberg wrote: > [...] From james.wood.100 at strath.ac.uk Sat Jan 6 20:25:50 2018 From: james.wood.100 at strath.ac.uk (James Wood) Date: Sun, 7 Jan 2018 01:25:50 +0000 Subject: [TYPES] Statement of structural induction In-Reply-To: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: <5b3a0837-b001-bd70-f77a-0d66a7ec8bd5@strath.ac.uk> The old Agda wiki has an article ( http://wiki.portal.chalmers.se/agda/pmwiki.php?n=ReferenceManual.TerminationChecker ) which defines the structurally-smaller relation <. Note that it's defined on untyped syntax. James On 06/01/18 22:23, Richard Eisenberg wrote: > [...] From Xavier.Leroy at inria.fr Sun Jan 7 11:41:43 2018 From: Xavier.Leroy at inria.fr (Xavier Leroy) Date: Sun, 7 Jan 2018 17:41:43 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: On 01/06/2018 11:23 PM, Richard Eisenberg wrote: > My problem is that I had been blithely assuming that structural induction was reducible to natural-number induction, where the natural number was the size (number of nodes in the tree) of the structure. But this is clearly not true, as both Coq and Agda accept this (written in Coq notation): > > Inductive ty := > | Ax : ty > | Suc : ty -> ty > | Mk : (nat -> ty) -> ty. The reduction to natural-number induction works if you consider the height of the tree representing the data structure, instead of its size. As your example shows, the tree can contain infinitely-branching nodes, hence the size can be infinite, but all paths are finite, hence the height is finite. Recursing from a node to one of its children decreases the height. Hope this helps, - Xavier Leroy From ameyer at mit.edu Sun Jan 7 15:02:57 2018 From: ameyer at mit.edu (Albert R Meyer) Date: Sun, 7 Jan 2018 15:02:57 -0500 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: We teach our students that it is a bad idea to replace structural induction on recursive data types by integer induction, even when this is possible.
A reason for this is illustrated by the attached excerpt on win-lose games from our text "Mathematics for Computer Science," where reduction to integer induction is actually not possible. The full text is available for download at https://learning-modules.mit.edu/materials/index.html?uuid=/course/6/fa17/6.042#materials Yours truly, Albert R. Meyer Professor of Computer Science Department of Electrical Engineering and Computer Science MIT, Cambridge MA 02139 On Sun, Jan 7, 2018 at 11:41 AM, Xavier Leroy wrote: > [...] From xavier.leroy at inria.fr Tue Jan 9 11:14:59 2018 From: xavier.leroy at inria.fr (Xavier Leroy) Date: Tue, 9 Jan 2018 17:14:59 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: On 07/01/2018 17:41, Xavier Leroy wrote: > As your example shows, the tree can contain infinitely-branching nodes, hence the size can be infinite, but all paths are finite, hence the height is finite. I was confused.
All paths in this tree are finite indeed, and that's why it induces a well-founded ordering. But in the presence of infinitely-branching nodes, the height can still be infinite and is not an appropriate justification for structural induction. Sorry for the noise, - Xavier Leroy From frederic.blanqui at inria.fr Wed Jan 10 05:31:28 2018 From: frederic.blanqui at inria.fr (Frédéric Blanqui) Date: Wed, 10 Jan 2018 11:31:28 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: Hello. Well, it is if you define the height as a transfinite ordinal. In case of an infinitely branching constructor like Mk : (nat -> ty) -> ty, height(Mk f) = sup{height(f n) | n in nat} + 1. Frédéric. On 09/01/2018 17:14, Xavier Leroy wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > On 07/01/2018 17:41, Xavier Leroy wrote: > >> As your example shows, the tree can contain infinitely-branching nodes, hence the size can be infinite, but all paths are finite, hence the height is finite. > I was confused. All paths in this tree are finite indeed, and that's why it induces a well-founded ordering. But in the presence of infinitely-branching nodes, the height can still be infinite and is not an appropriate justification for structural induction. > > Sorry for the noise, > > - Xavier Leroy From p.giarrusso at gmail.com Wed Jan 10 10:45:32 2018 From: p.giarrusso at gmail.com (Paolo Giarrusso) Date: Wed, 10 Jan 2018 16:45:32 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: Since I (and maybe others) understand structural induction better than transfinite induction, I now wonder if one can use structural induction to understand better both specific uses of transfinite induction and proof-theoretic ordinal analysis.
Has anybody written, say, an introduction to ordinals for the Coq programmer or some such thing? Googling found little,* maybe because this seems pointless to people who do understand ordinals. But maybe such a presentation would be more accessible to computer scientists used to proof assistants? For instance: (1) can one define the proof-theoretic strength of an inductive datatype, such as Richard's example? (2) Gentzen's proof of consistency for PA translates derivations in PA to infinitary derivations in PA_\omega [1], and then requires transfinite induction up to \epsilon_0. Could one just use structural induction on an inductive type of PA_\omega infinitary derivations? (2b) In fact, if I try to encode PA_\omega derivations in Coq (pseudocode) naively, the result looks similar to Richard's datatype (though more indexed); there's one premise with an argument per natural: Inductive derivation Gamma Delta := | omegaR : forall F Gamma Delta, (forall n, derivation Gamma (F n : Delta)) -> derivation Gamma ((Forall x, F x) : Delta) | omegaL : forall F Gamma Delta, (forall n, derivation (F n : Gamma) Delta) -> derivation ((Exists x, F x) : Gamma) Delta *The best answer seems to be Emil Jeřábek at https://mathoverflow.net/a/61045/36016, saying (IIUC) that ordinals can be understood as trees, but pointing out those trees are more complex. But some still prefer structural recursion even when it can be translated to induction on naturals, so maybe the same applies here. [1] I'm following the presentation in https://www1.maths.leeds.ac.uk/~rathjen/Sepp-chiemsee.pdf. > On Wed, 10 Jan 2018 at 11:45, Frédéric Blanqui wrote: > [...] From ameyer at mit.edu Wed Jan 10 11:33:08 2018 From: ameyer at mit.edu (Albert R Meyer) Date: Wed, 10 Jan 2018 11:33:08 -0500 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: Yes, structural induction is, in a precise sense, equivalent to ordinal induction -- which is why it is, again in a precise sense, more powerful than integer induction. Pedagogically, however, structural induction is easy for beginning students to manage while they generally are at a loss to cope with ordinals. regards, a Albert R. Meyer Professor of Computer Science Department of Electrical Engineering and Computer Science MIT, Cambridge MA 02139 On Wed, Jan 10, 2018 at 5:31 AM, Frédéric Blanqui wrote: > [...] From pierre.courtieu at gmail.com Wed Jan 10 11:37:05 2018 From: pierre.courtieu at gmail.com (Pierre Courtieu) Date: Wed, 10 Jan 2018 17:37:05 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: I don't know exactly if this can be useful but Pierre Castéran (with parts also done by Évelyne Contejean) formalized ordinals and proved the termination of hydra's problem using them there: http://www.labri.fr/perso/casteran/hydra-ludica/index.html This is work in progress but the proofs for hydra are finished as far as I know. Best regards, Pierre Courtieu 2018-01-10 16:45 GMT+01:00 Paolo Giarrusso : > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Since I (and maybe others) understand structural induction better than transfinite induction, I now wonder if one can use structural induction to understand better both specific uses of transfinite induction and proof-theoretic ordinal analysis. > > Has anybody written, say, an introduction to ordinals for the Coq programmer or some such thing? Googling found little,* maybe because this seems pointless to people who do understand ordinals. But maybe such a presentation would be more accessible to computer scientists used to proof assistants?
> [...] From p.giarrusso at gmail.com Thu Jan 11 08:02:01 2018 From: p.giarrusso at gmail.com (Paolo Giarrusso) Date: Thu, 11 Jan 2018 14:02:01 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: <9813F792-202D-4983-9D3F-E392C183FAE3@gmail.com> Thanks! Meanwhile, it has been suggested to me (by Dan Doel on ##dependent) that W-types relate to both ordinals and inductive types, so they probably answer my question (even though W-types are not equivalent to inductive types without further extensionality assumptions). Paolo Giarrusso -- Sent from my iPad > On 10 Jan 2018, at 17:37, Pierre Courtieu wrote: > > I don't know exactly if this can be useful but Pierre Castéran (with > parts also done by Évelyne Contejean) formalized ordinals and proved > the termination of hydra's problem using them there: > > http://www.labri.fr/perso/casteran/hydra-ludica/index.html > > This is work in progress but the proofs for hydra are finished as far as I know.
> > Best regards, > Pierre Courtieu > > > 2018-01-10 16:45 GMT+01:00 Paolo Giarrusso : >> [...] From tim.sweeney at epicgames.com Thu Jan 11 19:37:27 2018 From: tim.sweeney at epicgames.com (Tim Sweeney) Date: Thu, 11 Jan 2018 19:37:27 -0500 Subject: [TYPES] Objects in type theory Message-ID: Hickey's 1996 "Formal Objects in Type Theory using Very Dependent Types" ( http://www.cs.cornell.edu/jyh/papers/fool3/paper.pdf) laid an interesting type theoretical foundation for object oriented programming constructs. I'm seeking references on any later work in this direction. Of particular value would be papers that explore the following: 1. Referencing superclass fields in derived classes. E.g. A Java programmer might create a class with a member function like "void updateObject() {updateNewThings(); super.updateObject();}".
I can't see how to patch this on top of Hickey's framework without losing the straightforward classes-as-types mapping. 2. Virtual classes (https://en.wikipedia.org/wiki/Virtual_class). Perhaps F-bounded polymorphism comes to the rescue. 3. Ensuring open-world forward-compatibility across modules. Java and C# define a broad set of ways that a given module may evolve without breaking compatibility with existing code in other modules which reference it. For example, one may safely add a new member function to a class. The supporting subtyping mechanisms are well-understood, but naming and disambiguation are required too. Thanks, Tim Sweeney From andreas.abel at ifi.lmu.de Sat Jan 13 06:43:51 2018 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Sat, 13 Jan 2018 12:43:51 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: <00ce17bb-52df-d438-44ca-43b883c197f9@ifi.lmu.de> Richard, My master's thesis contains a rigorous proof of wellfoundedness of the structural ordering on inductive types, including your example. There is also a journal publication of this result at JFP. http://www.cse.chalmers.se/~abela/publications.html#jfp02 http://www.cse.chalmers.se/~abela/publications.html#diplomathesis Cheers, Andreas On 06.01.2018 23:23, Richard Eisenberg wrote: > [...] -- Andreas Abel <>< Du bist der geliebte Mensch. Department of Computer Science and Engineering Chalmers and Gothenburg University, Sweden andreas.abel at gu.se http://www.cse.chalmers.se/~abela/ From rae at cs.brynmawr.edu Sat Jan 13 20:18:28 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sat, 13 Jan 2018 20:18:28 -0500 Subject: [TYPES] Statement of structural induction In-Reply-To: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: Many, many thanks to all who provided responses, several of which were off-list. I thought I'd summarize the bulk of responses. What I did not explain in my original email -- but bears importance in how this all fits together -- is my mental model of induction. For a natural number induction of a proposition P : Nat -> Prop, we can imagine that, for any n, we can eventually build up a proof of P(n).
We do this by starting by proving P(0), using that to prove P(1), using that (perhaps both of those) to prove P(2), and so on. After n+1 steps, we'll prove P(n). Thus, we can prove P(n) in finite time, for any finite n. Accordingly, we can conclude (forall n, P(n)). However, this approach -- prove a proposition by building up a finite number of sub-proofs -- fails for my `ty`, below. (This `ty` is, by sheer coincidence, a representation of the ordinals, as used in, e.g., Abel and Altenkirch's http://www.cse.chalmers.se/~abela/foetuswf.pdf ) It would take an unbounded number of sub-proofs in order to prove a proposition over a `ty` made with `Mk`. The insight I gained from the answers is that it's OK to require this unbounded number of sub-proofs. Equivalently, it might take an infinite amount of time to prove a proposition, but that doesn't cause trouble. The well-foundedness of this approach, called transfinite induction, is proved in Abel and Altenkirch's paper, above. I have not tried to digest it, but I trust the result. Indeed, some responses have pointed out that because structural induction is equivalent in power to transfinite induction, it might be easier to teach the latter by reference to the former. Dybjer (http://www.cse.chalmers.se/~peterd/papers/Inductive_Families.pdf ) also gives a thorough treatment of inductive types. Having been satisfied by the Abel and Altenkirch paper, I have not dived into the details here, but it seems to address my concern, as well. Another intriguing answer is that some aspect of mathematics is, and always will be, an article of faith. We can use the edifice of mathematics built up from, say, set theory to prove the well-foundedness of transfinite induction. Or, if we use type theory as the basis for mathematics, structural induction is taken as a given. Either way, we're basing our result on unprovable axioms. Once again, thanks to all. I've learned a new nugget of knowledge through this interaction! 
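As one concrete rendering of the transfinite measure discussed above (a sketch in Coq notation; the name height is illustrative, and the definition follows Blanqui's equation height(Mk f) = sup{height(f n) | n in nat} + 1):

```coq
Inductive ty :=
| Ax : ty
| Suc : ty -> ty
| Mk : (nat -> ty) -> ty.

(* Since ty itself denotes (Brouwer) ordinal notations, the ordinal height
   of a ty can be measured by another ty: Mk plays the role of sup, and
   Suc the role of ordinal successor. *)
Fixpoint height (t : ty) : ty :=
  match t with
  | Ax => Ax
  | Suc t' => Suc (height t')
  | Mk x => Suc (Mk (fun n => height (x n)))
  end.
```

Every recursive call of height is on an immediate subterm of its argument, so Coq accepts it for the same reason it accepts f.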
Richard > On Jan 6, 2018, at 5:23 PM, Richard Eisenberg wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Does anyone know a place I could see a statement of how structural induction works, spelled out in full detail? I don't need a "canonical" (i.e. citable) reference, per se, but a place I can enhance my own understanding of structural induction. > > My problem is that I had been blithely assuming that structural induction was reducible to natural-number induction, where the natural number was the size (number of nodes in the tree) of the structure. But this is clearly not true, as both Coq and Agda accept this (written in Coq notation): > > Inductive ty := > | Ax : ty > | Suc : ty -> ty > | Mk : (nat -> ty) -> ty. > > Fixpoint f (t : ty) : nat := > match t with > | Ax => 1 > | Suc t' => 1 + f t' > | Mk x => 1 + (f (x 2)) > end. > > Note that there's no good way (that I see) of denoting the size of such a structure. And yet, Coq feels confident that my f will terminate. Indeed, I certainly don't have a counterexample to this, and I don't believe there is one. > > So, I'm looking for a description of structural induction that explains how the function above is terminating in a rigorous way. Bonus points if it also shows how non-strictly-positive recursive occurrences of a type in its own definition cause structural induction to fail. > > Thanks! > Richard > > -=-=-=-=-=-=-=-=-=-=- > Richard A. Eisenberg, PhD > Asst. Prof. of Computer Science > Bryn Mawr College > Bryn Mawr, PA, USA > cs.brynmawr.edu/~rae From dan.doel at gmail.com Mon Jan 15 00:24:52 2018 From: dan.doel at gmail.com (Dan Doel) Date: Mon, 15 Jan 2018 00:24:52 -0500 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID: So, you're probably on your way to new understanding. 
But, I thought I'd present a couple of quibbles with the way you described your understanding of induction, in case they could help when reading the papers and whatnot. First, your description of induction on the naturals sounds like a procedure for enumerating proofs for every possible value. If this is indeed the intention, I don't think it even requires ordinals to break down. For instance, consider arguing that the length function on lists is justified. Unless you can enumerate the type of the _elements_ of the list, there is no way to enumerate all lists. You can't enumerate all the lists of real numbers. So it seems like any tying of induction to enumeration of the underlying type is doomed to failure. Rather, the claim of (forall n. P n) is that for any choice of n, we can calculate a proof P n. In type theories, this works exactly like recursive definitions are expected to work. Inspect n. If it's 0, use the base case. If it's 1+m, make a recursive call with m, and apply the successor case. The other quibble is why you believe (forall n. P n) is justified, even given your description. Yes, for finite n, we can accomplish this procedure in finite time. Presumably this means that you believe all natural numbers are finite. This presumably means you also believe that there is a lack of ability to construct self-referential values, so that for instance m in my above description is 'smaller' than n. But this is the same reasoning behind well-founded trees (ordinals, your ty type, ...). There is no ability to construct self-referential trees, so the function f that you get from t = Mk f must produce smaller trees than t itself. And recursion consuming any particular well-founded tree takes finite time, so (forall t. P t) is justified.
For instance, copying the finite naturals into ty is a function producing all finite values: embed : nat -> ty embed 0 = Ax embed (S n) = Suc (embed n) Of course, the apparent self-reference in embed is really sugar for recursion on the natural numbers. All the values produced by `embed` are lesser than `Mk embed`, and the idea is that the system is constructed so that this is always the case. Any `Mk f` was necessarily produced by defining f to produce values that cannot include `Mk f` themselves. The other piece is a generalization of your function. When you see `Mk x`, you call x with 2. So, even though `Mk embed` cannot exactly be given any finite height, because embed produces arbitrarily large values, you only request the one with size 2. The generalization is that when we proceed recursively by applying: lim : (forall n. P (x n)) -> P (Mk x) the argument is simply bundling up the recursive call as a function, and lim is only capable of calling it at finitely many values of finite size. So the trace of constructing the proof of `P (Mk x)` only proceeds down finitely many of the infinite choices of branches, and each of those branches are well-founded, and take finite time to complete, by hypothesis. (This probably gets a little more complicated if P itself is like a function, so it can choose how many branches to use based on an argument; but then for each choice of argument, a finite number of branches is followed, and it doesn't take infinite time to 'construct' a function just because the domain is infinite.) Trusting/proving these properties of the system isn't trivial, but it must be done to justify even induction on the natural numbers. It may be a bit harder to see why you should trust them for more complicated well-founded trees, but the arguments are kind of fundamentally the same. 
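In Coq notation, Dan's `embed` and the limit tree it feeds might be rendered as follows (the name `omega_ty` is mine, not from the thread):

```coq
(* Sketch: embed copies each natural number into ty, assuming the ty
   from the original message; Mk embed is then a single tree whose
   branches enumerate all of the finite trees embed n. *)
Fixpoint embed (n : nat) : ty :=
  match n with
  | O => Ax
  | S m => Suc (embed m)
  end.

Definition omega_ty : ty := Mk embed.

(* Richard's f still terminates on this value: only the branch at 2 is
   consulted, so f omega_ty = 1 + f (embed 2) = 1 + 3 = 4. *)
```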
Cheers, Dan On Sat, Jan 13, 2018 at 8:18 PM, Richard Eisenberg wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Many, many thanks to all who provided responses, several of which were > off-list. I thought I'd summarize the bulk of responses. > > What I did not explain in my original email -- but bears importance in how > this all fits together -- is my mental model of induction. For a natural > number induction of a proposition P : Nat -> Prop, we can imagine that, for > any n, we can eventually build up a proof of P(n). We do this by starting > by proving P(0), using that to prove P(1), using that (perhaps both of > those) to prove P(2), and so on. After n+1 steps, we'll prove P(n). Thus, > we can prove P(n) in finite time, for any finite n. Accordingly, we can > conclude (forall n, P(n)). > > However, this approach -- prove a proposition by building up a finite > number of sub-proofs -- fails for my `ty`, below. (This `ty` is, by sheer > coincidence, a representation of the ordinals, as used in, e.g., Abel and > Altenkirch's http://www.cse.chalmers.se/~abela/foetuswf.pdf < > http://www.cse.chalmers.se/~abela/foetuswf.pdf>) It would take an > unbounded number of sub-proofs in order to prove a proposition over a `ty` > made with `Mk`. > > The insight I gained from the answers is that it's OK to require this > unbounded number of sub-proofs. Equivalently, it might take an infinite > amount of time to prove a proposition, but that doesn't cause trouble. The > well-foundedness of this approach, called transfinite induction, is proved > in Abel and Altenkirch's paper, above. I have not tried to digest it, but I > trust the result. Indeed, some responses have pointed out that because > structural induction is equivalent in power to transfinite induction, it > might be easier to teach the latter by reference to the former. 
> > Dybjer (http://www.cse.chalmers.se/~peterd/papers/Inductive_Families.pdf < > http://www.cse.chalmers.se/~peterd/papers/Inductive_Families.pdf>) also > gives a thorough treatment of inductive types. Having been satisfied by the > Abel and Altenkirch paper, I have not dived into the details here, but it > seems to address my concern, as well. > > Another intriguing answer is that some aspect of mathematics is, and > always will be, an article of faith. We can use the edifice of mathematics > built up from, say, set theory to prove the well-foundedness of transfinite > induction. Or, if we use type theory as the basis for mathematics, > structural induction is taken as a given. Either way, we're basing our > result on unprovable axioms. > > Once again, thanks to all. I've learned a new nugget of knowledge through > this interaction! > Richard > > > On Jan 6, 2018, at 5:23 PM, Richard Eisenberg > wrote: > > > > [ The Types Forum, http://lists.seas.upenn.edu/ > mailman/listinfo/types-list ] > > > > Does anyone know a place I could see a statement of how structural > induction works, spelled out in full detail? I don't need a "canonical" > (i.e. citable) reference, per se, but a place I can enhance my own > understanding of structural induction. > > > > My problem is that I had been blithely assuming that structural > induction was reducible to natural-number induction, where the natural > number was the size (number of nodes in the tree) of the structure. But > this is clearly not true, as both Coq and Agda accept this (written in Coq > notation): > > > > Inductive ty := > > | Ax : ty > > | Suc : ty -> ty > > | Mk : (nat -> ty) -> ty. > > > > Fixpoint f (t : ty) : nat := > > match t with > > | Ax => 1 > > | Suc t' => 1 + f t' > > | Mk x => 1 + (f (x 2)) > > end. > > > > Note that there's no good way (that I see) of denoting the size of such > a structure. And yet, Coq feels confident that my f will terminate. 
Indeed, > I certainly don't have a counterexample to this, and I don't believe there > is one. > > > > So, I'm looking for a description of structural induction that explains > how the function above is terminating in a rigorous way. Bonus points if it > also shows how non-strictly-positive recursive occurrences of a type in its > own definition cause structural induction to fail. > > > > Thanks! > > Richard > > > > -=-=-=-=-=-=-=-=-=-=- > > Richard A. Eisenberg, PhD > > Asst. Prof. of Computer Science > > Bryn Mawr College > > Bryn Mawr, PA, USA > > cs.brynmawr.edu/~rae > >

From frederic.blanqui at inria.fr Mon Jan 15 02:59:59 2018 From: frederic.blanqui at inria.fr (Frédéric Blanqui) Date: Mon, 15 Jan 2018 08:59:59 +0100 Subject: [TYPES] Statement of structural induction In-Reply-To: References: <6D509566-803B-4DBB-915C-6E0C6AB1CF3F@cs.brynmawr.edu> Message-ID:

On 14/01/2018 at 02:18, Richard Eisenberg wrote: > Indeed, some responses have pointed out that because structural induction is equivalent in power to transfinite induction, it might be easier to teach the latter by reference to the former. Hi. I would perhaps not say that it is equivalent but, for sure, structural induction can be justified by transfinite induction, which itself is a particular case of, and is equivalent to, well-founded induction. By the way, let me recall that the idea that well-founded induction should be preferred to transfinite induction and that, indeed, transfinite induction can be replaced by well-founded induction, is quite old. I believe that the oldest work about this is Kuratowski's 1922 paper (in French) on "Une méthode d'élimination des nombres transfinis des raisonnements mathématiques" (A method of elimination of transfinite numbers in mathematical proofs) in Fundamenta Mathematicae (see https://eudml.org/doc/213282). Best regards, Frédéric.
From nicolai.kraus at gmail.com Mon Jan 15 19:25:43 2018 From: nicolai.kraus at gmail.com (Nicolai Kraus) Date: Tue, 16 Jan 2018 00:25:43 +0000 Subject: [TYPES] Objects in type theory In-Reply-To: References: Message-ID:

Hello, as Bas wrote (I think his message was not published on the list, but thanks for notifying me!), Hickey's very dependent types have been of some interest in homotopy type theory, and I have mentioned them in my thesis [1]. However, the context is very different from Tim's and I am not sure whether this is relevant to the question. Not much has actually been said about very dependent types in HoTT, and in my thesis, I only mention them in the "future directions" chapter. I don't think anyone has made a suggestion of how to add very dependent types to HoTT (or basic MLTT) so far, and I also don't know how one might do that. The reason why they came up is that, in HoTT, one problem is whether type-valued diagrams/presheaves can be represented internally, in particular if the index category is the simplex category Delta restricted to injective maps [2]; this is known as the challenge of defining semisimplicial types. The standard approach in HoTT is trying to write down a Reedy fibrant diagram that encodes the dependency structure, e.g. as:

  Points : Type
  Edges : Points -> Points -> Type
  Triangles : (a b c : Points) -> Edges(a,b) -> Edges(b,c) -> Edges(a,c) -> Type

and so on. (I have learned about this presentation of structures in Shulman's work [3], and I suppose the general concept is better known in the context of Makkai's FOLDS [4] in this community.) Hickey's "very dependent function types" (p5 in his paper) seem to be a type-theoretic way of talking about Reedy fibrant diagrams, and that's the reason behind the interest in HoTT that Bas has pointed out. But this has not led to anything so far.
Best, Nicolai Kraus [1] http://www.cs.nott.ac.uk/~psznk/docs/thesis_nicolai.pdf [2] https://ncatlab.org/nlab/show/simplex+category [3] Shulman, https://arxiv.org/abs/1203.3253 [4] Makkai, http://www.math.mcgill.ca/makkai/folds/foldsinpdf/FOLDS.pdf On 12/01/18 10:18, Bas Spitters wrote: > There's a recent interest in them in HoTT, e.g. the discussion in the > thesis by Nicolai Kraus. > http://eprints.nottingham.ac.uk/28986/ > > I am not sure what the best reference is for this, but I am sure > Nicolai and Thorsten have more information. > > Bas > > On Fri, Jan 12, 2018 at 1:37 AM, Tim Sweeney wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Hickey's 1996 "Formal Objects in Type Theory using Very Dependent Types" ( >> http://www.cs.cornell.edu/jyh/papers/fool3/paper.pdf) laid an interesting >> type theoretical foundation for object oriented programming constructs. >> >> I'm seeking references on any later work in this direction. Of particular >> value would be papers that explore the following: >> >> 1. Referencing superclass fields in derived classes. E.g. A Java >> programmer might create a class with a member function like "void >> updateObject() {updateNewThings(); super.updateObject();}". I can't see >> how to patch this on top of Hickey's framework without losing the >> straightforward classes-as-types mapping. >> >> 2. Virtual classes (https://en.wikipedia.org/wiki/Virtual_class). Perhaps >> F-bounded polymorphism comes to the rescue. >> >> 3. Ensuring open-world forward-compatibility across modules. Java and C# >> define a broad set of ways that a given module may evolve without breaking >> compatibility with existing code in other modules which reference it. For >> example, one may safely add a new member function to a class. The >> supporting subtyping mechanisms are well-understood, but naming and >> disambiguation are required too. 
>> >> Thanks, >> >> Tim Sweeney

From kbbruce47 at gmail.com Tue Jan 16 01:23:11 2018 From: kbbruce47 at gmail.com (Kim Bruce) Date: Mon, 15 Jan 2018 22:23:11 -0800 Subject: [TYPES] Objects in type theory In-Reply-To: References: Message-ID: <6BA1BB9A-5493-4DE7-815E-05D70D443579@gmail.com>

A paper not cited in that one that may be relevant is "Comparing Object Encodings," by Kim B. Bruce, Luca Cardelli, and Benjamin C. Pierce, Information and Computation 155 (1999), pp. 108-133. My book, Foundations of Object-Oriented Languages: Types and Semantics, MIT Press, 2002, goes into more detail on the encodings, including details on handling super accesses. Of course, Pierce's text, Types and Programming Languages, 2002, also from MIT Press, discusses many of these issues in the context of a broader look at types. Lots of the work on foundations for Scala should also be relevant. Kim Bruce. > On Jan 11, 2018, at 4:37 PM, Tim Sweeney wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hickey's 1996 "Formal Objects in Type Theory using Very Dependent Types" ( > http://www.cs.cornell.edu/jyh/papers/fool3/paper.pdf) laid an interesting > type theoretical foundation for object oriented programming constructs. > > I'm seeking references on any later work in this direction. ... > Thanks, > > Tim Sweeney

From stefan.ciobaca at gmail.com Sat Feb 3 17:03:18 2018 From: stefan.ciobaca at gmail.com (Stefan Ciobaca) Date: Sun, 4 Feb 2018 00:03:18 +0200 Subject: [TYPES] preterms Message-ID:

Could anyone help me with a reference to when the concepts of preterm/preproof/pretype were first introduced, or what counts as acceptable definitions of preterms? Unfortunately, google is not too helpful when searching for the terms above. Thanks!
Stefan Ciobaca

From jays at panix.com Mon Feb 5 22:15:51 2018 From: jays at panix.com (Jay Sulzberger) Date: Mon, 5 Feb 2018 22:15:51 -0500 (EST) Subject: [TYPES] preterms In-Reply-To: References: Message-ID:

On Sun, 4 Feb 2018, Stefan Ciobaca wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Could anyone help me with a reference to when the concepts of > preterm/preproof/pretype were first introduced, or what counts as > acceptable definitions of preterms? > > Unfortunately, google is not too helpful when searching for the terms above. > > Thanks! > > Stefan Ciobaca

I believe that the book by M. H. B. Sørensen and P. Urzyczyn, Lectures on the Curry-Howard Isomorphism (May 1998, not the final version), which, I think, is at http://disi.unitn.it/~bernardi/RSISE11/Papers/curry-howard.pdf has a definition on page 2 of Chapter 1. The "nominal sets" of M. J. Gabbay are part of, and also an extension of, one explication of the passage from "preterms" to "terms". https://ncatlab.org/nlab/show/nominal+set I think before "preterms", mathematicians did not completely explicitly distinguish between terms and preterms. There were just lambda expressions, and then an "equivalence" between terms, terms which we now call preterms. The equivalence classes of the equivalence relation, as all now admit, are the actual terms of the lambda calculus. Of course, also in nearby branches of mathematics, we must, on occasion, distinguish "formulae" and "formulae up to careful renaming of bound variables". It is clear that the absolute binary opposition of "free" vs "bound" variables is just the most simple case of the general situation: as we reason, or program, we impose conditions on the objects of our attention, and so in our notations, we will have partly bound and partly un-bound names of things. oo--JS.
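To make the distinction concrete (notation mine, not from the references above):

```
preterms (raw syntax trees):  λx. x  and  λy. y    are two distinct preterms
terms (α-classes):            [λx. x]α = [λy. y]α  is one and the same term
de Bruijn representation:     both preterms denote λ. 0, so the quotient
                              by α-equivalence is built into the syntax
```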
From jays at panix.com Mon Feb 5 23:17:18 2018 From: jays at panix.com (Jay Sulzberger) Date: Mon, 5 Feb 2018 23:17:18 -0500 (EST) Subject: [TYPES] preterms In-Reply-To: References: Message-ID: On Mon, 5 Feb 2018, Jay Sulzberger wrote: > > On Sun, 4 Feb 2018, Stefan Ciobaca wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> Could anyone help me with a reference to when the concepts of >> preterm/preproof/pretype were first introduced, or what counts as >> acceptable definitions of preterms? >> >> Unfortunately, google is not too helpful when searching for the terms >> above. >> >> Thanks! >> >> Stefan Ciobaca > > I believe that the book by M. H. B. Sorenson and P. Urzyczyn > > Lectures on the Curry-Howard Isomorphism (May 1998 not final version) > > which, I think, is at > > http://disi.unitn.it/~bernardi/RSISE11/Papers/curry-howard.pdf > > has a definition on page 2 of Chapter 1. > > The "nominal sets" of M. J. Gabbay are part of, and also an > extension of, one explication explication of the passage from > "preterms" to "terms". Ah, Andrew Pitts is also one of the authors of the theory of nominal sets. My error in attributing nominal sets, as she is spoke today, is due to my too quick reading of the ncatlab page. http://www.cl.cam.ac.uk/teaching/1314/L23/lectures/lecture-1.pdf And I have failed entirely to answer your questions about when were "preterms" formally introduced. ad troubles with scopes of names: See "upward funarg problem". I remember at least one Lisp of the 1960s whose interpreter and compiler would disagree on what certain programs should do. http://archive.computerhistory.org/resources/access/text/2012/08/102746453-05-01-acc.pdf And I have heard that the great 1928 textbook on logic of D. Hilbert and W. Ackermann also had some difficulty with one of the scope rules, and that the difficulty was close to the funarg difficulty. 
https://en.wikipedia.org/wiki/Principles_of_Mathematical_Logic [page was last edited on 26 June 2017, at 00:02] oo--JS. > > https://ncatlab.org/nlab/show/nominal+set > > I think before "preterms", mathematicians did not completely > explicitly distinguish between terms and preterms. There were > just lambda expressions, and then an "equivalence" between terms, > terms which we now call preterms. The equivalence classes of the > equivalence relation, all now admit, are the actual terms of the > lambda calculus. Of course, also in near branches of > mathematics, we must, on occasion, distinguish "formulae" and > "formulae up to careful renaming of bound variables". It is > clear that the absolute binary opposition of "free" vs "bound" > variables is just the most simple case of the general situation: > as we reason, or program, we impose conditions on the objects of > our attention, and so in our notations, we will have partly bound > and partly un-bound names of things. > > oo--JS. > > From uni at hoffjan.de Thu Feb 8 18:12:43 2018 From: uni at hoffjan.de (Jan Hoffmann) Date: Thu, 8 Feb 2018 18:12:43 -0500 Subject: [TYPES] Martin Hofmann (1965-2018) Message-ID: Dear Colleagues, With deep sadness we are writing to let you know that our friend, mentor, colleague, and PhD advisor Martin Hofmann did not return from a hike at Nikko Shirane in Japan on January 21. Martin received his PhD from the University of Edinburgh in 1995 and most recently held the Gerhard Gentzen Chair for Theoretical Computer Science at LMU Munich. He is survived by his wife and three children. Martin was one of the most talented and knowledgeable scientists we know. He made countless contributions to programming languages and CS theory but seemed to also know just everything about any other field of CS, math, and physics. He was a true academic, driven by the desire to acquire and share knowledge. 
His friendly, humble, and unassuming personality made him a fantastic teacher and collaborator, and the long list of his co-authors is a testimony to his popularity in the community. In many ways, Martin has been a role model for us. We are thankful for his kindness, support, encouragement, and loyalty throughout the years. We are unbelievably sad that he is gone but grateful we knew him. Sincerely, Lennart Beringer Jan Hoffmann Steffen Jost Ulrich Schöpp

From eijiro.sumii at gmail.com Fri Feb 9 12:09:26 2018 From: eijiro.sumii at gmail.com (Eijiro Sumii) Date: Sat, 10 Feb 2018 02:09:26 +0900 Subject: [TYPES] Martin Hofmann (1965-2018) In-Reply-To: References: Message-ID:

[Rewritten and resent after moderation due to overlap with the previous message] NII (the host organization for his trip) has expressed deepest condolences to the family and friends of Professor Martin Hofmann http://shonan.nii.ac.jp/shonan/blog/2018/01/26/a-researcher-who-was-supposed-to-participate-at-nii-shonan-meeting-has-been-missing/ which we share. May he rest in peace. On Fri, Feb 9, 2018 at 8:12 AM, Jan Hoffmann wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear Colleagues, > > With deep sadness we are writing to let you know that our friend, > mentor, colleague, and PhD advisor Martin Hofmann did not return from > a hike at Nikko Shirane in Japan on January 21. > > Martin received his PhD from the University of Edinburgh in 1995 and > most recently held the Gerhard Gentzen Chair for Theoretical Computer > Science at LMU Munich. He is survived by his wife and three children. > > Martin was one of the most talented and knowledgeable scientists we > know. He made countless contributions to programming languages and CS > theory but seemed to also know just everything about any other field > of CS, math, and physics. He was a true academic, driven by the desire > to acquire and share knowledge.
His friendly, humble, and unassuming > personality made him a fantastic teacher and collaborator, and the > long list of his co-authors is a testimony to his popularity in the > community. > > In many ways, Martin has been a role model for us. We are thankful for > his kindness, support, encouragement, and loyalty throughout the > years. We are unbelievably sad that he is gone but grateful we knew > him. > > Sincerely, > > Lennart Beringer > Jan Hoffmann > Steffen Jost > Ulrich Schöpp

From bcpierce at cis.upenn.edu Wed Feb 14 12:53:26 2018 From: bcpierce at cis.upenn.edu (Benjamin C. Pierce) Date: Wed, 14 Feb 2018 12:53:26 -0500 Subject: [TYPES] Best pedagogical presentation of CPS? Message-ID: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu>

Dear Types types, What is the best presentation of the continuation-passing transform and its proof of correctness ("best" in the sense of clarity of both algorithm and proof, not efficiency or generality)? Many thanks, - Benjamin

From francois.pottier at inria.fr Wed Feb 14 13:13:29 2018 From: francois.pottier at inria.fr (François Pottier) Date: Wed, 14 Feb 2018 19:13:29 +0100 Subject: [TYPES] Best pedagogical presentation of CPS? In-Reply-To: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> References: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> Message-ID: <0e3076c1-3572-b595-caeb-36bd2e81fd7f@inria.fr>

Dear Benjamin, On 14/02/2018 at 18:53, Benjamin C. Pierce wrote: > What is the best presentation of the continuation-passing transform and its proof of correctness? I recently taught CPS and its proof of correctness, formulated in a small-step style. Despite the huge literature on CPS I had some surprises and wrote down some of my findings in this paper (in submission): http://gallium.inria.fr/~fpottier/publis/fpottier-cps.pdf The paper is still more complicated than I would like.
I am thinking that perhaps the "small-step simulation diagram" approach is too complex, and a proof based on a big-step operational semantics would be simpler. Investigating this is on my TODO list for next year. The slides of my lecture on CPS are here: https://gitlab.inria.fr/fpottier/mpri-2.4-public/blob/master/slides/fpottier-04.pdf I hope this is useful. Your comments are welcome! -- François Pottier francois.pottier at inria.fr http://gallium.inria.fr/~fpottier/

From Xavier.Leroy at inria.fr Wed Feb 14 14:28:38 2018 From: Xavier.Leroy at inria.fr (Xavier Leroy) Date: Wed, 14 Feb 2018 19:28:38 +0000 Subject: [TYPES] Best pedagogical presentation of CPS? In-Reply-To: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> References: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> Message-ID:

On Wed, Feb 14, 2018 at 7:08 PM Benjamin C. Pierce wrote: > What is the best presentation of the continuation-passing transform and > its proof of correctness ("best" in the sense of clarity of both algorithm > and proof, not efficiency or generality)? > In my lectures I use Plotkin's original CBV presentation (the one that is nicely compositional but produces lots of administrative redexes) and a correctness proof based on big-step semantics that avoids the usual difficulties with administrative redexes. See https://xavierleroy.org/mpri/2-4/, part 3, slides 65 to 71. You can even mechanize the proof, it's a bit awkward in Coq but doable (see part 6 for a naively-named solution, or my paper with Zaynah Dargaye https://xavierleroy.org/bibrefs/Dargaye-Leroy-CPS.html for a de Bruijn solution). For a proof using small-step semantics, I'd rather use Danvy and Nielsen's transformation (which avoids generating administrative redexes) than work through Plotkin's original proof with the colon translation. Finally, for a higher-level presentation you could go straight to monads and use Moggi's computational lambda-calculus to reason about the semantics of monadic terms.
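For reference, Plotkin's compositional call-by-value translation mentioned above can be sketched as follows (informal lambda notation, up to variable naming; the nesting in the application case is where the administrative redexes come from):

```
[[x]]      = λk. k x
[[λx. M]]  = λk. k (λx. [[M]])
[[M N]]    = λk. [[M]] (λm. [[N]] (λn. m n k))
```

For example, translating even the identity value yields [[λx. x]] = λk. k (λx. λk'. k' x), and applying translated terms to each other produces the extra β-redexes that the Danvy-Nielsen transformation is designed to avoid.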
Hope this helps, - Xavier Leroy > Many thanks, > > - Benjamin > >

From wadler at inf.ed.ac.uk Thu Feb 15 08:44:59 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Thu, 15 Feb 2018 11:44:59 -0200 Subject: [TYPES] Best pedagogical presentation of CPS? In-Reply-To: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> References: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> Message-ID:

Amr Sabry and I were the first to show that CPS is sound and complete with a reduction semantics. Sabry and Felleisen had previously shown a translation sound and complete with equational semantics. A reflection on call-by-value. Amr Sabry and Philip Wadler. ACM Transactions on Programming Languages and Systems, 19(6):916-941, November 1997. http://homepages.inf.ed.ac.uk/wadler/papers/reflection-journal/reflection-journal.pdf https://dl.acm.org/citation.cfm?doid=267959.269968 The results are, to my mind, surprisingly straightforward, with the tricky part being to get the source and target calculi and the administrative reductions correct. Plotkin showed soundness but not completeness. When I presented these results at Edinburgh (long before I moved there), Plotkin was in the audience. With some trepidation, I said, "I believe this improves on your result of twenty years ago." Gordon immediately responded, "Yes, but you've had twenty years!" Cheers, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ On 14 February 2018 at 15:53, Benjamin C. Pierce wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Dear Types types, > > What is the best presentation of the continuation-passing transform and > its proof of correctness ("best" in the sense of clarity of both algorithm > and proof, not efficiency or generality)?
> > Many thanks, > > - Benjamin > >

From adamc at csail.mit.edu Thu Feb 15 10:31:04 2018 From: adamc at csail.mit.edu (Adam Chlipala) Date: Thu, 15 Feb 2018 10:31:04 -0500 Subject: [TYPES] Best pedagogical presentation of CPS? In-Reply-To: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> References: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> Message-ID:

On 02/14/2018 12:53 PM, Benjamin C. Pierce wrote: > What is the best presentation of the continuation-passing transform and its proof of correctness ("best" in the sense of clarity of both algorithm and proof, not efficiency or generality)? I'm not too well-informed about all the different varieties of CPS transform, but one for simply typed lambda calculus from my Lambda Tamer library might qualify as appealingly pedagogical. It takes up a bit over 100 lines of Coq source code, including defining the transform and proving its correctness. The semantics is a tagless interpreter, which greatly simplifies things vs. operational semantics. Unfortunately, I did this project in the days before GitHub, and its SourceForge-hosted web page seems to be offline just now! But here's a temporary link to the file in question (found in examples/STLC/CPSify.v in the full distribution).

From P.B.Levy at cs.bham.ac.uk Thu Feb 15 14:10:34 2018 From: P.B.Levy at cs.bham.ac.uk (Paul Blain Levy) Date: Thu, 15 Feb 2018 19:10:34 +0000 Subject: [TYPES] Best pedagogical presentation of CPS? In-Reply-To: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> References: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> Message-ID: <221b352b-95b8-a455-8e1e-7d56ad3b0d40@cs.bham.ac.uk>

Dear Benjamin, This probably doesn't answer your question, but... In my view, the call-by-value CPS transform is a composite of three transforms: 1. From call-by-value lambda-calculus to fine-grain call-by-value. 2.
From fine-grain call-by-value to call-by-push-value. 3. From call-by-push-value to a special fragment of lambda-calculus, where nothing returns. (I like to call it "Calculus of no return".)
Part (3) is in my view the "real" CPS transform. Parts (1) and (2) are about general effects. The part which involves fiddling (administrative redexes) is (1). That's because (coarse-grain) call-by-value doesn't distinguish between values and computations, so when it sees a value that's a tuple of tuples of tuples (say), it won't realize that it's a value and will reevaluate it. Correctness of (2) and (3) is immediate if you use a CK-machine (computation + stack) to present the operational semantics. Agreement of the CK-machine and big-step semantics is also easy. This is in my book (and thesis, for an older version). For a more recent but brief story, here are the slides from my course on lambda-calculus, effects and call-by-push-value at Midlands Graduate School 2017: www.cs.bham.ac.uk/~pbl/mgsfastlam.pdf The last part ("Control") is about CPS, but it only gives the outline, so it doesn't show the translation on terms. Paul
On 14/02/18 17:53, Benjamin C. Pierce wrote: > [...]
From yuting at cs.umn.edu Sat Feb 17 20:59:32 2018 From: yuting at cs.umn.edu (Yuting Wang) Date: Sat, 17 Feb 2018 20:59:32 -0500 Subject: [TYPES] Best pedagogical presentation of CPS? In-Reply-To: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> References: <515F02AE-F138-4EE9-99BC-30FBF650A5ED@cis.upenn.edu> Message-ID:
Dear Benjamin, As part of my thesis work, I have gone through the exercise of formalizing the CPS transformation in the style of Danvy & Filinski for simply typed lambda terms.
Both the specification and the correctness proofs of the transformation exploit a form of higher-order abstract syntax, which leads to very concise code and proofs. For example, the essential part of the specification is given as follows:
cps (nat N) K (K (nat N)). cps (abs R) K (K (abs k\ abs x\ R' k x)) :- pi k\ pi x\ (pi k'\ cps x k' (k' x)) => cps (R x) (y\ app k y) (R' k x). cps (app M1 M2) K M' :- pi x1\ cps M2 (x2\ app (app x1 (abs K)) x2) (M2' x1), cps M1 M2' M'.
It is basically a direct translation of Figure 2 in Danvy & Filinski's paper (https://doi.org/10.1017/S0960129500001535), albeit in a relational style. You can find the full source code at http://www.cs.yale.edu/homes/wang-yuting/cps/. The dynamic semantics of the language is given in a small-step style. The proof of semantics preservation is based on a logical relation. You can also find a more complete story on formalizing transformations on functional programs using our HOAS-based approach via the following link: http://sparrow.cs.umn.edu/compilation Best Regards, - Yuting http://www.cs.yale.edu/homes/wang-yuting/
On Wed, Feb 14, 2018 at 12:53 PM, Benjamin C. Pierce wrote: > [...]
-- Yuting Wang University of Minnesota, Dept. of Computer Science http://www.cs.umn.edu/~yuting
From sestini.filippo at gmail.com Sun Feb 25 06:06:46 2018 From: sestini.filippo at gmail.com (Filippo Sestini) Date: Sun, 25 Feb 2018 12:06:46 +0100 Subject: [TYPES] Typed lambda calculi with weak conversion Message-ID:
Dear all, The notion of weak reduction first appeared in [1], and was later developed into the definition of a weak lambda calculus in, for example, [2].
The crucial property of weak reduction is that it does not validate the ξ rule (congruence under lambda), but some form of computation under binders is still possible. I'm looking for references in the literature treating typed calculi with full lambda syntax and weak conversion, in particular regarding normalization. What I've found so far seems to either be limited to a form of combinator syntax, or only consider evaluation to weak head normal form ([3], [4]). Do you have any suggestions? Thank you Sincerely
[1] W.A. Howard, Assignment of Ordinals to Terms for Primitive Recursive Functionals of Finite Type, 1970
[2] N. Çağman, J.R. Hindley, Combinatory weak reduction in lambda calculus, 1998
[3] P. Martin-Löf, An Intuitionistic Theory of Types: Predicative Part, 1975
[4] T. Coquand, P. Dybjer, Intuitionistic Model Constructions and Normalization Proofs, 1998
-- Filippo Sestini sestini.filippo at gmail.com (+39) 333 6757668
From gabriel.scherer at gmail.com Sun Feb 25 10:34:34 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Sun, 25 Feb 2018 16:34:34 +0100 Subject: [TYPES] Typed lambda calculi with weak conversion In-Reply-To: References: Message-ID:
# Çağman and Hindley's notion of weak reduction
For those who would not be familiar with the work of Çağman and Hindley, 1998 (it is a nice paper that is still readable today, so don't hesitate to have a look), they give a definition of weak reduction (as opposed to strong/full reduction) that is a bit stronger (and arguably better-behaved) than the standard one: instead of saying "don't reduce under lambdas", they say "only reduce closed terms" -- this is essentially the same thing, except that they allow themselves to reduce closed sub-terms that occur under a lambda. For example: (lam y. (lam x. x) (lam z. z)) would not reduce for the common notion of weak reduction ((lam y . []) is not an evaluation context), but it does reduce to (lam y. (lam z. z)) for their notion of weak reduction.
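The example above can be checked mechanically. Here is a small Python sketch of one straightforward reading of the "only reduce closed terms" definition (tuple-encoded terms; the names `fv`, `subst`, and `ch_step` are ours, not from the Çağman-Hindley paper):

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def fv(t):
    """Free variables of a term."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return fv(t[2]) - {t[1]}
    return fv(t[1]) | fv(t[2])

def subst(t, x, v):
    """Substitute v for x in t; v is closed here, so no capture can occur."""
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def ch_step(t):
    """One Cagman-Hindley weak step: contract any CLOSED beta-redex,
    even one sitting under a lambda. Returns None if no such redex."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam' and not fv(t):   # the whole redex is closed
            return subst(f[2], f[1], a)
        r = ch_step(f)
        if r is not None:
            return ('app', r, a)
        r = ch_step(a)
        if r is not None:
            return ('app', f, r)
    elif t[0] == 'lam':                   # plain weak reduction would stop here
        r = ch_step(t[2])
        if r is not None:
            return ('lam', t[1], r)
    return None

t = ('lam', 'y', ('app', ('lam', 'x', ('var', 'x')), ('lam', 'z', ('var', 'z'))))
print(ch_step(t))  # ('lam', 'y', ('lam', 'z', ('var', 'z')))
```

On the term above, a reducer that refuses to go under lambdas would return no step at all, while `ch_step` finds the closed redex under `lam y` and contracts it.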
(Luc Maranget once claimed to me that this is the "right" notion of weak reduction.)
One nice idea in the paper of Çağman and Hindley is that this can be characterized very nicely by using the no-variable-capture property of substitutions (as opposed to context plugging). Strong/full reduction can be defined by saying that, for any head-reduction from A to B (a "head" reduction is when A is the redex, instead of merely containing the redex as a subterm; if you only have functions, this is when A is of the form ((lam x. C) D)): C[A] reduces to C[B] for an arbitrary context C (just a term with a hole, with no particular restriction to a family of evaluation contexts). Çağman and Hindley remark that their notion of weak reduction can be characterized by saying that, for any head-reduction from A to B, C[A/x] reduces to C[B/x]. Using (capture-avoiding) substitution instead of context plugging imposes that A and B do not contain a variable bound in C, which precisely corresponds to their notion of weak reduction. (Consider (lam y . z) and (lam y . y): the former can be written as (lam y . x)[z/x], but the latter is not equal to (lam y . x)[y/x].)
# Normalization of weak reduction
To answer Filippo's question: I don't think that enlarging the notion of weak reduction to reduce closed subterms changes the normalization behavior in any way: the two notions of weak reduction have the same normalization properties. In particular, consider a term of the form C[A] where A is a closed sub-term in the sense that it is under some lambdas in C, but does not use their bound variables. A is not reducible under the no-reduction-under-lambda interpretation (the common one), but it is reducible under the reduce-closed-subterms interpretation (Çağman and Hindley's -- let's call it "CH weak reduction" for short). But this term is beta-equivalent to (lam x. C[x]) A where A is in reducible position under either notion of reduction.
If C[A] was non-normalizable for CH weak reduction, then ((lam x. C[x]) A) is non-normalizable for standard weak reduction. If the standard calculus has a weak-normalization property, then you know that CH-reducing A in C[A] preserves normalization. My intuition suggests that this can be turned into a general construction that extrudes all closed sub-terms of a term, and produces from any term T an extruded term T' such that the CH-weak normal form of T is the standard normal form of T'. (Of course, this is only intuition; one has to do the work.) (This extrusion is type-preserving.) This means that any normalization result for standard weak reduction implies a normalization result for CH weak reduction.
# Explicit substitutions
I believe that CH-weak reduction may be related to calculi with explicit substitutions. CH-weak reduction is "more confluent" than standard reduction, as reducing (lam x. A) B preserves the property that B is in reducible position. (This is not true of standard weak reduction; consider the case where A is (lam y . C).) This property that head reduction preserves the reducibility of subterms also holds for explicit substitution calculi.
On Sun, Feb 25, 2018 at 12:06 PM, Filippo Sestini wrote: > [...]
From oleg at okmij.org Mon Feb 26 06:24:10 2018 From: oleg at okmij.org (Oleg) Date: Mon, 26 Feb 2018 20:24:10 +0900 Subject: [TYPES] A new take on bracket abstraction, and a historical question about the first take Message-ID: <20180226112410.GA4286@Magus.localnet>
There have been recently several interesting historical threads on this list. I also have a question, which came up when developing a new approach to translating lambda-terms to SK combinators. Incidentally, this new approach may be of interest to this list because of the close relationship with types: the meaning derivation (i.e., translation) mirrors the typing derivation. The translation of course works in the general un(i)typed lambda calculus. One may thus say, types are useful even when they are absent.
The traditional approach -- bracket abstraction -- is syntactic: a term-rewriting system. It is formulated as a set of ordered re-writing rules, with side conditions checking if a particular variable occurs free within a term. The new approach is semantic: the SK term for a given possibly open lambda-expression is _compositionally_ computed as the expression's denotation. It is described in a paper accepted for FLOPS, whose near-final version is available at http://okmij.org/ftp/tagless-final/ski.pdf There is also code with many variations and examples: http://okmij.org/ftp/tagless-final/skconv.ml
My question is to check my understanding of the origins of the bracket abstraction.
As far as I can see from reading the Stanford Encyclopedia of Philosophy and Schoenfinkel's original paper, Schoenfinkel was the first to describe the main idea: converting a combinator term with variables into a form in which all of its variables are at the margin. However, Curry should be credited with showing that lambda-calculus is combinatorially complete (that every lambda-expression can be represented as a combinator applied to the series of the expression's free variables). Curry's bracket abstraction is essentially Schoenfinkel's. Although Schoenfinkel introduced the B and C combinators (called differently, though), his bracket abstraction only deals with S and K. It is David Turner who has to be credited with the optimizations. Is this understanding correct?
From gabriel.scherer at gmail.com Mon Feb 26 08:11:59 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Mon, 26 Feb 2018 14:11:59 +0100 Subject: [TYPES] A new take on bracket abstraction, and a historical question about the first take In-Reply-To: <20180226112410.GA4286@Magus.localnet> References: <20180226112410.GA4286@Magus.localnet> Message-ID:
Dear Oleg (and list), Thanks for the reference to your work, which I didn't know about. Below is a question. You highlight that your translation to the combinator calculus is compositional. This is also a property of the Categorical Abstract Machine of Guy Cousineau, Pierre-Louis Curien and Michel Mauny (1987). They used a combinator calculus extended with tuples and projections, which corresponds to a syntax for morphisms in cartesian-closed categories, and makes it easy to represent the environment as nested tuples and variables as de-Bruijn-index accessors. Can your compositional translation be understood as a curryfication/de-tuplification of the CAM compilation strategy (you remark that your semantics has a currified environment)?
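For reference, the classical syntactic bracket abstraction that both messages contrast with -- the Schoenfinkel/Curry S-K-I rewrite, without Turner's optimizations -- can be sketched as follows. This is an illustrative Python model (all names are ours), with a tiny head-reduction evaluator to check the output:

```python
# Combinator terms: the strings 'S', 'K', 'I', plus ('var', x) and ('app', f, a).
# Source lambda terms additionally use ('lam', x, body).

def free_in(x, t):
    if isinstance(t, str):                       # S, K, I
        return False
    if t[0] == 'var':
        return t[1] == x
    if t[0] == 'lam':
        return t[1] != x and free_in(x, t[2])
    return free_in(x, t[1]) or free_in(x, t[2])  # app

def bracket(x, t):
    """[x]t: classic S/K/I abstraction (Curry's rules, after Schoenfinkel)."""
    if t == ('var', x):
        return 'I'
    if not free_in(x, t):
        return ('app', 'K', t)
    _, f, a = t                                  # must be an application here
    return ('app', ('app', 'S', bracket(x, f)), bracket(x, a))

def compile_term(t):
    """Translate a lambda term to a combinator term, innermost lambdas first."""
    if isinstance(t, str) or t[0] == 'var':
        return t
    if t[0] == 'app':
        return ('app', compile_term(t[1]), compile_term(t[2]))
    return bracket(t[1], compile_term(t[2]))     # lam

def rebuild(h, rest):
    for a in rest:
        h = ('app', h, a)
    return h

def step(t):
    """One head step of S/K/I reduction, or None."""
    if not (isinstance(t, tuple) and t[0] == 'app'):
        return None
    spine, args = t, []
    while isinstance(spine, tuple) and spine[0] == 'app':
        args.append(spine[2])
        spine = spine[1]
    args.reverse()
    if spine == 'I' and len(args) >= 1:
        return rebuild(args[0], args[1:])
    if spine == 'K' and len(args) >= 2:
        return rebuild(args[0], args[2:])
    if spine == 'S' and len(args) >= 3:
        x, y, z = args[0], args[1], args[2]
        return rebuild(('app', ('app', x, z), ('app', y, z)), args[3:])
    return None

def normalize(t, limit=1000):
    while limit:
        r = step(t)
        if r is None:
            return t
        t, limit = r, limit - 1
    return t

k = compile_term(('lam', 'x', ('lam', 'y', ('var', 'x'))))
print(k)  # ('app', ('app', 'S', ('app', 'K', 'K')), 'I')
```

Note how the ordered rules with a free-variable side condition (the `free_in` check) show up directly, which is the "syntactic, term-rewriting" character of the traditional approach; the semantic translation in the FLOPS paper avoids exactly this case analysis.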
(Your De Bruijn presentation is more general than the one used in the CAM, as you represent the "successor" operation S as a general term combinator corresponding to weakening, which suffices to express some of Turner's optimisations. I suppose that the question makes precise sense only when you restrict your system to use S on "variable" terms of the form (S^n z), and don't use your rule Abs0. But any scope-related typing rules could easily be taken into account by an optimized CAM translation, so you could also compare your system with such an optimized CAM.) Best
On Mon, Feb 26, 2018 at 12:24 PM, Oleg wrote: > [...]
From sestini.filippo at gmail.com Tue Feb 27 06:34:22 2018 From: sestini.filippo at gmail.com (Filippo Sestini) Date: Tue, 27 Feb 2018 12:34:22 +0100 Subject: [TYPES] Typed lambda calculi with weak conversion In-Reply-To: <961-5a93e380-57-15132060@156992327> References: <961-5a93e380-57-15132060@156992327> Message-ID:
Dear all, Thank you for your helpful comments. I am not familiar with the suggested references, so I will certainly give them a read. As Gabriel said, it is not too hard to derive CH-normalization results by looking at how other notions of reduction behave. In fact I should clarify that, for this reason, I am more interested in literature dealing with normalization algorithms producing weak normal forms.
I agree with your suggested connection with explicit substitutions. In fact, in [1] the authors give a very interesting version of the weak lambda calculus formulated with explicit substitutions. The translation from the original calculus to explicit substitutions, called "I" in the paper, basically corresponds to your extrusion, in that it extracts all "maximal" redexes from a term and puts them into a substitution.
Then {I(M)} = M, where {_} performs all suspended substitutions. Unfortunately, it seems to me that their explicit substitution calculus cannot be used to reduce terms to weak normal form under this translation, in the sense that it is not the case, in general, that if M normalizes to N, then I(M) normalizes to P such that {P} = N. The reason is that beta reduction may unveil redexes that were not there before. Under a maximal translation, this means it may ruin the maximality property of substitutions. Consider a term M = (lam x . (lam z . z) x) A, where A is closed and in normal form. M has a weak normal form N = A. The maximal translation on M yields (lam x . y x)[lam z . z/y] I(A), and a beta contraction gives P = (y x)[lam z . z/y, I(A)/x], which is normal in the explicit substitution calculus. However, {P} = (lam z . z) A /= A. Of course the authors were well aware of this, as they stated the property I referred to for single-step reductions only (Proposition 6). Looking forward to any other comments or suggestions that you may have. Sincerely
[1] J.-J. Lévy and L. Maranget, Explicit Substitutions and Programming Languages, 1999
-- Filippo Sestini sestini.filippo at gmail.com (+39) 333 6757668
> On Feb 26, 2018, at 11:38 AM, Delia Kesner wrote: > > Dear Filippo, > I've studied intersection types for weak reduction calculi > (call-by-need, which is a form of weak reduction, and weak neededness) > in two articles that you will find here: > > https://www.irif.fr/~kesner//papers/call-by-need.pdf > https://www.irif.fr/~kesner//papers/fossacs2018.pdf > > All the best > Delia > > On Sunday, February 25, 2018 at 12:06 CET, Filippo Sestini wrote: > >> [...]
From t.altenkirch at googlemail.com Thu Mar 1 07:15:27 2018 From: t.altenkirch at googlemail.com (Thorsten Altenkirch) Date: Thu, 01 Mar 2018 12:15:27 +0000 Subject: [TYPES] Typed lambda calculi with weak conversion Message-ID:
Hi Filippo, you can define weak equality using a substitution calculus. The basic idea is that you have normal forms of the form (\x.t)[us] and laws like ((\x.t)[us])[vs] = (\x.t)[us o vs], see http://www.cs.nott.ac.uk/~psztxa/publ/jtait07.pdf, p.6. Cheers, Thorsten
On 25/02/2018, 11:06, "Types-list on behalf of Filippo Sestini" wrote: >[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >Dear all, > >The notion of weak reduction first appeared in [1], and was later >developed into the definition of a weak lambda calculus in, for >example, [2]. The crucial property of weak reduction is that it >does not validate the ξ rule, but some form of computation under >binders is still possible.
> >I'm looking for references in the literature treating typed >calculi with full lambda syntax and weak conversion, in >particular regarding normalization. What I've found so far seems >to either be limited to a form of combinator syntax, or only >consider evaluation to weak head normal form ([3], [4]). >Do you have any suggestions? > >Thank you > >Sincerely > >[1] W.A. Howard, Assignment of Ordinals to Terms for Primitive >Recursive Functionals of Finite Type, 1970 > >[2] N. Çağman, J.R. Hindley, Combinatory weak reduction in lambda >calculus, 1998 > >[3] P. Martin-Löf, An Intuitionistic Theory of Types: Predicative >Part, 1975 > >[4] T. Coquand, P. Dybjer, Intuitionistic Model Constructions and >Normalization Proofs, 1998 > >-- >Filippo Sestini >sestini.filippo at gmail.com >(+39) 333 6757668 >
From thibaut.balabonski at free.fr Thu Mar 1 09:09:46 2018 From: thibaut.balabonski at free.fr (Thibaut Balabonski) Date: Thu, 1 Mar 2018 15:09:46 +0100 Subject: [TYPES] Typed lambda calculi with weak conversion In-Reply-To: References: Message-ID: <0F282CF0-7D10-48C5-8F6B-7F6CFF42B626@free.fr>
Dear Filippo, dear Gabriel, One reason Çağman and Hindley's notion of weak reduction is appreciated might be that it makes the weak lambda-calculus behave exactly like an orthogonal first-order term rewriting system (which basically means "simple and nicely confluent"). In particular it never hides a redex, as in a term (\xy.x)((\z.z) a) that would be reduced in normal order reduction to a weak normal form \y.((\z.z) a), where the redex that was once visible in the argument is now hidden. I assume that when you speak of "normalization" you mean "strong normalization", that is, requiring that any reduction strategy reaches a normal form (in contrast to "weak normalization", which only requires the existence of one -- potentially non-computable -- normalizing strategy). Otherwise both variants of weak reduction will definitely be different. One proof of the equivalence is in Sec.
4 of this paper: https://www.lri.fr/~blsk/Docs/Balabonski-FullLaziness-POPL12.pdf
Regarding the "extrusion" transformation, this looks really similar to the hoisting of maximal free expressions. The transformation is present in the usual presentation of fully lazy lambda-lifting (see SPJ's The Implementation of Functional Programming Languages). It is also presented as a program transformation that implements fully lazy reduction using a "simply lazy" evaluation mechanism (see Ariola and Felleisen's JFP version of the call-by-need lambda-calculus). Thibaut.
> On Feb 25, 2018, at 16:34, Gabriel Scherer wrote: > > [...]
From zpalmer2 at swarthmore.edu Mon Mar 19 09:52:26 2018 From: zpalmer2 at swarthmore.edu (Zachary Palmer) Date: Mon, 19 Mar 2018 09:52:26 -0400 Subject: [TYPES] Existing Work on Function Destructors or Haskell Type Spec? Message-ID: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu>
Hi, all! I have a couple things I've been trying to find to no avail for a while, so I thought I'd ask the help of the list.
The first is prior work involving typed function destructors other than application. Cloud Haskell seems close, as the runtime essentially allows a restricted form of transmission of lambdas over a network and that requires the closures to be serialized. I can't help but think, though, that there's some core theory I'm missing on the topic. Is anyone familiar with any work of that sort?
Second, I've also had little luck finding a formal type specification for Haskell. The 1999 workshop paper "Typing Haskell in Haskell" by Mark Jones seems to be the closest thing I could find with search engines, and that's basically a Literate Haskell file. Does anyone know of an inference rule-style type system specification for the language? Thanks for your time! Best, Zach
From sestini.filippo at gmail.com Mon Mar 19 16:44:53 2018 From: sestini.filippo at gmail.com (Filippo Sestini) Date: Mon, 19 Mar 2018 21:44:53 +0100 Subject: [TYPES] Existing Work on Function Destructors or Haskell Type Spec?
In-Reply-To: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> References: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> Message-ID: Hello, To address your second request, as far as I know there is no single, comprehensive formal specification of (GHC) Haskell's type system. In the view of some of its creators [6], one reason is that full formalization somewhat hinders language evolution and experimentation. Rather, many of Haskell's constituent pieces have been individually described over the years in the various research papers introducing them. Examples from the type system are [1] for GADTs, and [2] for closed type families. Moreover, the GHC implementation of Haskell is based on a core intermediate language, also known as GHC Core or System FC, to which the entire language must elaborate. System FC and its subsequent extensions have been formally specified [3, 4, 5]. In a sense, a definition of Haskell's type system can be (partially) inferred from the way it desugars to System FC. [1] Vytiniotis, Weirich, Peyton Jones, Simple unification-based type inference for GADTs, 2006 [2] Eisenberg, Vytiniotis, Peyton Jones, Weirich, Closed Type Families with Overlapping Equations, 2014 [3] https://github.com/ghc/ghc/tree/master/docs/core-spec [4] Sulzmann, Chakravarty, Peyton Jones, Donnelly, System F with type equality coercions, 2007 [5] Weirich, Hsu, Eisenberg, System FC with Explicit Kind Equality, 2013 [6] Hudak, Hughes, Peyton Jones, Wadler, A History of Haskell, 2007 Regards -- Filippo Sestini sestini.filippo at gmail.com (+39) 333 6757668 > On Mar 19, 2018, at 2:52 PM, Zachary Palmer wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi, all! I have a couple things I've been trying to find to no avail for a while, so I thought I'd ask the help of the list. > > The first is prior work involving typed function destructors other than application. 
Cloud Haskell seems close, as the runtime essentially allows a restricted form of transmission of lambdas over a network and that requires the closures to be serialized. I can't help but think, though, that there's some core theory I'm missing on the topic. Is anyone familiar with any work of that sort? > > Second, I've also had little luck finding a formal type specification for Haskell. The 1999 workshop paper "Typing Haskell in Haskell" by Mark Jones seems to be the closest thing I could find with search engines, and that's basically a Literate Haskell file. Does anyone know of an inference rule-style type system specification for the language? > > Thanks for your time! > > Best, > > Zach From rae at cs.brynmawr.edu Mon Mar 19 18:57:48 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 19 Mar 2018 18:57:48 -0400 Subject: [TYPES] Existing Work on Function Destructors or Haskell Type Spec? In-Reply-To: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> References: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> Message-ID: <8FDDA4F4-A960-4D2E-8506-08E78C0C3918@cs.brynmawr.edu> I had this same query several years ago and also was surprised not to find much. Beyond the Jones paper, you may also be interested in Karl-Filip Faxén's "A static semantics for Haskell", JFP'02. You might also find use in my entry into this sparse field, in the extended version of my HS'14 paper "Promoting Type Functions to Type Families in Haskell". That paper doesn't focus on the type theory, but we needed to write down typing rules on the way to our main proof result. You can find the extended version here: https://repository.brynmawr.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1017&context=compsci_pubs I hope this helps! Richard -=-=-=-=-=-=-=-=-=-=- Richard A. Eisenberg, PhD Asst. Prof.
of Computer Science Bryn Mawr College Bryn Mawr, PA, USA cs.brynmawr.edu/~rae > On Mar 19, 2018, at 9:52 AM, Zachary Palmer wrote: > > Second, I've also had little luck finding a formal type specification for Haskell. The 1999 workshop paper "Typing Haskell in Haskell" by Mark Jones seems to be the closest thing I could find with search engines, and that's basically a Literate Haskell file. Does anyone know of an inference rule-style type system specification for the language? From neelakantan.krishnaswami at gmail.com Tue Mar 20 06:49:12 2018 From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami) Date: Tue, 20 Mar 2018 10:49:12 +0000 Subject: [TYPES] Existing Work on Function Destructors or Haskell Type Spec? In-Reply-To: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> References: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> Message-ID: <3f311c5d-c624-6f13-093c-df046d9b8193@gmail.com> On 19/03/18 13:52, Zachary Palmer wrote: > > The first is prior work involving typed function destructors other than > application. Cloud Haskell seems close, as the runtime essentially > allows a restricted form of transmission of lambdas over a network and > that requires the closures to be serialized. I can't help but think, > though, that there's some core theory I'm missing on the topic. Is > anyone familiar with any work of that sort? I do not know what a "function destructor" is, but the theoretical account of the ability to send functions over the network that is closest to what Cloud Haskell is doing is given in Tom Murphy's PhD thesis, "Modal Types for Mobile Code" [1]. The Static type constructor in Cloud Haskell corresponds to the box modality in Murphy's ML5 language. One point worth noting is that the intro/elim forms for the "static/unstatic" keywords in Cloud Haskell don't quite work right -- it doesn't preserve typability under eta-expansion.
That is, if e has the type Static a, there's no way to show that e is equal to static (unstatic e), because static (unstatic e) will not in general be well-typed (for instance, if e is an ordinary variable). A solution to this problem would be if CH had a let-style eliminator of the form let Static x = e in e' which takes a term e of type Static a and binds it to a variable x which is known to be static. Then you could write the eta-expansion e --> let Static x = e in (static x) which would be well-typed. The loss of eta isn't fatal, but it hinders equational reasoning and optimization quite a bit. Interestingly, the rule used in Cloud Haskell is actually the one that logicians first tried when trying to give a proof theory for the box modality, but cut-elimination didn't work properly with it. Eventually people moved to a let-style eliminator (see Davies and Pfenning's "A Judgemental Reconstruction of Modal Logic"), which works better. Best, Neel [1] https://www.cs.cmu.edu/~tom7/papers/modal-types-for-mobile-code.pdf FYI, he and his coauthors also wrote some conference papers about this system, but due to the page count restrictions I found them a bit dense. His thesis is really the most readable account. From wadler at inf.ed.ac.uk Tue Mar 20 09:45:47 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 20 Mar 2018 10:45:47 -0300 Subject: [TYPES] Existing Work on Function Destructors or Haskell Type Spec? In-Reply-To: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> References: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> Message-ID: There has never been a formal description of the type system for all of Haskell, but the following may be relevant: https://dl.acm.org/citation.cfm?doid=227699.227700 Cheers, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . 
http://homepages.inf.ed.ac.uk/wadler/ On 19 March 2018 at 10:52, Zachary Palmer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Hi, all! I have a couple things I've been trying to find to no avail for > a while, so I thought I'd ask the help of the list. > > The first is prior work involving typed function destructors other than > application. Cloud Haskell seems close, as the runtime essentially allows > a restricted form of transmission of lambdas over a network and that > requires the closures to be serialized. I can't help but think, though, > that there's some core theory I'm missing on the topic. Is anyone familiar > with any work of that sort? > > Second, I've also had little luck finding a formal type specification for > Haskell. The 1999 workshop paper "Typing Haskell in Haskell" by Mark Jones > seems to be the closest thing I could find with search engines, and that's > basically a Literate Haskell file. Does anyone know of an inference > rule-style type system specification for the language? > > Thanks for your time! > > Best, > > Zach > From matthias at ccs.neu.edu Tue Mar 20 13:09:54 2018 From: matthias at ccs.neu.edu (matthias at ccs.neu.edu) Date: Tue, 20 Mar 2018 13:09:54 -0400 Subject: [TYPES] Existing Work on Function Destructors or Haskell Type Spec? In-Reply-To: <3f311c5d-c624-6f13-093c-df046d9b8193@gmail.com> References: <6190570b-99b0-a96d-215f-ce7be2a1881a@swarthmore.edu> <3f311c5d-c624-6f13-093c-df046d9b8193@gmail.com> Message-ID: <41819AC1-5330-4488-A85A-81DD99133853@ccs.neu.edu> > On Mar 20, 2018, at 6:49 AM, Neel Krishnaswami wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > On 19/03/18 13:52, Zachary Palmer wrote: >> The first is prior work involving typed function destructors other than application.
Cloud Haskell seems close, as the runtime essentially allows a restricted form of transmission of lambdas over a network and that requires the closures to be serialized. I can't help but think, though, that there's some core theory I'm missing on the topic. Is anyone familiar with any work of that sort? > > I do not know what a "function destructor" is, but the theoretical > account of the ability to send functions over the network that is > closest to what Cloud Haskell is doing is given in Tom Murphy's > PhD thesis, "Modal Types for Mobile Code" In the same spirit, the work of Heather Miller et al. [1] is relevant. It explains the Scala support for Apache Spark (no worries, the paper does contain theoretical material), which sends closures to large sets of data in large clusters of computers. -- Matthias [1] https://www.cambridge.org/core/journals/journal-of-functional-programming/article/programming-model-and-foundation-for-lineagebased-distributed-computation/B410CE79B21E33462843B408B716E1E5 From wadler at inf.ed.ac.uk Mon Mar 26 19:30:32 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Mon, 26 Mar 2018 20:30:32 -0300 Subject: [TYPES] What algebra am I thinking of? Message-ID: Consider a blockchain managing several different resources. Over time, new resources may be added or deleted. Each input to or output from a transaction is associated with a value, where each value consists of associating zero or more resources with amounts, where the amounts are natural numbers (that is, integers greater than or equal to zero). What kind of algebra do values correspond to? It seems similar to vector spaces, except: (a) adding or deleting resources increases or decreases the number of dimensions in the vector space (b) the scalars in the vector space are natural numbers rather than reals What algebra am I thinking of? Cheers, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh .
/ \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ From mlmeola at gmail.com Tue Mar 27 02:49:44 2018 From: mlmeola at gmail.com (Matthew Meola) Date: Tue, 27 Mar 2018 02:49:44 -0400 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: References: Message-ID: I think this might be a module over a semiring. The natural numbers with +,* form a semiring and then, over resources, would form a module over the semiring. For the dimensional issue, you might treat it as infinite dimensional because there are infinitely many potential resources even if not all are available at the moment. And then take the set of sequences with finitely many nonzero terms. This is a subspace and is infinite dimensional, at least for the vector space R-infinity. Maybe that holds for this case as well. On Tue, Mar 27, 2018 at 01:21 Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Consider a blockchain managing several different resources. Over time, new > resources may be added or deleted. Each input to or output from a > transaction is associated with a value, where each value consists of > associating zero or more resources with amounts, where the amounts are > natural numbers (that is, integers greater than or equal to zero). > > What kind of algebra do values correspond to? It seems similar to vector > spaces, except: > (a) adding or deleting resources increases or decreases the number of > dimensions in the vector space > (b) the scalars in the vector space are natural numbers rather than reals > > What algebra am I thinking of? Cheers, -- P > > > . \ Philip Wadler, Professor of Theoretical Computer Science, > . /\ School of Informatics, University of Edinburgh > . / \ and Senior Research Fellow, IOHK > .
http://homepages.inf.ed.ac.uk/wadler/ > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > From radugrigore at gmail.com Tue Mar 27 04:03:11 2018 From: radugrigore at gmail.com (Radu Grigore) Date: Tue, 27 Mar 2018 08:03:11 +0000 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: References: Message-ID: You might want to look at vector addition systems, aka Petri Nets. "Displacements" are from Z^d, but current values are from N^d, as in your case. But, unlike your case, the dimension d is typically finite. The restriction to nonnegative integers makes all sorts of problems harder than they'd otherwise be. On Tue, Mar 27, 2018 at 6:19 AM Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Consider a blockchain managing several different resources. Over time, new > resources may be added or deleted. Each input to or output from a > transaction is associated with a value, where each value consists of > associating zero or more resources with amounts, where the amounts are > natural numbers (that is, integers greater than or equal to zero). > > What kind of algebra do values correspond to? It seems similar to vector > spaces, except: > (a) adding or deleting resources increases or decreases the number of > dimensions in the vector space > (b) the scalars in the vector space are natural numbers rather than reals > > What algebra am I thinking of? Cheers, -- P > > > . \ Philip Wadler, Professor of Theoretical Computer Science, > . /\ School of Informatics, University of Edinburgh > . / \ and Senior Research Fellow, IOHK > . http://homepages.inf.ed.ac.uk/wadler/ > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
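For concreteness, the structure the thread is converging on -- finitely supported maps from resource names to natural numbers, with pointwise addition and a scalar action of N -- can be sketched in a few lines of Python. This is only an illustrative aside, not part of any of the cited formalisms; the names `add` and `scale` are made up, and `Counter` is used simply because it happens to model finitely supported maps.

```python
from collections import Counter

# A "value" assigns a natural-number amount to finitely many resources:
# an element of the free N-semimodule on the (possibly infinite) set of
# resource names. Counter models exactly these finitely supported maps.

def add(u, v):
    """Commutative-monoid addition: pointwise sum of amounts."""
    return Counter(u) + Counter(v)

def scale(n, v):
    """Action of the scalar n : N on a value."""
    return Counter({r: n * k for r, k in v.items() if n * k > 0})

u = Counter({"gold": 2, "silver": 1})
v = Counter({"gold": 1})

assert add(u, v) == Counter({"gold": 3, "silver": 1})
assert scale(0, u) == Counter()                          # 0 * v = 0
assert scale(3, add(u, v)) == add(scale(3, u), scale(3, v))  # distributivity
```

Dropping non-positive counts matches the convention that a resource absent from a value simply has amount zero, so "adding a new resource" is just mentioning a name that previously had count zero.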
> From moggi at disi.unige.it Tue Mar 27 07:38:41 2018 From: moggi at disi.unige.it (Eugenio Moggi) Date: Tue, 27 Mar 2018 13:38:41 +0200 Subject: [TYPES] What algebra am I thinking of? Message-ID: <87y3idg33y.fsf@unige.it> > Consider a blockchain managing several different resources. Over time, new > resources may be added or deleted. Each input to or output from a > transaction is associated with a value, where each value consists of > associating zero or more resources with amounts, where the amounts are > natural numbers (that is, integers greater than or equal to zero). > > What kind of algebra do values correspond to? It seems similar to vector > spaces, except: > (a) adding or deleting resources increases or decreases the number of > dimensions in the vector space > (b) the scalars in the vector space are natural numbers rather than reals > > What algebra am I thinking of? Cheers, -- P Dear Phil, regarding the issue (b) you want to replace the FIELD of the real numbers with the SEMI-RING of natural numbers (N,+,*,0,1). As suggested by Meola the structure should be a module over a RIG, see https://ncatlab.org/nlab/show/module More precisely - a rig (F,*,+,1,0) for the scalars - a commutative monoid (V,+,0) for the vectors - an action *:FxV->V satisfying certain properties, in particular 0*v=0=f*0 When F is a field, one recovers the usual notion of vector space. A vector space V can be infinite dimensional. In the case of a module over a RIG, the definition of base B should be the usual one, namely a subset B of V is a base for V iff - every finite subset of B is linearly independent - every element of V is the linear combination of a finite subset of B but the definition of LINEARLY INDEPENDENT has to be revised, to avoid the use of "negative". A finite subset {v_i|i:n} of V is linearly independent <=> for every a,b:F^n if Sum_i a_i*v_i = Sum_i b_i*v_i, then a=b.
In the module over N proposed by Meola, i.e. the maps from the set of resources to the rig N that have finite support, the base is unique, and there is an obvious definition of inner product, but I doubt you can do much with it. However, there are modules over N that have no base. For instance, in the module Z over N, {1} is too small to be a base, and {1,-1} is too big. Best Regards Eugenio Moggi From harrisonrbrown90 at gmail.com Tue Mar 27 08:10:02 2018 From: harrisonrbrown90 at gmail.com (Harrison Brown) Date: Tue, 27 Mar 2018 08:10:02 -0400 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: <87y3idg33y.fsf@unige.it> References: <87y3idg33y.fsf@unige.it> Message-ID: Hi Phil, I think Meola and Eugenio have it right, and in particular I think you're describing the free module over N generated by the set of resources. (Free modules over rings proper behave pretty much like vector spaces; I don't know much about the theory of free modules over rigs, but the analogy should hold.) And, analogous to how free abelian groups are free modules over Z, I believe that this is isomorphic to a free commutative monoid generated by the set of resources. If you want to forbid the empty transaction, you have a free commutative semigroup. Of course if the resources interact in some way you no longer have a free structure, but you should still have an N-module. Best, Harrison On Tue, Mar 27, 2018 at 7:38 AM, Eugenio Moggi wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ]
It seems similar to vector > > spaces, except: > > (a) adding or deleting resources increases or decreases the number of > > dimensions in the vector space > > (b) the scalars in the vector space are natural numbers rather than > reals > > > > What algebra am I thinking of? Cheers, -- P > > Dear Phil, regarding the issue (b) you want to replace the FIELD of the > real > numbers with the SEMI-RING of natural numbers (N,+,*,0,1). > > As suggested by Meola the structure should be a module over a RIG, see > https://ncatlab.org/nlab/show/module > More precisely > - a rig (F,*,+,1,0) for the scalars > - a commutative monoid (V,+,0) for the vectors > - an action *:FxV->V satisfying certain properties, in particular 0*v=0=f*0 > > When F is a field, one recovers the usual notion of vector space. > > A vector space V can be infinite dimensional. In the case of a module over > a > RIG, the definition of base B should be the usual one, namely a subset of > B is a > base for V iff > > - every finite subset of B is linearly independent > - every element of V is the linear combination of a finite subset of B > > but the definition of LINEARLY INDEPENDENT has to be revised, to avoid the > use > of "negative". A finite subset {v_i|i:n} of V is lineraly independent <=> > for every a,b:F^n if Sum_i a_i*v_i = Sum_i b_i*v_i, then a=b. > > In the module of over N proposed by Meola, ie the maps from the set of > resourses > to the rig N that have finite support, the base is unique, and there is an > obvious definition of inner product, but I doubt you can do much with it. > > However, there are modules over N that have no base. For instance, the > module Z > over N, {1} is too small to be a base, and {1,-1} is too big. > > Best Regards > Eugenio Moggi > From carette at mcmaster.ca Tue Mar 27 08:45:45 2018 From: carette at mcmaster.ca (Jacques Carette) Date: Tue, 27 Mar 2018 08:45:45 -0400 Subject: [TYPES] What algebra am I thinking of? 
In-Reply-To: References: <87y3idg33y.fsf@unige.it> Message-ID: <0fcd5217-cfa3-cec1-7f78-f9a1260ca9c3@mcmaster.ca> Correct. And that (free commutative monoid) is exactly the Bag data-structure, with its union and cartesian product operations. [Where Bag here is understood to be finitely supported, even though the set of resources can be infinite] Jacques On 2018-03-27 8:10 AM, Harrison Brown wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi Phil, > > I think Meola and Eugenio have it right, and in particular I think you're > describing the free module over N generated by the set of resources. (Free > modules over rings proper behave pretty much like vector spaces; I don't > know much about the theory of free modules over rigs, but the analogy > should hold.) And, analogous to how free abelian groups are free modules > over Z, I believe that this is isomorphic to a free commutative monoid > generated by the set of resources. If you want to forbid the empty > transaction, you have a free commutative semigroup. Of course if the > resources interact in some way you no longer have a free structure, but you > should still have an N-module. > > Best, > > Harrison > > On Tue, Mar 27, 2018 at 7:38 AM, Eugenio Moggi wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >>> Consider a blockchain managing several different resources. Over time, >> new >>> resources may be added or deleted. Each input to or output from a >>> transaction is associated with a value, where each value consists of >>> associating zero or more resources with amounts, where the amounts are >>> natural numbers (that is, integers greater than or equal to zero). >>> >>> What kind of algebra do values correspond to?
It seems similar to vector >>> spaces, except: >>> (a) adding or deleting resources increases or decreases the number of >>> dimensions in the vector space >>> (b) the scalars in the vector space are natural numbers rather than >> reals >>> What algebra am I thinking of? Cheers, -- P >> Dear Phil, regarding the issue (b) you want to replace the FIELD of the >> real >> numbers with the SEMI-RING of natural numbers (N,+,*,0,1). >> >> As suggested by Meola the structure should be a module over a RIG, see >> https://ncatlab.org/nlab/show/module >> More precisely >> - a rig (F,*,+,1,0) for the scalars >> - a commutative monoid (V,+,0) for the vectors >> - an action *:FxV->V satisfying certain properties, in particular 0*v=0=f*0 >> >> When F is a field, one recovers the usual notion of vector space. >> >> A vector space V can be infinite dimensional. In the case of a module over >> a >> RIG, the definition of base B should be the usual one, namely a subset of >> B is a >> base for V iff >> >> - every finite subset of B is linearly independent >> - every element of V is the linear combination of a finite subset of B >> >> but the definition of LINEARLY INDEPENDENT has to be revised, to avoid the >> use >> of "negative". A finite subset {v_i|i:n} of V is lineraly independent <=> >> for every a,b:F^n if Sum_i a_i*v_i = Sum_i b_i*v_i, then a=b. >> >> In the module of over N proposed by Meola, ie the maps from the set of >> resourses >> to the rig N that have finite support, the base is unique, and there is an >> obvious definition of inner product, but I doubt you can do much with it. >> >> However, there are modules over N that have no base. For instance, the >> module Z >> over N, {1} is too small to be a base, and {1,-1} is too big. >> >> Best Regards >> Eugenio Moggi >> From wadler at inf.ed.ac.uk Tue Mar 27 10:27:24 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 27 Mar 2018 11:27:24 -0300 Subject: [TYPES] What algebra am I thinking of? 
In-Reply-To: References: <87y3idg33y.fsf@unige.it> Message-ID: Thank you for all the replies, and for reminding me of bags, rigs/semirings, and modules. In a rig, what corresponds to the notion of monus?

_∸_ : ℕ → ℕ → ℕ
m ∸ zero = m
zero ∸ (suc n) = zero
(suc m) ∸ (suc n) = m ∸ n

Is monus defined for every rig? (While zero and suc are defined in every rig, it's not immediately obvious to me that definition by pattern matching as above makes sense in an arbitrary rig.) What are the algebraic properties of monus? James' suggestion of bags with delete is spot on. I would prefer delete to be total rather than partial, so that deleting an element not present in the bag leaves the bag unchanged. If a bag is a free commutative monoid, then how does (total) delete come into the picture? Does it have an algebraic characterisation? Cheers, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ On 27 March 2018 at 10:42, Sanjiva Prasad wrote: > Eugenio, is there some reason I am missing that a left semimodule to a > semiring won't suffice? > > On 27.03.2018 17:08, Eugenio Moggi wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/ma >> ilman/listinfo/types-list ] >> >> Consider a blockchain managing several different resources. Over time, new >>> resources may be added or deleted. Each input to or output from a >>> transaction is associated with a value, where each value consists of >>> associating zero or more resources with amounts, where the amounts are >>> natural numbers (that is, integers greater than or equal to zero). >>> >>> What kind of algebra do values correspond to?
It seems similar to vector >>> spaces, except: >>> (a) adding or deleting resources increases or decreases the number of >>> dimensions in the vector space >>> (b) the scalars in the vector space are natural numbers rather than >>> reals >>> >>> What algebra am I thinking of? Cheers, -- P >>> >> >> Dear Phil, regarding the issue (b) you want to replace the FIELD of the >> real >> numbers with the SEMI-RING of natural numbers (N,+,*,0,1). >> >> As suggested by Meola the structure should be a module over a RIG, see >> https://ncatlab.org/nlab/show/module >> More precisely >> - a rig (F,*,+,1,0) for the scalars >> - a commutative monoid (V,+,0) for the vectors >> - an action *:FxV->V satisfying certain properties, in particular >> 0*v=0=f*0 >> >> When F is a field, one recovers the usual notion of vector space. >> >> A vector space V can be infinite dimensional. In the case of a module >> over a >> RIG, the definition of base B should be the usual one, namely a subset of >> B is a >> base for V iff >> >> - every finite subset of B is linearly independent >> - every element of V is the linear combination of a finite subset of B >> >> but the definition of LINEARLY INDEPENDENT has to be revised, to avoid >> the use >> of "negative". A finite subset {v_i|i:n} of V is lineraly independent <=> >> for every a,b:F^n if Sum_i a_i*v_i = Sum_i b_i*v_i, then a=b. >> >> In the module of over N proposed by Meola, ie the maps from the set of >> resourses >> to the rig N that have finite support, the base is unique, and there is an >> obvious definition of inner product, but I doubt you can do much with it. >> >> However, there are modules over N that have no base. For instance, the >> module Z >> over N, {1} is too small to be a base, and {1,-1} is too big. 
>> Best Regards >> Eugenio Moggi >> > > -- > Sanjiva Prasad > Professor, Department of Computer Science & Engineering > Indian Institute of Technology Delhi > Hauz Khas, New Delhi 110016 > > From neelakantan.krishnaswami at gmail.com Tue Mar 27 12:41:20 2018 From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami) Date: Tue, 27 Mar 2018 17:41:20 +0100 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: References: <87y3idg33y.fsf@unige.it> Message-ID: <62540254-40a0-83db-9212-cba53e82c924@gmail.com> On 27/03/18 15:27, Philip Wadler wrote: > Thank you for all the replies, and for reminding me of bags, > rigs/semirings, and modules. > > In a rig, what corresponds to the notion of monus? > _∸_ : ℕ → ℕ → ℕ > m ∸ zero = m > zero ∸ (suc n) = zero > (suc m) ∸ (suc n) = m ∸ n > Is monus defined for every rig? (While zero and suc are defined in every > rig, it's not immediately obvious to me that definition by pattern matching > as above makes sense in an arbitrary rig.) What are the algebraic > properties of monus? The natural numbers Nat, ordered by ≥, form a posetal symmetric monoidal closed category. Here, the objects are the natural numbers, and Hom(n, m) = { * | n ≥ m } The tensor product is given by: x ⊗ y ≜ x + y The monoidal closed structure arises from saturating subtraction: y ⊸ z ≜ z ∸ y It's easy to show that Hom(x ⊗ y, z) = Hom(x, y ⊸ z) since x + y ≥ z if and only if x ≥ z ∸ y. (This is all given in Lawvere's paper "Metric Spaces, Generalized Logic, and Closed Categories".) So it seems like the natural thing to do would be to say that a rig has a monus when it is monoidal closed with respect to its addition, defining x ≥ z to hold when there exists a y such that x = y + z.
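Neel's observation -- that saturating (truncated) subtraction is right adjoint to addition on the naturals under the order in which Hom(n, m) is inhabited exactly when n ≥ m -- is easy to confirm by brute force. The sketch below is an illustrative aside, not from any message in the thread; `monus` is a hypothetical helper name.

```python
# Brute-force check of the adjunction Hom(x (+) y, z) = Hom(x, y -o z)
# in the poset (N, >=): x + y >= z holds iff x >= z monus y, where
# monus is saturating (truncated) subtraction.

def monus(m, n):
    """m - n, truncated at zero."""
    return max(m - n, 0)

N = range(20)

# The adjunction, checked on an initial segment of the naturals.
assert all((x + y >= z) == (x >= monus(z, y))
           for x in N for y in N for z in N)

# The order itself: x >= z exactly when x = y + z for some natural y.
assert all((x >= z) == any(x == y + z for y in N)
           for x in N for z in N)
```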
Best, Neel -- Neel Krishnaswami nk480 at cl.cam.ac.uk From carette at mcmaster.ca Tue Mar 27 14:53:00 2018 From: carette at mcmaster.ca (Jacques Carette) Date: Tue, 27 Mar 2018 14:53:00 -0400 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: References: <87y3idg33y.fsf@unige.it> Message-ID: You are looking for 'multiset difference'. http://theory.stanford.edu/~arbrad/pivc/sets.pdf Jacques On 2018-03-27 10:27 AM, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Thank you for all the replies, and for reminding me of bags, > rigs/semirings, and modules. > > In a rig, what corresponds to the notion of monus? > _∸_ : ℕ → ℕ → ℕ > m ∸ zero = m > zero ∸ (suc n) = zero > (suc m) ∸ (suc n) = m ∸ n > Is monus defined for every rig? (While zero and suc are defined in every > rig, it's not immediately obvious to me that definition by pattern matching > as above makes sense in an arbitrary rig.) What are the algebraic > properties of monus? > > James' suggestion of bags with delete is spot on. I would prefer delete to > be total rather than partial, so that deleting an element not present in > the bag leaves the bag unchanged. If a bag is a free commutative monoid, > then how does (total) delete come into the picture? Does it have an > algebraic characterisation? > > Cheers, -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science, > . /\ School of Informatics, University of Edinburgh > . / \ and Senior Research Fellow, IOHK > . http://homepages.inf.ed.ac.uk/wadler/ > > On 27 March 2018 at 10:42, Sanjiva Prasad > wrote: > >> Eugenio, is there some reason I am missing that a left semimodule to a >> semiring won't suffice? >> >> On 27.03.2018 17:08, Eugenio Moggi wrote: >> >>> [ The Types Forum, http://lists.seas.upenn.edu/ma >>> ilman/listinfo/types-list ] >>> >>> Consider a blockchain managing several different resources. Over time, new >>>> resources may be added or deleted.
Each input to or output from a >>>> transaction is associated with a value, where each value consists of >>>> associating zero or more resources with amounts, where the amounts are >>>> natural numbers (that is, integers greater than or equal to zero). >>>> >>>> What kind of algebra do values correspond to? It seems similar to vector >>>> spaces, except: >>>> (a) adding or deleting resources increases or decreases the number of >>>> dimensions in the vector space >>>> (b) the scalars in the vector space are natural numbers rather than >>>> reals >>>> >>>> What algebra am I thinking of? Cheers, -- P >>>> >>> Dear Phil, regarding the issue (b) you want to replace the FIELD of the >>> real >>> numbers with the SEMI-RING of natural numbers (N,+,*,0,1). >>> >>> As suggested by Meola the structure should be a module over a RIG, see >>> https://ncatlab.org/nlab/show/module >>> More precisely >>> - a rig (F,*,+,1,0) for the scalars >>> - a commutative monoid (V,+,0) for the vectors >>> - an action *:FxV->V satisfying certain properties, in particular >>> 0*v=0=f*0 >>> >>> When F is a field, one recovers the usual notion of vector space. >>> >>> A vector space V can be infinite dimensional. In the case of a module >>> over a >>> RIG, the definition of base B should be the usual one, namely a subset of >>> B is a >>> base for V iff >>> >>> - every finite subset of B is linearly independent >>> - every element of V is the linear combination of a finite subset of B >>> >>> but the definition of LINEARLY INDEPENDENT has to be revised, to avoid >>> the use >>> of "negative". A finite subset {v_i|i:n} of V is lineraly independent <=> >>> for every a,b:F^n if Sum_i a_i*v_i = Sum_i b_i*v_i, then a=b. >>> >>> In the module of over N proposed by Meola, ie the maps from the set of >>> resourses >>> to the rig N that have finite support, the base is unique, and there is an >>> obvious definition of inner product, but I doubt you can do much with it. 
>>> >>> However, there are modules over N that have no base. For instance, the >>> module Z >>> over N, {1} is too small to be a base, and {1,-1} is too big. >>> >>> Best Regards >>> Eugenio Moggi >>> >> -- >> Sanjiva Prasad >> Professor, Department of Computer Science & Engineering >> Indian Institute of Technology Delhi >> Hauz Khas, New Delhi 110016 >> >> >> >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. From wadler at inf.ed.ac.uk Tue Mar 27 16:02:05 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 27 Mar 2018 17:02:05 -0300 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> References: <87y3idg33y.fsf@unige.it> <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> Message-ID: Thanks to all for further replies, and especially to Neel for his observation about monus and monoidal closure. Neel, is this written down anywhere? Cheers, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ On 27 March 2018 at 13:39, Neel Krishnaswami wrote: > On 27/03/18 15:27, Philip Wadler wrote: > > Thank you for all the replies, and for reminding me of bags, >> rigs/semirings, and modules. >> >> In a rig, what corresponds to the notion of monus? >> _∸_ : ℕ → ℕ → ℕ >> m ∸ zero = m >> zero ∸ (suc n) = zero >> (suc m) ∸ (suc n) = m ∸ n >> Is monus defined for every rig? (While zero and suc are defined in every >> rig, it's not immediately obvious to me that definition by pattern >> matching >> as above makes sense in an arbitrary rig.) What are the algebraic >> properties of monus? >> > > The natural numbers Nat, ordered by ≤, form a posetal symmetric > monoidal closed category. Here, the objects are the natural > numbers, and Hom(n, m) = { ∗ | n ≤ m } > > The tensor product is given by: > > x ⊗ y ≜
x + y > > The monoidal closed structure arises from saturating subtraction: > > y ⊸ z ≜ z - y > > It's easy to show that Hom(x ⊗ y, z) = Hom(x, y ⊸ z) since > x + y ≤ z if and only if x ≤ z - y. > > (This is all given in Lawvere's paper "Metric Spaces, Generalized > Logic, and Closed Categories".) > > So it seems like the natural thing to do would be to say that a rig > has a monus when it is monoidal closed with respect to its addition, > defining x ≥ z to hold when there exists a y such that x = y + z. > > Best, > Neel > > > > -- > Neel Krishnaswami > nk480 at cl.cam.ac.uk > > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From Sam.Lindley at ed.ac.uk Wed Mar 28 05:49:46 2018 From: Sam.Lindley at ed.ac.uk (Sam Lindley) Date: Wed, 28 Mar 2018 10:49:46 +0100 Subject: [TYPES] breaking abstraction with ML exceptions Message-ID: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> The following SML program exception C; structure M :> sig exception A end = struct exception A = C end; (raise M.A) handle C => 42 returns the value 42 (according to SML/NJ and, I believe, the 1997 definition of Standard ML). The signature ascription appears to assert that exception A is abstract in M, and yet we are able to raise the exception M.A and catch it as C outside the scope of M. It makes no difference whether the signature ascription is transparent or opaque. The equivalent OCaml program yields the same result. Does this kind of code occur in practice? Sam -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From neelakantan.krishnaswami at gmail.com Wed Mar 28 06:21:31 2018 From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami) Date: Wed, 28 Mar 2018 11:21:31 +0100 Subject: [TYPES] What algebra am I thinking of?
In-Reply-To: References: <87y3idg33y.fsf@unige.it> <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> Message-ID: <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> Hi Phil, I learned about this from Martin Escardo, and he told me the observation about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized Logic, and Closed Categories". One further observation I have not seen written down anywhere is that the natural numbers also support a trace (ie, feedback) operator, since it is also the case that x + y ≤ z + y if and only if x ≤ z. This means that the Geometry of Interaction construction can be applied to the natural numbers, which yields a preorder of pairs of natural numbers (n,m). In this case, two objects have a map between them if one object is greater than the other, viewing each object (n, m) as the signed integer n - m. As a result, I've sometimes wondered if the Int construction got its name as a bit of a joke! Best, Neel On 27/03/18 21:02, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Thanks to all for further replies, and especially to Neel for his > observation about monus and monoidal closure. Neel, is this written down > anywhere? Cheers, -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science, > . /\ School of Informatics, University of Edinburgh > . / \ and Senior Research Fellow, IOHK > . http://homepages.inf.ed.ac.uk/wadler/ > > On 27 March 2018 at 13:39, Neel Krishnaswami wrote: > >> On 27/03/18 15:27, Philip Wadler wrote: >> >> Thank you for all the replies, and for reminding me of bags, >>> rigs/semirings, and modules. >>> >>> In a rig, what corresponds to the notion of monus? >>> _∸_ : ℕ → ℕ → ℕ >>> m ∸ zero = m >>> zero ∸ (suc n) = zero >>> (suc m) ∸ (suc n) = m ∸ n >>> Is monus defined for every rig?
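[Neel's description of the Int construction over (ℕ, +, ≤) can be spelled out concretely. The Python sketch below (illustration only; `hom` is a made-up name, and the orientation of the order is one possible choice) checks the cancellation property that makes the trace work, and that the resulting preorder on pairs (n, m) agrees with ordinary comparison of the differences n - m, written subtraction-free.]

```python
# A map (n, m) -> (n2, m2) exists iff n - m >= n2 - m2, which can be
# phrased on the naturals without subtraction as n + m2 >= n2 + m.
def hom(p, q):
    (n, m), (n2, m2) = p, q
    return n + m2 >= n2 + m

# The trace/cancellation property on the naturals:
for x in range(8):
    for y in range(8):
        for z in range(8):
            assert (x + y <= z + y) == (x <= z)

# The preorder on pairs agrees with integer comparison of n - m:
pairs = [(n, m) for n in range(6) for m in range(6)]
for p in pairs:
    for q in pairs:
        assert hom(p, q) == (p[0] - p[1] >= q[0] - q[1])
print("ok")
```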
(While zero and suc are defined in every >>> rig, it's not immediately obvious to me that definition by pattern >>> matching >>> as above makes sense in an arbitrary rig.) What are the algebraic >>> properties of monus? >>> >> >> The natural numbers Nat, ordered by ≤, form a posetal symmetric >> monoidal closed category. Here, the objects are the natural >> numbers, and Hom(n, m) = { ∗ | n ≤ m } >> >> The tensor product is given by: >> >> x ⊗ y ≜ x + y >> >> The monoidal closed structure arises from saturating subtraction: >> >> y ⊸ z ≜ z - y >> >> It's easy to show that Hom(x ⊗ y, z) = Hom(x, y ⊸ z) since >> x + y ≤ z if and only if x ≤ z - y. >> >> (This is all given in Lawvere's paper "Metric Spaces, Generalized >> Logic, and Closed Categories".) >> >> So it seems like the natural thing to do would be to say that a rig >> has a monus when it is monoidal closed with respect to its addition, >> defining x ≥ z to hold when there exists a y such that x = y + z. >> >> Best, >> Neel >> >> >> >> -- >> Neel Krishnaswami >> nk480 at cl.cam.ac.uk >> >> >> >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. From gabriel.scherer at gmail.com Wed Mar 28 06:24:07 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Wed, 28 Mar 2018 12:24:07 +0200 Subject: [TYPES] breaking abstraction with ML exceptions In-Reply-To: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> References: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> Message-ID: While I cannot comment on how often this occurs in practice, I'd like to point out that this ability to hide equalities between exceptions has delicate implications for the compilation of pattern-matching (in exception-handling clauses, or when matching on exceptions as values†), and we (Luc Maranget, Thomas Refis and myself) somewhat recently found and fixed bugs in the OCaml pattern-matching compilation coming from them.
(The bug went unreported for a long time, suggesting that indeed these cases do not occur terribly often in practice.) The problem is that it is wrong to assume that two constructors with distinct names are distinct, because one may be redefined to be equal to the other in a way that is not visible from your type environment (so a canonicalization strategy does not suffice). Remark that if you decide to conservatively assume that all constructors of exception type may be equal, and you use a classic "matrix-based" algorithm for pattern-matching, you can end up with ill-typed columns of patterns (containing patterns of incompatible types), coming from "potentially-equal" constructors of different argument types, so your pattern-matching processing has to be robust against this. (Two types may be equal for reasons unknown to the current typing environment, but you should also be ready to deal with incompatible head constructors or take steps to prevent that in the check of whether two constructors are surely equal, surely distinct, or may be equal.) †: OCaml 4.02 (August 2014) introduced extensible algebraic datatypes (variants), contributed by Leo White, which generalize the extensible-type model of exceptions to arbitrary datatypes (including GADTs). It is more natural to match on values of those types than on exceptions. On Wed, Mar 28, 2018 at 11:49 AM, Sam Lindley wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > The following SML program > > exception C; > > structure M :> sig exception A end = > struct > exception A = C > end; > > (raise M.A) handle C => 42 > > returns the value 42 (according to SML/NJ and, I believe, the 1997 > definition of Standard ML). > > The signature ascription appears to assert that exception A is abstract in > M, and yet we are able to raise the exception M.A and catch it as C outside > the scope of M. It makes no difference whether the signature ascription is > transparent or opaque.
The equivalent OCaml program yields the same result. > > Does this kind of code occur in practice? > > Sam > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > From dreyer at mpi-sws.org Wed Mar 28 08:17:52 2018 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 28 Mar 2018 14:17:52 +0200 Subject: [TYPES] breaking abstraction with ML exceptions In-Reply-To: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> References: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> Message-ID: Hi, Sam. I would take issue with your characterization of this example as "breaking abstraction". By way of analogy, when you opaquely seal a module with a signature containing a field val v : T, it does not mean that the client of the module sees a "fresh" value of type T (whatever that could mean) -- it means that the client can use the exported value at the type T. In the case of an exception spec, you are exporting a value/constructor of type exn. Values of type exn can have their tags dynamically inspected, and that's what's happening here, so I fail to see how any abstraction has been broken here. Furthermore, I can't think of any other sensible semantics for signature matching with exceptions. If the external M.A in your example were somehow made distinct from the internal A (= C), that would mean that the exception A had a different dynamic representation when referred to outside the module than it did inside the module. In fact, they would be distinct values! This would be particularly horrible, since the module's implementation would not even have a way of referring to the doppelganger exception M.A (and catching it) from within the module. This is reminiscent of the "double vision problem" in recursive modules, except much much worse. I would go so far as to say that such a semantics would be impossible to program against. 
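[Derek's point that exception constructors are ordinary dynamic values can be illustrated outside ML as well. In Python, exception classes are themselves first-class values, so the analogue of `exception A = C` is plain name binding; this is only a loose analogy (no signatures or sealing are involved), with all names made up.]

```python
class C(Exception):
    """Plays the role of the pre-existing exception C."""
    pass

# The analogue of `exception A = C`: A is merely another name bound to
# the same dynamic exception value; no fresh exception is generated.
A = C

def run():
    try:
        raise A()        # raise under the name A...
    except C:            # ...catch under the name C: same dynamic tag
        return 42

print(run())  # prints 42
```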
Do you have some alternative sensible semantics for exceptions and signature matching in mind? Cheers, Derek On Wed, Mar 28, 2018 at 11:49 AM, Sam Lindley wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > The following SML program > > exception C; > > structure M :> sig exception A end = > struct > exception A = C > end; > > (raise M.A) handle C => 42 > > returns the value 42 (according to SML/NJ and, I believe, the 1997 > definition of Standard ML). > > The signature ascription appears to assert that exception A is abstract in > M, and yet we are able to raise the exception M.A and catch it as C outside > the scope of M. It makes no difference whether the signature ascription is > transparent or opaque. The equivalent OCaml program yields the same result. > > Does this kind of code occur in practice? > > Sam > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > From tadeusz.litak at gmail.com Wed Mar 28 06:31:05 2018 From: tadeusz.litak at gmail.com (Tadeusz Litak) Date: Wed, 28 Mar 2018 12:31:05 +0200 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> References: <87y3idg33y.fsf@unige.it> <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> Message-ID: <4f2eb864-1190-4f7d-a571-14e97c532883@gmail.com> Hi Neel, see § 4.3 and especially Exercise 4.13 in Haghverdi and Scott's "Geometry of Interaction and the Dynamics of Proof Reduction: a tutorial". Best regards, t. On 28/03/18 12:21, Neel Krishnaswami wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi Phil, > > I learned about this from Martin Escardo, and he told me the observation > about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized > Logic, and Closed Categories".
> > One further observation I have not seen written down anywhere is that > the natural numbers also support a trace (ie, feedback) operator, since > it is also the case that x + y ≤ z + y if and only if x ≤ z. > > This means that the Geometry of Interaction construction can be applied > to the natural numbers, which yields a preorder of pairs of natural > numbers (n,m). In this case, two objects have a map between them if > one object is greater than the other, viewing each object (n, m) as the > signed integer n - m. > > As a result, I've sometimes wondered if the Int construction got its > name as a bit of a joke! > > Best, > Neel > > > On 27/03/18 21:02, Philip Wadler wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> >> >> Thanks to all for further replies, and especially to Neel for his >> observation about monus and monoidal closure. Neel, is this written down >> anywhere? Cheers, -- P >> >> . \ Philip Wadler, Professor of Theoretical Computer Science, >> . /\ School of Informatics, University of Edinburgh >> . / \ and Senior Research Fellow, IOHK >> . http://homepages.inf.ed.ac.uk/wadler/ >> >> On 27 March 2018 at 13:39, Neel Krishnaswami wrote: >> >>> On 27/03/18 15:27, Philip Wadler wrote: >>> >>> Thank you for all the replies, and for reminding me of bags, >>>> rigs/semirings, and modules. >>>> >>>> In a rig, what corresponds to the notion of monus? >>>> _∸_ : ℕ → ℕ → ℕ >>>> m ∸ zero = m >>>> zero ∸ (suc n) = zero >>>> (suc m) ∸ (suc n) = m ∸ n >>>> Is monus defined for every rig? (While zero and suc are defined in every >>>> rig, it's not immediately obvious to me that definition by pattern >>>> matching >>>> as above makes sense in an arbitrary rig.) What are the algebraic >>>> properties of monus? >>>> >>> >>> The natural numbers Nat, ordered by ≤, form a posetal symmetric >>> monoidal closed category.
Here, the objects are the natural >>> numbers, and Hom(n, m) = { ? | n ? m } >>> >>> The tensor product is given by: >>> >>> ?? x ? y ? x + y >>> >>> The monoidal closed structure arises from saturating subtraction: >>> >>> ?? y ? z ? z - y >>> >>> It's easy to show that Hom(x ? y, z) = Hom(x, y ? z) since >>> x + y ? z if and only if x ? z - y. >>> >>> (This is all given in Lawvere's paper "Metric Spaces, Generalized >>> Logic, and Closed Categories".) >>> >>> So it seems like the natural thing to do would be to say that a rig >>> has a monus when it is monoidal closed with respect to its addition, >>> defining x ? z to hold when there exists a y such that x = y + z. >>> >>> Best, >>> Neel >>> >>> >>> >>> -- >>> Neel Krishnaswami >>> nk480 at cl.cam.ac.uk >>> >>> >>> >>> >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. From francesco.gavazzo at gmail.com Wed Mar 28 06:41:50 2018 From: francesco.gavazzo at gmail.com (Francesco Gavazzo) Date: Wed, 28 Mar 2018 12:41:50 +0200 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> References: <87y3idg33y.fsf@unige.it> <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> Message-ID: I am not sure whether the following is the correct definition of a monus, but following Neel (as well as the analogy with natural numbers) one could look at monus in a monoid as right adjoint to monoid multiplication. In particular, any quantale has monus, given by 'implication'. This can indeed be found in Lawvere's 1973 paper, as well as in any introduction/paper on quantales and/or generalised metric spaces. Another relevant reference could be Chapter II of Monoidal Topology, by Hofmann, Seal, and Tholen. More generally, any paper dealing with V-categories (or quantale-categories) provides relevant information on such algebraic structures. 
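[Francesco's observation that every quantale has a "monus" given by implication can be checked exhaustively in the smallest interesting case: the two-element Boolean quantale under conjunction, where the residual (right adjoint to `and b`) is implication. A Python sketch, with made-up function names:]

```python
# Residuation in the Boolean quantale ({False, True}, and, <=),
# with False <= True:  (a and b) <= c  iff  a <= (b implies c).
def leq(p, q):
    # The Boolean order: False <= True.
    return (not p) or q

def implies(b, c):
    # Material implication: the right adjoint to conjunction with b.
    return (not b) or c

for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert leq(a and b, c) == leq(a, implies(b, c))
print("adjunction holds in the Boolean quantale")
```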
Best, Francesco 2018-03-28 12:21 GMT+02:00 Neel Krishnaswami < neelakantan.krishnaswami at gmail.com>: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Hi Phil, > > I learned about this from Martin Escardo, and he told me the observation > about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized > Logic, and Closed Categories". > > One further observation I have not seen written down anywhere is that > the natural numbers also support a trace (ie, feedback) operator, since > it is also the case that x + y ? z + y if and only if x ? z. > > This means that the Geometry of Interaction construction can be applied > to the natural numbers, which yields a preorder of pairs of natural > numbers (n,m). In this case, two objects have a map between them if > one object is greater than the other, viewing each object (n, m) as the > signed integer n - m. > > As a result, I've sometimes wondered if the Int construction got its > name as a bit of a joke! > > Best, > Neel > > > On 27/03/18 21:02, Philip Wadler wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/ma >> ilman/listinfo/types-list ] >> >> >> >> Thanks to all for further replies, and especially to Neel for his >> observation about monus and monoidal closure. Neel, is this written down >> anywhere? Cheers, -- P >> >> . \ Philip Wadler, Professor of Theoretical Computer Science, >> . /\ School of Informatics, University of Edinburgh >> . / \ and Senior Research Fellow, IOHK >> . http://homepages.inf.ed.ac.uk/wadler/ >> >> On 27 March 2018 at 13:39, Neel Krishnaswami wrote: >> >> On 27/03/18 15:27, Philip Wadler wrote: >>> >>> Thank you for all the replies, and for reminding me of bags, >>> >>>> rigs/semirings, and modules. >>>> >>>> In a rig, what corresponds to the notion of monus? >>>> _?_ : ? ? ? ? ? >>>> m ? zero = m >>>> zero ? (suc n) = zero >>>> (suc m) ? (suc n) = m ? n >>>> Is monus defined for every rig? 
(While zero and suc are defined in every >>>> rig, it's not immediately obvious to me that definition by pattern >>>> matching >>>> as above makes sense in an arbitrary rig.) What are the algebraic >>>> properties of monus? >>>> >>>> >>> The natural numbers Nat, ordered by , form a posetal symmetric >>> monoidal closed category. Here, the objects are the natural >>> numbers, and Hom(n, m) = { ? | n ? m } >>> >>> The tensor product is given by: >>> >>> x ? y ? x + y >>> >>> The monoidal closed structure arises from saturating subtraction: >>> >>> y ? z ? z - y >>> >>> It's easy to show that Hom(x ? y, z) = Hom(x, y ? z) since >>> x + y ? z if and only if x ? z - y. >>> >>> (This is all given in Lawvere's paper "Metric Spaces, Generalized >>> Logic, and Closed Categories".) >>> >>> So it seems like the natural thing to do would be to say that a rig >>> has a monus when it is monoidal closed with respect to its addition, >>> defining x ? z to hold when there exists a y such that x = y + z. >>> >>> Best, >>> Neel >>> >>> >>> >>> -- >>> Neel Krishnaswami >>> nk480 at cl.cam.ac.uk >>> >>> >>> >>> >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> >> From Sam.Lindley at ed.ac.uk Wed Mar 28 09:20:21 2018 From: Sam.Lindley at ed.ac.uk (Sam Lindley) Date: Wed, 28 Mar 2018 14:20:21 +0100 Subject: [TYPES] breaking abstraction with ML exceptions In-Reply-To: References: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> Message-ID: <1a8a9753-3626-ab58-ad9d-7b9a3c42812f@ed.ac.uk> Thanks Derek. My subject line is deliberately provocative. Given that exceptions in ML are values that inhabit a single type, the behaviour does make sense, and is rather unsurprising if we rewrite M as: structure M :> sig val A : exn end = struct val A = C end I did not have in mind an alternative semantics (I agree that introducing a "doppleganger" in the way you suggest would be unworkable). 
I was more interested in whether people abstract over exceptions in this way in practice. Gabriel's response suggests that perhaps they don't very often. My motivation is understanding how this kind of pattern interacts with various designs for effect type systems that track exceptions - and whether one could perhaps get away with forbidding it. Sam On 28/03/18 13:17, Derek Dreyer wrote: > Hi, Sam. > > I would take issue with your characterization of this example as > "breaking abstraction". By way of analogy, when you opaquely seal a > module with a signature containing a field > > val v : T, > > it does not mean that the client of the module sees a "fresh" value of > type T (whatever that could mean) -- it means that the client can use > the exported value at the type T. In the case of an exception spec, > you are exporting a value/constructor of type exn. Values of type exn > can have their tags dynamically inspected, and that's what's happening > here, so I fail to see how any abstraction has been broken here. > > Furthermore, I can't think of any other sensible semantics for > signature matching with exceptions. If the external M.A in your > example were somehow made distinct from the internal A (= C), that > would mean that the exception A had a different dynamic representation > when referred to outside the module than it did inside the module. In > fact, they would be distinct values! This would be particularly > horrible, since the module's implementation would not even have a way > of referring to the doppelganger exception M.A (and catching it) from > within the module. This is reminiscent of the "double vision problem" > in recursive modules, except much much worse. I would go so far as to > say that such a semantics would be impossible to program against. > > Do you have some alternative sensible semantics for exceptions and > signature matching in mind? 
> > Cheers, > Derek > > On Wed, Mar 28, 2018 at 11:49 AM, Sam Lindley wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> The following SML program >> >> exception C; >> >> structure M :> sig exception A end = >> struct >> exception A = C >> end; >> >> (raise M.A) handle C => 42 >> >> returns the value 42 (according to SML/NJ and, I believe, the 1997 >> definition of Standard ML). >> >> The signature ascription appears to assert that exception A is abstract in >> M, and yet we are able to raise the exception M.A and catch it as C outside >> the scope of M. It makes no difference whether the signature ascription is >> transparent or opaque. The equivalent OCaml program yields the same result. >> >> Does this kind of code occur in practice? >> >> Sam >> >> -- >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From francois.pottier at inria.fr Wed Mar 28 10:04:27 2018 From: francois.pottier at inria.fr (François Pottier) Date: Wed, 28 Mar 2018 16:04:27 +0200 Subject: [TYPES] breaking abstraction with ML exceptions In-Reply-To: <1a8a9753-3626-ab58-ad9d-7b9a3c42812f@ed.ac.uk> References: <4f39e12f-082a-b82f-4005-3c074a539a1f@ed.ac.uk> <1a8a9753-3626-ab58-ad9d-7b9a3c42812f@ed.ac.uk> Message-ID: <58e89150-1b3a-5417-ea61-b3f7d43c8a46@inria.fr> Hi Sam, Le 28/03/2018 à 15:20, Sam Lindley a écrit : > My motivation is understanding how this kind of pattern interacts with > various designs for effect type systems that track exceptions - and > whether one could perhaps get away with forbidding it. In OCaml, it might seem tempting to forbid the exception-aliasing declaration, "exception A = B".
But I believe that that still would not be sufficient to eliminate exception aliasing, by which I mean the possibility that a single (dynamic) exception is known under two distinct (static) names. This is due to other language features that introduce aliasing, such as module aliasing declarations ("module A = B") and first-class modules (a function that expects two first-class modules as arguments can be applied twice to the same module). -- François Pottier francois.pottier at inria.fr http://gallium.inria.fr/~fpottier/ From andru at cs.cornell.edu Wed Mar 28 12:50:17 2018 From: andru at cs.cornell.edu (Andrew Myers) Date: Wed, 28 Mar 2018 12:50:17 -0400 Subject: [TYPES] breaking abstraction with ML exceptions In-Reply-To: <58e89150-1b3a-5417-ea61-b3f7d43c8a46@inria.fr> Message-ID: Yizhou Zhang and I made similar observations about unchecked exceptions violating abstraction in our PLDI'16 paper. We also proposed and implemented a new exception semantics that addresses the problem, tunneled exceptions. Andrew -------- Original message -------- From: François Pottier Date: 3/28/18 10:04 AM (GMT-05:00) To: types-list at LISTS.SEAS.UPENN.EDU Subject: Re: [TYPES] breaking abstraction with ML exceptions [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] Hi Sam, Le 28/03/2018 à 15:20, Sam Lindley a écrit : > My motivation is understanding how this kind of pattern interacts with > various designs for effect type systems that track exceptions - and > whether one could perhaps get away with forbidding it. In OCaml, it might seem tempting to forbid the exception-aliasing declaration, "exception A = B". But I believe that that still would not be sufficient to eliminate exception aliasing, by which I mean the possibility that a single (dynamic) exception is known under two distinct (static) names.
This is due to other language features that introduce aliasing, such as module aliasing declarations ("module A = B") and first-class modules (a function that expects two first-class modules as arguments can be applied twice to the same module). -- Fran?ois Pottier francois.pottier at inria.fr http://gallium.inria.fr/~fpottier/ From wadler at inf.ed.ac.uk Wed Mar 28 14:43:30 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Wed, 28 Mar 2018 15:43:30 -0300 Subject: [TYPES] What algebra am I thinking of? In-Reply-To: <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> References: <87y3idg33y.fsf@unige.it> <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> Message-ID: Thanks to Neel, Tadeusz, and Francesco for additional replies & citation. I love the "Int" joke! -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ On 28 March 2018 at 07:21, Neel Krishnaswami < neelakantan.krishnaswami at gmail.com> wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Hi Phil, > > I learned about this from Martin Escardo, and he told me the observation > about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized > Logic, and Closed Categories". > > One further observation I have not seen written down anywhere is that > the natural numbers also support a trace (ie, feedback) operator, since > it is also the case that x + y ? z + y if and only if x ? z. > > This means that the Geometry of Interaction construction can be applied > to the natural numbers, which yields a preorder of pairs of natural > numbers (n,m). In this case, two objects have a map between them if > one object is greater than the other, viewing each object (n, m) as the > signed integer n - m. 
> > As a result, I've sometimes wondered if the Int construction got its > name as a bit of a joke! > > Best, > Neel > > > On 27/03/18 21:02, Philip Wadler wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/ma >> ilman/listinfo/types-list ] >> >> >> >> Thanks to all for further replies, and especially to Neel for his >> observation about monus and monoidal closure. Neel, is this written down >> anywhere? Cheers, -- P >> >> . \ Philip Wadler, Professor of Theoretical Computer Science, >> . /\ School of Informatics, University of Edinburgh >> . / \ and Senior Research Fellow, IOHK >> . http://homepages.inf.ed.ac.uk/wadler/ >> >> On 27 March 2018 at 13:39, Neel Krishnaswami wrote: >> >> On 27/03/18 15:27, Philip Wadler wrote: >>> >>> Thank you for all the replies, and for reminding me of bags, >>> >>>> rigs/semirings, and modules. >>>> >>>> In a rig, what corresponds to the notion of monus? >>>> _?_ : ? ? ? ? ? >>>> m ? zero = m >>>> zero ? (suc n) = zero >>>> (suc m) ? (suc n) = m ? n >>>> Is monus defined for every rig? (While zero and suc are defined in every >>>> rig, it's not immediately obvious to me that definition by pattern >>>> matching >>>> as above makes sense in an arbitrary rig.) What are the algebraic >>>> properties of monus? >>>> >>>> >>> The natural numbers Nat, ordered by , form a posetal symmetric >>> monoidal closed category. Here, the objects are the natural >>> numbers, and Hom(n, m) = { ? | n ? m } >>> >>> The tensor product is given by: >>> >>> x ? y ? x + y >>> >>> The monoidal closed structure arises from saturating subtraction: >>> >>> y ? z ? z - y >>> >>> It's easy to show that Hom(x ? y, z) = Hom(x, y ? z) since >>> x + y ? z if and only if x ? z - y. >>> >>> (This is all given in Lawvere's paper "Metric Spaces, Generalized >>> Logic, and Closed Categories".) >>> >>> So it seems like the natural thing to do would be to say that a rig >>> has a monus when it is monoidal closed with respect to its addition, >>> defining x ? 
z to hold when there exists a y such that x = y + z. >>> >>> Best, >>> Neel >>> >>> >>> >>> -- >>> Neel Krishnaswami >>> nk480 at cl.cam.ac.uk >>> >>> >>> >>> >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> >> -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From oleg at okmij.org Thu Mar 29 05:35:07 2018 From: oleg at okmij.org (Oleg) Date: Thu, 29 Mar 2018 18:35:07 +0900 Subject: [TYPES] breaking abstraction with ML exceptions In-Reply-To: Message-ID: <20180329093507.GA2600@Magus.localnet> I'm in agreement with Derek. In OCaml at least, one quickly gets used to the fact that an abstract type does not mean a fresh or a globally unique type. An abstract type may turn out equal to something rather concrete. If one needs something globally unique, one should use generative functors (and extensible data types, as Gabriel indicated). Your example is a good illustration, in case of exceptions. But I would not blame exceptions per se: for example, references may just as well reveal the identity of what looks like an abstract type. A side remark first: I would find your example more compelling (at least in OCaml) in a slightly re-written form: exception C module F(S:sig exception A end) = struct let r = try raise S.A with C -> 42 end let module M = F(struct exception A = C end) in M.r (* 42 *) The body of the functor receives some abstract exception A, and is able to find out that it is equal to something concrete. Here is a similar example with references: let x = ref 100 module F(S:sig type t val y:t ref end) = struct let r = let xold = !x in x := xold + 1; let ynew = !S.y in x := xold; if not (!S.y = ynew) then print_string "t was int" end ;; let module M = F(struct type t = int let y = x end) in M.r;; Again, the body of the functor was able to find out that the abstract type t is actually an int.
So, if the test comes out positive, by assigning to S.y (and then reading from x), the functor can find the `true' value of any value of the supposedly abstract type t. I have learned of this trick from a rather old message, which I append for the sake of history. That message showed the SML code. > From: jmv16 at cornell.edu (Jeffrey M. Vinocur) Newsgroups: comp.lang.functional Subject: Re: Type Identification in ML Date: Thu, 24 Oct 2002 11:37:56 +0000 (UTC) Organization: Cornell University Sender: jeff at litech.org Message-ID: References: X-Trace: puck.litech.org 1035459476 1441 208.48.41.224 (24 Oct 2002 11:37:56 GMT) NNTP-Posting-Date: Thu, 24 Oct 2002 11:37:56 +0000 (UTC) X-Newsreader: trn 4.0-test76 (Apr 2, 2001) Originator: jeff at litech.org (Jeffrey M. Vinocur) In article , John S. Novak, III wrote: >Is there, in ML a way to define a function which, if given an int, >returns true, but given anything else returns false? Likewise, is there >a way to define a function which, if given a list (of any type, or of a >specified type) returns true, but given anything else returns false? Nope. No way to provide a type for such a function. Practical people can stop reading here. However, there is the following interesting bit which has the remarkable behavior that it can distinguish between types in a contrived circumstance. (A 'a ref * int ref -> bool function would be "expected" to be ignoring its first argument.)

fun g (x : 'a ref, y : int ref) : bool =
  let val () = y := 0
      val z = !x
      val () = y := 1
      val () = x := z
  in !y = 0 end

- g (ref "hi", ref 3);
val it = false : bool
- val x = ref 3;
val x = ref 3 : int ref
- g (x, x);
val it = true : bool

From tadeusz.litak at gmail.com Thu Mar 29 09:42:20 2018 From: tadeusz.litak at gmail.com (Tadeusz Litak) Date: Thu, 29 Mar 2018 15:42:20 +0200 Subject: [TYPES] What algebra am I thinking of?
In-Reply-To: References: <87y3idg33y.fsf@unige.it> <13bf0d8e-4e5a-12b5-560b-3e9268cb3a80@cl.cam.ac.uk> <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> Message-ID: <111a9a31-6faf-23fe-8f75-94f59e4fdd11@gmail.com> You're welcome! I realized now that this is explicitly stated in the opening paragraph of the 1996 Joyal, Street and Verity paper: > Naively speaking, the construction is a glorification of the construction of the > integers from the natural numbers. O ye classical papers that we quote so oft and so rarely read... t. On 28/03/18 20:43, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Thanks to Neel, Tadeusz, and Francesco for additional replies & citation. I > love the "Int" joke! -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science, > . /\ School of Informatics, University of Edinburgh > . / \ and Senior Research Fellow, IOHK > . http://homepages.inf.ed.ac.uk/wadler/ > > On 28 March 2018 at 07:21, Neel Krishnaswami < > neelakantan.krishnaswami at gmail.com> wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> Hi Phil, >> >> I learned about this from Martin Escardo, and he told me the observation >> about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized >> Logic, and Closed Categories". >> >> One further observation I have not seen written down anywhere is that >> the natural numbers also support a trace (ie, feedback) operator, since >> it is also the case that x + y ≥ z + y if and only if x ≥ z. >> >> This means that the Geometry of Interaction construction can be applied >> to the natural numbers, which yields a preorder of pairs of natural >> numbers (n,m). In this case, two objects have a map between them if >> one object is greater than the other, viewing each object (n, m) as the >> signed integer n - m. >> >> As a result, I've sometimes wondered if the Int construction got its >> name as a bit of a joke!
>> >> Best, >> Neel >> >> >> On 27/03/18 21:02, Philip Wadler wrote: >> >>> [ The Types Forum, http://lists.seas.upenn.edu/ma >>> ilman/listinfo/types-list ] >>> >>> >>> >>> Thanks to all for further replies, and especially to Neel for his >>> observation about monus and monoidal closure. Neel, is this written down >>> anywhere? Cheers, -- P >>> >>> . \ Philip Wadler, Professor of Theoretical Computer Science, >>> . /\ School of Informatics, University of Edinburgh >>> . / \ and Senior Research Fellow, IOHK >>> . http://homepages.inf.ed.ac.uk/wadler/ >>> >>> On 27 March 2018 at 13:39, Neel Krishnaswami wrote: >>> >>> On 27/03/18 15:27, Philip Wadler wrote: >>>> Thank you for all the replies, and for reminding me of bags, >>>> >>>>> rigs/semirings, and modules. >>>>> >>>>> In a rig, what corresponds to the notion of monus? >>>>> _?_ : ? ? ? ? ? >>>>> m ? zero = m >>>>> zero ? (suc n) = zero >>>>> (suc m) ? (suc n) = m ? n >>>>> Is monus defined for every rig? (While zero and suc are defined in every >>>>> rig, it's not immediately obvious to me that definition by pattern >>>>> matching >>>>> as above makes sense in an arbitrary rig.) What are the algebraic >>>>> properties of monus? >>>>> >>>>> >>>> The natural numbers Nat, ordered by , form a posetal symmetric >>>> monoidal closed category. Here, the objects are the natural >>>> numbers, and Hom(n, m) = { ? | n ? m } >>>> >>>> The tensor product is given by: >>>> >>>> x ? y ? x + y >>>> >>>> The monoidal closed structure arises from saturating subtraction: >>>> >>>> y ? z ? z - y >>>> >>>> It's easy to show that Hom(x ? y, z) = Hom(x, y ? z) since >>>> x + y ? z if and only if x ? z - y. >>>> >>>> (This is all given in Lawvere's paper "Metric Spaces, Generalized >>>> Logic, and Closed Categories".) >>>> >>>> So it seems like the natural thing to do would be to say that a rig >>>> has a monus when it is monoidal closed with respect to its addition, >>>> defining x ? 
z to hold when there exists a y such that x = y + z. >>>> >>>> Best, >>>> Neel >>>> >>>> >>>> >>>> -- >>>> Neel Krishnaswami >>>> nk480 at cl.cam.ac.uk >>>> >>>> >>>> >>>> >>>> The University of Edinburgh is a charitable body, registered in >>>> Scotland, with registration number SC005336. >>>> >>> >>> >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. From selinger at mathstat.dal.ca Fri Mar 30 22:30:17 2018 From: selinger at mathstat.dal.ca (Peter Selinger) Date: Fri, 30 Mar 2018 23:30:17 -0300 (ADT) Subject: [TYPES] What algebra am I thinking of? In-Reply-To: <9bee20ec-2519-3c19-3cfe-15a57d0a7e4e@gmail.com> Message-ID: <20180331023018.0642014076D@chase.mathstat.dal.ca> Hi Neel and Phil, I don't think the connection between the name of the Int construction and the integers is a joke, but that that was in fact the original motivation for the name. As far as I know, the name Int V for what we now call the Int-construction originates in Joyal, Street, and Verity's original 1996 paper "Traced monoidal categories" (p.453). A beautiful paper, by the way. In the same paper, they also mention at the top of p.465 an example where the compact category has objects that are words in the symbols "-" and "+", whereas the underlying traced monoidal category only has "+". Also, on p.467, they write "The objects of Int Rel can be called integer sets in contrast to the objects of Rel which are natural sets". So while they didn't give the exact example you mentioned, it is clear from their formulation that the relationship between the natural numbers and the integers is very much what they had in mind, and that that is where they took the name "Int" from (i.e., "Integers" and not "Interaction"). They do also mention the geometry of interaction, but only in passing in the last sentence of the introduction. 
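The monus adjunction and the pairs-as-integers ordering discussed in this thread are easy to check by brute force. Here is a small sketch, purely as a sanity check (in Python, with made-up names `monus` and `int_hom`; this code is not from any of the quoted papers):

```python
# Truncated subtraction (monus) on the naturals: m ∸ n = max(m - n, 0).
def monus(m, n):
    return max(m - n, 0)

# Monoidal closure: Hom(x + y, z) = Hom(x, z ∸ y), i.e.
# x + y >= z  iff  x >= z ∸ y  (with Hom(n, m) inhabited iff n >= m).
assert all((x + y >= z) == (x >= monus(z, y))
           for x in range(15) for y in range(15) for z in range(15))

# Int construction on this preorder: objects are pairs (n, m), read as the
# signed integer n - m; a map (n, m) -> (p, q) exists iff n - m >= p - q,
# stated without subtraction as n + q >= p + m.
def int_hom(a, b):
    (n, m), (p, q) = a, b
    return n + q >= p + m

assert int_hom((5, 2), (3, 1))      # 3 >= 2
assert not int_hom((3, 1), (5, 2))  # 2 >= 3 fails
```

The exhaustive check over a finite range is of course no proof, but it makes the direction of the adjunction easy to get right.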
-- Peter Neel Krishnaswami wrote: > > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi Phil, > > I learned about this from Martin Escardo, and he told me the observation > about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized > Logic, and Closed Categories". > > One further observation I have not seen written down anywhere is that > the natural numbers also support a trace (ie, feedback) operator, since > it is also the case that x + y ? z + y if and only if x ? z. > > This means that the Geometry of Interaction construction can be applied > to the natural numbers, which yields a preorder of pairs of natural > numbers (n,m). In this case, two objects have a map between them if > one object is greater than the other, viewing each object (n, m) as the > signed integer n - m. > > As a result, I've sometimes wondered if the Int construction got its > name as a bit of a joke! > > Best, > Neel > > > On 27/03/18 21:02, Philip Wadler wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > > > > > Thanks to all for further replies, and especially to Neel for his > > observation about monus and monoidal closure. Neel, is this written down > > anywhere? Cheers, -- P > > > > . \ Philip Wadler, Professor of Theoretical Computer Science, > > . /\ School of Informatics, University of Edinburgh > > . / \ and Senior Research Fellow, IOHK > > . http://homepages.inf.ed.ac.uk/wadler/ > > > > On 27 March 2018 at 13:39, Neel Krishnaswami wrote: > > > >> On 27/03/18 15:27, Philip Wadler wrote: > >> > >> Thank you for all the replies, and for reminding me of bags, > >>> rigs/semirings, and modules. > >>> > >>> In a rig, what corresponds to the notion of monus? > >>> _?_ : ? ? ? ? ? > >>> m ? zero = m > >>> zero ? (suc n) = zero > >>> (suc m) ? (suc n) = m ? n > >>> Is monus defined for every rig? 
(While zero and suc are defined in every > >>> rig, it's not immediately obvious to me that definition by pattern > >>> matching > >>> as above makes sense in an arbitrary rig.) What are the algebraic > >>> properties of monus? > >>> > >> > >> The natural numbers Nat, ordered by , form a posetal symmetric > >> monoidal closed category. Here, the objects are the natural > >> numbers, and Hom(n, m) = { ? | n ? m } > >> > >> The tensor product is given by: > >> > >> x ? y ? x + y > >> > >> The monoidal closed structure arises from saturating subtraction: > >> > >> y ? z ? z - y > >> > >> It's easy to show that Hom(x ? y, z) = Hom(x, y ? z) since > >> x + y ? z if and only if x ? z - y. > >> > >> (This is all given in Lawvere's paper "Metric Spaces, Generalized > >> Logic, and Closed Categories".) > >> > >> So it seems like the natural thing to do would be to say that a rig > >> has a monus when it is monoidal closed with respect to its addition, > >> defining x ? z to hold when there exists a y such that x = y + z. > >> > >> Best, > >> Neel > >> > >> > >> > >> -- > >> Neel Krishnaswami > >> nk480 at cl.cam.ac.uk > >> > >> > >> > >> > >> The University of Edinburgh is a charitable body, registered in > >> Scotland, with registration number SC005336. > > From alejandro at diaz-caro.info Fri Apr 6 07:07:19 2018 From: alejandro at diaz-caro.info (=?UTF-8?Q?Alejandro_D=C3=ADaz=2DCaro?=) Date: Fri, 06 Apr 2018 11:07:19 +0000 Subject: [TYPES] System F and System T names Message-ID: Dear Type-theorists, Does anyone know where do the names System "F" and System "T" comes from? I am not asking who introduced those names (Girard System F, and G?del System T), but what the "F" and the "T" means. 
Kind regards, Alejandro -- http://diaz-caro.web.unq.edu.ar From neelakantan.krishnaswami at gmail.com Fri Apr 6 08:03:17 2018 From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami) Date: Fri, 6 Apr 2018 13:03:17 +0100 Subject: [TYPES] System F and System T names In-Reply-To: References: Message-ID: <4d2abfd0-b31c-6280-3fb7-5082448c0c1c@gmail.com> Hello, 1. In "The system F of variable types, fifteen years later", Girard remarks that there was no particular reason for the name F: However, in [3] it was shown that the obvious rules of conversion for this system, called F by chance, were converging. There may be another explanation in his thesis, but I haven't read it since unfortunately I am not fluent in French. 2. However, since I am semiliterate in German, I did take a look at Gödel's paper "Über eine noch nicht benützte Erweiterung des finiten Standpunktes", where System T (and the Dialectica interpretation for it) was introduced. He names this system in a parenthetical aside: Das heisst die Axiome dieses Systems (es werde T genannt) sind formal fast dieselben wie die der primitiv rekursiven Zahlentheorie [...][1] However, the previous page and a half were spent talking about the type structure of system T, so it is reasonable to guess that T stands for "types". But, no explicit reason is given in print. Best, Neel [1] "This means the axioms of this system (dubbed T) are nearly the same as those of primitive recursive number theory [...]" On 06/04/18 12:07, Alejandro Díaz-Caro wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear Type-theorists, > > Does anyone know where do the names System "F" and System "T" comes from? I > am not asking who introduced those names (Girard System F, and Gödel System > T), but what the "F" and the "T" means.
> > Kind regards, > Alejandro > From alejandro at diaz-caro.info Fri Apr 6 10:10:04 2018 From: alejandro at diaz-caro.info (=?UTF-8?Q?Alejandro_D=C3=ADaz=2DCaro?=) Date: Fri, 06 Apr 2018 14:10:04 +0000 Subject: [TYPES] System F and System T names In-Reply-To: <4d2abfd0-b31c-6280-3fb7-5082448c0c1c@gmail.com> References: <4d2abfd0-b31c-6280-3fb7-5082448c0c1c@gmail.com> Message-ID: Dear Neel, Thank you for the info. That is exactly what I was looking for. Kind regards, Alejandro On Fri, 6 Apr 2018 at 14:03 Neel Krishnaswami < neelakantan.krishnaswami at gmail.com> wrote: > Hello, > > 1. In "The system F of variable types, fifteen years later", Girard > remarks that there was no particular reason for the name F: > > However, in [3] it was shown that the obvious rules of conversion for > this system, called F by chance, were converging. > > There may be another explanation in his thesis, but I haven't read it > since unfortunately I am not fluent in French. > > 2. However, since I am semiliterate in German, I did take a look at > G?del's paper "?ber eine noch nicht ben?zte Erweiterung des finiten > Standpunktes", where System T (and the Dialectia interpretation for it) > was introduced. He names this system in a parenthetical aside: > > Das heisst die Axiome dieses Systems (es werde T genannt) sind formal > fast dieselben wie die der primitiv rekursiven Zahlentheorie [...][1] > > However, the previous page and a half were spent talking about the type > structure of system T, so it is reasonable to guess that T stands for > "types". But, no explicit reason is given in print. > > Best, > Neel > > > [1] "This means the axioms of this system (dubbed T) are nearly > the same as those of primitive recursive number theory [...]" > > On 06/04/18 12:07, Alejandro D?az-Caro wrote: > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Dear Type-theorists, > > > > Does anyone know where do the names System "F" and System "T" comes > from? 
I > > am not asking who introduced those names (Girard System F, and G?del > System > > T), but what the "F" and the "T" means. > > > > Kind regards, > > Alejandro > > > -- http://diaz-caro.web.unq.edu.ar From aaronngray.lists at gmail.com Sun May 20 12:05:20 2018 From: aaronngray.lists at gmail.com (Aaron Gray) Date: Sun, 20 May 2018 17:05:20 +0100 Subject: [TYPES] Generalised Covariant and Contravariant inference rules Message-ID: I am looking for a paper I cannot seem to find anymore that if I remember correctly gives or hints at a generalised scheme for describing covariant and contravariant type inference rules. Many thanks in advance, Aaron -- Aaron Gray Independent Open Source Software Engineer, Computer Language Researcher, Information Theorist, and amateur computer scientist. From valeria.depaiva at gmail.com Tue May 22 18:36:49 2018 From: valeria.depaiva at gmail.com (Valeria de Paiva) Date: Tue, 22 May 2018 15:36:49 -0700 Subject: [TYPES] What algebra am I thinking of? Message-ID: Hi Neel, Still on the subject of Lawvere's metric spaces and generalized logic, I think many people knew about it. I, for instance had a paper from 1991, which you can read in https://www.slideshare.net/valeria.depaiva/a-dialectica-model-of-the-lambek-calculus, which has the Lawvere example of the natural numbers N as a model of multiplicative linear logic LP, as a folklore example that Pino Rosolini told me about. But yes, I was not looking for traces then. cheers Valeria > > Neel Krishnaswami wrote: > > > > > > [ The Types Forum, http://lists.seas.upenn.edu/ > mailman/listinfo/types-list ] > > > > Hi Phil, > > > > I learned about this from Martin Escardo, and he told me the observation > > about monoidal closure is in Lawvere's paper "Metric Spaces, Generalized > > Logic, and Closed Categories". 
> > > > One further observation I have not seen written down anywhere is that > > the natural numbers also support a trace (ie, feedback) operator, since > > it is also the case that x + y ? z + y if and only if x ? z. > > > > This means that the Geometry of Interaction construction can be applied > > to the natural numbers, which yields a preorder of pairs of natural > > numbers (n,m). In this case, two objects have a map between them if > > one object is greater than the other, viewing each object (n, m) as the > > signed integer n - m. > > > > As a result, I've sometimes wondered if the Int construction got its > > name as a bit of a joke! > > > > Best, > > Neel > > > > > > On 27/03/18 21:02, Philip Wadler wrote: > > > [ The Types Forum, http://lists.seas.upenn.edu/ > mailman/listinfo/types-list ] > > > > > > > > > > > > Thanks to all for further replies, and especially to Neel for his > > > observation about monus and monoidal closure. Neel, is this written > down > > > anywhere? Cheers, -- P > > > > > > . \ Philip Wadler, Professor of Theoretical Computer Science, > > > . /\ School of Informatics, University of Edinburgh > > > . / \ and Senior Research Fellow, IOHK > > > . http://homepages.inf.ed.ac.uk/wadler/ > > > > > > On 27 March 2018 at 13:39, Neel Krishnaswami > wrote: > > > > > >> On 27/03/18 15:27, Philip Wadler wrote: > > >> > > >> Thank you for all the replies, and for reminding me of bags, > > >>> rigs/semirings, and modules. > > >>> > > >>> In a rig, what corresponds to the notion of monus? > > >>> _?_ : ? ? ? ? ? > > >>> m ? zero = m > > >>> zero ? (suc n) = zero > > >>> (suc m) ? (suc n) = m ? n > > >>> Is monus defined for every rig? (While zero and suc are defined in > every > > >>> rig, it's not immediately obvious to me that definition by pattern > > >>> matching > > >>> as above makes sense in an arbitrary rig.) What are the algebraic > > >>> properties of monus? 
> > >>> > > >> > > >> The natural numbers Nat, ordered by , form a posetal symmetric > > >> monoidal closed category. Here, the objects are the natural > > >> numbers, and Hom(n, m) = { ? | n ? m } > > >> > > >> The tensor product is given by: > > >> > > >> x ? y ? x + y > > >> > > >> The monoidal closed structure arises from saturating subtraction: > > >> > > >> y ? z ? z - y > > >> > > >> It's easy to show that Hom(x ? y, z) = Hom(x, y ? z) since > > >> x + y ? z if and only if x ? z - y. > > >> > > >> (This is all given in Lawvere's paper "Metric Spaces, Generalized > > >> Logic, and Closed Categories".) > > >> > > >> So it seems like the natural thing to do would be to say that a rig > > >> has a monus when it is monoidal closed with respect to its addition, > > >> defining x ? z to hold when there exists a y such that x = y + z. > > >> > > >> Best, > > >> Neel > > >> > > >> > > >> > > >> -- > > >> Neel Krishnaswami > > >> nk480 at cl.cam.ac.uk > > >> > > >> > > >> > > >> > > >> The University of Edinburgh is a charitable body, registered in > > >> Scotland, with registration number SC005336. > > > > > > > > ------------------------------ > > Message: 2 > Date: Fri, 06 Apr 2018 11:07:19 +0000 > From: Alejandro D?az-Caro > To: types > Subject: [TYPES] System F and System T names > Message-ID: > HaZRe_u1iaLOig at mail.gmail.com> > Content-Type: text/plain; charset="UTF-8" > > Dear Type-theorists, > > Does anyone know where do the names System "F" and System "T" comes from? I > am not asking who introduced those names (Girard System F, and G?del System > T), but what the "F" and the "T" means. 
> > Kind regards, > Alejandro > -- > > http://diaz-caro.web.unq.edu.ar > > > ------------------------------ > > Message: 3 > Date: Fri, 6 Apr 2018 13:03:17 +0100 > From: Neel Krishnaswami > To: Alejandro D?az-Caro , types > > Subject: Re: [TYPES] System F and System T names > Message-ID: <4d2abfd0-b31c-6280-3fb7-5082448c0c1c at gmail.com> > Content-Type: text/plain; charset=utf-8; format=flowed > > Hello, > > 1. In "The system F of variable types, fifteen years later", Girard > remarks that there was no particular reason for the name F: > > However, in [3] it was shown that the obvious rules of conversion for > this system, called F by chance, were converging. > > There may be another explanation in his thesis, but I haven't read it > since unfortunately I am not fluent in French. > > 2. However, since I am semiliterate in German, I did take a look at > G?del's paper "?ber eine noch nicht ben?zte Erweiterung des finiten > Standpunktes", where System T (and the Dialectia interpretation for it) > was introduced. He names this system in a parenthetical aside: > > Das heisst die Axiome dieses Systems (es werde T genannt) sind formal > fast dieselben wie die der primitiv rekursiven Zahlentheorie [...][1] > > However, the previous page and a half were spent talking about the type > structure of system T, so it is reasonable to guess that T stands for > "types". But, no explicit reason is given in print. > > Best, > Neel > > > [1] "This means the axioms of this system (dubbed T) are nearly > the same as those of primitive recursive number theory [...]" > > On 06/04/18 12:07, Alejandro D?az-Caro wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/ > mailman/listinfo/types-list ] > > > > Dear Type-theorists, > > > > Does anyone know where do the names System "F" and System "T" comes > from? I > > am not asking who introduced those names (Girard System F, and G?del > System > > T), but what the "F" and the "T" means. 
> > > > Kind regards, > > Alejandro > > > > > ------------------------------ > > Message: 4 > Date: Fri, 06 Apr 2018 14:10:04 +0000 > From: Alejandro D?az-Caro > To: Neel Krishnaswami > Cc: types > Subject: Re: [TYPES] System F and System T names > Message-ID: > gmail.com> > Content-Type: text/plain; charset="UTF-8" > > Dear Neel, > > Thank you for the info. That is exactly what I was looking for. > > Kind regards, > Alejandro > > On Fri, 6 Apr 2018 at 14:03 Neel Krishnaswami < > neelakantan.krishnaswami at gmail.com> wrote: > > > Hello, > > > > 1. In "The system F of variable types, fifteen years later", Girard > > remarks that there was no particular reason for the name F: > > > > However, in [3] it was shown that the obvious rules of conversion for > > this system, called F by chance, were converging. > > > > There may be another explanation in his thesis, but I haven't read it > > since unfortunately I am not fluent in French. > > > > 2. However, since I am semiliterate in German, I did take a look at > > G?del's paper "?ber eine noch nicht ben?zte Erweiterung des finiten > > Standpunktes", where System T (and the Dialectia interpretation for it) > > was introduced. He names this system in a parenthetical aside: > > > > Das heisst die Axiome dieses Systems (es werde T genannt) sind formal > > fast dieselben wie die der primitiv rekursiven Zahlentheorie [...][1] > > > > However, the previous page and a half were spent talking about the type > > structure of system T, so it is reasonable to guess that T stands for > > "types". But, no explicit reason is given in print. 
> > > > Best, > > Neel > > > > > > [1] "This means the axioms of this system (dubbed T) are nearly > > the same as those of primitive recursive number theory [...]" > > > > On 06/04/18 12:07, Alejandro D?az-Caro wrote: > > > [ The Types Forum, > > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > > > Dear Type-theorists, > > > > > > Does anyone know where do the names System "F" and System "T" comes > > from? I > > > am not asking who introduced those names (Girard System F, and G?del > > System > > > T), but what the "F" and the "T" means. > > > > > > Kind regards, > > > Alejandro > > > > > > -- > > http://diaz-caro.web.unq.edu.ar > > > ------------------------------ > > Message: 5 > Date: Sun, 20 May 2018 17:05:20 +0100 > From: Aaron Gray > To: Types list > Subject: [TYPES] Generalised Covariant and Contravariant inference > rules > Message-ID: > mail.gmail.com> > Content-Type: text/plain; charset="UTF-8" > > I am looking for a paper I cannot seem to find anymore that if I remember > correctly gives or hints at a generalised scheme for describing covariant > and contravariant type inference rules. > > Many thanks in advance, > > Aaron > -- > Aaron Gray > > Independent Open Source Software Engineer, Computer Language Researcher, > Information Theorist, and amateur computer scientist. 
> > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Types-list mailing list > Types-list at LISTS.SEAS.UPENN.EDU > https://LISTS.SEAS.UPENN.EDU/mailman/listinfo/types-list > > > ------------------------------ > > End of Types-list Digest, Vol 107, Issue 1 > ****************************************** > -- Valeria de Paiva http://vcvpaiva.github.io/ http://research.nuance.com/author/valeria-de-paiva/ http://www.cs.bham.ac.uk/~vdp/ From kirst at ps.uni-saarland.de Sun Jun 3 09:23:50 2018 From: kirst at ps.uni-saarland.de (Dominik Kirst) Date: Sun, 3 Jun 2018 15:23:50 +0200 Subject: [TYPES] Decidable equality of propositions implies proof irrelevance Message-ID: <2A6FF759-4981-41E0-AEB0-49837AB119AE@ps.uni-saarland.de> Dear Types community, studying Berardi?s 1996 proof that excluded middle (XM) implies proof irrelevance (PI) in CiC, I found that the assumption of XM can be weakened to (logically) decidable equality of propositions (DEP). Even weaker, it in fact suffices to assume a type conditional, which is definable from DEP. Has this fact been known before and is there work on related ideas? I have only found the overview of propositions independent from CiC/Coq in the Coq wiki (https://github.com/coq/coq/wiki/The-Logic-of-Coq ), where no implication from decidable equality to PI is indicated. The proof I discovered uses Hedberg?s 1998 result to derive PI for equality proofs of propositions from DEP and then proceeds by constructing the type conditional and following the remaining structure of Berardi?s proof. If the fact DEP -> PI has been established before: what are the known proofs and, in particular, are there proofs that do not require Hedberg?s result explicitly? 
Kind regards, Dominik ----------------------------------------- Programming Systems Lab Saarland University http://www.ps.uni-saarland.de/~kirst/ From gabriel.scherer at gmail.com Sat Jun 23 13:09:35 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Sat, 23 Jun 2018 19:09:35 +0200 Subject: [TYPES] meta thread on types-announces filtering of announces Message-ID: Dear types list, A new thread on academic announces moderation on coq-club ( https://sympa.inria.fr/sympa/arc/coq-club/2018-06/msg00104.html ) reminded me that some may be interested in discussing the types-announce moderation policy. (I apologize for the people in types-discuss who do not subscribe to types-announce for the noise. I don't think there is much to discuss about types-list moderation, as essentially every message is interesting and is accepted.) (Reminder: I am the main moderator of both lists since November 2017 (see the full list of moderators at https://lists.seas.upenn.edu/mailman/listinfo/types-list ), so I was the one filtering almost all announces sent through the list.) The current types-announce policy is roughly: - off-topic announces are not transmitted, everything that seems relevant to type theory, programming language or formal verification is kept (one of those themes has to be of significant importance in the announce, not one checkbox among a bazillion hot topics) - calls for submissions are more important than calls for participations (except in no-submissions events like summer school); only one call-for-participation for conference, and none for workshops, if they have sent a call for submission on the list - position announces are transmitted when they are in-topic (or when they are generic but the sender comes from an in-topic community, implying that the topic will be considered), no judgment on the importance of the position or the announce is attempted. Some numbers. There were 442 postings in 2018, about 2.5 per day. 
2200 people receive each types-announce message, and 300 get daily digests. I understand that people prefer to receive less emails. Here are some options I have considered, but currently there is none that I strongly prefer to the statu quo, so the plan is to roughly keep the current policy. 1. I could use a more subjective policy, letting through only announces that pass the filter above *and* that I find interesting. However, based on sampling the archives, this would not remove more than 5-10% of the announces, for a decrease in predictability and fairness. 2. We could stop propagating announcements for positions/jobs, which tend to have a smaller potentially-interested audience than international events. On the other hand, it does seem useful to have a place to publicize positions. This would filter out 10-15% of the announces. 3. We could only send one announce per event, instead of allowing re-sends. I have mixed feelings about this idea. In particular, I think that it is not a good idea to reject announces for deadline extensions: if someone learned of an event from our list, they would probably expect to learn of the extension from our list as well, and they may miss it if we don't propagate it. A lot of the events multi-announce (with at least a month delay in between), so this could decrease the traffic a bit more, maybe 30%. 4. We could restrict announcements of workshops that are affiliated with conferences, systematically asking for joint workshop calls instead of individual calls. We had a lot of traffic for the FLoC 2018 workshops, for example -- for the Calls for Participation, I decided to only let the joint call through. Workshops are 20% of our traffic, but not all of them are co-hosted. 5. I have had discussions with other moderators about splitting the list in smaller, more focused lists, for example one list "types -- and software verification" another "types -- and logic". 
Personally I think it's pretty clear on which sides most announces would fall, so that someone subscribed to a single one of those lists could expect a good 40% traffic reduction. But it's not clear how to organize such a transition, and how many people would not still subscribe to both lists. I'm open to other suggestions for filters -- they have to be easily implementable by a human, and justifiable to people whose announces get rejected. It is useful, to evaluate a proposition, to use the list archives ( all 2018 announces are at http://lists.seas.upenn.edu/pipermail/types-announce/2018/thread.html ) to get a quantitative estimation for how much of the traffic you would filter out. Any other comment on the moderation is also welcome -- please consider whether you should send it to the whole list or just me. Cheers From wadler at inf.ed.ac.uk Fri Jul 6 11:49:48 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 6 Jul 2018 12:49:48 -0300 Subject: [TYPES] Progress + Preservation = Evaluation Message-ID: Everyone on this list will be familiar with Progress and Preservation for terms in a typed calculus. Write ∅ ⊢ M : A to indicate that term M has type A for closed M. Progress. If ∅ ⊢ M : A then either M is a value or M —→ N, for some term N. Preservation. If ∅ ⊢ M : A and M —→ N then ∅ ⊢ N : A. It is easy to combine these two proofs into an evaluator. Write —↠ for the transitive and reflexive closure of —→. Evaluation. If ∅ ⊢ M : A, then for every natural number n, either M —↠ V, where V is a value and the reduction sequence has no more than n steps, or M —↠ N, where N is not a value and the reduction sequence has n steps. Evaluation implies that either M —↠ V or there is an infinite sequence M —→ M₁ —→ M₂ —→ ... that never reduces to a value; but this last result is not constructive, as deciding which of the two results holds is not decidable.
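The Evaluation statement above is, in effect, a fuel-bounded evaluator. Its shape is easy to render in any language; here is a toy sketch (in Python rather than Agda, for a made-up language of numeric literals and sums, purely as illustration):

```python
def is_value(t):
    # Values of the toy language are plain integers.
    return isinstance(t, int)

def step(t):
    """One small-step reduction; assumes t is a well-typed non-value."""
    op, l, r = t                      # t = ('add', l, r)
    if not is_value(l):
        return (op, step(l), r)       # reduce the left operand first
    if not is_value(r):
        return (op, l, step(r))       # then the right operand
    return l + r                      # both sides are values: add them

def evaluate(t, n):
    """Take at most n steps: ('value', v) or ('out_of_gas', partial_term)."""
    for _ in range(n):
        if is_value(t):
            return ('value', t)
        t = step(t)                   # Progress guarantees a step exists
    return ('value', t) if is_value(t) else ('out_of_gas', t)

# ((1 + 2) + 3) reaches 6 with enough gas; with too little gas we get
# back the partially reduced term instead of looping forever.
assert evaluate(('add', ('add', 1, 2), 3), 10) == ('value', 6)
assert evaluate(('add', ('add', 1, 2), 3), 1)[0] == 'out_of_gas'
```

Progress plays the role of `step` being total on well-typed non-values, and Preservation is what lets the loop keep going after each step.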
An Agda implementation of Evaluation provides an evaluator for terms: given a number n it will perform up to n steps of evaluation, stopping early if a value is reached. This is entirely obvious (at least in retrospect), but I have not seen it written down anywhere. For instance, this approach is not exploited in Software Foundations to evaluate terms (it uses a normalize tactic instead). I have used it in my draft textbook:

https://plfa.inf.ed.ac.uk

Questions: What sources in the literature should one cite for this technique? How well-known is it as folklore? Cheers, -- P

. \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/

From wjb at williamjbowman.com Fri Jul 6 15:47:24 2018 From: wjb at williamjbowman.com (William J. Bowman) Date: Fri, 6 Jul 2018 15:47:24 -0400 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: References: Message-ID: <20180706194724.GA20492@williamjbowman.com>

I think the canonical reference for this is Wright and Felleisen 1994, https://doi.org/10.1006/inco.1994.1093. They give a history in the introduction showing many prior versions, although most don't talk about evaluation directly. They call "evaluation" "type soundness", in various forms, where the evaluation function represents the ground-truth semantics for the language, and the type system is a syntactic proof system. "Type soundness" is meant to indicate that the syntactic semantics (proof) is sound w.r.t. the evaluation semantics (truth). -- William J.
Bowman

On Fri, Jul 06, 2018 at 12:49:48PM -0300, Philip Wadler wrote:
> [...]

From wadler at inf.ed.ac.uk Fri Jul 6 16:49:44 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 6 Jul 2018 17:49:44 -0300 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: References: Message-ID:

My previous question must not have been clear, since I received two answers from top-notch researchers that missed my point. My question is not about progress and preservation or step indexing. My question is about the following observation: Progress + Preservation = Evaluation. From a constructive proof of progress and preservation, we can assemble a constructive evaluator which, for any given number n, will either reduce the term to a value (in less than n steps) or show that after n steps of reduction the term is not a value. [The key word here is *constructive*. Once one has proved progress and preservation constructively, one has implemented a constructive evaluator for terms.] Pierce et al's text Software Foundations presents constructive proofs of progress and preservation. From these, one may immediately derive an evaluator for terms. But the text does not do so; instead it presents a separate normalisation tactic. (To be fair, that tactic is given prior to the proofs of progress and preservation. Nonetheless, after the proofs are given there is no observation that they might be applied to replace the normalisation tactic.) I also could not spot this observation in Harper's Practical Foundations for Programming Languages. Indeed, in every presentation I can recall, the act of proving progress and preservation is separated from the act of writing code that can evaluate a term, even though the former trivially implies the latter. (The one exception is my new textbook. It's located at plfa.inf.ed.ac.uk --- do have a look!)
To repeat my questions: What sources in the literature should one cite for this technique? How well-known is it as folklore? Cheers, -- P

. \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/

On 6 July 2018 at 12:49, Philip Wadler wrote:
> [...]

From pierre.courtieu at gmail.com Fri Jul 6 16:57:19 2018 From: pierre.courtieu at gmail.com (Pierre Courtieu) Date: Fri, 6 Jul 2018 22:57:19 +0200 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: References: Message-ID:

Hi, I think this is now well known as the "fuel" pattern. Ideally n is very big and computed lazily. I am not aware of a paper specifically about this trick; it is around as very useful folklore, I guess. Note however that Balaa and Bertot presented a more "complete" version of this by refining the trick: the function itself is defined by means of a proof that there exists a number N of iterations such that any greater number would give the same result, i.e. that N is sufficient fuel to reach the fixpoint. This mechanism is used in the current implementation of the "Function" command in Coq (when given a non structurally recursive function). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.601&rep=rep1&type=pdf Best, Pierre

Le ven. 6 juil. 2018 à 21:28, Philip Wadler a écrit :
> [...]

From wadler at inf.ed.ac.uk Fri Jul 6 17:18:02 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 6 Jul 2018 18:18:02 -0300 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: References: Message-ID:

Thanks, Pierre and Eelco. I'm well aware of "fuel" as folklore. For instance, I've heard Conor McBride describe fuel as an argument that, despite the guarantee that all programs terminate, Agda can effectively encode every recursive function. But that is different from my question; please see my second, clarifying email.
I had a look at the three references cited, and I don't believe they exploit constructive proofs of progress and preservation to define an evaluator. Cheers, -- P

. \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/

On 6 July 2018 at 17:57, Pierre Courtieu wrote:
> [...]

-------------- next part -------------- An embedded and charset-unspecified text was scrubbed...
Name: not available URL:

From Sam.Lindley at ed.ac.uk Fri Jul 6 18:23:18 2018 From: Sam.Lindley at ed.ac.uk (Sam Lindley) Date: Sat, 7 Jul 2018 00:23:18 +0200 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: References: Message-ID: <050f5d02-8b2a-8b4c-2fe2-299a049a5a98@ed.ac.uk>

You should certainly cite Conor McBride's MPC 2015 paper "Turing-Completeness Totally Free": https://personal.cis.strath.ac.uk/conor.mcbride/TotallyFree.pdf Sam

On 06/07/18 17:49, Philip Wadler wrote:
> [...]

-- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

From wadler at inf.ed.ac.uk Fri Jul 6 18:45:12 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 6 Jul 2018 19:45:12 -0300 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: <050f5d02-8b2a-8b4c-2fe2-299a049a5a98@ed.ac.uk> References: <050f5d02-8b2a-8b4c-2fe2-299a049a5a98@ed.ac.uk> Message-ID:

Thanks, Sam and Andreas, for the elegant examples of how to encode general recursion in a language that guarantees termination. Sam, am I right in thinking Conor's paper does not touch on the idea of combining constructive proofs of progress and preservation to derive a constructive evaluator? (But I could apply Conor's technique or a trampoline to express such an evaluator. Indeed, I considered using Delay for the evaluator in the textbook, but decided that supplying a step count was simpler, and avoided the need to explain coinduction.) Cheers, -- P

. \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/

On 6 July 2018 at 19:23, Sam Lindley wrote:
> [...]

From effectfully at gmail.com Fri Jul 6 19:41:48 2018 From: effectfully at gmail.com (Roman) Date: Sat, 7 Jul 2018 02:41:48 +0300 Subject: [TYPES] [Agda] Progress + Preservation = Evaluation In-Reply-To: References: <050f5d02-8b2a-8b4c-2fe2-299a049a5a98@ed.ac.uk> Message-ID:

> avoided the need to explain coinduction

Conor uses a constructive free monad, so coinduction is just one interpretation: you can avoid it and present only the fuel-driven execution semantics. Though, if that is the only one you need, why complicate things with free monads, indeed. There is a very remotely related topic to what you're asking, called Normalization by Completeness. Here are some slides: [1]. In essence, you can get normalization by composing Soundness with Completeness, which is kind of related, because Progress and Preservation are forms of Soundness and Completeness (I always forget which one is which). But there Soundness means Evaluation (to a value in some model) and Completeness means Reification, so I'm not really sure whether there is any relation at all. [1] http://www.cs.nott.ac.uk/~psztxa/talks/nbe09.pdf Best regards, Roman

From effectfully at gmail.com Sun Jul 8 12:39:37 2018 From: effectfully at gmail.com (Roman) Date: Sun, 8 Jul 2018 19:39:37 +0300 Subject: [TYPES] What is the state of the art of call-by-value computing?
Message-ID:

Hi all, there are plenty of machines for evaluating terms in a call-by-value language: CEK, CESK, KAM, Zinc, etc. Is there any overview that says which machine to choose for your particular setup? Any efficiency evaluation of the various machines? Benchmarks would be great. Thank you. Best regards, Roman

From frederic.blanqui at inria.fr Sun Jul 8 13:58:23 2018 From: frederic.blanqui at inria.fr (Frédéric Blanqui) Date: Sun, 8 Jul 2018 19:58:23 +0200 Subject: [TYPES] What is the state of the art of call-by-value computing? In-Reply-To: References: Message-ID:

See for instance: B. Accattoli, B. Barras. Environments and the Complexity of Abstract Machines. PPDP 2017.

Le 08/07/2018 à 18:39, Roman a écrit :
> [...]

From effectfully at gmail.com Tue Jul 10 08:36:07 2018 From: effectfully at gmail.com (Roman) Date: Tue, 10 Jul 2018 15:36:07 +0300 Subject: [TYPES] What is the state of the art of call-by-value computing? In-Reply-To: References: Message-ID:

Thank you Frédéric, this is what I need; it also contains the relevant references.

From monnier at IRO.UMontreal.CA Tue Jul 10 17:50:11 2018 From: monnier at IRO.UMontreal.CA (Stefan Monnier) Date: Tue, 10 Jul 2018 17:50:11 -0400 Subject: [TYPES] Transfinite universe levels Message-ID:

Most type theory-based tools nowadays are based on a calculus with an infinite hierarchy of predicative universes. Some of them, such as Agda, also allow abstraction over those levels, so we end up with a PTS along the lines of:

ℓ ::= z | s ℓ | ℓ₁ ⊔ ℓ₂
| l      (where `l` is a level variable)

S = { Type ℓ, Type ω, SortLevel }
A = { Type ℓ : Type (ℓ + 1), TypeLevel : SortLevel }
R = { (Type ℓ₁, Type ℓ₂, Type (ℓ₁ ⊔ ℓ₂)),
      (SortLevel, Type ℓ, Type ℓ),
      (SortLevel, Type ω, Type ω) }

I was wondering if there's been work to push this to larger ordinals, to allow maybe rules like

      (Type ω, Type ω, Type (ω + ω))

I.e. "deep universe polymorphism" Stefan

From palmgren at math.su.se Tue Jul 10 18:40:12 2018 From: palmgren at math.su.se (Erik Palmgren) Date: Wed, 11 Jul 2018 00:40:12 +0200 Subject: [TYPES] Transfinite universe levels In-Reply-To: References: Message-ID:

Hi, in my PhD thesis (Uppsala University 1991) I consider transfinite hierarchies of type universes in Martin-Löf type theory. Some of this was published in a 1998 proceedings: "On universes in type theory", in: G. Sambin and J. Smith (eds.), Twenty Five Years of Constructive Type Theory, Oxford Logic Guides, Oxford University Press 1998, 191-204 (refereed collection of papers). A near-final version in PDF: http://staff.math.su.se/palmgren/publications.html http://www2.math.uu.se/~palmgren/universe.pdf

The so-called superuniverses (V,S) (which admit transfinite iterations of universes) have been formalized in Agda. Here is an example of such a formalization:

---

-- building a universe above a family (A,B)
mutual
  data U (A : Set) (B : A -> Set) : Set where
    n₀  : U A B
    n₁  : U A B
    ix  : U A B
    lf  : A -> U A B
    _⊕_ : U A B -> U A B -> U A B
    σ   : (a : U A B) -> (T a -> U A B) -> U A B
    π   : (a : U A B) -> (T a -> U A B) -> U A B
    n   : U A B
    w   : (a : U A B) -> (T a -> U A B) -> U A B
    i   : (a : U A B) -> T a -> T a -> U A B

  T : {A : Set} {B : A -> Set} -> U A B -> Set
  T n₀ = N₀
  T n₁ = N₁
  T {A} {B} ix = A
  T {A} {B} (lf a) = B a
  T (a ⊕ b) = T a + T b
  T (σ a b) = Σ (T a) (\x -> T (b x))
  T (π a b) = Π (T a) (\x -> T (b x))
  T n = N
  T (w a b) = W (T a) (\x -> T (b x))
  T (i a b c) = I (T a) b c

-- the superuniverse (V, S): a universe which is closed also
-- under the above universe-building operation
mutual
  data V : Set where
    n₀  : V
    n₁  : V
    _⊕_ : V -> V -> V
    σ   : (a : V) -> (S a -> V) -> V
    π   : (a : V) -> (S a -> V) -> V
    n   : V
    w   : (a : V) -> (S a -> V) -> V
    i   : (a : V) -> S a -> S a -> V
    u   : (a : V) -> (S a -> V) -> V
    t   : (a : V) -> (b : S a -> V) -> U (S a) (\x -> S (b x)) -> V

  S : V -> Set
  S n₀ = N₀
  S n₁ = N₁
  S (a ⊕ b) = S a + S b
  S (σ a b) = Σ (S a) (\x -> S (b x))
  S (π a b) = Π (S a) (\x -> S (b x))
  S n = N
  S (w a b) = W (S a) (\x -> S (b x))
  S (i a b c) = I (S a) b c
  S (u a b) = U (S a) (\x -> S (b x))
  S (t a b c) = T {S a} {\x -> S (b x)} c

--

There is also work by Anton Setzer (Swansea U) on type universes that go far beyond this: Mahlo universes, a kind of ultimate universe polymorphism. These are significant constructions in proof theory.

Best Regards, Erik

Erik Palmgren Prof Mathematical Logic Department of Mathematics Stockholm University

Den 2018-07-10 kl. 23:50, skrev Stefan Monnier:
> [...]
From flippa at flippac.org Tue Jul 10 19:13:36 2018 From: flippa at flippac.org (Philippa Cowderoy) Date: Wed, 11 Jul 2018 00:13:36 +0100 Subject: [TYPES] Transfinite universe levels In-Reply-To: References: Message-ID: <5124bd78-9d51-23a0-7a90-fac646e344be@flippac.org>

I've certainly meant to play with that, especially for reflection purposes. I never actually did, though, and I'd be interested to know if someone else has.

On 10/07/2018 22:50, Stefan Monnier wrote:
> [...]

From gabriel.scherer at gmail.com Wed Jul 11 03:42:17 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Wed, 11 Jul 2018 08:42:17 +0100 Subject: [TYPES] Transfinite universe levels In-Reply-To: References: Message-ID:

There are several reasons to be interested in universe hierarchies, among which (1) the question of the ordinal strength of the resulting type theory, (2) the ability to reflect a fragment of the type theory of interest into itself to study it internally, and (3) usability questions arising from the inconvenience, in practice, of predicative universes. Erik's answer gives a lot of information on (1) and (2), but I have a comment on (3).
If I understand correctly, Agda today has a finite hierarchy of universes that are types, and a Set-omega sort (that is not a type) to classify universe-polymorphic declarations. It is natural to think of extending this hierarchy with sorts (omega+n) (making Set-omega a type), and then one would extend quantification over universe levels to something of the form (forall (i < l)) for an ordinal level l. However, my intuition is that this would not deliver much usability benefit in practice. We use universe-polymorphic definitions to get genericity (from a programming point of view, we think of it as System F-style impredicative quantification), and then sometimes we find that we cannot compose two generic things, as doing so would require going above Set-omega. But in practice, with a transfinite hierarchy you would likely run into the same trouble: if two independent generic definitions quantify in the strongest possible way, they clash, and otherwise one can be used under the other but not conversely. When you develop in the large, you would still need to synchronize universe constraints among the various libraries to compose them in the way you intend; in fact, it is likely that in practice you could (given a coherent policy for universe numbering among the libraries of your development) lift such a development back to finite universes equipped with finite bounded quantification. For this reason, I have generally considered that extending the ordinal hierarchy you have access to isn't actually so important. If you don't ask for ordinals that are very strong (say, if you are happy with levels of the form m*omega+k, or below epsilon0), my understanding is that the metatheory would not be harder than it is today with finite universes and universe quantification, but the usability of the resulting system also wouldn't be very different.
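To make the "levels of the form m*omega+k" concrete, here is a small Python model (entirely illustrative; the encoding of a level as a pair (m, k) and the names `suc`, `lub`, and `rule` are my own, not Agda's):

```python
# A level omega*m + k is encoded as the pair (m, k).

def suc(l):
    # Successor only bumps the finite part: s(omega*m + k) = omega*m + (k+1).
    m, k = l
    return (m, k + 1)

def lub(l1, l2):
    # Least upper bound of two such ordinals is lexicographic max on
    # (omega coefficient, finite part); for m = 0 this is max on naturals.
    return max(l1, l2)

def rule(l1, l2):
    # The sort of a Pi-type with domain in Type l1 and codomain in Type l2,
    # following the (Type l1, Type l2, Type (l1 lub l2)) rule.
    return lub(l1, l2)
```

Note how any finite level is absorbed by omega, e.g. lub((1, 0), (0, 42)) is (1, 0): this is the clash described above, where a definition quantifying "in the strongest possible way" dominates anything composed with it.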
On Tue, Jul 10, 2018 at 11:00 PM Stefan Monnier wrote:
> [...]

From gabriel.scherer at gmail.com Thu Jul 12 07:46:42 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Thu, 12 Jul 2018 12:46:42 +0100 Subject: [TYPES] Max on terminology: "Graduality" instead of "Gradual Guarantee" Message-ID:

Dear types-list, this week at FSCD I attended an excellent talk by Max New (see abstract and link at the end of this email) where he proposed the name "graduality" for what we usually call the "gradual guarantee", to get a stronger correspondence with the name "parametricity": we don't say "parametric guarantee". I like the suggestion, and I would suggest adopting it!

P.S.: It is interesting that graduality experts sometimes distinguish "static graduality" from "dynamic graduality" -- there are different things that could be called "dynamic parametricity", and they are being actively researched.

Call-by-name Gradual Type Theory Max S. New, Daniel R. Licata, 2018 https://arxiv.org/abs/1802.00061

This terminology is also used in a follow-up paper: Graduality from Embedding-projection Pairs Max S.
New, Amal Ahmed, 2018 https://arxiv.org/abs/1807.02786 From dan.doel at gmail.com Thu Jul 12 23:31:25 2018 From: dan.doel at gmail.com (Dan Doel) Date: Thu, 12 Jul 2018 23:31:25 -0400 Subject: [TYPES] Transfinite universe levels In-Reply-To: References: Message-ID: To add to this a bit, the universes in Agda are actually (at least) Mahlo universes. If you find the right paper by Setzer, it's not too hard to take the definition of a Mahlo universe and write down something in Agda that exhibits, say, Set as Mahlo (the sort of 'impredicative' definition of Mahlo is the one you're looking for). But also, Setzer has papers on universes that go _beyond_ Mahlo, to incorporate (and understand) similar strong concepts from proof theory. I forget if it's the Pi_3 universe or the Pi^1_1 universe (his page of papers mentions both, and I don't remember enough details). But my understanding is that it is a recursion-theoretic analogue of weakly compact cardinals (whereas Mahlo universes/ordinals are related to Mahlo cardinals). I don't think Agda's universes are this strong, though I really don't understand well enough to say for sure. -- Dan On Tue, Jul 10, 2018 at 6:40 PM, Erik Palmgren wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > > Hi > > In my PhD thesis (Uppsala University 1991) I consider transfinite > hierarchies of type universes in Martin-Löf type theory. Some of this was > published in a 1998 proceedings: > > "On universes in type theory, in: G. Sambin and J. Smith (eds.) Twenty > Five Years of Constructive Type Theory. Oxford Logic Guides, Oxford > University Press 1998, 191-204 (refereed collection of papers). Near final > version in PDF." > > http://staff.math.su.se/palmgren/publications.html > > http://www2.math.uu.se/~palmgren/universe.pdf > > The so-called superuniverses (V,S) (which admit transfinite iterations of > universes) have been formalized in Agda. 
Here is an example of such a > formalization: > > --- > > -- building a universe above a family (A,B) > > mutual > data U (A : Set) (B : A -> Set) : Set where > n₀ : U A B > n₁ : U A B > ix : U A B > lf : A -> U A B > _⊕_ : U A B -> U A B -> U A B > σ : (a : U A B) -> (T a -> U A B) -> U A B > π : (a : U A B) -> (T a -> U A B) -> U A B > n : U A B > w : (a : U A B) -> (T a -> U A B) -> U A B > i : (a : U A B) -> T a -> T a -> U A B > > T : {A : Set} {B : A -> Set} -> U A B -> Set > T n₀ = N₀ > T n₁ = N₁ > T {A} {B} ix = A > T {A} {B} (lf a) = B a > T (a ⊕ b) = T a + T b > T (σ a b) = Σ (T a) (\x -> T (b x)) > T (π a b) = Π (T a) (\x -> T (b x)) > T n = N > T (w a b) = W (T a) (\x -> T (b x)) > T (i a b c) = I (T a) b c > > -- the super universe (V, S): a universe which is closed also > -- under the above universe building operation > > mutual > data V : Set where > n₀ : V > n₁ : V > _⊕_ : V -> V -> V > σ : (a : V) -> (S a -> V) -> V > π : (a : V) -> (S a -> V) -> V > n : V > w : (a : V) -> (S a -> V) -> V > i : (a : V) -> S a -> S a -> V > u : (a : V) -> (S a -> V) -> V > t : (a : V) -> (b : S a -> V) -> U (S a) (\x -> S (b x)) -> V > > > S : V -> Set > S n₀ = N₀ > S n₁ = N₁ > S (a ⊕ b) = S a + S b > S (σ a b) = Σ (S a) (\x -> S (b x)) > S (π a b) = Π (S a) (\x -> S (b x)) > S n = N > S (w a b) = W (S a) (\x -> S (b x)) > S (i a b c) = I (S a) b c > S (u a b) = U (S a) (\x -> S (b x)) > S (t a b c) = T {S a} {\x -> S (b x)} c > -- > > There is also work by Anton Setzer (Swansea U) on type universes that go > far beyond this, Mahlo universes - a kind of ultimate universe > polymorphism. These are significant constructions in proof theory. > > > Best Regards > > Erik > > > Erik Palmgren > Prof Mathematical Logic > Department of Mathematics > Stockholm University > > > > > Den 2018-07-10 kl. 
23:50, skrev Stefan Monnier: > >> [ The Types Forum, http://lists.seas.upenn.edu/ma >> ilman/listinfo/types-list ] >> >> >> Most type theory-based tools nowadays are based on a calculus with an >> infinite hierarchy of predicative universes. Some of them, such as >> Agda, also allow abstraction over those levels, so we end up with a PTS >> along the lines of: >> >> ? ::= z | s ? | ?? ? ?? | l (where `l` is a level variable) >> >> S = { Type ?, Type ?, SortLevel } >> A = { Type ? : Type (? + 1), TypeLevel : SortLevel } >> R = { (Type ??, Type ??, Type (?? ? ??)), >> (SortLevel, Type ?, Type ?), >> (SortLevel, Type ?, Type ?) } >> >> I was wondering if there's been work to push this to larger ordinals, to >> allow maybe rules like >> >> (Type ?, Type ?, Type (???)) >> >> I.e. "deep universe polymorphism" >> >> >> Stefan >> > From oleg at okmij.org Wed Jul 18 05:03:24 2018 From: oleg at okmij.org (Oleg) Date: Wed, 18 Jul 2018 18:03:24 +0900 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: Message-ID: <20180718090324.GA3275@Magus.localnet> Phil Wadler wrote: > Progress + Preservation = Evaluation. > From a constructive proof of progress and preservation, we can assemble a > constructive evaluator which, for any given number n, will either reduce > the term to a value (in less than n steps) or show that after n steps of > reduction the term is not a value. [The key word here is *constructive*. > Once one has proved progress and preservation constructively, one has > implemented a constructive evaluator for terms.] > Indeed, in every presentation I can recall the act of proving progress and > preservation is separated from the act of writing code that can evaluate a > term, even though the former trivially implies the latter. Sorry for being late to the party... I can cite quite a few cases where to prove progress and preservation, one first implements an evaluator -- and the type soundness comes as a `side-effect' of writing it. 
To wit: one writes an evaluator of the signature (in Twelf terms) eval : {E:exp T} non-value E -> exp T -> type. The signature expresses the subject reduction (preservation) property. Thus it becomes impossible to write cases of eval that do not have subject reduction -- the typechecker will complain otherwise. Once we have finished writing all cases of eval (which can then be used immediately to evaluate terms) we can ask Twelf if `eval' is total. If Twelf agrees, we get the proof of progress. This is often called the `intrinsic encoding' of DSLs and has been quite popular in the Twelf community. The following web page from about 10 years ago http://okmij.org/ftp/formalizations/index.html#intrinsic summarizes this technique and provides the references I could find. It also shows several less-trivial progress-preservation proofs done in this style, in particular, a calculus with both staging and delimited control. The eval function was indeed used in practice, to evaluate sample terms (and to run all the examples that are mentioned in our papers about that calculus). The technique is closely related to `tagless-final', as mentioned on that web page. Writing a tagless-final evaluator indeed includes essentially proving subject reduction in the process (even if one is actually interested in running programs rather than doing proofs). From wadler at inf.ed.ac.uk Thu Jul 19 02:47:29 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Thu, 19 Jul 2018 08:47:29 +0200 Subject: [TYPES] Progress + Preservation = Evaluation In-Reply-To: <20180718090324.GA3275@Magus.localnet> References: <20180718090324.GA3275@Magus.localnet> Message-ID: Thanks, Oleg! I agree that renaming "progress" to "eval" corresponds to the same insight, though expressed differently: instead of observing that progress and preservation give you evaluation, you are observing that evaluation gives you progress! Were any of the items listed on your page published (elsewhere than on the page itself)? Cheers, -- P . 
\ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/ On 18 July 2018 at 11:03, Oleg wrote: > > Phil Wadler wrote: > > Progress + Preservation = Evaluation. > > From a constructive proof of progress and preservation, we can assemble a > > constructive evaluator which, for any given number n, will either reduce > > the term to a value (in less than n steps) or show that after n steps of > > reduction the term is not a value. [The key word here is *constructive*. > > Once one has proved progress and preservation constructively, one has > > implemented a constructive evaluator for terms.] > > > Indeed, in every presentation I can recall the act of proving progress > and > > preservation is separated from the act of writing code that can evaluate > a > > term, even though the former trivially implies the latter. > > Sorry for being late to the party... > > I can cite quite a few cases where to prove progress and > preservation, one first implements an evaluator -- and the type > soundness comes as a `side-effect' of writing it. To wit: one writes > an evaluator of the signature (in Twelf terms) > eval : {E:exp T} non-value E -> exp T -> type. > The signature expresses subject reduction (preservation) property. > Thus it becomes impossible to write cases of eval that do not > have subject reduction -- the typechecker will complain otherwise. Once we > have finished writing all cases of eval (which can then be used immediately > to evaluate terms) we can ask Twelf if `eval' is total. If Twelf agrees, > we get the proof of progress. This is often called `intrinsic > encoding' of DSLs and has been quite popular in Twelf community. 
> > The following web page from about 10 years ago > > http://okmij.org/ftp/formalizations/index.html#intrinsic > > summarizes this techniques and provides the references I could find. It > also shows several less-trivial progress-preservation proofs done in > this style, in particular, calculus with both staging and delimited > control. The eval function was indeed used in practice, to evaluate > sample terms (and run all the examples that are mentioned in our > papers about that calculus). > > The technique is closely related to `tagless-final', as mentioned on > that web page. Writing a tagless-final evaluator indeed includes > essentially proving subject reduction in the process (even if one > actually is interested in running programs rather than doing proofs). > > From grosu at illinois.edu Sun Sep 16 18:29:39 2018 From: grosu at illinois.edu (Rosu, Grigore) Date: Sun, 16 Sep 2018 22:29:39 +0000 Subject: [TYPES] models of untyped lambda calculus Message-ID: <0A9D1F138D97BC4F9ACD513FCDC400178E72E3E4@CITESMBX4.ad.uillinois.edu> Are there conventional models of untyped lambda calculus for which equational deduction with alpha+beta is *complete*? By conventional models I mean ones where lambda and application are interpreted as functions. The Hindley-Longo style models are complete but non-conventional. On the other hand, graph models are conventional but incomplete. 
Thank you, Grigore From grosu at illinois.edu Thu Sep 20 20:22:20 2018 From: grosu at illinois.edu (Rosu, Grigore) Date: Fri, 21 Sep 2018 00:22:20 +0000 Subject: [TYPES] models of untyped lambda calculus In-Reply-To: <0A9D1F138D97BC4F9ACD513FCDC400178E72E3E4@CITESMBX4.ad.uillinois.edu> References: <0A9D1F138D97BC4F9ACD513FCDC400178E72E3E4@CITESMBX4.ad.uillinois.edu> Message-ID: <0A9D1F138D97BC4F9ACD513FCDC400178E7452A1@CITESMBX4.ad.uillinois.edu> Still looking for an answer to this question. Thank you, Mariangiola Dezani, for referring us to open problem #22 in the TLCA list: http://tlca.di.unito.it/opltlca/. That was indeed what we initially had in mind, continuously complete CPOs. But now, seeing that the problem is still open, we are willing to weaken our requirements as much as possible provided the resulting structures can still be reasonably called "conventional models". Specifically, we want to avoid models which come with given interpretations of all the terms (satisfying constraints). Instead, we want models where the interpretation of `lambda x . e` under `rho` can be constructed from the interpretations of `e` under `rho[m/x]` for all appropriate `m` in `M`. Thank you, Grigore and Xiaohong ________________________________________ From: Types-list [types-list-bounces at LISTS.SEAS.UPENN.EDU] on behalf of Rosu, Grigore [grosu at illinois.edu] Sent: Sunday, September 16, 2018 5:29 PM To: types-list at lists.seas.upenn.edu Subject: [TYPES] models of untyped lambda calculus [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] Are there conventional models of untyped lambda calculus for which equational deduction with alpha+beta is *complete*? By conventional models I mean ones where lambda and application are interpreted as functions. The Hindley-Longo style models are complete but non-conventional. On the other hand, graph models are conventional but incomplete. 
Thank you, Grigore From gabriel.scherer at gmail.com Fri Sep 21 03:00:28 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Fri, 21 Sep 2018 09:00:28 +0200 Subject: [TYPES] models of untyped lambda calculus In-Reply-To: <0A9D1F138D97BC4F9ACD513FCDC400178E7452A1@CITESMBX4.ad.uillinois.edu> References: <0A9D1F138D97BC4F9ACD513FCDC400178E72E3E4@CITESMBX4.ad.uillinois.edu> <0A9D1F138D97BC4F9ACD513FCDC400178E7452A1@CITESMBX4.ad.uillinois.edu> Message-ID: Dear Grigore and Xiaohong, It may be helpful to make the question a bit more self-contained -- or even if we cannot help answer, some of us on the list would at least learn from the question. - I am not sure what "complete" means in your question, recalling the definition would help. - I am not familiar with the Hindley-Longo and graph models that you refer to, do you have some pointers to introductory presentations? (Intuitively I would expect models mapping lambda-terms to mathematical functions to respect "eta" (for the function type), and thus have stronger identifications than just alpha+beta.) On Fri, Sep 21, 2018 at 7:04 AM Rosu, Grigore wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Still looking for an answer to this question. Thank you, Mariangiola Dezani, for referring us to open problem #22 in the TLCA list: http://tlca.di.unito.it/opltlca/. That was indeed what we initially had in mind, continuously complete CPOs. But now, seeing that the problem is still open, we are willing to weaken our requirements as much as possible provided the resulting structures can still be reasonably called "conventional models". Specifically, we want to avoid models which come with given interpretations of all the terms (satisfying constraints). Instead, we want models where the interpretation of `lambda x . e` under `rho` can be constructed from the interpretations of `e` under `rho[m/x]` for all appropriate `m` in `M`. 
> > Thank you, > Grigore and Xiaohong > > > ________________________________________ > From: Types-list [types-list-bounces at LISTS.SEAS.UPENN.EDU] on behalf of Rosu, Grigore [grosu at illinois.edu] > Sent: Sunday, September 16, 2018 5:29 PM > To: types-list at lists.seas.upenn.edu > Subject: [TYPES] models of untyped lambda calculus > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Are there conventional models of untyped lambda calculus for which equational deduction with alpha+beta is *complete*? By conventional models I mean ones where lambda and application are interpreted as functions. > The Hindley-Longo style models are complete but non-conventional. On the other hand, graph models are conventional but incomplete. > > Thank you, > Grigore > From ameyer at mit.edu Fri Sep 21 12:45:18 2018 From: ameyer at mit.edu (Albert R Meyer) Date: Fri, 21 Sep 2018 12:45:18 -0400 Subject: [TYPES] models of untyped lambda calculus In-Reply-To: References: <0A9D1F138D97BC4F9ACD513FCDC400178E72E3E4@CITESMBX4.ad.uillinois.edu> <0A9D1F138D97BC4F9ACD513FCDC400178E7452A1@CITESMBX4.ad.uillinois.edu> Message-ID: perhaps my old paper, attached, is relevant here. Yours truly, Albert R. Meyer Professor of Computer Science MIT Department of Electrical Engineering and Computer Science Cambridge MA 02139 On Fri, Sep 21, 2018 at 3:04 AM Gabriel Scherer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Dear Grigore and Xiaohong, > > It may be helpful to make the question a bit more self-contained -- or > even if we cannot help answer, some of us on the list would at last > learn from the question. > - I am not sure what "complete" means in your question, recalling the > definition would help. > - I am not familiar with the Hindley-Longo and graph models that you > refer to, do you have some pointers to introductory presentations? 
> > (Intuitively I would expect models mapping lambda-terms to > mathematical functions to respect "eta" (for the function type), and > thus have stronger identifications than just alpha+beta.) > On Fri, Sep 21, 2018 at 7:04 AM Rosu, Grigore wrote: > > > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Still looking for an answer to this question. Thank you, Mariangiola > Dezani, for referring us to open problem #22 in the TLCA list: > http://tlca.di.unito.it/opltlca/. That was indeed what we initially had > in mind, continuously complete CPOs. But now, seeing that the problem is > still open, we are willing to weaken our requirements as much as possible > provided the resulting structures can still be reasonably called > "conventional models". Specifically, we want to avoid models which come > with given interpretations of all the terms (satisfying constraints). > Instead, we want models where the interpretation of `lambda x . e` under > `rho` can be constructed from the interpretations of `e` under `rho[m/x]` > for all appropriate `m` in `M`. > > > > Thank you, > > Grigore and Xiaohong > > > > > > ________________________________________ > > From: Types-list [types-list-bounces at LISTS.SEAS.UPENN.EDU] on behalf of > Rosu, Grigore [grosu at illinois.edu] > > Sent: Sunday, September 16, 2018 5:29 PM > > To: types-list at lists.seas.upenn.edu > > Subject: [TYPES] models of untyped lambda calculus > > > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Are there conventional models of untyped lambda calculus for which > equational deduction with alpha+beta is *complete*? By conventional models > I mean ones where lambda and application are interpreted as functions. > > The Hindley-Longo style models are complete but non-conventional. On > the other hand, graph models are conventional but incomplete. 
> > > > Thank you, > > Grigore > > > From gabriel.scherer at gmail.com Sat Sep 22 11:30:57 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Sat, 22 Sep 2018 17:30:57 +0200 Subject: [TYPES] models of untyped lambda calculus In-Reply-To: References: <0A9D1F138D97BC4F9ACD513FCDC400178E72E3E4@CITESMBX4.ad.uillinois.edu> <0A9D1F138D97BC4F9ACD513FCDC400178E7452A1@CITESMBX4.ad.uillinois.edu> Message-ID: The paper that Albert referred to is What is a model of the lambda calculus? Albert Meyer, 1982 > An elementary, purely algebraic definition of model for the untyped lambda > calculus is given. This definition is shown to be equivalent to the natural > semantic definition based on environments. These definitions of model are > consistent with, and yield a completeness theorem for, the standard axioms > for lambda convertibility. A simple construction of models for lambda > calculus is reviewed. The algebraic formulation clarifies the relation > between combinators and lambda terms. https://www.sciencedirect.com/science/article/pii/S0019995882800879 The fact that the only practical way to access a PDF of this work is from ScienceDirect (the version attached by Albert also comes from this website) should serve as a reminder to everyone to upload their paper on arXiv for proper open access and long-term archival: https://arxiv.org/ . You may thank yourself 36 years from now. On Sat, Sep 22, 2018 at 3:35 PM Artem Pelenitsyn wrote: > Dear Albert, > > I don't see anything attached to your mail, unfortunately. > > -- > Best wishes, > Artem Pelenitsyn > > On Fri, Sep 21, 2018, 2:22 PM Albert R Meyer wrote: > >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> perhaps my old paper, attached, is relevant here. >> Yours truly, >> >> Albert R. 
Meyer >> Professor of Computer Science >> MIT Department of Electrical Engineering and Computer Science >> Cambridge MA 02139 >> >> >> On Fri, Sep 21, 2018 at 3:04 AM Gabriel Scherer < >> gabriel.scherer at gmail.com> >> wrote: >> >> > [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list >> > ] >> > >> > Dear Grigore and Xiaohong, >> > >> > It may be helpful to make the question a bit more self-contained -- or >> > even if we cannot help answer, some of us on the list would at last >> > learn from the question. >> > - I am not sure what "complete" means in your question, recalling the >> > definition would help. >> > - I am not familiar with the Hindley-Longo and graph models that you >> > refer to, do you have some pointers to introductory presentations? >> > >> > (Intuitively I would expect models mapping lambda-terms to >> > mathematical functions to respect "eta" (for the function type), and >> > thus have stronger identifications than just alpha+beta.) >> > On Fri, Sep 21, 2018 at 7:04 AM Rosu, Grigore >> wrote: >> > > >> > > [ The Types Forum, >> > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> > > >> > > Still looking for an answer to this question. Thank you, Mariangiola >> > Dezani, for referring us to open problem #22 in the TLCA list: >> > http://tlca.di.unito.it/opltlca/. That was indeed what we initially >> had >> > in mind, continuously complete CPOs. But now, seeing that the problem >> is >> > still open, we are willing to weaken our requirements as much as >> possible >> > provided the resulting structures can still be reasonably called >> > "conventional models". Specifically, we want to avoid models which come >> > with given interpretations of all the terms (satisfying constraints). >> > Instead, we want models where the interpretation of `lambda x . e` under >> > `rho` can be constructed from the interpretations of `e` under >> `rho[m/x]` >> > for all appropriate `m` in `M`. 
>> > > >> > > Thank you, >> > > Grigore and Xiaohong >> > > >> > > >> > > ________________________________________ >> > > From: Types-list [types-list-bounces at LISTS.SEAS.UPENN.EDU] on behalf >> of >> > Rosu, Grigore [grosu at illinois.edu] >> > > Sent: Sunday, September 16, 2018 5:29 PM >> > > To: types-list at lists.seas.upenn.edu >> > > Subject: [TYPES] models of untyped lambda calculus >> > > >> > > [ The Types Forum, >> > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> > > >> > > Are there conventional models of untyped lambda calculus for which >> > equational deduction with alpha+beta is *complete*? By conventional >> models >> > I mean ones where lambda and application are interpreted as functions. >> > > The Hindley-Longo style models are complete but non-conventional. On >> > the other hand, graph models are conventional but incomplete. >> > > >> > > Thank you, >> > > Grigore >> > > >> > >> > From jeremy.siek at gmail.com Mon Oct 1 15:14:16 2018 From: jeremy.siek at gmail.com (Jeremy Siek) Date: Mon, 1 Oct 2018 15:14:16 -0400 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt. denotational semantics Message-ID: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> I'm interested in finding literature on proofs of correctness of closure conversion applied to the untyped lambda calculus with respect to a denotational semantics. (Bonus for mechanized proofs.) I've found papers that are a near miss for what I'm looking for, such as: * closure conversion proofs for the STLC (Chlipala PLDI 2007) * closure conversion proofs based on operational semantics (Minamide, Morrisett, and Harper POPL 1996, and many more) * correctness proofs using denotational semantics for compilers that don't do closure conversion, but instead compile to a machine that supports closure creation (the VLISP project and many more). Thanks in advance for pointers! 
Best regards, Jeremy From gabriel.scherer at gmail.com Mon Oct 1 17:26:31 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Mon, 1 Oct 2018 23:26:31 +0200 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt. denotational semantics In-Reply-To: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> References: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> Message-ID: Dear Jeremy, In what sense would closure conversion be correct for the untyped lambda-calculus? Writing in an untyped language allows me to use pair/tuple operations to access the closure environment, which was not previously possible on the non-closure-converted calculus. If your untyped target language (I think it should be made precise) contains a primitive representation of closures (not just pairs), you can prevent environment extraction, but then that corresponds to your third bullet of a target that "supports (explicit) closure creation". Best On Mon, Oct 1, 2018 at 11:11 PM Jeremy Siek wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > I?m interested in finding literature on proofs of correctness of closure > conversion > applied to the untyped lambda calculus with respect to a denotational > semantics. > (Bonus for mechanized proofs.) > > I?ve found papers that are a near miss for what I?m looking for, such as: > * closure conversion proofs for the STLC (Chlipala PLDI 2007) > * closure conversion proofs based on operational semantics > (Minamide, Morrisett, and Harper POPL 1996, and many more) > * correctness proofs using denotational semantics for compilers that > don?t do closure conversion, but instead compile to a machine that > supports closure creation (the VLISP project and many more). > > Thanks in advance for pointers! > > Best regards, > Jeremy > > From wjb at williamjbowman.com Mon Oct 1 17:28:33 2018 From: wjb at williamjbowman.com (William J. 
Bowman) Date: Mon, 1 Oct 2018 17:28:33 -0400 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt. denotational semantics In-Reply-To: References: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> Message-ID: <20181001212832.GI68875@williamjbowman.com> Gabriel, It sounds like you're describing a failure of full abstraction. I think the compiler could still satisfy "whole-program correctness", i.e., the translated (whole) program denotes the same value as the original, even without primitive closures. -- William J. Bowman On Mon, Oct 01, 2018 at 11:26:31PM +0200, Gabriel Scherer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear Jeremy, > > In what sense would closure conversion be correct for the untyped > lambda-calculus? Writing in an untyped language allows me to use pair/tuple > operations to access the closure environment, which was not previously > possible on the non-closure-converted calculus. If your untyped target > language (I think it should be made precise) contains a primitive > representation of closures (not just pairs), you can prevent environment > extraction, but then that corresponds to your third bullet of a target that > "supports (explicit) closure creation". > > Best > > On Mon, Oct 1, 2018 at 11:11 PM Jeremy Siek wrote: > > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > > ] > > > > I?m interested in finding literature on proofs of correctness of closure > > conversion > > applied to the untyped lambda calculus with respect to a denotational > > semantics. > > (Bonus for mechanized proofs.) 
> > > > I?ve found papers that are a near miss for what I?m looking for, such as: > > * closure conversion proofs for the STLC (Chlipala PLDI 2007) > > * closure conversion proofs based on operational semantics > > (Minamide, Morrisett, and Harper POPL 1996, and many more) > > * correctness proofs using denotational semantics for compilers that > > don?t do closure conversion, but instead compile to a machine that > > supports closure creation (the VLISP project and many more). > > > > Thanks in advance for pointers! > > > > Best regards, > > Jeremy > > > > From jeremy.siek at gmail.com Mon Oct 1 20:42:29 2018 From: jeremy.siek at gmail.com (Jeremy Siek) Date: Mon, 1 Oct 2018 20:42:29 -0400 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt. denotational semantics In-Reply-To: <20181001212832.GI68875@williamjbowman.com> References: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> <20181001212832.GI68875@williamjbowman.com> Message-ID: Hi Gabriel, William, Yes indeed, it was just whole program correctness that I had in mind, and not fully abstract compilation. -Jeremy > On Oct 1, 2018, at 5:28 PM, William J. Bowman wrote: > > Gabriel, > > It sounds like you're describing a failure of full abstraction. > I think the compiler could still satsify "whole-program correctness", i.e., the > translated (whole) program denotes the same value as the original, even without > primitive closures. > > -- > William J. Bowman > >> On Mon, Oct 01, 2018 at 11:26:31PM +0200, Gabriel Scherer wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Dear Jeremy, >> >> In what sense would closure conversion be correct for the untyped >> lambda-calculus? Writing in an untyped language allows me to use pair/tuple >> operations to access the closure environment, which was not previously >> possible on the non-closure-converted calculus. 
If your untyped target >> language (I think it should be made precise) contains a primitive >> representation of closures (not just pairs), you can prevent environment >> extraction, but then that corresponds to your third bullet of a target that >> "supports (explicit) closure creation". >> >> Best >> >>> On Mon, Oct 1, 2018 at 11:11 PM Jeremy Siek wrote: >>> >>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >>> ] >>> >>> I?m interested in finding literature on proofs of correctness of closure >>> conversion >>> applied to the untyped lambda calculus with respect to a denotational >>> semantics. >>> (Bonus for mechanized proofs.) >>> >>> I?ve found papers that are a near miss for what I?m looking for, such as: >>> * closure conversion proofs for the STLC (Chlipala PLDI 2007) >>> * closure conversion proofs based on operational semantics >>> (Minamide, Morrisett, and Harper POPL 1996, and many more) >>> * correctness proofs using denotational semantics for compilers that >>> don?t do closure conversion, but instead compile to a machine that >>> supports closure creation (the VLISP project and many more). >>> >>> Thanks in advance for pointers! >>> >>> Best regards, >>> Jeremy >>> >>> From neelakantan.krishnaswami at gmail.com Tue Oct 2 05:58:15 2018 From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami) Date: Tue, 2 Oct 2018 10:58:15 +0100 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt. denotational semantics In-Reply-To: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> References: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> Message-ID: <84ce1c04-6948-4c00-f0e6-e9a3669832e4@gmail.com> Hi Jeremy, One approach to closure conversion which I like very much is the observation that when you defunctionalize a normalization-by-evaluation algorithm, you get a translation into an explicit closure representation. 
I learned this from Chapter 3 of Andreas Abel's habilitation thesis [1], where he abstracts NbE from domains to partial applicative structures, and then shows that a closure calculus forms a partial applicative structure. This might be too operational for you, but since the line between operational and denotational approaches can get pretty blurry, perhaps it is of interest. [1] Best, Neel On 01/10/18 20:14, Jeremy Siek wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > I?m interested in finding literature on proofs of correctness of closure conversion > applied to the untyped lambda calculus with respect to a denotational semantics. > (Bonus for mechanized proofs.) > > I?ve found papers that are a near miss for what I?m looking for, such as: > * closure conversion proofs for the STLC (Chlipala PLDI 2007) > * closure conversion proofs based on operational semantics > (Minamide, Morrisett, and Harper POPL 1996, and many more) > * correctness proofs using denotational semantics for compilers that > don?t do closure conversion, but instead compile to a machine that > supports closure creation (the VLISP project and many more). > > Thanks in advance for pointers! > > Best regards, > Jeremy > From gabriel.scherer at gmail.com Tue Oct 2 06:43:53 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Tue, 2 Oct 2018 12:43:53 +0200 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt. denotational semantics In-Reply-To: References: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> <20181001212832.GI68875@williamjbowman.com> Message-ID: I think that the failure of full-abstraction limits the amount of "denotational semantics" you can use in a proof, in the following sense: natural/good denotational semantics tend to be fully abstract themselves, in the sense that observationally distinguishable terms have distinct semantics. 
For such a denotational semantics [[.]] on the target language, you cannot hope to prove a natural correctness result of the form

[[ closure-convert(t) ]] = from-lc([[ t ]])

where from-lc embeds the denotational semantics of the source term into the semantic space of the target calculus. I went back to look at Adam's article on STLC. His correctness statement is weaker: it relies on a heterogeneous logical relation between the source and target denotational semantics, with a definition of the form

[[ lam x. t ]] ~ closure-app-env[[ (env, lam ys x. u) ]] : A -> B
  := forall (v ~ v' \in [[A]]), [[ lam x. t ]] v ~ closure-app-env[[ (env, lam ys x. u) ]] v'

where closure-app-env computes the set-theoretic semantics of partially-applying the closure environment. This does not imply full-abstraction, as the denotational semantics [[ (env, lam ys x. u) ]] in the target may contain many more behaviors than closure-app-env[[ (env, lam ys x. u) ]] does. On Tue, Oct 2, 2018 at 2:42 AM Jeremy Siek wrote: > Hi Gabriel, William, > > Yes indeed, it was just whole program correctness that I had in mind, and > not fully abstract compilation. > > -Jeremy > > > On Oct 1, 2018, at 5:28 PM, William J. Bowman > wrote: > > > > Gabriel, > > > > It sounds like you're describing a failure of full abstraction. > > I think the compiler could still satisfy "whole-program correctness", > i.e., the > > translated (whole) program denotes the same value as the original, even > without > > primitive closures. > > > > -- > > William J. Bowman > > > >> On Mon, Oct 01, 2018 at 11:26:31PM +0200, Gabriel Scherer wrote: > >> [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >> > >> Dear Jeremy, > >> > >> In what sense would closure conversion be correct for the untyped > >> lambda-calculus? Writing in an untyped language allows me to use > pair/tuple > >> operations to access the closure environment, which was not previously > >> possible on the non-closure-converted calculus.
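Gabriel's environment-extraction point can be made concrete with a small sketch (my own illustration, not from the thread; the function names and the pair layout are assumptions): in an untyped target where a closure is just a pair of environment and code, any context can project out the captured environment, which no context of the source lambda calculus can observe.

```python
# Hand-closure-convert `lam x. x + n`: the free variable n is captured
# in an explicit environment tuple, and the code part takes the
# environment as an extra argument.  (Illustrative layout only.)

def make_closure(n):
    env = (n,)                          # captured free variables
    code = lambda env, x: x + env[0]    # body, now closed
    return (env, code)                  # the closure is just a pair

def apply_closure(clo, arg):
    env, code = clo
    return code(env, arg)

clo = make_closure(40)
print(apply_closure(clo, 2))   # 42, as the source term would compute

# In an untyped target with pair projections, a context can also do
# this, which no source-level context can:
leaked_env = clo[0]
print(leaked_env[0])           # 40: the captured n is observable
```

A target with primitive (opaque) closures would reject the last two lines, which is exactly the distinction Gabriel draws.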
If your untyped target > >> language (I think it should be made precise) contains a primitive > >> representation of closures (not just pairs), you can prevent environment > >> extraction, but then that corresponds to your third bullet of a target > that > >> "supports (explicit) closure creation". > >> > >> Best > >> > >>> On Mon, Oct 1, 2018 at 11:11 PM Jeremy Siek > wrote: > >>> > >>> [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list > >>> ] > >>> > >>> I'm interested in finding literature on proofs of correctness of > closure > >>> conversion > >>> applied to the untyped lambda calculus with respect to a denotational > >>> semantics. > >>> (Bonus for mechanized proofs.) > >>> > >>> I've found papers that are a near miss for what I'm looking for, such > as: > >>> * closure conversion proofs for the STLC (Chlipala PLDI 2007) > >>> * closure conversion proofs based on operational semantics > >>> (Minamide, Morrisett, and Harper POPL 1996, and many more) > >>> * correctness proofs using denotational semantics for compilers that > >>> don't do closure conversion, but instead compile to a machine that > >>> supports closure creation (the VLISP project and many more). > >>> > >>> Thanks in advance for pointers! > >>> > >>> Best regards, > >>> Jeremy > >>> > >>> > From jeremy.siek at gmail.com Wed Oct 3 08:12:02 2018 From: jeremy.siek at gmail.com (Jeremy Siek) Date: Wed, 3 Oct 2018 08:12:02 -0400 Subject: [TYPES] correctness of closure conversion for untyped lambda calculus wrt.
denotational semantics In-Reply-To: References: <971EA6A6-C9EB-4ECF-A408-F07241803349@gmail.com> <20181001212832.GI68875@williamjbowman.com> Message-ID: Hi Gabriel, Yes, one needs to relate the source and target denotations, and the relation that Adam uses is quite simple and natural:

n1 ≈_N n2 := n1 = n2
f1 ≈_{A \to N} f2 := forall x1 : P[A], forall x2 : C[A],
                     x1 ≈_A x2 implies f1 x1 ≈_N f2 x2

(PLDI 2007, section 9.1, page 63) I'm not seeing the definition that you listed... is that from another paper or the mechanization? Cheers, Jeremy On Tue, Oct 2, 2018 at 6:35 AM Gabriel Scherer wrote: > I think that the failure of full-abstraction limits the amount of > "denotational semantics" you can use in a proof, in the following sense: > natural/good denotational semantics tend to be fully abstract themselves, > in the sense that observationally distinguishable terms have distinct > semantics. For such a denotational semantics [[.]] on the target language, > you cannot hope to prove a natural correctness result of the form > > [[ closure-convert(t) ]] = from-lc([[ t ]]) > > where from-lc embeds the denotational semantics of the source term into the > semantic space of the target calculus. > > I went back to look at Adam's article on STLC. His correctness statement > is weaker: it relies on a heterogeneous logical relation between the source > and target denotational semantics, with a definition of the form > > [[ lam x. t ]] ~ closure-app-env[[ (env, lam ys x. u) ]] : A -> B > := forall (v ~ v' \in [[A]]), [[ lam x. t ]] v ~ closure-app-env[[ > (env, lam ys x. u) ]] v' > > where closure-app-env computes the set-theoretic semantics of > partially-applying the closure environment. > > This does not imply full-abstraction, as the denotational semantics [[ > (env, lam ys x. u) ]] in the target may contain many more behaviors than > closure-app-env[[ (env, lam ys x. u) ]] does.
> > On Tue, Oct 2, 2018 at 2:42 AM Jeremy Siek wrote: > >> Hi Gabriel, William, >> >> Yes indeed, it was just whole program correctness that I had in mind, and >> not fully abstract compilation. >> >> -Jeremy >> >> > On Oct 1, 2018, at 5:28 PM, William J. Bowman >> wrote: >> > >> > Gabriel, >> > >> > It sounds like you're describing a failure of full abstraction. >> > I think the compiler could still satisfy "whole-program correctness", >> i.e., the >> > translated (whole) program denotes the same value as the original, even >> without >> > primitive closures. >> > >> > -- >> > William J. Bowman >> > >> >> On Mon, Oct 01, 2018 at 11:26:31PM +0200, Gabriel Scherer wrote: >> >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> >> >> Dear Jeremy, >> >> >> >> In what sense would closure conversion be correct for the untyped >> >> lambda-calculus? Writing in an untyped language allows me to use >> pair/tuple >> >> operations to access the closure environment, which was not previously >> >> possible on the non-closure-converted calculus. If your untyped target >> >> language (I think it should be made precise) contains a primitive >> >> representation of closures (not just pairs), you can prevent >> environment >> >> extraction, but then that corresponds to your third bullet of a target >> that >> >> "supports (explicit) closure creation". >> >> >> >> Best >> >> >> >>> On Mon, Oct 1, 2018 at 11:11 PM Jeremy Siek >> wrote: >> >>> >> >>> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list >> >>> ] >> >>> >> >>> I'm interested in finding literature on proofs of correctness of >> closure >> >>> conversion >> >>> applied to the untyped lambda calculus with respect to a denotational >> >>> semantics. >> >>> (Bonus for mechanized proofs.)
>> >>> >> >>> I've found papers that are a near miss for what I'm looking for, such >> as: >> >>> * closure conversion proofs for the STLC (Chlipala PLDI 2007) >> >>> * closure conversion proofs based on operational semantics >> >>> (Minamide, Morrisett, and Harper POPL 1996, and many more) >> >>> * correctness proofs using denotational semantics for compilers that >> >>> don't do closure conversion, but instead compile to a machine that >> >>> supports closure creation (the VLISP project and many more). >> >>> >> >>> Thanks in advance for pointers! >> >>> >> >>> Best regards, >> >>> Jeremy >> >>> >> >>> >> > From bcpierce at cis.upenn.edu Thu Oct 25 06:32:56 2018 From: bcpierce at cis.upenn.edu (Benjamin C. Pierce) Date: Thu, 25 Oct 2018 06:32:56 -0400 Subject: [TYPES] Martin Hofmann memorial talk Message-ID: <1BC3BADD-D080-4C7E-932B-2678EBB6F6AB@cis.upenn.edu> Those who still feel the loss of Martin's passing may want to know about a memorial talk on his life and work from LICS in July. Video: https://www.youtube.com/watch?v=WpWt40l_uDk Slides only: http://www.cis.upenn.edu/~bcpierce/papers/MartinHofmannMemorial.pdf Slides + text of presentation: http://www.cis.upenn.edu/~bcpierce/papers/MartinHofmannMemorial.key - Benjamin From monnier at IRO.UMontreal.CA Thu Oct 25 17:46:47 2018 From: monnier at IRO.UMontreal.CA (Stefan Monnier) Date: Thu, 25 Oct 2018 17:46:47 -0400 Subject: [TYPES] Do erasable boxes form a monad? Message-ID: I'm playing with erasability in the sense of the Implicit Calculus of Constructions and bumping into problems: Say we create a datatype holding an erasable element. We could define it as follows using the impredicative encoding:

Erased : Type -> Type;
Erased t = (r : Type) -> (t ≡> r) -> r;

erase : (t : Type) ≡> t -> Erased t;
erase t x = λ r -> λ f -> f x;

where I use ≡> to denote the "implicit" abstraction (which I like to call "erasable", and yet others might favor "irrelevant").
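As a toy illustration of what erasure of "implicit" arguments does (my own sketch, not ICC itself; the term encoding below is invented), implicit binders and implicit arguments are simply dropped, so erasing a term that uses an implicitly-bound variable in an explicit position leaves that variable dangling, which is the failure that comes up with bind further down the thread.

```python
# Terms: ('var', x) | ('lam', x, body) | ('ilam', x, body)
#      | ('app', f, a) | ('iapp', f, a)
# 'ilam'/'iapp' are the implicit (erasable) binder and application.

def erase(t):
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':                      # explicit binder survives
        return ('lam', t[1], erase(t[2]))
    if tag == 'ilam':                     # implicit binder is dropped
        return erase(t[2])
    if tag == 'app':
        return ('app', erase(t[1]), erase(t[2]))
    if tag == 'iapp':                     # implicit argument is dropped
        return erase(t[1])
    raise ValueError(tag)

def free_vars(t, bound=()):
    tag = t[0]
    if tag == 'var':
        return set() if t[1] in bound else {t[1]}
    if tag in ('lam', 'ilam'):
        return free_vars(t[2], bound + (t[1],))
    if tag in ('app', 'iapp'):
        return free_vars(t[1], bound) | free_vars(t[2], bound)
    raise ValueError(tag)

# lam f. (implicit-lam x. f x): x is bound implicitly but used in an
# explicit position.
bad = ('lam', 'f', ('ilam', 'x', ('app', ('var', 'f'), ('var', 'x'))))
print(free_vars(bad))          # set(): well-scoped before erasure
print(free_vars(erase(bad)))   # {'x'}: x dangles after erasure
```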
With such a definition, we can implement functions such as:

erased_liftfun : Erased (t₁ -> t₂) -> (Erased t₁ -> Erased t₂);
erased_liftfun f x = case f
  | erase f' => case x
    | erase x' => erase (f' x');

[ where I write `case e of erase x => ...x...x...` instead of `e (λ x ≡> ...x...x...)` ]

But I can't seem to write the reverse function. More puzzlingly, I can't seem to write the monad bind operator. I.e. the below types seem not to be inhabited:

erased_bind : Erased t₁ -> (t₁ -> Erased t₂) -> Erased t₂;
erased_funlift : (Erased τ₁ -> Erased τ₂) -> Erased (τ₁ -> τ₂);

Am I just not trying hard enough, or is there something deeper? Would it break the consistency of the logic to add one of those as an axiom? Stefan From Jesper at sikanda.be Fri Oct 26 04:05:21 2018 From: Jesper at sikanda.be (Jesper Cockx) Date: Fri, 26 Oct 2018 10:05:21 +0200 Subject: [TYPES] Do erasable boxes form a monad? In-Reply-To: References: Message-ID: Hi Stefan, I'm not very sure about your impredicative encoding, but both your examples work if you define Erased in a more direct way in Agda extended with Prop (currently in the development version of Agda):

data Squash (A : Set) : Prop where
  squash : A → Squash A

record Box (B : Prop) : Set where
  constructor box
  field unbox : B
open Box

Erase : Set → Set
Erase A = Box (Squash A)

lift-squash : Squash (A → B) → Squash A → Squash B
lift-squash (squash f) (squash x) = squash (f x)

lift-erase : Erase (A → B) → Erase A → Erase B
lift-erase f x = box (lift-squash (unbox f) (unbox x))

return : A → Erase A
return x = box (squash x)

_>>=_ : Erase A → (A → Erase B) → Erase B
(box (squash x) >>= f) .unbox = f x .unbox

left-identity : (a : A) (f : A → Erase B) → return a >>= f ≡ f a
left-identity a f = refl

right-identity : (m : Erase A) → m >>= return ≡ m
right-identity m = refl

associativity : (m : Erase A) (f : A → Erase B) (g : B → Erase C) → (m >>= f) >>= g ≡ m >>= (λ x →
f x >>= g)
associativity m f g = refl

Since Agda's implementation of Prop is heavily based on the proposed sProp extension to Coq, I expect it would also work with that. So if you cannot implement it for your impredicative version, at least adding it as an axiom should not break consistency. -- Jesper On Fri, Oct 26, 2018 at 8:31 AM Stefan Monnier wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > I'm playing with erasability in the sense of the Implicit Calculus of > Constructions and bumping into problems: > > Say we create a datatype holding an erasable element. > We could define it as follows using the impredicative encoding: > > Erased : Type -> Type; > Erased t = (r : Type) -> (t ≡> r) -> r; > > erase : (t : Type) ≡> t -> Erased t; > erase t x = λ r -> λ f -> f x; > > where I use ≡> to denote the "implicit" abstraction (which I like to > call "erasable", and yet others might favor "irrelevant"). > > With such a definition, we can implement functions such as: > > erased_liftfun : Erased (t₁ -> t₂) -> (Erased t₁ -> Erased t₂); > erased_liftfun f x = case f > | erase f' => case x > | erase x' => erase (f' x'); > > [ where I write `case e of erase x => ...x...x...` instead of > `e (λ x ≡> ...x...x...)` ] > > But I can't seem to write the reverse function. > More puzzlingly, I can't seem to write the monad bind operator. > I.e. the below types seem not to be inhabited: > > erased_bind : Erased t₁ -> (t₁ -> Erased t₂) -> Erased t₂; > erased_funlift : (Erased τ₁ -> Erased τ₂) -> Erased (τ₁ -> τ₂); > > Am I just not trying hard enough, or is there something deeper? > Would it break the consistency of the logic to add one of those as > an axiom? > > > Stefan > From monnier at iro.umontreal.ca Fri Oct 26 15:55:16 2018 From: monnier at iro.umontreal.ca (Stefan Monnier) Date: Fri, 26 Oct 2018 15:55:16 -0400 Subject: [TYPES] Do erasable boxes form a monad?
In-Reply-To: (Jesper Cockx's message of "Fri, 26 Oct 2018 10:05:21 +0200") References: Message-ID: > I'm not very sure about your impredicative encoding, but both your examples > work if you define Erased in a more direct way in Agda extended with Prop > (currently in the development version of Agda): That's the thing: with Coq-style Prop's erasability I can define those operations, indeed. But in the context of ICC's erasability (which is not tied to Prop and is used to strengthen the conversion rule by checking beta-equivalence on the erased version of the code), it seems that those properties don't hold any more. > _>>=_ : Erase A → (A → Erase B) → Erase B > (box (squash x) >>= f) .unbox = f x .unbox In ICC this definition is rejected because the erasure of this code is

>>= f = f x

where `x` is a "dangling" free variable, because the left-hand-side argument is erased, but `f` takes a non-erasable argument so its argument is not erased. Stefan From renevestergaard at acm.org Thu Nov 8 15:38:55 2018 From: renevestergaard at acm.org (Rene Vestergaard) Date: Fri, 9 Nov 2018 05:38:55 +0900 Subject: [TYPES] "Proofs of life" In-Reply-To: References: <4DC19C14-4949-474E-B90A-ADDB711C88B1@acm.org> Message-ID: <84e889a1-c3c7-9a2f-17e9-633efe7c58f8@acm.org> "Proofs of life: molecular-biology reasoning simulates cell behaviors from first principles" is now available at http://arxiv.org/abs/1811.02478 The work springs from computer-verified reasoning and establishes wider utility by means of a reasoning-computation correspondence, including for long-standing critical problems in biology that have defied standard approaches. Sincerely, Rene PS! The relevance to the types list is in "reasoning-computation correspondence". The work is non-standard type theory. -----

ESSENCE
Mathematics-style correctness works for molecular biology and enables phenotype and life-cycle prediction from genotypes.
Formally, reductionist science involves constructive reasoning, i.e., executable simulation.

SIGNIFICANCE
Axiomatic reasoning provides an alternative perspective that allows us to address long-standing open problems in biology. Our approach is supported by meta-theory and likely applies to any reductionist discipline.

ABSTRACT
Science relies on external correctness: statistical analysis and reproducibility, with ready applicability but inherent false positives/negatives. Mathematics uses internal correctness: conclusions must be established by detailed reasoning, with high confidence and deep insights but not necessarily real-world significance. Here, we formalize the molecular-biology reasoning style; establish that it constitutes an executable first-principle theory of cell behaviors that admits predictive technologies, with a range of correctness guarantees; and show that we can fully account for the standard reference: Ptashne, A Genetic Switch. Everything works for principled reasons and is presented within an open-ended meta-theoretic framework that seemingly applies to any reductionist discipline. The framework is adapted from a century-long line of work on mathematical reasoning. The key step is to not admit reasoning based on an external notion of truth but work only with what can be justified from considered assumptions. For molecular biology, the induced theory involves the concurrent running/interference of molecule-coded elementary processes of physiology change over the genome. The life cycle of the single-celled monograph organism is predicted in molecular detail as the aggregate of the possible sequentializations of the coded-for processes. The difficult question of molecular coding, i.e., the specific means of gene regulation, is addressed via a detailed modeling methodology.
We establish a complementary perspective on science, complete with a proven correctness notion, and use it to make progress on long-standing and critical open problems in biology. From renevestergaard at acm.org Fri Nov 9 02:27:21 2018 From: renevestergaard at acm.org (Rene Vestergaard) Date: Fri, 9 Nov 2018 16:27:21 +0900 Subject: [TYPES] "Proofs of life" In-Reply-To: <38009791-9c9e-b0b5-e54f-cb486fda5b7a@acm.org> References: <4DC19C14-4949-474E-B90A-ADDB711C88B1@acm.org> <38009791-9c9e-b0b5-e54f-cb486fda5b7a@acm.org> Message-ID: <92a74cb5-8665-bb9e-23cc-be24b111efee@acm.org> I should have mentioned earlier that there's an accompanying video, with proof visualization and more: http://ceqea.sourceforge.net/extras/instructionalPoL.mp4 On 11/7/18 10:37 AM, Rene Vestergaard wrote: > "Proofs of life: molecular-biology reasoning simulates cell behaviors > from first principles" is now available at http://arxiv.org/abs/1811.02478 > > The work springs from computer-verified reasoning and establishes wider > utility by means of a reasoning-computation correspondence, including > for long-standing critical problems in biology that have defied standard > approaches. > > Sincerely, > Rene > > ----- > > ESSENCE > Mathematics-style correctness works for molecular biology and enables > phenotype and life-cycle prediction from genotypes. Formally, > reductionist science involves constructive reasoning, i.e., executable > simulation. > > SIGNIFICANCE > Axiomatic reasoning provides an alternative perspective that allows us > to address long-standing open problems in biology. Our approach is > supported by meta-theory and likely applies to any reductionist discipline. > > ABSTRACT > Science relies on external correctness: statistical analysis and > reproducibility, with ready applicability but inherent false > positives/negatives. 
Mathematics uses internal correctness: conclusions > must be established by detailed reasoning, with high confidence and deep > insights but not necessarily real-world significance. Here, we formalize > the molecular-biology reasoning style; establish that it constitutes an > executable first-principle theory of cell behaviors that admits > predictive technologies, with a range of correctness guarantees; and > show that we can fully account for the standard reference: Ptashne, A > Genetic Switch. Everything works for principled reasons and is presented > within an open-ended meta-theoretic framework that seemingly applies to > any reductionist discipline. The framework is adapted from a > century-long line of work on mathematical reasoning. The key step is to > not admit reasoning based on an external notion of truth but work only > with what can be justified from considered assumptions. For molecular > biology, the induced theory involves the concurrent running/interference > of molecule-coded elementary processes of physiology change over the > genome. The life cycle of the single-celled monograph organism is > predicted in molecular detail as the aggregate of the possible > sequentializations of the coded-for processes. The difficult question of > molecular coding, i.e., the specific means of gene regulation, is > addressed via a detailed modeling methodology. We establish a > complementary perspective on science, complete with a proven correctness > notion, and use it to make progress on long-standing and critical open > problems in biology. From mp at cs.stanford.edu Wed Nov 14 17:38:27 2018 From: mp at cs.stanford.edu (Marco Patrignani) Date: Wed, 14 Nov 2018 14:38:27 -0800 Subject: [TYPES] Equi recursive types: logical relations and intuition Message-ID: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> Dear types mailing list, I have some questions related to equi recursive types. 
** Background **

Together with my colleague Dominique Devriese (in cc) we were building a compiler from

STLC extended with a fix operator (and without recursive types)

down to

STLC with isorecursive types

in order to prove it fully abstract using a step-indexed logical relation. This proof goes through, but we noticed that it goes through exactly because of the extra step that a term of the form

unfold fold v

consumes while reducing to

v

Now if we were to replace iso recursive types in the target with equi recursive ones, that extra step would not be there and therefore the proof would fail. ( if you're familiar with these kinds of logical relations, there's a \later that "matches" this reduction step and reduces the amount of steps by 1 )

However, this seems a limitation that stems purely out of the proof technique. If we had something else to "decrease" rather than an operational step, the proof would go through.

** Q1 : logical relation for equi-recursive types **
Looking at logical relations for languages with recursive types, they all seem to have iso-recursive ones. Is there a sound logical relation for a calculus with equi-recursive types?

** Q2 : full abstraction of iso and equi calculi **
To the best of our knowledge there is no known fully abstract compilation result from a calculus with iso recursive types to one with equi recursive ones. Are we right? ( such a compiler would, intuitively, just erase fold/unfolds )

** Q3 : intuition behind equi-recursive types **
I have generally considered the difference between iso and equi recursive types to be one primarily of typechecking, the fold/unfold annotations being there to direct typechecking, and in fact fold/unfold are often somewhat implicit in the syntax of languages (as I recall from TAPL's chapter on recursive types).
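The extra step from the Background above can be seen in a toy small-step evaluator (my own sketch; the term encoding is invented for illustration): with iso-recursive types, unfold (fold v) is a redex that costs one step, while the corresponding equi-recursive term is already the value v.

```python
# Terms: ('val', v) | ('fold', t) | ('unfold', t)

def step(t):
    """One small-step reduction, or None if t is a value."""
    tag = t[0]
    if tag == 'val':
        return None
    if tag == 'fold':                      # fold v is itself a value
        inner = step(t[1])
        return None if inner is None else ('fold', inner)
    if tag == 'unfold':
        inner = step(t[1])
        if inner is not None:
            return ('unfold', inner)
        if t[1][0] == 'fold':
            return t[1][1]                 # unfold (fold v) --> v: one step
        raise ValueError('stuck')

def count_steps(t):
    n = 0
    while (t2 := step(t)) is not None:
        t, n = t2, n + 1
    return n

iso  = ('unfold', ('fold', ('val', 42)))   # iso-recursive: one reduction
equi = ('val', 42)                         # equi-recursive: already a value
print(count_steps(iso), count_steps(equi))  # 1 0
```

In a step-indexed proof, that single reduction is what "pays for" the \later; with the equi-recursive target there is nothing left to pay with.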
However, for logical relations it seems that the operational aspect (the extra reduction step) introduced by iso recursive types is a key distinguishing feature between iso and equi recursive types. Are there other topics besides the logical relations (apart from typechecking) where the difference in the operational semantics between iso and equi typed terms plays a role? thanks! marco --- Marco Patrignani http://theory.stanford.edu/~mp Stanford Computer Science Department, USA & CISPA, Saarland University, Germany From gabriel.scherer at gmail.com Thu Nov 15 06:32:16 2018 From: gabriel.scherer at gmail.com (Gabriel Scherer) Date: Thu, 15 Nov 2018 12:32:16 +0100 Subject: [TYPES] Equi recursive types: logical relations and intuition In-Reply-To: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> References: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> Message-ID: In the theories you have in mind, how do the fixpoint operators on terms and types deal with degenerate fixpoints like (fix x . x) and (mu alpha. alpha)? If your iso-recursive type theory allows the term (fix x. fold x) but your equi-recursive type theory forbids (fix x . x) or its type, you have a difference between the two systems. If they are allowed in both cases, do they reduce in the same way? On Thu, Nov 15, 2018 at 12:13 PM Marco Patrignani wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Dear types mailing list, > I have some questions related to equi recursive types. > > ** Background ** > > Together with my colleague Dominique Devriese (in cc) we were building a > compiler from > STLC extended with a fix operator (and without recursive types) > down to > STLC with isorecursive types > in order to prove it fully abstract using a step-indexed logical relation.
> This proof goes through but we noticed that it goes through exactly > because of the extra step that a term of the form > unfold fold v > consumes while reducing to > v > Now if we were to replace iso recursive types in the target with equi > recursive ones, that extra step would not be there and therefore the proof > would fail. > ( if you're familiar with these kinds of logical relations, there's a > \later that "matches" this reduction step and reduces the amount of steps > by 1 ) > > However, this seems a limitation that stems purely out of the proof > technique. > If we had something else to "decrease" rather than an operational step, > the proof would go through. > > > ** Q1 : logical relation for equi-recursive types ** > Looking at logical relations for languages with recursive types, they all > seem to have iso-recursive ones. > Is there a sound logical relation for a calculus with equi-recursive types? > > > ** Q2 : full abstraction of iso and equi calculi ** > To the best of our knowledge there is no known fully abstract compilation > result from a calculus with iso recursive types to one with equi recursive > ones. > Are we right? > ( such a compiler would, intuitively, just erase fold/unfolds ) > > > ** Q3 : intuition behind equi-recursive types ** > I have generally considered the difference between iso and equi recursive > types to be one primarily of typechecking, the fold/unfold annotations > being there to direct typechecking, and in fact fold/unfold are often > somewhat implicit in the syntax of languages (as I recall from TAPL's > chapter on recursive types). > > However, for logical relations it seems that the operational aspect (the > extra reduction step) introduced by iso recursive types is a key > distinguishing feature between iso and equi recursive types. > Are there other topics besides the logical relations (apart from > typechecking) where the difference in the operational semantics between iso > and equi typed terms plays a role?
> > thanks! > > marco > --- > Marco Patrignani > http://theory.stanford.edu/~mp > Stanford Computer Science Department, USA > & CISPA, Saarland University, Germany > From G.A.McCusker at bath.ac.uk Thu Nov 15 06:54:49 2018 From: G.A.McCusker at bath.ac.uk (Guy McCusker) Date: Thu, 15 Nov 2018 11:54:49 +0000 Subject: [TYPES] Equi recursive types: logical relations and intuition In-Reply-To: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> References: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> Message-ID: You may find some relevant material in M. Abadi and M. P. Fiore, "Syntactic considerations on recursive types," Proceedings 11th Annual IEEE Symposium on Logic in Computer Science, New Brunswick, NJ, USA, 1996, pp. 242-252. doi: 10.1109/LICS.1996.561324 URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=561324&isnumber=12231 I don't recall everything that's in there, but certainly Abadi and Fiore consider explicitly the passage to and from equi- and iso-recursive formulations of an extended lambda-calculus. > On 14 Nov 2018, at 22:38, Marco Patrignani wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear types mailing list, > I have some questions related to equi recursive types. > > ** Background ** > > Together with my colleague Dominique Devriese (in cc) we were building a compiler from > STLC extended with a fix operator (and without recursive types) > down to > STLC with isorecursive types > in order to prove it fully abstract using a step-indexed logical relation. > This proof goes through but we noticed that it goes through exactly because of the extra step that a term of the form > unfold fold v > consumes while reducing to > v > Now if we were to replace iso recursive types in the target with equi recursive ones, that extra step would not be there and therefore the proof would fail. > ( if you're familiar with these kinds of logical relations, there's a \later that "matches"
this reduction step and reduces the amount of steps by 1 ) > > However, this seems a limitation that stems purely out of the proof technique. > If we had something else to "decrease" rather than an operational step, the proof would go through. > > > ** Q1 : logical relation for equi-recursive types ** > Looking at logical relations for languages with recursive types, they all seem to have iso-recursive ones. > Is there a sound logical relation for a calculus with equi-recursive types? > > > ** Q2 : full abstraction of iso and equi calculi ** > To the best of our knowledge there is no known fully abstract compilation result from a calculus with iso recursive types to one with equi recursive ones. > Are we right? > ( such a compiler would, intuitively, just erase fold/unfolds ) > > > ** Q3 : intuition behind equi-recursive types ** > I have generally considered the difference between iso and equi recursive types to be one primarily of typechecking, the fold/unfold annotations being there to direct typechecking, and in fact fold/unfold are often somewhat implicit in the syntax of languages (as I recall from TAPL's chapter on recursive types). > > However, for logical relations it seems that the operational aspect (the extra reduction step) introduced by iso recursive types is a key distinguishing feature between iso and equi recursive types. > Are there other topics besides the logical relations (apart from typechecking) where the difference in the operational semantics between iso and equi typed terms plays a role?
> > marco > --- > Marco Patrignani > http://theory.stanford.edu/~mp > Stanford Computer Science Department, USA > & CISPA, Saarland University, Germany > From p.giarrusso at gmail.com Thu Nov 15 08:01:45 2018 From: p.giarrusso at gmail.com (Paolo Giarrusso) Date: Thu, 15 Nov 2018 14:01:45 +0100 Subject: [TYPES] Equi recursive types: logical relations and intuition In-Reply-To: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> References: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> Message-ID: On Thu, 15 Nov 2018 at 12:13, Marco Patrignani wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear types mailing list, > I have some questions related to equi recursive types. > However, this seems a limitation that stems purely out of the proof technique. > If we had something else to "decrease" rather than an operational step, the proof would go through. > ** Q1 : logical relation for equi-recursive types ** > Looking at logical relations for languages with recursive types, they all seem to have iso-recursive ones. > Is there a sound logical relation for a calculus with equi-recursive types? Yes, equirecursive types are used by both Appel and McAllester [2001, An Indexed Model of Recursive Types for Foundational Proof-Carrying Code] and Ahmed's PhD thesis (Sec. 4.2.1); they also appear in Appel et al. [2007] though in a different context. Beware that Appel and McAllester's binary logical relation has an issue, fixed by Ahmed [2006, Step-Indexed Syntactic Logical Relations for Recursive and Quantified Types]. Contrast that model with the one by Dreyer, Ahmed and Birkedal [2011, Logical Step-indexed Logical Relations]: in that model only fold-unfold reductions are counted as steps in the expression relation, because that's the only reduction that produces a result that is only valid *later*. Instead, with equirecursive types, it's easier to give meaning to mu X.
F X if F is *contractive* (that is, only depends on X *later*; the name is 'well-founded' in earlier papers), so models with equirecursive types make all their type constructors contractive; as a result, all reduction steps must be counted. ================ > However, for logical relations it seems that the operational aspect (the extra reduction step) introduced by iso recursive types is a key distinguishing feature between iso and equi recursive types. The importance of step counts, here and elsewhere, is one of the few downsides of step-indexing. This is discussed by Svendsen et al. [2016, Transfinite Step-Indexing: Decoupling Concrete and Logical Steps] and earlier by Benton and Hur [2010, Step-Indexing: The Good, the Bad and the Ugly, http://drops.dagstuhl.de/opus/volltexte/2010/2808/]. There is often an (inelegant but effective) fix, which I haven't seen in the literature but appears to be folklore among specialists; I describe this to emphasize that the problem is artificial. One can add a "nop" instruction to terms, `t ::= nop t | ...`, and emit it when translating terms, just to consume one more step; that would play the role of the extra steps. One can then "postprocess" terms to remove the nop instructions, and show this preserves equivalence through a simulation lemma. It is more elegant to hide such steps into the elimination forms of some type, but such hiding is a purely syntactic operation. Cheers, -- Paolo G. Giarrusso From mp at cs.stanford.edu Mon Nov 19 19:11:51 2018 From: mp at cs.stanford.edu (Marco Patrignani) Date: Mon, 19 Nov 2018 16:11:51 -0800 Subject: [TYPES] Equi recursive types: logical relations and intuition In-Reply-To: References: <951F9AE5-381C-4234-9444-3A5E6E334D61@cs.stanford.edu> Message-ID: Dear types mailing list, thanks for the many responses and for the suggestions :) @arthur: what you suggest is (more or less) indeed what we're looking into!
cheers, marco > On 15 Nov 2018, at 12:11, Arthur Azevedo de Amorim wrote: > > > > On Thu, 15 Nov 2018 at 08:29, Paolo Giarrusso wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > On Thu, 15 Nov 2018 at 12:13, Marco Patrignani wrote: > > > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Dear types mailing list, > > I have some questions related to equi recursive types. > > > However, this seems a limitation that stems purely out of the proof technique. > > If we had something else to "decrease" rather than an operational step, the proof would go through. > > > ** Q1 : logical relation for equi-recursive types ** > > Looking at logical relations for languages with recursive types, they all seem to have iso-recursive ones. > > Is there a sound logical relation for a calculus with equi-recursive types? > > Yes, equirecursive types are used by both Appel and McAllester [2001, > An Indexed Model of Recursive Types for Foundational Proof-Carrying > Code] and Ahmed's PhD thesis (Sec. 4.2.1); they also appear in Appel > et al. [2007] tho in a different context. > > Beware Appel and McAllester's binary logical relation has an issue, > fixed by Ahmed [2006, Step-Indexed Syntactic Logical Relations for > Recursive and Quantified Types]. > > Contrast that model with the one by Dreyer, Ahmed and Birkedal [2011, > Logical Step-indexed Logical Relations]: in that model only > fold-unfold reductions are counted as steps in the expression > relation, because that's the only reduction that produces a result > that is only valid *later*. > > Instead, with equirecursive types, it's easier to give meaning to mu > X. F X if F is *contractive* (that is, only depends on X *later*; the > name is 'well-founded' in earlier papers), so models with > equirecursive types make all their type constructors contractive; as a > result, all reduction steps must be counted.
> > ================ > > > However, for logical relations it seems that the operational aspect (the extra reduction step) introduced by iso recursive types is a key distinguishing feature between iso and equi recursive types. > > The importance of step counts, here and elsewhere, it's one of the few > downsides of step-indexing. This is discussed by Svendsen et al. > [2016, Transfinite Step-Indexing: Decoupling Concrete and Logical > Steps] and earlier by Benton and Hur [2010, Step-Indexing: The Good, > the Bad and the Ugly, > http://drops.dagstuhl.de/opus/volltexte/2010/2808/]. > > There is often an (inelegant but effective) fix, which I haven't seen > in the literature but appears to be folklore among specialists; I > describe this to emphasize that the problem is artificial. One can add > a "nop" instruction to terms, `t ::= nop t | ...`, and emit it when > translating terms, just to consume one more step; that would play the > role of the extra steps. One can then "postprocess" terms to remove > the nop instructions, and show this preserves equivalence through a > simulation lemma. > It is more elegant to hide such steps into the elimination forms of > some type, but such hiding is a purely syntactic operation. > > > In this vein, you could probably add another translation from iso- to equi-recursive types and show that the translation and the original term simulate each other. You could use the iso-recursive types to define the logical relations, and transport the equivalences proved via this logical relation to the equi-recursive final target. > > Cheers, > -- > Paolo G. 
Giarrusso --- Marco Patrignani http://theory.stanford.edu/~mp Stanford Computer Science Department, USA & CISPA, Saarland University, Germany From wadler at inf.ed.ac.uk Sat Dec 1 19:22:58 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Sat, 1 Dec 2018 21:22:58 -0300 Subject: [TYPES] Programming Language Foundations in Agda Message-ID: Wen Kokke and I are pleased to announce the availability of the textbook: Programming Language Foundations in Agda plfa.inf.ed.ac.uk github.com/plfa/plfa.github.io/ It is written as a literate script in Agda, and available at the above URLs. The book has so far been used for teaching at the Universities of Edinburgh and Vermont, and at Google Seattle. Please send your comments and pull requests! The book was presented in a paper (of the same title) at the XXI Brazilian Symposium on Formal Methods, 28--30 Nov 2018, and is available here: http://homepages.inf.ed.ac.uk/wadler/topics/agda.html#sbmf The paper won the SBMF 2018 Best Paper Award, 1st Place. Yours, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, . /\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/ From wadler at inf.ed.ac.uk Mon Dec 3 07:42:56 2018 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Mon, 3 Dec 2018 10:42:56 -0200 Subject: [TYPES] Interaction between type classes and modules Message-ID: I am trying to track down a paper I once read about type classes and modules. The key result I recall is that if an instance of a type class is declared in a module then, to preserve coherence, signatures exported by the module should not mention the type class. Can someone please point me to the paper? Cheers, -- P . \ Philip Wadler, Professor of Theoretical Computer Science, .
/\ School of Informatics, University of Edinburgh . / \ and Senior Research Fellow, IOHK . http://homepages.inf.ed.ac.uk/wadler/ Too brief? Here's why: http://www.emailcharter.org/ From steven at steshaw.org Mon Dec 3 21:15:27 2018 From: steven at steshaw.org (Steven Shaw) Date: Tue, 4 Dec 2018 12:15:27 +1000 Subject: [TYPES] Interaction between type classes and modules In-Reply-To: References: Message-ID: Hi Philip, That might be *Modular Type Classes* by Derek Dreyer, Robert Harper, and Manuel M.T. Chakravarty. https://people.mpi-sws.org/~dreyer/papers/mtc/main-short.pdf https://people.mpi-sws.org/~dreyer/talks/popl07.ppt https://people.mpi-sws.org/~dreyer/papers/mtc/main-long.pdf Cheers, Steve. [Re-sending as apparently I wasn't subscribed to the list] On Mon, 3 Dec 2018 at 23:39, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > I am trying to track down a paper I once read about type classes and > modules. The key result I recall is that if an instance of a type class is > declared in a module then, to preserve coherence, signatures exported by > the module should not mention the type class. Can someone please point me > to the paper? Cheers, -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science, > . /\ School of Informatics, University of Edinburgh > . / \ and Senior Research Fellow, IOHK > . http://homepages.inf.ed.ac.uk/wadler/ > > Too brief? Here's why: http://www.emailcharter.org/ > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336.
> From dreyer at mpi-sws.org Tue Dec 4 05:24:43 2018 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Tue, 4 Dec 2018 11:24:43 +0100 Subject: [TYPES] Interaction between type classes and modules In-Reply-To: References: Message-ID: Thanks for the plug, Steven, but I don't think my paper is the one Phil is referring to. We did point out an issue with coherence and scoped instances (see §3.1), but the solution did not involve signatures not mentioning a type class. In fact, that wouldn't even make sense in the MTC system since type classes were merely a specific form of ML signatures (i.e., they were treated structurally, not nominally) and so it would not even make sense to talk about "mentioning" them. Cheers, Derek On Tue, Dec 4, 2018 at 7:54 AM Steven Shaw wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi Philip, > > That might be *Modular Type Classes* by Derek Dreyer, Robert Harper, and > Manuel M.T. Chakravarty. > > https://people.mpi-sws.org/~dreyer/papers/mtc/main-short.pdf > https://people.mpi-sws.org/~dreyer/talks/popl07.ppt > https://people.mpi-sws.org/~dreyer/papers/mtc/main-long.pdf > > Cheers, > Steve. > > [Re-sending as apparently I wasn't subscribed to the list] > > On Mon, 3 Dec 2018 at 23:39, Philip Wadler wrote: > > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > > ] > > > > I am trying to track down a paper I once read about type classes and > > modules. The key result I recall is that if an instance of a type class is > > declared in a module then, to preserve coherence, signatures exported by > > the module should not mention the type class. Can someone please point me > > to the paper? Cheers, -- P > > > > . \ Philip Wadler, Professor of Theoretical Computer Science, > > . /\ School of Informatics, University of Edinburgh > > . / \ and Senior Research Fellow, IOHK > > . http://homepages.inf.ed.ac.uk/wadler/ > > > > Too brief?
Here's why: http://www.emailcharter.org/ > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > From aaronngray.lists at gmail.com Tue Dec 4 22:43:47 2018 From: aaronngray.lists at gmail.com (Aaron Gray) Date: Wed, 5 Dec 2018 03:43:47 +0000 Subject: [TYPES] type inference for mutually recursive algebraic types with subtyping Message-ID: I am looking for papers on type inference for mutually recursive algebraic types with subtyping. Many thanks in advance, Aaron -- Aaron Gray Independent Open Source Software Engineer, Computer Language Researcher, Information Theorist, and amateur computer scientist. From Sam.Lindley at ed.ac.uk Wed Dec 5 19:40:48 2018 From: Sam.Lindley at ed.ac.uk (Sam Lindley) Date: Thu, 6 Dec 2018 13:40:48 +1300 Subject: [TYPES] type inference for mutually recursive algebraic types with subtyping In-Reply-To: References: Message-ID: Anyone interested in type inference in the presence of subtyping should read Stephen Dolan's PhD dissertation on algebraic subtyping: https://www.cl.cam.ac.uk/~sd601/thesis.pdf The main ideas are also covered in his POPL 2017 paper with Alan Mycroft: https://www.cl.cam.ac.uk/~sd601/papers/mlsub-preprint.pdf Sam On 05/12/2018 16:43, Aaron Gray wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > I am looking for papers on type inference for mutually recursive algebraic > types with subtyping. > > Many thanks in advance, > > Aaron > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
From rodolphe.lepigre at inria.fr Thu Dec 6 07:51:07 2018 From: rodolphe.lepigre at inria.fr (Rodolphe Lepigre) Date: Thu, 6 Dec 2018 13:51:07 +0100 Subject: [TYPES] type inference for mutually recursive algebraic types with subtyping In-Reply-To: References: Message-ID: <20181206125107.GI31663@HPArchRod.localdomain> It is also probably worth checking out the work I did on subtyping with Christophe Raffalli. It has been around for a while, but only recently accepted for publication in the TOPLAS journal. The system is an extension of (Curry-style) System F with: - subtyping (obviously), - existential types, - sums and products, - inductive and coinductive types (with ordinal sizes), - general recursion with termination checking. On the technical side, our approach mixes: - a new notion of "local subtyping" (with judgments "t : A ⊆ B"), - choice operators similar to Hilbert's epsilon for a simple semantics, - cyclic proofs (shown well-founded with the size-change principle). Here are a couple of links: - https://lepigre.fr/files/publications/LepRaf2018a.pdf (paper), - https://rlepigre.github.io/subml/ (online interpreter), - https://github.com/rlepigre/subml (source code). Cheers, Rodolphe. -- Rodolphe Lepigre Inria (Deducteam), LSV, CNRS, Université Paris-Saclay, FRANCE https://lepigre.fr On 05/12/18 03:43, Aaron Gray wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > I am looking for papers on type inference for mutually recursive algebraic > types with subtyping. > > Many thanks in advance, > > Aaron > -- > Aaron Gray > > Independent Open Source Software Engineer, Computer Language Researcher, > Information Theorist, and amateur computer scientist. From xnningxie at gmail.com Mon Dec 10 06:57:39 2018 From: xnningxie at gmail.com (Ningning Xie) Date: Mon, 10 Dec 2018 19:57:39 +0800 Subject: [TYPES] Translation of bounded quantifications into intersection types Message-ID: We have been working on an intersection type system.
In the dissertation of Benjamin Pierce (https://www.cis.upenn.edu/~bcpierce/papers/thesis.pdf), Sections 3.5.1 and 7.9 mention a rough idea of translating bounded quantifications (System F-sub) into intersection types. Wondering if there is any related work/formalization in this direction? Thanks in advance for pointers! Best regards, Ningning From xnningxie at gmail.com Mon Dec 10 22:50:42 2018 From: xnningxie at gmail.com (Ningning Xie) Date: Tue, 11 Dec 2018 11:50:42 +0800 Subject: [TYPES] Translation of bounded quantifications into intersection types In-Reply-To: References: Message-ID: To give more context, I am referring to the encoding: ∀ α ≤ τ1. τ2 ↦ ∀ α. τ2 [α ↦ τ1 ∧ α]. (where in the rhs α is substituted by the intersection type (τ1 ∧ α)). For example, we could have ∀ α ≤ Nat. α → Bool ↦ ∀ α. (Nat ∧ α) → Bool This encoding is known to break the subtyping relation for polymorphic types: (∀ α ≤ Real. Bool → α) ≤ (∀ α ≤ Int. Bool → α) which translates to a non-derivable statement: (∀ α. Bool → (α ∧ Real)) ≤ (∀ α. Bool → (α ∧ Int)) Thanks in advance for any pointers.
> > Best regards, > Ningning > From stephen.dolan at cl.cam.ac.uk Tue Dec 11 09:18:40 2018 From: stephen.dolan at cl.cam.ac.uk (Stephen Dolan) Date: Tue, 11 Dec 2018 14:18:40 +0000 Subject: [TYPES] Translation of bounded quantifications into intersection types In-Reply-To: References: Message-ID: On Tue, 11 Dec 2018 at 08:30, Ningning Xie wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > To give more context, I am referring to the encoding: > > ∀ α ≤ τ1. τ2 ↦ ∀ α. τ2 [α ↦ τ1 ∧ α]. > > (where in the rhs α is substituted by the intersection type (τ1 ∧ α)). > > For example, we could have > > ∀ α ≤ Nat. α → Bool ↦ ∀ α. (Nat ∧ α) → Bool > This encoding is used in my thesis on type inference with subtyping, available at: https://www.cl.cam.ac.uk/~sd601/thesis.pdf (Technically, my ∧ is not a standard intersection type but just a meet in the lattice of types, but this encoding is the same) > This encoding is known to break the subtyping relation for polymorphic types: > > (∀ α ≤ Real. Bool → α) ≤ (∀ α ≤ Int. Bool → α) > > which translates to a non-derivable statement: > > (∀ α. Bool → (α ∧ Real)) ≤ (∀ α. Bool → (α ∧ Int)) > This is only the case if your definition of subtyping between two polymorphic types requires one to be a subtype of the other for any given α. (This is indeed the definition used by Pierce and used in F≤.) However, this is not the definition of subtyping between polymorphic types used in e.g. ML (the relation between type schemes of one being "at least as polymorphic as the other"), because in ML these type schemes are equivalent: ∀α ∀β. α → β → β ≅ ∀β ∀α. α → β → β For given α, β, these yield different types and are not related by the F≤ subtyping relation. Yet they have the same set of instances: all types of the form τ1 → τ2 → τ2 (and their supertypes). This suggests a different definition of subtyping between polymorphic types: one polymorphic type is a subtype of another if the first has more instances.
Equivalently, one polymorphic type is a subtype of another if for every instance of the second, there is an instance of the first which is a subtype. I first came across this idea in François Pottier's thesis, available at https://hal.inria.fr/inria-00073205, where he calls it "scheme subsumption". He cites a paper, "Subtyping Constrained Types" (Valery Trifonov and Scott Smith) as the origin of the idea. Under this definition of subtyping, the statement above does actually hold: (∀ α. Bool → (α ∧ Real)) ≤ (∀ α. Bool → (α ∧ Int)) because for any α, we can choose α = α ∧ Int to make the above true. In fact, we can go further than the encoding you mentioned. It turns out that in general it is only necessary to replace the negative occurrences of a type variable with meets / intersections, making the following types equivalent: ∀ α ≤ τ. α → α ≅ ∀ α. (τ ∧ α) → α This encoding is used in my thesis as one of the main ingredients of the "biunification" algorithm, and Sections 5.2, 5.3 have more detail about it. Regards, Stephen From aaronngray.lists at gmail.com Tue Dec 11 17:54:46 2018 From: aaronngray.lists at gmail.com (Aaron Gray) Date: Tue, 11 Dec 2018 22:54:46 +0000 Subject: [TYPES] type inference for mutually recursive algebraic types with subtyping In-Reply-To: References: Message-ID: On Thu, 6 Dec 2018 at 10:13, Sam Lindley wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Anyone interested in type inference in the presence of subtyping should > read Stephen Dolan's PhD dissertation on algebraic subtyping: > > https://www.cl.cam.ac.uk/~sd601/thesis.pdf > > The main ideas are also covered in his POPL 2017 paper with Alan Mycroft: > > https://www.cl.cam.ac.uk/~sd601/papers/mlsub-preprint.pdf > Thank you, I have looked at Stephen Dolan's dissertation, but was not very keen on the technique of mapping onto automata though.
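The bounded-quantification-to-intersection encoding discussed in this thread can be illustrated, loosely, in TypeScript, which has both generic bounds (`extends`) and intersection types (`&`). This is only an analogy (TypeScript's subtyping is not F≤, and the subtyping breakage discussed above is not captured); the function names are invented for this sketch.

```typescript
// Bounded form, roughly ∀ α ≤ Nat. α → Bool: the bound is a constraint
// attached to the quantifier.
function boundedIsPos<A extends number>(x: A): boolean {
  return x > 0;
}

// Encoded form, roughly ∀ α. (Nat ∧ α) → Bool: the quantifier is now
// unconstrained, and the bound reappears as an intersection on the
// argument type.
function encodedIsPos<A>(x: number & A): boolean {
  return x > 0;
}
```

Both versions accept the same well-typed calls, e.g. `boundedIsPos(3)` and `encodedIsPos(3)`, which is the intuition behind the encoding.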
There is an implementation on GitHub: https://github.com/stedolan/mlsub Many thanks, Aaron From aaronngray.lists at gmail.com Tue Dec 11 17:54:52 2018 From: aaronngray.lists at gmail.com (Aaron Gray) Date: Tue, 11 Dec 2018 22:54:52 +0000 Subject: [TYPES] type inference for mutually recursive algebraic types with subtyping In-Reply-To: <20181206125107.GI31663@HPArchRod.localdomain> References: <20181206125107.GI31663@HPArchRod.localdomain> Message-ID: On Thu, 6 Dec 2018 at 12:51, Rodolphe Lepigre wrote: > It is also probably worth checking out the work I did on subtyping with > Christophe Raffalli. It has been around for a while, but only recently > accepted for publication in the TOPLAS journal. > > The system is an extension of (Curry-style) System F with: > - subtyping (obviously), > - existential types, > - sums and products, > - inductive and coinductive types (with ordinal sizes), > - general recursion with termination checking. > > On the technical side, our approach mixes: > - a new notion of "local subtyping" (with judgments "t : A ⊆ B"), > - choice operators similar to Hilbert's epsilon for a simple semantics, > - cyclic proofs (shown well-founded with the size-change principle). > > Here are a couple of links: > - https://lepigre.fr/files/publications/LepRaf2018a.pdf (paper), > - https://rlepigre.github.io/subml/ (online interpreter), > - https://github.com/rlepigre/subml (source code). > Rodolphe, This is great: exactly what I was looking for, plus more, covering coinduction in a very clever way. It's great to have an implementation as well. Many thanks, I think I will use this. Regards, Aaron On 05/12/18 03:43, Aaron Gray wrote: > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > I am looking for papers on type inference for mutually recursive > algebraic > > types with subtyping.
> > > > Many thanks in advance, > > > > Aaron > > -- > > Aaron Gray > > > > Independent Open Source Software Engineer, Computer Language Researcher, > > Information Theorist, and amateur computer scientist. > -- Aaron Gray Independent Open Source Software Engineer, Computer Language Researcher, Information Theorist, and amateur computer scientist. From gc at irif.fr Wed Dec 12 07:22:49 2018 From: gc at irif.fr (Giuseppe Castagna) Date: Wed, 12 Dec 2018 13:22:49 +0100 Subject: [TYPES] type inference for mutually recursive algebraic types with subtyping In-Reply-To: References: Message-ID: <1f1bfa0d-d898-a2ba-fc15-0dad70194ce2@irif.fr> The work on extensible sum types (e.g. like the polymorphic variants of OCaml) could be interesting for you if you don't know about it already. A notable work on this is "A polymorphic record calculus and its compilation" by Ohori (TOPLAS 1995), though it doesn't discuss recursive types. A more recent one is "Extensible programming with first-class cases" by Blume, Acar, and Chae (ICFP 2006). These works, however, do not have true subtyping but use unification with a notion of kinding for type variables (Ohori) or row variables (Blume et al.). In our work at ICFP 2016 ("Set-theoretic types for polymorphic variants" by Castagna, Petrucciani, and Nguyen) we have described type inference for a type system with polymorphic variant types, union types, and recursive types, with subtyping based on the "semantic subtyping" approach. A revised version of that type inference system will be described in Tommaso Petrucciani's forthcoming PhD thesis (it is currently in the hands of the reviewers, but I can send you a copy via PM if you wish). This is a major revision of our ICFP paper. In particular the proof of completeness there was not correct, and the thesis addresses this issue (by using techniques introduced in Dolan's thesis already cited in previous messages) and provides a more robust solution.
Cheers Beppe From gc at irif.fr Wed Dec 12 16:43:06 2018 From: gc at irif.fr (Giuseppe Castagna) Date: Wed, 12 Dec 2018 22:43:06 +0100 Subject: [TYPES] Translation of bounded quantifications into intersection types In-Reply-To: References: Message-ID: <51157dec-4502-ebe1-3e0a-aec97c0e29ca@irif.fr> On 12/11/18 3:18 PM, Stephen Dolan wrote: > This suggests a different > definition of subtyping between polymorphic types: one polymorphic type is > a subtype of another if the first has more instances. Equivalently, one > polymorphic type is a subtype of another if for every instance of the > second, there is an instance of the first which is a subtype. If you are interested in prenex polymorphism then let me point out the paper "Set-theoretic Foundation of Parametric Polymorphism and Subtyping" in ICFP '11 (by Castagna & Xu) for a type system with union, intersection, and negation types. There the subtyping relation is defined semantically: τ1 ≤ τ2 iff for all possible interpretations of the type variables the first type is interpreted in a subset of the second type. This in particular implies that for every type substitution σ, τ1 ≤ τ2 implies τ1σ ≤ τ2σ In this system not only do you have Bool → (α ∧ Real) ≤ Bool → (α ∧ Int) but since you have "true" unions and intersections you can encode more general bounded polymorphism of the form ∀ τ1 ≤ α ≤ τ2. τ as ∀ α. τ' where τ' is obtained from τ by replacing every occurrence of α by ((τ1 ∨ α) ∧ τ2) The system also shows new applications of bounded polymorphism when combined with negation types. For instance if you consider the function fun x = if (x ∈ Int) then "ok" else x then you want to give it the type ∀ α. (Int → String) ∧ (α\Int → α\Int) that is, the type of a function that, when applied to an integer, returns a string and, when applied to an argument of any type that is not an integer, returns a result of the same type (here \ is the set-theoretic difference, i.e. τ1\τ2 = τ1 ∧ ¬τ2).
We can express this type in bounded polymorphism as ∀ α ≤ ¬Int. (Int → String) ∧ (α → α) Notice that this type is not weird. It is exactly the type of the "balance" function as defined by Okasaki for red-black trees in his book (you can find the annotated code in section 3.3 of our POPL 15 paper "Polymorphic Functions with Set-Theoretic Types: Part 2") Finally, if you want to play with this kind of types, you can use the development version of CDuce. git clone https://gitlab.math.univ-paris-diderot.fr/cduce/cduce.git use the cduce-next branch. In particular in the toplevel you can use two debug directives: > debug subtype τ1 τ2 that checks whether two types are in the subtyping relation and > debug tallying [ ] [τ1 , τ2 ; .... ; τ1' , τ2'] that returns a set of solutions (type substitutions) for the set of constraints {τ1≤τ2 ; .... ; τ1'≤τ2'} (see #help debug for the syntax of debug directives and http://www.cduce.org/manual_types_patterns.html#syntax for the syntax of types) From stephen.dolan at cl.cam.ac.uk Mon Dec 17 14:47:02 2018 From: stephen.dolan at cl.cam.ac.uk (Stephen Dolan) Date: Mon, 17 Dec 2018 19:47:02 +0000 Subject: [TYPES] Translation of bounded quantifications into intersection types In-Reply-To: <51157dec-4502-ebe1-3e0a-aec97c0e29ca@irif.fr> References: <51157dec-4502-ebe1-3e0a-aec97c0e29ca@irif.fr> Message-ID: On Thu, 13 Dec 2018 at 06:33, Giuseppe Castagna wrote: > > If you are interested in prenex polymorphism then let me point out the > paper > "Set-theoretic Foundation of Parametric Polymorphism and Subtyping" in > ICFP '11 > (by Castagna & Xu) for a type system with union, intersection, and > negation types. > There the subtyping relation is defined semantically: τ1 ≤ τ2 iff for all > possible interpretations of the type variables the first type is > interpreted in > a subset of the second type. This in particular implies that for every > type > substitution σ, τ1 ≤ τ2 implies τ1σ ≤ τ2σ > > In this system not only you have > > Bool → (α ∧
Real) ≤ Bool → (α ∧ Int) > > but since you have "true" unions and intersections you can encode more > general > bounded polymorphism of the form > > ∀ τ1 ≤ α ≤ τ2. τ > > as > > ∀ α. τ' > > where τ' is obtained from τ by replacing every occurrence of α by ((τ1 ∨ α) > ∧ τ2) > I don't think it's necessary to have "true" unions and intersections here - it's enough that types form a lattice. Incidentally, under the definition of subtyping I described above, a simpler encoding is possible: replacing positive occurrences of α with (τ1 ∨ α) and negative occurrences of α with (τ2 ∧ α). > The system also shows new applications of bounded polymorphism when > combined > with negation types. For instance if you consider the function > > fun x = if (x ∈ Int) then "ok" else x > > then you want to give it the type > > ∀ α. (Int → String) ∧ (α\Int → α\Int) > > that is, the type of a function that when applied to an integer it returns > a > string and when applied to an argument of any type that is not an integer > returns a result of the same type (here \ is the set-theoretic difference, > i.e. > τ1\τ2 = τ1 ∧ ¬τ2). We can express this type in bounded polymorphism as > > ∀ α ≤ ¬Int. (Int → String) ∧ (α → α) > > Notice that this type is not weird. It is exactly the type of the > "balance" > function as defined by Okasaki for red-black trees in his book (you can > find the > annotated code in section 3.3 of our POPL 15 paper "Polymorphic Functions > with > Set-Theoretic Types: Part 2") > > > Finally, if you want to play with this kind of types, you can use the > development version of CDuce. > > git clone https://gitlab.math.univ-paris-diderot.fr/cduce/cduce.git > > use the cduce-next branch. In particular in the toplevel you can use two > debug > directives: > > > debug subtype τ1 τ2 > > that checks whether two types are in the subtyping relation and > > > debug tallying [ ] [τ1 , τ2 ; .... ; τ1'
, τ2'] > > that returns a set of solutions (type substitutions) for the set of > constraints > {τ1≤τ2 ; .... ; τ1'≤τ2'} > (see #help debug for the syntax of debug directives and > http://www.cduce.org/manual_types_patterns.html#syntax for the syntax of > types) > Thanks for the reference! Stephen
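The `fun x = if (x ∈ Int) then "ok" else x` example from Giuseppe's message has a rough analogue in TypeScript, whose utility type `Exclude<A, number>` plays a role similar to the set-theoretic difference α\Int. The overload shape below is an invented illustration, not CDuce's semantics.

```typescript
// Loose TypeScript analogy of ∀ α. (Int → String) ∧ (α\Int → α\Int):
// applied to a number the function returns a string; applied to anything
// that is not a number it returns its argument, at a number-free type.
function f(x: number): string;
function f<A>(x: Exclude<A, number>): Exclude<A, number>;
function f(x: unknown): unknown {
  // The runtime type test drives the two cases of the intersection type.
  return typeof x === "number" ? "ok" : x;
}
```

For example, `f(3)` is typed as `string`, while `f(true)` keeps the (boolean) type of its argument; the intersection of arrow types shows up as an overload set.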