From tringer at cs.washington.edu Sun Jan 24 15:06:53 2021
From: tringer at cs.washington.edu (Talia Ringer)
Date: Sun, 24 Jan 2021 12:06:53 -0800
Subject: [TYPES] Type systems for cryptographic proofs
Message-ID:

Hi all,

I'm curious what work there is about type systems that encode cryptographic proof systems, like zero knowledge proofs and witness indistinguishable proofs. These proof systems have well-defined soundness and completeness criteria. The criteria are probabilistic, but I do not think that should be an issue given the work on probabilistic PL in recent years. If there are any papers on this topic, I would super appreciate some pointers.

Thanks,

Talia

From gabriel.scherer at gmail.com Mon Jan 25 03:23:07 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Mon, 25 Jan 2021 09:23:07 +0100
Subject: [TYPES] Type systems for cryptographic proofs
In-Reply-To:
References:
Message-ID:

One relevant reference would be Noam Zeilberger's exploratory work on "a type-theoretic understanding of zero-knowledge", presented at the HOPE workshop (Higher-order Programming with Effects) 2012:
https://software.imdea.org/~noam.zeilberger/talks/hope2012.svg
(As far as I am aware there is no article/detailed version of this work.)

From me at madsbuch.com Mon Jan 25 04:20:01 2021
From: me at madsbuch.com (Mads Buch)
Date: Mon, 25 Jan 2021 09:20:01 +0000
Subject: [TYPES] Type systems for cryptographic proofs
In-Reply-To:
References:
Message-ID:

Hi Talia

A quick reference to mention is EasyCrypt: https://www.easycrypt.info/
I used it in my master's thesis for differential privacy, but it is geared towards cryptographic proofs using probabilistic reasoning techniques.

Kind regards
Mads Buch

From jelle at defekt.nl Tue Jan 26 10:04:32 2021
From: jelle at defekt.nl (Jelle Herold)
Date: Tue, 26 Jan 2021 16:04:32 +0100
Subject: [TYPES] Type systems for cryptographic proofs
In-Reply-To:
References:
Message-ID:

Underappreciated topic!

There is Anton Golov's master's thesis, where game-theoretic proofs are implemented in Agda:
http://dspace.library.uu.nl/handle/1874/367810

And, more sketchy and categorical, but it could be adapted to type theory: Izaak Meckler wrote down some ideas on categorically viewing crypto, which could be interesting.
https://math.berkeley.edu/~izaak/research/documents/Categorical-Cryptography.html

We have been thinking about doing cryptography using surface diagrams. I believe a discussion between Fabrizio Genovese and Amar Hadzihasanovic can be found here:
https://categorytheory.zulipchat.com/login/#narrow/stream/229156-applied-category.20theory/topic/cryptography
(sorry, cannot log in right now to check)

Best,
Jelle.

From nestmann at gmail.com Mon Feb 8 13:56:18 2021
From: nestmann at gmail.com (Uwe Nestmann)
Date: Mon, 8 Feb 2021 19:56:18 +0100
Subject: [TYPES] typability in Curry-style System F
Message-ID:

Dear Types,

while we have many possible types for \omega in Curry-style System F (or \lambda 2, as by Barendregt): is there any publication that contains a reasonably formal "direct" argument/proof why \Omega = \omega\omega is _not_ Curry-typable in System F? I mean "direct" in the sense of not indirectly arguing with the strong normalization theorem.

The closest I could find is in the book "Type Theory and Formal Proof" by Nederpelt and Geuvers, which just says that it would be "far from obvious" (p 81) ...

== Uwe ==

From urzy at mimuw.edu.pl Tue Feb 9 05:38:18 2021
From: urzy at mimuw.edu.pl (Paweł Urzyczyn)
Date: Tue, 9 Feb 2021 11:38:18 +0100
Subject: [TYPES] typability in Curry-style System F
In-Reply-To:
References:
Message-ID:

On 08.02.2021 at 19:56, Uwe Nestmann wrote:
> is there any publication that contains a reasonably formal "direct" argument/proof why \Omega = \omega\omega is _not_ Curry-typable in System F?

Dear Uwe,

this is Exercise 11.16 in Sorensen-Urzyczyn: Lectures on the Curry-Howard Isomorphism.

Best regards,
Paweł Urzyczyn
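For background, here is one of the many Curry-style typings of \omega, and the shape of what a direct untypability argument for \Omega must rule out (a sketch in LaTeX notation; this is context, not the exercise's solution):

    \omega \;=\; \lambda x.\, x\,x
    \qquad
    \vdash \omega \;:\; (\forall \alpha.\, \alpha \to \alpha) \to (\forall \alpha.\, \alpha \to \alpha)

    % Derivation: from x : \forall \alpha.\, \alpha \to \alpha, instantiate
    % \alpha at \forall \alpha.\, \alpha \to \alpha, obtaining
    % x : (\forall \alpha.\, \alpha \to \alpha) \to (\forall \alpha.\, \alpha \to \alpha),
    % and apply this to x itself.

For \Omega = \omega\,\omega one would need a pair of derivations \vdash \omega : \sigma \to \tau and \vdash \omega : \sigma with the same \sigma; a "direct" proof must show that no such matching pair exists, without appealing to strong normalization.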
From gabriel.scherer at gmail.com Wed Mar 3 10:07:38 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Wed, 3 Mar 2021 16:07:38 +0100
Subject: [TYPES] Announcing a "types" Zulip Chat, experimental for now
Message-ID:

Dear types-list,

On a suggestion from Philipp Haselwarter ( https://haselwarter.org/~philipp/ ), I created a Zulip chat for the Types community:

  https://typ.zulipchat.com/

Zulip is an in-browser chat platform that is being used by several research communities around us, for example:

- Lean: https://leanprover.zulipchat.com/ (archive: https://leanprover-community.github.io/archive/)
- Coq: https://coq.zulipchat.com/
- category theory: https://categorytheory.zulipchat.com/login/ (archive: https://mattecapu.github.io/ct-zulip-archive/)
- Julia: https://julialang.zulipchat.com/login/
- a "functional programming" Zulip that seems mostly Haskell-centric for now: https://funprog.zulipchat.com/ (archive: https://funprog.srid.ca/ )

Zulip ( https://zulip.com/ ) is free software. Most of the development comes from a company, Kandra Labs, that offers paid hosting on *.zulipchat.com. They provide free hosting for open-source or academic projects, which is how the Types Zulip is hosted.

This is an experiment, to be revisited in a few months if it proves too hard to deal with or people are dissatisfied with the platform. In the meantime, please feel free to join! Of course you can also stick to types-list if you prefer email. Ideally I would prefer for membership to increase smoothly (this will be my first experience as a public Zulip moderator), so I would recommend that you share the link with your colleagues, but not yet on open social-media platforms (Reddit, etc.) during March. We can tell the whole world on April 1st. (This could also be an occasion to advertise the types-list itself in our communities; I'm told it is not as widely known as it should be.)

Zulip offers a semi-structured chat model (with topics/threads) that may be amenable to producing fruitful technical and scientific discussions. I will try to enable archiving soon, so that the content written today can help people in the future, just like the current types-list archives.

Happy chatting

From fdhzs2010 at hotmail.com Mon Mar 29 22:34:05 2021
From: fdhzs2010 at hotmail.com (Jason -Zhong Sheng- Hu)
Date: Tue, 30 Mar 2021 02:34:05 +0000
Subject: [TYPES] normalization by evaluation for strong sums for STLC
Message-ID:

Hi all,

I am trying to find papers on NbE algorithms handling strong sums for STLC. In particular, I want to see how this commuting conversion rule is dealt with:

    (match t with
     | inl y => s
     | inr z => u) t'
    ============>
    match t with
    | inl y => s t'
    | inr z => u t'

that is, applications immediately after a pattern matching are distributed into the branches.

For most papers I found, they only deal with either other commuting conversions or eta for sums. I am aware of https://ieeexplore.ieee.org/document/932506 which does handle the commuting conversion of interest, but it is too category-heavy. I am looking for more lightweight semantic methods. Is there any other method to deal with strong sums which I am not aware of?

Thanks,
Jason Hu
https://hustmphrrr.github.io/

From neelakantan.krishnaswami at gmail.com Tue Mar 30 03:12:10 2021
From: neelakantan.krishnaswami at gmail.com (Neelakantan Krishnaswami)
Date: Tue, 30 Mar 2021 08:12:10 +0100
Subject: [TYPES] normalization by evaluation for strong sums for STLC
In-Reply-To:
References:
Message-ID:

Hi,

You probably want Sam Lindley's *Extensional Rewriting with Sums*, TLCA 2007.
https://homepages.inf.ed.ac.uk/slindley/papers/sum.pdf

He calls the equation you give the "move-case" rewrite.

Best,
Neel
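For concreteness, here are the two sides of the move-case rule above, transcribed into runnable Haskell with Either as the sum type (a sketch; branch bodies are represented as functions of the bound variable, and all names are illustrative):

    -- Left-hand side: the application is outside the case.
    lhs :: Either a b -> (a -> c -> d) -> (b -> c -> d) -> c -> d
    lhs t s u t' =
      (case t of
         Left y  -> s y
         Right z -> u z) t'

    -- Right-hand side: the application is distributed into the branches.
    rhs :: Either a b -> (a -> c -> d) -> (b -> c -> d) -> c -> d
    rhs t s u t' =
      case t of
        Left y  -> s y t'
        Right z -> u z t'

The two sides are extensionally equal; the commuting conversion orients lhs into rhs, and an NbE algorithm for strong sums must identify them.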
From email at mayureshkathe.com Fri Apr 16 04:53:26 2021
From: email at mayureshkathe.com (email at mayureshkathe.com)
Date: Fri, 16 Apr 2021 10:53:26 +0200
Subject: [TYPES] Practical Foundations for Programming Languages : 2e : Requesting review
Message-ID: <3de7-60795080-5e7-1ff06580@50711471>

I solicit expert opinions and reviews about the book "Practical Foundations for Programming Languages", 2e, by Robert Harper. I do own a copy of "Types and Programming Languages" by Pierce, but thought the book above would add to my knowledge, hence the request. Thank you.

From bcpierce at cis.upenn.edu Mon Apr 19 09:59:18 2021
From: bcpierce at cis.upenn.edu (Benjamin Pierce)
Date: Mon, 19 Apr 2021 09:59:18 -0400
Subject: [TYPES] Practical Foundations for Programming Languages : 2e : Requesting review
In-Reply-To: <3de7-60795080-5e7-1ff06580@50711471>
References: <3de7-60795080-5e7-1ff06580@50711471>
Message-ID:

It's an excellent book by a leader in the field (and goes beyond TAPL in a number of dimensions). Well worth owning for anybody with an interest in this area.

- Benjamin

From anitha.gollamudi at gmail.com Thu May 13 00:07:38 2021
From: anitha.gollamudi at gmail.com (Anitha Gollamudi)
Date: Thu, 13 May 2021 00:07:38 -0400
Subject: [TYPES] Strong Normalization for Dependent Typed Calculus
Message-ID:

Hi,

Looking for some help in understanding the strong normalization (SN) proofs for LF. Chapter 13 of the book "Lectures on the Curry-Howard Isomorphism" by Sorensen and Urzyczyn translates LF (actually \lambda P) to STLC using contracting maps. I am trying to understand the invariant(s) of the translation.

The first attempt at translation (Definition 13.3.1 on p. 334) uses a contracting map, b, that is not strong enough to prove SN. Even then, I am puzzled by the definition for the case of the constructor variable \alpha (shown below):

    b(\alpha) = p_\alpha

where \alpha is a constructor variable and p_\alpha is a fresh propositional variable. However, the translation of the typing context uses a different definition for translating \alpha:

    b(\G) = { x_\alpha : b(k) | (\alpha : k) \in \G } \cup ...

where x_\alpha is a fresh term variable.

The discrepancy between the type and typing-context translations threw me off a bit, for a few reasons:

1. p_\alpha is a type variable; clearly STLC has no type variables. So the translated type does not belong to STLC types.
2. Related to the above point: how to translate a term (x : \alpha) under a \G that has a suitable kind for \alpha?
3. The translation of \G introduces fresh variables x_\alpha that are not used in the translated terms. So why bother?

Appreciate any pointer/explanation.

Best
Anitha
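For what it's worth, the invariant such reduction-preserving translations are usually after has the following shape (a sketch of the standard argument in LaTeX notation, not the book's exact statement):

    % If the map is strictly reduction-preserving, i.e.
    %   M \to_\beta N \text{ in } \lambda P
    %   \implies b(M) \to_\beta^{+} b(N) \text{ in } \lambda{\to},
    % then an infinite reduction M_0 \to M_1 \to \cdots in \lambda P
    % would yield an infinite reduction
    %   b(M_0) \to^{+} b(M_1) \to^{+} \cdots in \lambda{\to},
    % contradicting strong normalization of \lambda{\to}.
    % Hence every well-typed \lambda P term is SN.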
From tringer at cs.washington.edu Tue May 18 15:42:20 2021
From: tringer at cs.washington.edu (Talia Ringer)
Date: Tue, 18 May 2021 12:42:20 -0700
Subject: [TYPES] What's a program? (Seriously)
Message-ID:

Hi friends,

I have a strange discussion I'd like to start. Recently I was debating with someone whether Curry-Howard extends to arbitrary logical systems---whether all proofs are programs in some sense. I argued yes, he argued no. But after a while of arguing, we realized that we had different axioms, if you will, modeling what a "program" is. Is any term that can be typed a program? I assumed yes, he assumed no.

So then I took to Twitter, and I asked the following questions (some informal language here, since the audience was Twitter):

1. If you're working in a language in which not all terms compute (say, HoTT without a computational interpretation of univalence, so not cubical), would you still call terms that mostly compute but rely on axioms "programs"?

2. If you answered no, would you call a term that does fully compute in the same language a "program"?

People actually really disagreed here; there was nothing resembling consensus. Is a term a program if it calls out to an oracle? Relies on an unrealizable axiom? Relies on an axiom that is realizable, but not yet realized, like univalence before cubical existed? (I suppose some reliance on axioms at some point is necessary, which makes this even weirder to me---what makes univalence different to people who do not view terms that invoke it as an axiom as programs?)

Anyways, it just feels strange to get to the last three weeks of my programming languages PhD and realize I've never once asked what makes a term a program. So it'd be interesting to hear your thoughts.

Talia

From monnier at iro.umontreal.ca Tue May 18 16:39:52 2021
From: monnier at iro.umontreal.ca (Stefan Monnier)
Date: Tue, 18 May 2021 16:39:52 -0400
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: (Talia Ringer's message of "Tue, 18 May 2021 12:42:20 -0700")
References:
Message-ID:

> Anyways, it just feels strange to get to the last three weeks of my
> programming languages PhD and realize I've never once asked what makes a
> term a program. So it'd be interesting to hear your thoughts.

I think it's not a property of the object but has instead to do with the intent. When I write a proof, it's a proof (and probably a broken one as long as I haven't mechanically checked it), and when I decide to try and run it, then it becomes a program.

Stefan

From sergey.goncharov at fau.de Tue May 18 16:44:45 2021
From: sergey.goncharov at fau.de (Sergey Goncharov)
Date: Tue, 18 May 2021 22:44:45 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References:
Message-ID:

Dear Talia,

I am not sure if I can say something particularly intelligent on this matter, but in your post you are first asking if proofs can be seen as programs, and then switch to the question of whether "terms" can be regarded as programs. I am wondering why you have no issue with regarding "terms" as proofs, but you have an issue with regarding "terms" as programs.

To my knowledge, a proof is a very (very) specific kind of program. Exotic cases aside, let us take Haskell as an example. Its Curry-Howard logic is degenerate, but it is a programming language, isn't it?

Cheers,
Sergey
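A minimal sketch of the degeneracy in question: in Haskell, general recursion inhabits every type, so the logic read off through Curry-Howard proves every proposition.

    -- Any proposition (type) has a "proof": an infinite loop.
    absurd :: a
    absurd = absurd

    -- Even a classical principle like excluded middle is "provable",
    -- vacuously, by the same trick.
    data Void
    lem :: Either a (a -> Void)
    lem = lem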
From m.escardo at cs.bham.ac.uk Tue May 18 16:58:19 2021
From: m.escardo at cs.bham.ac.uk (Martin Escardo)
Date: Tue, 18 May 2021 21:58:19 +0100
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References:
Message-ID:

This will not answer your question (which seems more philosophical than mathematical).

The BHK / CH interpretation of logic applies equally well to classical mathematics if we understand the imprecise notion of procedure as function (for example, the classical set-theoretical notion of function). Classical mathematicians using Lean apply this every day in their practical formalization work (for example).

What counts as a program, in my view, is subtle, and so your question is very appropriate. For example, if I write a proof in Agda, using function extensionality, is it a program? No. But if I write --cubical in the header of my Agda program and import a suitable cubical library, then yes. I can define an integer using function extensionality so that Agda will get stuck trying to compute it but Cubical Agda won't (and I have done it in the cubical library as an illustrative example).

A term has a meaning for you and me (probably not exactly the same meaning), and it may also have (or not have) an operational meaning to the computer.

To make things worse, you may have contradictory realizable statements. For example, each of ¬¬LPO and "all functions are continuous" is realizable, but not both simultaneously. (LPO means "every sequence s : N -> N either has a 1 or is constantly zero".)

Sorry for adding more to the confusion. For more confusion: in the setoid model of type theory, countable choice is just true. In the cubical model it is not provable (and it is probably independent).

Martin

--
Martin Escardo
http://www.cs.bham.ac.uk/~mhe
From m.escardo at cs.bham.ac.uk Tue May 18 17:10:37 2021
From: m.escardo at cs.bham.ac.uk (Martin Escardo)
Date: Tue, 18 May 2021 22:10:37 +0100
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References:
Message-ID: <7f928b34-f159-cf1f-d25e-55680eb75255@cs.bham.ac.uk>

On 18/05/2021 21:58, Martin Escardo wrote:
> (and it is probably independent).

In univalent type theories.

Martin

From tringer at cs.washington.edu Tue May 18 17:18:42 2021
From: tringer at cs.washington.edu (Talia Ringer)
Date: Tue, 18 May 2021 14:18:42 -0700
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <7f928b34-f159-cf1f-d25e-55680eb75255@cs.bham.ac.uk>
References: <7f928b34-f159-cf1f-d25e-55680eb75255@cs.bham.ac.uk>
Message-ID:

> For example, if I write a proof in Agda, using function extensionality,
> is it a program? No. But if I write --cubical in the header of my Agda
> program and import a suitable cubical library, then yes. I can define an
> integer using function extensionality so that Agda will get stuck trying
> to compute it but Cubical Agda won't (and I have done it in the cubical
> library as an illustrative example).

No wrong answers here, but I'm curious: What makes a proof in Agda using functional extensionality different in your mind from a function that takes a proof of functional extensionality as an input, and then returns the proof? Is the latter a program? Is the latter a program once you turn on --cubical, since now it has inputs? Do programs need to have realizable inputs?

> To my knowledge, a proof is a very (very) specific kind of program.

What makes it more specific? In my mind, you can certainly view any program in any language as a proof in some logical system, even if the logical system is not actually implemented as the language's type checker, and even if the corresponding logic is unsound. A proof in an unsound logic is still a proof, I think---just a possibly useless one (possibly useful, though, since you can have incorrect assumptions and still sometimes prove things that in no way rely on them).
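One way to make the "calls out to an oracle" case concrete (a Haskell sketch; all names here are hypothetical): a term can be a perfectly ordinary computable functional even when no computable value of its input type exists.

    type Program = String
    type Oracle  = Program -> Bool   -- intended meaning: "does this program halt?"

    -- countHalting is computable as a functional: given ANY oracle, it
    -- runs and returns. Yet no computable Oracle with the intended
    -- meaning exists, so the term can never be "completed" with a
    -- realizable input.
    countHalting :: Oracle -> [Program] -> Int
    countHalting halts ps = length (filter halts ps)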
From gavin.mendel.gleason at gmail.com Tue May 18 17:26:37 2021
From: gavin.mendel.gleason at gmail.com (Gavin Mendel-Gleason)
Date: Tue, 18 May 2021 23:26:37 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References:
Message-ID:

The idea of computation being related to proof requires that there be some directional relation between proofs which can be used as a reduction relation. Without such a notion, the terms that form a proof are entirely static. Many "reasonable" relations between proofs (i.e. those that preserve the final logical statement while retaining a valid proof) might not "reduce" to some "value" either. They could instead relate the proofs in such a way that the structures always get more complex (perhaps cut-introduction?). Would that still be a programme?

But perhaps the question should be taken the other way around. Is every programme the proof of something? And if so, of what? We can fix the reduction relation with our operational notion of the programme, but then we're left wondering what properties we are computing. We can easily equip a programme with properties which are abstract enough that it will compute that property. But there are lots of properties which may hold of the programme which we probably cannot call the programme a "proof" of without supplying more elaborated information; without that information, the satisfaction of these properties by the programme is undecidable. This missing information would seem necessary to supply if we want the programme to constitute a "proof"; otherwise, giving someone the programme and telling them to verify the proof would be rather cruel.

Whichever way you look at it, while you can sometimes wedge programming into the CH correspondence, it seems unlikely that CH covers everything we mean by proof.

From m.escardo at cs.bham.ac.uk Tue May 18 17:51:56 2021
From: m.escardo at cs.bham.ac.uk (Martin Escardo)
Date: Tue, 18 May 2021 22:51:56 +0100
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References: <7f928b34-f159-cf1f-d25e-55680eb75255@cs.bham.ac.uk>
Message-ID:

On 18/05/2021 22:18, Talia Ringer wrote:
> Do programs need to have realizable inputs?

That's an excellent question. You should check the book "Higher-Order Computability" by John Longley and Dag Normann (Springer 2015).
https://www.springer.com/gp/book/9783662479919

In the 1950s people asked this question of themselves (Kleene, Kreisel and others). The answer is that there are two classical theories of higher-order computability:

* computable data and computable functions.
* continuous data and computable functions.

Technically, they amount to realizability over K1 (Hyland's "effective topos") and realizability over K2 (Kleene-Vesley topos), where K1 and K2 are Kleene's first and second combinatory algebras.
Other proposals were given, but all of them turned out to be (with hard work) equivalent to K1 or K2.

These two theories don't agree. All they agree about is which functions N -> N are or are not computable (and they agree with Turing). But e.g. they don't agree regarding which functions (N -> N) -> N are allowed and which ones are computable. The difference is what you say: do we consider computable functions over arbitrary inputs, or just over computable inputs? The precise answers are technically subtle (and mathematically interesting).

> What makes it more specific? In my mind, you can certainly view any
> program in any language as a proof in some logical system, even if the
> logical system is not actually implemented as the language's type
> checker, and even if the corresponding logic is unsound.

I am not sure I understand this, but I would be willing to discuss it to clarify it.

Best,
Martin
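A toy illustration of the second ("continuous data") view, as a Haskell sketch: a computable functional on Baire space inspects only finitely many values of its argument, so it is perfectly meaningful on arbitrary, not necessarily computable, inputs.

    type Baire = Integer -> Integer

    -- A computable functional of type (N -> N) -> N: it queries its
    -- argument at finitely many points and then stops, regardless of
    -- whether the argument itself is given by any program at all.
    phi :: Baire -> Integer
    phi f = f 0 + f (f 0 + 1)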
From neelakantan.krishnaswami at gmail.com Tue May 18 18:00:18 2021
From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami)
Date: Tue, 18 May 2021 23:00:18 +0100
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References:
Message-ID: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>

Dear Talia,

Here's an imprecise but useful way of organising these ideas that I found helpful.

1. A *language* is a (generalised) algebraic theory. Basically, think of a theory as a set of generators and relations in the style of abstract algebra.

   You need to beef this up to handle variables (e.g., see the work of Fiore and Hamana), but (a) I promised to be imprecise, and (b) the core intuition that a language is a set of generators for terms, plus a set of equations these terms satisfy, is already totally visible in the basic case.

   For example:

   a) the simply-typed lambda calculus
   b) regular expressions
   c) relational algebra

2. A *model* of a language is literally just any old mathematical structure which supports the generators of the language and respects the equations.

   For example:

   a) you can model the typed lambda calculus using sets for types and mathematical functions for terms,
   b) you can model regular expressions as denoting particular languages (i.e., sets of strings),
   c) you can model relational algebra expressions as sets of tuples.

3. A *model of computation* or *machine model* is basically a description of an abstract machine that we think can be implemented with physical hardware, at least in principle. So these are things like finite state machines, Turing machines, Petri nets, pushdown automata, register machines, circuits, and so on. Basically, think of models of computation as the things you study in a computability class.

   The Church-Turing thesis bounds which abstract machines we think it is possible to physically implement.

4. A language is a *programming language* when you can give at least one model of the language using some machine model.

   For example:

   a) the types of the lambda calculus can be viewed as partial equivalence relations over Gödel codes for some universal Turing machine, and the terms of a type can be assigned to equivalence classes of the corresponding PER;
   b) regular expressions can be interpreted into finite state machines quotiented by bisimulation;
   c) a set in relational algebra can be realised as equivalence classes of B-trees, and relational algebra expressions as nested for-loops over them.

   Note that in all three cases we have to quotient the machine model by a suitable equivalence relation to preserve the equations of the language's theory.

   This quotient is *very* important, and is the source of a lot of confusion. It hides the equivalences the language theory wants to deny, but that is not always what the programmer wants: e.g., is merge sort equal to bubble sort? As mathematical functions, they surely are, but if you consider them as operations running on an actual computer, then we will have strong preferences!

5. A common source of confusion arises from the fact that if you have a nice type-theoretic language (like the STLC), then:

   a) the term model of this theory will be the initial model in the category of models, and
   b) you can turn the terms into a machine model by orienting some of the equations the lambda-theory satisfies and using them as rewrites.

   As a result we abuse language to talk about the theory of the simply-typed calculus as "being" a programming language. This is also where operational semantics comes from, at least for purely functional languages.

Best,
Neel
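To make the language/model/machine-model trichotomy concrete in one toy example (a Haskell sketch): the datatype below is the *language*, its intended denotation is a set of strings (the *model*), and Brzozowski's derivative-based matcher plays the role of a *machine model*. Distinct expressions denoting the same set of strings are exactly what the quotient identifies.

    data Re = Empty | Eps | Chr Char | Alt Re Re | Seq Re Re | Star Re

    -- Does the denoted language contain the empty string?
    nullable :: Re -> Bool
    nullable Eps       = True
    nullable (Alt r s) = nullable r || nullable s
    nullable (Seq r s) = nullable r && nullable s
    nullable (Star _)  = True
    nullable _         = False

    -- Brzozowski derivative: the residual language after reading c.
    deriv :: Char -> Re -> Re
    deriv _ Empty     = Empty
    deriv _ Eps       = Empty
    deriv c (Chr d)   = if c == d then Eps else Empty
    deriv c (Alt r s) = Alt (deriv c r) (deriv c s)
    deriv c (Seq r s)
      | nullable r    = Alt (Seq (deriv c r) s) (deriv c s)
      | otherwise     = Seq (deriv c r) s
    deriv c (Star r)  = Seq (deriv c r) (Star r)

    -- The "machine": consume the input character by character.
    matches :: Re -> String -> Bool
    matches r = nullable . foldl (flip deriv) r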
From sandro.stucki at gmail.com Wed May 19 04:03:36 2021
From: sandro.stucki at gmail.com (Sandro Stucki)
Date: Wed, 19 May 2021 10:03:36 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID:

Talia: thanks for a thought-provoking question, and thanks everyone else for all the interesting answers so far!

Neel: I love your explanation and all your examples! But you didn't really answer Talia's question, did you? I'd be curious to know where and how HoTT without a computation rule for univalence would fit into your classification. It would certainly be a language, and by your definition it has models (e.g. cubical ones) which, if I understand correctly, can be turned into an abstract machine (either a rewriting system per your point 5 or whatever the Agda backends compile to). So according to your definition of programming language (point 4), this version of HoTT would be a programming language simply because there is, in principle, an abstract machine model for it? Is that what you had in mind?

Cheers
/Sandro
From anstenklev at gmail.com Wed May 19 04:08:37 2021
From: anstenklev at gmail.com (Ansten Mørch Klev)
Date: Wed, 19 May 2021 10:08:37 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID:

Dear Talia,

It might be worth remembering that, in discussions of Curry-Howard, the terms "proof" and "programme" have meanings that (at least at first sight) do not quite agree with their meaning in other parts of logic and computer science.

A proof in the Curry-Howard sense may also be called a verifier or a truth-maker, for it is what makes a proposition true, where truth of a proposition is understood as the inhabitation of a type. A proof in this sense is therefore a kind of object, and not an act by means of which you get to know that a proposition is true.
Martin-Löf and others prefer to call the latter a demonstration. Within type theory, you demonstrate that a proposition is true by constructing a proof of it, where a proof is thus a certain term in the language of the theory in question.

A term is called a programme, I take it, by virtue of its being possible to evaluate it. This is *not* the notion of programme that is assumed in the phrase "Turing machine programme", since a Turing machine programme is not itself evaluated. For instance, the term 2+2 is a programme in the sense that it can be evaluated---to ssss(0), say---but it is clearly not a Turing machine programme. In the world of Turing machines, the term 2+2 qua programme rather corresponds to a machine with suitable input on the tape (two sequences of two 1's in this case).

If we stick to the idea that a term is a programme in the sense that it can be evaluated, then no higher-order term is a programme. In a higher-order language, the term + in isolation may be a term, but it is not by itself evaluable, just as a Turing machine programme that computes addition is not by itself evaluable. Rather, in accordance with the type of the term +, you must complete it with two arguments of type N to get something evaluable.

One can make the same argument with respect to open terms, since an open term is (or stands for) a function, just as a higher-order term is (or does). The open term x+y, for instance, is not by itself evaluable. To get something evaluable, you need to assign values to the variables, yielding for instance 2+2.

With kind regards,
Ansten Klev
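The same distinction, rendered as a Haskell sketch: a closed first-order term evaluates outright, while a higher-order constant or an open term must first be completed with arguments.

    -- A closed term: evaluable outright.
    four :: Integer
    four = 2 + 2              -- evaluates to 4

    -- A higher-order term: (+) by itself is not "run"; it must be
    -- completed with two arguments to yield something evaluable.
    plus :: Integer -> Integer -> Integer
    plus = (+)

    -- An open term x + y corresponds to a function; evaluation happens
    -- only once the variables receive values, e.g. in f 2 2.
    f :: Integer -> Integer -> Integer
    f x y = x + y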
From m.escardo at cs.bham.ac.uk Wed May 19 05:03:33 2021
From: m.escardo at cs.bham.ac.uk (Martin Escardo)
Date: Wed, 19 May 2021 10:03:33 +0100
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References: <7f928b34-f159-cf1f-d25e-55680eb75255@cs.bham.ac.uk>
Message-ID: <510563eb-ae05-49be-038d-bad2def9d873@cs.bham.ac.uk>

On 18/05/2021 22:18, Talia Ringer wrote:
> What makes a proof in Agda using functional extensionality different
> in your mind from a function that takes a proof of functional
> extensionality as an input, and then returns the proof? Is the latter
> a program? Is the latter a program once you turn on --cubical, since
> now it has inputs? Do programs need to have realizable inputs?

In my preferred view of the world, computable functions don't need to have computable inputs. (I am a K2 person.)

Many theorems are of the form "A -> B". If you can do this, then you can do that. I would regard a constructive proof of A -> B as a program, before I know whether A "can be done" (or how). Take e.g. A = function extensionality and B = integers. Let f : A -> B be some constructive proof / program. If I want to compute an integer from f, now I do need to supply some a : A. This is what Cubical Agda can do but Agda can't: supply the necessary input to be able to *use* the program f to get an integer. Similarly, in Agda you can prove Univalence -> Blah, but you won't be able to supply an input to run this proof, although you will be able to do so in Cubical Agda.

By the way, here is another, more concrete example of a pair of things that are realizable but contradict each other: the univalence axiom and the uniqueness-of-identity-proofs axiom. Cubical Agda has the former, and Agda has the latter by default (although it can be disabled).

One more comment about (non)computable inputs: many practical programs take a continuous stream of inputs from the real world. Is this stream computable? Maybe. But I've never been told what the program generating it is, and so knowing that it exists is not a very useful piece of information when I am writing a program to process this stream.

Martin

From freek at cs.ru.nl Wed May 19 05:29:47 2021
From: freek at cs.ru.nl (Freek Wiedijk)
Date: Wed, 19 May 2021 11:29:47 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID: <20210519092947.tc3l3k2mmghdmwiq@xb27-stretch.fritz.box>

Dear Talia and others,

Of course there is no inherent meaning to the word "program". It depends on the person what they mean by it. Also, it's a bit like Wittgenstein's analysis of the notion of "game": these concepts are not very precisely delineated, there is just a social consensus based on how the word is used in general.

For me a program is something for which there is a notion of time evolution, i.e., for which there is a notion of taking steps, to be denoted by ->.
In type theory that's term reduction, in operational semantics it's state transition, the physical state of every processor continuously evolves through time too, hell, even a Turing machine takes steps. So the notion of time is for me the essential ingredient: in time, a program makes progress towards something that it is supposed to accomplish. Of course, a value already is a finished result, so that's a trivial program that won't take any steps anymore; but still it can be in a domain for which such an arrow exists, and then it still counts as a program to me too.

We have various things:
- terms
- proofs
- programs
- states

I guess Talia's question is what inclusions between these notions we think are natural. I don't see any obvious inclusion in any direction myself, but I guess others will disagree.

Also (I think this already was asked, but I would like to repeat it): the original question was whether every proof is a program, but from discussions with Dan Synek I remember the question: can every program (in the general sense) be seen as a proof?

Freek
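A minimal sketch of this "notion of taking steps" in Haskell: a tiny expression language with a one-step relation, where values are exactly the terms that take no further steps.

    data Expr = Lit Integer | Add Expr Expr

    -- One step of reduction; Nothing means the term is a value.
    step :: Expr -> Maybe Expr
    step (Lit _)               = Nothing
    step (Add (Lit m) (Lit n)) = Just (Lit (m + n))
    step (Add e1 e2)           = case step e1 of
      Just e1' -> Just (Add e1' e2)
      Nothing  -> fmap (Add e1) (step e2)

    -- Time evolution: iterate steps until a value is reached.
    run :: Expr -> Expr
    run e = maybe e run (step e)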
From neelakantan.krishnaswami at gmail.com Wed May 19 05:54:51 2021
From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami)
Date: Wed, 19 May 2021 10:54:51 +0100
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID:

Dear Sandro,

Yes, you're right -- I didn't answer the question, since I was too taken by the subject line. :)

Anyway, I do think that HoTT with a non-reducible univalence axiom is still a programming language, because we can give a computational interpretation to that language: for example, you could follow the strategy of Angiuli, Harper and Wilson's POPL 2017 paper, *Computational Higher-Dimensional Type Theory*.

Another, simpler example comes from Martin Escardo's example upthread of basic Martin-Löf type theory with the function extensionality axiom. You can give a very simple realizability interpretation to the equality type and extensionality axiom, which lets every compiled program compute.

What you lose in both of these cases is not the ability to give a computational model to the language, but rather the ability to identify normal forms and to use an oriented version of the equational theory of the language as the evaluation mechanism.

This is not an overly shocking phenomenon: it occurs even in much simpler languages than dependent type theories. For example, once you add the reference type `ref a` to ML, it is no longer the case that the language has normal forms, because the ref type does not have introduction and elimination rules with beta- and eta-rules.

Another way of thinking about this is that often, we *aren't sure* what the equational theory of our language is or should be. This is because we often derive a language by thinking about a particular semantic model, and don't have a clear idea of which equations are properly part of the theory of the language, and which ones are accidental features of the concrete model.

For example, in the case of name generation (i.e., ref unit), our intuitions for which equations hold come from the concrete model of nominal sets. But we don't know which of those equations should hold in all models of name generation, and which are "coincidental" to nominal sets.

Another, more practical, example comes from the theory of state. We all have the picture of memory as a big array which is updated by assembly instructions a la the state monad. But this model incorrectly models the behaviour of memory on modern multicore systems. So a proper theory of state for this case should have fewer equations than what the folk model of state validates.

Best,
Neel
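The folk model in question, as a Haskell sketch: memory is a single value s threaded through the program, so every read sees the most recent write, which is exactly the equation that fails on multicore hardware without synchronization.

    newtype State s a = State { runState :: s -> (a, s) }

    get :: State s s
    get = State (\s -> (s, s))

    put :: s -> State s ()
    put s' = State (\_ -> ((), s'))

    -- Sequencing threads the store: the second computation runs in the
    -- store left by the first.
    bind :: State s a -> (a -> State s b) -> State s b
    bind (State g) k = State (\s -> let (a, s') = g s in runState (k a) s')

    -- In this model, put v `bind` \_ -> get always yields v: reads see
    -- the latest write, a guarantee real multicore memory does not give.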
> >       quotiented by bisimulation.
> >    c) A set in relational algebra can be realised as equivalence classes of B-trees, and relational algebra expressions as nested for-loops over them.
> >
> >    Note that in all three cases we have to quotient the machine model by a suitable equivalence relation to preserve the equations of the language's theory.
> >
> >    This quotient is *very* important, and is the source of a lot of confusion. It hides the equivalences the language theory wants to deny, but that is not always what the programmer wants -- e.g., is merge sort equal to bubble sort? As mathematical functions, they surely are, but if you consider them as operations running on an actual computer, then we will have strong preferences!
> >
> > 4. A common source of confusion arises from the fact that if you have a nice type-theoretic language (like the STLC), then:
> >    a) the term model of this theory will be the initial model in the category of models, and
> >    b) you can turn the terms into a machine model by orienting some of the equations the lambda-theory satisfies and using them as rewrites.
> >
> >    As a result we abuse language to talk about the theory of the simply-typed calculus as "being" a programming language. This is also where operational semantics comes from, at least for purely functional languages.
> >
> > Best,
> > Neel
> >
> > On 18/05/2021 20:42, Talia Ringer wrote:
> > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> > >
> > > Hi friends,
> > >
> > > I have a strange discussion I'd like to start. Recently I was debating with someone whether Curry-Howard extends to arbitrary logical systems---whether all proofs are programs in some sense. I argued yes, he argued no. But after a while of arguing, we realized that we had different axioms, if you will, modeling what a "program" is. Is any term that can be typed a program? I assumed yes, he assumed no.
> > >
> > > So then I took to Twitter, and I asked the following questions (some informal language here, since the audience was Twitter):
> > >
> > > 1. If you're working in a language in which not all terms compute (say, HoTT without a computational interpretation of univalence, so not cubical), would you still call terms that mostly compute but rely on axioms "programs"?
> > >
> > > 2. If you answered no, would you call a term that does fully compute in the same language a "program"?
> > >
> > > People actually really disagreed here; there was nothing resembling consensus. Is a term a program if it calls out to an oracle? Relies on an unrealizable axiom? Relies on an axiom that is realizable, but not yet realized, like univalence before cubical existed? (I suppose some reliance on axioms at some point is necessary, which makes this even weirder to me---what makes univalence different to people who do not view terms that invoke it as an axiom as programs?)
> > >
> > > Anyways, it just feels strange to get to the last three weeks of my programming languages PhD and realize I've never once asked what makes a term a program. So it'd be interesting to hear your thoughts.
> > >
> > > Talia

From tringer at cs.washington.edu Wed May 19 06:35:22 2021
From: tringer at cs.washington.edu (Talia Ringer)
Date: Wed, 19 May 2021 03:35:22 -0700
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com> Message-ID:

Somewhat of a complementary question, and proof to the world that I'm up at 3:30 AM still thinking about this:

Are there interesting or commonly used logical axioms that we know for sure cannot have computational interpretations?

On Wed, May 19, 2021, 3:24 AM Neel Krishnaswami <neelakantan.krishnaswami at gmail.com> wrote:
> [...]
From jasongross9 at gmail.com Wed May 19 08:03:05 2021
From: jasongross9 at gmail.com (Jason Gross)
Date: Wed, 19 May 2021 08:03:05 -0400
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com> Message-ID:

Non-truncated Excluded Middle (that is, the version that returns an informative disjunction) cannot have a computational interpretation in Turing machines, for it would allow you to decide the halting problem. More generally, some computational complexity theory is done with reference to oracles for known-undecidable problems.
Additionally, I'd be suspicious of a computational interpretation of the consistency of ZFC or PA -- would having a computational interpretation of these mean having a type theory that believes that there are ground terms of type False in the presence of a contradiction in ZFC?
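(For concreteness, the "informative" versus "truncated" distinction, written out as Coq types -- an illustration, not part of the original message:

    (* The informative version returns a sum that programs can branch on: *)
    Definition informative_LEM := forall A : Prop, {A} + {~ A}.

    (* The truncated version only asserts the disjunction in Prop: *)
    Definition truncated_LEM := forall A : Prop, A \/ ~ A.

A realizer for informative_LEM, applied to "machine m halts", would be exactly a halting decider, which is the obstruction Jason describes.)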
On Wed, May 19, 2021, 07:38 Talia Ringer wrote:
> [...]
From hendrik at topoi.pooq.com Wed May 19 09:20:39 2021
From: hendrik at topoi.pooq.com (Hendrik Boom)
Date: Wed, 19 May 2021 09:20:39 -0400
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <7f928b34-f159-cf1f-d25e-55680eb75255@cs.bham.ac.uk> Message-ID: <20210519132039.uz7zqp5y63oaqccg@topoi.pooq.com>

On Tue, May 18, 2021 at 10:51:56PM +0100, Martin Escardo wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> On 18/05/2021 22:18, Talia Ringer wrote:
> ...
> > What makes it more specific? In my mind, you can certainly view any program in any language as a proof in some logical system, even if the logical system is not actually implemented as the language's type checker, and even if the corresponding logic is unsound. A proof in an unsound logic is still a proof, I think---just a possibly useless one (possibly useful, though, since you can have incorrect assumptions and still sometimes prove things that in no way rely on them).
>
> I am not sure I understand this, but I would be willing to discuss it to clarify it.

Example of a useful inconsistent logic. Useful only if used properly, of course, but it leaves it to the user to determine what is proper.

Take a usual universe-hierarchy type theory, one with U0 : U1, U1 : U2, and so forth. How far to follow that hierarchy? Two levels? An infinite sequence? Index the U's with all the ordinals? There are always limits on expressing things that generalise over universes.

The obvious thing to do is to introduce a type of all universes, and to consider it a universe:

    U : U

Of course, that's inconsistent. But it allows you to take any type to index universes by using a constant function that always returns U. And then you'll be able to construct any infinite hierarchy you want. You could even make a reasonable one and prove its reasonable properties. Afterward, you can use this hierarchy consistently as long as you forget you ever had U : U lying around, because using U : U directly can lead to trouble.
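Coq actually ships this inconsistent convenience as a switch; a tiny sketch of what it buys (the file name here is hypothetical):

    (* Compile with:  coqc -type-in-type universes.v
       The flag collapses the universe hierarchy, in effect assuming U : U. *)
    Definition U : Type := Type.

    Check (U : U).   (* accepted under -type-in-type; a universe
                        inconsistency error without the flag *)

Under this flag Hurkens' paradox applies, so a closed proof of False becomes derivable -- the kind of trouble Hendrik mentions if you use U : U directly.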
-- hendrik

From oleg at okmij.org Wed May 19 10:52:40 2021
From: oleg at okmij.org (Oleg)
Date: Wed, 19 May 2021 23:52:40 +0900
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: Message-ID: <20210519145240.GA2140@Melchior.localnet>

If we are asking `what is a program' without further qualifications, then I believe the discussion so far has been too `procedurally' focused, leaving out a class of programs and programming languages.

For example, Prolog is a programming language -- its name says so (and so does Wikipedia). A Prolog program is a sequence of clauses and a query, asking if some new clause is a consequence of the others. The answer is no or yes (perhaps with some more information). There is no inherent notion of time, or of reductions.

One may say: but answering the query is performing a sequence of SLD resolutions. Well, nowadays -- no. First, in answer-set semantics, a Prolog program is just an easier-to-read SAT-solver query. Second, for a long time Prolog has supported constraints over various domains (besides the equality constraints over terms, of course). So, at some point an execution engine might consult a constraint solver. Some constraint solvers are built in. Some are external: there are constraint-logic programming (CLP) systems that consult an SMT solver. In principle an execution engine may even consult the user (e.g., via a debugger interface). This is all called programming -- that's what the last letter in CLP stands for.

Seeing as more and more bona fide programming languages are interfacing with Z3, perhaps it is not a stretch to call an SMT query a program. Or a query to a theorem prover.

(And although I prefer not to go in that direction, I have seen several presentations advertising machine learning as programming. In that case, it is really very hard to tell how the answer is arrived at. Still, some people do call training a neural net and then querying it programming. So a program is a set of training data and a query.)

A small remark on Neel Krishnaswami's approach:

> 3. A language is a *programming language* when you can give at least
> one model of the language using some machine model.

Well, let's take as a model of a language a term model -- which surely exists if the language/algebra has constants/generators and equalities are algebraic identities. (Actually, the overall point holds for more general identities.) That is, the carrier set of the algebra is just the set of terms quotiented by the equalities. By `give a model using some machine model' I understand that I need to present a machine that, given a term, produces some element of the carrier set. Since these elements are term equivalence classes, and a term itself is a representative of its class, our machine is the identity. I believe it is physically realizable. Thus every language (a generalized algebraic theory) is a programming language?

From streicher at mathematik.tu-darmstadt.de Wed May 19 10:56:06 2021
From: streicher at mathematik.tu-darmstadt.de (Thomas Streicher)
Date: Wed, 19 May 2021 16:56:06 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com> Message-ID: <20210519145606.GA15921@mathematik.tu-darmstadt.de>

> Are there interesting or commonly used logical axioms that we know for
> sure cannot have computational interpretations?

PEM (Principle of Excluded Middle), already for equality of functions from N to N. From this you immediately get a decider for the halting problem.
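A rough Coq rendering of this one-line argument, with hypothetical names throughout ([trace] stands for the computable "machine m has halted within n steps" predicate, and PEM is specialised to bool-valued functions for brevity):

    Axiom machine : Type.
    Axiom trace : machine -> nat -> bool.  (* true iff m halted within n steps *)

    (* Non-truncated PEM, specialised to equality of functions on nat: *)
    Axiom PEM_funeq : forall f g : nat -> bool, {f = g} + {f <> g}.

    (* Deciding whether the trace is the constantly-false function
       decides whether m ever halts: *)
    Definition halts (m : machine) : bool :=
      if PEM_funeq (trace m) (fun _ => false) then false else true.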
Thomas

From gabriel.scherer at gmail.com Wed May 19 11:26:52 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Wed, 19 May 2021 17:26:52 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com> Message-ID:

I am not convinced by the examples of Jason and Thomas, which suggests that I am missing something.

We can interpret the excluded middle in classical abstract machines (for example the Curien-Herbelin family of mu-mutilde calculi, or Parigot's earlier classical lambda-calculus), or in the presence of control operators (classical abstract machines being nicer syntax for non-delimited continuation operators). If you ask for a proof of (A \/ not A), you get a "fake" proof of (not A); if you ever manage to build a proof of A and try to use it to get a contradiction using this (not A), it will "cheat" by traveling back in time to your "ask", and serve you your own proof of A.

This gives a computational interpretation of (non-truncated) excluded middle that seems perfectly in line with Talia's notion of "program".
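Under the double-negation reading, that "time travel" is just a continuation invoked twice; a tiny Coq illustration (not from Gabriel's message):

    (* The continuation k is first fed the "fake" proof of (not A); if a
       real proof a : A ever hits that fake proof, k is invoked again --
       this time with (or_introl a): *)
    Definition lem_dn (A : Prop) : ~ ~ (A \/ ~ A) :=
      fun k => k (or_intror (fun a : A => k (or_introl a))).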
Of course, what we don't get that we might expect is a canonicity property: we now have "fake" proofs of (A \/ B) that cannot be distinguished from "real" proofs by normalization alone; you have to interact with them to see where they take you. (Or, if you see those classical programs through a double-negation translation, they aren't really at type (A \/ B), but rather at its double-negation translation, which has weirder normal forms.)

On Wed, May 19, 2021 at 5:07 PM Jason Gross wrote:
> [...]
From streicher at mathematik.tu-darmstadt.de Wed May 19 11:59:13 2021
From: streicher at mathematik.tu-darmstadt.de (Thomas Streicher)
Date: Wed, 19 May 2021 17:59:13 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com> Message-ID: <20210519155913.GD15921@mathematik.tu-darmstadt.de>

On Wed, May 19, 2021 at 05:26:52PM +0200, Gabriel Scherer wrote:
> [...]
All these computational interpretations of classical logic are actually constructive interpretations of their negative translations. This is maybe not so transparent because macros are used. But if you consider the CPS translation of these macros, you again get the negative translations.

I know the French school (Krivine in particular) is very fond of these macros. But these macros don't add to the computational power: more generally, all effects can be translated to a purely functional kernel. Whether one likes these macros or not presumably depends on whether one is more of a CS person or a mathematician. But these preferences don't change the strength.

Thomas

From tadeusz.litak at gmail.com Wed May 19 13:32:48 2021
From: tadeusz.litak at gmail.com (Tadeusz Litak)
Date: Wed, 19 May 2021 19:32:48 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com> Message-ID:

This controversy (between Gabriel and Thomas + Jason) is an instructive example. One should perhaps distinguish between a "logical axiom" understood as a purely propositional scheme and its specific instances in expressive languages. A purely propositional scheme may allow computational interpretations in some domains even when its specific instances fail in concrete domains of foundational significance.

But then, of course, the question arises whether such a scheme deserves the name of a *logical* axiom at all. Philosophers of logic tend to insist that such an axiom should be valid across all possible worlds. This contrasts rather funnily with the fact that most of them still seem to prefer classical logic. Then again, they tend to be brought up on set theory and have little awareness of the fact that structures refuting excluded middle are actively explored and used in modern CS (and math).

(When it comes to the logical character of Choice and suchlike, yet another terminological problem arises. There is a long-standing discussion concerning demarcation lines between logic, set/type theory and the rest of mathematics. Clearly, AC cannot even be stated if the ambient language is not sufficiently rich. Thus, even many philosophers who believe in ZFC as a foundation could refuse to call Choice a *logical* axiom.)

To come back to the question whether an inconsistent logic is a logic at all: there are several issues here. The first one is the validity of the principle of explosion (ex falso quodlibet). It has been argued that Brouwer himself did not believe in this law. It also wasn't included in Kolmogorov's first attempt at axiomatizing intuitionistic logic (1925); he added it in a 1932 paper, after Heyting. Ingebrigt Johansson explicitly disposed of this law in 1937. More broadly, paraconsistent/relevance logicians do study systems admitting some contradictions with a non-degenerate notion of theoremhood.

However, this is clearly of little help when discussing Turing-complete languages admitting general recursion, where *all* types are inhabited and the notion of theoremhood does collapse. To restore some connection with logic, one would need to claim that logic is concerned with notions subtler than theoremhood.
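(To see the collapse concretely, here is a two-line Coq sketch in which general recursion is faked by an axiom, since Coq itself rejects non-terminating fixpoints; [fix_] is a hypothetical constant standing for the fixed-point operator a general-recursive language provides at every type:

    Axiom fix_ : forall A : Type, (A -> A) -> A.   (* general recursion *)

    (* Every type, including empty ones, is now "proved": *)
    Definition anything (A : Type) : A := fix_ A (fun a => a).

So theoremhood-as-inhabitation tells you nothing in such a language.)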
One possible choice, going back to Tarski and underlying the entire enterprise of Abstract Algebraic Logic (AAL), is to shift focus to the consequence relation (entailment, or the turnstile). I am not sure if this is a viable route here. One would need to come up with a natural turnstile such that A |- B may fail despite A -> B being inhabited and despite \emptyset |- B being valid. Otherwise, the consequence relation remains as degenerate as the notion of theoremhood.

So one would have to go even more fine-grained and claim that logic is (or should be) concerned with criteria of identity of proofs, i.e., the question of what makes two proofs of the same proposition identical. This has been suggested by some logicians focusing on proof theory and/or categorical logic, but most logicians would probably find it an extravagant view.

Best,
t.

On 19/5/21 5:26 PM, Gabriel Scherer wrote:
> [...]
>> e.g., >>> is >>>>> merge sort equal to bubble sort? As mathematical functions, >>> they >>>>> surely are, but if you consider them as operations running on >>> an >>>>> actual computer, then we will have strong preferences! >>>>> >>>>> 4. A common source of confusion arises from the fact that if you >>> have >>>>> a nice type-theoretic language (like the STLC), then: >>>>> >>>>> a) the term model of this theory will be the initial model >> in >>>> the >>>>> category of models, and >>>>> b) you can turn the terms into a machine >>>>> model by orienting some of the equations the >> lambda-theory >>>>> satisfies and using them as rewrites. >>>>> >>>>> As a result we abuse language to talk about the theory of >> the >>>>> simply-typed calculus as "being" a programming language. >> This >>> is >>>>> also where operational semantics comes from, at least for >>> purely >>>>> functional languages. >>>>> >>>>> Best, >>>>> Neel >>>>> >>>>> On 18/05/2021 20:42, Talia Ringer wrote: >>>>> > [ The Types Forum, >>>>> http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>>> ] >>>>> > >>>>> > Hi friends, >>>>> > >>>>> > I have a strange discussion I'd like to start. Recently I was >>>>> debating with >>>>> > someone whether Curry-Howard extends to arbitrary logical >>>>> systems---whether >>>>> > all proofs are programs in some sense. I argued yes, he argued >>>>> no. But >>>>> > after a while of arguing, we realized that we had different >>>>> axioms if you >>>>> > will modeling what a "program" is. Is any term that can be >> typed >>>>> a program? >>>>> > I assumed yes, he assumed no. >>>>> > >>>>> > So then I took to Twitter, and I asked the following questions >>>> (some >>>>> > informal language here, since audience was Twitter): >>>>> > >>>>> > 1. If you're working in a language in which not all terms >>> compute >>>>> (say, >>>>> > HoTT without a computational interpretation of univalence, so >>> not >>>>> cubical), >>>>> > would you still call terms that mostly compute but rely on >>> axioms >>>>> > "programs"? >>>>> > >>>>> > 2. If you answered no, would you call a term that does fully >>>>> compute in the >>>>> > same language a "program"? >>>>> > >>>>> > People actually really disagreed here; there was nothing >>>> resembling >>>>> > consensus. Is a term a program if it calls out to an oracle? >>>>> Relies on an >>>>> > unrealizable axiom? Relies on an axiom that is realizable, but >>>>> not yet >>>>> > realized, like univalence before cubical existed? (I suppose >>> some >>>>> reliance >>>>> > on axioms at some point is necessary, which makes this even >>>>> weirder to >>>>> > me---what makes univalence different to people who do not view >>>>> terms that >>>>> > invoke it as an axiom as programs?) >>>>> > >>>>> > Anyways, it just feels strange to get to the last three weeks >> of >>>> my >>>>> > programming languages PhD, and realize I've never once asked >>> what >>>>> makes a >>>>> > term a program ?. So it'd be interesting to hear your >> thoughts. >>>>> > >>>>> > Talia >>>>> > >>>>> From jasongross9 at gmail.com Wed May 19 21:52:33 2021 From: jasongross9 at gmail.com (Jason Gross) Date: Wed, 19 May 2021 21:52:33 -0400 Subject: [TYPES] What's a program? 
> If you ask for a proof of (A \/ not A), you get a "fake" proof of
> (not A); if you ever manage to build a proof of A and try to use it to
> get a contradiction using this (not A), it will "cheat" by traveling
> back in time to your "ask", and serve you your own proof of A.

I don't understand how this semantics works; it seems to me that it
invalidates the normal reduction rules. Consider the following:

Axiom LEM : forall A, A + (A -> False).

Definition term1
  := match LEM nat as LEM_nat
           return _ -> match LEM_nat with inl _ => _ | _ => _ end
     with
     | inl v => fun _ => v
     | inr bad => fun f => f bad
     end (fun bad => let _ := bad 0 in bad 1).

Definition term2
  := match LEM nat as LEM_nat
           return _ -> match LEM_nat with inl _ => _ | _ => _ end
     with
     | inl v => fun _ => v
     | inr bad => fun f => f bad
     end (fun bad => bad 1).

Lemma pf : term1 = term2. Proof. reflexivity. Qed.

However, if I understand your interpretation correctly, then term1 should
reduce to 0 but term2 should reduce to 1.

Another issue is that typechecking requires normalization under binders,
but normalization under binders seems to invalidate the semantics you
suggest, because the proof of A might not be well-scoped in the context
in which you asked for it. (Trivially, it seems like eta-expanding the
fake proof of (not A) results in invoking the continuation if you try to
fully normalize a term.)

What am I missing/misunderstanding?

Best,
Jason

On Wed, May 19, 2021 at 11:27 AM Gabriel Scherer wrote:
> [...]
From gabriel.scherer at gmail.com Thu May 20 03:17:27 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Thu, 20 May 2021 09:17:27 +0200
Subject: [TYPES] What's a program? (Seriously)
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>

> I don't understand how this semantics works;

Some references:

- on Parigot's lambda-mu calculus:
  "An algorithmic interpretation of classical natural deduction",
  Michel Parigot, 1992
  https://www.cs.ru.nl/~freek/courses/tt-2011/papers/parigot.pdf

- on the nicer "classical abstract machines":
  "The duality of computation",
  Hugo Herbelin and Pierre-Louis Curien, 2000
  https://hal.inria.fr/inria-00156377/document

  "Classical Fω, orthogonality and symmetric candidates",
  Stéphane Lengrand and Alexandre Miquel, 2008
  http://www.csl.sri.com/users/sgl/Work/Reports/APAL2007.pdf

  "The duality of computation under focus",
  Pierre-Louis Curien and Guillaume Munch-Maccagnoni, 2010
  https://arxiv.org/pdf/1006.2283

> However, if I understand your interpretation correctly, then term1
> should reduce to 0 but term2 should reduce to 1.

Yes, terms in classical systems can replace the current continuation
(and erase it or duplicate it), so (let _ := bad 0 in bad 1) is not
necessarily equivalent to (bad 1): if we assume call-by-value reduction,
at least at type Nat, you will get 0 here instead of 1, as you predict.
Computing with excluded middle is "effectful".

> Another issue is that typechecking requires normalization under
> binders, but normalization under binders seems to invalidate the
> semantics you suggest, because the proof of A might not be well-scoped
> in the context in which you asked for it.

If we wrote it all down precisely, I think that there would be free
variables (term variables or co-term/continuation blocks) in various
places that "block" some of the computations, so no scope escape. We can
perfectly well make sense of reduction under binders in classical
calculi, although sometimes the results are surprising because control
effects are surprising.

> (Trivially, it seems like eta-expanding the fake proof of (not A)
> results in invoking the continuation if you try to fully normalize a
> term.)

Yes, eta-expansion is not necessarily valid in an effectful system. In
general eta-expansion is only possible for terms that your reduction
strategy will not evaluate right away.
(If you choose the reduction strategy carefully you can still have nice
eta rules for base connectives.)

I should also mention that classical calculi are hard to mix with
dependent types (just as most systems that are more effectful or more
resourceful than usual intuitionistic logic are): it is an open problem,
with active research, how to provide nice dependently-typed systems that
compute with LEM (in the syntax of those classical abstract machines, or
just with control operators), exploring various tradeoffs. See for
example:

  "A Classical Sequent Calculus with Dependent Types",
  Étienne Miquey, 2019
  https://hal.inria.fr/hal-01519929v3/document

To summarize: in general, giving computational interpretations to new
axioms has a cost: we move to a new programming model with, typically,
weakened program equivalences, and some powerful extensions of our
logics may be incompatible, or less compatible, with it. In the
particular case of excluded middle, I have the impression that we now
have well-trodden computational interpretations (including just the
double-negation translation).
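To make the "time travel" reading concrete, here is a minimal sketch
using Haskell's standard Cont monad -- just one possible rendering of
the semantics described above, not a definitive implementation; the
names lem, term1, and term2 simply mirror the thread's Coq examples:

```haskell
import Control.Monad.Cont (Cont, runCont, callCC)

-- Excluded middle via call/cc: we first hand back a "fake" refutation
-- (Right); if that refutation is ever applied to an actual witness,
-- control jumps back to the original ask and answers Left instead.
lem :: Cont r (Either a (a -> Cont r c))
lem = callCC $ \k -> return (Right (\a -> k (Left a)))

-- Jason's term1/term2, replayed in this semantics:
term1, term2 :: Int
term1 = flip runCont id $ do
  e <- lem
  case e of
    Left v    -> return v
    Right bad -> bad 0 >> bad 1   -- "let _ := bad 0 in bad 1"

term2 = flip runCont id $ do
  e <- lem
  case e of
    Left v    -> return v
    Right bad -> bad 1
```

Here term1 evaluates to 0 and term2 to 1: invoking bad discards the
pending computation (the `>> bad 1` never runs) and restarts the case
analysis with an actual witness -- which is exactly the point that
computing with excluded middle is effectful, and that the two terms are
no longer equal.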
On Thu, May 20, 2021 at 3:53 AM Jason Gross wrote:
> [...]

From nicolai.kraus at gmail.com Thu May 20 04:00:11 2021
From: nicolai.kraus at gmail.com (Nicolai Kraus)
Date: Thu, 20 May 2021 09:00:11 +0100
Subject: [TYPES] What's a program? (Seriously)
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
On Wed, May 19, 2021 at 4:06 PM Jason Gross wrote:

> Non-truncated Excluded Middle (that is, the version that returns an
> informative disjunction) cannot have a computational interpretation in
> Turing machines, for it would allow you to decide the halting problem.

For what it's worth, I don't think you need to add "non-truncated" in
the sentence above. From "truncated" excluded middle, you equally
directly either get a number n such that your program halts after n
steps, or you get that it does not halt at all.

Interesting discussion!
-- Nicolai

> [...]
From neelakantan.krishnaswami at gmail.com Thu May 20 05:31:56 2021
From: neelakantan.krishnaswami at gmail.com (Neel Krishnaswami)
Date: Thu, 20 May 2021 10:31:56 +0100
Subject: [TYPES] What's a program? (Seriously)
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID: <0c4832da-1878-70cb-cdbf-f05c27e727d2@gmail.com>

Hi Jason,

Tadeusz already explained what's going on, but let me unpack his
remarks a bit.

The basic idea is that the distinctive feature of intuitionistic logic
is the existence property, and that classical and intuitionistic proofs
coincide on the ⊤, ∧, → fragment. Classically, however, the disjunction
A ∨ B is equivalent to the negation of a conjunction, ¬(¬A ∧ ¬B). So if
we had a constructive interpretation of logical negation, then we could
translate classical formulas into intuitionistic logic using the de
Morgan dual, thereby avoiding the need to produce a concrete witness.

It turns out that negation is really easy to define -- for literally
any proposition p, it works to define ¬A as A → p. (This fact is
Friedman's A-translation.) Explicitly, here it is:

  ⟦ ⊤ ⟧     = 1
  ⟦ A ∧ B ⟧ = ⟦ A ⟧ × ⟦ B ⟧
  ⟦ ¬A ⟧    = ⟦ A ⟧ → p

With this interpretation, we can *define* disjunction via the de Morgan
dual:

  ⟦ A ∨ B ⟧ = ⟦ ¬(¬A ∧ ¬B) ⟧
            = ((⟦ A ⟧ → p) × (⟦ B ⟧ → p)) → p

In particular, for the law of the excluded middle:

  ⟦ A ∨ ¬A ⟧ = ⟦ ¬(¬A ∧ ¬¬A) ⟧
             = ((⟦ A ⟧ → p) × ((⟦ A ⟧ → p) → p)) → p

But this type is trivial to inhabit:

  lem : ((⟦ A ⟧ → p) × ((⟦ A ⟧ → p) → p)) → p
  lem (ka : ⟦ A ⟧ → p , kka : (⟦ A ⟧ → p) → p) = kka ka

And that's it! As you can see from the types, this is all basically a
continuation-passing style transformation.
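The same interpretation can be written down directly in Haskell -- a
self-contained sketch; the synonyms Not and Or and the toy example are
illustrative names, not part of the original message:

```haskell
-- The A-translation clauses, as Haskell type synonyms:
type Not p a  = a -> p                   -- ⟦not A⟧ = ⟦A⟧ -> p
type Or p a b = (Not p a, Not p b) -> p  -- ⟦A or B⟧ = ⟦not(not A and not B)⟧

-- The inhabitant of ⟦A or not A⟧ from above, verbatim:
lem :: Or p a (Not p a)
lem (ka, kka) = kka ka

-- Toy use: "case analysis" on A-or-not-A is in CPS, so both branches
-- are passed as continuations. The not-A branch receives the "fake"
-- refutation, which just forwards any witness it is given back into
-- the A branch:
example :: String
example = lem ( \n -> "A holds, witness " ++ show (n :: Int)
              , \ka -> "not-A branch; refuting with 3 yields: " ++ ka 3 )
-- example == "not-A branch; refuting with 3 yields: A holds, witness 3"
```

Feeding a witness to the refutation lands you back in the A branch: the
"time travel" from the earlier messages, now visible as ordinary
function application.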
Something which is perhaps less immediately apparent is that specific
choices of the answer type p give you the machinery you need to
implement other axioms. Krivine's program of classical realisability is
basically to figure out what you need to define realisers for all the
axioms of ZFC.

Here's a quote from the introduction to Jean-Louis Krivine's paper,
*Realizability Algebras: A Program to Well-Order ℝ*, which I found
particularly inspiring:

    Indeed, when we realize usual axioms of mathematics, we need to
    introduce, one after the other, the very standard tools in system
    programming: for the law of Peirce, these are continuations
    (particularly useful for exceptions); for the axiom of dependent
    choice, these are the clock and the process numbering; for the
    ultrafilter axiom and the well ordering of ℝ, these are no less
    than read and write instructions on a global memory, in other
    words assignment.

Best,
Neel

On 20/05/2021 02:52, Jason Gross wrote:
> [...]
(Or, if you see those classical programs through a >> double-negation translation, they aren't really at type (A \/ B), but >> rather at its double-negation translation, which has weirder normal forms.) >> >> >> >> >> On Wed, May 19, 2021 at 5:07 PM Jason Gross wrote: >> >>> [ The Types Forum, >>> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> Non-truncated Excluded Middle (that is, the version that returns an >>> informative disjunction) cannot have a computational interpretation in >>> Turing machines, for it would allow you to decide the halting problem. >>> More generally, some computational complexity theory is done with >>> reference >>> to oracles for known-undecidable problems. Additionally, I'd be >>> suspicious >>> of a computational interpretation of the consistency of ZFC or PA ---- >>> would having a computational interpretation of these mean having a type >>> theory that believes that there are ground terms of type False in the >>> presence of a contradiction in ZFC? >>> >>> On Wed, May 19, 2021, 07:38 Talia Ringer >>> wrote: >>> >>>> [ The Types Forum, >>> http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>> ] >>>> >>>> Somewhat of a complementary question, and proof to the world that I'm >>> up at >>>> 330 AM still thinking about this: >>>> >>>> Are there interesting or commonly used logical axioms that we know for >>> sure >>>> cannot have computational interpretations? >>>> >>>> On Wed, May 19, 2021, 3:24 AM Neel Krishnaswami < >>>> neelakantan.krishnaswami at gmail.com> wrote: >>>> >>>>> [ The Types Forum, >>>> http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>>> ] >>>>> >>>>> Dear Sandro, >>>>> >>>>> Yes, you're right -- I didn't answer the question, since I was too >>>>> taken by the subject line. :) >>>>> >>>>> Anyway, I do think that HoTT with a non-reducible univalence axiom is >>>>> still a programming language, because we can give a computational >>>>> interpretation to that language: for example, you could follow the >>>>> strategy of Angiuli, Harper and Wilson's POPL 2017 paper, >>>>> *Computational Higher-Dimensional Type Theory*. >>>>> >>>>> Another, simpler example comes from Martin Escardo's example upthread >>>>> of basic Martin-Lo?f type theory with the function extensionality >>>>> axiom. You can give a very simple realizability interpretation to the >>>>> equality type and extensionality axiom, which lets every compiled >>>>> program compute. >>>>> >>>>> What you lose in both of these cases is not the ability to give a >>>>> computational model to the language, but rather the ability to >>>>> identify normal forms and to use an oriented version of the equational >>>>> theory of the language as the evaluation mechanism. >>>>> >>>>> This is not an overly shocking phenomenon: it occurs even in much >>>>> simpler languages than dependent type theories. For example, once you >>>>> add the reference type `ref a` to ML, it is no longer the case that >>>>> the language has normal forms, because the ref type does not have >>>>> introduction and elimination rules with beta- and eta- rules. >>>>> >>>>> Another way of thinking about this is that often, we *aren't sure* >>>>> what the equational theory of our language is or should be. This is >>>>> because we often derive a language by thinking about a particular >>>>> semantic model, and don't have a clear idea of which equations are >>>>> properly part of the theory of the language, and which ones are >>>>> accidental features of the concrete model. 
>>>>> >>>>> For example, in the case of name generation ? i.e., ref unit ? our >>>>> intuitions for which equations hold come from the concrete model of >>>>> nominal sets. But we don't know which of those equations should hold >>>>> in all models of name generation, and which are "coincidental" to >>>>> nominal sets. >>>>> >>>>> Another, more practical, example comes from the theory of state. We >>>>> all have the picture of memory as a big array which is updated by >>>>> assembly instructions a la the state monad. But this model incorrectly >>>>> models the behaviour of memory on modern multicore systems. So a >>>>> proper theory of state for this case should have fewer equations >>>>> than what the folk model of state validates. >>>>> >>>>> >>>>> Best, >>>>> Neel >>>>> >>>>> On 19/05/2021 09:03, Sandro Stucki wrote: >>>>>> Talia: thanks for a thought-provoking question, and thanks everyone >>>> else >>>>>> for all the interesting answers so far! >>>>>> >>>>>> Neel: I love your explanation and all your examples! >>>>>> >>>>>> But you didn't really answer Talia's question, did you? I'd be >>> curious >>>>>> to know where and how HoTT without a computation rule for univalence >>>>>> would fit into your classification. It would certainly be a >>> language, >>>>>> and by your definition it has models (e.g. cubical ones) which, if I >>>>>> understand correctly, can be turned into an abstract machine >>> (either a >>>>>> rewriting system per your point 4 or whatever the Agda backends >>> compile >>>>>> to). So according to your definition of programming language (point >>> 3), >>>>>> this version of HoTT would be a programming language simply because >>>>>> there is, in principle, an abstract machine model for it? Is that >>> what >>>>>> you had in mind? >>>>>> >>>>>> Cheers >>>>>> /Sandro >>>>>> >>>>>> >>>>>> On Wed, May 19, 2021 at 6:21 AM Neel Krishnaswami >>>>>> >>>>> > wrote: >>>>>> >>>>>> [ The Types Forum, >>>>>> http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>>>> ] >>>>>> >>>>>> Dear Talia, >>>>>> >>>>>> Here's an imprecise but useful way of organising these ideas >>> that I >>>>>> found helpful. >>>>>> >>>>>> 1. A *language* is a (generalised) algebraic theory. Basically, >>>> think >>>>>> of a theory as a set of generators and relations in the >>> style >>>> of >>>>>> abstract algebra. >>>>>> >>>>>> You need to beef this up to handle variables (e.g., see the >>>>> work of >>>>>> Fiore and Hamana) but (a) I promised to be imprecise, and >>> (b) >>>>> the >>>>>> core intuition that a language is a set of generators for >>>> terms, >>>>>> plus a set of equations these terms satisfy is already >>> totally >>>>>> visible in the basic case. >>>>>> >>>>>> For example: >>>>>> >>>>>> a) the simply-typed lambda calculus >>>>>> b) regular expressions >>>>>> c) relational algebra >>>>>> >>>>>> 2. A *model* of a a language is literally just any old >>> mathematical >>>>>> structure which supports the generators of the language and >>>>>> respects the equations. >>>>>> >>>>>> For example: >>>>>> >>>>>> a) you can model the typed lambda calculus using sets >>>>>> for types and mathematical functions for terms, >>>>>> b) you can model regular expressions as denoting particular >>>>>> languages (ie, sets of strings) >>>>>> c) you can model relational algebra expressions as sets of >>>>>> tuples >>>>>> >>>>>> 2. 
A *model of computation* or *machine model* is basically a >>>>>> description of an abstract machine that we think can be >>>>> implemented >>>>>> with physical hardware, at least in principle. So these are >>>>> things >>>>>> like finite state machines, Turing machines, Petri nets, >>>>> pushdown >>>>>> automata, register machines, circuits, and so on. >>> Basically, >>>>> think >>>>>> of models of computation as the things you study in a >>>>> computability >>>>>> class. >>>>>> >>>>>> The Church-Turing thesis bounds which abstract machines we >>>>> think it >>>>>> is possible to physically implement. >>>>>> >>>>>> 3. A language is a *programming language* when you can give at >>>> least >>>>>> one model of the language using some machine model. >>>>>> >>>>>> For example: >>>>>> >>>>>> a) the types of the lambda calculus can be viewed as >>> partial >>>>>> equivalence relations over Go?del codes for some >>> universal >>>>> turing >>>>>> machine, and the terms of a type can be assigned to >>>>> equivalence >>>>>> classes of the corresponding PER. >>>>>> >>>>>> b) Regular expressions can be interpreted into finite state >>>>>> machines >>>>>> quotiented by bisimulation. >>>>>> >>>>>> c) A set in relational algebra can be realised as >>> equivalence >>>>>> classes of B-trees, and relational algebra expressions >>> as >>>>> nested >>>>>> for-loops over them. >>>>>> >>>>>> Note that in all three cases we have to quotient the machine >>>>> model >>>>>> by a suitable equivalence relation to preserve the >>> equations of >>>>> the >>>>>> language's theory. >>>>>> >>>>>> This quotient is *very* important, and is the source of a >>> lot >>>> of >>>>>> confusion. It hides the equivalences the language theory >>> wants >>>> to >>>>>> deny, but that is not always what the programmer wants ? >>> e.g., >>>> is >>>>>> merge sort equal to bubble sort? As mathematical functions, >>>> they >>>>>> surely are, but if you consider them as operations running >>> on >>>> an >>>>>> actual computer, then we will have strong preferences! >>>>>> >>>>>> 4. A common source of confusion arises from the fact that if you >>>> have >>>>>> a nice type-theoretic language (like the STLC), then: >>>>>> >>>>>> a) the term model of this theory will be the initial model >>> in >>>>> the >>>>>> category of models, and >>>>>> b) you can turn the terms into a machine >>>>>> model by orienting some of the equations the >>> lambda-theory >>>>>> satisfies and using them as rewrites. >>>>>> >>>>>> As a result we abuse language to talk about the theory of >>> the >>>>>> simply-typed calculus as "being" a programming language. >>> This >>>> is >>>>>> also where operational semantics comes from, at least for >>>> purely >>>>>> functional languages. >>>>>> >>>>>> Best, >>>>>> Neel >>>>>> >>>>>> On 18/05/2021 20:42, Talia Ringer wrote: >>>>>> > [ The Types Forum, >>>>>> http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>>>> ] >>>>>> > >>>>>> > Hi friends, >>>>>> > >>>>>> > I have a strange discussion I'd like to start. Recently I was >>>>>> debating with >>>>>> > someone whether Curry-Howard extends to arbitrary logical >>>>>> systems---whether >>>>>> > all proofs are programs in some sense. I argued yes, he >>> argued >>>>>> no. But >>>>>> > after a while of arguing, we realized that we had different >>>>>> axioms if you >>>>>> > will modeling what a "program" is. Is any term that can be >>> typed >>>>>> a program? >>>>>> > I assumed yes, he assumed no. 
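[Editorial aside, not part of any message in this thread: Gabriel's
"time travel" reading of excluded middle can be imitated in plain
OCaml by passing the consumer of the disjunction explicitly (a small
CPS discipline for the answer type) and using a local exception as
the "fake" proof of (not A). All names here (`empty`, `either`,
`lem`, `Found`) are ours; exceptions capture only the one-shot,
dynamically scoped fragment of the control operators discussed below,
a caveat Guillaume Munch-Maccagnoni's later message makes precise.]

  type empty = |
  type ('a, 'b) either = Left of 'a | Right of 'b

  (* Answer the "ask" k with a fake proof of (not a); if that fake
     proof is ever applied to a real proof of a, rewind and answer k
     again, this time with (Left a). *)
  let lem : type a r. ((a, a -> empty) either -> r) -> r = fun k ->
    let exception Found of a in
    match k (Right (fun a -> raise (Found a))) with
    | r -> r                           (* the fake (not a) was never used *)
    | exception Found a -> k (Left a)  (* "travel back" to the ask *)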
From streicher at mathematik.tu-darmstadt.de Thu May 20 07:15:46 2021
From: streicher at mathematik.tu-darmstadt.de (Thomas Streicher)
Date: Thu, 20 May 2021 13:15:46 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID: <20210520111546.GD17838@mathematik.tu-darmstadt.de>

One can extract witnesses only for Pi^0_2 sentences. Otherwise
classical proofs of existential statements don't give you witnesses.

That's part of our conditio humana...

Thomas

On Thu, May 20, 2021 at 09:17:27AM +0200, Gabriel Scherer wrote:
> > I don't understand how this semantics works;
>
> Some references:
>
> - on Parigot's lambda-mu calculus:
>   "An algorithmic interpretation of classical natural deduction",
>   Michel Parigot, 1992
>   https://www.cs.ru.nl/~freek/courses/tt-2011/papers/parigot.pdf
>
> - on the nicer "classical abstract machines":
>   "The duality of computation", Hugo Herbelin and Pierre-Louis
>   Curien, 2000
>   https://hal.inria.fr/inria-00156377/document
>
>   "Classical F-omega, orthogonality and symmetric candidates",
>   Stéphane Lengrand and Alexandre Miquel, 2008
>   http://www.csl.sri.com/users/sgl/Work/Reports/APAL2007.pdf
>
>   "The duality of computation under focus", Pierre-Louis Curien and
>   Guillaume Munch-Maccagnoni, 2010
>   https://arxiv.org/pdf/1006.2283
>
> > However, if I understand your interpretation correctly, then term1
> > should reduce to 0 but term2 should reduce to 1.
>
> Yes, terms in classical systems can replace the current continuation
> (and erase it or duplicate it), so (let _ := term 0 in term 1) is
> not necessarily equivalent to (term 1) (if we assume call-by-value
> reduction, at least at type Nat, you will get 0 here instead of 1 as
> you predict). Computing with excluded middle is "effectful".
>
> > Another issue is that typechecking requires normalization under
> > binders, but normalization under binders seems to invalidate the
> > semantics you suggest, because the proof of A might not be
> > well-scoped in the context in which you asked for it.
>
> If we wrote it all down precisely, I think that there would be free
> variables (term variables or co-term/continuation variables) in
> various places that "block" some of the computations, so no scope
> escape. We can perfectly make sense of reduction under binders in
> classical calculi, although sometimes the results are surprising
> because control effects are surprising.
>
> > (Trivially, it seems like eta-expanding the fake proof of (not A)
> > results in invoking the continuation if you try to fully normalize
> > a term.)
>
> Yes, eta-expansion is not necessarily valid in an effectful system.
> In general eta-expansion is only possible for terms that your
> reduction strategy will not evaluate right away. (If you choose the
> reduction strategy carefully you can still have nice eta rules for
> base connectives.)
>
> I should also mention that classical calculi are hard to mix with
> dependent types (just as most systems that are more effectful or
> more resourceful than usual intuitionistic logic): it's an open
> problem with active research how to provide nice dependently-typed
> systems that compute with LEM (in the syntax of those classical
> abstract machines, or just with control operators), exploring
> various tradeoffs. See for example
>
> "A Classical Sequent Calculus with Dependent Types", Étienne Miquey,
> 2019
> https://hal.inria.fr/hal-01519929v3/document
>
> To summarize: in general giving computational interpretations of
> newer axioms has a cost, we move to a new programming model with,
> typically, weakened program equivalences; and some powerful
> extensions of our logics may be incompatible with it or
> less-compatible. In the particular case of excluded middle, I have
> the impression that we now have well-trodden computational
> interpretations (including just the double-negation translation).
>
> On Thu, May 20, 2021 at 3:53 AM Jason Gross wrote:
>
> > [...]
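[Continuing the editorial aside from above: Jason's term1/term2 can
be replayed against the exception-based `lem`. Under OCaml's
call-by-value evaluation they compute 0 and 1 respectively, matching
Gabriel's prediction. A hypothetical toplevel session:]

  # let term1 = lem (function
      | Left (v : int) -> v
      | Right bad -> (match (let _ = bad 0 in bad 1) with _ -> .));;
  val term1 : int = 0
  # let term2 = lem (function
      | Left (v : int) -> v
      | Right bad -> (match bad 1 with _ -> .));;
  val term2 : int = 1

[The two terms differ only in whether `bad` is first applied to 0;
replacing the current continuation is an effect, so the definitional
equality behind Jason's `reflexivity` proof no longer tracks
evaluation.]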
From tarmo at cs.ioc.ee Thu May 20 09:10:53 2021
From: tarmo at cs.ioc.ee (Tarmo Uustalu)
Date: Thu, 20 May 2021 13:10:53 +0000
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <0c4832da-1878-70cb-cdbf-f05c27e727d2@gmail.com>
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
 <0c4832da-1878-70cb-cdbf-f05c27e727d2@gmail.com>
Message-ID: <20210520131053.40ed6159@cs.ioc.ee>

Hi!

On Thu, 20 May 2021 10:31:56 +0100 Neel Krishnaswami wrote:

> Tadeusz already explained what's going on, but let me unpack his
> remarks a bit.
>
> The basic idea is that the distinctive feature of intuitionistic
> logic is the existence property. However, classical and
> intuitionistic proofs coincide on the →, ∧, ∀ fragment.

Not quite. Think of Peirce's formula, which is a purely implicative
classical tautology not valid intuitionistically. Any translation
from classical logic to intuitionistic logic for reducing
derivability in the former to derivability in the latter must take
this into account in some way.

Tarmo U

> On 20/05/2021 02:52, Jason Gross wrote:
> > [...]
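[Editorial aside: Tarmo's observation -- Peirce's formula is purely
implicative yet has no intuitionistic proof -- can be made concrete
with an effect. In plain OCaml, exceptions (only the one-shot,
upward-moving fragment of call/cc) already suffice to inhabit the
Peirce type. The names `peirce` and `Return` are ours:]

  let peirce : type a b. ((a -> b) -> a) -> a = fun f ->
    let exception Return of a in
    try f (fun x -> raise (Return x)) with Return x -> x

[No such closed term exists in the pure lambda calculus; the
inhabitant relies essentially on the control effect, which is exactly
the theme of the next message.]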
From matthias at ccs.neu.edu Thu May 20 09:50:55 2021
From: matthias at ccs.neu.edu (matthias at ccs.neu.edu)
Date: Thu, 20 May 2021 09:50:55 -0400
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID:

For the relationship to the OP, skip below the line. No need to read
the whole email.

> On May 19, 2021, at 9:52 PM, Jason Gross wrote:
>
> I don't understand how this semantics works; it seems to me that it
> invalidates the normal reduction rules.

The ideas that Gabriel mentioned date back to Tim Griffin (POPL '89)
and Chet Murthy (Cornell dissertation, '90). They based their work on
practical operators with reduction rules that are a tad unusual if
you're stuck in a "downwards is the way to go" mentality (thinking CS
trees here). So here is an illustrative example [*]:

  f (call/cc a) -> call/cc (\k.k (f (a (\x.k (f x)))))

where f is a value (lambda), a arbitrary.

The call/cc operator (and equivalents) exists in a bunch of
programming languages. Spiritually, it originated with Scheme, though
Steele and Sussman introduced `catch`, a variant of Reynolds' `escape`
operator (from around the same time). The two are equivalent, and
`catch` is syntactic sugar for `call/cc`:

  escape k e == call/cc (\k.e)

You can see that the type of call/cc is Peirce's law.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

But let's pop out a level to the original question. I'll start with
how Griffin found this idea. In '88, I showed him `call/cc` and its
type. He exclaimed: "But this can't compute. It's Peirce's law." Then
I explained the reduction rules and, as a Constable student, he had
the predictable reaction. So he worked it all out and submitted to
POPL, and the reaction of the French types-and-logic researchers was
also predictable. Fortunately, Murthy picked it up and wrote a whole
dissertation. [**]

I consider this little anecdote important. It tells us that we should
NOT be glued to the idea that what we know about the seemingly tight
connection between logic and computation is all there is to
programming. Let people explore programming "in the wild" and perhaps
we will find a few more such neat surprises and perhaps we will need
to expand our understanding of logic.

-- Matthias

;; - - -

[**] To their credit, they also worked out the relationship to the
translations Thomas sketched out in an upstream post. To find this
literature, Tim Griffin had to get to know the Rice football stadium.

[*] The reduction rules go back to my dissertation, though as usual, a
month after turning it in, I found vastly simpler ones. Due to the
printer-queue backlog at TCS, the simple ones were published 4 years
later. See Felleisen and Hieb TCS 91 for details. -- Yes, from then
on, I presented only the "evaluation context semantics" or "reduction
semantics" or "small step semantics" (not my name) because it was
easier to understand and use for the typical POPL attendee.

From oleg at okmij.org Fri May 21 10:01:10 2021
From: oleg at okmij.org (Oleg)
Date: Fri, 21 May 2021 23:01:10 +0900
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To:
Message-ID: <20210521140110.GA2806@Melchior.localnet>

Classical and intuitionistic computational interpretations use very
different CH correspondences; in particular, the meaning of ->
differs. Lots of confusion could be avoided if we used different
symbols for logical connectives, specifically ->. Or wrote the
modality explicitly (as Thielecke does). The type of call/cc does
look like Peirce's law -- but the meaning of the arrows differs, and
so do the CH correspondence, proof (ir)relevance and other properties.

Recall Peirce's law:

  peirce : ((A->B)->A) -> A

When we talk about logic and the standard CH correspondence, -> in
types is taken as logical implication: as Martin Escardo noted
already, A -> B means that if you give me an instance (inhabitant) of
A, I will give you an instance of B -- without fail. (Sorry for
stating the obvious: we shall soon see how it all changes in
computational interpretations of classical logic.)

Such an interpretation means that a term of the type A -> \bot
witnesses that A is uninhabited. There are two points to note about
such a term:

 -- although it can be thought of as a program, such a program can
    never be run because there are no suitable inputs;
 -- if there are two terms of the type A -> \bot, they are equivalent
    as programs: because neither can actually be run, we have no
    means to observe their distinction.

Now let's recall call/cc and see how all of the above changes,
although it looks superficially the same. The type of call/cc is

  call/cc : ((A->B)->A) -> A

It looks identical to the type of peirce, as Matthias said. But only
on the surface. A->B in call/cc does *not* mean a function that takes
an instance of A and returns an instance of B. In call/cc, this
subterm is actually an undelimited continuation, and when called, it
does not return anything, because it never returns (think of
exceptions). So the meaning of the arrow is very different from what
we saw in the original and familiar CH.
As I said, it would save a lot of confusion if we used a different
arrow sign, or wrote the modality explicitly, as shown in

  Hayo Thielecke: Control Effects as a Modality. JFP, 2009.

To see the distinction clearly, let's look at double-negation
elimination -- which can be stated without call/cc and time travel,
or continuations or other exotica. Familiar exceptions suffice. Here
it is in OCaml.

  type empty = | (* empty type *)

  let dne : type a. ((a -> empty) -> empty) -> a = fun c ->
    let exception Exc of a in
    match c (fun a -> raise (Exc a)) with
    | exception Exc a -> a
    | _ -> . (* impossible *)
  ;;

We see several terms of the type A -> \bot. Now, the existence of
such terms no longer implies that A is uninhabited! (Now, type
inhabitation is no longer directly connected to the truth of
propositions.) Since A may well be inhabited, we can run such terms.
Different terms of that type can have observable differences (e.g.,
raise different exceptions) and are no longer equivalent.

If we use an appropriate semantics -- the bubble-up semantics
proposed by Felleisen and Parigot -- we can very well normalize under
binders. For example,

  (fun a -> raise (Exc a); raise Something_else)

Here, raise (Exc a) creates a `bubble' containing Exc a. Following
the rule

  (bubble v); e -> (bubble v)

that bubble eradicates the second raise. There is no rule for a
bubble to cross an abstraction boundary, so we stop and get the
normal form.

From Guillaume.Munch-Maccagnoni at inria.fr Fri May 21 11:19:59 2021
From: Guillaume.Munch-Maccagnoni at inria.fr (Guillaume Munch-Maccagnoni)
Date: Fri, 21 May 2021 17:19:59 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <20210521140110.GA2806@Melchior.localnet>
References: <20210521140110.GA2806@Melchior.localnet>
Message-ID:

On 21/05/2021 at 16:01, Oleg wrote:

> To see the distinction clearly, let's look at double-negation
> elimination -- which can be stated without call/cc and time travel,
> or continuations or other exotica. Familiar exceptions suffice.
> Here it is in OCaml.
>
>   type empty = | (* empty type *)
>
>   let dne : type a. ((a -> empty) -> empty) -> a = fun c ->
>     let exception Exc of a in
>     match c (fun a -> raise (Exc a)) with
>     | exception Exc a -> a
>     | _ -> . (* impossible *)
>   ;;
>
> [...] Since A may well be inhabited, we can run such terms.
> Different terms of that type can have observable differences (e.g.,
> raise different exceptions) and are no longer equivalent.

  # let c (f : (int -> int) -> empty) =
      let g _x = match f (fun n -> n) with _ -> . in
      f g;;
  val c : ((int -> int) -> empty) -> empty = <fun>
  # let h = dne c;;
  val h : int -> int = <fun>
  # h 3;;
  Exception: Exc _.

This interpretation misses what makes continuation semantics
constructive, and should be seen as mere analogy. This is an example
of mismatch between proof and program, different from the one that
inhabits `empty` with a non-terminating function.

--
Guillaume Munch-Maccagnoni
Researcher at Inria Bretagne Atlantique
Team Gallinette, Nantes

From Guillaume.Munch-Maccagnoni at inria.fr Sat May 22 06:28:14 2021
From: Guillaume.Munch-Maccagnoni at inria.fr (Guillaume Munch-Maccagnoni)
Date: Sat, 22 May 2021 12:28:14 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <20210519155913.GD15921@mathematik.tu-darmstadt.de>
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
 <20210519155913.GD15921@mathematik.tu-darmstadt.de>
Message-ID:

On 19/05/2021 at 17:59, Thomas Streicher wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> On Wed, May 19, 2021 at 05:26:52PM +0200, Gabriel Scherer wrote:
>> [...]
>
> All these computational interpretations of classical logic are
> actually constructive interpretations of their negative
> translations. This is maybe not so transparent because macros are
> used. But if you consider the cps translation of these macros you
> again get the negative translations.
>
> I know the French school (Krivine in particular) is very fond of
> these macros. But these macros don't add to computational power.
> More generally, all effects can be translated to a purely functional
> kernel.
>
> Whether one likes these macros or not presumably depends on whether
> one is more a CS person or a mathematician. But these preferences
> don't change strength.
>
> Thomas

I believe this applies CS terminology incorrectly. A whole-program
translation can increase computational expressiveness. This is very
useful, as this lets CS people give us compilers from high-level
languages to low-level languages. "Macro" in general indeed refers to
something which does not add computational power. But if call/cc was
macro-expressible in the lambda calculus, then I expect CS people to
be the first to notice.

A negative translation is a whole-program translation. The examples
were really about the incompatibility of a notion of constructive LEM
with other, unspecified, hypotheses. A whole-program translation lets
you ignore features from the target language, whereas a macro does
not. Alternatively, if your notion of computational power does not
account for computational expressiveness, I suspect there are some
kind of emergent phenomena in your system your notion might miss.

There have been attempts at making the idea of macro-expressiveness
and comparison of expressive power rigorous, to begin with Felleisen
[1], which is a transposition at the level of terms of Kleene's
notion of eliminability at the level of provability.
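[Editorial aside on the macro vs whole-program distinction: call/cc
is not macro-expressible in the pure lambda calculus, yet it is
definable once the whole program has been CPS-translated so that
every term receives its continuation explicitly. A minimal OCaml
sketch, with all names ours:]

  type ('a, 'r) cps = ('a -> 'r) -> 'r

  (* return injects a value; callcc hands f the reified continuation
     k. Invoking that reified continuation discards the continuation
     of its own call site -- the "jump". *)
  let return (x : 'a) : ('a, 'r) cps = fun k -> k x

  let callcc (f : ('a -> ('b, 'r) cps) -> ('a, 'r) cps) : ('a, 'r) cps =
    fun k -> f (fun x _ -> k x) k

[The definition only makes sense for programs rewritten into `cps`
form in their entirety, which is why its existence does not
contradict the absence of a local (macro) encoding in direct style.]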
On 20/05/2021 at 13:15, Thomas Streicher wrote:
> One can extract witnesses only for Pi^0_2 sentences. Otherwise
> classical proofs of existential statements don't give you witnesses.
>
> That's part of our conditio humana...

(There are sketches in this thread of the conservativity result alluded to, which I will not continue here; instead one can refer to a textbook, for instance [2, pp. 198-201].)

The interesting part is of course not going to be that different models of computation agree on the notion of computable function from N to N (a Pi^0_2 sentence), and that they can disagree beyond. This is a basic expectation. What is interesting is what it says beyond.

I find one step missing in Gabriel's and other explanations. The "fake" or "cheating" proof (in Gabriel's words) is meant to be used at some point inside a bigger program of type N, Bool, or any other type one expects to enjoy canonicity, in order to get a final value. Which one obtains in the end, because canonicity holds for purely positive types (Sigma^0_1 formulae). When one extracts a witness for a Pi^0_2 formula, there is no restriction on the shape of the proof: it works as well for those obtained by composing presumably-non-constructive ones. This relies on the "fake" or "cheating" proofs always providing enough information to move the argument forward. Which they do.

Then, I believe that such proofs are not cheating but constructive. Is it possible to say that one argues less in good faith than the other, between the classical player that backtracks on their position, and the intuitionistic opponent who demands amounts of evidence up-front unnecessary to the computation of the end result? And so I believe they are certainly all programs of their respective types, for some notion of computation, regardless of whether this notion can also be explained in terms of a negative translation of this type plus a way of running the result of the transformation of a whole program (cf. Friedman's trick, or Prop 10.11 in the reference).

- - - - - - - - - - - - - - - - - -

This situation, it turns out, is very similar to the one proposed in Talia's message: a language in which some terms relying on axioms are deemed not computing or "mostly computing" (when seen through an intuitionistic lens), but in which some other terms still "fully compute". But we have additional guarantees that some terms will always "fully compute" depending on their type, rather than their shape.

[1] Felleisen, M., "On the expressive power of programming languages".
[2] Krivine, J.-L., "Lambda-calculus, types and models".

-- 
Guillaume Munch-Maccagnoni
Researcher at Inria Bretagne Atlantique
Team Gallinette, Nantes

From streicher at mathematik.tu-darmstadt.de Sat May 22 12:43:55 2021
From: streicher at mathematik.tu-darmstadt.de (streicher at mathematik.tu-darmstadt.de)
Date: Sat, 22 May 2021 18:43:55 +0200
Subject: [TYPES] What's a program? (Seriously)
Message-ID: <2145f35657aceb1f091b5297bcf4ad3f.squirrel@webmail.mathematik.tu-darmstadt.de>

Indeed, "macro" is a bad wording for translational semantics. Most effects can be explained by such a translation, including the continuations which are discussed here. But not all of them: e.g. various kinds of nondeterminism or probability. Also, the more refined model constructions of Krivine cannot be explained this way, e.g. when he introduces a kind of quote for realizing dependent choice.
But one can avoid this using bar recursion, which is nothing but an instance of general recursion in the target language. This, however, does not apply to his most recent work covering full AC, where he introduces extensions of the lambda calculus which cannot be explained by translation to pure functional programming.

As long as one does not reason about programs extracted from classical proofs, it is no problem if they are cluttered with such fancy constructs. But if one does, it becomes a nightmare, since one does not know how to do this. That is a problem already for continuations, though there exist axiomatizations for them.

Therefore, the 'unwinders' rather follow the path of negative translation followed by some functional interpretation. But as soon as you go beyond Pi^0_2, the negative translation of a meaningful statement becomes fairly obscure. In any case it is very different from the original statement understood constructively.

A typical example is the so-called Specker phenomenon. When you classically prove the existence of a real number, then often you can extract a computable Cauchy sequence for which there does not exist a computable modulus of convergence, meaning that you can't extract sufficiently good approximations.

Thus, all this extraction business works only for Pi^0_2 sentences. But for those, ZFC is conservative over ZF, for which ordinary control operators work!

Thomas

From andrew.polonsky at gmail.com Sat May 22 16:09:37 2021
From: andrew.polonsky at gmail.com (Andrew Polonsky)
Date: Sat, 22 May 2021 16:09:37 -0400
Subject: [TYPES] What's a program? (Seriously)
Message-ID:

I would argue that every proof is a program which takes as input evidence that the hypotheses of a given statement are true, and produces as output evidence that the conclusion is true. This is the essence of the BHK interpretation, but it is equally valid when considered for classical systems like ZFC, NBG, ZFC + Vopenka cardinal, etc.

The machine running such a program, though, is not a Turing machine or some reduction system, but the human mind.

The proof/program can be presented at different levels of precision: as a sketch/flowchart, informal/pseudocode, a formal script in a particular logic/language, or fully formalized/compiled. Different machines will generally require different levels of precision in order to be able to "run" a given proof (i.e., to understand it).

On the other hand, I don't see how every program can be considered a proof. What does the following program prove, for example?

https://www.ioccc.org/1984/mullender/mullender.c

Perhaps it proves my point. ;) But that is not immanent in the program itself. (And I am certain that some will disagree.)

Best,
Andrew

From Guillaume.Munch-Maccagnoni at inria.fr Sun May 23 13:29:15 2021
From: Guillaume.Munch-Maccagnoni at inria.fr (Guillaume Munch-Maccagnoni)
Date: Sun, 23 May 2021 19:29:15 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <2145f35657aceb1f091b5297bcf4ad3f.squirrel@webmail.mathematik.tu-darmstadt.de>
References: <2145f35657aceb1f091b5297bcf4ad3f.squirrel@webmail.mathematik.tu-darmstadt.de>
Message-ID: <0e0f38e7-277c-ce1a-52d4-ea6b2ebe053e@inria.fr>

On 22/05/2021 at 18:43, streicher at mathematik.tu-darmstadt.de wrote:
> [...]
To give some context to Thomas' message, I would like to share this accessible introduction to what proof unwinding (or proof mining) is about: . It is in 4 parts, hosted on the nice Proof Theory blog that appeared last year.

-- 
Guillaume Munch-Maccagnoni
Researcher at Inria Bretagne Atlantique
Team Gallinette, Nantes

From gabriel.scherer at gmail.com Sun May 30 12:52:01 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Sun, 30 May 2021 18:52:01 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
Message-ID:

Dear list,

Today I found out about JOSS, the Journal of Open Source Software ( https://joss.theoj.org/ ), an interesting journal in itself, which has a stunning "Cost and sustainability model" webpage section: https://joss.theoj.org/about#costs

For more stunning details, go read their more detailed blog post, "Cost models for running an online open journal" : )

http://blog.joss.theoj.org/2019/06/cost-models-for-running-an-online-open-journal

(Meanwhile in ACM land, we are still waiting for basic financial transparency on paper publishing costs -- not that, say, ETAPS or JFP are doing any better. LIPIcs describes how they calculated their publishing costs at https://www.dagstuhl.de/en/publications/lipics/processing-charge/ , and LMCS ( https://lmcs.episciences.org/ ) is now using a publicly-funded OA publishing platform, so they may actually have no costs at all.)

Cheers

From jgrosso at caltech.edu Sun May 30 14:40:32 2021
From: jgrosso at caltech.edu (Grosso, Joshua T. (Joshua))
Date: Sun, 30 May 2021 18:40:32 +0000
Subject: [TYPES] Free variables in TAPL's constraint typing rules
References: <5001cf61-9445-4025-9b36-d1def46b2238@Spark>
Message-ID: <547de3d9-98d3-4d5f-9ace-ca44f3540a07@Spark>

In TAPL's constraint typing rules (Figure 22-1), is it possible for the context to contain free type variables that aren't part of the fresh variables? Because the typing rules are based on the STLC with an infinite number of base types (and T-Var allows anything in Γ to be introduced into t or T), a naïve reading would allow Γ, t, or T to contain type variables not mentioned in X. However, this implies to me that pathological typing derivations exist where e.g. I can apply CT-App to two subderivations for which X₁ contains type variables in Γ₂. Intuitively, this seems to defeat the purpose of the fresh-variable sets. Does TAPL implicitly assume that Γ is "closed" with respect to X, maybe?

Thanks,
Joshua Grosso
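To make the freshness bookkeeping of Figure 22-1 concrete, here is a minimal constraint-generation sketch (an editorial illustration, not code from TAPL; all OCaml names are invented), in which fresh variables are drawn from a single global supply:

    (* Types and terms for the annotated simply typed lambda-calculus. *)
    type ty = TVar of string | TArrow of ty * ty

    type term =
      | Var of string
      | Abs of string * ty * term
      | App of term * term

    (* The counter plays the role of the fresh-variable set X: every
       variable it produces is new, hence disjoint from the fresh sets of
       all other subderivations and from any type variables that the
       initial context happens to mention. *)
    let counter = ref 0
    let fresh () = incr counter; TVar (Printf.sprintf "?X%d" !counter)

    (* recon gamma t = (ty, constraints): t has type ty under gamma
       whenever the returned equations are solvable.
       (List.assoc raises Not_found on unbound term variables.) *)
    let rec recon gamma t =
      match t with
      | Var x -> (List.assoc x gamma, [])                    (* CT-Var *)
      | Abs (x, ty1, body) ->                                (* CT-Abs *)
          let ty2, c = recon ((x, ty1) :: gamma) body in
          (TArrow (ty1, ty2), c)
      | App (t1, t2) ->                                      (* CT-App *)
          let ty1, c1 = recon gamma t1 in
          let ty2, c2 = recon gamma t2 in
          (* CT-App's side conditions (the two fresh sets disjoint from
             each other and from the other premise's types) hold by
             construction, because both recursive calls draw from the
             same supply. *)
          let r = fresh () in
          (r, (ty1, TArrow (ty2, r)) :: c1 @ c2)

With a single supply, the X of a derivation is "all the variables the counter produced while building it", which cannot overlap the fresh set of a sibling subderivation, nor the type variables already present in the context; the pathological overlap described in the question can only arise if the premises pick their fresh variables independently of each other and of Γ.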
From roberto at dicosmo.org Mon May 31 02:00:00 2021
From: roberto at dicosmo.org (Roberto Di Cosmo)
Date: Mon, 31 May 2021 08:00:00 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: References: Message-ID:

Hi Gabriel,

Stunning as it may seem, there are over 29,000 diamond open access journals around the world (i.e. free [as in free beer] to publish and read). A majority of these are in the humanities, but there are quite a few in STEM too.

Unfortunately, there is no free lunch, and somebody needs to foot the bill (there is a bill, see "What is a sustainable path to open access?"), which usually means a lot of volunteer work besides reviewing and editing.

I suggest having a look at this editorial piece from JOT (Journal of Object Technology), a journal that has been around for some 20 years: it provides quite a bit of insight.

Pierantonio, A., van den Brand, M., & Combemale, B. (2020). Open access: all you wanted to know and never dared to ask. Journal of Object Technology, 19(1)

Cheers

-- 
Roberto

------------------------------------------------------------------
Computer Science Professor
(on leave at Inria from IRIF/Université de Paris)

Director
Software Heritage        E-mail : roberto at dicosmo.org
INRIA                    Web : http://www.dicosmo.org
Bureau C328              Twitter : http://twitter.com/rdicosmo
2, Rue Simone Iff        Tel : +33 1 80 49 44 42
CS 42112
75589 Paris Cedex 12
------------------------------------------------------------------

GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3

On Sun, 30 May 2021 at 18:57, Gabriel Scherer wrote:
> [...]

From jon at jonmsterling.com Tue Jun 1 09:11:33 2021
From: jon at jonmsterling.com (Jon Sterling)
Date: Tue, 01 Jun 2021 09:11:33 -0400
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: References: Message-ID: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>

I think it is confusing to refer to costs that are already paid by someone else as "a bill that someone needs to foot". arXiv already exists, etc., and anyone who wants to make a new totally free journal does not need to pay the costs of running the teams that make those amazing resources continue to exist. With due respect, most of the discourse I am hearing (including from the post on the SIGPLAN blog) makes it sound like someone who wants to start a journal needs to build their own arXiv.
It is true that as a community we need to think about how we can sustain the existence of resources like the arXiv. But doing so effectively almost certainly requires working outside of ACM, IEEE, etc., if only because we need to fund (e.g.) preprint servers that are universal and not siloed by professional orgs that have shown again and again that they will spend millions of dollars on "value added" features that scientists are not asking for, in order to justify their large staffs. (E.g., how much did it cost Elsevier to develop and maintain the in-browser PDF viewer that we all immediately try to click away from?)

With respect, the article that Gabriel linked to in the beginning actually addresses many of the points that you seem to want to bring up. I think it would be good to either argue that the JOSS article is lying, or to accept that these "fixed costs" that somehow always manage to spiral into the millions are nothing but a scam.

Best,
Jon

On Mon, May 31, 2021, at 2:00 AM, Roberto Di Cosmo wrote:
> [...]
From tarmo at cs.ioc.ee Thu Jun 3 12:17:47 2021
From: tarmo at cs.ioc.ee (Tarmo Uustalu)
Date: Thu, 3 Jun 2021 19:17:47 +0300
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>
References: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>
Message-ID: <20210603191747.4399654b@cs.ioc.ee>

Hi Gabriel, hi all,

You mentioned ETAPS in the message that started this thread, in connection to financial transparency in publishing operations. Let me comment on this as the publicity chair of ETAPS and an EB member.

We have been publishing with Springer since the beginning of ETAPS. From 2018, Springer has been publishing our proceedings in Gold OA. This means that the proceedings volumes and the individual papers in them are freely available for anyone to download from the moment of publication, perpetually. This was no small achievement, and the main people we should thank for it are Joost-Pieter Katoen and Holger Hermanns. Of course we pay Springer for this OA arrangement. We've discussed alternative publishers, but have so far remained with Springer. Deliberations like this are not easy. There are a number of factors to consider when choosing a publisher, and there are advantages and disadvantages with any solution, believe me.

Just as conferences publishing with LIPIcs do, we don't charge our publication costs to the authors but absorb them into the conference fees; the ETAPS Association has also covered part of the cost from some reserves. (2021 was an exception: since the conference was fully online, in order to budget soundly we had to charge the authors higher fees than other participants. Notice also that there is a general expectation that an online-only conference must be free to attend or cost very little, so we didn't have much choice here.)

How Springer has calculated the fees they charge us is of course not under our control. Frankly, I think these are just numbers that both they and our organization could agree to as a result of a negotiation. But I can assure you that, as a conference organization, we strive for low participation fees. Most certainly we do not add any mark-up to the fees Springer already charges us for publishing the proceedings.

Membership in the ETAPS Association is free and I invite everybody interested in the work of the association to join! Our contracts with Springer are available to view for any member in the members-only section of the https://etaps.community/ website (but they are most certainly not for redistribution). Also, all members are welcome to attend the ETAPS Association general assembly, which is held each year during the conference, and where we discuss various matters of policy.

I am personally very strongly in favor of open access and fair handling of the costs of publishing.

Open access should certainly be much cheaper than it generally tends to be. What major corporations (but also some professional associations) have done to us as the research community until now has been simply cruel.
Public money for research has been diverted into the outrageous profit margins of the publishing industry as it took shape in the last century, and this continues to date.

This said, quality archival publishing can never be completely free. Venues like EPTCS or LMCS appear to involve no or almost no cost only because there is a lot of altruistic voluntary work put into them. As a community, we should notice and acknowledge the hard work the good people operating these outlets contribute from their "free" time. They are heroes really! But one cannot build a publishing system on enthusiasts alone.

arXiv was mentioned in this thread. arXiv is a great repository, but it is run by the Cornell University Library with support from the Simons Foundation and members. The costs are not small (in 2021, in total 2.4 MUSD, out of which Cornell contributes 0.9 MUSD in cash and in kind; check here: https://arxiv.org/about/reports-financials). What if, hypothetically, at one point the Cornell leadership finds that they've got some more important project to support than arXiv? Things appearing to cost nothing always cost something to someone.

Tarmo U

On Tue, 01 Jun 2021 09:11:33 -0400, "Jon Sterling" wrote:
> [...]
From gabriel.scherer at gmail.com Thu Jun 3 16:28:00 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Thu, 3 Jun 2021 22:28:00 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: <20210603191747.4399654b@cs.ioc.ee>
References: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>
 <20210603191747.4399654b@cs.ioc.ee>
Message-ID:

Hi Tarmo (and all),

Thanks for your detailed message. I think it's great that ETAPS has moved to an OA model recently. My specific claim in my first post was that ETAPS does not offer "financial transparency on paper publishing costs". As far as I know, the cost of publishing with Springer OA that ETAPS pays is not public knowledge. Or maybe this has changed since I last asked the question, and you could now give us a figure?

You mention that arXiv had $2.4M of expenses last year. Thanks to the data being available in the open, I can easily find that it had 178,329 submissions, giving a cost of $14 per submission. I think that's okay. Plus, a large share of these costs are fixed; they would not go up with the number of submissions, so we could set up our journals as arXiv overlays and the cost per submission would decrease.
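A quick check of that division, as a one-line editorial sketch (the figures are the ones quoted above):

    let () =
      (* $2.4M of yearly expenses over 178,329 submissions *)
      Printf.printf "%.2f\n" (2_400_000. /. 178_329.)
      (* prints 13.46, i.e. about $14 per submission *)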
On Thu, Jun 3, 2021 at 7:27 PM Tarmo Uustalu wrote:
> [...]
From marco.servetto at gmail.com Thu Jun 3 17:14:40 2021
From: marco.servetto at gmail.com (Marco Servetto)
Date: Fri, 4 Jun 2021 09:14:40 +1200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: References: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>
 <20210603191747.4399654b@cs.ioc.ee> Message-ID:

> giving a cost of $14 per submission. I think that's okay.
> Plus, a large share of these costs are fixed, they would not go up with the
> number of submissions, so we could setup our journals as arxiv overlays

Yes. A donation to arXiv of $300 for a workshop and $1k for a conference is likely to cover all of their costs. If I were to chair an event as an "arXiv overlay", I think donating that kind of money would be a simple and sustainable way forward... note how it is less than what an average participant pays for the air ticket alone.

Overall, what is the real difference between
A - actual conference proceedings published conventionally, and
B - a well-formatted website with a bunch of links to arXiv articles?

In the end, what gives credibility and value to our work is the voluntary and amazing process of peer review, validated and monitored by a committee of world experts in the specific research field. In this internet age, what is the value added by a publisher? I've never understood that.

Marco.

From tom.hirschowitz at univ-savoie.fr Fri Jun 4 05:05:42 2021
From: tom.hirschowitz at univ-savoie.fr (Tom Hirschowitz)
Date: Fri, 04 Jun 2021 11:05:42 +0200
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
Message-ID: <87mts62e55.fsf@hirscho.lama.univ-savoie.fr>

Just to add one bit: I find it curious that

- proofs that may stumble on axioms are suspected of not being true programs, while

- this would never happen to potentially crashing C code.

[Sorry I'm late on this one. This message was sent long ago but was silently rejected by the server, which I understood only later.]

From tadeusz.litak at gmail.com Thu Jun 3 18:38:11 2021
From: tadeusz.litak at gmail.com (Tadeusz Litak)
Date: Fri, 4 Jun 2021 00:38:11 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: <20210603191747.4399654b@cs.ioc.ee>
References: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>
 <20210603191747.4399654b@cs.ioc.ee>
Message-ID: <6a1974aa-96c1-15c2-8150-6449a4f99e77@gmail.com>

On 3/6/21 6:17 PM, Tarmo Uustalu wrote:
> Venues like EPTCS or LMCS appear to involve no or almost no cost only
> because there is a lot of altruistic voluntary work put into them.

Word. For years, I've watched closely how much unremunerated work goes into every issue of LMCS.

Best,
t.

From gabriel.scherer at gmail.com Fri Jun 4 08:56:23 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Fri, 4 Jun 2021 14:56:23 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: <6a1974aa-96c1-15c2-8150-6449a4f99e77@gmail.com>
References: <2fe86343-2a93-47e9-8b5d-b4c202ffeec9@www.fastmail.com>
 <20210603191747.4399654b@cs.ioc.ee>
 <6a1974aa-96c1-15c2-8150-6449a4f99e77@gmail.com>
Message-ID:

Dear Tadeusz,

I can't comment on LMCS's internal workings, but of course all scientific processes rely on a lot of unremunerated work from scientists. The organization of a conference is similar in this respect. (Of course I'm very grateful to the people doing this work on both sides!) Or is there a more specific claim that LMCS has researchers doing work that is done by paid staff in other venues like JFP, as opposed to the work we consider normal to ask of our colleagues?

My own experience of publishing workshop post-proceedings at EPTCS (an arXiv overlay) was nowhere near the scale of a full journal, but I don't believe that it involved any "extra" work compared to what is expected of our researcher colleagues when preparing, say, a new PACMPL issue.

On Fri, Jun 4, 2021 at 2:48 PM Tadeusz Litak wrote:
> [...]

From gabriel.scherer at gmail.com Fri Jun 4 09:17:14 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Fri, 4 Jun 2021 15:17:14 +0200
Subject: [TYPES] online conferences should be free (was: global debriefing over our virtual experience of conferences)
In-Reply-To: References: Message-ID:

Dear list,

Last year I played the unfortunate role of complaining about the $100 price tag on ICFP'20 registration. There were some great improvements in later events: for example, POPL'21 had "discounted rate: $10" as an unconditional registration option, and PLDI'21 offers the same option.
(I still wish these events were free, as is common with other scientific conferences like FSCD'20, IJCAR'20, LICS'20, etc.; but $10 is much closer to a symbolic sum, whereas $100 is symbolic only for a strict subset of the world.)

Unfortunately, it is my understanding that ICFP'21 is planning to reuse the same fee structure. The details are not clear yet and possibly subject to change, as registration hasn't opened; but this seems to be the current plan. I wish it were possible to have a (public) discussion about this choice in advance, and not just a month or two before the conference, during the summer holidays.

SIGPLAN has decided not to publish budget information for ICFP'20, but my understanding is that the $100 registration scheme generated a strong profit for the conference, to the point that, if the costs are comparable to last year's, last year's profit would suffice to fund ICFP'21 entirely. Why would we have a $100 registration fee again?

ICFP is a flagship conference at the intersection of theoretical work and practical functional programming, and it could attract a vibrant crowd of people outside academia (in particular: not students), who may not have an easy path to reimbursement -- this is especially important for the workshops.

(Disclaimer: I'm criticising past registration fees and prospective registration fees, but not, of course, the people doing the hard work of organizing the conference! They have all my gratitude.)

On Sun, Aug 23, 2020 at 4:05 PM Gabriel Scherer wrote:

> Dear types-list,
>
> Going on a tangent from Flavien's earlier post: I really think that online
> conferences should be free.
>
> Several conferences (PLDI for example) managed to run free of charge since
> the pandemic started, and they reported broader attendance and a strong
> diversity of attendants, which sounds great. I don't think we can achieve
> this with for-pay online conferences.
>
> ICFP is coming up shortly with a $100 registration price tag, and I did
> not register.
>
> I'm aware that running a large virtual conference requires computing
> resources that do have a cost. For PLDI for example, the report only says
> that the cost was covered by industrial sponsors. Are numbers publicly
> available on the cost of running a virtual conference? Note that if we
> managed to run a conference on free software, I'm sure that institutions
> and volunteers could be convinced to help host and monitor the
> conference services during the event.

From hendrik at topoi.pooq.com Fri Jun 4 12:24:58 2021
From: hendrik at topoi.pooq.com (Hendrik Boom)
Date: Fri, 4 Jun 2021 12:24:58 -0400
Subject: [TYPES] What's a program? (Seriously)
In-Reply-To: <87mts62e55.fsf@hirscho.lama.univ-savoie.fr>
References: <812fd94b-842f-c856-4bf9-c071205f7555@gmail.com>
 <87mts62e55.fsf@hirscho.lama.univ-savoie.fr>
Message-ID: <20210604162457.yynnxgdrznt3mkwl@topoi.pooq.com>

On Fri, Jun 04, 2021 at 11:05:42AM +0200, Tom Hirschowitz wrote:
> [...]

The difference is one of intention. Using axioms that do not compute is expected not to compute. Potentially crashing C++ code is at least intended to compute, and once mostly debugged, it mostly does.

-- hendrik
From mehmetoguzderin at mehmetoguzderin.com Fri Jun 4 14:04:28 2021
From: mehmetoguzderin at mehmetoguzderin.com (Mehmet Oguz Derin)
Date: Fri, 4 Jun 2021 21:04:28 +0300
Subject: [TYPES] online conferences should be free (was: global debriefing over our virtual experience of conferences)
In-Reply-To: References: Message-ID:

Outsider opinion: one good heuristic for pricing anything virtual and making it accessible to underprivileged individuals is localized video game & digital subscription prices. Companies expanding these have gone through many stages regarding price localization (symbolic or not) globally.

- Oguz (Mehmet Oguz Derin)

On Fri, Jun 4, 2021 at 4:18 PM Gabriel Scherer wrote:
> [...]
From gabriel.scherer at gmail.com Sat Jun 5 02:39:16 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Sat, 5 Jun 2021 08:39:16 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: References: Message-ID:

Dear list,

I was pointed off-list to the following interesting article from 2020, the first attempt by ACM to provide some sort of public financial information.

CACM: ACM Publications Finances
https://cacm.acm.org/magazines/2020/5/244322-acm-publications-finances/fulltext

The report is interesting, and it suggests that the costs of publishing at ACM are quite high. Some details below. (Those are all 2019 numbers.)

- There is a staggering difference in per-article cost between "ACM journals" and "conference proceedings". According to this data, ACM journal articles cost on average $1500 per article, while conference proceedings cost $151 per article. In particular, the data reports that each ACM journal article spends $333 (on average) on "composition and copy-editing".
  My uninformed guess would be that PACMPL, while being nominally a journal, is still handled by conference-proceedings processes: I have published in conference proceedings and then in PACMPL, and not observed any change that would correspond to a ten-fold cost increase.

- ACM reports a conference proceeding cost of $151 per article. In the blog post, they report it as $410, but in fact much of that figure corresponds to ACM revenue that they give to SIGs and include in the "cost of publication" of the SIG. (It's great that ACM makes revenue and that they fund the SIGs, but we are trying to understand publishing costs.)

- None of the figures above include the ACM Digital Library (web hosting + long-term archiving), whose costs in 2019 were massive: $299 per conference proceeding publication on average. The post has more details on that: 2019 was right after they launched the "new DL" platform, so a share of these costs is fixed and will not recur in following years. But they also expect to host more video content (indeed), so some other costs will increase.
  Note that arXiv has costs of $14 per article, and Zenodo offers long-term archiving of gigabyte-large content for free as a public service ( https://help.zenodo.org/ ; this is supported as a "drop in the bucket" of the costs of CERN physicists archiving petabytes of experiment data. We use Zenodo to host the artifacts submitted to the Artifact Evaluation processes of several SIGPLAN conferences.)
  ACM should let authors choose to publish on the ACM DL *or* on arXiv+Zenodo; the people who see value in the ACM DL could choose this option, and the others would vastly reduce hosting/archiving costs (from $299 to $14).

- The revenue/cost numbers for ACM ICPS are interesting (International Conference Proceeding Series: as a non-ACM conference you can contract ACM to publish your proceedings, for a small per-paper price, in exchange for forcing your authors to give up their copyright to ACM; several conferences of our community do this, for example PPDP). In 2019 they brought $362K of revenue in publication fees, for about $250K of publishing costs†.
  And the ICPS publishing fees are pretty reasonable!
See the fee structure at https://www.acm.org/publications/icps-series : for one edition of proceedings, you pay $750 of fixed costs for up to 30 articles, plus $20 for each paper above the 30th.
  If ACM makes a net profit with just those fees (the revenue figure is just for publication fees; it does not include subscriptions, pay-per-view, etc.), this means that our ACM conferences could pay *exactly this price* for (Open Access) proceeding publications and not cost ACM any money, in fact bring revenue. (This only covers publishing costs, not ACM DL costs.)

†: ACM gives a figure of $215K for ICPS publishing costs, but it has no estimate of overhead expenses, so the figure is an under-estimate. We can estimate this cost. ICPS has published on average 255 papers per year since 2002; optimistically assuming 1000 papers in 2019, if the overhead costs are proportional to the other conference-proceedings overhead costs, this adds another $34K, for a total cost of $250K.
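As a small editorial sketch of that fee structure (the function name is invented; the numbers are from the ICPS page as quoted above):

    (* ICPS publication fee, in dollars, for one edition of proceedings
       with n papers: $750 flat for up to 30 articles, then $20 per
       extra paper. *)
    let icps_fee n = 750 + 20 * max 0 (n - 30)

    let () =
      assert (icps_fee 30 = 750);   (* a 30-paper volume costs $750  *)
      assert (icps_fee 50 = 1150)   (* a 50-paper volume costs $1150 *)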
On Sun, May 30, 2021 at 6:52 PM Gabriel Scherer wrote:
> [...]

From gabriel.scherer at gmail.com Sat Jun 5 02:44:23 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Sat, 5 Jun 2021 08:44:23 +0200
Subject: [TYPES] Diamond open access, cost and sustainability model: a JOSS example
In-Reply-To: References: Message-ID:

I also received off-list the confirmation that the Springer OA prices for ETAPS are not public, but that they are in the $100-$200 range. It's interesting that Springer's *price* is in line with the ACM publication *costs* for conference proceedings (without counting ACM DL costs).

On Sat, Jun 5, 2021 at 8:39 AM Gabriel Scherer wrote:
> [...]
From alejandro at diaz-caro.info Sat Jun 5 14:22:22 2021
From: alejandro at diaz-caro.info (Alejandro Díaz-Caro)
Date: Sat, 5 Jun 2021 15:22:22 -0300
Subject: [TYPES] online conferences should be free (was: global debriefing over our virtual experience of conferences)
In-Reply-To: References: Message-ID:

Dear all,

The real costs of online conferences are much lower than for physical ones, that is clear. However, an online conference is not free of cost. The costs may be:
* Publication costs (for example, LIPIcs charges 60 euros per paper)
* An EasyChair licence
* An award prize for best paper (if the conference has this kind of award)
* Conference platform costs (Clowdr, Slack, Zoom, GatherTown, etc. all have an associated cost)

From these four, the first three are roughly fixed by the number of accepted papers (which is usually similar from one year to the next). However, the last one is the most difficult to predict, since platforms such as Clowdr, GatherTown, or EasyChair's VCS charge per person (EasyChair's VCS even charges per person per day). So, even if you get funding from the organising institution or sponsors, making the conference free of charge could imply a really large number of registrations, and you may pay for those even if they do not show up at the conference.

The solution that we chose at FSCD this year is to make it free of charge, unless we receive way too many requests (with "way too many" left undefined yet), surpassing the grants we have got for the conference. In such a case, we will ask for a very modest amount (less than 10 dollars), making it clear that those who cannot pay can participate entirely free of charge. So, we are hoping to have a fully free-of-charge conference (and quite probably we will), but we have no idea how many people will register and how much the bill at the chosen platform may grow.

Of course there are also free platforms, but they are less reliable and you do not have anyone to ask if things do not work as expected.

Coming from Argentina, I agree that conferences (especially virtual ones) should be free of charge, or as cheap as they can be, as much as they can. This allows students to participate. I also agree that the slogan "you will pay more attention to what you have paid for" should not condition our conference model. Paying attention to talks is a responsibility (or a choice) of the attendee (and of the speaker, to make the talk interesting, maybe).
Putting money in the middle to encourage it is not the best practice, in my opinion, especially if it could result in people being left behind.

Best,
Alejandro

On Fri, 4 Jun 2021 at 15:28, Mehmet Oguz Derin wrote:
>
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Outsider opinion: one good heuristic for pricing anything virtual and making it accessible for underprivileged individuals is localized video game & digital subscription prices. Companies expanding these have gone through many stages regarding price localization (symbolic or not) globally. - Oguz (Mehmet Oguz Derin)
>
> On Fri, Jun 4, 2021 at 4:18 PM Gabriel Scherer wrote:
>
> > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> >
> > Dear list,
> >
> > Last year I played the unfortunate role of complaining about the $100 price tag on ICFP'20 registration. There were some great improvements in later events; for example POPL'21 had "discounted rate: $10" as an unconditional registration option, and PLDI'21 offers the same option. (I still wish that these events were free, as is common with other scientific conferences like FSCD'20, IJCAR'20, LICS'20 etc., but $10 is still much closer to a symbolic sum than $100 for a strict subset of the world.)
> >
> > Unfortunately, it is my understanding that ICFP'21 is planning to reuse the same fee structure. The details are not clear yet and possibly subject to change, as registration hasn't opened; but this seems to be the current plan. I wish it were possible to have a (public) discussion about this choice in advance, and not just a month or two before the conference during the summer holidays.
> >
> > SIGPLAN has decided not to publish budget information for ICFP'20, but my understanding is that the $100 registration scheme generated a strong profit for the conference, to the point that, if the costs are comparable to last year, last year's profit would suffice to fund ICFP'21 entirely. Why would we have a $100 registration fee again?
> >
> > ICFP is a flagship conference at the intersection of theoretical work and practical functional programming, and it could attract a vibrant crowd of people outside academia (in particular: not students), who may not have an easy path to reimbursement -- this is especially important for the workshops.
> >
> > (Disclaimer: I'm criticising past registration fees and prospective registration fees, but not of course the people doing the hard work of organizing the conference! They have all my gratitude.)
> >
> > On Sun, Aug 23, 2020 at 4:05 PM Gabriel Scherer wrote:
> >
> > > Dear types-list,
> > >
> > > Going on a tangent from Flavien's earlier post: I really think that online conferences should be free.
> > >
> > > Several conferences (PLDI for example) managed to run free-of-charge since the pandemic started, and they reported broader attendance and a strong diversity of attendants, which sounds great. I don't think we can achieve this with for-pay online conferences.
> > >
> > > ICFP is coming up shortly with a $100 registration price tag, and I did not register.
> > >
> > > I'm aware that running a large virtual conference requires computing resources that do have a cost. For PLDI for example, the report only says that the cost was covered by industrial sponsors. Are numbers publicly available on the cost of running a virtual conference?
> > > Note that if we managed to run a conference on free software, I'm sure that institutions and volunteers could be convinced to help host and monitor the conference services during the event.

--
http://staff.dc.uba.ar/adiazcaro

From tringer at cs.washington.edu Sat Jun 5 15:56:45 2021
From: tringer at cs.washington.edu (Talia Ringer)
Date: Sat, 5 Jun 2021 12:56:45 -0700
Subject: [TYPES] online conferences should be free (was: global debriefing over our virtual experience of conferences)
In-Reply-To: References: Message-ID:

Since I complained about this last year, I've joined the SPLASH hybridization committee. My understanding so far is:

- there are some hidden costs
- overcharging some people acts as a subsidy for other people who need scholarships
- planning is hard and everyone is afraid of losing money, so sometimes people are conservative in budgeting because of this
- sometimes people overspend on platforms that probably aren't necessary

Anyways, I'll forward this to the rest of the committee when we plan the hybrid fee structure. Hybrid is a bit different, since there are still in-person costs, and costs of interaction between the two, but it's still worth thinking about. Can't help with ICFP though. $100 seems like a lot, even knowing what I know about virtual budgets now.

On Sat, Jun 5, 2021, 12:04 PM Alejandro Díaz-Caro wrote:
> [...]
From monnier at iro.umontreal.ca Sat Jun 5 18:20:28 2021
From: monnier at iro.umontreal.ca (Stefan Monnier)
Date: Sat, 05 Jun 2021 18:20:28 -0400
Subject: [TYPES] online conferences should be free
In-Reply-To: (Alejandro Díaz-Caro's message of "Sat, 5 Jun 2021 15:22:22 -0300")
References: Message-ID:

Alejandro Díaz-Caro [2021-06-05 15:22:22] wrote:
> Of course there are also free platforms, but they are less reliable
> and you do not have anyone to ask if things do not work as expected.

I think it would make sense for the ACM to host some of those services (and participate in their development). IIUC this is partly what happen(s|ed) with Clowdr. Similarly, it would make sense for the ACM to host a video streaming service, basically making it so the ACM DL doesn't only host articles and books but videos as well.

[ Pretty much every time I watch a talk on YouTube I find myself wanting to "click" to consult the corresponding paper. ]

Stefan

From bcpierce at cis.upenn.edu Sun Jun 6 19:33:00 2021
From: bcpierce at cis.upenn.edu (Benjamin Pierce)
Date: Sun, 6 Jun 2021 19:33:00 -0400
Subject: [TYPES] online conferences should be free
In-Reply-To: References: Message-ID:

On Sun, Jun 6, 2021 at 1:15 AM Stefan Monnier wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> I think it would make sense for the ACM to host some of those services (and participate in their development). IIUC this is partly what happen(s|ed) with clowdr.

I can say a few words about what happened with Clowdr... :-)

For those who don't know, Clowdr is an open-source virtual conference platform that's been under development for the past year.

The first version of Clowdr in summer 2020 was supported by a small NSF grant. Since the autumn, further development has been led by Clowdr CIC, a UK Community Interest Company that was set up to maintain, develop, and help conferences use Clowdr. The company has grown organically, funded by service contracts with individual conferences under which Clowdr hosts the platform and, in some cases, assists with conference organization and logistics. Several of these have been ACM conferences, but the company has never received funding or sponsorship directly from ACM.

Best,

- Benjamin

P.S. For full disclosure, I am one of the original Clowdr developers and one of the founding directors of Clowdr CIC. To avoid conflicts of interest, I resigned all my official positions within ACM and SIGPLAN when the company was formed.
From alejandro at diaz-caro.info Sun Jun 6 20:00:01 2021
From: alejandro at diaz-caro.info (Alejandro Díaz-Caro)
Date: Sun, 6 Jun 2021 21:00:01 -0300
Subject: [TYPES] online conferences should be free
In-Reply-To: References: Message-ID:

I may add that it makes total sense to me that a service such as Clowdr (and other similar platforms) charges for its usage. The price does not only cover the usage; there is also somebody to assist you (giving tutorials, answering questions, etc.). In any case the prices are quite fair and, as I mentioned before, there are still options to make the conference either free or mostly free. The alternative would be to use some ad-hoc solution, which probably would not be perfectly adapted for a conference. And since conferences will remain virtual for some time yet (at least), we should use the best available tools.

So, my point of view is that conferences should be free, as much as possible, and should not renounce the best available tools.

-- Alejandro

On Sun, 6 Jun 2021 at 20:33, Benjamin Pierce wrote:
> [...]

From monnier at iro.umontreal.ca Mon Jun 7 01:45:52 2021
From: monnier at iro.umontreal.ca (Stefan Monnier)
Date: Mon, 07 Jun 2021 01:45:52 -0400
Subject: [TYPES] online conferences should be free
In-Reply-To: (Cyrus Omar's message of "Sun, 6 Jun 2021 21:28:18 -0400")
References: Message-ID:

Cyrus Omar [2021-06-06 21:28:18] wrote:
> On Wed, Aug 26, 2020 at 2:02 AM Stefan Monnier wrote:

[ Going through old mail, eh? ]

>> BTW, while watching ICFP, somewhat pleased with Clowdr [ for a first run of the software, I'm really pleased ] but annoyed at some aspects [ besides the need to run proprietary code for Zoom ] such as the fact that I can't find a recording of the TyDe talks I missed...

> FYI, the TyDe talk videos are posted on the SIGPLAN YouTube channel under the following playlist (linked from the TyDe webpage):
> https://www.youtube.com/watch?v=pA0wOOcf5rM&list=PLyrlk8Xaylp63TV8z8yZb79BNzFse9wVO

Oh, excellent, thanks.
> (TyDe extended abstracts are not archival, so we can't make the corresponding talks available via the ACM DL)

That makes sense, indeed,

Stefan

From hendrik at topoi.pooq.com Mon Jun 7 12:16:24 2021
From: hendrik at topoi.pooq.com (Hendrik Boom)
Date: Mon, 7 Jun 2021 12:16:24 -0400
Subject: [TYPES] online conferences should be free
In-Reply-To: References: Message-ID: <20210607161624.ti7cbbyxwfl65f4m@topoi.pooq.com>

On Sun, Jun 06, 2021 at 09:00:01PM -0300, Alejandro Díaz-Caro wrote:
> I may add that it makes total sense to me that a service such as Clowdr (and other similar platforms) charges for its usage. [...]

And a conference organiser presumably has the ability to run the open-source Clowdr software on his own server, paying nothing to the company.

From anitha.gollamudi at gmail.com Thu Jun 24 13:21:56 2021
From: anitha.gollamudi at gmail.com (Anitha Gollamudi)
Date: Thu, 24 Jun 2021 13:21:56 -0400
Subject: [TYPES] Compiler correctness for a stack machine backend
Message-ID:

Hi,

I am looking for references to prove compilation correctness from an Imp/C-like language (preferably with pointers and function calls) to a stack machine.

Using small-step/big-step semantics to prove compilation correctness (à la CompCert) will be helpful.
It need not be mechanised---a pen-and-paper proof exposition works as well.

Here are a couple of references that are somewhat related to what I am looking for:
(a) CakeML: compiles ML to CakeML bytecode [1].
(b) Xavier Leroy's tutorial: it uses continuation-passing style [2].

If you have other pointers, please suggest. (Lecture notes, if any, are also helpful.)

Best
Anitha

[1]. https://cakeml.org/popl14.pdf
[2]. https://xavierleroy.org/courses/EUTypes-2019/

From matdzb at gmail.com Sat Jun 26 10:13:48 2021
From: matdzb at gmail.com (Matt P. Dziubinski)
Date: Sat, 26 Jun 2021 16:13:48 +0200
Subject: [TYPES] Compiler correctness for a stack machine backend
In-Reply-To: References: Message-ID: <8d386e59-e7c8-5c53-1431-b1596f86d3f5@gmail.com>

On 6/24/2021 19:21, Anitha Gollamudi wrote:
> I am looking for references to prove compilation correctness from an Imp/C-like language (preferably with pointers and function calls) to a stack machine. [...]

Hello Anitha,

Perhaps the following may also be of use:
https://github.com/MattPD/cpplinks/blob/master/compilers.correctness.md

Best,
Matt

From Xavier.Leroy at inria.fr Sat Jun 26 11:21:13 2021
From: Xavier.Leroy at inria.fr (Xavier Leroy)
Date: Sat, 26 Jun 2021 17:21:13 +0200
Subject: [TYPES] Compiler correctness for a stack machine backend
In-Reply-To: References: Message-ID:

On Sat, Jun 26, 2021 at 2:55 PM Anitha Gollamudi wrote:
> [...]

The Jinja project (https://www.isa-afp.org/entries/Jinja.html) by Klein and Nipkow contains a verified compiler for a sizable subset of Java to a sizable subset of the Java VM.

Their textbook *Concrete Semantics* (http://concrete-semantics.org/) also contains a proof of a much simpler compiler for IMP, similar in ambition to my lecture notes [2], but with a different proof technique.

Hope this helps,

- Xavier Leroy
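As a concrete (if tiny) instance of the kind of theorem these references establish, the following self-contained Coq sketch compiles arithmetic expressions to a stack machine and proves the compiler correct. It is only the textbook warm-up in the spirit of Concrete Semantics and the lecture notes above, not code taken from any of the cited developments:

  Require Import List.
  Import ListNotations.

  Inductive aexp : Type :=
  | ANum (n : nat)
  | APlus (a1 a2 : aexp).

  Fixpoint aeval (a : aexp) : nat :=
    match a with
    | ANum n => n
    | APlus a1 a2 => aeval a1 + aeval a2
    end.

  Inductive instr : Type := IPush (n : nat) | IAdd.

  (* Execute a program against a stack; IAdd on a short stack just stops,
     which is harmless here since compiled code never produces that case. *)
  Fixpoint exec (p : list instr) (s : list nat) : list nat :=
    match p, s with
    | [], _ => s
    | IPush n :: p', _ => exec p' (n :: s)
    | IAdd :: p', n2 :: n1 :: s' => exec p' (n1 + n2 :: s')
    | IAdd :: _, _ => s
    end.

  Fixpoint comp (a : aexp) : list instr :=
    match a with
    | ANum n => [IPush n]
    | APlus a1 a2 => comp a1 ++ comp a2 ++ [IAdd]
    end.

  (* The lemma is stated for an arbitrary continuation p and stack s,
     precisely so that the induction goes through. *)
  Lemma comp_correct_gen : forall a p s,
    exec (comp a ++ p) s = exec p (aeval a :: s).
  Proof.
    induction a as [n | a1 IH1 a2 IH2]; intros p s; simpl.
    - reflexivity.
    - rewrite <- app_assoc, IH1, <- app_assoc, IH2. reflexivity.
  Qed.

  Theorem comp_correct : forall a, exec (comp a) [] = [aeval a].
  Proof.
    intro a. rewrite <- (app_nil_r (comp a)).
    apply (comp_correct_gen a [] []).
  Qed.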
From nipkow at in.tum.de Mon Jun 28 01:13:06 2021
From: nipkow at in.tum.de (Tobias Nipkow)
Date: Mon, 28 Jun 2021 07:13:06 +0200
Subject: [TYPES] Compiler correctness for a stack machine backend
In-Reply-To: References: Message-ID:

On 26/06/2021 17:21, Xavier Leroy wrote:
> [...]
> Their textbook *Concrete Semantics* (http://concrete-semantics.org/) also contains a proof of a much simpler compiler for IMP, similar in ambition to my lecture notes [2], but with a different proof technique.

Recently a shorter proof of the same material was published online:

https://www.isa-afp.org/entries/IMP_Compiler.html

Best
Tobias

From anitha.gollamudi at gmail.com Tue Jun 29 19:26:15 2021
From: anitha.gollamudi at gmail.com (Anitha Gollamudi)
Date: Tue, 29 Jun 2021 19:26:15 -0400
Subject: [TYPES] Compiler correctness for a stack machine backend
In-Reply-To: References: Message-ID:

Thanks Matt, Xavier and Tobias for the suggestions.

Suppose that the IMP and stack machine languages (given in these references) are extended with an explicit heap and call stack, as well as instructions that operate on the heap. Then compiling addresses would probably require an address map that maps source heap addresses (integers) to target heap addresses (integers). Is this reasonably standard?

On Tue, 29 Jun 2021 at 09:58, Tobias Nipkow wrote:
> [...]
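For what it's worth, the "address map" in this question corresponds to what CompCert calls a memory injection: a partial map from source to target addresses, used as the simulation relation between heaps. A minimal Coq sketch of such a relation (the definitions are illustrative, not taken from CompCert):

  (* Addresses, and a partial map from source to target addresses. *)
  Definition addr := nat.
  Definition addr_map := addr -> option addr.

  (* Values are integers or pointers. *)
  Inductive val : Type :=
  | Vint (n : nat)
  | Vptr (a : addr).

  (* Related values: equal integers; pointers related through the map. *)
  Inductive val_rel (j : addr_map) : val -> val -> Prop :=
  | rel_int : forall n, val_rel j (Vint n) (Vint n)
  | rel_ptr : forall a a', j a = Some a' -> val_rel j (Vptr a) (Vptr a').

  Definition heap := addr -> option val.

  (* Related heaps: every mapped source cell holds a value related to
     the value stored at the corresponding target cell. *)
  Definition heap_rel (j : addr_map) (h h' : heap) : Prop :=
    forall a a' v,
      j a = Some a' -> h a = Some v ->
      exists v', h' a' = Some v' /\ val_rel j v v'.

The compiler-correctness proof then carries such a map j through the simulation, extending it whenever the source program allocates.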
From gabriel.scherer at gmail.com Thu Jul 8 11:33:55 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Thu, 8 Jul 2021 17:33:55 +0200
Subject: [TYPES] online conferences should be free (was: global debriefing over our virtual experience of conferences)
In-Reply-To: References: Message-ID:

Dear list,

I'm writing with excellent news: in the last month ICFP'21 has decided to change its fee structure, which should now include a "discounted $10" option for the whole conference, in line with POPL'21 and PLDI'21 for example. I think this is an excellent compromise.

As far as I can tell, thanks are due to the general chair, Sukyoung Ryu, and the ICFP steering committee. Thanks!

On Fri, Jun 4, 2021 at 3:17 PM Gabriel Scherer wrote:
> [...]
From roberto at dicosmo.org Mon Jul 19 08:16:00 2021
From: roberto at dicosmo.org (Roberto Di Cosmo)
Date: Mon, 19 Jul 2021 14:16:00 +0200
Subject: [TYPES] Policy news from France: the new national plan for open science with concrete measures for open access and software
Message-ID:

Dear all,

I am delighted to share great news from France: on July 6th, the French Ministry of Research unveiled the second multi-annual National Plan for Open Science, which is a landmark policy document.

The official document is available online from the website of the French Ministry of Research, with an unofficial (and perfectible) English translation available at https://www.ouvrirlascience.fr/second-national-plan-for-open-science/ (an official English version will be available in a few weeks from the Ministry of Research's dedicated page).

For our community, I believe two parts of this plan are particularly interesting: section 1, which contains important concrete provisions to foster Open Access; and section 3, which squarely puts software on a par with publications and data in research and Open Science, and announces a number of measures designed to open up research software and better recognize software development in research.
Here are some significant measures that are announced for research publications (section 1 of the plan):

- achieve 100% open access publications by 2030
- support Diamond open access publishing
- request that the data and code associated with submitted article texts be provided
- promote the use of narrative CVs to reduce the importance of quantitative assessments to the benefit of qualitative ones

Here are some highlights of the measures related to software (section 3 of the plan):

- a clear recommendation to make research software available under an open source licence, unless there are strong reasons not to do so
- the creation of a high-level expert group dedicated to research software in the National Committee for Open Science
- the objective to achieve better recognition of software development in career evaluation for researchers and engineers
- a renewed and strengthened official support of Software Heritage, with a recommendation to archive in it *all research software produced in France* (a simple HOWTO is available at https://www.softwareheritage.org/howto-archive-and-reference-your-code/)
- a plan to get an ISO standard for the Software Heritage intrinsic identifiers for source code [1]

To the best of my knowledge, it is the first time that such an organic, clearly designed strategy for (open source) software in research is laid out in an official government-level document.

-- Roberto

[1] These identifiers (aka SWHIDs) allow one to pinpoint any software artifact inside the archive, down to the level of the line of code: for example, a nice fragment of the Apollo 11 source code, or a mythical routine in the Quake III Arena source code. For a short demo, see https://www.youtube.com/watch?v=8nlSvYh7VpI (the interesting part starts around 9:00).

------------------------------------------------------------------
Computer Science Professor (on leave at Inria from IRIF/Université de Paris)
Director, Software Heritage       E-mail : roberto at dicosmo.org
INRIA                             Web : http://www.dicosmo.org
Bureau C328                       Twitter : http://twitter.com/rdicosmo
2, Rue Simone Iff                 Tel : +33 1 80 49 44 42
CS 42112
75589 Paris Cedex 12
------------------------------------------------------------------
GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3
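(A quick summary of the identifier format referenced in [1], following the public SWHID specification: a SWHID is a short string of the form swh:1:<object type>:<40-hex-digit intrinsic hash>, where the object type is one of cnt for file contents, dir for directories, rev for revisions/commits, rel for releases, or snp for snapshots.)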
From jcxz at cs.ubc.ca Tue Jul 27 21:03:36 2021
From: jcxz at cs.ubc.ca (Jonathan Chan)
Date: Tue, 27 Jul 2021 18:03:36 -0700
Subject: [TYPES] Impredicative Set and large elimination
Message-ID: <338611cc43a299cf90aa6ea8eeef8082@mail.cs.ubc.ca>

Hello all,

I'm trying to find some references for what is and isn't allowed with what's known as an impredicative Set universe in Coq. There are several references and explanations, such as this issue [1], that say that an impredicative Prop universe combined with unrestricted elimination and excluded middle yields a contradiction. I've also found this [2] saying that in Coq, "large" inductives in an impredicative Set, with constructors whose arguments live in larger universes, have elimination restricted to Prop and Set only, presumably to also allow postulating an excluded-middle axiom.

Suppose I had a large inductive in impredicative Set and I do not postulate excluded middle (or double negation elimination, or choice, or indefinite description, etc.): can I eliminate my inductive into large universes, i.e. Type, while maintaining consistency? Are there any references that discuss the consequences of having an impredicative Set and other inconsistencies I should watch out for? I will not be postulating any axioms that aren't part of CIC, but I am wondering whether I can have an impredicative Prop subtype an impredicative Set, and whether making the theory extensional (with equality reflection) would at all interfere with consistency.

Regards,
Jonathan

[1] https://github.com/FStarLang/FStar/issues/360
[2] http://adam.chlipala.net/cpdt/html/Universes.html#lab80

From streicher at mathematik.tu-darmstadt.de Wed Jul 28 13:02:21 2021
From: streicher at mathematik.tu-darmstadt.de (Thomas Streicher)
Date: Wed, 28 Jul 2021 19:02:21 +0200
Subject: [TYPES] Impredicative Set and large elimination
In-Reply-To: <338611cc43a299cf90aa6ea8eeef8082@mail.cs.ubc.ca>
References: <338611cc43a299cf90aa6ea8eeef8082@mail.cs.ubc.ca>
Message-ID: <20210728170221.GA1724@mathematik.tu-darmstadt.de>

Though I am not fully aware of what precisely the type system is, from general type-theoretic knowledge my answer is as follows.

As long as Prop is not an element of Set there is no problem in assuming both Set and Prop to be impredicative. Look at the realizability model where types are interpreted as assemblies, Set as modest sets, and Prop as subterminal modest sets. These models all validate large elimination.

But if Prop \in Set, then Girard's paradox applies.

Thomas

> I'm trying to find some references for what is and isn't allowed with what's known as an impredicative Set universe in Coq. [...]
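To make the setting concrete, here is the shape of the "large" inductive under discussion, adapted from the CPDT "Universes" chapter cited as [2] above (a sketch; it requires running Coq with -impredicative-set, and without that flag the declaration is rejected, since Const quantifies over Set while exp itself is declared in Set):

  (* Compile with: coqc -impredicative-set *)
  Inductive exp : Set :=
  | Const : forall T : Set, T -> exp   (* the "large" constructor argument *)
  | Pair : exp -> exp -> exp.

  (* Elimination into Prop or Set is accepted: *)
  Definition is_pair (e : exp) : Prop :=
    match e with
    | Const _ _ => False
    | Pair _ _ => True
    end.

  (* But a match on e that builds a Type is rejected: this is the
     elimination restriction described in [2] and asked about above. *)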
From aravara at fct.unl.pt Fri Jul 30 13:37:49 2021
From: aravara at fct.unl.pt (Antonio Ravara)
Date: Fri, 30 Jul 2021 18:37:49 +0100
Subject: [TYPES] Announcement: typestates in Rust
Message-ID: <251ad4d4-a582-c2e3-576b-c10e8401f37d@fct.unl.pt>

While there are several works providing support to discipline channel communication in Rust, based on (multi-party) session types, there is a lack of support to discipline the protocols of modules (seen as APIs). José Duarte and I developed a DSL to provide typestate support for Rust.

We recently had a paper accepted at the Brazilian Symposium on Programming Languages (proceedings published in the ACM DL):
http://cbsoft2021.joinville.udesc.br/sblp.php

You can find the paper here:
https://github.com/rustype/typestate-rs/tree/main/paper

The software is also already available:
https://lib.rs/crates/typestate-proc-macro

Comments most welcome.

--
Cheers,
António
(also on behalf of José)

-----------------------
Abstract:

Rust leverages the type system, along with information about object lifetimes, allowing the compiler to keep track of objects throughout the program and check for memory misusage. While preventing memory-related bugs goes a long way in software security, other categories of bugs remain in Rust. One of these is Application Programming Interface (API) misusage, where the developer does not respect constraints put in place by an API, thus resulting in the program crashing. Typestates elevate state to the type level, allowing for the enforcement of API constraints at compile-time, relieving the developer from the burden of keeping track of the possible computation states at runtime, and preventing possible API misusage during development. While Rust does not support typestates by design, the type system is powerful enough to express and validate typestates. We propose a new macro-based approach to deal with typestates in Rust; this approach provides an embedded Domain-Specific Language (DSL) which allows developers to express typestates using only existing Rust syntax. Furthermore, Rust's macro system is leveraged to extract a state machine out of the typestate specification and then perform compile-time checks over the specification. Afterwards, we leverage Rust's type system to check protocol compliance. The DSL avoids workflow bloat by requiring nothing but a Rust compiler and the library itself.

From klaus.ostermann at uni-tuebingen.de Sat Aug 28 17:28:33 2021
From: klaus.ostermann at uni-tuebingen.de (Klaus Ostermann)
Date: Sat, 28 Aug 2021 23:28:33 +0200
Subject: [TYPES] Recovering functions from classical disjunction and negation
Message-ID: <79b1126f-aa3f-528e-4033-615089cec65d@uni-tuebingen.de>

In classical logic, implication A -> B can be encoded as (not A \/ B).

Is there any work on how that encoding works on the term side of things, that is, how to encode lambda abstraction and application in terms of these logical connectives? For instance, there are variants of Parigot's lambda-mu calculus with primitive disjunction and negation (such as by Pym, Ritter and Wallen, or by de Groote (2001)). Shouldn't it be possible to remove implication / lambda from such calculi and encode them as macros? Has this been done?

Regards,

Klaus
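The propositional half of this encoding is easy to check in Coq with classical axioms; the sketch below shows only the logic side, not the term-level encoding that the question (and the reply that follows) is really about. Note that only the (A -> B) -> (~ A \/ B) direction needs the classical axiom; the converse is intuitionistic.

  Require Import Coq.Logic.Classical_Prop.

  Lemma impl_as_or (A B : Prop) : (~ A \/ B) <-> (A -> B).
  Proof.
    split.
    - intros [na | b] a.
      + contradiction.
      + exact b.
    - intros f. destruct (classic A) as [a | na].
      + right. exact (f a).
      + left. exact na.
  Qed.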
From gabriel.scherer at gmail.com Sat Aug 28 17:56:49 2021
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Sat, 28 Aug 2021 23:56:49 +0200
Subject: [TYPES] Recovering functions from classical disjunction and negation
In-Reply-To: <79b1126f-aa3f-528e-4033-615089cec65d@uni-tuebingen.de>
References: <79b1126f-aa3f-528e-4033-615089cec65d@uni-tuebingen.de>
Message-ID:

Yes! (I would consider it folklore now.) This is explicitly written out, for example, in Wadler's "Call-by-Value is Dual to Call-by-Name", 2003 -- propositions 5.2 and 5.3, page 8 of
https://homepages.inf.ed.ac.uk/wadler/papers/dual/dual.pdf

On Sat, Aug 28, 2021 at 11:51 PM Klaus Ostermann wrote:
> [...]

From aravara at fct.unl.pt Thu Sep 9 11:00:34 2021
From: aravara at fct.unl.pt (Antonio Ravara)
Date: Thu, 9 Sep 2021 16:00:34 +0100
Subject: [TYPES] Announcement: release of Java Typestate Checker
Message-ID: <9e09523e-83e1-d0ab-9164-d65dfd6d7676@fct.unl.pt>

We are glad to announce the first release of the Java Typestate Checker (JaTyC [1]), a tool to statically check that class methods are called in a prescribed order, specified in a protocol file associated with that class. Our tool ensures that client code conforms to the specification, guaranteeing protocol compliance and completion.

Our teams, composed of Lorenzo Bacchiani and Mario Bravetti from the University of Bologna (Italy), and Marco Giunti, João Mota and António Ravara from the NOVA School of Science and Technology (Portugal), have joined forces and, building on previous work of each team, developed support for subtyping. In particular, this has been done by adapting a synchronous subtyping algorithm (with error detection) used in the general synchronous/asynchronous session subtyping tool presented in [2]. Our typestate checker can now safely handle polymorphic code, although in this first version in a limited way.

You can find a quick installation guide at
https://github.com/jdmota/java-typestate-checker
and further documentation at
https://github.com/jdmota/java-typestate-checker/wiki/Documentation

Feel free to send us feedback on your experience working with this tool.

Warm regards,
António
(also on behalf of João, Lorenzo, Marco, and Mario)

[1] Mota, João, Marco Giunti, and António Ravara. "Java Typestate Checker", in Proc. of COORDINATION 2021, LNCS 12717, Springer 2021.
https://doi.org/10.1007/978-3-030-78142-2_8

[2] Bacchiani, L., Bravetti, M., Lange, J., Zavattaro, G.: "A Session Subtyping Tool", in Proc. of COORDINATION 2021, LNCS 12717, Springer 2021.
https://doi.org/10.1007/978-3-030-78142-2_6
Tool source files for Linux, Windows and OS X (and binaries for Windows and OS X):
https://github.com/LBacchiani/session-subtyping-tool

From wadler at inf.ed.ac.uk Sat Oct 9 11:32:51 2021
From: wadler at inf.ed.ac.uk (Philip Wadler)
Date: Sat, 9 Oct 2021 16:32:51 +0100
Subject: [TYPES] Congruence rules vs frames
Message-ID:

Most mechanised formulations of reduction systems, such as those found in Software Foundations or in Programming Language Foundations in Agda, use one congruence rule for each evaluation context:

ξ-·₁ : ∀ {L L′ M}
  → L —→ L′
    -----------------
  → L · M —→ L′ · M

ξ-·₂ : ∀ {V M M′}
  → Value V
  → M —→ M′
    -----------------
  → V · M —→ V · M′

One might instead define frames that specify evaluation contexts and have a single congruence rule.

data Frame : Set where
  □· : Term → Frame
  ·□ : (V : Term) → Value V → Frame

_[_] : Frame → Term → Term
(□· M) [ L ] = L · M
(·□ V _) [ M ] = V · M

ξ : ∀ F {M M′}
  → M —→ M′
    -------------------
  → F [ M ] —→ F [ M′ ]

However, one rapidly gets into problems. For instance, consider the proof that types are preserved by reduction.

preserve : ∀ {M N A}
  → ∅ ⊢ M ⦂ A
  → M —→ N
    ----------
  → ∅ ⊢ N ⦂ A
...
preserve (⊢L · ⊢M) (ξ (□· _) L—→L′) = (preserve ⊢L L—→L′) · ⊢M
preserve (⊢L · ⊢M) (ξ (·□ _ _) M—→M′) = ⊢L · (preserve ⊢M M—→M′)
...

The first of these two lines gives an error message:

  I'm not sure if there should be a case for the constructor ξ,
  because I get stuck when trying to solve the following unification
  problems (inferred index ≟ expected index):
    F [ M ] ≟ L · M′
    F [ M′ ] ≟ N
  when checking that the pattern ξ (□· _) L—→L′ has type L · M —→ N

And the second provokes a similar error.

This explains why so many formulations use one congruence rule for each evaluation context. But is there a way to make the approach with a single congruence rule work? Any citations to such approaches in the literature?

Thank you for your help. Go well, -- P

.   \ Philip Wadler, Professor of Theoretical Computer Science,
.   /\ School of Informatics, University of Edinburgh
.  /  \ and Senior Research Fellow, IOHK
. http://homepages.inf.ed.ac.uk/wadler/

From dreyer at mpi-sws.org Sat Oct 9 17:12:01 2021
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sat, 9 Oct 2021 23:12:01 +0200
Subject: [TYPES] Congruence rules vs frames
In-Reply-To: References: Message-ID:

Hi, Phil.

Yes, there is a way to make the single congruence rule work. See for example the way I set things up in my Semantics course notes (see Section 1.2):
https://courses.ps.uni-saarland.de/sem_ws1920/dl/21/lecture_notes_2nd_half.pdf

(Note: the general approach described below is not original, but my version may be easier to mechanize than others. Earlier work, such as Wright-Felleisen 94 and Harper-Stone 97, presents variations on this -- they use the term Replacement Lemma -- that I think are a bit clunkier and/or more annoying to mechanize. Wright-Felleisen cites Hindley-Seldin for this Replacement Lemma. In my version, the Replacement Lemma is broken into two lemmas -- Decomposition and Composition -- by defining a typing judgment for evaluation contexts.)

Under this approach, you:
1. Divide the definition of reduction into two relations: let's call them "base reduction" and "full reduction". The base one has all the interesting basic reduction rules that actually do something (e.g. beta). The full one has just one rule, which handles all the "search" cases via eval ctxts: it says that K[e] reduces to K[e'] iff e base-reduces to e'. I believe it isn't strictly necessary to separate into two relations, but I've tried it without separating, and separating makes the proof significantly cleaner.

2. Define a notion of evaluation context typing K : A => B (signifying that K takes a hole of type A and returns a term of type B). This is the key part that many other accounts skip, but it makes things cleaner.

With eval ctxt typing in hand, we can now prove the following two very easy lemmas (each requires only 1 or 2 lines of Coq):

3. Decomposition Lemma: If K[e] : B, then there exists A such that K : A => B and e : A.

4. Composition Lemma: If K : A => B and e : A, then K[e] : B.

(Without eval ctxt typing, you have to state and prove these lemmas as one joint Replacement lemma.)

Then, to prove preservation, you first prove preservation for base reduction in the usual way. Then, the proof of preservation for full reduction follows immediately by wrapping the base-reduction preservation lemma with calls to Decomposition and Composition (again, just a few lines of Coq).

My Semantics course notes just show this on pen and paper, but my students have also mechanized it in Coq, and we will be using that in the newest version of my course this fall. It is quite straightforward. The Coq source for the course is still in development at the moment, but I can share it with you if you're interested. I would be interested to know if for some reason this proof structure is harder to mechanize in Agda.

Best wishes,
Derek

On Sat, Oct 9, 2021 at 8:55 PM Philip Wadler wrote:
> [...]
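To spell out how the pieces of this recipe fit together, here is a self-contained Coq sketch, abstracted over an arbitrary language so that the per-language obligations become hypotheses; the names are mine, not taken from Derek's course material:

  Section Recipe.
    Variable tm : Type.
    Variable ty : Type.
    Variable typed : tm -> ty -> Prop.

    (* 1. Base reduction: the rules that actually do something. *)
    Variable base : tm -> tm -> Prop.

    (* Single evaluation frames; deep contexts are lists of frames. *)
    Variable frame : Type.
    Variable plug : frame -> tm -> tm.
    Variable frame_typed : frame -> ty -> ty -> Prop.

    (* Per-language obligations. *)
    Hypothesis base_preservation : forall t t' A,
      typed t A -> base t t' -> typed t' A.
    Hypothesis frame_decomp : forall f t B,
      typed (plug f t) B -> exists A, frame_typed f A B /\ typed t A.
    Hypothesis frame_comp : forall f t A B,
      frame_typed f A B -> typed t A -> typed (plug f t) B.

    Definition ectx := list frame.

    Fixpoint fill (K : ectx) (t : tm) : tm :=
      match K with
      | nil => t
      | cons f K' => plug f (fill K' t)
      end.

    (* 2. Context typing: ctx_typed K A B says K maps a hole of type A
          to a term of type B. *)
    Inductive ctx_typed : ectx -> ty -> ty -> Prop :=
    | ct_nil : forall A, ctx_typed nil A A
    | ct_cons : forall f K A B C,
        frame_typed f B C -> ctx_typed K A B -> ctx_typed (cons f K) A C.

    (* Full reduction: a single congruence rule closing base reduction
       under evaluation contexts. *)
    Inductive step : tm -> tm -> Prop :=
    | s_ctx : forall K t t', base t t' -> step (fill K t) (fill K t').

    (* 4. Composition. *)
    Lemma composition : forall K A B t,
      ctx_typed K A B -> typed t A -> typed (fill K t) B.
    Proof.
      intros K A B t HK; induction HK; simpl; intros Ht; eauto.
    Qed.

    (* 3. Decomposition. *)
    Lemma decomposition : forall K t B,
      typed (fill K t) B -> exists A, ctx_typed K A B /\ typed t A.
    Proof.
      induction K as [| f K IH]; simpl; intros t B Ht.
      - exists B. split. apply ct_nil. exact Ht.
      - destruct (frame_decomp _ _ _ Ht) as (B' & Hf & Ht').
        destruct (IH _ _ Ht') as (A & HK & Ht'').
        exists A. split. eapply ct_cons; eauto. exact Ht''.
    Qed.

    (* Preservation follows by wrapping base preservation with calls to
       Decomposition and Composition, as described above. *)
    Theorem preservation : forall t t' A,
      typed t A -> step t t' -> typed t' A.
    Proof.
      intros t t' A Ht Hstep. revert Ht.
      destruct Hstep as [K u u' Hbase]. intros Ht.
      destruct (decomposition _ _ _ Ht) as (B & HK & Hu).
      eapply composition; eauto.
    Qed.
  End Recipe.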
My Semantics course notes just show this on pen and paper, but my
students have also mechanized it in Coq, and we will be using that in
the newest version of my course this fall. It is quite
straightforward. The Coq source for the course is still in
development at the moment, but I can share it with you if you're
interested. I would be interested to know if for some reason this
proof structure is harder to mechanize in Agda.

Best wishes,
Derek

On Sat, Oct 9, 2021 at 8:55 PM Philip Wadler wrote:
>
> [...]

From julesjacobs at gmail.com Sat Oct 9 17:28:18 2021
From: julesjacobs at gmail.com (Jules Jacobs)
Date: Sat, 9 Oct 2021 23:28:18 +0200
Subject: [TYPES] Congruence rules vs frames
In-Reply-To:
References:
Message-ID:

I had previously addressed the first part of this only to Philip
Wadler, so I reproduce it here:

CompCert uses evaluation contexts:
https://compcert.org/doc/html/compcert.cfrontend.Csem.html#context
Iris does too:
https://gitlab.mpi-sws.org/iris/iris/-/blob/master/iris_heap_lang/lang.v#L412

This is slightly different from what you propose, because here the
nesting is handled in the definition of contexts rather than in the
step relation. Your approach works too:
https://pastebin.com/raw/TQU9UrnS
I have here represented the frame f directly as its f[_] function, but
I think that shouldn't make a difference.

About decomposition/composition: I have found it useful to define
context typing as

  (K : A => B) := ∀ e, e : A => K[e] : B

This means you don't need to give the inductive rules, and you get
composition for free. In fact, I usually don't even give context
typing a name in Coq, and instead prove this lemma by induction on
typing:

  K[e] : B  <=>  ∃ A, e : A ∧ ∀ e, e : A => K[e] : B.
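[As a small illustration of this extensional style, here is a Coq
sketch over an ambient language left abstract: tm, ty, and typed below
are placeholder parameters rather than a concrete development, and
contexts are represented directly as functions, as in the pastebin
development above.]

Section extensional_ctx_typing.
  Variable tm ty : Type.
  Variable typed : tm -> ty -> Prop.

  (* Extensional context typing: no inductive rules needed. *)
  Definition ctx_typed (K : tm -> tm) (A B : ty) : Prop :=
    forall e, typed e A -> typed (K e) B.

  (* Composition is free: it is just function application. *)
  Lemma composition K A B e :
    ctx_typed K A B -> typed e A -> typed (K e) B.
  Proof. intros HK He. exact (HK e He). Qed.

  (* Context typing also composes along composition of contexts. *)
  Lemma ctx_typed_compose K1 K2 A B C :
    ctx_typed K1 A B -> ctx_typed K2 B C ->
    ctx_typed (fun e => K2 (K1 e)) A C.
  Proof. intros H1 H2 e He. apply H2, H1, He. Qed.
End extensional_ctx_typing.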
Jules

On Sat, Oct 9, 2021 at 11:14 PM Derek Dreyer wrote:
>
> [...]
From gshen42 at ucsc.edu Sat Oct 9 20:02:11 2021
From: gshen42 at ucsc.edu (Gan Shen)
Date: Sat, 9 Oct 2021 17:02:11 -0700
Subject: [TYPES] Congruence rules vs frames
In-Reply-To:
References:
Message-ID:

Hi Philip,

Here's a hacky (but simple!) way that works on my end: try defining
_[_] like this

```
_[_]′ : Frame → Term → Term × Term
(□· M) [ L ]′  =  L , M
(·□ V _) [ M ]′  =  V , M

_[_] : Frame → Term → Term
F [ T ] = let M₁ , M₂ = F [ T ]′ in _·_ M₁ M₂
```

Here's my explanation of why it works (disclaimer: I don't know too
much about Agda internals; this is purely based on my experience).
When doing unification, Agda will first evaluate terms to weak head
normal form (WHNF). The problem with your old _[_] is that the
evaluation is blocked by the pattern matching, even though it's very
clear from the code that it always returns a WHNF starting with _·_;
unfortunately Agda doesn't know that. The new _[_] fixes this by
unconditionally returning a WHNF starting with _·_.

Best,
Gan

On Sat, Oct 9, 2021 at 2:15 PM Derek Dreyer wrote:
>
> [...]
From wadler at inf.ed.ac.uk Sun Oct 10 09:15:27 2021
From: wadler at inf.ed.ac.uk (Philip Wadler)
Date: Sun, 10 Oct 2021 14:15:27 +0100
Subject: [TYPES] Congruence rules vs frames
In-Reply-To:
References:
Message-ID:

Thanks, Derek, and everyone else. I am asking for a friend.
(Literally!) The friend's situation is adapting theory to a more
realistic language with more constructs. As the number of constructs
grows, the disadvantage of one congruence rule for each construct
grows, and the advantages of contexts or frames become more
pronounced.

In our informal developments the advantage of contexts or frames is
that they capture commonality in the rules, resulting in less ink on
the page. However, if I read Derek's advice correctly, using contexts
or frames doesn't save much ink. The ink that would have been required
for writing down the congruence rules and the corresponding cases in
the proofs now goes into writing down the corresponding cases in the
composition and decomposition lemmas. Jules points out a neat trick
that saves effort for a proof of preservation, but that won't help
elsewhere. For instance, I first ran into the problem when trying to
prove something simpler than preservation: that if a term is a value
then no reduction applies. Apparently, identifying a commonality in
the statement of the rules doesn't mean the properties of the rules
get proved uniformly; they still require the same work as if the
congruence rules were given separately. Alas, there is no free lunch.

I managed to carry out a proof along the lines described by Derek and
Jules, but it used more ink (and more think!) than my original proof
using a congruence rule for each constructor. In this, as in much
else, I find myself agreeing with Bob.

Have I got that right? Or is there a magic bullet that I'm missing?
Go well,  -- P

.   \ Philip Wadler, Professor of Theoretical Computer Science,
.   /\ School of Informatics, University of Edinburgh
.  /  \ and Senior Research Fellow, IOHK
. http://homepages.inf.ed.ac.uk/wadler/

On Sat, 9 Oct 2021 at 22:12, Derek Dreyer wrote:
>
> [...]
From dreyer at mpi-sws.org Sun Oct 10 09:28:25 2021
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sun, 10 Oct 2021 15:28:25 +0200
Subject: [TYPES] Congruence rules vs frames
In-Reply-To:
References:
Message-ID:

@Jules: I think you're right and I was wrong. There's no need to
define the eval ctxt typing *intensionally*, as I have been doing. I
might as well define it *extensionally* (which is more my style
anyway, but somehow I thought the extensional approach didn't work so
well here). Ironically, what you end up with then is just the old idea
of the Replacement lemma. So never mind what I said about
Decomposition/Composition being an improvement over Replacement. I
think I'm gonna change my course notes to follow your approach. ;-)

The case where I don't know how to do the "extensional" thing (or
where there's no point in doing so?) is when proving preservation for
an abstract machine semantics where the evaluation context is made
explicit as the "stack" of the machine, and where the stack may
contain frames that do not directly correspond to any language term.
I have used such abstract machines in several papers. There, the only
way I could see to set up preservation was to define (intensionally) a
bespoke typing judgment on the continuation stacks (and their
constituent frames). But also for that kind of semantics, since all
the "search" steps are actually made explicit as steps of computation,
there's no need to prove Decomposition or Replacement at all. In fact,
in that semantics, the reduction relation is totally flat, so the
preservation proof is just by case analysis, not induction. You can
see an example of this in one of my earliest papers, spelled out in
full gory detail:
https://people.mpi-sws.org/~dreyer/papers/recursion/tr/main.pdf

Thanks!
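[For what such a bespoke stack typing can look like, here is a Coq
sketch with made-up names: a CK-style machine whose continuation stack
is a list of frames, typed frame by frame. The surrounding language is
left abstract as parameters; the point is only the shape of the
judgments, not any particular machine from the papers mentioned
above.]

Section stack_typing.
  Variable tm ty : Type.
  Variable typed : tm -> ty -> Prop.
  Variable TArrow : ty -> ty -> ty.

  (* Continuation frames of a CK-style machine. A realistic machine
     may also carry frames with no source-level counterpart. *)
  Inductive frame :=
  | KAppL (a : tm)    (* evaluating the function; argument pending *)
  | KAppR (v : tm).   (* evaluating the argument; function value saved *)

  Definition stack := list frame.

  (* A frame takes a hole of one type to a result type. *)
  Inductive frame_typed : frame -> ty -> ty -> Prop :=
  | KTAppL a A B : typed a A -> frame_typed (KAppL a) (TArrow A B) B
  | KTAppR v A B : typed v (TArrow A B) -> frame_typed (KAppR v) A B.

  (* Stack typing K : A => B, built frame by frame. *)
  Inductive stack_typed : stack -> ty -> ty -> Prop :=
  | STNil A : stack_typed nil A A
  | STCons F K A B C :
      frame_typed F A B -> stack_typed K B C ->
      stack_typed (F :: K) A C.

  (* A machine configuration <e, K> is typed by cutting the two
     judgments together; preservation is then a flat case analysis
     on machine steps, as described above. *)
  Definition config_typed (e : tm) (K : stack) (B : ty) : Prop :=
    exists A, typed e A /\ stack_typed K A B.
End stack_typing.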
Derek

On Sat, Oct 9, 2021 at 11:29 PM Jules Jacobs wrote:
>
> [...]
From dreyer at mpi-sws.org Sun Oct 10 09:46:38 2021
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sun, 10 Oct 2021 15:46:38 +0200
Subject: [TYPES] Congruence rules vs frames
In-Reply-To:
References:
Message-ID:

> I managed to carry out a proof along the lines described by Derek and
> Jules, but it used more ink (and more think!) than my original proof
> using a congruence rule for each constructor. In this, as in much
> else, I find myself agreeing with Bob.

I don't think there's likely to be a big difference one way or the
other when proving syntactic properties. I personally like to use the
evaluation context semantics because the idea of an evaluation context
comes in extremely handy later on when you get to deeper semantic or
Hoare-style reasoning (of the sort we do when formulating logical
relations arguments in Iris). At that point, the "inductive" cases of
the relation can be handled uniformly using a "Bind" lemma. In the
case of unary logical relations, this lemma looks something like the
following:

  e \in E[A]
  forall v:Val. (v \in V[A]) => (K[v] \in E[B])
  ----------------------------------------------------
  K[e] \in E[B]

where E[B] is the logical relation on terms of type B, and V[A] is the
logical relation on values of type A.

Using the Bind Lemma, for example, you can take a proof goal like the
following (this is the case of proving "compatibility" of the logical
relation for function applications):

  e1 \in E[A -> B]
  e2 \in E[A]
  --------------------
  e1 e2 \in E[B]

and through two applications of Bind, you can reduce it to:

  v1 \in V[A -> B]
  v2 \in V[A]
  --------------------
  v1 v2 \in E[B]

This approach scales also to languages with a wide range of features
(recursive types, state, concurrency), at least if you work in a
logical framework like Iris. In fact, I used this very example to
motivate the use of Iris in my POPL'18 keynote (see around 28:30; the
Bind rule is then discussed a few minutes later):
https://www.youtube.com/watch?v=8Xyk_dGcAwk&ab_channel=POPL2018

Without evaluation contexts, I don't know how to express such a lemma
nicely.

Cheers,
Derek
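[To spell out how the two uses of Bind go through, here is an abstract
Coq sketch. Everything is assumed as a parameter or hypothesis: the
expression and value relations E and V, the application former tapp,
and the Bind lemma itself, specialized to the two application frames.
Nothing here is Iris; it only shows the plumbing of the compatibility
proof.]

Section bind_sketch.
  Variable tm ty : Type.
  Variable TArrow : ty -> ty -> ty.
  Variable tapp : tm -> tm -> tm.
  Variable V E : ty -> tm -> Prop.   (* value / expression relations *)

  (* The Bind lemma, assumed here, instantiated at the two
     application frames [] e2 and v1 []. *)
  Hypothesis bind_appL : forall A B e1 e2,
    E A e1 ->
    (forall v1, V A v1 -> E B (tapp v1 e2)) ->
    E B (tapp e1 e2).
  Hypothesis bind_appR : forall A B v1 e2,
    E A e2 ->
    (forall v2, V A v2 -> E B (tapp v1 v2)) ->
    E B (tapp v1 e2).

  (* The value-level fact one is left with after both Binds. *)
  Hypothesis app_val : forall A B v1 v2,
    V (TArrow A B) v1 -> V A v2 -> E B (tapp v1 v2).

  (* Compatibility for application, by two uses of Bind. *)
  Lemma compat_app A B e1 e2 :
    E (TArrow A B) e1 -> E A e2 -> E B (tapp e1 e2).
  Proof.
    intros H1 H2.
    eapply bind_appL; [exact H1 |].
    intros v1 Hv1.
    eapply bind_appR; [exact H2 |].
    intros v2 Hv2.
    exact (app_val _ _ _ _ Hv1 Hv2).
  Qed.
End bind_sketch.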
From julesjacobs at gmail.com Sun Oct 10 11:35:31 2021
From: julesjacobs at gmail.com (Jules Jacobs)
Date: Sun, 10 Oct 2021 17:35:31 +0200
Subject: [TYPES] Congruence rules vs frames
In-Reply-To:
References:
Message-ID:

Here is a proof that values don't step:
https://pastebin.com/raw/V9Kg0cY4

The relevant parts are:

Lemma ctx1_not_val e k : ctx1 k -> value (k e) -> False.
Proof. intros [] Hv; inversion Hv. Qed.

Lemma ctx_val e k : ctx k -> value (k e) -> k = id.
Proof.
  intros Hk Hv. induction Hk.
  - by apply functional_extensionality.
  - exfalso. by eapply ctx1_not_val.
Qed.

Lemma val_no_step e e' : value e -> ¬ step e e'.
Proof.
  intros Hv Hs. destruct Hs.
  assert (k = id) as -> by eauto using ctx_val.
  destruct H0; inversion Hv.
Qed.

The proof script is of constant size (not proportional to the number
of language constructs). So maybe the not-so-magic bullet is tactic
scripts that can uniformly solve a bunch of cases. However, in this
case perhaps it can be done in Agda too, because we have only invoked
a tactic on multiple subgoals in the pattern destruct A; inversion B.
This seems to correspond to dependent pattern matching, so maybe in
Agda there would also only be O(1) cases?

@Derek: with the stack machine there are run-time constructs that
enter into the state of the operational semantics that are not already
covered by the expression typing judgement. The fact that this doesn't
happen in the other case seems to be the reason why an extensional
approach works. So maybe the intensional approach is unavoidable in
that case...?
Jules

On Sun, Oct 10, 2021 at 3:47 PM Derek Dreyer wrote:
>
> [...]
From roberto at dicosmo.org Tue Oct 12 10:18:54 2021
From: roberto at dicosmo.org (Roberto Di Cosmo)
Date: Tue, 12 Oct 2021 16:18:54 +0200
Subject: [TYPES] Policy news from France: the new national plan for open science with concrete measures for open access and software
In-Reply-To:
References:
Message-ID:

Dear all,

you may remember the announcement made in July of the Second French
National Plan for Open Science, which included significant new
provisions for supporting open access, open-sourcing software produced
by research, and recognizing the development efforts made by
colleagues and research engineers.

I'm delighted to share the news that the official English version is
now available online, and it will allow you to better appreciate the
breadth of scope of this official plan of the French Ministry of
Research. The translation took a bit longer than expected, but it was
worth the wait.

Feel free to share as you see fit.

Cheers

--
Roberto
------------------------------------------------------------------
Computer Science Professor
(on leave at Inria from IRIF/Université de Paris)
Director, Software Heritage
E-mail : roberto at dicosmo.org
Web : http://www.dicosmo.org
Twitter : http://twitter.com/rdicosmo
INRIA, Bureau C328, 2, Rue Simone Iff, CS 42112, 75589 Paris Cedex 12
Tel : +33 1 80 49 44 42
------------------------------------------------------------------
GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3

On Mon, 19 Jul 2021 at 14:16, Roberto Di Cosmo wrote:
> Dear all,
> I am delighted to share great news from France: on July 6th, the French
> Ministry of Research unveiled the second multi-annual National Plan for
> Open Science, which is a landmark policy document.
>
> The official document is available online from the website of the French
> Ministry of Research, with an unofficial (and perfectible) English
> translation available at
> https://www.ouvrirlascience.fr/second-national-plan-for-open-science/
> (an official English version will be available in a few weeks from the
> Ministry of Research dedicated page).
>
> For our community, I believe two parts of this plan are particularly
> interesting: section 1, which contains important concrete provisions to
> foster Open Access, and section 3, which squarely puts software on a par
> with publications and data in research and Open Science, and announces a
> number of measures designed to open up research software and better
> recognize software development in research.
>
> Here are some significant measures that are announced for research
> publications (section 1 of the plan):
>
> - achieve 100% open access publications by 2030
> - support Diamond open access publishing
> - request that the data and code associated with the article texts
>   submitted be provided
> - promote the use of narrative CVs to reduce the importance of
>   quantitative assessments to the benefit of qualitative ones
>
> Here are some highlights of the measures related to software (section 3
> of the plan):
>
> - a clear recommendation to make research software available under an
>   open source licence, unless there are strong reasons not to do so
> - the creation of a high level expert group dedicated to research
>   software in the National Committee for Open Science
> - the objective to achieve better recognition of software development
>   in career evaluation for researchers and engineers
> - a renewed and strengthened official support of Software Heritage,
>   with a recommendation to archive in it *all research software
>   produced in France* (a simple HOWTO is available at
>   https://www.softwareheritage.org/howto-archive-and-reference-your-code/)
> - a plan to get an ISO standard for the Software Heritage intrinsic
>   identifiers for source code [1]
>
> To the best of my knowledge, it is the first time that such an organic,
> clearly designed strategy for (open source) software in research is laid
> out in an official government-level document.
>
> --
> Roberto
>
> [1] these identifiers (aka SWHID) allow one to pinpoint inside the
> archive any software artifact, down to the level of the line of code.
For example, here is a *nice fragment in the Apollo 11 > source code > * and > here is *a mythical routine in the Quake III Arena source code > * > *. *For a short demo, see https://urldefense.com/v3/__https://www.youtube.com/watch?v=8nlSvYh7VpI__;!!IBzWLUs!AzbVj4FLV-8W-8WpFNwhchqfrnzaNb6aJq3QOS3gBD8iqklubqjZ2kj-9_kWbQXvXBrlT6_zmAE$ (the > interesting part starts around 9:00) > > ------------------------------------------------------------------ > Computer Science Professor > (on leave at Inria from IRIF/Universit? de Paris) > > Director > Software Heritage E-mail : roberto at dicosmo.org > INRIA Web : https://urldefense.com/v3/__http://www.dicosmo.org__;!!IBzWLUs!AzbVj4FLV-8W-8WpFNwhchqfrnzaNb6aJq3QOS3gBD8iqklubqjZ2kj-9_kWbQXvXBrlQRrcKzc$ > Bureau C328 Twitter : https://urldefense.com/v3/__http://twitter.com/rdicosmo__;!!IBzWLUs!AzbVj4FLV-8W-8WpFNwhchqfrnzaNb6aJq3QOS3gBD8iqklubqjZ2kj-9_kWbQXvXBrlqnETQ3Y$ > 2, Rue Simone Iff Tel : +33 1 80 49 44 42 > CS 42112 > 75589 Paris Cedex 12 > ------------------------------------------------------------------ > > GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3 > From nestmann at gmail.com Wed Dec 8 17:18:22 2021 From: nestmann at gmail.com (Uwe Nestmann) Date: Wed, 8 Dec 2021 23:18:22 +0100 Subject: [TYPES] screencast series on the lambda cube Message-ID: <2F2119AE-C0AD-4345-B6BE-8A390C3843E5@gmail.com> Dear types-enthusiasts, a group of bright Master students of mine produced a series of 13 screencasts covering the ?Lambda Cube Unboxed? and made it accessible on Youtube at https://www.youtube.com/playlist?list=PLNwzBl6BGLwOKBFVbvp-GFjAA_ESZ--q4. It is largely based on parts of the wonderful book https://www.cambridge.org/de/academic/subjects/computer-science/programming-languages-and-applied-logic/type-theory-and-formal-proof-introduction by Rob Nederpelt and Herman Geuvers. Maybe, you find it useful for you and your students. In any case, we would be happy to get your feedback. Best regards, Uwe Nestmann From aaronngray.lists at gmail.com Thu Dec 16 15:37:41 2021 From: aaronngray.lists at gmail.com (Aaron Gray) Date: Thu, 16 Dec 2021 20:37:41 +0000 Subject: [TYPES] Wanted: Nancy McCracken's Ph.D Thesis - An Investigation of a Programming Language with a Polymorphic Type Structure Message-ID: I recently got her Ph.D Thesis scanned from Microfiche by ProQuest. But unfortunately it is missing a page, page 113. This is a seminal work as she introduces the notion of Kinds. I am wondering if anyone has a complete copy in their archives please ? Regards, Aaron -- Aaron Gray Independent Open Source Software Engineer, Computer Language Researcher, and amateur computer scientist. From aaronngray.lists at gmail.com Thu Dec 16 15:38:27 2021 From: aaronngray.lists at gmail.com (Aaron Gray) Date: Thu, 16 Dec 2021 20:38:27 +0000 Subject: [TYPES] =?utf-8?q?Wanted=3A_Jean-Yves_Girard=27s_=22Interpre?= =?utf-8?q?=CC=81tation_fonctionnelle_et_e=CC=81limination_des_coup?= =?utf-8?q?ures_de_l=27arithme=CC=81tique_d=27ordre_supe=CC=81rieur?= =?utf-8?q?=22?= Message-ID: Jean-Yves Girard Ph.D thesis :- "Interpre?tation fonctionnelle et e?limination des coupures de l'arithme?tique d'ordre supe?rieur" Putting out the feelers for a copy of Jean-Yves Girard Ph.D thesis again. Regards, Aaron -- Aaron Gray Independent Open Source Software Engineer, Computer Language Researcher, and amateur computer scientist. 
From Clement.Aubert at math.cnrs.fr Mon Dec 27 12:51:29 2021
From: Clement.Aubert at math.cnrs.fr (Clément Aubert)
Date: Mon, 27 Dec 2021 12:51:29 -0500
Subject: [TYPES] Wanted: Jean-Yves Girard's "Interprétation fonctionnelle
 et élimination des coupures de l'arithmétique d'ordre supérieur"
In-Reply-To: References:
Message-ID:

There are "bits and pieces" hosted at
https://www.cs.cmu.edu/~kw/scans/girard72thesis.pdf
as presented at https://www.cs.cmu.edu/~kw/scans.html

Maybe you can ask Kevin Watkins for more and/or a source?

On 12/16/21 3:38 PM, Aaron Gray wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Jean-Yves Girard's Ph.D. thesis: "Interprétation fonctionnelle et
> élimination des coupures de l'arithmétique d'ordre supérieur"
>
> Putting out the feelers for a copy of Jean-Yves Girard's Ph.D. thesis
> again.
>
> Regards,
>
> Aaron

--
Clément Aubert,
Assistant Professor of Computer Science,
School of Computer and Cyber Sciences,
Augusta University,
https://spots.augusta.edu/caubert/

From wadler at inf.ed.ac.uk Mon Dec 27 11:54:27 2021
From: wadler at inf.ed.ac.uk (Philip Wadler)
Date: Mon, 27 Dec 2021 16:54:27 +0000
Subject: [TYPES] Postdoctoral Research Assistant position at University of
 Edinburgh
Message-ID:

The School of Informatics, University of Edinburgh invites applications
for a postdoctoral research assistant on the SAGE project (Scoped and
Gradual Effects), funded by Huawei. The post will be for eighteen months.
We seek applicants at an international level of excellence. The School of
Informatics at Edinburgh is among the strongest in the world, and
Edinburgh is known as a cultural centre providing a high quality of life.
If interested, please contact Prof. Philip Wadler. Full details are here.
Applications close at 5pm on 14 January 2022.

The Opportunity:

In the early 2000s, Plotkin and Power introduced an approach to
computational effects in programming languages known as *algebraic
effects*. This approach has attracted a great deal of interest and
further development, including the introduction by Plotkin and Pretnar
of *effect handlers*. Several research languages, such as Eff, Frank,
Koka, and Links, support the use of algebraic effects. A few languages,
including Multicore OCaml and WebAssembly, support specific algebraic
effects, and we may see general algebraic effects make their way into
mainstream languages in the future.

The proposed research has two focusses: (a) a simpler version of static
scoping for algebraic effect handlers, and (b) gradual types for
algebraic effect systems.

The former is a question of how to match an effect with its corresponding
handler. This issue arises when two effects have the same name, and you
expect the handler for one but invoke the handler for the other. Some
current solutions, like tunnelling and instance variables, are based on
lexical scoping. Despite this insight, however, tunnelling and instance
variables remain more complex than lexical binding of a variable to a
value. In particular, the type system needs to track that an effect does
not escape the scope of its surrounding handler (reminiscent of the
funarg problem with dynamic variable binding). Can we do better?

The latter is a question of how to integrate legacy code that lacks typed
effects with new code that supports typed effects. Usually, integration
of dynamically and statically typed languages is best explored in the
context of gradual types, which support running legacy code and new code
in tandem. To date there is only a tiny amount of work in the area of
gradual types for effects, and important aspects of algebraic effects,
such as effect handlers, have yet to be addressed.

Overall, work to date on effect handlers is often complex, and attention
is required to where ideas can be simplified, which is the ultimate
objective of this project.
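To make the handler-matching question concrete, here is a minimal sketch
using OCaml 5's Effect module (which post-dates most of the languages
mentioned above); the effect name Ask and the helper with_ask are invented
for illustration. With two nested handlers for the same effect, perform is
answered by the dynamically innermost handler, which may not be the one
the programmer lexically intended; that mismatch is what tunnelling and
instance variables aim to rule out.

    open Effect
    open Effect.Deep

    (* A single effect: ask the surrounding context for an int. *)
    type _ Effect.t += Ask : int Effect.t

    (* Run [f], answering every [Ask] it performs with [v]. *)
    let with_ask (v : int) (f : unit -> 'a) : 'a =
      try_with f ()
        { effc = fun (type b) (eff : b Effect.t) ->
            match eff with
            | Ask -> Some (fun (k : (b, _) continuation) -> continue k v)
            | _ -> None }

    let () =
      (* Both handlers handle [Ask]; the dynamically closest one wins,
         so this prints 2, not 1. *)
      with_ask 1 (fun () ->
        with_ask 2 (fun () ->
          print_int (perform Ask)))

For the gradual-types side, a toy sketch (again in OCaml, with all names
invented) of the boundary between untyped "legacy" code and typed "new"
code: legacy values live in a universal type, and checked casts blame the
boundary when a value turns out not to have the expected type.

    (* A universal type for legacy, dynamically typed values. *)
    type dyn = Int of int | Fun of (dyn -> dyn)

    exception Blame of string

    (* Casts out of [dyn]; on failure they blame the named boundary. *)
    let to_int lbl = function Int n -> n | _ -> raise (Blame lbl)
    let to_fun lbl = function Fun f -> f | _ -> raise (Blame lbl)

    (* Legacy code: an untyped successor function. *)
    let legacy_succ : dyn =
      Fun (fun d -> Int (to_int "legacy_succ's argument" d + 1))

    (* New, typed code imports it across a cast boundary. *)
    let typed_succ : int -> int =
      fun n -> to_int "legacy_succ's result"
                 ((to_fun "legacy_succ" legacy_succ) (Int n))

    let () = print_int (typed_succ 41)  (* prints 42 *)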
Your skills and attributes for success:

Essential
- PhD (or close to completion) in programming languages or a related
  discipline.
- Expertise and experience in relevant areas, including lambda calculus,
  type systems, and compilation.
- Ability to implement prototype language processors in OCaml, Haskell,
  or a related language.
- Excellent written and oral communication, analytical, and
  time-management skills.
- Ability to collaborate with a team.

Desirable
- Published research in programming languages or a related field,
  preferably in top-tier conferences.
- Experience in presenting at conferences or seminars.
- Knowledge of effect systems or gradual type systems.
- Experience with a proof assistant, such as Agda or Coq.
- Strong programming and software engineering skills.

.   \ Philip Wadler, Professor of Theoretical Computer Science,
.   /\ School of Informatics, University of Edinburgh
.  /  \ and Senior Research Fellow, IOHK
. http://homepages.inf.ed.ac.uk/wadler/

The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.