From r.c.de.vrijer at vu.nl Wed Mar 14 10:44:17 2012 From: r.c.de.vrijer at vu.nl (Vrijer, R.C. de) Date: Wed, 14 Mar 2012 14:44:17 +0000 Subject: [TYPES] passing away of N.G. de Bruijn Message-ID: Sad to announce the passing away of N. G. (Dick) de Bruijn July 9, 1918 - February 17, 2012 We are grateful to this remarkable and inspiring man. In addition to putting his everlasting mark on diverse areas in pure and applied mathematics, he introduced, in 1967, the pioneering language AUTOMATH and the idea of automated verification of mathematical proofs after formalization. This application of type theory and rewriting is both fundamental and high-tech. The resulting field of formal verification has developed into a major industrial contribution of mathematics. Henk Barendregt Jan Willem Klop Roel de Vrijer From gc at pps.jussieu.fr Wed Mar 28 09:13:52 2012 From: gc at pps.jussieu.fr (Giuseppe Castagna) Date: Wed, 28 Mar 2012 15:13:52 +0200 Subject: [TYPES] A question about unification terminology Message-ID: <4F730E90.1070802@pps.jussieu.fr> Dear Typers, the problem of finding a substitution σ such that tσ=sσ is called unification; the problem of finding a substitution σ such that tσ=s is called semi-unification; but what do we call the problem of finding a substitution σ such that tσ≤sσ, where ≤ denotes a generic subtyping relation? I found in the literature the generic name of "subtype constraint resolution" (e.g., Pottier) but I was wondering whether there exists a more specific name. Thanks in advance for your answers and pointers to related work. Beppe From wintersmind at gmail.com Wed Mar 28 14:47:45 2012 From: wintersmind at gmail.com (Christian Skalka) Date: Wed, 28 Mar 2012 14:47:45 -0400 Subject: [TYPES] A question about unification terminology In-Reply-To: <4F730E90.1070802@pps.jussieu.fr> References: <4F730E90.1070802@pps.jussieu.fr> Message-ID: Beppe, > but what do we call the problem of finding a substitution σ such that > tσ≤sσ, where ≤ denotes a generic subtyping relation? > > I found in the literature the generic name of "subtype constraint > resolution" (e.g., Pottier) but I was wondering whether there exists a more > specific name. > > Thanks in advance for your answers and pointers to related work. > In related work, we have been using the terms constraint "closure" and "consistency" to describe algorithmic techniques for addressing the problem you describe, which together guarantee the logical property of constraint "solvability". See especially Section 7 of [skalka-etal-jfp08] (bibtex below) which contains pertinent discussion. Note that we use a regular tree semantics of (recursive) type constraints in our work, so we are concerned with *solvability* (proving existence of a solution) rather than obtaining a solution in a concrete sense (as with e.g. a unifying substitution). This is because regular trees may be infinitely large. We adapt this approach and terminology from earlier work by Smith et al., especially [eifrig-etal-mfps95, trifonov-smith-sas96]. 
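To give a concrete feel for the closure/consistency idea on a toy type language, here is a minimal Haskell sketch. It is only an illustration under simplifying assumptions (finite constraint sets, no recursive types, a fixed contravariant/covariant decomposition for arrows), not the algorithm of the papers cited below; the module and function names are invented for the example.

```haskell
-- A toy illustration of constraint "closure" and "consistency" for subtype
-- constraints; not the algorithm of the cited papers.
module ConstraintClosure where

import Data.List (nub)

data Ty = TVar String | TInt | TBool | TArrow Ty Ty
  deriving (Eq, Show)

type Constraint = (Ty, Ty)   -- (s, t) is read  s <= t

-- One closure step: decompose arrow/arrow constraints (contravariant in the
-- domain, covariant in the codomain) and close transitively through variables.
step :: [Constraint] -> [Constraint]
step cs = nub (cs ++ decomposed ++ transitive)
  where
    decomposed = concat [ [(s', s), (t, t')]
                        | (TArrow s t, TArrow s' t') <- cs ]
    transitive = [ (s, u) | (s, TVar a) <- cs, (TVar b, u) <- cs, a == b ]

-- Iterate to a fixed point; this terminates because every new constraint
-- relates subterms of the original types, of which there are finitely many.
close :: [Constraint] -> [Constraint]
close cs = let cs' = step cs in if cs' == cs then cs else close cs'

-- Consistency: a closed set is deemed solvable iff it relates no manifestly
-- incompatible constructors.
consistent :: [Constraint] -> Bool
consistent = all ok
  where
    ok (TInt, TBool)       = False
    ok (TBool, TInt)       = False
    ok (TArrow _ _, TInt)  = False
    ok (TArrow _ _, TBool) = False
    ok (TInt, TArrow _ _)  = False
    ok (TBool, TArrow _ _) = False
    ok _                   = True

-- Example: close  { a -> Int <= Bool -> b }  and check consistency.
demo :: Bool
demo = consistent (close [(TArrow (TVar "a") TInt, TArrow TBool (TVar "b"))])
```

Running demo closes the single constraint a -> Int <= Bool -> b by adding Bool <= a and Int <= b and reports the result consistent; a closed, consistent set is the sense in which such algorithms witness solvability without ever producing a concrete substitution.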
-chris @ARTICLE{skalka-etal-jfp08, AUTHOR = {Christian Skalka and Scott Smith and David Van Horn}, TITLE = {\textcolor{navy}{Types and Trace Effects of Higher Order Programs}}, YEAR = {2008}, VOLUME = 18, NUMBER = 2, PAGES = {179-249}, JOURNAL = {Journal of Functional Programming}, PDF = { http://www.cs.uvm.edu/~skalka/skalka-pubs/skalka-smith-vanhorn-jfp07.pdf}} @InProceedings{trifonov-smith-sas96, author = "Valery Trifonov and Scott Smith", title = "Subtyping Constrained Types", booktitle = "Proceedings of the Third International Static Analysis Symposium", series = "LNCS", volume = "1145", pages = "349--365", year = "1996", month = sep, publisher = "SV", note = "\url{http://rum.cs.yale.edu/trifonov/papers/subcon.ps.gz }", } @InProceedings{eifrig-smith-trifonov-94, author = "Jonathan Eifrig and Scott Smith and Valery Trifonov", title = "Type Inference for Recursively Constrained Types and its Application to {OOP}", booktitle = "Mathematical Foundations of Programming Semantics, New Orleans", series = "Electronic Notes in Theoretical Computer Science", publisher = "Elsevier", volume = "1", year = "1995", note = "\url{http://www.elsevier.nl/locate/entcs/volume1.html}", } > > > Beppe > -- Christian Skalka Associate Professor Department of Computer Science University of Vermont http://www.cs.uvm.edu/~skalka From henglein at diku.dk Wed Mar 28 17:09:22 2012 From: henglein at diku.dk (Fritz Henglein) Date: Wed, 28 Mar 2012 23:09:22 +0200 Subject: [TYPES] A question about unification terminology In-Reply-To: <4F730E90.1070802@pps.jussieu.fr> References: <4F730E90.1070802@pps.jussieu.fr> Message-ID: Hi Beppe: On Wed, Mar 28, 2012 at 3:13 PM, Giuseppe Castagna wrote: > the problem of finding a substitution σ such that tσ=sσ is called > unification; > > the problem of finding a substitution σ such that tσ=s is called > semi-unification; I'd call that matching. Semi-unification is the problem of finding σ such that tσσ' = sσ for some σ'. If one defines t <= s if tσ' = s for some σ', semi-unification is a particular subtyping relation in the sense you're looking for, since it is reflexive and transitive: > but what do we call the problem of finding a substitution σ such that > tσ≤sσ, where ≤ denotes a generic subtyping relation? Regards, Fritz From Francois.Fages at inria.fr Wed Mar 28 17:16:55 2012 From: Francois.Fages at inria.fr (François Fages) Date: Wed, 28 Mar 2012 23:16:55 +0200 Subject: [TYPES] A question about unification terminology In-Reply-To: <583433490.453527.1332961229478.JavaMail.root@zmbs3.inria.fr> References: <583433490.453527.1332961229478.JavaMail.root@zmbs3.inria.fr> Message-ID: <4F737FC7.2080505@inria.fr> Hi Beppe "finding a substitution σ such that tσ≤sσ where ≤ denotes a generic subtyping relation" is indeed the problem of solving subtyping constraints over (type) terms, just like unification is the problem of solving equality constraints over terms. In my opinion, there should not be specific names. We studied this problem in a slightly more general setting than lattices in: Emmanuel Coquery, François Fages: Subtyping Constraints in Quasi-lattices. FSTTCS 2003 : 136-148. Springer-Verlag. LNCS 2914. December 2003. Complete version available as INRIA research report RR4926. Best François Giuseppe Castagna a écrit : > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear Typers, > > the problem of finding a substitution σ such that tσ=sσ is called > unification; > > the problem of finding a substitution σ 
such that tσ=s is called > semi-unification; > > but what do we call the problem of finding a substitution σ such > that tσ≤sσ, where ≤ denotes a generic subtyping relation? > > I found in the literature the generic name of "subtype constraint > resolution" (e.g., Pottier) but I was wondering whether there exists a > more specific name. > > Thanks in advance for your answers and pointers to related work. > > > Beppe From gc at pps.jussieu.fr Fri Mar 30 09:08:21 2012 From: gc at pps.jussieu.fr (Giuseppe Castagna) Date: Fri, 30 Mar 2012 15:08:21 +0200 Subject: [TYPES] A question about unification terminology In-Reply-To: <4F737FC7.2080505@inria.fr> References: <583433490.453527.1332961229478.JavaMail.root@zmbs3.inria.fr> <4F737FC7.2080505@inria.fr> Message-ID: <4F75B045.2010209@pps.jussieu.fr> I want to thank François Fages, Christian Skalka, Paweł Urzyczyn and Fritz Henglein for their answers, references, and corrections [I love the types-list] As they pointed out to me: >> the problem of finding a substitution σ such that tσ=s is NOT >> called semi-unification but matching. Semi-unification is to find σ >> such that tσ matches sσ, i.e. there is another substitution σ' with >> tσσ'=sσ. Concerning the problem I asked about: >> what do we call the problem of finding a substitution σ such that >> tσ≤sσ, where ≤ denotes a generic subtyping relation? Paweł wrote: > If you write ≤ for matching, that would be semi-unification. But I > guess you mean other kinds of subtyping. Indeed. I am finishing a paper on type inference for polymorphic higher-order functions with union, intersection and negation types where instantiation and subsumption are kept separated. I use the subtyping relation to define and subsume intersections, unions and negations of types while the polymorphic variables have a specific instantiation rule. So ≤ denotes just "classic" monomorphic type containment and it is extended to polymorphic types by a universal rather than existential quantification. That is, roughly, t≤s holds if and only if tσ≤sσ for all possible substitutions σ (and not if there exists a substitution such that tσ=s) (*). I cannot remember other works that make such a distinction, so if anybody knows some reference please let me know. (*) actually these substitutions are semantic substitutions in a particular class of models (these are just technical details: I can send references to whoever is interested). Furthermore François suggests that > > "finding a substitution σ such that tσ≤sσ where ≤ denotes a generic > subtyping relation" is indeed the problem of solving subtyping > constraints over (type) terms, just like unification is the problem > of solving equality constraints over terms. In my opinion, there > should not be specific names. > My point is that in the literature constraint solving may concern other kinds of constraints (conditionals, universal, ...) while the one I asked about is a problem onto which it seems so easy to stumble that I thought it should already have been given a name by somebody. Anyhow, thanks again ---Beppe--- From wadler at inf.ed.ac.uk Fri Apr 27 07:14:07 2012 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 27 Apr 2012 12:14:07 +0100 Subject: [TYPES] Is naive freshness adequate to avoid capture? Message-ID: Say that a lambda term is 'fresh' if each lambda abstraction binds a distinct variable. For instance, (\x.(\y.\z.y)x) is fresh, but (\x.(\y.\x.y)x) is not. Consider lambda terms without the alpha rule, where capture avoiding substitution is a partial function. 
For instance, [x/y](\z.y) is defined, but [x/y](\x.y) is undefined, because the substitution would result in capture. Consider a sequence of beta reductions where no alpha reduction is allowed. In our first example, this terminates in a normal form: (\x.(\y.\z.y)x) --> \x.\z.x In our second example, we get stuck. (\x.(\y.\x.y)x) -/-> Attempting to beta reduce the redex would result in capture. Starting with a fresh term and performing beta reductions but no alpha reductions, are there reduction sequences which get stuck? I suspect the answer is yes, but I am having difficulty locating a counter-example in the literature or creating one on my own. Answers gratefully received. Yours, -- P -- .\ Philip Wadler, Professor of Theoretical Computer Science ./\ School of Informatics, University of Edinburgh /? \ http://homepages.inf.ed.ac.uk/wadler/ The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From eschew at gmail.com Fri Apr 27 11:19:51 2012 From: eschew at gmail.com (Ben Karel) Date: Fri, 27 Apr 2012 11:19:51 -0400 Subject: [TYPES] Is naive freshness adequate to avoid capture? In-Reply-To: References: Message-ID: http://cstheory.stackexchange.com/questions/8947/closed-term-and-alpha-conversion On Fri, Apr 27, 2012 at 7:14 AM, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > Say that a lambda term is 'fresh' if each lambda abstraction binds a > distinct variable. For instance, (\x.(\y.\z.y)x) is fresh, but > (\x.(\y.\x.y)x) is not. > > Consider lambda terms without the alpha rule, where capture avoiding > substitution is a partial function. For instance, [x/y](\z.y) is > defined, but [x/y](\x.y) is undefined, because the substitution would > result in capture. > > Consider a sequence of beta reductions where no alpha reduction is > allowed. In our first example, this terminates in a normal form: > > (\x.(\y.\z.y)x) --> \x.\z.x > > In our second example, we get stuck. > > (\x.(\y.\x.y)x) -/-> > > Attempting to beta reduce the redex would result in capture. > > Starting with a fresh term and performing beta reductions but no alpha > reductions, are there reduction sequences which get stuck? I suspect > the answer is yes, but I am having difficulty locating a > counter-example in the literature or creating one on my own. > > Answers gratefully received. Yours, -- P > > -- > .\ Philip Wadler, Professor of Theoretical Computer Science > ./\ School of Informatics, University of Edinburgh > / \ http://homepages.inf.ed.ac.uk/wadler/ > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > From wadler at inf.ed.ac.uk Fri Apr 27 11:26:45 2012 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 27 Apr 2012 16:26:45 +0100 Subject: [TYPES] Is naive freshness adequate to avoid capture? In-Reply-To: References: Message-ID: Bentiamino, Many thanks for your counter-example. Yours, -- P On Fri, Apr 27, 2012 at 4:19 PM, Beniamino Accattoli wrote: > Dear Philip, > My first observation has been that it is easy to break "freshness", > just consider the reduction of \delta \delta. But in this case is > however possible to reduce without ever using alpha. 
> > If I understand the problem correctly, this is a counter-example for > both freshness and alpha: > > (\l z.zz) (\ly.\lx.yx) > -> > (\ly.\lx.yx) (\ly.\lx.yx) > -> > \lx.((\ly.\lx.yx)x) > -/-> > > Best, > Beniamino Accattoli > > On Fri, Apr 27, 2012 at 1:14 PM, Philip Wadler wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Say that a lambda term is 'fresh' if each lambda abstraction binds a >> distinct variable. ?For instance, (\x.(\y.\z.y)x) is fresh, but >> (\x.(\y.\x.y)x) is not. >> >> Consider lambda terms without the alpha rule, where capture avoiding >> substitution is a partial function. ?For instance, [x/y](\z.y) is >> defined, but [x/y](\x.y) is undefined, because the substitution would >> result in capture. >> >> Consider a sequence of beta reductions where no alpha reduction is >> allowed. ?In our first example, this terminates in a normal form: >> >> ?(\x.(\y.\z.y)x) --> ?\x.\z.x >> >> In our second example, we get stuck. >> >> ?(\x.(\y.\x.y)x) -/-> >> >> Attempting to beta reduce the redex would result in capture. >> >> Starting with a fresh term and performing beta reductions but no alpha >> reductions, are there reduction sequences which get stuck? ?I suspect >> the answer is yes, but I am having difficulty locating a >> counter-example in the literature or creating one on my own. >> >> Answers gratefully received. ?Yours, ?-- P >> >> -- >> .\ Philip Wadler, Professor of Theoretical Computer Science >> ./\ School of Informatics, University of Edinburgh >> /? \ http://homepages.inf.ed.ac.uk/wadler/ >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> > -- .\ Philip Wadler, Professor of Theoretical Computer Science ./\ School of Informatics, University of Edinburgh /? \ http://homepages.inf.ed.ac.uk/wadler/ The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From george.cherevichenko at gmail.com Fri Apr 27 16:31:42 2012 From: george.cherevichenko at gmail.com (George Cherevichenko) Date: Sat, 28 Apr 2012 00:31:42 +0400 Subject: [TYPES] Explicit renaming of bound variables Message-ID: Hello. I think I have found "the right definition" of substitution and alpha-conversion. Three main ideas are: 1) Contexts with multiplicity (i.e., repetitions of variables are permitted); 2) A new form of explicit substitutions; 3) A new definition of free variables. Renaming of bound variables when performing substitutions is done using special reductions and may be delayed. So far told two people: Lev Beklemishev and Mati Pentus (both like). George. http://arxiv.org/abs/1111.3171 From vester at jaist.ac.jp Fri Apr 27 18:13:19 2012 From: vester at jaist.ac.jp (Rene Vestergaard) Date: Sat, 28 Apr 2012 07:13:19 +0900 Subject: [TYPES] Is naive freshness adequate to avoid capture? In-Reply-To: References: Message-ID: <4F9B19FF.9070105@jaist.ac.jp> All residual (non-renaming) beta-steps are enabled for fresh terms, i.e., all beta-reductions of redexes that are not newly created with respect to the fresh term we start from have enabled renaming-free substitutions. This follows from (what Vincent van Oostrom once told me is called) Hyland's Disjointness Property: within beta-residual theory, two descendants of a given redex are disjoint, i.e., neither contains the other. 
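To make the stuck reduction in this counter-example mechanical, here is a small Haskell sketch (my own illustration, not code from the thread; all names are invented). It implements capture-avoiding substitution as a partial function and a leftmost beta step with no alpha renaming, and replays the three steps shown above.

```haskell
-- Capture-avoiding substitution as a partial function; beta-only reduction.
module NaiveFreshness where

import Data.List (delete, union)

data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

free :: Term -> [String]
free (Var x)   = [x]
free (Lam x t) = delete x (free t)
free (App t u) = free t `union` free u

-- t[x := s]; Nothing when the substitution would capture a free variable of s
-- under a binder, since no alpha renaming is ever performed.
subst :: String -> Term -> Term -> Maybe Term
subst x s (Var y)
  | x == y          = Just s
  | otherwise       = Just (Var y)
subst x s (Lam y t)
  | x == y          = Just (Lam y t)        -- x is shadowed: nothing to do
  | y `elem` free s = Nothing               -- would capture y
  | otherwise       = Lam y <$> subst x s t
subst x s (App t u) = App <$> subst x s t <*> subst x s u

-- One leftmost beta step; Nothing means either "normal form" or "stuck
-- because the needed substitution is undefined".
step :: Term -> Maybe Term
step (App (Lam x t) u) = subst x u t
step (App t u)         = case step t of
                           Just t' -> Just (App t' u)
                           Nothing -> App t <$> step u
step (Lam x t)         = Lam x <$> step t
step (Var _)           = Nothing

-- Beniamino's counter-example: (\z.zz)(\y.\x.yx).
start :: Term
start = App (Lam "z" (App (Var "z") (Var "z")))
            (Lam "y" (Lam "x" (App (Var "y") (Var "x"))))

-- Three steps: Just ((\y.\x.yx)(\y.\x.yx)), Just (\x.(\y.\x.yx)x), Nothing.
trace3 :: [Maybe Term]
trace3 = take 3 (tail (iterate (>>= step) (Just start)))
```

Evaluating trace3 gives the two terms of the counter-example and then Nothing: the final substitution [x/y](\x.yx) would capture x, so with no alpha rule the reduction is stuck even though the term is not in normal form.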
Theorem 2.17 in my PhD thesis (The primitive proof of the lambda-calculus) shows that any sequence of residual beta-steps from a fresh term can be residually-completed. The setting of the result is the marked lambda-calculus, with a marked and an unmarked application constructor and only (renaming-free) beta-reduction for the former kind. Residual completion is the (partial) function that attempts to contract all marked redexes inside-out. Hope this help with the slightly bigger picture, Rene On 4/27/12 8:14 PM, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Say that a lambda term is 'fresh' if each lambda abstraction binds a > distinct variable. For instance, (\x.(\y.\z.y)x) is fresh, but > (\x.(\y.\x.y)x) is not. > > Consider lambda terms without the alpha rule, where capture avoiding > substitution is a partial function. For instance, [x/y](\z.y) is > defined, but [x/y](\x.y) is undefined, because the substitution would > result in capture. > > Consider a sequence of beta reductions where no alpha reduction is > allowed. In our first example, this terminates in a normal form: > > (\x.(\y.\z.y)x) --> \x.\z.x > > In our second example, we get stuck. > > (\x.(\y.\x.y)x) -/-> > > Attempting to beta reduce the redex would result in capture. > > Starting with a fresh term and performing beta reductions but no alpha > reductions, are there reduction sequences which get stuck? I suspect > the answer is yes, but I am having difficulty locating a > counter-example in the literature or creating one on my own. > > Answers gratefully received. Yours, -- P > From luidomi at gmail.com Fri Apr 27 18:41:35 2012 From: luidomi at gmail.com (LD) Date: Fri, 27 Apr 2012 23:41:35 +0100 Subject: [TYPES] Is naive freshness adequate to avoid capture? In-Reply-To: References: Message-ID: [Resending from former email address as current one is rejected by list.] Dear Philip, Such freshness resembles usual axiomatisations in proof-theoretic logical treatments of (typed) lambda calculus. Keeping track of lambda subterm free variables invariantly for beta and eta (open) reduction (working like a static type) avoids variable capture, dispensing alpha conversion and simplifying substitution (N/y of term N for variable y) to replacement (otherwise in general possibly capturing). Write X for an useq; ie sequence without repetitions of free variables (eg x,y,z) X # Y for disjoint X and Y ie when X and Y have no common elements [X]M for an (X free variables) annotated lambda term M; derived by rules [X,y,Z]y if (X # Z and X # y and y # Z); ie all pairwise disjoint [X](M N) if ([X]M and [X]N) [X](\y.M) if (X # y and [X,y]M) [X]C{[X,Y]} for an annotated context ie an annotated term with hole {} [X]C{[X,Y]M} for [X]C{[X,Y]} with {} replaced by [X,Y]M. Then I expect [X]((\y.M) N) -beta-> [X]((N/y)M) [X](\y.(M y)) -eta-> [X]M if X includes all free variables of M [X]C{[X,Y]M} -> [X]C{[X,Y]M'} if [X,Y]M -> [X,Y]M' X=X' if [X]M ->* [X']M' ie free variables (super)useq is preserved by reduction (like type preservation ie subject reduction in typed lambda calculi) to follow gradually from the above. Hope this basis-like typed formalisation helps even for lambda reduction. Cheers, Luis Dominguez On 2012/04/27, at 12:14, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Say that a lambda term is 'fresh' if each lambda abstraction binds a > distinct variable. 
For instance, (\x.(\y.\z.y)x) is fresh, but > (\x.(\y.\x.y)x) is not. > > Consider lambda terms without the alpha rule, where capture avoiding > substitution is a partial function. For instance, [x/y](\z.y) is > defined, but [x/y](\x.y) is undefined, because the substitution would > result in capture. > > Consider a sequence of beta reductions where no alpha reduction is > allowed. In our first example, this terminates in a normal form: > > (\x.(\y.\z.y)x) --> \x.\z.x > > In our second example, we get stuck. > > (\x.(\y.\x.y)x) -/-> > > Attempting to beta reduce the redex would result in capture. > > Starting with a fresh term and performing beta reductions but no alpha > reductions, are there reduction sequences which get stuck? I suspect > the answer is yes, but I am having difficulty locating a > counter-example in the literature or creating one on my own. > > Answers gratefully received. Yours, -- P > > -- > .\ Philip Wadler, Professor of Theoretical Computer Science > ./\ School of Informatics, University of Edinburgh > / \ http://homepages.inf.ed.ac.uk/wadler/ > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > From Vincent.vanOostrom at phil.uu.nl Sat Apr 28 08:56:37 2012 From: Vincent.vanOostrom at phil.uu.nl (Vincent van Oostrom) Date: Sat, 28 Apr 2012 14:56:37 +0200 Subject: [TYPES] Is naive freshness adequate to avoid capture? In-Reply-To: <4F9B19FF.9070105@jaist.ac.jp> References: <4F9B19FF.9070105@jaist.ac.jp> Message-ID: <4F9BE905.6000901@phil.uu.nl> Maybe it's interesting to note that each of the three reduction steps of the canonical counterexample as given by Beniamonino corresponds to a phenomenon that, when forbidden, yields a calculus that is safe (cf. the work of Blum and Ong in the typed case), in the sense that the initial term can be renamed such that alpha-free-substitution is correct during reduction: (1) Duplicating step. Forbidding gives linear lambda-calculus; (2) Created step. Forbidding gives developments (Rene points out); (3) Step below lambda. Forbidding gives weak reduction. For more on this see pp. 24-38 of our presentation: http://www.phil.uu.nl/~oostrom/publication/talk/terese181110.pdf In the corresponding paper (also available from my homepage): http://dx.doi.org/10.1016/j.tcs.2011.04.011 it is shown that mu-reduction as captured by the rule mu x.Z(x) -> Z(mu x.Z(x)) is safe in the above sense (without more). A general setting encompassing both the mu-rule and developments as in (2) above, is provided by a certain class of higher-order recursive program schemes for which "self-capture-freeness" is preserved under reduction (freshness is not preserved as Beniamonino points out). kind regards, Vincent van Oostrom On 28-04-12 00:13, Rene Vestergaard wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > All residual (non-renaming) beta-steps are enabled for fresh terms, > i.e., all beta-reductions of redexes that are not newly created with > respect to the fresh term we start from have enabled renaming-free > substitutions. This follows from (what Vincent van Oostrom once told me > is called) Hyland's Disjointness Property: within beta-residual theory, > two descendants of a given redex are disjoint, i.e., neither contains > the other. > > Theorem 2.17 in my PhD thesis (The primitive proof of the > lambda-calculus) shows that any sequence of residual beta-steps from a > fresh term can be residually-completed. 
The setting of the result is the > marked lambda-calculus, with a marked and an unmarked application > constructor and only (renaming-free) beta-reduction for the former kind. > Residual completion is the (partial) function that attempts to contract > all marked redexes inside-out. > > Hope this help with the slightly bigger picture, > Rene > > > On 4/27/12 8:14 PM, Philip Wadler wrote: >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Say that a lambda term is 'fresh' if each lambda abstraction binds a >> distinct variable. For instance, (\x.(\y.\z.y)x) is fresh, but >> (\x.(\y.\x.y)x) is not. >> >> Consider lambda terms without the alpha rule, where capture avoiding >> substitution is a partial function. For instance, [x/y](\z.y) is >> defined, but [x/y](\x.y) is undefined, because the substitution would >> result in capture. >> >> Consider a sequence of beta reductions where no alpha reduction is >> allowed. In our first example, this terminates in a normal form: >> >> (\x.(\y.\z.y)x) --> \x.\z.x >> >> In our second example, we get stuck. >> >> (\x.(\y.\x.y)x) -/-> >> >> Attempting to beta reduce the redex would result in capture. >> >> Starting with a fresh term and performing beta reductions but no alpha >> reductions, are there reduction sequences which get stuck? I suspect >> the answer is yes, but I am having difficulty locating a >> counter-example in the literature or creating one on my own. >> >> Answers gratefully received. Yours, -- P >> > From vladimir at ias.edu Sun Apr 29 10:10:47 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Sun, 29 Apr 2012 10:10:47 -0400 Subject: [TYPES] strict unit type Message-ID: Hello, let Pt be the unit type ("one point"). In Coq-like systems it is introduced through an eliminator and the associated iota-reduction rule. This is in practice sufficient for proving all its expected properties modulo propositional equality (or identity types) but does not create a terminal object in the category of contexts. I wonder if any one has considered type theories where Pt is introduced as having a distinguished object tt together with reduction rules of the following form (in the absence of dependent sums, otherwise one needs more): 1. any object of Pt other than tt reduces to tt, 2. Prod x : T , Pt reduces to Pt, 3. Prod x : Pt , T reduces to T. Thanks, Vladimir. From frederic.blanqui at inria.fr Sun Apr 29 20:27:25 2012 From: frederic.blanqui at inria.fr (Frederic Blanqui) Date: Mon, 30 Apr 2012 08:27:25 +0800 Subject: [TYPES] strict unit type In-Reply-To: References: Message-ID: <4F9DDC6D.9040406@inria.fr> Hello Vladimir. Le 29/04/2012 22:10, Vladimir Voevodsky a ?crit : > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > let Pt be the unit type ("one point"). In Coq-like systems it is introduced through an eliminator and the associated iota-reduction rule. This is in practice sufficient for proving all its expected properties modulo propositional equality (or identity types) but does not create a terminal object in the category of contexts. > > I wonder if any one has considered type theories where Pt is introduced as having a distinguished object tt together with reduction rules of the following form (in the absence of dependent sums, otherwise one needs more): > > 1. any object of Pt other than tt reduces to tt, This has been done already with this reduction rule. Please, check the work of Roberto Di Cosmo and Delia Kesner for instance. > 2. 
Prod x : T , Pt reduces to Pt, > > 3. Prod x : Pt , T reduces to T. On the other hand, I am not sure that these rules have been considered yet since, by doing so, you may well loose the property that reduction preserves typing: if t : T reduces to u, then u : T (a property often called "subject reduction"). Unless perhaps if you consider the following extra rules: 2'. (\lambda x:T. t) reduces to tt if t : Pt, 3'. (\lambda x:Pt, t) reduces to t{x=tt}. Best regards, Frederic. > Thanks, > Vladimir. From vladimir at ias.edu Mon Apr 30 09:04:48 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Mon, 30 Apr 2012 09:04:48 -0400 Subject: [TYPES] strict unit type In-Reply-To: <4F9DDC6D.9040406@inria.fr> References: <4F9DDC6D.9040406@inria.fr> Message-ID: On Apr 29, 2012, at 8:27 PM, Frederic Blanqui wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello Vladimir. > > Le 29/04/2012 22:10, Vladimir Voevodsky a ?crit : >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Hello, >> >> let Pt be the unit type ("one point"). In Coq-like systems it is introduced through an eliminator and the associated iota-reduction rule. This is in practice sufficient for proving all its expected properties modulo propositional equality (or identity types) but does not create a terminal object in the category of contexts. >> >> I wonder if any one has considered type theories where Pt is introduced as having a distinguished object tt together with reduction rules of the following form (in the absence of dependent sums, otherwise one needs more): >> >> 1. any object of Pt other than tt reduces to tt, > This has been done already with this reduction rule. Please, check the work of Roberto Di Cosmo and Delia Kesner for instance. > >> 2. Prod x : T , Pt reduces to Pt, >> >> 3. Prod x : Pt , T reduces to T. > On the other hand, I am not sure that these rules have been considered yet since, by doing so, you may well loose the property that reduction preserves typing: if t : T reduces to u, then u : T (a property often called "subject reduction"). Unless perhaps if you consider the following extra rules: > > 2'. (\lambda x:T. t) reduces to tt if t : Pt, > > 3'. (\lambda x:Pt, t) reduces to t{x=tt}. Yes, I did. But also this would follow from the previous 3 rules. For example (\lambda x:T. t) for t:Pt will have the type, in normal form, Pt and therefore will be reducible to tt by the first reduction. Vladimir. From frederic.blanqui at inria.fr Mon Apr 30 23:10:11 2012 From: frederic.blanqui at inria.fr (Frederic Blanqui) Date: Tue, 01 May 2012 11:10:11 +0800 Subject: [TYPES] strict unit type In-Reply-To: References: <4F9DDC6D.9040406@inria.fr> Message-ID: <4F9F5413.7010902@inria.fr> Le 30/04/2012 21:04, Vladimir Voevodsky a ?crit : > On Apr 29, 2012, at 8:27 PM, Frederic Blanqui wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Hello Vladimir. >> >> Le 29/04/2012 22:10, Vladimir Voevodsky a ?crit : >>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> Hello, >>> >>> let Pt be the unit type ("one point"). In Coq-like systems it is introduced through an eliminator and the associated iota-reduction rule. This is in practice sufficient for proving all its expected properties modulo propositional equality (or identity types) but does not create a terminal object in the category of contexts. 
>>> >>> I wonder if any one has considered type theories where Pt is introduced as having a distinguished object tt together with reduction rules of the following form (in the absence of dependent sums, otherwise one needs more): >>> >>> 1. any object of Pt other than tt reduces to tt, >> This has been done already with this reduction rule. Please, check the work of Roberto Di Cosmo and Delia Kesner for instance. >> >>> 2. Prod x : T , Pt reduces to Pt, >>> >>> 3. Prod x : Pt , T reduces to T. >> On the other hand, I am not sure that these rules have been considered yet since, by doing so, you may well loose the property that reduction preserves typing: if t : T reduces to u, then u : T (a property often called "subject reduction"). Unless perhaps if you consider the following extra rules: >> >> 2'. (\lambda x:T. t) reduces to tt if t : Pt, >> >> 3'. (\lambda x:Pt, t) reduces to t{x=tt}. > Yes, I did. But also this would follow from the previous 3 rules. For example (\lambda x:T. t) for t:Pt will have the type, in normal form, Pt and therefore will be reducible to tt by the first reduction. > > Vladimir. > > > Yes, 2' is a consequence of 2 & 1. But I do not see why 3' is a consequence of the other rules. By 1, you have (\lambda x:Pt. t) reduces to (\lambda x:Pt. t{x=tt}). Then, how to remove (\lambda x:Pt)? But, after all, it may not be necessary to consider this extra rule. I was thinking that having term-level rules corresponding to your type-level rules could help, but I am not sure anymore. It may well work fine to consider types up to your equivalence relation. On the other hand, there is a little problem with rule 3 in case x occurs free in T. You should instead consider: 3. (Prod x:Pt, T) reduces to T{x=tt}. Frederic. From vladimir at ias.edu Tue May 1 09:18:37 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Tue, 1 May 2012 09:18:37 -0400 Subject: [TYPES] strict unit type In-Reply-To: <4F9F5413.7010902@inria.fr> References: <4F9DDC6D.9040406@inria.fr> <4F9F5413.7010902@inria.fr> Message-ID: <4AA45563-2E7F-4772-AD71-7AA34D088409@ias.edu> You are right on both counts. 3' does not follow from 3 and in 3 one has to consider T{x=tt}. I'll need to check details with how beta reduction should be set up in the presence of 2 and 3 and then it will be clear, I hope, weather 3' is needed. Vladimir. On Apr 30, 2012, at 11:10 PM, Frederic Blanqui wrote: > > > Le 30/04/2012 21:04, Vladimir Voevodsky a ?crit : >> On Apr 29, 2012, at 8:27 PM, Frederic Blanqui wrote: >> >>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> Hello Vladimir. >>> >>> Le 29/04/2012 22:10, Vladimir Voevodsky a ?crit : >>>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>>> >>>> Hello, >>>> >>>> let Pt be the unit type ("one point"). In Coq-like systems it is introduced through an eliminator and the associated iota-reduction rule. This is in practice sufficient for proving all its expected properties modulo propositional equality (or identity types) but does not create a terminal object in the category of contexts. >>>> >>>> I wonder if any one has considered type theories where Pt is introduced as having a distinguished object tt together with reduction rules of the following form (in the absence of dependent sums, otherwise one needs more): >>>> >>>> 1. any object of Pt other than tt reduces to tt, >>> This has been done already with this reduction rule. Please, check the work of Roberto Di Cosmo and Delia Kesner for instance. >>> >>>> 2. 
Prod x : T , Pt reduces to Pt, >>>> >>>> 3. Prod x : Pt , T reduces to T. >>> On the other hand, I am not sure that these rules have been considered yet since, by doing so, you may well loose the property that reduction preserves typing: if t : T reduces to u, then u : T (a property often called "subject reduction"). Unless perhaps if you consider the following extra rules: >>> >>> 2'. (\lambda x:T. t) reduces to tt if t : Pt, >>> >>> 3'. (\lambda x:Pt, t) reduces to t{x=tt}. >> Yes, I did. But also this would follow from the previous 3 rules. For example (\lambda x:T. t) for t:Pt will have the type, in normal form, Pt and therefore will be reducible to tt by the first reduction. >> >> Vladimir. >> >> >> > Yes, 2' is a consequence of 2 & 1. But I do not see why 3' is a consequence of the other rules. By 1, you have (\lambda x:Pt. t) reduces to (\lambda x:Pt. t{x=tt}). Then, how to remove (\lambda x:Pt)? > > But, after all, it may not be necessary to consider this extra rule. I was thinking that having term-level rules corresponding to your type-level rules could help, but I am not sure anymore. It may well work fine to consider types up to your equivalence relation. > > On the other hand, there is a little problem with rule 3 in case x occurs free in T. You should instead consider: > > 3. (Prod x:Pt, T) reduces to T{x=tt}. > > Frederic. From frederic.blanqui at inria.fr Tue May 1 18:53:18 2012 From: frederic.blanqui at inria.fr (Frederic Blanqui) Date: Wed, 02 May 2012 06:53:18 +0800 Subject: [TYPES] strict unit type In-Reply-To: <4AA45563-2E7F-4772-AD71-7AA34D088409@ias.edu> References: <4F9DDC6D.9040406@inria.fr> <4F9F5413.7010902@inria.fr> <4AA45563-2E7F-4772-AD71-7AA34D088409@ias.edu> Message-ID: <4FA0695E.1090107@inria.fr> Hello. I just realized that there may be another problem with rule 2. If rule 2 is included as is in the usual type conversion (conv) used in dependent type systems (conv) if t:T, T\equiv T' and T':s, then t:T', then you get that: \lambda x:nat.Pt is of type \Pi x:bool.Pt since \Pi x:nat.Pt \equiv Pt \equiv \Pi:bool. Pt. And, to prove the subject reduction property for beta ((\lambda x:T.u)t -> u{x=t}), one usually first check the "injectivity" of \Pi wrt \equiv: \Pi x:A.B \equiv \Pi x:A'.B' implies A\equiv A' and B\equiv B' This cannot be the case (nat\not\equiv bool) if you add rule 2 in \equiv . So, could you make more precise what you mean by adding these rules in a dependent type system? Frederic. Le 01/05/2012 21:18, Vladimir Voevodsky a ?crit : > You are right on both counts. 3' does not follow from 3 and in 3 one has to consider T{x=tt}. > > I'll need to check details with how beta reduction should be set up in the presence of 2 and 3 and then it will be clear, I hope, weather 3' is needed. > > Vladimir. > > > > > > On Apr 30, 2012, at 11:10 PM, Frederic Blanqui wrote: > >> >> Le 30/04/2012 21:04, Vladimir Voevodsky a ?crit : >>> On Apr 29, 2012, at 8:27 PM, Frederic Blanqui wrote: >>> >>>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>>> >>>> Hello Vladimir. >>>> >>>> Le 29/04/2012 22:10, Vladimir Voevodsky a ?crit : >>>>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>>>> >>>>> Hello, >>>>> >>>>> let Pt be the unit type ("one point"). In Coq-like systems it is introduced through an eliminator and the associated iota-reduction rule. 
This is in practice sufficient for proving all its expected properties modulo propositional equality (or identity types) but does not create a terminal object in the category of contexts. >>>>> >>>>> I wonder if any one has considered type theories where Pt is introduced as having a distinguished object tt together with reduction rules of the following form (in the absence of dependent sums, otherwise one needs more): >>>>> >>>>> 1. any object of Pt other than tt reduces to tt, >>>> This has been done already with this reduction rule. Please, check the work of Roberto Di Cosmo and Delia Kesner for instance. >>>> >>>>> 2. Prod x : T , Pt reduces to Pt, >>>>> >>>>> 3. Prod x : Pt , T reduces to T. >>>> On the other hand, I am not sure that these rules have been considered yet since, by doing so, you may well loose the property that reduction preserves typing: if t : T reduces to u, then u : T (a property often called "subject reduction"). Unless perhaps if you consider the following extra rules: >>>> >>>> 2'. (\lambda x:T. t) reduces to tt if t : Pt, >>>> >>>> 3'. (\lambda x:Pt, t) reduces to t{x=tt}. >>> Yes, I did. But also this would follow from the previous 3 rules. For example (\lambda x:T. t) for t:Pt will have the type, in normal form, Pt and therefore will be reducible to tt by the first reduction. >>> >>> Vladimir. >>> >>> >>> >> Yes, 2' is a consequence of 2& 1. But I do not see why 3' is a consequence of the other rules. By 1, you have (\lambda x:Pt. t) reduces to (\lambda x:Pt. t{x=tt}). Then, how to remove (\lambda x:Pt)? >> >> But, after all, it may not be necessary to consider this extra rule. I was thinking that having term-level rules corresponding to your type-level rules could help, but I am not sure anymore. It may well work fine to consider types up to your equivalence relation. >> >> On the other hand, there is a little problem with rule 3 in case x occurs free in T. You should instead consider: >> >> 3. (Prod x:Pt, T) reduces to T{x=tt}. >> >> Frederic. From vladimir at ias.edu Wed May 9 19:04:54 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Wed, 9 May 2012 19:04:54 -0400 Subject: [TYPES] strict unit type In-Reply-To: References: Message-ID: <59F8BF2F-6455-4E77-8AFB-299064AF1C8F@ias.edu> On May 1, 2012, at 5:24 AM, Altenkirch Thorsten wrote: > In general we shouldn't expect to decide eta equality by untyped > reduction. eta equality isa form of decidable extensionality and does not > correspond well to computation to a normal form. > > Clearly, the case of the unit type shows that we need to know the type to > check equality. > > Trying to decide eta equalities by reduction is very non-modular. This is > basically the point of Vladimir's email. > > However, if we implement it as a type directed decision procedure things > are much better. E.g. to terms f,g : Pi x:A.B x are convertible if f x and > g x : B x are convertible (whewre x is a fresh variable) after beta > reduction. Similarily, two terms p,q : Sigma x:A.B x are convertible if > p.1 and q.1 : A are convertible after beta reduction and p.2 and q.2 : B > p.1 (or B q.1) are convertible after beta reduction. (The and here has to > be read sequentially, and the choice B p.1 or B p.2 actually causes some > technical problems). Why just after beta reduction and not say iota-reduction or even some internal eta-reduction? Vladimir. 
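For what it is worth, the shape of such a type-directed check is easy to see in a simply-typed toy setting. The sketch below is my own simplification with invented names, not the algorithm of any particular proof assistant, and the dependent case needs the extra bookkeeping discussed above. It treats only a unit type, a base type and arrows: at the unit type any two terms are convertible, at an arrow type both sides are applied to a fresh variable, and weak head normalisation performs the computational (beta) steps before a structural comparison at base type.

```haskell
-- A simply-typed sketch of type-directed convertibility with eta for unit.
module TypeDirectedConv where

data Ty = Unit | Base | Arrow Ty Ty
  deriving (Eq, Show)

data Term = Var String | TT | Lam String Ty Term | App Term Term
  deriving (Eq, Show)

-- Weak head normalisation: only the computational (beta) rule, no eta,
-- no reduction under binders.
whnf :: Term -> Term
whnf (App t u) = case whnf t of
  Lam x _ b -> whnf (subst x u b)
  t'        -> App t' u
whnf t = t

-- Naive substitution; adequate here because the fresh variables introduced
-- below ("$x0", "$x1", ...) cannot clash with user-written names.
subst :: String -> Term -> Term -> Term
subst x s (Var y)     | x == y    = s
                      | otherwise = Var y
subst x s (Lam y a b) | x == y    = Lam y a b
                      | otherwise = Lam y a (subst x s b)
subst x s (App t u)   = App (subst x s t) (subst x s u)
subst _ _ TT          = TT

-- Type-directed convertibility.  At Unit everything is equal (unicity for the
-- unit type); at an arrow type, apply both sides to a fresh variable and
-- continue at the codomain; at Base, compare weak head normal forms
-- structurally (a full version would call back into 'conv' at argument types).
conv :: Int -> Ty -> Term -> Term -> Bool
conv _ Unit        _ _ = True
conv n (Arrow _ b) f g =
  let x = Var ("$x" ++ show n)
  in  conv (n + 1) b (App f x) (App g x)
conv _ Base        t u = structEq (whnf t) (whnf u)
  where
    structEq (Var x)   (Var y)   = x == y
    structEq (App a v) (App c w) = structEq a c && structEq v w
    structEq _         _         = False

-- Example: \x:Unit. x  and  \x:Unit. tt  are convertible at Unit -> Unit,
-- even though they have different normal forms under plain beta reduction.
example :: Bool
example = conv 0 (Arrow Unit Unit) (Lam "x" Unit (Var "x")) (Lam "x" Unit TT)
```

Here example returns True precisely because the check consults the type Unit before ever inspecting the terms, which is the point of handling eta for the unit type by a type-directed procedure rather than by untyped rewriting.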
From psztxa at exmail.nottingham.ac.uk Thu May 10 11:41:46 2012 From: psztxa at exmail.nottingham.ac.uk (Altenkirch Thorsten) Date: Thu, 10 May 2012 16:41:46 +0100 Subject: [TYPES] strict unit type Message-ID: On 10/05/2012 00:04, "Vladimir Voevodsky" wrote: > >On May 1, 2012, at 5:24 AM, Altenkirch Thorsten wrote: > >> In general we shouldn't expect to decide eta equality by untyped >> reduction. eta equality isa form of decidable extensionality and does >>not >> correspond well to computation to a normal form. >> >> Clearly, the case of the unit type shows that we need to know the type >>to >> check equality. >> >> Trying to decide eta equalities by reduction is very non-modular. This >>is >> basically the point of Vladimir's email. >> >> However, if we implement it as a type directed decision procedure things >> are much better. E.g. to terms f,g : Pi x:A.B x are convertible if f x >>and >> g x : B x are convertible (whewre x is a fresh variable) after beta >> reduction. Similarily, two terms p,q : Sigma x:A.B x are convertible if >> p.1 and q.1 : A are convertible after beta reduction and p.2 and q.2 : B >> p.1 (or B q.1) are convertible after beta reduction. (The and here has >>to >> be read sequentially, and the choice B p.1 or B p.2 actually causes some >> technical problems). > >Why just after beta reduction and not say iota-reduction or even some >internal eta-reduction? When I say beta I also mean beta for product and inductive types. The Coq people seem to call this iota reduction. What do you mean by internal eta-reduction. Btw, I don't think you need to reduce under a lambda (which is also a form of extensionality). So what I mean is to execute the purely computational rules (these are the once used by any functional programming language). Thorsten This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please send it back to me, and immediately delete it. Please do not use, copy or disclose the information contained in this message or in any attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. This message has been checked for viruses but the contents of an attachment may still contain software viruses which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. From rwh at cs.cmu.edu Thu May 10 14:08:33 2012 From: rwh at cs.cmu.edu (Robert Harper) Date: Thu, 10 May 2012 14:08:33 -0400 Subject: [TYPES] strict unit type In-Reply-To: References: Message-ID: <30A04511-6F31-4F7C-8C68-464DED6DF19A@cs.cmu.edu> hello everyone, there is no problem to decide equivalence in the presence of extensionality axioms such as eta and xi, nor for unicity conditions such as the one stating that all maps to 1 are equal. my work with chris stone on singleton types shows how to handle sigma's and pi's and unit (actually much more than just unit, but including it as a special case), which specializes to arrow and cross as well. generally the method works well for negative types, but cannot be made to scale to positive types, which require some form of induction. 
so, for example, there is no problem with the "iota" (i.e., beta) rules for inductive types, since these are purely computational, and we can handle extensionality conditions for negative types, but one cannot expect something as strong as the coproduct universal condition for sums, merely the beta-like rule of reduction. the algorithm that stone and i give has a two-phase flavor. first, a type-directed phase that reduces things to atomic types by elimination forms, then a structural phase that compares the weak head normal forms of the results of the type-directed phase. (weak head normalization takes care of all beta-like rules.) the structural phase calls back to the typed phase. the correctness argument is fascinating (imo) because of the complications introduced by the asymmetries hinted at in thorsten's post having to do with the interaction between sigma's and pi's. as an aside, would someone please explain to me as concisely as possible what is the problem with universes in coq? this came up in discussion here yesterday, and i drew the impression, perhaps mistaken, that the problem is that coq does not implement the solution that pollack and i worked out in 1988. if i am right, the problem is already solved, but was apparently not used in the coq implementation for some reason. i can explain separately what pollack and i did and how it works. it sounded as though it would address many of the problems that have arisen, but i'm not completely certain until i learn more about them. best, bob On May 10, 2012, at 11:41 AM, Altenkirch Thorsten wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > On 10/05/2012 00:04, "Vladimir Voevodsky" wrote: > >> >> On May 1, 2012, at 5:24 AM, Altenkirch Thorsten wrote: >> >>> In general we shouldn't expect to decide eta equality by untyped >>> reduction. eta equality isa form of decidable extensionality and does >>> not >>> correspond well to computation to a normal form. >>> >>> Clearly, the case of the unit type shows that we need to know the type >>> to >>> check equality. >>> >>> Trying to decide eta equalities by reduction is very non-modular. This >>> is >>> basically the point of Vladimir's email. >>> >>> However, if we implement it as a type directed decision procedure things >>> are much better. E.g. to terms f,g : Pi x:A.B x are convertible if f x >>> and >>> g x : B x are convertible (whewre x is a fresh variable) after beta >>> reduction. Similarily, two terms p,q : Sigma x:A.B x are convertible if >>> p.1 and q.1 : A are convertible after beta reduction and p.2 and q.2 : B >>> p.1 (or B q.1) are convertible after beta reduction. (The and here has >>> to >>> be read sequentially, and the choice B p.1 or B p.2 actually causes some >>> technical problems). >> >> Why just after beta reduction and not say iota-reduction or even some >> internal eta-reduction? > > When I say beta I also mean beta for product and inductive types. The Coq > people seem to call this iota reduction. > > What do you mean by internal eta-reduction. > > Btw, I don't think you need to reduce under a lambda (which is also a form > of extensionality). So what I mean is to execute the purely computational > rules (these are the once used by any functional programming language). > > Thorsten > > > This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please send it back to me, and immediately delete it. 
Please do not use, copy or disclose the information contained in this message or in any attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. > > This message has been checked for viruses but the contents of an attachment > may still contain software viruses which could damage your computer system: > you are advised to perform your own checks. Email communications with the > University of Nottingham may be monitored as permitted by UK legislation. From vladimir at ias.edu Thu May 10 17:19:02 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Thu, 10 May 2012 17:19:02 -0400 Subject: [TYPES] strict unit type In-Reply-To: <30A04511-6F31-4F7C-8C68-464DED6DF19A@cs.cmu.edu> References: <30A04511-6F31-4F7C-8C68-464DED6DF19A@cs.cmu.edu> Message-ID: On May 10, 2012, at 2:08 PM, Robert Harper wrote: > hello everyone, > > there is no problem to decide equivalence in the presence of extensionality axioms such as eta and xi, nor for unicity conditions such as the one stating that all maps to 1 are equal. my work with chris stone on singleton types shows how to handle sigma's and pi's and unit (actually much more than just unit, but including it as a special case), which specializes to arrow and cross as well. generally the method works well for negative types, but cannot be made to scale to positive types, which require some form of induction. so, for example, there is no problem with the "iota" (i.e., beta) rules for inductive types, since these are purely computational, and we can handle extensionality conditions for negative types, but one cannot expect something as strong as the coproduct universal condition for sums, merely the beta-like rule of reduction. > > the algorithm that stone and i give has a two-phase flavor. first, a type-directed phase that reduces things to atomic types by elimination forms, then a structural phase that compares the weak head normal forms of the results of the type-directed phase. (weak head normalization takes care of all beta-like rules.) the structural phase calls back to the typed phase. the correctness argument is fascinating (imo) because of the complications introduced by the asymmetries hinted at in thorsten's post having to do with the interaction between sigma's and pi's. > > as an aside, would someone please explain to me as concisely as possible what is the problem with universes in coq? this came up in discussion here yesterday, and i drew the impression, perhaps mistaken, that the problem is that coq does not implement the solution that pollack and i worked out in 1988. if i am right, the problem is already solved, but was apparently not used in the coq implementation for some reason. i can explain separately what pollack and i did and how it works. it sounded as though it would address many of the problems that have arisen, but i'm not completely certain until i learn more about them. > > best, > bob > Hi Bob, I would sum things with universes as follows: 1. there is no universe polymorphism 2. because of strict sub-typing for universes eta-reduction of \lambda-terms is problematic From the HOTT point of view another problem is that it is unclear how to proceed with the modifications which are needed for adding the resizing rules. Vladimir. PS Could you send a link to the your paper with Stone which you are referring to? 
From roberto at dicosmo.org Fri May 11 05:45:24 2012 From: roberto at dicosmo.org (Roberto Di Cosmo) Date: Fri, 11 May 2012 11:45:24 +0200 Subject: [TYPES] strict unit type In-Reply-To: <4F9DDC6D.9040406@inria.fr> References: <4F9DDC6D.9040406@inria.fr> Message-ID: I am quite happy to see a revival of interest in this subject, and thanks Frederic for remembering some of the works done in the 1990's :-) In case this may be of interest, I had been maintaining a short history of rewriting with extensionality til 2000, with a brief summary of the results known then, and of the relevant papers I knew of by 2000 (this includes some results for CoC too). It is available as a set of slides here: http://www.dicosmo.org/Slides/Edi99.pdf Of course, we know more today than what is reported in this short survey, but I could not help getting interested in a different research direction, and did not maintain this survey up to date. If anybody is interested in updating it, I'll be more than happy to share the LaTeX sources ... --Roberto 2012/4/30 Frederic Blanqui : > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello Vladimir. > > Le 29/04/2012 22:10, Vladimir Voevodsky a ?crit : > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> Hello, >> >> let Pt be the unit type ("one point"). In Coq-like systems it is >> introduced through an eliminator and the associated iota-reduction rule. >> This is in practice sufficient for proving all its expected properties >> modulo propositional equality (or identity types) but does not create a >> terminal object in the category of contexts. >> >> I wonder if any one has considered type theories where Pt is introduced as >> having a distinguished object tt together with reduction rules of the >> following form (in the absence of dependent sums, otherwise one needs more): >> >> 1. any object of Pt other than tt reduces to tt, > > This has been done already with this reduction rule. Please, check the work > of Roberto Di Cosmo and Delia Kesner for instance. > > >> 2. Prod x : T , Pt reduces to Pt, >> >> 3. Prod x : Pt , T reduces to T. > > On the other hand, I am not sure that these rules have been considered yet > since, by doing so, you may well loose the property that reduction preserves > typing: if t : T reduces to u, then u : T (a property often called "subject > reduction"). Unless perhaps if you consider the following extra rules: > > 2'. (\lambda x:T. t) reduces to tt if t : Pt, > > 3'. (\lambda x:Pt, t) reduces to t{x=tt}. > > Best regards, Frederic. > >> Thanks, >> Vladimir. -- --Roberto Di Cosmo ------------------------------------------------------------------ Professeur? ? ? ? ? ? ?? En delegation a l'INRIA PPS? ? ? ? ? ? ? ? ? ? ? E-mail: roberto at dicosmo.org Universite Paris Diderot WWW? : http://www.dicosmo.org Case 7014? ? ? ? ? ? ? ? Tel? : ++33-(0)1-57 27 92 20 5, Rue Thomas Mann F-75205 Paris Cedex 13 FRANCE. ------------------------------------------------------------------ Attachments: ?? MIME accepted ?? 
Word deprecated, http://www.rfc1149.net/documents/whynotword ------------------------------------------------------------------ Office location: Bureau 6C15 (6th floor) 175, rue du Chevaleret, XIII Metro Chevaleret, ligne 6 ------------------------------------------------------------------ From vladimir at ias.edu Thu Jul 19 10:43:10 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Thu, 19 Jul 2012 10:43:10 -0400 Subject: [TYPES] a question Message-ID: Hello, does any one know if the system of natural arithmetic with equality, addition, multiplication, exponentiation (or without exponentiation), "forall" quantifier, implication and conjunction is decidable? There is no existential quantifier and no negation. Thanks! Vladimir. From vladimir at ias.edu Thu Jul 19 10:44:47 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Thu, 19 Jul 2012 10:44:47 -0400 Subject: [TYPES] question (cont.) Message-ID: Sorry, I forgot to mention that there is also induction. V. From vladimir at ias.edu Thu Jul 19 11:16:38 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Thu, 19 Jul 2012 11:16:38 -0400 Subject: [TYPES] question (cont.) In-Reply-To: References: Message-ID: <77EE5B90-4A6C-41C7-A092-B60536DF4954@ias.edu> > Don't you have False (as 0=1 for instance) hence not A (as > A ->False) hence exA (as forall notA -> False), hence everything? Thanks to everybody who pointed this out to me. I'll have to think whether my question has a more sensible reformulation. Vladimir. From vladimir at ias.edu Fri Jul 20 17:58:08 2012 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Fri, 20 Jul 2012 17:58:08 -0400 Subject: [TYPES] a question In-Reply-To: References: Message-ID: Many thanks to everybody who provided suggestions on my, not so well formulated, question. It appears to me now after more thinking and some Wikipedia searches :), that the system which I had in mind is equivalent to Primitive Recursive Arithmetic and, as I have been told, the provability of sentences in this system is undecidable. Vladimir. On Jul 19, 2012, at 10:43 AM, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > does any one know if the system of natural arithmetic with equality, addition, multiplication, exponentiation (or without exponentiation), "forall" quantifier, implication and conjunction is decidable? There is no existential quantifier and no negation. > > Thanks! > Vladimir. > > From Sergei.Soloviev at irit.fr Fri Jul 20 18:39:25 2012 From: Sergei.Soloviev at irit.fr (Sergei Soloviev) Date: Sat, 21 Jul 2012 00:39:25 +0200 Subject: [TYPES] question (cont.) In-Reply-To: <77EE5B90-4A6C-41C7-A092-B60536DF4954@ias.edu> Message-ID: <5add-5009de00-b-640f7700@255806042> Dear Vladimir, do you include equality? Best Sergei Soloviev On Thursday, July 19, 2012 17:16 CEST, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Don't you have False (as 0=1 for instance) hence not A (as > > A ->False) hence exA (as forall notA -> False), hence everything? > > Thanks to everybody who pointed this out to me. I'll have to think whether my question has a more sensible reformulation. > > Vladimir. > > From dreyer at mpi-sws.org Fri Aug 10 07:34:49 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Fri, 10 Aug 2012 13:34:49 +0200 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC Message-ID: Hi, all. 
I'm having a hard time finding a (canonical) source for what I thought would be a very simple question. Consider the simply-typed lambda calculus (STLC), Church-style (with type annotations on variable binders), and beta-eta reduction. Beta-eta convertibility, which I'll write as M ~ N, means that M is convertible to N by some sequence of beta/eta reductions/expansions, which may very well go through ill-typed intermediate terms even if M and N are themselves well-typed. I want to show the following: If G |- M : A and G |- N : A, and M ~ N, then M and N have the same beta-eta (short) normal form. It is well known that well-typed terms in STLC are strongly normalizing wrt beta-eta reduction. But to prove the above, one also needs confluence. The problem is that the intermediate terms in the beta-eta conversion of M and N may be ill-typed, and beta-eta-reduction is not confluent in this setting for ill-typed terms. In particular, (\x:A.(\y:B.y)x) can eta-convert to \y:B.y or beta-convert to \x:A.x which are only equal if A = B. One strategy would be to prove that beta-eta-convertibility -- *between terms of the same type* -- going through ill-typed terms is equivalent to beta-eta convertibility restricted to going only through well-typed terms. Intuitively, this seems like it should be true, but it is not at all clear to me how to show it. In particular, the *between terms of the same type* part seems critical in order to outlaw the confluence counterexample given above. I found some discussion of some related questions in a thread on the Types list from 1989: http://article.gmane.org/gmane.comp.science.types/266 (and subsequent posts) But it doesn't seem to contain a direct answer to my question. Thanks! Derek From dreyer at mpi-sws.org Fri Aug 10 10:31:54 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Fri, 10 Aug 2012 16:31:54 +0200 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: References: Message-ID: Just to clarify, since several people e-mailed me and were confused about my confluence counterexample: The classic argument for showing decidability of convertibility depends on a combination of Church-Rosser (confluence) and normalization. For the confluence part, the common approach I'm familiar with involves proving (a) If M ->* P and M ->* Q, then there exists T s.t. P ->* T and Q ->* T. (b) If P ~ Q, then there exists T s.t. P ->* T and Q ->* T. Part (b) follows by applying (a) to the "peaks" in the reduction/expansion sequence converting P to Q (and then completing the remaining diamonds till you arrive at T). See for instance Theorem 1B5 and Figure 1B5b on pages 5-6 of Hindley's "Basic Simple Type Theory" book. And part (b) is the one that's really important for showing decidability of ~. The problem in the setting I described (i.e. Church-style, with beta-eta reduction) is that, even if P and Q are required to be well-typed (at the same type), the definition of convertibility permits the intermediate terms in the reduction/expansion sequence between P and Q (and, in particular, the "peaks" M) to be ill-typed. However, for ill-typed terms M, part (a) of the confluence property does not hold when -> includes eta-reduction. In particular, take the example I gave, with M = \x:A.(\y:B.y)x, P = \x:A.x, Q = \y:B.y, and A distinct from B. M, which is ill-typed, reduces to both P and Q, but they are distinct normal forms. 
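(A concrete aside, for anyone who wants to replay the counterexample mechanically: below is a small, self-contained Haskell sketch of Church-style terms with a single eta step at the root and a single beta step under the binder. The datatypes and function names are mine, purely illustrative, and the substitution is deliberately naive, which is harmless here because no capture can occur in this example.)

-- Church-style STLC terms, plus just enough reduction to replay the example.
data Ty = Base String | Arr Ty Ty deriving (Eq, Show)

data Tm = Var String
        | Lam String Ty Tm        -- Church style: the binder carries its type
        | App Tm Tm
        deriving (Eq, Show)

-- naive substitution; no capture can arise in the example below
subst :: String -> Tm -> Tm -> Tm
subst x s (Var y)      = if x == y then s else Var y
subst x s (App t u)    = App (subst x s t) (subst x s u)
subst x s (Lam y ty t) = if x == y then Lam y ty t else Lam y ty (subst x s t)

freeIn :: String -> Tm -> Bool
freeIn x (Var y)     = x == y
freeIn x (App t u)   = freeIn x t || freeIn x u
freeIn x (Lam y _ t) = x /= y && freeIn x t

-- contract the beta redex sitting directly under the outermost binder
betaUnder :: Tm -> Maybe Tm
betaUnder (Lam x ty (App (Lam y _ body) arg)) = Just (Lam x ty (subst y arg body))
betaUnder _                                   = Nothing

-- contract an eta redex at the root:  \x:A. f x  ~~>  f,  if x not free in f
etaRoot :: Tm -> Maybe Tm
etaRoot (Lam x _ (App f (Var y)))
  | x == y && not (x `freeIn` f) = Just f
etaRoot _                        = Nothing

-- m = \x:A. (\y:B. y) x    (ill-typed as a whole as soon as A and B differ)
m :: Tm
m = Lam "x" (Base "A") (App (Lam "y" (Base "B") (Var "y")) (Var "x"))

main :: IO ()
main = do
  print (etaRoot m)    -- Just (Lam "y" (Base "B") (Var "y")),  i.e. \y:B.y
  print (betaUnder m)  -- Just (Lam "x" (Base "A") (Var "x")),  i.e. \x:A.x

Both results are beta-eta normal, and they are distinct terms whenever A and B are distinct.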
I should say, I got this specific example from Garrel Pottinger in the 1989 Types list thread: http://www.seas.upenn.edu/~sweirich/types/archive/1989/msg00014.html Now, for well-typed M, confluence *does* hold (see Theorem 5B8 on page 70 of Hindley). So, if we instead define beta-eta convertibility to require all terms M in the sequence connecting P and Q to be well-typed, then the proof would work fine (I believe). But I want to know if it is also provable without that requirement, i.e. relying on the usual definition of beta-eta convertibility, which says nothing about well-typedness even when it is defined on Church-style explicitly-typed terms. FWIW, Hindley follows Theorem 5B8 with a Warning: "No attempt is made here to define a convertibility relation =beta or =beta/eta on typed terms by expansions and contractions of redexes. First, we shall not need one. Second, since the typed terms in this chapter correspond exactly to TA\tau-deductions [Curry-style], any attempt to define such a typed equality would meet the same type-variation problems as were discussed for TA_\tau in 2C. In fact, beta-expanding a typed term may lead to an expression which is no longer a typed term (cf. 2C2.2-6)." I'm not sure what to make of this warning. In particular, if beta-eta-convertibility is a property of interest (e.g. if it is used to define equivalence of type constructors in F-omega), then what is one supposed to do? Thanks, Derek On Fri, Aug 10, 2012 at 1:34 PM, Derek Dreyer wrote: > Hi, all. I'm having a hard time finding a (canonical) source for what > I thought would be a very simple question. > > Consider the simply-typed lambda calculus (STLC), Church-style (with > type annotations on variable binders), and beta-eta reduction. > Beta-eta convertibility, which I'll write as M ~ N, means that M is > convertible to N by some sequence of beta/eta reductions/expansions, > which may very well go through ill-typed intermediate terms even if M > and N are themselves well-typed. > > I want to show the following: > > If G |- M : A and G |- N : A, and M ~ N, then M and N have the same > beta-eta (short) normal form. > > It is well known that well-typed terms in STLC are strongly > normalizing wrt beta-eta reduction. But to prove the above, one also > needs confluence. The problem is that the intermediate terms in the > beta-eta conversion of M and N may be ill-typed, and > beta-eta-reduction is not confluent in this setting for ill-typed > terms. In particular, > > (\x:A.(\y:B.y)x) > > can eta-convert to > > \y:B.y > > or beta-convert to > > \x:A.x > > which are only equal if A = B. > > One strategy would be to prove that beta-eta-convertibility -- > *between terms of the same type* -- going through ill-typed terms is > equivalent to beta-eta convertibility restricted to going only through > well-typed terms. Intuitively, this seems like it should be true, but > it is not at all clear to me how to show it. In particular, the > *between terms of the same type* part seems critical in order to > outlaw the confluence counterexample given above. > > I found some discussion of some related questions in a thread on the > Types list from 1989: > > http://article.gmane.org/gmane.comp.science.types/266 (and subsequent posts) > > But it doesn't seem to contain a direct answer to my question. > > Thanks! 
> Derek From Sam.Lindley at ed.ac.uk Fri Aug 10 10:39:31 2012 From: Sam.Lindley at ed.ac.uk (Sam Lindley) Date: Fri, 10 Aug 2012 15:39:31 +0100 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: References: Message-ID: <50251D23.1090502@ed.ac.uk> I think this is quite straightforward to show using the fact that beta-eta-reduction is confluent on untyped terms. Write |M| for the type-erasure of the Church-style simply-typed term M. If M ~ N, then certainly |M| ~ |N|. Let M', N' be respectively the normal forms of M and N. By confluence, |M'| = |N'|. The key property we need is that given a Curry-style judgement, G |- P : A, where P is in normal form, there is a unique Church-style term P* such that G |- P* : A and |P*| = P. This means that M' = |M'|* = |N'|* = N'. Sam On 10/08/12 12:34, Derek Dreyer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi, all. I'm having a hard time finding a (canonical) source for what > I thought would be a very simple question. > > Consider the simply-typed lambda calculus (STLC), Church-style (with > type annotations on variable binders), and beta-eta reduction. > Beta-eta convertibility, which I'll write as M ~ N, means that M is > convertible to N by some sequence of beta/eta reductions/expansions, > which may very well go through ill-typed intermediate terms even if M > and N are themselves well-typed. > > I want to show the following: > > If G |- M : A and G |- N : A, and M ~ N, then M and N have the same > beta-eta (short) normal form. > > It is well known that well-typed terms in STLC are strongly > normalizing wrt beta-eta reduction. But to prove the above, one also > needs confluence. The problem is that the intermediate terms in the > beta-eta conversion of M and N may be ill-typed, and > beta-eta-reduction is not confluent in this setting for ill-typed > terms. In particular, > > (\x:A.(\y:B.y)x) > > can eta-convert to > > \y:B.y > > or beta-convert to > > \x:A.x > > which are only equal if A = B. > > One strategy would be to prove that beta-eta-convertibility -- > *between terms of the same type* -- going through ill-typed terms is > equivalent to beta-eta convertibility restricted to going only through > well-typed terms. Intuitively, this seems like it should be true, but > it is not at all clear to me how to show it. In particular, the > *between terms of the same type* part seems critical in order to > outlaw the confluence counterexample given above. > > I found some discussion of some related questions in a thread on the > Types list from 1989: > > http://article.gmane.org/gmane.comp.science.types/266 (and subsequent posts) > > But it doesn't seem to contain a direct answer to my question. > > Thanks! > Derek > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From dreyer at mpi-sws.org Fri Aug 10 11:48:01 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Fri, 10 Aug 2012 17:48:01 +0200 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: <50251D23.1090502@ed.ac.uk> References: <50251D23.1090502@ed.ac.uk> Message-ID: Thanks, Sam! I think what you suggest here works. (You still have to prove the "key property" you mention separately, by that seems to go through by a relatively straightforward induction on typing derivations.) 
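(To fix notation for Sam's argument, here is a minimal Haskell sketch of the erasure map; the datatypes are my own choice and nothing below is taken from Sam's message beyond the idea. The "key property" is the statement recorded in the final comment, stated here rather than proved.)

-- Church-style terms, untyped terms, and the erasure |.| that forgets binder annotations.
data Ty  = Base String | Arr Ty Ty                       deriving (Eq, Show)
data Tm  = Var String | Lam String Ty Tm | App Tm Tm     deriving (Eq, Show)
data UTm = UVar String | ULam String UTm | UApp UTm UTm  deriving (Eq, Show)

erase :: Tm -> UTm
erase (Var x)     = UVar x
erase (Lam x _ t) = ULam x (erase t)    -- the type annotation is dropped here
erase (App t u)   = UApp (erase t) (erase u)

-- Key property (assumed, not proved, in this sketch): if P is a beta-eta-normal
-- untyped term and G |- P : A holds Curry-style, then there is exactly one
-- Church-style term P* with G |- P* : A and erase P* = P.  Together with
-- confluence of untyped beta-eta reduction this gives: M ~ N (possibly through
-- ill-typed terms) implies erase M ~ erase N, hence nf (erase M) = nf (erase N),
-- hence nf M = nf N by uniqueness of the decoration.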
Gilles Barthe also pointed me to Herman Geuvers' LICS'92 paper (and also thesis), which proves the theorem I'm after in a more general setting (for the class of "functional, normalizing PTS's"): http://www.cs.ru.nl/~herman/PUBS/LICS92_CRbh.ps.gz http://www.cs.ru.nl/~herman/PUBS/Proefschrift.ps.gz (see Chapter 5, which contains an expanded/revised version of the proof in the LICS paper) FYI: According to Geuvers, the confluence counterexample I gave is due originally to Nederpelt (1973). Thanks, Derek On Fri, Aug 10, 2012 at 4:39 PM, Sam Lindley wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > I think this is quite straightforward to show using the fact that > beta-eta-reduction is confluent on untyped terms. > > Write |M| for the type-erasure of the Church-style simply-typed term M. > > If M ~ N, then certainly |M| ~ |N|. Let M', N' be respectively the normal > forms of M and N. By confluence, |M'| = |N'|. The key property we need is > that given a Curry-style judgement, G |- P : A, where P is in normal form, > there is a unique Church-style term P* such that G |- P* : A and |P*| = P. > This means that M' = |M'|* = |N'|* = N'. > > Sam > > > On 10/08/12 12:34, Derek Dreyer wrote: >> >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> Hi, all. I'm having a hard time finding a (canonical) source for what >> I thought would be a very simple question. >> >> Consider the simply-typed lambda calculus (STLC), Church-style (with >> type annotations on variable binders), and beta-eta reduction. >> Beta-eta convertibility, which I'll write as M ~ N, means that M is >> convertible to N by some sequence of beta/eta reductions/expansions, >> which may very well go through ill-typed intermediate terms even if M >> and N are themselves well-typed. >> >> I want to show the following: >> >> If G |- M : A and G |- N : A, and M ~ N, then M and N have the same >> beta-eta (short) normal form. >> >> It is well known that well-typed terms in STLC are strongly >> normalizing wrt beta-eta reduction. But to prove the above, one also >> needs confluence. The problem is that the intermediate terms in the >> beta-eta conversion of M and N may be ill-typed, and >> beta-eta-reduction is not confluent in this setting for ill-typed >> terms. In particular, >> >> (\x:A.(\y:B.y)x) >> >> can eta-convert to >> >> \y:B.y >> >> or beta-convert to >> >> \x:A.x >> >> which are only equal if A = B. >> >> One strategy would be to prove that beta-eta-convertibility -- >> *between terms of the same type* -- going through ill-typed terms is >> equivalent to beta-eta convertibility restricted to going only through >> well-typed terms. Intuitively, this seems like it should be true, but >> it is not at all clear to me how to show it. In particular, the >> *between terms of the same type* part seems critical in order to >> outlaw the confluence counterexample given above. >> >> I found some discussion of some related questions in a thread on the >> Types list from 1989: >> >> http://article.gmane.org/gmane.comp.science.types/266 (and subsequent >> posts) >> >> But it doesn't seem to contain a direct answer to my question. >> >> Thanks! >> Derek >> > > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
> From dreyer at mpi-sws.org Sat Aug 11 08:13:43 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Sat, 11 Aug 2012 14:13:43 +0200 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: References: <50251D23.1090502@ed.ac.uk> Message-ID: A quick follow-up: The proof in Geuvers' LICS'92 paper is in fact a generalization of Sam's argument to a broader class of PTS's. If one instantiates Geuvers' proof to STLC, it degenerates precisely to Sam's proof, with the "key property" Sam mentioned being an instance of Lemma 3.12 in Geuvers' paper. Derek On Fri, Aug 10, 2012 at 5:48 PM, Derek Dreyer wrote: > Thanks, Sam! I think what you suggest here works. (You still have to > prove the "key property" you mention separately, by that seems to go > through by a relatively straightforward induction on typing > derivations.) > > Gilles Barthe also pointed me to Herman Geuvers' LICS'92 paper (and > also thesis), which proves the theorem I'm after in a more general > setting (for the class of "functional, normalizing PTS's"): > > http://www.cs.ru.nl/~herman/PUBS/LICS92_CRbh.ps.gz > > http://www.cs.ru.nl/~herman/PUBS/Proefschrift.ps.gz (see Chapter 5, > which contains an expanded/revised version of the proof in the LICS > paper) > > FYI: According to Geuvers, the confluence counterexample I gave is due > originally to Nederpelt (1973). > > Thanks, > Derek > > On Fri, Aug 10, 2012 at 4:39 PM, Sam Lindley wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> I think this is quite straightforward to show using the fact that >> beta-eta-reduction is confluent on untyped terms. >> >> Write |M| for the type-erasure of the Church-style simply-typed term M. >> >> If M ~ N, then certainly |M| ~ |N|. Let M', N' be respectively the normal >> forms of M and N. By confluence, |M'| = |N'|. The key property we need is >> that given a Curry-style judgement, G |- P : A, where P is in normal form, >> there is a unique Church-style term P* such that G |- P* : A and |P*| = P. >> This means that M' = |M'|* = |N'|* = N'. >> >> Sam >> >> >> On 10/08/12 12:34, Derek Dreyer wrote: >>> >>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >>> ] >>> >>> Hi, all. I'm having a hard time finding a (canonical) source for what >>> I thought would be a very simple question. >>> >>> Consider the simply-typed lambda calculus (STLC), Church-style (with >>> type annotations on variable binders), and beta-eta reduction. >>> Beta-eta convertibility, which I'll write as M ~ N, means that M is >>> convertible to N by some sequence of beta/eta reductions/expansions, >>> which may very well go through ill-typed intermediate terms even if M >>> and N are themselves well-typed. >>> >>> I want to show the following: >>> >>> If G |- M : A and G |- N : A, and M ~ N, then M and N have the same >>> beta-eta (short) normal form. >>> >>> It is well known that well-typed terms in STLC are strongly >>> normalizing wrt beta-eta reduction. But to prove the above, one also >>> needs confluence. The problem is that the intermediate terms in the >>> beta-eta conversion of M and N may be ill-typed, and >>> beta-eta-reduction is not confluent in this setting for ill-typed >>> terms. In particular, >>> >>> (\x:A.(\y:B.y)x) >>> >>> can eta-convert to >>> >>> \y:B.y >>> >>> or beta-convert to >>> >>> \x:A.x >>> >>> which are only equal if A = B. 
>>> >>> One strategy would be to prove that beta-eta-convertibility -- >>> *between terms of the same type* -- going through ill-typed terms is >>> equivalent to beta-eta convertibility restricted to going only through >>> well-typed terms. Intuitively, this seems like it should be true, but >>> it is not at all clear to me how to show it. In particular, the >>> *between terms of the same type* part seems critical in order to >>> outlaw the confluence counterexample given above. >>> >>> I found some discussion of some related questions in a thread on the >>> Types list from 1989: >>> >>> http://article.gmane.org/gmane.comp.science.types/266 (and subsequent >>> posts) >>> >>> But it doesn't seem to contain a direct answer to my question. >>> >>> Thanks! >>> Derek >>> >> >> >> -- >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> From psztxa at exmail.nottingham.ac.uk Tue Aug 14 05:13:24 2012 From: psztxa at exmail.nottingham.ac.uk (Altenkirch Thorsten) Date: Tue, 14 Aug 2012 10:13:24 +0100 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: Message-ID: I don't think the classical approach of showing decidability via strong normalisation and confluence of small step reduction is such a good choice. Indeed it ceases to work for systems with dependnet types and eta-reductions. Another issue is that it is hardly the way we would implement a decision procedure. An alternative are semantic methods, I.e. normalisation by evaluation. There is a weak version for combinatory logic which can be used to only implement beta reduction (but not xi or eta rules) or a strong one which does give you full beta eta equality. However, to extend strong NBE for more complex systems can be quite tricky too. I don't think I have seen a straightforward extension of the strong version to dependent types (I am saying this expecting somebody to scream :-). Another idea which James Chapman and I have investigated is to directly prove that the natural implementation of a decision procedure using big step semantics is correct. This also has the advantage that you actually prove the correctness of the implementation. This has been written up in James PhD and there is also a journal paper - see http://www.cs.nott.ac.uk/~txa/publ/jtait07.pdf Cheers, Thorsten On 11/08/2012 13:13, "Derek Dreyer" wrote: >[ The Types Forum, >http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >A quick follow-up: > >The proof in Geuvers' LICS'92 paper is in fact a generalization of >Sam's argument to a broader class of PTS's. If one instantiates >Geuvers' proof to STLC, it degenerates precisely to Sam's proof, with >the "key property" Sam mentioned being an instance of Lemma 3.12 in >Geuvers' paper. > >Derek > >On Fri, Aug 10, 2012 at 5:48 PM, Derek Dreyer wrote: >> Thanks, Sam! I think what you suggest here works. (You still have to >> prove the "key property" you mention separately, by that seems to go >> through by a relatively straightforward induction on typing >> derivations.) 
>> >> Gilles Barthe also pointed me to Herman Geuvers' LICS'92 paper (and >> also thesis), which proves the theorem I'm after in a more general >> setting (for the class of "functional, normalizing PTS's"): >> >> http://www.cs.ru.nl/~herman/PUBS/LICS92_CRbh.ps.gz >> >> http://www.cs.ru.nl/~herman/PUBS/Proefschrift.ps.gz (see Chapter 5, >> which contains an expanded/revised version of the proof in the LICS >> paper) >> >> FYI: According to Geuvers, the confluence counterexample I gave is due >> originally to Nederpelt (1973). >> >> Thanks, >> Derek >> >> On Fri, Aug 10, 2012 at 4:39 PM, Sam Lindley >>wrote: >>> [ The Types Forum, >>>http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> I think this is quite straightforward to show using the fact that >>> beta-eta-reduction is confluent on untyped terms. >>> >>> Write |M| for the type-erasure of the Church-style simply-typed term M. >>> >>> If M ~ N, then certainly |M| ~ |N|. Let M', N' be respectively the >>>normal >>> forms of M and N. By confluence, |M'| = |N'|. The key property we need >>>is >>> that given a Curry-style judgement, G |- P : A, where P is in normal >>>form, >>> there is a unique Church-style term P* such that G |- P* : A and |P*| >>>= P. >>> This means that M' = |M'|* = |N'|* = N'. >>> >>> Sam >>> >>> >>> On 10/08/12 12:34, Derek Dreyer wrote: >>>> >>>> [ The Types Forum, >>>>http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>> ] >>>> >>>> Hi, all. I'm having a hard time finding a (canonical) source for what >>>> I thought would be a very simple question. >>>> >>>> Consider the simply-typed lambda calculus (STLC), Church-style (with >>>> type annotations on variable binders), and beta-eta reduction. >>>> Beta-eta convertibility, which I'll write as M ~ N, means that M is >>>> convertible to N by some sequence of beta/eta reductions/expansions, >>>> which may very well go through ill-typed intermediate terms even if M >>>> and N are themselves well-typed. >>>> >>>> I want to show the following: >>>> >>>> If G |- M : A and G |- N : A, and M ~ N, then M and N have the same >>>> beta-eta (short) normal form. >>>> >>>> It is well known that well-typed terms in STLC are strongly >>>> normalizing wrt beta-eta reduction. But to prove the above, one also >>>> needs confluence. The problem is that the intermediate terms in the >>>> beta-eta conversion of M and N may be ill-typed, and >>>> beta-eta-reduction is not confluent in this setting for ill-typed >>>> terms. In particular, >>>> >>>> (\x:A.(\y:B.y)x) >>>> >>>> can eta-convert to >>>> >>>> \y:B.y >>>> >>>> or beta-convert to >>>> >>>> \x:A.x >>>> >>>> which are only equal if A = B. >>>> >>>> One strategy would be to prove that beta-eta-convertibility -- >>>> *between terms of the same type* -- going through ill-typed terms is >>>> equivalent to beta-eta convertibility restricted to going only through >>>> well-typed terms. Intuitively, this seems like it should be true, but >>>> it is not at all clear to me how to show it. In particular, the >>>> *between terms of the same type* part seems critical in order to >>>> outlaw the confluence counterexample given above. >>>> >>>> I found some discussion of some related questions in a thread on the >>>> Types list from 1989: >>>> >>>> http://article.gmane.org/gmane.comp.science.types/266 (and subsequent >>>> posts) >>>> >>>> But it doesn't seem to contain a direct answer to my question. >>>> >>>> Thanks! 
>>>> Derek >>>> >>> >>> >>> -- >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please send it back to me, and immediately delete it. Please do not use, copy or disclose the information contained in this message or in any attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. This message has been checked for viruses but the contents of an attachment may still contain software viruses which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. From dreyer at mpi-sws.org Tue Aug 14 06:04:58 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Tue, 14 Aug 2012 12:04:58 +0200 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: References: Message-ID: Hi, Thorsten. Yes, I'm well aware that the theorem (and proof) I was asking for are not very robust in the sense of being scalable to richer type systems, and using beta-eta normalization to implement equivalence is not very practical algorithmically. But in the particular case of Church-style STLC, I thought the proof of confluence of beta-eta normalization was a standard (old?) result. However, the only reference I can find is the Geuvers paper from 1992, which presents a much more involved proof for the more general class of PTS's. I'd be glad if anyone can point me to an older reference containing the simple proof that Sam suggested. Thanks, Derek On Tue, Aug 14, 2012 at 11:13 AM, Altenkirch Thorsten wrote: > I don't think the classical approach of showing decidability via strong > normalisation and confluence of small step reduction is such a good > choice. Indeed it ceases to work for systems with dependnet types and > eta-reductions. Another issue is that it is hardly the way we would > implement a decision procedure. > > An alternative are semantic methods, I.e. normalisation by evaluation. > There is a weak version for combinatory logic which can be used to only > implement beta reduction (but not xi or eta rules) or a strong one which > does give you full beta eta equality. However, to extend strong NBE for > more complex systems can be quite tricky too. I don't think I have seen a > straightforward extension of the strong version to dependent types (I am > saying this expecting somebody to scream :-). > > Another idea which James Chapman and I have investigated is to directly > prove that the natural implementation of a decision procedure using big > step semantics is correct. This also has the advantage that you actually > prove the correctness of the implementation. This has been written up in > James PhD and there is also a journal paper - see > http://www.cs.nott.ac.uk/~txa/publ/jtait07.pdf > > Cheers, > Thorsten > > On 11/08/2012 13:13, "Derek Dreyer" wrote: > >>[ The Types Forum, >>http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >>A quick follow-up: >> >>The proof in Geuvers' LICS'92 paper is in fact a generalization of >>Sam's argument to a broader class of PTS's. 
If one instantiates >>Geuvers' proof to STLC, it degenerates precisely to Sam's proof, with >>the "key property" Sam mentioned being an instance of Lemma 3.12 in >>Geuvers' paper. >> >>Derek >> >>On Fri, Aug 10, 2012 at 5:48 PM, Derek Dreyer wrote: >>> Thanks, Sam! I think what you suggest here works. (You still have to >>> prove the "key property" you mention separately, by that seems to go >>> through by a relatively straightforward induction on typing >>> derivations.) >>> >>> Gilles Barthe also pointed me to Herman Geuvers' LICS'92 paper (and >>> also thesis), which proves the theorem I'm after in a more general >>> setting (for the class of "functional, normalizing PTS's"): >>> >>> http://www.cs.ru.nl/~herman/PUBS/LICS92_CRbh.ps.gz >>> >>> http://www.cs.ru.nl/~herman/PUBS/Proefschrift.ps.gz (see Chapter 5, >>> which contains an expanded/revised version of the proof in the LICS >>> paper) >>> >>> FYI: According to Geuvers, the confluence counterexample I gave is due >>> originally to Nederpelt (1973). >>> >>> Thanks, >>> Derek >>> >>> On Fri, Aug 10, 2012 at 4:39 PM, Sam Lindley >>>wrote: >>>> [ The Types Forum, >>>>http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>>> >>>> I think this is quite straightforward to show using the fact that >>>> beta-eta-reduction is confluent on untyped terms. >>>> >>>> Write |M| for the type-erasure of the Church-style simply-typed term M. >>>> >>>> If M ~ N, then certainly |M| ~ |N|. Let M', N' be respectively the >>>>normal >>>> forms of M and N. By confluence, |M'| = |N'|. The key property we need >>>>is >>>> that given a Curry-style judgement, G |- P : A, where P is in normal >>>>form, >>>> there is a unique Church-style term P* such that G |- P* : A and |P*| >>>>= P. >>>> This means that M' = |M'|* = |N'|* = N'. >>>> >>>> Sam >>>> >>>> >>>> On 10/08/12 12:34, Derek Dreyer wrote: >>>>> >>>>> [ The Types Forum, >>>>>http://lists.seas.upenn.edu/mailman/listinfo/types-list >>>>> ] >>>>> >>>>> Hi, all. I'm having a hard time finding a (canonical) source for what >>>>> I thought would be a very simple question. >>>>> >>>>> Consider the simply-typed lambda calculus (STLC), Church-style (with >>>>> type annotations on variable binders), and beta-eta reduction. >>>>> Beta-eta convertibility, which I'll write as M ~ N, means that M is >>>>> convertible to N by some sequence of beta/eta reductions/expansions, >>>>> which may very well go through ill-typed intermediate terms even if M >>>>> and N are themselves well-typed. >>>>> >>>>> I want to show the following: >>>>> >>>>> If G |- M : A and G |- N : A, and M ~ N, then M and N have the same >>>>> beta-eta (short) normal form. >>>>> >>>>> It is well known that well-typed terms in STLC are strongly >>>>> normalizing wrt beta-eta reduction. But to prove the above, one also >>>>> needs confluence. The problem is that the intermediate terms in the >>>>> beta-eta conversion of M and N may be ill-typed, and >>>>> beta-eta-reduction is not confluent in this setting for ill-typed >>>>> terms. In particular, >>>>> >>>>> (\x:A.(\y:B.y)x) >>>>> >>>>> can eta-convert to >>>>> >>>>> \y:B.y >>>>> >>>>> or beta-convert to >>>>> >>>>> \x:A.x >>>>> >>>>> which are only equal if A = B. >>>>> >>>>> One strategy would be to prove that beta-eta-convertibility -- >>>>> *between terms of the same type* -- going through ill-typed terms is >>>>> equivalent to beta-eta convertibility restricted to going only through >>>>> well-typed terms. 
Intuitively, this seems like it should be true, but >>>>> it is not at all clear to me how to show it. In particular, the >>>>> *between terms of the same type* part seems critical in order to >>>>> outlaw the confluence counterexample given above. >>>>> >>>>> I found some discussion of some related questions in a thread on the >>>>> Types list from 1989: >>>>> >>>>> http://article.gmane.org/gmane.comp.science.types/266 (and subsequent >>>>> posts) >>>>> >>>>> But it doesn't seem to contain a direct answer to my question. >>>>> >>>>> Thanks! >>>>> Derek >>>>> >>>> >>>> >>>> -- >>>> The University of Edinburgh is a charitable body, registered in >>>> Scotland, with registration number SC005336. >>>> > > This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please send it back to me, and immediately delete it. Please do not use, copy or disclose the information contained in this message or in any attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. > > This message has been checked for viruses but the contents of an attachment > may still contain software viruses which could damage your computer system: > you are advised to perform your own checks. Email communications with the > University of Nottingham may be monitored as permitted by UK legislation. From andreas.abel at ifi.lmu.de Thu Aug 16 08:22:38 2012 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Thu, 16 Aug 2012 14:22:38 +0200 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: References: Message-ID: <502CE60E.5000800@ifi.lmu.de> On 14.08.2012 11:13, Altenkirch Thorsten wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > An alternative are semantic methods, I.e. normalisation by evaluation. > There is a weak version for combinatory logic which can be used to only > implement beta reduction (but not xi or eta rules) or a strong one which > does give you full beta eta equality. However, to extend strong NBE for > more complex systems can be quite tricky too. I don't think I have seen a > straightforward extension of the strong version to dependent types (I am > saying this expecting somebody to scream :-). Yes, me! Towards Normalization by Evaluation for the Calculus of Constructions Andreas Abel Tenth International Symposium on Functional and Logic Programming, FLOPS 2010, 19-21 April 2010, Sendai, Japan ? Springer. http://www2.tcs.ifi.lmu.de/~abel/publications.html#flops10 With computation on the type-level, it does not get much more straightforward... Cheers, Andreas > Another idea which James Chapman and I have investigated is to directly > prove that the natural implementation of a decision procedure using big > step semantics is correct. This also has the advantage that you actually > prove the correctness of the implementation. This has been written up in > James PhD and there is also a journal paper - see > http://www.cs.nott.ac.uk/~txa/publ/jtait07.pdf > > Cheers, > Thorsten > -- Andreas Abel <>< Du bist der geliebte Mensch. Theoretical Computer Science, University of Munich Oettingenstr. 
67, D-80538 Munich, GERMANY andreas.abel at ifi.lmu.de http://www2.tcs.ifi.lmu.de/~abel/ From psztxa at exmail.nottingham.ac.uk Thu Aug 16 08:42:12 2012 From: psztxa at exmail.nottingham.ac.uk (Altenkirch Thorsten) Date: Thu, 16 Aug 2012 13:42:12 +0100 Subject: [TYPES] decidability/confluence of beta-eta convertibility in Church-style STLC In-Reply-To: <502CE60E.5000800@ifi.lmu.de> Message-ID: Hmm, I thought it was you. :-) Is this really the generalisation of NBE for the simply typed lambda calculus to dependent types that we ought to expect? I mean, your construction involves PERs, but the strong NBE for STLC doesn't. Maybe that is because you do it for an impredicative system. Is there a PER-free version if we don't have impredicativity? Or, the other way around: what would the STLC version of your construction look like? In my view the main ingredient of NBE is the observation that the presheaf model of the simply typed lambda calculus is equivalent to the initial model (interpreting base types syntactically). Indeed, it is a proof-relevant generalisation of the completeness proof for Kripke models. I would expect that the same holds for dependent types. Thorsten On 16/08/2012 13:22, "Andreas Abel" wrote: >On 14.08.2012 11:13, Altenkirch Thorsten wrote: >> [ The Types Forum, >>http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> An alternative are semantic methods, I.e. normalisation by evaluation. >> There is a weak version for combinatory logic which can be used to only >> implement beta reduction (but not xi or eta rules) or a strong one which >> does give you full beta eta equality. However, to extend strong NBE for >> more complex systems can be quite tricky too. I don't think I have seen >>a >> straightforward extension of the strong version to dependent types (I am >> saying this expecting somebody to scream :-). > >Yes, me! > >Towards Normalization by Evaluation for the Calculus of Constructions > Andreas Abel > Tenth International Symposium on Functional and Logic Programming, >FLOPS 2010, 19-21 April 2010, Sendai, Japan. Springer. > > http://www2.tcs.ifi.lmu.de/~abel/publications.html#flops10 > >With computation on the type-level, it does not get much more >straightforward... > >Cheers, >Andreas > >> Another idea which James Chapman and I have investigated is to directly >> prove that the natural implementation of a decision procedure using big >> step semantics is correct. This also has the advantage that you actually >> prove the correctness of the implementation. This has been written up in >> James PhD and there is also a journal paper - see >> http://www.cs.nott.ac.uk/~txa/publ/jtait07.pdf >> >> Cheers, >> Thorsten >> > > >-- >Andreas Abel <>< Du bist der geliebte Mensch. > >Theoretical Computer Science, University of Munich >Oettingenstr. 67, D-80538 Munich, GERMANY > >andreas.abel at ifi.lmu.de >http://www2.tcs.ifi.lmu.de/~abel/
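(To make the weak versus strong NBE contrast in this thread a bit more concrete, here is a minimal normalisation-by-evaluation sketch for the simply typed lambda calculus, written in Haskell from scratch for this purpose; none of the names below are taken from the papers cited above. Reification is type-directed, which is what makes the output eta-long as well as beta-normal, i.e. this is the "strong" variant deciding beta-eta equality of closed terms by comparing normal forms.)

data Ty = Base | Arr Ty Ty              deriving (Eq, Show)
data Tm = Var Int | Lam Tm | App Tm Tm  deriving (Eq, Show)   -- de Bruijn indices

-- Semantic values: functions, or neutral terms kept as depth-indexed syntax.
data Sem = SFun (Sem -> Sem) | SSyn (Int -> Tm)

levelVar :: Int -> Int -> Tm
levelVar lvl depth = Var (depth - 1 - lvl)   -- de Bruijn level to index

apply :: Sem -> Sem -> Sem
apply (SFun f) s = f s
apply (SSyn _) _ = error "apply: neutral at arrow type (reflect at an arrow type instead)"

eval :: [Sem] -> Tm -> Sem
eval env (Var i)   = env !! i
eval env (Lam t)   = SFun (\s -> eval (s : env) t)
eval env (App t u) = apply (eval env t) (eval env u)

-- embed a neutral (depth-indexed) term as a semantic value at a given type
reflect :: Ty -> (Int -> Tm) -> Sem
reflect (Arr a b) n = SFun (\s -> reflect b (\k -> App (n k) (reify a s k)))
reflect Base      n = SSyn n

-- read back a semantic value as an eta-long, beta-normal term
reify :: Ty -> Sem -> Int -> Tm
reify (Arr a b) s   = \k -> Lam (reify b (apply s (reflect a (levelVar k))) (k + 1))
reify Base (SSyn n) = n
reify Base (SFun _) = error "reify: function value at base type"

-- normalise a closed term at a given type
nbe :: Ty -> Tm -> Tm
nbe ty t = reify ty (eval [] t) 0

-- Example: nbe (Arr (Arr Base Base) (Arr Base Base)) (Lam (Var 0))
--          yields Lam (Lam (App (Var 1) (Var 0))), the eta-long identity.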
From nicolas.pouillard at gmail.com Thu Aug 23 11:46:22 2012 From: nicolas.pouillard at gmail.com (Nicolas Pouillard) Date: Thu, 23 Aug 2012 15:46:22 +0000 Subject: [TYPES] [REMINDER] Agda Intensive Meeting 16th (AIM XVI) in Copenhagen Message-ID: <20120823154622.GE31258@mail-np.ts.ail> [We apologize for multiple copies] Hello, This is a reminder that the next instance of AIM approaches. It will be held at ITU in Copenhagen from October 3 to October 9. Registration is still open, and it is now a good time to propose talks and code sprint ideas. Everyone with a genuine interest in Agda is invited to attend. The meeting will be similar to previous ones: * Presentations concerning theory, implementation, and use cases of Agda. * Discussions around issues of the Agda language. * Plenty of time to work on or in Agda, in collaboration with the other participants. A few details can be found on the wiki page of the event: http://wiki.portal.chalmers.se/agda/pmwiki.php?n=Main.AIMXVI including pointers to recommended accommodations. Registration ------------ Although no official registration is needed, we want to know how many will attend. If you want to give a talk, please send us the title and an abstract. To register, please reply to us (our email is aim16 at ?.net, or, if you are not a Unicode aficionado, aim16 at xn--dia.net), filling out the following form: ---------------8<---------------------------------------- Name: Affiliation: email: Program: * I'd like to give a talk or lead a discussion (yes/no): Title: Abstract: (optional) * Suggestion for code-sprint (optional): ---------------8<---------------------------------------- See you in København, -- Nicolas Pouillard Daniel Gustafsson From kyagrd at gmail.com Tue Oct 16 19:38:55 2012 From: kyagrd at gmail.com (Ahn, Ki Yung) Date: Tue, 16 Oct 2012 16:38:55 -0700 Subject: [TYPES] Free theorems for maps of higher rank and polarized systems? Message-ID: <507DF00F.1080405@gmail.com> I have some questions on free theorems for maps. Most people may believe these results, but I am having difficulty citing where an actual proof is written down, or whether they are an instance of a more general theorem, or whether they have ever actually been proved. I'd greatly appreciate pointers to related papers. Question 1. What is a proper citation for free theorems for fmap (fmap is the Haskell-ish term for generic maps) at rank 1 (* -> *) and rank 2 (such as (* -> *) -> (* -> *))? Wadler's "Theorems for free!" paper discusses map for lists, but everyone seems to believe it holds for an arbitrary structure of rank 1, that is, F : * -> *. We can formalize this idea in Fw as follows: If there exists fmap : forall (f:*->*)(a:*)(b:*). (a -> b) -> f a -> f b then (1) any such fmap satisfies the expected functor laws, i.e., fmap id = id and fmap f . fmap g = fmap (f . g), and (2) such an fmap is unique. A similar statement could be made for the rank 2 version of this. These generic maps are also called monotonicity witnesses. The papers on monotonicity that I've seen don't exactly prove (1), but since they use monotonicity witnesses to encode iterations or folds, this may be considered an indirect proof of the properties in (1). Still, it would be nice to know if you have seen proofs of statements in the form of (1) above, and also of uniqueness (2); I'd appreciate pointers to such literature. Matthes' CSL'98 paper discusses that monotonicity witnesses for rank 2 had not been proved yet and that this was an open question. Has this been proved afterwards, for rank 2 or higher?
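(For what it is worth, property (1) can at least be written down as executable Haskell checks for the list instance; the code below is mine and only pins down the statement. Parametricity is what is claimed to make the laws hold for *any* function of fmap's type; the tests merely illustrate them for the standard map.)

fmapList :: (a -> b) -> [a] -> [b]
fmapList = map

-- first functor law: fmap id = id
prop_id :: [Int] -> Bool
prop_id xs = fmapList id xs == xs

-- second functor law: fmap f . fmap g = fmap (f . g), checked for two sample functions
prop_compose :: [Int] -> Bool
prop_compose xs =
  fmapList ((+ 1) . (* 2)) xs == (fmapList (+ 1) . fmapList (* 2)) xs

main :: IO ()
main = print (all prop_id tests && all prop_compose tests)
  where tests = [[], [1, 2, 3], [0, -5, 42]]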
Question 2. Are fully(?) positively polarized type constructors monotone in general? We can track the polarities of uses of type constructor variables in Fw by annotating kinds with +, -, or 0 (meaning the variable can be used in both + and - positions), e.g., +* -> *, -* -> *, 0* -> *. Systems like Fixw in the CSL'04 paper by Abel and others are examples of polarized systems. Then, in such systems, I conjecture that type constructors of kinds such as +* -> * and +(+* -> *) -> (+* -> *), which have + polarities everywhere, must always have a monotonicity witness. This is a slightly more general version of strict positivity because it counts a position that is negative within a negative position as positive. If you have seen anyone prove this, again, I would very much appreciate pointers to those materials. Thanks in advance. Ki Yung From george.cherevichenko at gmail.com Sun Nov 4 15:43:49 2012 From: george.cherevichenko at gmail.com (George Cherevichenko) Date: Mon, 5 Nov 2012 00:43:49 +0400 Subject: [TYPES] Please give me a link. Is alpha-conversion easy or not? Message-ID: Hi Where can I find a precise description of how to interpret typed lambda calculi with named variables in cartesian closed categories? Or how to interpret lambda calculi with named variables in lambda calculi with De Brujn indices? I know how to do that, but I need a link. I need proofs of
I need proofs of > the following facts: > 1) Alpha-equal terms correspond to the same term with De Brujn indices (or > the same arrow in CCC). > 2) Substitutions with named variables correspond to substitutions with De > Brujn indices. > I read Altenkirch "Alpha-conversion is easy", should I only refer to this > article? > Please, no nominal logic. > George From michael.norrish at nicta.com.au Sun Nov 4 18:36:04 2012 From: michael.norrish at nicta.com.au (Michael Norrish) Date: Mon, 05 Nov 2012 10:36:04 +1100 Subject: [TYPES] Please give me a link. Is alpha-conversion easy or not? In-Reply-To: References: Message-ID: <5096FBE4.206@nicta.com.au> The following ignores the category theory part of your question. See: Shankar, A mechanical proof of the Church-Rosser theorem. Journal of the ACM, 35(3):475?522, 1988, where a proof is given of the correspondence between beta on de Bruijn terms and on terms with ?raw? (unquotiented) names. You might tie this result into another establishing a connection between the raw and quotiented syntax. Alternatively, my paper with Ren? Vestergaard proves the connection between de Bruijn beta and quotiented beta directly (there is a dash of nominal stuff in there, though). That paper is Norrish & Vestergaard. Proof Pearl: de Bruijn terms really work. Proceedings of TPHOLs 2007, LNCS 4732. (Available from http://nicta.com.au/people/norrishm/attachments/bibliographies_and_papers/2007/tphols2007.pdf) Note that authors from Shankar onwards have not bothered to define arbitrary substitution on dB terms, but have defined a version that assumes they?re substituting into a term underneath a ?-abstraction. This means that when the substitution encounters a small enough variable (which is not the one they?re looking for), its value is decremented to reflect the fact that it is no longer under a ?. This notion of substitution on de Bruijn terms doesn?t correspond to anything nice over ?real ?-terms?. I hope these references help, Michael On 05/11/12 07:43, George Cherevichenko wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > Hi > Where can I find a precise description how to interpret typed lambda > calculi with named variables in cartesian closed categories? Or how to > interpret lambda calculi with named variables in lambda calculi with De > Brujn indices? I know how to do that, but I need a link. I need proofs of > the following facts: > 1) Alpha-equal terms correspond to the same term with De Brujn indices (or > the same arrow in CCC). > 2) Substitutions with named variables correspond to substitutions with De > Brujn indices. > I read Altenkirch "Alpha-conversion is easy", should I only refer to this > article? > Please, no nominal logic. > George From dreyer at mpi-sws.org Wed Dec 5 08:42:50 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 5 Dec 2012 14:42:50 +0100 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? Message-ID: I have what appears to be a counterexample to the characterization of equivalence at existential type in Plotkin and Abadi's logic for parametric polymorphism, and I'm wondering what's going on. 
Here is a link to their TLCA'93 paper from Gordon's web page: http://homepages.inf.ed.ac.uk/gdp/publications/Par_Poly.pdf In short, Theorem 7 in their paper says that two terms of existential type are equal *if and only if* there exists a "simulation relation" between their internal representations of the abstract type such that the operations are logically related at the appropriate type (interpreting the abstract type using the chosen simulation relation). The "if" direction is fine: that's just the well-known principle of "representation independence". It's the "only if" direction that I'm confused about. In particular, it seems to me that, using the "only if" direction of Theorem 7, I can derive a contradiction in the logic, so I'm trying to figure out where/if I screwed up. The issue is the following: There is an example given in Pitts' chapter in Pierce's ATTAPL book, namely Sumii's variation on Example 7.7.4 discussed in Remark 7.7.7. (I will reproduce it below.) The entire point of the example, which in Sumii's variation applies to pure System F, is that it shows a case where two existential packages are contextually equivalent despite the fact that there is no simulation relation between their type representations that would make a simulation argument (i.e. representation independence proof) go through. Pitts proves that there is no simulation proof, and he also sketches the proof of contextual equivalence of the packages through brute-force means. Now, by itself, Pitts' result is not in contradiction to Theorem 7 from the Plotkin-Abadi logic (PAL) paper, since PAL does not talk about contextual equivalence: it talks about its own primitive notion of equality. So one might write off this example as a quirk of contextual equivalence which is not relevant to the better-behaved PAL equality. However, in March 2008, I worked out (and privately circulated) a *much* simpler proof of the contextual equivalence in question, which is not brute force: rather, it relies on a simple transitive combination of two simulation subproofs (see below). Now it seems to me that there should be no problem expressing my simulation proofs in PAL, and therefore constructing a proof of equivalence of the existential packages in the example (by transitivity of PAL's equality), despite the fact that there is provably no direct simulation argument that shows them equivalent. So, unless I'm confused (which I probably am), this would seem to contradict the "only if" direction of Theorem 7. One basic problem I have is that I do not understand the proof of the "only if" direction of Theorem 7 in PAL -- it is not presented in the paper (which merely suggests that it follows from the "parametricity schema"), and I have not yet found a worked-out proof of it in any follow-on paper either. If someone can point me to such a proof, that would be most helpful. If not, how would one prove it? Alternatively, have I gone wrong somewhere in my argument that it is false? The details of the "counterexample" are given below. Thanks and best regards, Derek ------- First, here are the details of Pitts' example (the Sumii variation): Q = Exists X. (X -> Bool) -> Bool Void = Forall X. X H1 = \f:Void->Bool. False P1 = pack [Void,H1] as Q H2 = \f:Bool->Bool. (f True) and not (f False) P2 = pack [Bool,H2] as Q We want to prove that P1 is equivalent to P2. Intuitively, they are equivalent because the only possible function that could be passed in as the instantiation of f is the constant function returning True or False. 
In either case, both H1(f) and H2(f) will return False. A direct simulation proof does not exist. To see this, suppose one did exist: it would involve a simulation relation between Void and Bool, together with a proof that H1 and H2 are related at "(E -> Bool) -> Bool". 1. Void is an empty type, so the only possible simulation relation between Void and Bool is the empty relation E. 2. Given E as the necessary choice of simulation relation, we can choose f1 to be any function of type Void -> Bool and we can choose f2 to be the identity function at Bool -> Bool. Since E is empty, it is trivial to show that f1 and f2 are logically related at "E -> Bool". However, H1(f1) returns False, while H2(f2) returns True, in which case they are not logically related at Bool. So H1 and H2 are *not* logically related at "(E -> Bool) -> Bool". Contradiction. OK, now I will prove that P1 and P2 are in fact equivalent by giving a *pair* of simulation proofs demonstrating that each of them is equivalent to P3, defined as follows: H3 = \f:Bool->Bool. (f True) and not (f True) P3 = pack [Bool,H3] as Q To prove P1 equivalent to P3 by simulation argument: 1. We choose the simulation relation to be the empty relation E between Void and Bool (that's all we can choose). 2. Let f1 and f3 be functions of type Void -> Bool and Bool -> Bool related at "E -> Bool". We must show H1(f1) and H3(f3) are related at Bool. Clearly, H1(f1) = False, so we must show H3(f3) = False. We know that (f3 True) has type Bool, so it either equals True or False (the logical relations lemma tells us that, if nothing else). In either case, H3(f3) = False, so we are done. To prove P3 equivalent to P2 by simulation argument: 1. We choose the simulation relation to be the full relation R between Bool and Bool. Choosing R = {(True,True),(True,False)} would also work. 2. Let f3 and f2 be functions, both of type Bool -> Bool, related at "R -> Bool". We must show H3(f3) and H2(f2) related at Bool, i.e. that H3(f3) = H2(f2). Since (True,True) and (True,False) are both in R, we have from the relatedness of f3 and f2 that (f3 True) = (f2 True) and (f3 True) = (f2 False). Hence, (f2 True) = (f3 True) = (f2 False), and so H3(f3) = H2(f2) = False. Thus, P1 = P3 = P2, which implies P1 = P2 by transitivity, but there is no direct simulation proof of P1 = P2, which seems to contradict the "only if" direction of Theorem 7. From u.s.reddy at cs.bham.ac.uk Wed Dec 5 10:05:42 2012 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Wed, 5 Dec 2012 15:05:42 +0000 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: References: Message-ID: <20671.25286.166000.378985@gargle.gargle.HOWL> Derek Dreyer writes: > In short, Theorem 7 in their paper says that two terms of existential > type are equal *if and only if* there exists a "simulation relation" > between their internal representations of the abstract type such that > the operations are logically related at the appropriate type > (interpreting the abstract type using the chosen simulation relation). The Theorem 7 doesn't quite say that. u = (pack X x) means that make up *some* representation of the abstract data type u. X and x have no reason to be the same as what you regard as "the internal representation" of u. To put another way, the elements u of an existential type are *equivalence classes* of type representations. 
The Theorem 7 says that a simulation relation exists between two cleverly chosen representatives of such equivalence classes. In my papers, I avoid the notation (pack X x), which is open to misinterpretation, and use two separate notations - for representations <|X, x|> - for equivalence classes of representations. See for instance, "Objects and classes in Algol-like languages", Sec. 5.1. http://www.cs.bham.ac.uk/~udr/papers/objects-and-classes.pdf. Cheers, Uday From dreyer at mpi-sws.org Wed Dec 5 11:24:20 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 5 Dec 2012 17:24:20 +0100 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: <20671.25286.166000.378985@gargle.gargle.HOWL> References: <20671.25286.166000.378985@gargle.gargle.HOWL> Message-ID: Oh, of course! I was reading "u = pack X x" and "v = pack Y y" in Theorem 7 as syntactic equality, when they're really just PAL equalities, which could themselves be established by simulation arguments. So Theorem 7 really allows for u and v, the equivalent existential packages, to be related by a transitive composition of simulation proofs, as my "counterexample" demonstrates is necessary. Thanks for the clarification, Uday! Having said that, however, now I'm wondering about something else. It seems to me that, at least when restricted to closed types (and possibly more generally), all that the "only if" direction of Theorem 7 shows is that every term of existential type is equal to *some* term of the form "pack X x". In particular, it does *not* seem to imply that equivalent packages are related by a transitive composition of simulation arguments. That is, suppose pack B1 b1 = pack B2 b2 : Exists X. A[X] (with A[X] having only free variable X). The "only if" direction of Theorem 7 doesn't actually tell us anything (new) about the relationship between b1 and b2 because there's a trivial solution for the existentially bound variables on the rhs. I could have just chosen X and Y to *both* be B1, x and y to *both* be b1, and S to be the identity relation on B1. It's certainly true that pack B1 b1 = pack B1 b1, that pack B2 b2 = pack B1 b1, and that b1 is logically related to b1 at A[B1]. The first is trivial, the second follows from the assumption, and the third follows from identity extension. So, it seems that for existential types, knowing that two package terms (pack B1 b1 and pack B2 b2 above) are logically related at existential type does not reveal anything useful about the relation between the terms inside the packages (b1 and b2 above). Or does it? Incidentally, this corresponds to my intuition about equivalence at existential type, but I was under the impression that Theorem 7 was stating something stronger. Now perhaps what I said only applies to closed types A? I'm not sure, but it makes me wonder if the simulation condition (the last part of the theorem that says x and y are related at A[S,eq_Zvec]) really adds anything in the "only if" direction. Thanks, Derek On Wed, Dec 5, 2012 at 4:05 PM, Uday S Reddy wrote: > Derek Dreyer writes: > >> In short, Theorem 7 in their paper says that two terms of existential >> type are equal *if and only if* there exists a "simulation relation" >> between their internal representations of the abstract type such that >> the operations are logically related at the appropriate type >> (interpreting the abstract type using the chosen simulation relation). > > The Theorem 7 doesn't quite say that. 
> u = (pack X x) means that X and x make up *some* representation of the
> abstract data type u. X and x have no reason to be the same as what you
> regard as "the internal representation" of u.
>
> To put it another way, the elements u of an existential type are
> *equivalence classes* of type representations. The Theorem 7 says that a
> simulation relation exists between two cleverly chosen representatives of
> such equivalence classes.
>
> In my papers, I avoid the notation (pack X x), which is open to
> misinterpretation, and use two separate notations:
>
>   <X, x>   - for representations
>   <|X, x|> - for equivalence classes of representations.
>
> See, for instance, "Objects and classes in Algol-like languages", Sec. 5.1.
> http://www.cs.bham.ac.uk/~udr/papers/objects-and-classes.pdf.
>
> Cheers,
> Uday

From u.s.reddy at cs.bham.ac.uk Wed Dec 5 12:31:47 2012 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Wed, 5 Dec 2012 17:31:47 +0000 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: References: <20671.25286.166000.378985@gargle.gargle.HOWL> Message-ID: <20671.34051.322000.758417@gargle.gargle.HOWL>

Derek Dreyer writes:

> Thanks for the clarification, Uday!

My pleasure. I think syntactic reasoning can be mysterious, and it is only clarified when we consider semantics. Consider that a plug for semantics!

> Having said that, however, now I'm wondering about something else. It
> seems to me that, at least when restricted to closed types (and
> possibly more generally), all that the "only if" direction of Theorem
> 7 shows is that every term of existential type is equal to *some* term
> of the form "pack X x". In particular, it does *not* seem to imply
> that equivalent packages are related by a transitive composition of
> simulation arguments.

Once again, syntactic reasoning locks in mysteries here. The remark following Theorem 7 provides the only way in PAL to show that two terms of an existential type are equal. So, if you managed to prove that two such terms are equal in PAL, you would have constructed a transitive composition of simulation arguments. So, the property you want follows as a metatheorem about PAL. (If we were giving a semantics, we would be forced to write that down explicitly. But, in a formal system, it is implicit.)

> That is, suppose pack B1 b1 = pack B2 b2 : Exists X. A[X] (with A[X]
> having only free variable X). The "only if" direction of Theorem 7
> doesn't actually tell us anything (new) about the relationship between
> b1 and b2 because there's a trivial solution for the existentially
> bound variables on the rhs.

That is indeed right. For closed types, the "only if" direction is trivial. However, for open types, it is not. You would notice in Theorem 7 the additional condition that x and y have to be related by A[S,rho]. That plays a significant role.

Cheers,
Uday

From dreyer at mpi-sws.org Wed Dec 5 12:46:42 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 5 Dec 2012 18:46:42 +0100 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: <20671.34051.322000.758417@gargle.gargle.HOWL> References: <20671.25286.166000.378985@gargle.gargle.HOWL> <20671.34051.322000.758417@gargle.gargle.HOWL> Message-ID:

> Once again, syntactic reasoning locks in mysteries here. The remark
> following Theorem 7 provides the only way in PAL to show that two terms of
> an existential type are equal.
So, if you managed to prove that two such > terms are equal in PAL, you would have constructed a trasitive composition > of simulation arguments. So, the property you want follows as a metatheorem > about PAL. So how do you show this? How do you *prove* that simulation is the only way to prove that two terms of existential type are equal? I don't see how it follows from Theorem 7. > That is indeed right. For closed types, the "only if" direction is trivial. > However, for open types, it is not. You would notice in Theorem 7 the > additional condition that x and y have to be related by A[S,rho]. That > plays a significant role. This is fascinating, but I still don't understand concretely what one can "do" with the "only if" direction in the case of open types. Thanks, Derek From kevin.watkins at gmail.com Wed Dec 5 13:50:52 2012 From: kevin.watkins at gmail.com (Kevin Watkins) Date: Wed, 5 Dec 2012 13:50:52 -0500 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: References: <20671.25286.166000.378985@gargle.gargle.HOWL> <20671.34051.322000.758417@gargle.gargle.HOWL> Message-ID: A naive question: I wonder if anything has been written on the homotopy theory of these notions of equality? Would a homotopical semantics capture at least part of the syntactic information regarding "how" two existential packages are equal? On Wed, Dec 5, 2012 at 12:46 PM, Derek Dreyer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >> Once again, syntactic reasoning locks in mysteries here. The remark >> following Theorem 7 provides the only way in PAL to show that two terms of a >> an existential type are equal. So, if you managed to prove that two such >> terms are equal in PAL, you would have constructed a trasitive composition >> of simulation arguments. So, the property you want follows as a metatheorem >> about PAL. > > So how do you show this? How do you *prove* that simulation is the > only way to prove that two terms of existential type are equal? I > don't see how it follows from Theorem 7. > >> That is indeed right. For closed types, the "only if" direction is trivial. >> However, for open types, it is not. You would notice in Theorem 7 the >> additional condition that x and y have to be related by A[S,rho]. That >> plays a significant role. > > This is fascinating, but I still don't understand concretely what one > can "do" with the "only if" direction in the case of open types. > > Thanks, > Derek From rwh at cs.cmu.edu Wed Dec 5 14:55:20 2012 From: rwh at cs.cmu.edu (Robert Harper) Date: Wed, 5 Dec 2012 14:55:20 -0500 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: References: <20671.25286.166000.378985@gargle.gargle.HOWL> <20671.34051.322000.758417@gargle.gargle.HOWL> Message-ID: <2B4794D2-3003-469C-8755-6FAD0D4D1B64@cs.cmu.edu> Oh, yes, indeed we were talking about this last week at the IAS: one can consider a bisimulation of packages to be, by univalence, a proof of their equality. Dan Licata has written about this already, and presented a talk about it at WG2.8 last month. He and I have been working on this off-and-on for about a year. Bob On Dec 5, 2012, at 1:50 PM, Kevin Watkins wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > A naive question: I wonder if anything has been written on the > homotopy theory of these notions of equality? 
Would a homotopical > semantics capture at least part of the syntactic information regarding > "how" two existential packages are equal? > > On Wed, Dec 5, 2012 at 12:46 PM, Derek Dreyer wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >>> Once again, syntactic reasoning locks in mysteries here. The remark >>> following Theorem 7 provides the only way in PAL to show that two terms of a >>> an existential type are equal. So, if you managed to prove that two such >>> terms are equal in PAL, you would have constructed a trasitive composition >>> of simulation arguments. So, the property you want follows as a metatheorem >>> about PAL. >> >> So how do you show this? How do you *prove* that simulation is the >> only way to prove that two terms of existential type are equal? I >> don't see how it follows from Theorem 7. >> >>> That is indeed right. For closed types, the "only if" direction is trivial. >>> However, for open types, it is not. You would notice in Theorem 7 the >>> additional condition that x and y have to be related by A[S,rho]. That >>> plays a significant role. >> >> This is fascinating, but I still don't understand concretely what one >> can "do" with the "only if" direction in the case of open types. >> >> Thanks, >> Derek From u.s.reddy at cs.bham.ac.uk Wed Dec 5 17:44:08 2012 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Wed, 5 Dec 2012 22:44:08 +0000 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: References: <20671.25286.166000.378985@gargle.gargle.HOWL> <20671.34051.322000.758417@gargle.gargle.HOWL> Message-ID: <20671.52792.285000.956377@gargle.gargle.HOWL> Derek Dreyer writes: > So how do you show this? How do you *prove* that simulation is the > only way to prove that two terms of existential type are equal? I > don't see how it follows from Theorem 7. It doesn't follow from Theorem 7. It can't. Notice that Theorem 7 is a theorem *within* PAL. (It is really like a derived proof rule.) The property you want is a metatheorem *about* PAL. It says that the only proofs of equality within PAL will be transitive compositions of simulation proofs. I don't expect that proving that formally will be easy. It will have to be some kind of a proof reduction argument, along the lines of "if there is a proof of u = v then there is a proof of u = v of this particular form...". It might have to run through all the axioms and proof rules of PAL and argue that nothing else is possible. However, intuitively it is easy to see (I hope) that the only way to prove the equality of terms of existential types is via Theorem 7, and it allows you to prove it using simulation relations. So, in a finite proof, you can only have a finite number of such simulation relation steps. > > That is indeed right. For closed types, the "only if" direction is trivial. > > However, for open types, it is not. You would notice in Theorem 7 the > > additional condition that x and y have to be related by A[S,rho]. That > > plays a significant role. > > This is fascinating, but I still don't understand concretely what one > can "do" with the "only if" direction in the case of open types. My point was this: In Theorem 7, you find \vecB and \vecC as free type variables. That means that A[X, \vecB] and A[Y, \vecC] are different types, and you can't pick x and y to be the same as you did in the closed types case. 
So, the "only if" part of the general statement of Theorem 7 is not trivial in general, but it becomes trivial for the closed types case. As to what you can "do" with the "only if" direction, I don't know. I was assuming that you had some application in mind since you raised this. The "only if" directions are essentially stating completeness of the respective proof rules. So, they could presumably be used to *disprove* that two terms are related by showing that all possible ways proving them related are exhausted. Cheers, Uday From dreyer at mpi-sws.org Thu Dec 6 03:03:42 2012 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Thu, 6 Dec 2012 09:03:42 +0100 Subject: [TYPES] a counterexample to Plotkin and Abadi's characterization of equivalence at existential type? In-Reply-To: <20671.52792.285000.956377@gargle.gargle.HOWL> References: <20671.25286.166000.378985@gargle.gargle.HOWL> <20671.34051.322000.758417@gargle.gargle.HOWL> <20671.52792.285000.956377@gargle.gargle.HOWL> Message-ID: > As to what you can "do" with the "only if" direction, I don't know. I was > assuming that you had some application in mind since you raised this. The > "only if" directions are essentially stating completeness of the respective > proof rules. So, they could presumably be used to *disprove* that two terms > are related by showing that all possible ways proving them related are > exhausted. What I find surprising and unintuitive somehow is that we have a theorem which could presumably be used for such "disproving" purposes only in the case that the types are open, and not in the specific case when they are closed. (Off the top of my head, I can't think of other examples of this phenomenon, but it's early in the morning for me. ;-) Incidentally, as I wrote before, I don't think the "only if" direction of Theorem 7 is trivial even in the closed types case: it *does* give you the "canonical forms" property that every term of existential type is equal to some "pack X x". It just doesn't seem to give you anything else in that case. Derek > > Cheers, > Uday > From andreas.abel at ifi.lmu.de Mon Dec 10 19:34:53 2012 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Tue, 11 Dec 2012 01:34:53 +0100 Subject: [TYPES] Strong normalization of overlapping rewrite rules Message-ID: <50C67FAD.8010707@ifi.lmu.de> The typical normalization proof for a typed lambda calculus with rewrite rules requires the rules to be left-linear and non-overlapping. What is known about left-linear but /overlapping/ rewriting? [References appreciated.] Best, Andreas -- Andreas Abel <>< Du bist der geliebte Mensch. Theoretical Computer Science, University of Munich Oettingenstr. 67, D-80538 Munich, GERMANY andreas.abel at ifi.lmu.de http://www2.tcs.ifi.lmu.de/~abel/ From frederic.blanqui at inria.fr Tue Dec 11 04:47:38 2012 From: frederic.blanqui at inria.fr (Frederic Blanqui) Date: Tue, 11 Dec 2012 17:47:38 +0800 Subject: [TYPES] Strong normalization of overlapping rewrite rules In-Reply-To: <50C67FAD.8010707@ifi.lmu.de> References: <50C67FAD.8010707@ifi.lmu.de> Message-ID: <50C7013A.3070704@inria.fr> Hello Andreas. In fact, all the works I know on the combination of (typed) lambda-calculus and first or higher-order rewriting consider arbitrary rewrite rules, thus including non-left-linear and overlapping rules. This includes the works of Breazu-Tannen, Gallier, Okada, Barbanera, Dougherty, Jouannaud, Fernandez, Geuvers, Barthe, Rubio, Walukiewicz, ... and myself. You can for instance find some references in my MSCS'05 paper, pages 5-6. 
Frederic.

On 11/12/2012 08:34, Andreas Abel wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> The typical normalization proof for a typed lambda calculus with
> rewrite rules requires the rules to be left-linear and
> non-overlapping. What is known about left-linear but /overlapping/
> rewriting? [References appreciated.]
>
> Best,
> Andreas
>
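For a concrete instance of the kind of system Andreas asks about, the standard example is "parallel or": the rules below are left-linear (no variable occurs twice in a left-hand side) but overlapping (the first two left-hand sides both match or(True, True)). The example is only illustrative and is not taken from the papers cited above.

  or(True, x)      -> True
  or(x, True)      -> True
  or(False, False) -> False

The only critical pair, at or(True, True), is trivial (both rules rewrite it to True), so the system is weakly orthogonal; and since the rules are first-order and terminating, modularity results of the kind Frederic cites (e.g. Breazu-Tannen and Gallier) give strong normalization of its combination with typed lambda-calculus, even though the non-overlapping hypothesis of the "typical" proof does not hold.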