From oleg at okmij.org Fri Jan 16 04:40:22 2009 From: oleg at okmij.org (oleg@okmij.org) Date: Fri, 16 Jan 2009 01:40:22 -0800 (PST) Subject: [TYPES] First mechanized proofs of type soundness of delimited control Message-ID: <20090116094022.6F043AA92@Adric.metnet.fnmoc.navy.mil>

We present two very different mechanized proofs of soundness of a type-and-effect system for the delimited control operators shift/reset. Our calculus is a call-by-value simply-typed lambda-calculus with constants and answer-type-modifying delimited control. The dynamic semantics is small-step. This is the calculus originally proposed by Danvy and Filinski, enhanced by Asai and Kameyama (APLAS07) with let-bound polymorphism. The constants of the calculus may include natural numbers, arithmetic, and the fixpoint.

The type of a general expression specifies not only the type of the produced value but also the _effect_ of the expression. In the simplest case, the effect may be the raising of an exception; more complex effects include co-routine invocations or backtracking. The type of a functional value is not just a->b but rather a/t1->b/t2, so as to describe the effects that may occur when the function is applied. Despite the more complex expression and arrow types, all types are inferred and no type annotations are required.

This is joint work with Kenichi Asai, Noriko Hirota, Yukiyoshi Kameyama and Chung-chieh Shan.

One mechanized soundness proof is done in Coq with a locally nameless methodology. The calculus includes the let-bound polymorphism but omits, for simplicity, the fixpoint and constants. This development deals only with meta-theory: it proves subject reduction and progress properties, but does not infer types and does not evaluate expressions. The complete Coq development is available at http://okmij.org/ftp/formalizations/shift-Coq/README.dr

We have also mechanized type soundness in Twelf, using an approach that differs from the ways of mechanizing meta-theory in Twelf known to us.
We present not only progress and subject reduction proofs but also the type inferencer and the evaluator: we can run sample terms and see their types. Our code includes the standard shift/reset test suite. It is Twelf itself that infers and checks object-language types. We did not need to specify the type-checking relation, since terms not typeable in the object language cannot even be built. More remarkably, the one-step evaluator of our calculus is also the proof term of type soundness. In other words, the very _proof_ of type soundness can be used to evaluate sample expressions. Proofs literally have a practical use.

The functional relation eval for one-step reduction is defined as

  eval : {E:exp T} non-value E -> exp T -> type.

That definition itself is the statement of subject reduction: any expression E of type T which is not a value is reduced to an expression of the same type T. Furthermore, eval is total: _every_ non-value can be further reduced. This is the statement of progress. We supply the totality declaration, and Twelf agrees.

The Twelf development is available at http://okmij.org/ftp/formalizations/tagless-dl.elf

Here is an introduction to the tagless typed approach to mechanizing type soundness in Twelf, for the simply typed lambda-calculus: http://okmij.org/ftp/formalizations/tagless-l.elf

We have also tried Isabelle/HOL. Without the nominal package, Kenichi Asai has formalized the Danvy/Filinski calculus with the type-and-effect system, without let-polymorphism. He has proved subject reduction. http://okmij.org/ftp/formalizations/shift-Isabelle/README.dr

To introduce let-polymorphism into this framework, one has to take alpha-renaming more seriously. He tried Nominal Isabelle for this purpose, but without success, because of the eigenvariable problem in a proof tree. Bound variables in terms are treated nicely by the nominal package, but a similar problem arose for a variable, introduced within a proof tree, that has to be fresh.
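[Editorial note: the "tagless" intrinsically typed style described above can be sketched outside Twelf as well. Below is a minimal Haskell analogue using GADTs; it illustrates the general technique, not the Twelf development itself, and the `Exp` constructors (including the `Val` inclusion of host values) are illustrative choices. Ill-typed object terms cannot be constructed, and the mere type of the evaluator is a statement of type preservation.]

```haskell
{-# LANGUAGE GADTs #-}

-- Object-language terms indexed by their object-language type.
-- Ill-typed terms are not representable, so no separate
-- type-checking relation is needed.
data Exp t where
  Lit :: Int -> Exp Int
  Add :: Exp Int -> Exp Int -> Exp Int
  Val :: a -> Exp a                        -- inclusion of (host) values
  Lam :: (Exp a -> Exp b) -> Exp (a -> b)  -- higher-order abstract syntax
  App :: Exp (a -> b) -> Exp a -> Exp b

-- The type of eval, Exp t -> t, already expresses preservation:
-- a term of object type t evaluates to a value of that same type t.
eval :: Exp t -> t
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Val v)   = v
eval (Lam f)   = \x -> eval (f (Val x))
eval (App f a) = eval f (eval a)
```

For instance, `eval (App (Lam (\x -> Add x (Lit 1))) (Lit 41))` yields `42`, while a term such as `App (Lit 1) (Lit 2)` is rejected by the host-language type checker before it can ever be evaluated.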
From drl at cs.cmu.edu Fri Jan 16 12:23:19 2009 From: drl at cs.cmu.edu (Dan Licata) Date: Fri, 16 Jan 2009 12:23:19 -0500 Subject: [TYPES] First mechanized proofs of type soundness of delimited control In-Reply-To: <20090116094022.6F043AA92@Adric.metnet.fnmoc.navy.mil> References: <20090116094022.6F043AA92@Adric.metnet.fnmoc.navy.mil> Message-ID: <20090116172319.GC407@cs.cmu.edu>

Hi Oleg,

The representation technique you're describing is referred to as "intrinsic encodings" on the Twelf Wiki: http://twelf.plparty.org/wiki/Intrinsic_encoding

The idea behind the terminology is that "intrinsic encodings" bake invariants (e.g. well-typedness, valuehood) in using dependent types, whereas "extrinsic encodings" use separate, after-the-fact judgements to identify those raw objects that satisfy an invariant.

You can find a simple example of an evaluator on intrinsically typed syntax (whose type guarantees type preservation, and whose totality Twelf proves automatically, as in your development) in the code for "Class 1" of our Oregon Summer School course: http://twelf.plparty.org/wiki/Summer_school_2008 and see the accompanying slides for some discussion.

One question: could you elaborate on what you mean by "tagless" in this context?

-Dan

> We have also mechanized type soundness in Twelf, using an approach
> that is different from known to us ways of mechanizing meta-theory in
> Twelf. [...]

From tom7 at cs.cmu.edu Fri Jan 16 14:03:41 2009 From: tom7 at cs.cmu.edu (Tom Murphy) Date: Fri, 16 Jan 2009 14:03:41 -0500 (EST) Subject: [TYPES] First mechanized proofs of type soundness of delimited control Message-ID:

This way of giving the semantics in Twelf isn't new (though it's a lovely way to do it, in my opinion). I think it has been independently discovered a couple of times. I use it in my thesis, for example (http://tom7.org/papers/modal-types-for-mobile-code.pdf, section 4.4.1) and in some code on the Twelf wiki. It does scale up to pretty big stuff, in my case some typed compilation transformations like CPS and closure conversion.

I wanted to point out two ways you can take the idea further:

1. If you syntactically distinguish expressions and values, then the proofs become even more beautiful. This is basically just a matter of taking a judgment like is-value and internalizing it syntactically, just as you've taken a standard well-typedness judgment and internalized it into the LF types of your expression language.
(For a call-by-value language you have

  lam : (val A -> exp B) -> val (A => B)

and an inclusion of values as expressions like

  value : val A -> exp A

and everything follows from there.) You can then see that big-step evaluation always results in a value, and you get the preservation theorem for free without any additional conditions. It just looks like:

  eval : exp A -> val A -> type.
  %mode eval +E -V.

2. One trouble with this way of doing it (really the trouble is getting spoiled because everything else is so automatic) is that if your big-step evaluation relation does not always terminate, then you cannot use stock Twelf to automatically prove a kind of progress theorem. Lots of real-sized languages are non-terminating, so that's too bad. In my thesis work I modified Twelf to add a %partial keyword. It is the same as %total but it suppresses the termination check. (This differs from %covers in that it performs output coverage checking, output freeness checking, and ensures that if the relation appeals to others, then those others are either total or partial.) With it, what you're checking is that the evaluation relation, cast as a logic program, will always be able to make progress using Twelf's operational semantics (which itself has progress and preservation properties). This is a nice way to do the argument that would be coinductive on paper, although its correspondence with a paper argument is not at all simple, formally speaking.

 - Tom

> We have also mechanized type soundness in Twelf, using an approach
> that is different from known to us ways of mechanizing meta-theory in
> Twelf. [...]

-- [ NEW! : http://tom7.org/ ] [ OLD!
: http://fonts.tom7.com/ ]

From bcpierce at cis.upenn.edu Fri Jan 23 13:07:59 2009 From: bcpierce at cis.upenn.edu (Benjamin Pierce) Date: Fri, 23 Jan 2009 13:07:59 -0500 Subject: [TYPES] Who needs a textbook for teaching PL using Coq? Message-ID: <650B27CB-3908-48FA-937D-8BA95AAEDBB6@cis.upenn.edu>

For the past couple of years, I've been working on developing course materials for teaching basic theory of programming languages using a proof assistant. Last year I taught a semester-long course to a mixture of undergraduates, Masters, and beginning PhD students, covering elements of functional programming, constructive logic, theorem proving, operational semantics, and types, with 100% of the lectures and homeworks fully mechanized in Coq. I found this approach worked amazingly well: the interactivity enabled by Coq was extremely motivating for students, and both the stronger *and* the weaker students performed better on exams than in previous years where I'd delivered similar material with blackboard, paper, and pencil.

Encouraged by this success, I am working this semester on a second revision of the material. Since I'm producing reasonably complete lecture notes, I've naturally begun to wonder whether it would be worth putting in the extra work required to turn these notes into a proper book. But since this work is inevitably going to be significant, I'd like to try to get an idea how many people out there might actually use such a book.

So... If this sounds like a course that you would teach if a textbook for it existed, could you please drop me a note? I'd be interested in how many (and what level) students you think would take such a course at your institution, how often you'd think of offering it, whether you already teach a course covering related material, and what book you use now for this course.
I'd also be interested in hearing from people who would want to use such a book for self-study, or from anyone who has any thoughts at all about what such a book should be like.

Many thanks!

 - Benjamin

P.S. Here is a little write-up on my experiences with the course last year: http://www.cis.upenn.edu/~bcpierce/papers/plcurriculum.pdf

Also, in case people are interested, here are the complete Fall 2007 version and the ongoing current instance of the course: http://www.seas.upenn.edu/~cis500/cis500-s09/index.html http://www.seas.upenn.edu/~cis500/cis500-f07/index.html

The coverage of material this time will be similar to last year, except that I'll drop the untyped lambda-calculus in favor of a module on simple while-programs, including a little Hoare logic.

From Andrew.Pitts at cl.cam.ac.uk Tue Mar 17 08:28:33 2009 From: Andrew.Pitts at cl.cam.ac.uk (Andrew Pitts) Date: Tue, 17 Mar 2009 12:28:33 +0000 Subject: [TYPES] history of typing contexts Message-ID:

When formulating a type theory there are (at least) two different approaches to how variables are typed:

- variables come tagged with an explicit type

- variables are assigned types in a typing context that forms part of the typing judgement

I believe that the first approach can be traced back to Church's original 1940 paper on the Simple Theory of Types. Can anyone tell me about the origins of the second approach?

Andy Pitts

From beta.ziliani at gmail.com Tue Mar 17 10:35:37 2009 From: beta.ziliani at gmail.com (Beta Ziliani) Date: Tue, 17 Mar 2009 11:35:37 -0300 Subject: [TYPES] history of typing contexts In-Reply-To: References: Message-ID:

The second approach is based on the work of Curry in Combinatory Logic. I think it's in Curry [1958]: Combinatory Logic Vol. I (North-Holland, Amsterdam), but I'm not completely sure.
Best,
Beta

On Tue, Mar 17, 2009 at 9:28 AM, Andrew Pitts wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> When formulating a type theory there are (at least) two different
> approaches to how variables are typed: [...]

From jonathan.seldin at uleth.ca Tue Mar 17 12:15:01 2009 From: jonathan.seldin at uleth.ca (Jonathan Seldin) Date: Tue, 17 Mar 2009 10:15:01 -0600 Subject: [TYPES] (no subject) Message-ID: <2E60F833-FA08-4646-AE4F-8946DF4D27DE@uleth.ca>

It is in Curry's theory of functionality, which is presented in chapters 9 and 10 of Combinatory Logic, volume I, and it is also discussed in Combinatory Logic, volume II (North-Holland, Amsterdam, 1972), Chapter 14. There are references to earlier works on the theory in volume I.

On 17-Mar-09, at 8:35 AM, Beta Ziliani wrote:
> The second approach is based on the work of Curry in Combinatory Logic.
> I think it's in Curry [1958]: Combinatory Logic Vol. I (North-Holland,
> Amsterdam), but I'm not completely sure. [...]

Jonathan P. Seldin tel: 403-329-2364 Department of Mathematics and Computer Science jonathan.seldin at uleth.ca University of Lethbridge seldin at cs.uleth.ca 4401 University Drive http://www.cs.uleth.ca/~seldin/ Lethbridge, Alberta, Canada, T1K 3M4

From alain.lecomte at univ-paris8.fr Wed Mar 18 11:08:26 2009 From: alain.lecomte at univ-paris8.fr (alain.lecomte@univ-paris8.fr) Date: Wed, 18 Mar 2009 16:08:26 +0100 (MET) Subject: [TYPES] Colloquium on Games, Dialogue and Interaction Message-ID:

Could you please circulate this call for papers among your colleagues? Thanks. [* apologies for multiple postings]

CALL FOR PAPERS

Workshop on Games, Dialogue and Interaction
http://anr-prelude.fr
University of Paris VIII
September 28/29, 2009

BACKGROUND

This workshop is to be held in the frame of the PRELUDE project (funded by the French ANR, Agence Nationale pour la Recherche), which is a joint project of four research teams: laboratoire "Structures Formelles de la Langue" (Paris VIII and CNRS), Institut de mathématiques de Luminy (Aix-Marseille and CNRS), Laboratoire d'informatique de Bordeaux (équipe Signes) (Bordeaux I, CNRS and INRIA) and Laboratoire lorrain de recherches en informatique et applications (LORIA, Nancy). It will be hosted by Paris VIII University in its premises located in Paris, 59 rue Pouchet (17e).

AIMS AND SCOPE

"PRELUDE" stands for "Towards a Theoretical Pragmatics based on Ludics and Continuations"; that is to say, a particular emphasis has been put during all these years on the use of new formal tools coming from Theoretical Computer Science and Logic in order to give new formulations for language phenomena pertaining to formal semantics (logical forms), pragmatics (dialogue, presupposition) and even argumentation (fallacies and stratagems).
As indicated by its title, this workshop will be devoted to all aspects of the formal study of dialogue and interaction: the syntax of dialogue, and its semantic, pragmatic and philosophical aspects.

INVITED SPEAKERS

Jean-Yves Girard (CNRS, Institut de Mathématiques de Luminy)
Ruth Kempson (King's College, London)
Sandra Laugier (Department of Philosophy, Université de Picardie)
Mathieu Marion (Department of Philosophy, Université du Québec à Montréal)
Dale Miller (INRIA Saclay, Île-de-France, and École Polytechnique)
Ahti-Veikko Pietarinen (Department of Philosophy, Helsinki)

SUBMISSION DETAILS

We invite submissions of extended abstracts of at most 8 pages (A4 or letter, 12pt, single-spaced). The material must be original and previously unpublished. Parallel submission to other conferences is possible but must be indicated with the submission. The material that is covered in the abstract must be appropriate in length and content to be presented in a 30-minute talk. Submissions must be made before *May 15* and sent to one of the following addresses: Alain.Lecomte at univ-paris8.fr, quatrini at iml.univ-mrs.fr

IMPORTANT DATES

- Conference: 28/29 September
- Submission: 15 May
- Notification: 10 June
- Final version: 15 July

REGISTRATION FEES:

- 100 euros, covering lunches and proceedings (students: 50 euros)

LOCAL ORGANIZERS:

Alain Lecomte, Laurent Roussarie, Maya Hickman (Paris VIII)

PROGRAM COMMITTEE

Michele Abrusci (Roma III)
Nicholas Asher (Toulouse & Austin)
Francis Corblin (Paris IV)
Marie-Renée Fleury (Aix-Marseille II)
Christophe Fouqueré (Paris XIII)
Jean-Baptiste Joinet (Paris I)
Laurent Keiff (Lille III)
Ruth Kempson (King's College, London)
Alain Lecomte (Paris VIII)
Pierre Livet (Aix-Marseille)
Giuseppe Longo (ENS, Paris)
Mathieu Marion (UQAM, Montréal)
Dale Miller (Polytechnique, Paris)
Frédéric Nef (EHESS, Paris)
Ahti-Veikko Pietarinen (Helsinki)
Myriam Quatrini (Aix-Marseille II)
Shahid Rahman (Lille III)
Christian Retoré
(Bordeaux I)
Sylviane Schwer (Paris XIII)
Jean-Jacques Szczeciniarz (Paris VII)

----------------------------------------------------------------------
Pr. Alain Lecomte
Alain.Lecomte at univ-paris8.fr
Université Paris 8, tel. 06 63 09 67 64
UMR 7023 "Structures Formelles de la Langue"
Bâtiment D, pièces 323-327
2, rue de la Liberté, Saint-Denis
-------------------------------------------------------------------------

From ayala at unb.br Fri Mar 27 07:37:04 2009 From: ayala at unb.br (Mauricio Ayala-Rincón) Date: Fri, 27 Mar 2009 08:37:04 -0300 Subject: [TYPES] Third LSFA CFPs References: <2E60F833-FA08-4646-AE4F-8946DF4D27DE@uleth.ca> Message-ID: <32ACE713458A42F3BA134B38F2C10363@AYALANOTEBOOK>

Fourth Workshop on Logical and Semantic Frameworks, with Applications
28th June, 2009 - Brasília, Brazil
>>> PART of RDP 2009 <<<

Third Call for Papers

Scope

Logical and semantic frameworks are formal languages used to represent logics, languages and systems. These frameworks provide foundations for the formal specification of systems and programming languages, supporting tool development and reasoning. The objective of this one-day workshop is to bring together theoreticians and practitioners, to promote new techniques and results from the theoretical side, and feedback on the implementation and use of such techniques and results from the practical side.

Topics of interest to this forum include, but are not limited to:

Logical frameworks
Proof theory
Type theory
Automated deduction
Semantic frameworks
Specification languages and meta-languages
Formal semantics of languages and systems
Computational and logical properties of semantic frameworks
Implementation of logical and/or semantic frameworks
Applications of logical and/or semantic frameworks

LSFA'09 also aims to be a forum for presenting and discussing work in progress, and therefore to provide feedback to authors on their preliminary research. Submissions to the workshop will be in the form of full papers.
The proceedings are produced only after the meeting, so that authors can incorporate this feedback in the published papers.

Invited Speakers

There will be three invited talks:

Delia Kesner (PPS, CNRS and Université Paris-Diderot)
Title: Untyped Pattern Calculi

Jonathan Seldin (University of Lethbridge, Canada)
Title: A Reduction in Combinatory Logic Equivalent to LambdaBeta-Reduction

Luis Fariñas del Cerro (Université Paul Sabatier -- Toulouse III, France)
Title: Deduction and Abduction of Information about an Entity

Program Committee

Mauricio Ayala-Rincón (University of Brasília, Brazil), co-chair
Fairouz Kamareddine (Heriot-Watt University, Edinburgh, UK), co-chair
Serge Autexier (DFKI, Bremen, Germany)
Benjamin Rene Callejas Bedregal (Federal University of Rio Grande do Norte, Brazil)
Mario Benevides (Federal University of Rio de Janeiro, Brazil)
Guilherme Bittencourt (Federal University of Santa Catarina, Brazil)
Christiano Braga (Fluminense Federal University, Brazil)
Andreas Brunner (Federal University of Bahia, Brazil)
Marcelo Coniglio (State University of Campinas, Brazil)
Clare Dixon (University of Liverpool, UK)
Gilles Dowek (École Polytechnique, France)
William Farmer (McMaster University, Hamilton, Ontario, Canada)
Marcelo Finger (University of São Paulo, Brazil)
Edward Hermann Haeusler (PUC Rio, Brazil)
Tudor Jebelean (RISC, Johannes Kepler University, Austria)
Manfred Kerber (The University of Birmingham, UK)
Luis C. Lamb (Federal University of Rio Grande do Sul, Brazil)
Daniel Leivant (Indiana University, USA)
João Marcos (Federal University of Rio Grande do Norte, Brazil)
Ana Teresa de Castro Martins (Federal University of Ceará, Brazil)
Dale Miller (INRIA, France)
Flávio Leonardo Cavalcanti de Moura (University of Brasília, Brazil)
Luca Paolini (Università
di Torino, Italy)
Alberto Pardo (Universidad de la República, Uruguay)
Elaine Pimentel (Federal University of Minas Gerais, Brazil)
Ruy de Queiroz (Federal University of Pernambuco, Brazil)
Simona Ronchi della Rocca (Università di Torino, Italy)
Amr Sabry (Indiana University, USA)
Christian Urban (TUM, Germany)
Freek Wiedijk (Radboud Universiteit, The Netherlands)

Organizing Committee

Cláudia Nalon (University of Brasília, Brazil), Local Chair
Elaine Pimentel (Federal University of Minas Gerais, Brazil)
Edward Hermann Haeusler (PUC Rio, Brazil)
Mauricio Ayala-Rincón (University of Brasília, Brazil)
Guilherme Albuquerque Pinto (University of Brasília, Brazil)

Dates and Submission

Paper submission: 3rd April, 2009
Author notification: 15th May, 2009
Camera ready: 31st May, 2009

Contributions should be written in English and submitted in the form of full papers of at most 16 pages. They must be unpublished and not submitted simultaneously for publication elsewhere. The submission should be in the form of a PDF file uploaded to the LSFA'09 page at EasyChair by the submission deadline, midnight, Central European Standard Time (GMT+1). The papers should be prepared in LaTeX using the Elsevier ENTCS style. Please see the Instructions for Preparing Files for Preliminary Versions for styles and examples. The file entcs.cls is also available there; the prentcsmacro.sty file will be available soon.

The workshop pre-proceedings, containing the reviewed papers, will be handed out at workshop registration, and the proceedings will be published as a volume of ENTCS. After the workshop, according to the quantity and quality of selected papers, the authors will be invited to submit full versions of their works, which will also be reviewed to high standards. A special issue of LSFA'06 appeared in the Journal of Algorithms, and a special issue of LSFA'07 is currently being processed and will appear in the Logical Journal of the IGPL.
At least one of the authors should register at the conference. The paper presentation should be in English.

Contact Information

For more information please contact the organizers. The web page of the event can be reached at: http://lsfa09.cic.unb.br

From stefan at cs.uu.nl Fri Apr 3 15:51:39 2009 From: stefan at cs.uu.nl (Stefan Holdermans) Date: Fri, 3 Apr 2009 21:51:39 +0200 Subject: [TYPES] Inductive definition of order Message-ID:

L.S.,

In the context of higher-order functions, I remember seeing "order" defined by induction over the types of functions. More precisely, let τ range over types with

  τ ::= base | τ1 → τ2 .

Then:

  order(base) = 0
  order(τ1 → τ2) = max {order(τ1) + 1, order(τ2)} .

So far, so good. But let us now add type variables α and universal quantification:

  τ ::= ... | α | ∀α. τ .

If we are to extend the inductive definition of order accordingly, then

  order(∀α. τ) = order(τ)

seems reasonable. But what about order(α)? Should we choose

  order(α) = 0 ?

Then, curiously, an order-k1 type can be instantiated to an order-k2 type with k1 < k2: for instance, ∀α. α → α has order 1, but instantiating α with base → base yields (base → base) → (base → base), which has order 2. Are there any scenarios in which this can be problematic? Is zero a "fair" order for values of variable type? Does it even make sense to talk about the order of a type scheme rather than a type?

I am just curious... Any references to similar definitions of order?

Thanks in advance,

Stefan

From ko at daimi.au.dk Tue Apr 7 08:38:17 2009 From: ko at daimi.au.dk (Klaus Ostermann) Date: Tue, 07 Apr 2009 14:38:17 +0200 Subject: [TYPES] HOAS and Denotational Semantics Message-ID: <49DB4939.1080902@daimi.au.dk>

Dear list members,

I wonder whether there is any work on denotational semantics based on a higher-order abstract syntax representation of the terms. To maintain compositionality, it seems to me that a HOAS term of type "Term -> Term" has to be given a denotation of type "Value -> Value".
There are some papers that deal with the problem of how to write "folds" on HOAS terms, such as the "Boxes Go Bananas" paper by Washburn and Weirich, but none of the works I have seen specifically addresses the problem of defining a denotational semantics.

Surprisingly, the original HOAS paper by Pfenning and Elliott touches on the issue of denotational semantics and claims that it "significantly simplifies denotational semantics specifications", but that paper does not say anything about the problem of how to compute "Value -> Value" from "Term -> Term".

Hence my question is whether there are any works that elaborate on this issue. Any pointers will be appreciated.

Thanks in advance,

Klaus

From adamc at hcoop.net Tue Apr 7 09:24:49 2009 From: adamc at hcoop.net (Adam Chlipala) Date: Tue, 07 Apr 2009 09:24:49 -0400 Subject: [TYPES] HOAS and Denotational Semantics In-Reply-To: <49DB4939.1080902@daimi.au.dk> References: <49DB4939.1080902@daimi.au.dk> Message-ID: <49DB5421.9080404@hcoop.net>

Klaus Ostermann wrote:
> I wonder whether there is any work on denotational semantics based on a
> higher-order abstract syntax representation of the terms.

I wrote a paper on applying the "Boxes Go Bananas" style of HOAS in Coq to give (non-Turing-complete) languages denotational semantics and verify program transformations: http://adam.chlipala.net/papers/PhoasICFP08/

From brianh at metamilk.com Wed Apr 8 23:11:46 2009 From: brianh at metamilk.com (Brian Hulley) Date: Thu, 09 Apr 2009 04:11:46 +0100 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML Message-ID: <49DD6772.4080503@metamilk.com>

Hi,

Although there is some literature on the issue of integrating Haskell-style typeclasses into modular languages such as ML, it appears to me that several important issues remain unresolved.
For example, in "Modular Type Classes" [1], the authors model some aspects of typeclasses by a stylised use of the ML module system, translating a Haskell class declaration into an ML signature with a distinguished type component (t) and an instance declaration into an ML structure or functor, and provide a facility to explicitly declare which structures/functors are to be regarded as "canonical" in any specific context.

While this is certainly a useful contribution, there are some aspects of the Haskell typeclass system that do not appear to be amenable to this treatment. Specifically, their treatment seems (to me) to be applicable only to those uses of typeclasses where the overloading can be statically resolved at compile time, rather than the more general case where one wants restricted parametric polymorphism, perhaps in conjunction with polymorphic recursion and/or existential quantification. The general case involves dynamic generation of (or dispatch from) "dictionary arguments" at runtime, and therefore I do not see how functor application, which afaict is a strictly compile-time activity, can be used to model it.

Moreover, their solution to the coherence problem in the face of multiple named instances does not appear to deal with the issue of partial application. This is because they seem to assume that a polymorphic function can be translated into a suitable functor application that returns a structure containing a monomorphic value. That is indeed suggested by the syntax of qualified types in Haskell, but it is confounded by some experimentation with ghci, which shows that dictionary arguments are actually interspersed with "core" arguments so as to maximize polymorphism in the face of partial application:

$ ghci -fglasgow-exts -fno-monomorphism-restriction
GHCi, version 6.8.2: http://www.haskell.org/ghc/  :? for help
Loading package base ... linking ... done.
Prelude> let f x y = show y
Prelude> :t f
f :: forall t a. (Show a) => t -> a -> String
Prelude> let g = f 'c'
Prelude> :t g
g :: forall a. (Show a) => a -> String
Prelude> (g 1, g "hi")
("1","\"hi\"")
Prelude>

The salient point in the above example is that the type of (f) in reality is something like:

  Pi (t : *) . Pi (x : t) . Pi (a : * | Show a) . Pi (y : a) . String

so that the dictionary is passed only at the point where we choose how to instantiate the type variable (a). Since a partially applied function could be passed as a polymorphic value to be used anywhere in the program, it seems unlikely that there is any hope of solving the coherence problem with lexical scoping. (Aside: two of the uses of Pi above would usually just be written as arrows, but I find it easier to understand types if everything is written uniformly in quantifier notation - is there a better quantifier than Pi for this case?)

Of course I am not trying to criticise the above paper in any way, and many of these issues would not arise in an ML setting at the moment, due to the constraints imposed on the type system by the traditional desire to continue to support top-level HM type inference. It is just that when one is trying to design a new language, with different tradeoffs (e.g. discarding type inference in favour of first-class polymorphism and automatic type-driven namespace management), one finds that the existing literature does not yet provide all the answers regarding the integration of typeclasses with a powerful module system.

Another issue that is problematic concerns the question of the identity of a type, for example:

  signature S =
    sig
      type t
      val x : t -> int
    end

  structure M =
    struct
      type t = string
      val x = String.toInt
    end

  structure N = M :> S

In the above, is N.t the same type as M.t or not (as far as any instance declarations inside M / outside M are concerned)? That is, if one defines an instance of some typeclass for M.t, would N.t also be a member of the class via this instance, and if so, would this not imply that abstraction has been broken?
The problem does not arise in the context of Haskell, since Haskell does not have any notion of abstract type at all: all types are completely concrete and the only abstraction facility is to hide the data constructor(s) so that no inhabitants of the type can be constructed elsewhere in the program. I have not been able to find any papers that address the above issue, since they all seem to just use a pervasive type like (int) or tuples/lists thereof to illustrate their encoding of typeclasses. The issue concerns the meaning of "a posteriori" abstraction in ML, and the literature seems to be divided between merely hiding the relationship between an abstract type and its implementing type, or actually really generating a new distinct type (for the latter see [2]). Where does the notion of instance definition for "*the* type t" fit into this question? The question *may* be addressed in "ML mit Typklassen" [3] but unfortunately, although the examples are in English, the text is in German and I think I would need more than just a pocket dictionary to try to follow the arguments therein. Does anyone know of an English translation, or, if the issue is addressed there, could anyone point me to the relevant section, since with enough time I could probably translate a few paragraphs. Apologies for the length of this post. Any pointers to literature or corrections to my no doubt incomplete understanding would be much appreciated. Thanks, Brian. [1] "Modular Type Classes" Derek Dreyer, Robert Harper, Manuel M.T. Chakravarty, Gabriele Keller. 2006/10/26 [2] "Generativity and Dynamic Opacity for Abstract Types (Extended Version)" Andreas Rossberg. 2003 [3] "ML mit Typklassen" Gerhard Schneider. 4th June 2000.
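[Editorial aside: Brian's GHCi observation above can be made concrete by hand-rolling the dictionary passing that GHC performs behind the scenes. This is only an illustrative sketch — ShowDict and all names below are invented for exposition, not GHC's actual Core output. It shows why the partial application f 'c' stays polymorphic: the dictionary argument sits after the first "core" argument, so it is supplied only when the type variable a is finally instantiated.]

```haskell
-- Hand-written analogue of GHC's dictionary translation (illustrative only).
newtype ShowDict a = ShowDict { showD :: a -> String }

-- Source:  f x y = show y
-- Type:    f :: forall t a. Show a => t -> a -> String
-- The dictionary for 'Show a' is interspersed between the two value
-- arguments: 't' is fixed by the first argument, 'a' only by the second.
f :: t -> ShowDict a -> a -> String
f _x d y = showD d y

-- g = f 'c' is still polymorphic in 'a': it merely awaits a dictionary.
g :: ShowDict a -> a -> String
g = f 'c'

main :: IO ()
main = do
  putStrLn (g (ShowDict show) (1 :: Int))  -- prints 1
  putStrLn (g (ShowDict show) "hi")        -- prints "hi" (quotes included)
```

Because the dictionary is an ordinary argument in this encoding, it also makes the coherence worry visible: whoever eventually applies g chooses which dictionary to pass.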
From dreyer at mpi-sws.org Thu Apr 9 12:15:09 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Thu, 9 Apr 2009 18:15:09 +0200 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML In-Reply-To: <7fa251b70904090908r651d535fxfa5bf8420db1876@mail.gmail.com> References: <49DD6772.4080503@metamilk.com> <7fa251b70904090908r651d535fxfa5bf8420db1876@mail.gmail.com> Message-ID: <7fa251b70904090915v795ffd66h6ecdf056a6433aff@mail.gmail.com> Dear Brian, Thanks for reading our POPL07 paper on Modular Type Classes, and asking some good questions! The short answer is: Most of the potential problems that you mentioned do not actually arise in our MTC type system. The one problem that does arise has nothing to do with type classes and everything to do with the "value restriction", which is orthogonal. The long answer is:

> Although there is some literature on the issue of integrating
> Haskell-style typeclasses into modular languages such as ML, it appears
> to me that several important issues remain unresolved.

Our MTC paper is certainly not the final word on the subject (in particular, we do not attempt to support constructor classes, multi-parameter classes, recursive classes, etc.). However, the important issues that you are concerned about, to the extent that they have to do with type classes, are resolved quite satisfactorily by our type system.

> While this is certainly a useful contribution, there are some aspects of
> the Haskell typeclass system that do not appear to be amenable to this
> treatment.
Specifically, their treatment seems (to me) to be only
> applicable to those uses of typeclasses where the overloading can be
> statically resolved at compile time, rather than the more general case
> where one wants to have restricted parametric polymorphism, perhaps in
> conjunction with polymorphic recursion and/or existential
> quantification, which involves dynamic generation of (or dispatch from)
> "dictionary arguments" at runtime, and therefore I do not see how
> functor application, which afaict is a strictly compile time activity,
> can be used to model this.

Functor application is *not* a strictly compile-time activity. I believe that in Standard ML there are certain restrictions on where module bindings can occur, but in most dialects of ML, functor applications may appear inside core-language expressions, and their arguments may refer to "dynamic" values, e.g., values passed as arguments to a core-language function, or extracted from a ref cell or from an existential (for those few dialects that support first-class modules via existentials). Certainly in the MTC type system given in (the extended version of) the paper, there is no static/dynamic distinction between the module and core levels.

> Moreover their solution to the coherence problem in the face of multiple
> named instances does not appear to deal with the issue of partial
> application. This is because they seem to assume that a polymorphic
> function can be translated into a suitable functor application that
> returns a structure containing a monomorphic value, which is indeed
> suggested by the syntax of qualified types in Haskell, but is confounded
> by some experimentation with ghci, which shows that dictionary arguments
> are actually interspersed with "core" arguments so as to maximize
> polymorphism in the face of partial application:
>
>    $ ghci -fglasgow-exts -fno-monomorphism-restriction
>    GHCi, version 6.8.2: http://www.haskell.org/ghc/  :? for help
>    Loading package base ... linking ... done.
>    Prelude> let f x y = show y
>    Prelude> :t f
>    f :: forall t a. (Show a) => t -> a -> String
>    Prelude> let g = f 'c'
>    Prelude> :t g
>    g :: forall a. (Show a) => a -> String
>    Prelude> (g 1, g "hi")
>    ("1","\"hi\"")
>    Prelude>

There is a problem here, but it's not the one you mention: rather, it's the so-called "value restriction". First, AFAICS, there is nothing mysterious about why GHC accepts the above code. When f is applied to 'c', the variables t and a are instantiated with fresh unification variables, t is unified with Char, a is not unified with anything, and thus, at the binding to g, a is subsequently generalized. (That's the Algorithm W explanation, but one could give an equivalent explanation in terms of the declarative Hindley-Milner-style type system.)

The reason this code would not typecheck in MTC has nothing to do with type classes and everything to do with the value restriction. Quite simply, f('c') is not a value, and due to the value restriction, only values (or things that are known to be "valuable") may have their types generalized. Like ML, our MTC type system is pretty conservative about what is considered valuable: in particular, core-language function applications (other than applications of data constructors) are considered non-valuable. The problem could be fixed in this instance by eta-expanding the definition of g to:

   let g y = f 'c' y

Eta-expanding bindings is the standard (often admittedly unsatisfactory) approach to working around the value restriction. It is worth noting, however, that the MTC proposal is not intrinsically tied to an ML-like language with the value restriction. One can imagine employing a very similar proposal for a Haskell-like language with a monadic separation of effects and no value restriction.
(The main reason we started with ML instead of Haskell is that the point of MTC is to show how to encode Haskell-style type classes in terms of ML-style modules; ML has ML-style modules, while Haskell does not.)

> The salient point in the above example is that the type of (f) in
> reality is something like:
>
>    Pi (t : *) . Pi (x : t) . Pi (a : * | Show a) . Pi (y : a) . String
>
> so that the dictionary is passed only at the point where we choose how
> to instantiate the type variable (a).

I don't really understand your point here. In MTC, there is great flexibility in terms of exactly which type is inferred as the principal polymorphic type (or, rather, as the principal implicit functor signature) of an expression. For example, your function f could be given the "type"

   Forall (X : sig type u; structure S : SHOW end). [X.u -> X.S.t -> String]

or

   Forall (X : sig type shmoo; type oyvey; structure S : SHOW where type t = oyvey end). [X.shmoo -> X.oyvey -> String]

These are both implicitly interconvertible types for f. Given either of these types, when we typecheck the binding let g y = f 'c' y, what will happen is that the occurrence of f in the body of g will implicitly instantiate the type components of f's module argument X with fresh unification variables, and introduce constraints corresponding to the dictionary modules in the signature of X. Afterwards, things proceed in much the same way as in Haskell typechecking. The only difference is that at the end, instead of generalizing the type of g over a bunch of type variables with some constraints, our type system packages those things together into a signature, e.g.

   g : Forall (X : SHOW). [X.t -> String]

> (Aside: two uses of Pi above would usually just be written as arrows but
> I find it easier to understand types if everything is written uniformly
> in quantifier notation - is there a better quantifier instead of Pi for
> this case?)

We wouldn't write things out in the Pi notation you used.
I think our Forall notation (see above) is pretty uniform. That said, if one wanted to allow Haskell-style notation for polymorphic types, it would be totally straightforward to encode it as syntactic sugar.

> Of course I am not trying to criticise the above paper in any way, and
> many of these issues would not arise in an ML setting at the moment due
> to the constraints imposed on the type system by the traditional desire
> to continue to support top level HM type inference. It is just that when
> one is trying to design a new language, with different tradeoffs (e.g.
> discarding TI in favour of first class polymorphism and automatic
> type-driven namespace management), one finds that the existing
> literature does not yet provide all the answers regarding the
> integration of typeclasses with a powerful module system.

Criticizing other people's papers (rightly or wrongly) is the essence of scientific inquiry. No worries! That said, I'm not sure what you're getting at here.

> Another issue that is problematic concerns the question of identity of a
> type, for example:
>
>    signature S = sig
>       type t
>       val x : t -> int
>    end
>
>    structure M = struct
>       type t = string
>       val x = String.toInt
>    end
>
>    structure N = M :> S
>
> In the above, is N.t the same type as M.t or not (as far as any instance
> declarations inside M / outside M are concerned)? i.e. if one defines an
> instance of some typeclass for M.t would N.t also be a member of the
> class via this instance, and if so, would this not imply that
> abstraction has been broken?

No. Here, M.t and N.t are distinct types, so after the declarations of M and N, an instance of a class C for type M.t would not conflict with an instance of that class for N.t. Of course, *inside* M or N, N.t doesn't even exist yet, so I'm not sure I entirely comprehend the first part of your question.
For the remainder of your questions (concerning citations [2] and [3]), I will defer to Andreas Rossberg, who will post a separate reply. Best regards, Derek From brianh at metamilk.com Thu Apr 9 19:02:32 2009 From: brianh at metamilk.com (Brian Hulley) Date: Fri, 10 Apr 2009 00:02:32 +0100 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML In-Reply-To: <7fa251b70904090915v795ffd66h6ecdf056a6433aff@mail.gmail.com> References: <49DD6772.4080503@metamilk.com> <7fa251b70904090908r651d535fxfa5bf8420db1876@mail.gmail.com> <7fa251b70904090915v795ffd66h6ecdf056a6433aff@mail.gmail.com> Message-ID: <49DE7E88.4000602@metamilk.com> Dear Derek, Thanks for taking the time to write such a detailed reply, and especially for the explanation of partial application in terms of full instantiation to unification variables followed (modulo value restriction) by re-generalization, which shows that MTC is indeed much more applicable (no pun intended) than I had thought. (This was also pointed out to me off-list by another kind person.) I'm still not sure about the treatment of coherence. It seems that the purpose of the using declarations in section 3.1 is to prevent ambiguity arising via type inference, whereas I'd have thought that even with fully annotated types for all structure components, coherence would still be a problem, i.e. that there are very deep reasons why Haskell requires only one instance per type; however, I have not been able to think up an example to demonstrate this at the moment. This is one area where it is possible that the lack of first class polymorphism in ML (due to the value restriction, which is in turn due to the unsound generalization rule of HM type inference) may just possibly be masking an issue by ruling out the interesting cases by construction.
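[Editorial aside: Brian's coherence worry can be made concrete in the explicit-dictionary style. The sketch below is a hand-rolled illustration (OrdDict and all names are invented, not part of MTC or Haskell's class mechanism): once two rival "instances" for the same type can be in scope, the meaning of an overloaded call depends on which dictionary the use site happens to pick — which is exactly what Haskell's one-instance-per-type rule rules out.]

```haskell
-- Explicit dictionaries make incoherence expressible (illustrative only).
newtype OrdDict a = OrdDict { leq :: a -> a -> Bool }

-- Two rival "instances" for Int:
ascending, descending :: OrdDict Int
ascending  = OrdDict (<=)
descending = OrdDict (>=)

-- An overloaded operation, parameterized by the dictionary it uses.
insert :: OrdDict a -> a -> [a] -> [a]
insert _ x [] = [x]
insert d x (y:ys)
  | leq d x y = x : y : ys
  | otherwise = y : insert d x ys

main :: IO ()
main = do
  print (insert ascending  2 [1, 3])  -- [1,2,3]
  print (insert descending 2 [3, 1])  -- [3,2,1]
```

With one instance per type, only one such dictionary can ever exist for Int, so all call sites necessarily agree; with multiple named instances, the two calls above legitimately disagree.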
Regarding the Pi notation, clearly my understanding of GHCi's treatment of the types of (f) and (g) in:

   let f x y = show y
   let g = f 'c'

was wrong, but if one were to consider a language where the type of (f) is represented internally by:

   Pi (t : *) . Pi (x : t) . Pi (a : * | Show a) . Pi (y : a) . String

then the binding to (g) would have the type:

   Pi (a : * | Show a) . Pi (y : a) . String

The important thing here is that there is no full instantiation followed by re-generalization - the binding to (g) simply supplies the first 2 args of (f). There is therefore no need (afaict) in such a language for a value restriction or separation of effects via monads (this becomes an orthogonal issue) since there is no generalization rule to introduce unsoundness. In contrast, the MTC type system seems to require the HM concept of full instantiation followed (if possible) by re-generalization in order to allow an MTC polymorphic type to be formulated as a functor application yielding a structure with a monomorphic value component. In order to integrate something like MTC into my language I would have to find a different technical formulation like the Pi notation above, to avoid having to have a generalization rule. In any case there's lots of stuff to be thinking about ;-) Thanks again, Brian. From brianh at metamilk.com Thu Apr 9 20:13:19 2009 From: brianh at metamilk.com (Brian Hulley) Date: Fri, 10 Apr 2009 01:13:19 +0100 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML In-Reply-To: <06AB8D56-3E77-4563-B1F9-BE6D4F8E8C9C@mpi-sws.mpg.de> References: <49DD6772.4080503@metamilk.com> <06AB8D56-3E77-4563-B1F9-BE6D4F8E8C9C@mpi-sws.mpg.de> Message-ID: <49DE8F1F.9060101@metamilk.com> Dear Andreas, Thanks for elucidating the relevance of [2] and [3] to my questions.
If I've now understood things right, then I assume that in the example below (in the language of [3]), the sealing operation has actually done something quite interesting: not only has it created a new statically distinct type (t) but it has also created a new instance declaration (instance t C) that is implemented by the instance declaration (instance int C) inside M.

   class 'a C with ...

   structure M = struct
      type t = int
      instance int C with ...
   end

   structure N = M :> sig
      type t
      instance t C
   end

Perhaps I shouldn't really find this interesting but somehow an instance declaration seems like quite a different thing from a simple val or type binding and hence the above does seem slightly mystifying. However I agree that the MTC paper seems like the more elegant approach so I'll leave the subtle clouds of mystery in the above to float away... ;-) Best regards, Brian. [2] "Generativity and Dynamic Opacity for Abstract Types (Extended Version)" Andreas Rossberg. 2003 [3] "ML mit Typklassen" Gerhard Schneider. 4th June 2000. From dreyer at mpi-sws.org Thu Apr 9 20:37:23 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Fri, 10 Apr 2009 02:37:23 +0200 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML In-Reply-To: <49DE8F1F.9060101@metamilk.com> References: <49DD6772.4080503@metamilk.com> <06AB8D56-3E77-4563-B1F9-BE6D4F8E8C9C@mpi-sws.mpg.de> <49DE8F1F.9060101@metamilk.com> Message-ID: <7fa251b70904091737k1ddb6debmc33316f61ef3b5ef@mail.gmail.com>

> Thanks for elucidating the relevance of [2] and [3] to my questions.
> If I've now understood things right, then I assume that in the example
> below (in the language of [3]), the sealing operation has actually done
> something quite interesting: not only has it created a new statically
> distinct type (t) but it has also created a new instance declaration
> (instance t C) that is implemented by the instance declaration
> (instance int C) inside M.
>
>    class 'a C with ...
>
>    structure M = struct
>       type t = int
>       instance int C with ...
>    end
>
>    structure N = M :> sig
>       type t
>       instance t C
>    end

Hmm...well, just to clarify: MTC does not include explicit "class" or "instance" declarations...that's the whole point. But you can certainly define a module M with a type t = int and a substructure C_Int, let's say, of signature "C where type t = int". Then, you can make M.C_Int the "canonical" instance of C at type int via a "using" declaration. Then, later on, you can define N as you did, except where "instance t C" is replaced by "structure C_t : C where type t = t", and then make N.C_t the "canonical" instance of C at type N.t (\neq int) via a "using" declaration. So in the end you have two non-overlapping canonical instances of C whose t's are represented underneath by the same representation type int. Derek From rossberg at ps.uni-sb.de Fri Apr 10 06:24:12 2009 From: rossberg at ps.uni-sb.de (rossberg@ps.uni-sb.de) Date: Fri, 10 Apr 2009 12:24:12 +0200 (CEST) Subject: [TYPES] Issues regarding typeclasses and modular languages like ML Message-ID: <50841.77.183.235.154.1239359052.squirrel@ps.uni-sb.de> [I'm reposting this, because I had sent it from the wrong address, such that it got auto-rejected.] Following up on Derek's reply... On Apr 9, 2009, at 05.11 h, Brian Hulley wrote:

> Another issue that is problematic concerns the question of identity of a
> type, for example:
>
> [...]
>
> I have not been able to find any papers that address the above issue,
> since they all seem to just use a pervasive type like (int) or tuples/
> lists thereof to illustrate their encoding of typeclasses. The issue
> concerns the meaning of "a posteriori" abstraction in ML, and the
> literature seems to be divided between merely hiding the relationship
> between an abstract type and its implementing type, or actually really
> generating a new distinct type (for the latter see [2]).
Where does the
> notion of instance definition for "*the* type t" fit into this question?

The problem discussed in that paper is orthogonal to your concern - it describes a way of distinguishing an abstract type and its representation *at runtime*, as necessary in a language that has dynamic type analysis. Whether or not you dynamically generate fresh type names is immaterial in a language without such a feature (like conventional ML or Haskell). For type classes, it only matters whether the types are distinguished *statically*. As explained by Derek, that is the case (either way).

> The question *may* be addressed in "ML mit Typklassen" [3] but
> unfortunately although the examples are in English the text is in German
> and I think I would need more than just a pocket dictionary to try to
> follow the arguments therein. Does anyone know of an English
> translation, or, if the issue is addressed there, could anyone point me
> to the relevant section since with enough time I probably could
> translate a few paragraphs.

Since I co-supervised that thesis, I can lift that mystery, too. The language presented in that thesis simply adds together ML-style modules and Haskell-style type classes without any attempt to merge them. As such, it is much less elegant than the MTC paper. With respect to abstract types, it pretty much behaves the same: you can freely define instances for an abstract type, because it is distinct from any other type.
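[Editorial aside: Andreas's point that only the *static* distinction between types matters for instance selection can be seen from the Haskell side, where the nearest analogue of ML sealing is a newtype whose constructor is kept private. A minimal sketch, with the module boundary indicated only by a comment and all names invented:]

```haskell
-- Imagine: module Sealed (T, mkT) where ...   (constructor MkT not exported)
newtype T = MkT Int   -- statically distinct from Int, same representation

mkT :: Int -> T
mkT = MkT

class C a where name :: a -> String

instance C Int where name _ = "int"
instance C T   where name _ = "t"   -- no overlap: T and Int are distinct types

main :: IO ()
main = putStrLn (name (mkT 0) ++ " / " ++ name (1 :: Int))
-- prints: t / int
```

The two instances never conflict even though T and Int share a representation, because instance selection consults only the static type.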
Best, - Andreas From rossberg at mpi-sws.org Fri Apr 10 07:57:49 2009 From: rossberg at mpi-sws.org (Andreas Rossberg) Date: Fri, 10 Apr 2009 13:57:49 +0200 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML In-Reply-To: <7fa251b70904091737k1ddb6debmc33316f61ef3b5ef@mail.gmail.com> References: <49DD6772.4080503@metamilk.com> <06AB8D56-3E77-4563-B1F9-BE6D4F8E8C9C@mpi-sws.mpg.de> <49DE8F1F.9060101@metamilk.com> <7fa251b70904091737k1ddb6debmc33316f61ef3b5ef@mail.gmail.com> Message-ID: <7FA0B36F-9286-40B1-96D0-9EEF282C0A24@mpi-sws.org> On Apr 10, 2009, at 02.37 h, Derek Dreyer wrote: > >> Thanks for elucidating the relevance of [2] and [3] to my >> questions. If I've now understood things right, then I assume that >> in the example below (in the language of [3]), the sealing >> operation has actually done something quite interesting: not only >> has it created a new statically distinct type (t) but it has also >> created a new instance declaration (instance t C) that is >> implemented by the instance declaration (instance int C) inside M. >> >> >> class 'a C with ... >> >> structure M = struct >> type t = int >> instance int C with ... >> end >> >> structure N = M :> sig >> type t >> instance t C >> end > > Hmm...well, just to clarify: MTC does not include explicit "class" or > "instance" declarations...that's the whole point. (Brian is using the syntax from Gerhard's thesis here.) Yes, what you are describing above is what's happening. There is one additional detail, though: neither instance will initially be "active" outside the structures, because they are local to those structures. You would have to open the structures, or use the TML syntax for instance replication to pull them into global scope. That is similar in effect to the "using" declarations Derek was describing. 
- Andreas From brianh at metamilk.com Fri Apr 10 16:02:38 2009 From: brianh at metamilk.com (Brian Hulley) Date: Fri, 10 Apr 2009 21:02:38 +0100 Subject: [TYPES] Issues regarding typeclasses and modular languages like ML In-Reply-To: <7FA0B36F-9286-40B1-96D0-9EEF282C0A24@mpi-sws.org> References: <49DD6772.4080503@metamilk.com> <06AB8D56-3E77-4563-B1F9-BE6D4F8E8C9C@mpi-sws.mpg.de> <49DE8F1F.9060101@metamilk.com> <7fa251b70904091737k1ddb6debmc33316f61ef3b5ef@mail.gmail.com> <7FA0B36F-9286-40B1-96D0-9EEF282C0A24@mpi-sws.org> Message-ID: <49DFA5DE.5060402@metamilk.com> Andreas Rossberg wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > On Apr 10, 2009, at 02.37 h, Derek Dreyer wrote: >>> Thanks for elucidating the relevance of [2] and [3] to my >>> questions. If I've now understood things right, then I assume that >>> in the example below (in the language of [3]), the sealing >>> operation has actually done something quite interesting: not only >>> has it created a new statically distinct type (t) but it has also >>> created a new instance declaration (instance t C) that is >>> implemented by the instance declaration (instance int C) inside M. >>> >>> >>> class 'a C with ... >>> >>> structure M = struct >>> type t = int >>> instance int C with ... >>> end >>> >>> structure N = M :> sig >>> type t >>> instance t C >>> end >> Hmm...well, just to clarify: MTC does not include explicit "class" or >> "instance" declarations...that's the whole point. > > (Brian is using the syntax from Gerhard's thesis here.) > > Yes, what you are describing above is what's happening. There is one > additional detail, though: neither instance will initially be "active" > outside the structures, because they are local to those structures. > You would have to open the structures, or use the TML syntax for > instance replication to pull them into global scope. That is similar > in effect to the "using" declarations Derek was describing. 
Thanks Andreas, that last detail makes everything crystal clear. Also, thanks Derek for the explanation showing how one would translate the above into the context of MTC. Best regards, Brian. From nr at eecs.harvard.edu Fri Apr 10 17:05:54 2009 From: nr at eecs.harvard.edu (Norman Ramsey) Date: Fri, 10 Apr 2009 17:05:54 -0400 Subject: [TYPES] seeking papers with good examples of the use of GADTs Message-ID: <20090410210554.99C1713835B@drdoom.eecs.harvard.edu> I have a class of beginning functional programmers; we're approaching end of term, and I'd like them to learn about GADTs. One of my goals in the class is to give students practice learning by reading papers, so I am asking for recommendations of papers that have good examples of GADTs in action. Papers I've used in the past have included

   Pottier and Gauthier 2005: Polymorphic Typed Defunctionalization and
   Concretization, in Higher-Order and Symbolic Computation

   Pottier and Régis-Gianas 2006: Towards Efficient, Typed LR Parsers,
   in Electr. Notes Theor. Comput. Sci

and

   Peyton Jones et al. 2006: Simple unification-based type inference
   for GADTs, in the 11th ACM SIGPLAN International Conference on
   Functional Programming.

Unfortunately the first two rely on concepts in which my students have little background (LR parsing and defunctionalization respectively), and the third, while it opens with a nice example, is primarily about the (now obsolete) type-inference algorithm, rather than about how to use GADTs. I am hoping some of you may have suggestions about other papers that would be good tutorials in the use of GADTs.
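[Editorial aside: for readers who want a self-contained taste before tackling the papers Norman lists, the canonical small GADT example (not drawn from any of the papers cited here) is a typed expression language whose evaluator needs no runtime tags:]

```haskell
{-# LANGUAGE GADTs #-}

-- The type index 'a' makes ill-typed object terms unrepresentable:
-- e.g. Add (BoolE True) (IntE 1) is rejected by the Haskell type checker.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Pattern matching refines 'a', so eval returns a plain 'a' with no tags.
eval :: Expr a -> a
eval (IntE n)    = n
eval (BoolE b)   = b
eval (Add e1 e2) = eval e1 + eval e2
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolE True) (Add (IntE 1) (IntE 2)) (IntE 0)))
-- prints 3
```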
Norman From dreyer at mpi-sws.org Fri Apr 10 22:28:44 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Sat, 11 Apr 2009 04:28:44 +0200 Subject: [TYPES] seeking papers with good examples of the use of GADTs In-Reply-To: <20090410210554.99C1713835B@drdoom.eecs.harvard.edu> References: <20090410210554.99C1713835B@drdoom.eecs.harvard.edu> Message-ID: <7fa251b70904101928g652caf5eq1bced275266e6154@mail.gmail.com> I would highly recommend Fritz Henglein's ICFP'08 paper, "Generic Discrimination: Sorting and Partitioning Unshared Data in Linear Time": http://portal.acm.org/ft_gateway.cfm?id=1411220&type=pdf&coll=GUIDE&dl=GUIDE&CFID=30629082&CFTOKEN=48962619 It provides a very elegant use of GADTs for expressing distributive sorting algorithms in a compact, composable way. Also, the motivation (sorting) is as basic as it gets, and the paper has a good mixture of theory and Haskell code to chew on. There are probably some elements of the paper that your students will have trouble with, but they are not essential ones. Derek On Fri, Apr 10, 2009 at 11:05 PM, Norman Ramsey wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> I have a class of beginning functional programmers; we're approaching
> end of term, and I'd like them to learn about GADTs. One of my goals
> in the class is to give students practice learning by reading papers,
> so I am asking for recommendations of papers that have good examples
> of GADTs in action. Papers I've used in the past have included
>
>    Pottier and Gauthier 2005: Polymorphic Typed Defunctionalization and
>    Concretization, in Higher-Order and Symbolic Computation
>
>    Pottier and Régis-Gianas 2006: Towards Efficient, Typed LR Parsers,
>    in Electr. Notes Theor. Comput. Sci
>
> and
>
>    Peyton Jones et al. 2006: Simple unification-based type inference
>    for GADTs, in the 11th ACM SIGPLAN International Conference on
>    Functional Programming.
>
> Unfortunately the first two rely on concepts in which my students have
> little background (LR parsing and defunctionalization respectively),
> and the third, while it opens with a nice example, is primarily
> about the (now obsolete) type-inference algorithm, rather than about
> how to use GADTs.
>
> I am hoping some of you may have suggestions about other papers that
> would be good tutorials in the use of GADTs.
>
> Norman

From philip.wadler at ed.ac.uk Wed Apr 15 10:36:16 2009 From: philip.wadler at ed.ac.uk (Philip Wadler) Date: Wed, 15 Apr 2009 07:36:16 -0700 Subject: [TYPES] seeking papers with good examples of the use of GADTs In-Reply-To: <7fa251b70904101928g652caf5eq1bced275266e6154@mail.gmail.com> References: <20090410210554.99C1713835B@drdoom.eecs.harvard.edu> <7fa251b70904101928g652caf5eq1bced275266e6154@mail.gmail.com> Message-ID: <7F30487A-F638-4C27-A136-93FF53BFB6BD@ed.ac.uk> Good suggestion. Henglein's ICFP 08 paper is one of my favorite papers of the last few years. It is beautifully written, and turns folk wisdom (a sort is n log n) on its head. GADTs are well used in the paper, but are not essential. An exercise to set the students is to do the same thing without GADTs, using type classes instead, providing a good example of the tradeoff between the techniques. Indeed, I suspect Fritz's work may become a killer app for type classes---sorting in linear time is important enough to get type classes or an equivalent into the next generation of PLs. (That's already happening, as the new idea of 'concepts' in C++ is related to type classes.)
Yours, -- P On 10 Apr 2009, at 19:28, Derek Dreyer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > I would highly recommend Fritz Henglein's ICFP'08 paper, "Generic > Discrimination: Sorting and Partitioning Unshared Data in Linear > Time": > > http://portal.acm.org/ft_gateway.cfm?id=1411220&type=pdf&coll=GUIDE&dl=GUIDE&CFID=30629082&CFTOKEN=48962619 > > It provides a very elegant use of GADTs for expressing distributive > sorting algorithms in a compact, composable way. Also, the motivation > (sorting) is as basic as it gets, and the paper has a good mixture of > theory and Haskell code to chew on. There are probably some elements > of the paper that your students will have trouble with, but they are > not essential ones. > > Derek > > On Fri, Apr 10, 2009 at 11:05 PM, Norman Ramsey > wrote: >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> >> I have a class of beginning functional programmers; we're approaching >> end of term, and I'd like them to learn about GADTs. One of my goals >> in the class is to give students practice learning by reading papers, >> so I am asking for recommendations of papers that have good examples >> of GADTs in action. Papers I've used in the past have included >> >> Pottier and Gauthier 2005: Polymorphic Typed Defunctionalization and >> Concretization, in Higher-Order and Symbolic Computation >> >> Pottier and R?gis-Gianas 2006: Towards Efficient, Typed LR Parsers, >> in Electr. Notes Theor. Comput. Sci >> >> and >> >> Peyton Jones et al. 2006: Simple unification-based type inference >> for GADTs, in the 11th ACM SIGPLAN International Conference on >> Functional Programming. 
\ Philip Wadler, Professor of Theoretical Computer Science /\ School of Informatics, University of Edinburgh / \ http://homepages.inf.ed.ac.uk/wadler/ -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From crary at cs.cmu.edu Wed Apr 15 10:54:02 2009 From: crary at cs.cmu.edu (Karl Crary) Date: Wed, 15 Apr 2009 10:54:02 -0400 Subject: [TYPES] seeking papers with good examples of the use of GADTs Message-ID: <49E5F50A.50905@cs.cmu.edu> Norman, My paper Flexible Type Analysis with Stephanie Weirich in ICFP 1999 has some examples using GADTs in typed intermediate languages. The paper pre-dates the term "GADT" so you won't see it used explicitly, but you won't have any trouble making the connection. -- Karl Crary From sweirich at cis.upenn.edu Tue Apr 21 16:38:43 2009 From: sweirich at cis.upenn.edu (Stephanie Weirich) Date: Tue, 21 Apr 2009 16:38:43 -0400 Subject: [TYPES] New moderator: Derek Dreyer Message-ID: <23C368ED-28BD-4478-AA2C-7C632997375B@cis.upenn.edu> It is my pleasure to announce that Derek Dreyer has taken over the job as moderator of the TYPES forum. After six years of running types-list and types-announce, it is time to pass the baton. Please join me in thanking Derek for taking on this valuable service for the community. With this handover, the list addresses will not change.
As always, send discussion to types-list at lists.seas.upenn.edu and announcements to types-announce at lists.seas.upenn.edu Derek is eagerly awaiting your submissions. Cheers, Stephanie Weirich Former TYPES forum moderator From bcpierce at cis.upenn.edu Tue Apr 21 23:31:15 2009 From: bcpierce at cis.upenn.edu (Benjamin Pierce) Date: Tue, 21 Apr 2009 23:31:15 -0400 Subject: [TYPES] [TYPES/announce] New moderator: Derek Dreyer In-Reply-To: <23C368ED-28BD-4478-AA2C-7C632997375B@cis.upenn.edu> References: <23C368ED-28BD-4478-AA2C-7C632997375B@cis.upenn.edu> Message-ID: <95F95CFA-7680-4117-8220-2980C3F65495@cis.upenn.edu> Excellent -- thanks very much, Derek, for stepping up! We also owe a huge round of thanks to Stephanie for her excellent management of the list for six years!!! - Benjamin (former former TYPES forum moderator :-) On Apr 21, 2009, at 4:38 PM, Stephanie Weirich wrote:
> Cheers, > Stephanie Weirich > Former TYPES forum moderator From wadler at inf.ed.ac.uk Wed Apr 22 04:23:49 2009 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Wed, 22 Apr 2009 09:23:49 +0100 Subject: [TYPES] [TYPES/announce] New moderator: Derek Dreyer In-Reply-To: <95F95CFA-7680-4117-8220-2980C3F65495@cis.upenn.edu> References: <23C368ED-28BD-4478-AA2C-7C632997375B@cis.upenn.edu> <95F95CFA-7680-4117-8220-2980C3F65495@cis.upenn.edu> Message-ID: <49EED415.9050908@inf.ed.ac.uk> Hear, hear! Thanks to Stephanie for a job well done, and thanks to Derek for stepping in to a vital role. -- Phil (former^3 Types Forum moderator) Benjamin Pierce wrote: > Excellent -- thanks very much, Derek, for stepping up! > We also owe a huge round of thanks to Stephanie for her excellent > management of the list for six years!!! > - Benjamin
From Patricia.Johann at cis.strath.ac.uk Mon Jun 22 06:57:06 2009 From: Patricia.Johann at cis.strath.ac.uk (Patricia Johann) Date: Mon, 22 Jun 2009 11:57:06 +0100 (BST) Subject: [TYPES] Implementing induction principles Message-ID: Dear Types, I'm trying to implement the induction principle for the natural numbers. Briefly, given P: Nat -> * and

u: P(0)
v: forall n:Nat. P n -> P(n+1)

I can bundle these together to form a 1+_-algebra on the type Sigma n:Nat. P n, i.e., a function

my_algebra : (1 + Sigma n:Nat. P n) -> Sigma n:Nat. P n
my_algebra = [ (0,u), \(n,p) -> (n+1, v n p)]

Then I can write

ind P :: forall n:Nat. P n
ind P n = p where (m,p) = iterate my_algebra n

To show that this is correct, I need to show that n=m. This is easily done using the uniqueness of the iteration operator. Here are my questions: 1. Can I implement ind in a programming language, theorem prover or intensional type theory where the uniqueness of the iteration operator is not available? Maybe this can be done because I only need to know that projection after a specific iterator is the identity. Or maybe there is a different approach to implementing ind? 2. Is there a programming language or theorem prover where the uniqueness of the iteration operator can be used? 3. Or are there theoretical reasons (perhaps due to undecidability of type checking) why this is not possible?
Thanks very much in advance, -patty From andreas.abel at ifi.lmu.de Mon Jun 22 10:25:49 2009 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Mon, 22 Jun 2009 16:25:49 +0200 Subject: [TYPES] Implementing induction principles In-Reply-To: References: Message-ID: <4A3F946D.6070700@ifi.lmu.de> Dear Patricia, concerning 3., you might be interested in Scott and Okada's paper "A Note on Rewriting Theory for the Uniqueness of Iteration". They prove that uniqueness of primitive recursion destroys decidability. Cheers, Andreas -- Andreas Abel <>< Du bist der geliebte Mensch.
Theoretical Computer Science, University of Munich http://www.tcs.informatik.uni-muenchen.de/~abel/ From streicher at mathematik.tu-darmstadt.de Tue Jun 23 07:44:20 2009 From: streicher at mathematik.tu-darmstadt.de (Thomas Streicher) Date: Tue, 23 Jun 2009 13:44:20 +0200 Subject: [TYPES] Implementing induction principles In-Reply-To: References: Message-ID: <20090623114420.GB12849@mathematik.tu-darmstadt.de> That induction over N does not imply uniqueness (i.e. eta fails for R) is shown on pp.101-102 of my Habilitation Thesis (1993) available at www.mathematik.tu-darmstadt.de/~streicher/HabilStreicher.pdf Thomas From drl at cs.cmu.edu Tue Jun 23 12:59:41 2009 From: drl at cs.cmu.edu (Dan Licata) Date: Tue, 23 Jun 2009 12:59:41 -0400 Subject: [TYPES] Implementing induction principles In-Reply-To: References: Message-ID: <20090623165941.GA9105@cs.cmu.edu> Hi Patty, Here's an Agda development of the induction principle along the lines you suggest: http://www.cs.cmu.edu/~drl/code/Ind.agda Regarding your questions at the end, I think the reason that my development works is that uniqueness of the iterator is (more or less--see below) true up to propositional (but not definitional) equality.
For example, if I defined induction more simply like this:

natrec : (P : Nat -> Set)
         (base : P Z)
         (step : (n : Nat) -> P n -> P (S n))
         -> (n : Nat) -> P n
natrec P base step Z = base
natrec P base step (S n) = step n (natrec P base step n)

then I can easily prove that any function that behaves like base on zero and behaves like (step o itself) on successor behaves like natrec(base,step) on all numbers:

unique : (P : Nat -> Set)
         (base : P Z)
         (step : (n : Nat) -> P n -> P (S n))
         (f : (n : Nat) -> P n)
         (id0 : Id (f Z) base)
         (id1 : (n : Nat) -> Id (f (S n)) (step n (f n)))
         (n : Nat) -> Id (f n) (natrec P base step n)

(This doesn't prove that Id f (natrec P base step)---I don't think this holds, because propositional equality defined by Id does not treat functions extensionally.) Similarly, to develop 'ind' along the lines you suggested, you need to prove the lemma

itlemma : (n : Nat) -> Id (fst (iterate n)) n

which you do by induction. -Dan
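[Patricia's iterate-based definition of ind, and the itlemma above, can also be simulated concretely in an untyped language. The following is a rough Python sketch — function names are mine, and Python of course cannot express the dependent types, so the Sigma type becomes a plain pair (n, value). It illustrates that the only fact needed is that the first projection of the iterated result is the identity.]

```python
# Untyped simulation of ind P n = snd (iterate my_algebra n).

def iterate(alg, n):
    """Church-style iteration: apply the 1+_-algebra n times,
    starting from its inl (unit) case."""
    x = alg(None)            # inl case: yields (0, u)
    for _ in range(n):
        x = alg(x)           # inr case: (n, p) -> (n+1, v n p)
    return x

def make_algebra(u, v):
    """The copair [ (0, u), \\(n, p) -> (n+1, v n p) ] as one function
    on 1 + Sigma, with None standing in for the unit."""
    def alg(arg):
        if arg is None:
            return (0, u)
        n, p = arg
        return (n + 1, v(n, p))
    return alg

# Example instance: P n = "sum of 0..n", u = 0, v n p = p + (n + 1).
alg = make_algebra(0, lambda n, p: p + (n + 1))

def ind(n):
    m, p = iterate(alg, n)
    assert m == n            # the itlemma: fst (iterate n) == n
    return p

print(ind(4))  # 10
```

[In the dependently typed setting the assert becomes itlemma, proved once by induction, and is used to transport p from P m to P n.]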
From wlovas at stwing.upenn.edu Sat Jul 11 02:10:07 2009 From: wlovas at stwing.upenn.edu (William Lovas) Date: Sat, 11 Jul 2009 02:10:07 -0400 Subject: [TYPES] records from intersections; variants from unions? Message-ID: <20090711061006.GA31698@force.stwing.upenn.edu> Hello all, I've read several places about a trick whereby you can define a language with only single-field records, but derive the general n-ary version via intersection types. (I first read about it in Reynolds's work, but I've listed some other references below.) Does anybody know of any work that dualizes this trick, recovering n-ary variants from a single-constructor variety using union types? Thanks for any pointers! William References [1] John C. Reynolds, Design of the Programming Language Forsythe, Report CMU-CS-96-146, Carnegie Mellon University, 1996. http://reports-archive.adm.cs.cmu.edu/anon/1996/CMU-CS-96-146.ps.gz [2] Alexei Kopylov, Dependent Intersection: A New Way of Defining Records in Type Theory, LICS 2003. http://files.metaprl.org/papers/dinter.pdf [3] Jan Zwanenburg, Record Concatenation with Intersection Types, Computing Science Report CS-95-34, Eindhoven University of Technology, 1995. http://www.cs.ru.nl/~janz/publications/concatenation0.ps From blume at tti-c.org Sat Jul 11 09:47:42 2009 From: blume at tti-c.org (Matthias Blume) Date: Sat, 11 Jul 2009 08:47:42 -0500 Subject: [TYPES] records from intersections; variants from unions?
In-Reply-To: <20090711061006.GA31698@force.stwing.upenn.edu> References: <20090711061006.GA31698@force.stwing.upenn.edu> Message-ID: <7F99FD53-B4BF-444D-ABEF-B82887DF4CA2@tti-c.org> Hi William, my language MLPolyR (see our ICFP'06 paper) essentially does this, although the design is based on Rémy-style row types and row polymorphism rather than general intersection and union types. My student Wonseok Chae created the following web site for MLPolyR: http://ttic.uchicago.edu/~wchae/wiki/pmwiki.php Matthias On Jul 11, 2009, at 1:10 AM, William Lovas wrote: > Does anybody know of any work that dualizes this trick, recovering > n-ary variants from a single-constructor variety using union types? From samth at ccs.neu.edu Sat Jul 11 08:05:43 2009 From: samth at ccs.neu.edu (Sam TH) Date: Sat, 11 Jul 2009 14:05:43 +0200 Subject: [TYPES] records from intersections; variants from unions?
In-Reply-To: <20090711061006.GA31698@force.stwing.upenn.edu> References: <20090711061006.GA31698@force.stwing.upenn.edu> Message-ID: <63bb19ae0907110505n7eb0b408h787bac79a80caac1@mail.gmail.com> On Sat, Jul 11, 2009 at 8:10 AM, William Lovas wrote: > Does anybody know of any work that dualizes this trick, recovering > n-ary variants from a single-constructor variety using union types? In Typed Scheme [1,2], this is exactly how variants are implemented. Here's an example of a tree type:

#lang typed-scheme
(define-struct: Node ([l : Tree] [r : Tree]))
(define-struct: Leaf ([v : Number]))
(define-type-alias Tree (U Node Leaf))

[1] http://docs.plt-scheme.org/ts-reference/ [2] Sam Tobin-Hochstadt and Matthias Felleisen, The Design and Implementation of Typed Scheme, POPL 2008, http://www.ccs.neu.edu/scheme/pubs/#popl08-thf -- sam th samth at ccs.neu.edu From uuomul at yahoo.com Tue Jul 14 14:16:11 2009 From: uuomul at yahoo.com (Andrei Popescu) Date: Tue, 14 Jul 2009 11:16:11 -0700 (PDT) Subject: [TYPES] proofs of strong normalization for System F Message-ID: <622200.52625.qm@web56102.mail.re3.yahoo.com> Hello, I would like to learn about different proofs of strong normalization for System F and related systems (featuring impredicativity), and also about formalizations of such proofs. In particular, I am curious if there are any proofs that depart significantly from Girard's original proof idea.
Thank you in advance for any help with this, Andrei Popescu From dreyer at mpi-sws.org Tue Jul 14 14:38:26 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Tue, 14 Jul 2009 20:38:26 +0200 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <622200.52625.qm@web56102.mail.re3.yahoo.com> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> Message-ID: <7fa251b70907141138h3688eb82offe80d0c1573bc57@mail.gmail.com> Andrei, I would refer you to Jean Gallier's article, 'On Girard's "Candidats De Reductibilité"', from the Odifreddi volume of Logic and Computer Science, 1990, but perhaps you already know of it. It is available here: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1739&context=cis_reports The article presents a number of different variations on Girard's proof, such as typed vs. untyped formulations of the logical relation, in great detail. It also covers the proof of strong normalization for F-omega. I'm not aware of any proof that departs significantly from Girard's original idea. Best regards, Derek On Tue, Jul 14, 2009 at 8:16 PM, Andrei Popescu wrote: > I would like to learn about different proofs of strong normalization > for System F and related systems (featuring impredicativity), and also > about formalizations of such proofs. In particular, I am curious if there > are any proofs that depart significantly from Girard's original proof idea.
> Thank you in advance for any help with this, > Andrei Popescu From blume at tti-c.org Tue Jul 14 15:18:55 2009 From: blume at tti-c.org (Matthias Blume) Date: Tue, 14 Jul 2009 14:18:55 -0500 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <7fa251b70907141138h3688eb82offe80d0c1573bc57@mail.gmail.com> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> <7fa251b70907141138h3688eb82offe80d0c1573bc57@mail.gmail.com> Message-ID: <2D48365A-B610-4582-9E85-F6EB9EE7F65C@tti-c.org> There is a 1995 paper in Information and Computation by McAllester, Kucan, and Otth titled "A Proof of Strong Normalization of F_2, F_omega and Beyond". It seems to be somewhat different from the original Girard proof, but I have not checked the details. Matthias From dreyer at mpi-sws.org Tue Jul 14 15:35:45 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Tue, 14 Jul 2009 21:35:45 +0200 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <2D48365A-B610-4582-9E85-F6EB9EE7F65C@tti-c.org> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> <7fa251b70907141138h3688eb82offe80d0c1573bc57@mail.gmail.com> <2D48365A-B610-4582-9E85-F6EB9EE7F65C@tti-c.org> Message-ID: <7fa251b70907141235g62aaa2aam7080acab321e1384@mail.gmail.com> Yes, but the core of McAllester et al.'s proof still involves Girard's method of interpreting a polymorphic type by universal quantification over an arbitrary set. Derek From blume at tti-c.org Tue Jul 14 15:37:29 2009 From: blume at tti-c.org (Matthias Blume) Date: Tue, 14 Jul 2009 14:37:29 -0500 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <7fa251b70907141235g62aaa2aam7080acab321e1384@mail.gmail.com> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> <7fa251b70907141138h3688eb82offe80d0c1573bc57@mail.gmail.com> <2D48365A-B610-4582-9E85-F6EB9EE7F65C@tti-c.org> <7fa251b70907141235g62aaa2aam7080acab321e1384@mail.gmail.com> Message-ID: Yes, indeed. Matthias On Jul 14, 2009, at 2:35 PM, Derek Dreyer wrote: > Yes, but the core of McAllester et al.'s proof still involves Girard's > method of interpreting a polymorphic type by universal quantification > over an arbitrary set. From wojtek at cs.cornell.edu Tue Jul 14 15:49:57 2009 From: wojtek at cs.cornell.edu (Wojtek Moczydlowski) Date: Tue, 14 Jul 2009 15:49:57 -0400 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: References: <622200.52625.qm@web56102.mail.re3.yahoo.com> <7fa251b70907141138h3688eb82offe80d0c1573bc57@mail.gmail.com> <2D48365A-B610-4582-9E85-F6EB9EE7F65C@tti-c.org> <7fa251b70907141235g62aaa2aam7080acab321e1384@mail.gmail.com> Message-ID: Given that AFAIR system F is equiconsistent with second-order arithmetic, how could you expect any normalization proof not to involve universal quantification over all sets of terms/numbers? Wojtek From txa at Cs.Nott.AC.UK Tue Jul 14 16:20:00 2009 From: txa at Cs.Nott.AC.UK (Thorsten Altenkirch) Date: Tue, 14 Jul 2009 21:20:00 +0100 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <622200.52625.qm@web56102.mail.re3.yahoo.com> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> Message-ID: My first published paper was about a formalisation of strong normalisation for System F, see http://www.cs.nott.ac.uk/~txa/publ/tlca93.pdf This proof however followed Girard's proof. Later we used normalisation by evaluation to construct a normalisation function for System F, see http://www.cs.nott.ac.uk/~txa/publ/lics96.pdf and http://www.cs.nott.ac.uk/~txa/publ/f97.pdf the latter paper was never published. As Wojtek remarked all proofs of SN for System F already use impredicative reasoning.
However, Klaus Aehlig showed in his PhD that System F with only one type variable can be shown consistent using generalized inductive definitions (ID), and it may be possible to scale this up to give a predicative normalisation proof for this fragment. Another question is whether there are other (larger) fragments which can be attacked with predicative methods. Cheers, Thorsten On 14 Jul 2009, at 19:16, Andrei Popescu wrote: > I would like to learn about different proofs of strong normalization > for System F and related systems (featuring impredicativity), and also > about formalizations of such proofs. This message has been checked for viruses but the contents of an attachment may still contain software viruses, which could damage your computer system: you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation. From wojtek at cs.cornell.edu Tue Jul 14 17:03:42 2009 From: wojtek at cs.cornell.edu (Wojtek Moczydlowski) Date: Tue, 14 Jul 2009 17:03:42 -0400 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: References: <622200.52625.qm@web56102.mail.re3.yahoo.com> Message-ID: It'd be nice to have a family of natural type systems corresponding to the important subsystems of second-order arithmetic described in Simpson's book. Wojtek On Tue, 14 Jul 2009, Thorsten Altenkirch wrote: > My first published paper was about a formalisation of strong > normalisation for System F, see http://www.cs.nott.ac.uk/~txa/publ/tlca93.pdf > This proof however followed Girard's proof. > Later we used normalisation by evaluation to construct a normalisation > function for System F, see http://www.cs.nott.ac.uk/~txa/publ/lics96.pdf > and http://www.cs.nott.ac.uk/~txa/publ/f97.pdf > the latter paper was never published.
> > As Wojtek remarked, all proofs of SN for System F already use > impredicative reasoning. However, Klaus Aehlig showed in his PhD that > System F with only one type variable can be shown consistent using > generalized inductive definitions (ID this up to give a predicative normalisation proof for this fragment. > Another question is whether there are other (larger) fragments which > can be attacked with predicative methods. > > Cheers, > Thorsten > > On 14 Jul 2009, at 19:16, Andrei Popescu wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> Hello, >> >> I would like to learn about different proofs of strong >> normalization for System F and related systems (featuring >> impredicativity), and also about formalizations of such proofs. In >> particular, I am curious if there are any proofs that depart >> significantly from Girard's original proof idea. >> >> Thank you in advance for any help with this, >> >> Andrei Popescu
From jean at cis.upenn.edu Thu Jul 16 13:45:39 2009 From: jean at cis.upenn.edu (Jean Gallier) Date: Thu, 16 Jul 2009 13:45:39 -0400 Subject: [TYPES] Proofs of strong normalization for system F Message-ID: <7ACB868A-5417-418E-8BB1-1DE7A9B9ED68@cis.upenn.edu>
Besides my "Candidats de réductibilité" paper, in 1995, I attempted to axiomatize the conditions that make the reducibility method work, not just for SN, but also for other properties such as confluence, in "Proving properties of lambda-terms using realizability, covers and sheaves" (TCS, No 142(2), 299-368), also found here: http://www.cis.upenn.edu/~jean/gbooks/logic.html (after my old logic book).
A related attempt is in "Kripke models and the (in)equational logic of the second-order lambda-calculus" (Annals of Pure and Applied Logic, 84, 257-316, 1997), also found at http://www.cis.upenn.edu/~jean/gbooks/logic.html Krivine also gives a very clean proof of SN for system F in his book on the lambda-calculus. I personally don't feel that the "McAllester proof" is significantly new or illuminating. Best, -- Jean Gallier
From tortora at uniroma3.it Sun Jul 19 05:51:30 2009 From: tortora at uniroma3.it (Lorenzo Tortora de Falco) Date: Sun, 19 Jul 2009 11:51:30 +0200 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <4A0A495C0012EBC7@mailu.cs.poste.it> References: <4A0A495C0012EBC7@mailu.cs.poste.it> Message-ID: <1247997090.6459.11.camel@parmenide>
I would like to mention the recent joint work with Michele Pagani: Michele Pagani, Lorenzo Tortora de Falco, "Strong normalization property for second order linear logic", to appear in Theoretical Computer Science, available on my web page: http://logica.uniroma3.it/~tortora/ The proof we give uses Girard's method to prove Weak Normalization (there is absolutely nothing new here), but to infer Strong Normalization we need what is called in various ways in the literature: the "conservation theorem" in lambda-calculus (see Barendregt's book, for example), the "standardization theorem" by Girard, and also the "propriété de striction" by Danos. The proof of this result is rather delicate for full LL: it is the main point of the paper and it is purely combinatorial. Lorenzo.
Il giorno mar, 14/07/2009 alle 21.24 +0200, Andrei Popescu ha scritto: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Hello, > > > > I would like to learn about different proofs of strong > > normalization for System F and related systems (featuring > > impredicativity), and also about formalizations of such proofs.
In > > particular, I am curious if there are any proofs that depart > > significantly from Girard's original proof idea. > > > > Thank you in advance for any help with this, > > > > Andrei Popescu
From tortora at uniroma3.it Tue Jul 21 13:24:30 2009 From: tortora at uniroma3.it (Lorenzo Tortora de Falco) Date: Tue, 21 Jul 2009 19:24:30 +0200 Subject: [TYPES] [Fwd: Re: proofs of strong normalization for System F] Message-ID: <1248197070.7718.52.camel@parmenide>
Concerning the "propriété de striction" I mentioned in my previous message: one should mention that, like much of the work on proof-nets of those years at the beginning of Linear Logic, it was achieved through the very close cooperation of Vincent Danos and Laurent Regnier. Lorenzo.
From rcl at ihug.co.nz Wed Jul 22 05:17:53 2009 From: rcl at ihug.co.nz (Ralph Loader) Date: Wed, 22 Jul 2009 21:17:53 +1200 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <622200.52625.qm@web56102.mail.re3.yahoo.com> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> Message-ID: <20090722211753.30c4e04a@i.geek.nz>
Andrei, Once upon a time I gave a proof based on functional interpretations. See http://homepages.ihug.co.nz/~suckfish/papers/normal.pdf Given a term, I derive another term that calculates the size of the normalisation tree of the first. Then a model (or weak normalisation, for that matter) can be used to verify the correctness of the calculation; the condition on the model needed to make this work is essentially just that some type correctly represents the natural numbers. All the impredicativity is pushed back into the model; the syntactical constructions are all primitive recursive. Cheers, Ralph.
On Tue, 14 Jul 2009 11:16:11 -0700 (PDT) Andrei Popescu wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I would like to learn about different proofs of strong > normalization for System F and related systems (featuring > impredicativity), and also about formalizations of such proofs. In > particular, I am curious if there are any proofs that depart > significantly from Girard's original proof idea. > > Thank you in advance for any help with this, > > Andrei Popescu
From bcpierce at cis.upenn.edu Mon Jul 27 12:08:51 2009 From: bcpierce at cis.upenn.edu (Benjamin Pierce) Date: Mon, 27 Jul 2009 12:08:51 -0400 Subject: [TYPES] Teaching PL using proof assistants Message-ID:
A couple of quick questions... 1) I'm putting together a talk for ICFP about using proof assistants to teach programming language foundations. As part of this, I'd like to put together a comprehensive list of past experiments in this area -- both to educate myself on what's out there and as a resource for others. If you've got a course web site or any other materials that I can include, please send me a link. I'd also love to hear any thoughts or stories you may have. 2) A few months ago I asked here whether other people would be interested in teaching their own courses from the Coq-based course notes I've been developing over the past couple of years. I got a lot of positive responses, so I've been working to put them in a more polished form. I know of one person already who plans to use them in the coming year, and it would be great to have a couple more. Please let me know if you're interested.
Thanks, - Benjamin
From jonathan.aldrich at cs.cmu.edu Wed Jul 29 17:34:59 2009 From: jonathan.aldrich at cs.cmu.edu (Jonathan Aldrich) Date: Wed, 29 Jul 2009 17:34:59 -0400 Subject: [TYPES] SASyLF proof assistant: teaching experience In-Reply-To: References: Message-ID: <4A70C083.30101@cs.cmu.edu>
If you're planning to teach a programming language or logic foundations course and are thinking about using a proof assistant, you might consider SASyLF. SASyLF (Second-order Abstract Syntax Logical Framework) is an LF-based proof assistant that has the same logical foundation and many of the same advantages as Twelf (in particular, variable binding is "built in"). However, it uses a syntax very close to that used for proofs on paper, giving the tool a much gentler learning curve--and much better error messages--than many alternatives. In Fall 2008, John Boyland at UWM and Todd Millstein at UCLA used SASyLF for their type theory courses. Sample homeworks are available at Boyland's course page: they include proof exercises from Pierce's book, which students did in SASyLF, all the way up to advanced chapters such as 20, 22, 23, and 30: http://www.cs.uwm.edu/classes/cs732/index-fa08.html Experience with SASyLF from the courses was very positive. In particular, feedback from a student post-survey included (Likert scale 1-5, 5 is strongly agree): * Would like to use SASyLF in another PL course: 4.2 * Able to learn SASyLF quickly: 3.8 * SASyLF improved my ability to prove theorems, even on paper: 4.0 * SASyLF enabled me to accomplish tasks more quickly: 3.3 More information, the SASyLF command-line proof checker, and a very simple Eclipse-based IDE, are available at: http://www.sasylf.org/ SASyLF is not ideal for everything. It shares many of the disadvantages of Twelf--e.g.
things like sets are neither built-in nor available as generic libraries, and it supports only forall-exists theorems (good enough for many PL theorems including typical type soundness theorems, but not enough for e.g. logical relations). SASyLF is also more verbose than Twelf (probably a good thing for teaching, but perhaps not so good for research-scale proofs), and since it is second-order it does not support some higher-order encodings that Twelf does. But we nevertheless believe it can be a valuable tool in many classrooms! Contact me if you have any questions. Jonathan Aldrich
From fezsentido at gmail.com Wed Aug 5 15:33:06 2009 From: fezsentido at gmail.com (Flavio Botelho) Date: Wed, 5 Aug 2009 16:33:06 -0300 Subject: [TYPES] Idea about how to simulate variable assignment in lambda calculus Message-ID: <695e78d20908051233g18657edalece6421944fd3f8c@mail.gmail.com>
I don't know if the idea might be useful for anything, or maybe it's not even new (I tried to look for it), but I would like to share it anyway.
M, N ∈ Terms ::= x | λx.M | M N | # M # | x := M
Reduction rules:
Beta: (λx.M) N -> M[N/x]
Boundary Removal: # M # -> M (condition: no variable assignment ':=' occurs inside M)
Variable Assignment: (λx1,x2,..,xn.# ... x:=M ... #) V1 V2 ... Vn -> (λx1,x2,..,xn. (λx.# ... I ... #) M) V1 V2 ... Vn where I = Identity = λx.x
x:=M becomes I and the variable x becomes bound at the boundary, where a redex is created with M as the applied term.
*** Important restriction: M should only have variables which are free within the boundary and/or closed terms. More specifically, there shouldn't exist a variable that is bound within the boundary (#...#), since with this rewrite rule the variable would no longer be bound to the right lambda (it would be either free or bound to another lambda outside the boundary). If the variable is reassigned, the boundary will ensure that the last assignment is the closest abstraction of that variable name within the boundary.
Examples:
#(x:=2) (x:=1) x# --VA--> (λx.# I (x:=1) x #) 2 --VA--> (λx.(λx.# I I x #) 1) 2 --Boundary Removal--> (λx.(λx. I I x) 1) 2 --Betas--> 1
#(x:=2) (x:=1) x# --VA--> (λx.# (x:=2) I x #) 1 --VA--> (λx.(λx.# I I x #) 2) 1 --Boundary Removal--> (λx.(λx. I I x) 2) 1 --Betas--> 2
Factorial: Pure lambda calculus: Y (λf n. IF (n==0) 1 (n*(f (n-1)))) With this variable assignment idea: (λn.# (x:=1) Y (λf. IF (n==0) x ((x:=x*n) (n:=n-1) f) ) #) Notice that you need the (x:=1) to be evaluated before the first x:=x*n, that (x:=x*n) always be evaluated before (n:=n-1), and that the VAs be kept from being applied if your strategy spuriously creates new instances through Y.
--- Clearly Church-Rosser is lost, as expected for such an imperative feature. It simply adds the possibility of many different normal forms for a given term. A specific strategy is not necessary, but you need to ensure a specific order for the evaluation of the assignments and the "reading" of these mutable variables to ensure equivalent results. The only tricky part is when assignments are duplicated (or triplicated, etc.): we need to modify the beta rule so we can properly reconstruct an ordered lattice. Usually you want to follow whatever strategy you have for Beta until you find a variable assignment that requires some other one to be executed first. Actually, if later on we can prove certain terms with VA to be commutative (for example x:=x+2 and x:=x+3), then they also don't need to be ordered relative to one another. I.e.:
#(x:=x+2) (x:=x+3) x# --VA--> (λx.# (x:=x+2) I x #) x+3 --VA--> (λx.(λx.# I I x #) x+2) x+3 --Boundary Removal--> (λx.(λx. I I x) x+2) x+3 --Betas--> x+2+3
#(x:=x+2) (x:=x+3) x# --VA--> (λx.# I (x:=x+3) x #) x+2 --VA--> (λx.(λx.# I I x #) x+3) x+2 --Boundary Removal--> (λx.(λx. I I x) x+3) x+2 --Betas--> x+2+3
--- A possible extension of this is to have "local" boundaries, attaching a list of variable names to a boundary such that those variables that are not in that boundary go to a farther boundary; this can be used to mimic programming languages' local variable scope... Kudos, Flavio Botelho
From rwh at cs.cmu.edu Sat Aug 15 13:01:01 2009 From: rwh at cs.cmu.edu (Robert Harper) Date: Sat, 15 Aug 2009 13:01:01 -0400 Subject: [TYPES] mechanized metatheory of standard ml alpha release Message-ID:
Motivated by the current interest in many aspects of mechanized metatheory for programming languages, we are releasing an "alpha" version of our mechanization of the metatheory of Standard ML, which is available from our respective publications pages. The tarball that you will find there contains the formalization and verification of Standard ML by elaboration into an internal language, as described in our POPL paper with Daniel Lee. It should be understood that this is an alpha release that certainly needs polishing for presentation and clarity, and may be revised before we go to a full publication. Additionally, it is also possible that it may contain errors, not ones that compromise safety, but ones that may depart from the intended semantics. We are currently validating the mechanization to rule out such mistakes. We do, however, deliberately exclude some obsolete or deprecated features of Standard ML (notably equality types). Thanks for your interest! Karl Crary Bob Harper
From giuseppemag at gmail.com Wed Aug 19 02:21:17 2009 From: giuseppemag at gmail.com (Giuseppe Maggiore) Date: Wed, 19 Aug 2009 08:21:17 +0200 Subject: [TYPES] CPS and accumulators Message-ID: <67fbaca40908182321m2c4da3fej9255d844bc6eac3a@mail.gmail.com>
Hello! First of all I'd like to introduce myself: I'm Giuseppe Maggiore, an MSc student working on his thesis who has very recently discovered the wonders of functional languages.
Having always been extremely fond of computer graphics, finding out that there is another field at least as worthy of my affections tore me a bit. Luckily all is well now, since I have realized that functional languages are actually one of the best imaginable tools for writing the highly parallel code that modern graphics cards (GPUs) execute. I have realized after a bunch of long sessions with my advisor (prof. Michele Bugliesi) that all we need is a way to generate tail recursive code to find all those hidden dependencies between computations so that we can map those to the same parallel processors (or if there are too many dependencies we just don't execute the computation in parallel at all). A CPS transform is apparently all we need, but I was wondering if there is a more straightforward way to generate tail recursion with accumulating parameters. Am I rambling about this last part? Can I get some more feedback from you experts (my advisor is really great, but there aren't that many people in our university you can discuss these topics with :D)? thanks! -- Giuseppe Maggiore Microsoft Student Partner - Ca' Foscari University of Venice, Italy B.Sc. in Computer Science Tel. +393319040031
From matthias at ccs.neu.edu Wed Aug 19 10:25:43 2009 From: matthias at ccs.neu.edu (Matthias Felleisen) Date: Wed, 19 Aug 2009 10:25:43 -0400 Subject: [TYPES] CPS and accumulators In-Reply-To: <67fbaca40908182321m2c4da3fej9255d844bc6eac3a@mail.gmail.com> References: <67fbaca40908182321m2c4da3fej9255d844bc6eac3a@mail.gmail.com> Message-ID:
On Aug 19, 2009, at 2:21 AM, Giuseppe Maggiore wrote: > ... all we need is a way to generate tail recursive code to find all > those hidden dependencies between computations ... The above appears to be your central sentence.
But it contains two distinct ideas: -- generate TR code, which is about code generation and the organization of the compiler's intermediate stages ("middle end"); and -- find (hidden) dependencies, which concerns the analysis of source code to predict the flow of values before you even run the program. There is indeed a relationship between these two, but not necessarily the one that your English connective phrase "to find all those" implies. Before I send you off on a wild-goose chase, what is the real purpose of your project? Write the analysis component for a compiler so that you can ship some of the code to the GPU? Write the intermediate representation? (CPS? ANF? In between?) Connect the latter to the former? (See a paper in the upcoming ICFP on just this issue.) Or an entire high-performing parallelizing compiler? (Which is usually a project for several people over several years.) All the best -- Matthias
From giuseppemag at gmail.com Thu Aug 20 02:24:12 2009 From: giuseppemag at gmail.com (Giuseppe Maggiore) Date: Thu, 20 Aug 2009 08:24:12 +0200 Subject: [TYPES] CPS and accumulators In-Reply-To: <46F0C8EF3D7C164A8C6AB8FE615D016C0B2C911C4F@ICEXM3.ic.ac.uk> References: <67fbaca40908182321m2c4da3fej9255d844bc6eac3a@mail.gmail.com> <46F0C8EF3D7C164A8C6AB8FE615D016C0B2C911C4F@ICEXM3.ic.ac.uk> Message-ID: <67fbaca40908192324g41ba4175qc7161b0fcab41096@mail.gmail.com>
Of course what you cited from my first mail was a bit hyperbolic :) To be precise, the thesis focuses on optimizing certain pieces of code that work on sequences. The idea is roughly to take functions that manipulate lists and turn them into equivalent code that runs on a GPU. I will use DirectX compute shaders called from F#, since CUDA works only on nVidia and with Larrabee on the horizon I have no intention of being locked to the hardware.
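[As a side note on the CPS-versus-accumulators question raised earlier in this thread, the relationship between the two can be sketched on a toy example. The code below is illustrative Python, not part of the original messages; all function names are hypothetical.]

```python
# Three renderings of the same list sum, illustrating the transformations
# discussed in this thread. All names are illustrative.

def sum_direct(xs):
    # Direct style: the recursive call is not a tail call; the pending
    # addition lives on the control stack.
    if not xs:
        return 0
    return xs[0] + sum_direct(xs[1:])

def sum_cps(xs, k=lambda v: v):
    # CPS: every call is a tail call; the pending work is reified in k.
    if not xs:
        return k(0)
    return sum_cps(xs[1:], lambda v, x=xs[0]: k(x + v))

def sum_acc(xs, acc=0):
    # Accumulator-passing: each continuation has the shape (lambda v: k(c + v))
    # and is defunctionalized to the number c itself. This relies on + being
    # associative, which is also what turns the chain of dependencies into a
    # parallelizable reduction.
    if not xs:
        return acc
    return sum_acc(xs[1:], acc + xs[0])

print(sum_direct([1, 2, 3, 4]), sum_cps([1, 2, 3, 4]), sum_acc([1, 2, 3, 4]))
# 10 10 10
```

In this reading, the accumulator version is the special case of CPS where every continuation can be represented by a first-order value; when the combining operation is not associative, the general CPS form is the fallback.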
The idea is to avoid trying to compile entire programs on the GPU, since that would be crazy: GPUs are SIMD machines, so there really is no point in trying to map independent pieces of code to different cores because having diverging instructions they would just be executed sequentially. Whenever a sequence is processed recursively, though, there might be some hope of recognizing it and translating the code (if it is simple enough, otherwise it's not worth it) to the GPU, basically recognizing all those functions that somehow "look" like map, filter, sort, etc. This approach is vastly simpler than "writing a compiler for a GPU", but given the current state of the hardware it also has more chances of becoming something reasonably working (compiling an entire FL to the GPU would mostly generate slower code). PS: I have done quite a lot of computer vision on GPUs with a local research group to get the hang of the process (I have done shaders since, well, ever): the amount of time you could save by not using those awful and unreadable GPU languages would be immense, and you would still be able to tap into the power of the GPU even without all the fine-tuning! On Wed, Aug 19, 2009 at 10:35 PM, Cunningham, David < david.cunningham04 at imperial.ac.uk> wrote: > > Luckily all is well now, since I have realized that functional languages > are actually one of the best imaginable tools for writing the extremely high > parallel code that modern graphics cards (GPUs) execute. > > I'm not sure this is particularly true, depending on what you mean by > functional languages. I take it you are planning to compile a toy > functional language to OpenCL or CUDA? You will have to work very hard to > avoid performance overhead so vast that they will make the GPU look like a > CPU. The best I can imagine you could achieve is an eager language without > typical functional datastructures like lists and trees. 
Arrays are the name > of the game on the GPU because memory accesses have to be regular. You > would have to implement general recursion yourself using a stack but this is > tricky because there are very strict memory overheads (as little as 128 > bytes per 'thread' on a fully-loaded G200) which if exceeded will kill > performance. You'd have to put the stack in main memory but then there are > latency issues. You would have to implement functions-as-values by inlining > or using some sort of big switch statement, but keeping the environment > alive becomes very difficult with no 'heap'. You would have to invent your > own memory allocator and perhaps garbage collector. There is no cache, you > can make your own in software but this will probably kill performance. > Typically GPU algorithms are carefully hand-tuned to make sure they fit in > the various bits of memory, and careful compromises to the algorithms are > made to ensure this is possible. Any high level language for the GPU must > have enough performance transparency to allow this kind of activity. > > I highly recommend writing some parallel algorithms for the GPU and get > some experience benchmarking and hand-tuning them to appreciate how > difficult it can be to exceed the performance of the CPU, even for > data-parallel problems like dense matrix multiplication and k-means. -- Giuseppe Maggiore Microsoft Student Partner - Ca' Foscari University of Venice, Italy B.Sc. in Computer Science Tel. 
+393319040031 Website: http://xnalearners.spaces.live.com
From M.F.Berger at sussex.ac.uk Fri Aug 21 06:56:18 2009 From: M.F.Berger at sussex.ac.uk (Martin Berger) Date: Fri, 21 Aug 2009 11:56:18 +0100 Subject: [TYPES] proofs of strong normalization for System F In-Reply-To: <622200.52625.qm@web56102.mail.re3.yahoo.com> References: <622200.52625.qm@web56102.mail.re3.yahoo.com> Message-ID: <4A8E7D52.4010701@sussex.ac.uk>
> I would like to learn about different proofs of strong > normalization for System F and related systems (featuring > impredicativity), and also about formalizations of such proofs. > In particular, I am curious if there are any proofs that depart > significantly from Girard's original proof idea.
Hello Andrei, sorry for the late reply. Another way of proving termination for System F (under the CBV reduction relation) is to embed System F in the polymorphic pi-calculus in the following steps. a. Take the CBV version of Milner's translation of (untyped) lambda-calculus into (untyped) pi-calculus. b. Translate System F types into polymorphic pi-calculus types. c. Show that the translation in (a) embeds terms that are typable in System F as processes that are typable in the polymorphic pi-calculus. d. Show that whenever a System F term M has a one-step CBV reduction, then the translation has an n-step reduction in the pi-calculus for some n > 0. e. Show that the polymorphic pi-calculus is strongly normalising. All steps have been proven in [1]; all steps but (e) are straightforward. Our proof of (e) is an adaptation of the usual candidate method to processes. Although we have not done this in detail, the choice of CBV does not seem to be fundamental; I believe a similar development could be done for CBN. Does our proof depart significantly from Girard's original proof? I'll leave that to you to decide. [1] M. Berger, K. Honda, N. Yoshida: Genericity and the Pi-Calculus. Acta Informatica, Vol. 42(2), pp. 83-141, 2005.
Available from: http://www.informatics.sussex.ac.uk/users/mfb21/publications/fossacs03 Martin From jean at cis.upenn.edu Fri Aug 21 17:36:18 2009 From: jean at cis.upenn.edu (Jean Gallier) Date: Fri, 21 Aug 2009 17:36:18 -0400 Subject: [TYPES] teaching discrete math and logic to undergraduates Message-ID: Hi guys, For now three years, I have been teaching a Discrete Math course for undergraduates. I believe that it is important to expose these kids to logic, and since they should not be too brainwashed yet, I try to present a proof system in natural deduction format in order to show them the difficulties with the "proof by contradiction principle" and make them aware of some of the non-constructive aspects of classical logic. I wrote some notes that I may want to convert into a book. There is also some good stuff on combinatorics and graphs. I have most solutions to my problems written up and more problems should be added. I'd be happy if you have any comments or suggestions. You'll find the manuscript there: http://www.cis.upenn.edu/~jean/gbooks/discmath.html Best, -- Jean Gallier From matthias at ccs.neu.edu Fri Aug 21 17:45:10 2009 From: matthias at ccs.neu.edu (Matthias Felleisen) Date: Fri, 21 Aug 2009 17:45:10 -0400 Subject: [TYPES] teaching discrete math and logic to undergraduates In-Reply-To: References: Message-ID: <34698616-57B0-4A2D-9761-E0192C8CA25D@ccs.neu.edu> On Aug 21, 2009, at 5:36 PM, Jean Gallier wrote: > For now three years, I have been teaching a Discrete Math course > for undergraduates. I believe that it is important to expose these > kids > to logic, ... Agreed! On this note, some three years ago NEU started teaching a follow-up spring-semester course to Discrete Mathematics that introduces _freshmen_ to theorem proving about code. We use ACL2, which nicely fits to the design discipline we teach with Scheme/"How to Design Programs" in the first (fall) course. 
In our dialect of ACL2 students can write interactive video games in a functional language and can then prove theorems about the actual running code. The proof discipline matches the design discipline and prepares students for reasoning about abstract/virtual machines. While my graduate students and I started the course, Pete Manolios (cc'ed) has taken over and has delivered the course for the past two years. I am sure he can point interested parties to relevant web pages (privately or if there's enough interest on the list). -- Matthias
From prakash at cs.mcgill.ca Fri Aug 21 18:39:15 2009 From: prakash at cs.mcgill.ca (prakash) Date: Fri, 21 Aug 2009 18:39:15 -0400 Subject: [TYPES] teaching discrete math and logic to undergraduates In-Reply-To: References: Message-ID: <4A8F2213.6090908@cs.mcgill.ca>
Thanks for sharing these excellent notes with us. I had a quick look at the logic section. I think it is worth emphasizing that a proof by contradiction of a *negative* statement is perfectly constructive. In fact, usually it is not a proof by contradiction at all; it just looks like one. You give the example of showing that sqrt(2) is irrational. Irrational means rational ==> false, and thus a constructive proof of this should proceed exactly in the way you have shown. Students and professional mathematicians over-rate the importance of proof-by-contradiction as a technique. When do we really use proof by contradiction to prove a positive statement? Fairly often, but not as often as is made out. A lot of proofs by contradiction have this pattern: I want to show A implies B. So assume A and NOT-B. Now prove B then exclaim, contradiction! OK, so I was half-joking. I am sure none of the readers of this group do this, but I have seen students do it.
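[The sham pattern just described can be made visible in a proof assistant. The following is an illustrative sketch in Lean 4 notation, not part of the original message: given a direct argument h : A → B, the "contradiction" version merely assumes ¬B, proves B anyway, and applies ¬B to it, so the classical wrapper adds nothing.]

```lean
-- Sham "proof by contradiction": assume A and ¬B, prove B outright,
-- and feed it to ¬B. Stripping Classical.byContradiction leaves the
-- same direct proof of A → B.
example (A B : Prop) (h : A → B) : A → B :=
  fun hA => Classical.byContradiction (fun hnB => hnB (h hA))
```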
In the example you gave of a truly non-constructive proof: there exist two numbers a, b such that a, b are both irrational but a^b is rational, you can get more of a bang if you point out that there is an obvious constructive proof: take a to be sqrt(2) and b to be 2 log_2 3. Then a^b = (sqrt(2))^[2 log_2 3] = 2^[log_2 3] = 3. [I found this in Lambek and Scott]. Cheers, Prakash
Jean Gallier wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi guys, > > For now three years, I have been teaching a Discrete Math course > for undergraduates. I believe that it is important to expose these kids > to logic, and since they should not be too brainwashed yet, I try > to present a proof system in natural deduction format in order > to show them the difficulties with the "proof by contradiction > principle" > and make them aware of some of the non-constructive aspects > of classical logic. I wrote some notes that I may want to convert into a book. > There is also some good stuff on combinatorics and graphs. > I have most solutions to my problems written up and more problems > should be added. > I'd be happy if you have any comments or suggestions. > You'll find the manuscript there: > > http://www.cis.upenn.edu/~jean/gbooks/discmath.html > > Best, > -- Jean Gallier
From jean at cis.upenn.edu Fri Aug 21 19:18:26 2009 From: jean at cis.upenn.edu (Jean Gallier) Date: Fri, 21 Aug 2009 19:18:26 -0400 Subject: [TYPES] non-constructive proofs Message-ID: <9A47AFF0-B324-4224-A2F9-2764A65B9AC5@cis.upenn.edu>
Thanks Prakash! By the way, I reveal the a = sqrt{2} and b = log_2 9 trick on page 51 of my notes!
Best, -- Jean
From txa at Cs.Nott.AC.UK Fri Aug 21 19:29:48 2009 From: txa at Cs.Nott.AC.UK (Thorsten Altenkirch) Date: Sat, 22 Aug 2009 00:29:48 +0100 Subject: [TYPES] non-constructive proofs In-Reply-To: <9A47AFF0-B324-4224-A2F9-2764A65B9AC5@cis.upenn.edu> References: <9A47AFF0-B324-4224-A2F9-2764A65B9AC5@cis.upenn.edu> Message-ID: <785598B5-A9A2-49DD-9A5B-10AA359CE875@cs.nott.ac.uk>
> By the way, I reveal the a= sqrt{2} and b = log_2 9 trick on page > 51 of my notes!
Thanks to both of you. However, we may have learned at school how to prove that sqrt{2} is irrational. Is there an easy proof that log_2 9 is? Cheers, Thorsten
From oleg at okmij.org Fri Aug 21 20:38:04 2009 From: oleg at okmij.org (oleg@okmij.org) Date: 22 Aug 2009 00:38:04 -0000 Subject: [TYPES] non-constructive proofs In-Reply-To: <785598B5-A9A2-49DD-9A5B-10AA359CE875@cs.nott.ac.uk> Message-ID: <20090822003804.48354.qmail@eeoth.pair.com>
Thorsten Altenkirch posed the problem of the rationality of log_2 9. There are actually two problems: proving that log_2 9 is not rational, and proving that log_2 9 is irrational. They are different problems in intuitionistic logic, and the second is much harder. Incidentally, the high-school proofs about sqrt(2) only proved that sqrt(2) is not rational. If we use classical logic (as most theorem provers did in Freek's benchmark), we may conclude the number is irrational. We may not intuitionistically conclude so. Let us first solve the first problem, the negation of rationality of log_2 9. Let us use weak reductio, which is intuitionistically justified. Let us assume log_2 9 is rational, that is, p/q, where p and q are two positive integers.
log_2 9 = p/q, hence q * log_2 9 = p, i.e. log_2 (9^q) = p, and so 9^q = 2^p, which can only happen if both p and q are 0. Otherwise, the two sides have no common factor other than 1. QED. To prove irrationality of log_2 9 intuitionistically, we should prove that we can construct a sequence of intervals over Q (the rational numbers) whose length is decreasing and which contain log_2 9. In other words, we should be able to bracket log_2 9 by two rational numbers given any rational eps. We appeal, as in the case of sqrt(2), to the Newton method (or the golden ratio method, or any other suitable method of root finding).
From danvy at brics.dk Fri Aug 21 23:33:05 2009 From: danvy at brics.dk (Olivier Danvy) Date: Sat, 22 Aug 2009 05:33:05 +0200 (CEST) Subject: [TYPES] teaching discrete math and logic to undergraduates In-Reply-To: <4A8F2213.6090908@cs.mcgill.ca> (message from prakash on Fri, 21 Aug 2009 18:39:15 -0400) References: <4A8F2213.6090908@cs.mcgill.ca> Message-ID: <20090822033305.51EAC1B5732@thyra01.cs.au.dk>
> A lot of proofs by contradiction have this pattern: > > I want to show A implies B. So assume A and NOT-B. Now prove B then > exclaim, contradiction! OK, so I was half-joking. I am sure none of > the readers of this group do this, but I have seen students do it.
Yet students who play the Minesweeper game http://en.wikipedia.org/wiki/Minesweeper_(computer_game) do use proofs by contradiction once they can't use any further modus-ponens proofs: * The modus-ponens proofs pertain to saturation: if there is a 1 here and I know the neighbour which is a mine, all the other neighbours are not mines. * The proofs by contradiction posit that there is a mine here, and then proceed by saturation; if saturation yields a contradiction, there was no mine here. And the better players do it all the time, sometimes in a nested fashion.
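[The Minesweeper reasoning just described can be sketched in a few lines of code. The following is an illustrative Python sketch, not from the original message, using a simplified model in which each numeric clue becomes a constraint "exactly n mines among these cells"; the full nested saturation the better players do is elided, and only the counting check is shown.]

```python
# A minimal model of the two proof styles above.

def consistent(constraints, assign):
    # Counting check: a constraint (cells, n) is still satisfiable iff the
    # mines already placed do not exceed n, and the unknown cells could
    # still bring the count up to n.
    for cells, n in constraints:
        mines = sum(1 for c in cells if assign.get(c) is True)
        unknown = sum(1 for c in cells if c not in assign)
        if mines > n or mines + unknown < n:
            return False
    return True

def refute_mine(constraints, assign, cell):
    # Proof by contradiction: posit a mine at `cell`; if the constraints
    # become unsatisfiable by counting alone, the cell is provably safe.
    return not consistent(constraints, {**assign, cell: True})

# A "1" clue touching cells a and b, and a "0" clue touching a alone.
constraints = [({"a", "b"}, 1), ({"a"}, 0)]
print(refute_mine(constraints, {}, "a"))  # True: 'a' cannot be a mine
print(refute_mine(constraints, {}, "b"))  # False: no contradiction yet
```

As Derek notes below in the thread, this use of contradiction is constructive, because whether a cell holds a mine is a decidable proposition of the game.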
We need to connect the intuitive, self-invented proofs by contradiction of the students with the formal proofs by contradiction of the teachers without removing the fun of playing Minesweeper and of doing formal proofs. -- Olivier

From dreyer at mpi-sws.org Sat Aug 22 04:39:17 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Sat, 22 Aug 2009 10:39:17 +0200 Subject: [TYPES] teaching discrete math and logic to undergraduates In-Reply-To: <20090822033305.51EAC1B5732@thyra01.cs.au.dk> References: <4A8F2213.6090908@cs.mcgill.ca> <20090822033305.51EAC1B5732@thyra01.cs.au.dk> Message-ID: <7fa251b70908220139v77648f40ya4a4f527ea6e7546@mail.gmail.com>

Olivier's example brings up an important point, one that I remember being confused about as a student. It's possible that Jean's very interesting notes already discuss this point, but at first glance I didn't notice it.

Proof by contradiction (and the law of excluded middle) are perfectly valid principles even in constructive logic, for SOME propositions of interest. In the Minesweeper example, a critical axiom of the game is that every cell is in one of two states: either it contains a mine or it is empty, but not both. Given this axiom, the student's intuitive proof-by-contradiction or excluded-middle-elimination strategy for Minesweeper is perfectly constructive, since the proposition in question (whether a cell contains a mine) is known constructively to be either true or false.

For me, the important moral is to be aware that when you do a proof by contradiction, you should have a pretty good reason to believe that the theorem you are proving is actually either true or false. Derek

From txa at Cs.Nott.AC.UK Sat Aug 22 12:16:22 2009 From: txa at Cs.Nott.AC.UK (Thorsten Altenkirch) Date: Sat, 22 Aug 2009 17:16:22 +0100 Subject: [TYPES] non-constructive proofs In-Reply-To: <785598B5-A9A2-49DD-9A5B-10AA359CE875@cs.nott.ac.uk> References: <9A47AFF0-B324-4224-A2F9-2764A65B9AC5@cis.upenn.edu> <785598B5-A9A2-49DD-9A5B-10AA359CE875@cs.nott.ac.uk> Message-ID: <381C6CC6-1A81-4B6F-80E0-6C0B896C5FB4@Cs.Nott.AC.UK>

Thank you all. This teaches me to think properly before asking a question. Alternatively, I have to tell my mail client not to accept any mails on Friday evenings... :-) Thorsten (with a red face)

On 22 Aug 2009, at 04:35, prakash wrote:

> Suppose log_2 3 = n/m where n,m are positive integers with no common
> factors. Then m * log_2 3 = n so log_2 3^m = n hence 2^n = 3^m.
> This is impossible as one side is even and the other is odd. I
> think this is easier than the sqrt(2) proof.
From marco.devillers at gmail.com Sat Aug 22 16:25:48 2009 From: marco.devillers at gmail.com (Marco Devillers) Date: Sat, 22 Aug 2009 22:25:48 +0200 Subject: [TYPES] teaching discrete math and logic to undergraduates In-Reply-To: <7fa251b70908220139v77648f40ya4a4f527ea6e7546@mail.gmail.com> References: <4A8F2213.6090908@cs.mcgill.ca> <20090822033305.51EAC1B5732@thyra01.cs.au.dk> <7fa251b70908220139v77648f40ya4a4f527ea6e7546@mail.gmail.com> Message-ID:

Hi all, I usually don't post, so don't bother. But here I couldn't resist. This is not the law of excluded middle but a case distinction on (algebraically?) constructed values. A logic is classical, or not, according to whether propositions are assumed to be two-valued. Bringing those values into the logic is what matters. I don't think presenting case distinctions as the law of excluded middle makes sense. Just 2 cents of thought. Bye all.
From dreyer at mpi-sws.org Sat Aug 22 16:58:36 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Sat, 22 Aug 2009 22:58:36 +0200 Subject: [TYPES] teaching discrete math and logic to undergraduates In-Reply-To: References: <4A8F2213.6090908@cs.mcgill.ca> <20090822033305.51EAC1B5732@thyra01.cs.au.dk> <7fa251b70908220139v77648f40ya4a4f527ea6e7546@mail.gmail.com> Message-ID: <7fa251b70908221358t402bf858m3580d6c589e5ac26@mail.gmail.com> Yes, of course. My point was simply that, for certain classes of propositions A, "A or not A" is constructively true (e.g. if A is "x = some particular algebraically constructed value"). Thus, having a step in one's proof that says "we know that either A or not A" does not imply that one's proof is classical. This is obvious to people here, but I'm not sure it's obvious to many students, especially when they see proofs written out informally. (I certainly remember being confused about it a long time ago and having to just figure it out for myself.) It's very common, for example, in metatheory appearing in PL papers to see things like "either e is a term of the form e1(e2) or it is not...Case 1: suppose it is...Case 2: suppose it isn't..." Writing that in one's proof does not mean that one has relied on excluded middle, it just means that in the informal proof one has skipped the step where one constructively proved the either/or using a case distinction on the structure of terms.
However, to the student, such an informal constructive proof may appear indistinguishable from the non-constructive proof of the irrationality example. I think it's worth at least pointing this out to students, by way of noting that most of the intuitive reasoning they do is in fact intuitionistic. Derek

From marco.devillers at gmail.com Sat Aug 22 17:21:55 2009 From: marco.devillers at gmail.com (Marco Devillers) Date: Sat, 22 Aug 2009 23:21:55 +0200 Subject: [TYPES] CPS and accumulators In-Reply-To: <67fbaca40908192324g41ba4175qc7161b0fcab41096@mail.gmail.com> References: <67fbaca40908182321m2c4da3fej9255d844bc6eac3a@mail.gmail.com> <46F0C8EF3D7C164A8C6AB8FE615D016C0B2C911C4F@ICEXM3.ic.ac.uk> <67fbaca40908192324g41ba4175qc7161b0fcab41096@mail.gmail.com> Message-ID:

Ah well, since I responded anyway to the other question, I might as well respond to this.
Transforming general recursion to tail recursion is an optimization generally used in compilers for functional languages, and it has proven its merit. I.e., the task is to get rid of all those pesky intermediate stack frames that would be built up if you translated the code naively, so that you end up with jumps and hardly any, or no, stack. If you don't assume a stack machine but a graph-rewrite machine, like -what was it again, early Haskell, possibly early pre-ABC Clean- no stack frames are built, or, put differently, all stack frames are in the heap and may be discarded when not referenced anymore. The latter gives a simpler translation, but I am unaware what price, if any, you pay for that particular set-up. I only know, mostly observe, that Lisp (ML?) compilers seem to favor getting rid of general recursion by CPS transform such that it can be translated more easily to an intermediate machine, but it is the intermediate machine which defines what transform you need. It may be that a translation to hyper-combinators for a GRS allocates just as much in the heap as a translation with CPS to a more traditional intermediate machine does. The essential part is that you want to translate from a system with operational semantics where naive rewriting is hard to implement (efficiently or robustly) to a system where individual rewrites are easily implemented (and are efficient and robust). CPS does that, hyper-combinators too; I tried some other forms myself. Fine-tuning the transform and the accompanying intermediate machine is -was?- probably worth a thesis by itself. There is a lot of work from the early eighties on that.
*Giuseppe, there was a typo in the previous mail :)

From Tim.Sweeney at epicgames.com Sat Aug 22 20:37:17 2009 From: Tim.Sweeney at epicgames.com (Tim Sweeney) Date: Sat, 22 Aug 2009 20:37:17 -0400 Subject: [TYPES] Typechecking by partial evaluation Message-ID: <5080364891516245B4D2B5D158AC74550C32886CF4@MAIL-02.epicgames.net>

Are any serious typecheckers based on a design like the following?

1. Implement a dynamically-typed runtime for language L' as a program P (written in language L), which takes a user program P' written in L', and runs P(P'), producing a type mismatch in any context where a value is used at the wrong type according to the type system of L'.

2. Given a partial evaluator for language L, partially evaluate P(P').

3. Reject the user program P' iff any runtime error productions remain in the partial-evaluated version of P(P').

I'm aware of the Futamura projection (http://en.wikipedia.org/wiki/Partial_evaluation), which applies steps 1-2 to give us a "compiler for free". But I haven't seen it carried to step 3, giving us a "typechecker for free".

This question arises during an attempt to implement a language with an advanced type system as a dynamically-typed runtime with zero compilation. With this up and running, it intuitively feels like "there's a typechecker in there somewhere" which a partial evaluator could potentially extract.

For the curious, the language is based on concurrent constraint logic programming. It performs, at runtime, the kinds of typeclass and overloading resolution that occur at compile-time in packages like GHC, using backtracking and narrowing to perform quantifier instantiation.
Tim Sweeney Epic Games

From voigt at tcs.inf.tu-dresden.de Sun Aug 23 07:58:11 2009 From: voigt at tcs.inf.tu-dresden.de (voigt@tcs.inf.tu-dresden.de) Date: Sun, 23 Aug 2009 13:58:11 +0200 (MEST) Subject: [TYPES] Typechecking by partial evaluation In-Reply-To: <5080364891516245B4D2B5D158AC74550C32886CF4@MAIL-02.epicgames.net> References: <5080364891516245B4D2B5D158AC74550C32886CF4@MAIL-02.epicgames.net> Message-ID: <8a9262238424491da71c8fb6a6ca5cb6.squirrel@mail.tcs.inf.tu-dresden.de>

The following paper: http://doi.acm.org/10.1145/1480881.1480889 proposes a design that is similar in spirit. There clearly are differences. The issue there is not type errors, but more properties like "head of empty list". And instead of a partial evaluator, GHC's built-in optimization strategies are used. But the similarity lies in the combination of introducing explicit error calls for undesired behavior (step 1), aggressive optimization (step 2), and naive syntactic checks for remaining error calls (step 3). Ciao, Janis.

From oleg at okmij.org Mon Aug 24 00:32:18 2009 From: oleg at okmij.org (oleg@okmij.org) Date: 24 Aug 2009 04:32:18 -0000 Subject: [TYPES] Typechecking by partial evaluation In-Reply-To: <5080364891516245B4D2B5D158AC74550C32886CF4@MAIL-02.epicgames.net> Message-ID: <20090824043218.26969.qmail@eeoth.pair.com>

> 1.
Implement a dynamically-typed runtime for language L' as a program > P (written in language L), which takes a user program P' written in > L', and runs P(P'), producing a type mismatch in any context where a > value is used at the wrong type according to the type system of L'. > > 2. Given a partial evaluator for language L, partially evaluate P(P'). > > 3. Reject the user program P' iff any runtime error productions remain > in the partial-evaluated version of P(P').

I am afraid partial evaluation -- which is said to be a glorified constant propagation -- is not powerful enough. Let's consider the program

    let n = read in if n then n+1 else n

or,

    fun n -> if n then n+1 else n

The variable n is marked dynamic, and the test and both branches of the if-expression are dynamic. The partial evaluator can't do anything but residualize. The type error (assuming the HM type system) will not be noticed. We need a static analysis that notices that the test of the if-expression makes sense only if n is a boolean. The static analysis would then _propagate_ this _positive information_ down the if-branches and run into a contradiction. It seems that we need at least positive supercompilation. Positive and general supercompilation are very well explained in

@InProceedings{sorensen-unifying,
  author   = {S{\o}rensen, Morten Heine B. and Gl{\"u}ck, Robert and Jones, Neil D.},
  title    = {Towards Unifying Deforestation, Supercompilation, Partial Evaluation, and Generalized Partial Computation},
  pages    = {485--500},
  crossref = "esop1994",
  url      = "ftp://ftp.diku.dk/diku/semantics/papers/D-190.ps.gz",
}

From Tim.Sweeney at epicgames.com Mon Aug 24 03:31:05 2009 From: Tim.Sweeney at epicgames.com (Tim Sweeney) Date: Mon, 24 Aug 2009 03:31:05 -0400 Subject: [TYPES] Typechecking by partial evaluation In-Reply-To: <20090824043218.26969.qmail@eeoth.pair.com> References: <5080364891516245B4D2B5D158AC74550C32886CF4@MAIL-02.epicgames.net>, <20090824043218.26969.qmail@eeoth.pair.com> Message-ID: <5080364891516245B4D2B5D158AC74550C32886CFC@MAIL-02.epicgames.net>

Oleg,

Thanks for the insight and reference. I've followed your writings for many years and found them inspiring. I became painfully familiar with the limitations of partial-evaluation-as-typechecking in a previous attempt to define a functional programming language this way. But when applied to a Concurrent Constraint Logic Programming language, this scheme appears to become unusually powerful and compact:

* Since constraints can propagate values in both directions within a CCLP language runtime, partially-evaluated constraints can propagate partial value information (values, types, approximations, or similar) in both directions.

* Since backtracking is built into the constraint-logic language, the partial-evaluator can use native backtracking to verify potential execution paths.

* Since a CCLP can instantiate the equivalent of existential and universal quantifiers in certain contexts, we gain type-like expressiveness with these constructs, using only the runtime's underlying value operations.

I have yet to carry out the full experiment, but the early results do look promising. It seems plausible that this approach can achieve a level of typechecking expressiveness comparable with e.g.
GHC's support for Haskell typeclasses, given a runtime that implements all of those features dynamically, and a relatively small amount of additional logic for reasoning about types and approximations. Tim

From lengrand at lix.polytechnique.fr Mon Aug 24 14:48:23 2009 From: lengrand at lix.polytechnique.fr (Stephane Lengrand) Date: Mon, 24 Aug 2009 20:48:23 +0200 Subject: [TYPES] non-constructive proofs In-Reply-To: <381C6CC6-1A81-4B6F-80E0-6C0B896C5FB4@Cs.Nott.AC.UK> References: <9A47AFF0-B324-4224-A2F9-2764A65B9AC5@cis.upenn.edu> <785598B5-A9A2-49DD-9A5B-10AA359CE875@cs.nott.ac.uk> <381C6CC6-1A81-4B6F-80E0-6C0B896C5FB4@Cs.Nott.AC.UK> Message-ID: <4A92E077.8090906@lix.polytechnique.fr>

Incidentally, this is the very reason why keyboards are imperfectly tuned, i.e. well-tempered. If you want a scale that is closed under composition and that contains the fifth, you end up looking at the group (of musical intervals) generated by the ratio 3, identifying all notes that only differ by octaves (i.e. whose frequency ratio is a power of 2). Since 3^n is never a power of 2, you never get back to a note you already have, and the group is infinite, which is a pain for a keyboard instrument. (However, with n=12, you get pretty close to a power of 2, and you can spread the error uniformly over the 12 elements of the group, i.e. the 12 well-tempered intervals.) Stephane

From marco.devillers at gmail.com Thu Aug 27 19:12:27 2009 From: marco.devillers at gmail.com (Marco Devillers) Date: Fri, 28 Aug 2009 01:12:27 +0200 Subject: [TYPES] Typechecking by partial evaluation In-Reply-To: <5080364891516245B4D2B5D158AC74550C32886CFC@MAIL-02.epicgames.net> References: <5080364891516245B4D2B5D158AC74550C32886CF4@MAIL-02.epicgames.net> <20090824043218.26969.qmail@eeoth.pair.com> <5080364891516245B4D2B5D158AC74550C32886CFC@MAIL-02.epicgames.net> Message-ID:

Although I cannot totally follow this discussion, I don't think it'll work. I don't fully understand what a CCLP language would be; I think of your proposition as: can we (partially) type-check programs with a 'prolog-like' interpreter?
> * Since constraints can propagate values in both directions within a CCLP language runtime, partially-evaluated constraints can propagate partial value information (values, types, approximations, or similar) in both directions.

Yeah, that'll work - but what is the real difference from an elaborate interpreter where partial evaluation isn't constrained to, say, just some form of beta-reduction, but also uses some heavy optimization like lifting functions over/into other functions? Wouldn't that be roughly as strong as two-directional information passing? I.e., if you can pass values upward/downward flexibly enough, you get the same power as a simple type inferencer. (Guess I am stating the obvious here for you all.)

> * Since backtracking is built into the constraint-logic language, the partial-evaluator can use native backtracking to verify potential execution paths.

What rules would be used in such a partial evaluator, and are you sure it would be as strong as a 'heavy optimizer' where more elaborate rules might be given?

> * Since a CCLP can instantiate the equivalent of existential and universal quantifiers in certain contexts, we gain type-like expressiveness with these constructs, using only the runtime's underlying value operations.

I am not informed enough to follow, but I don't have the feeling that any typed program with partial evaluation would 'really' be able to infer more information than what could be inferred by a strong optimizer doing partial evaluation. I would like to see some examples elaborated. But yeah, this is where it's getting interesting.

On Mon, Aug 24, 2009 at 9:31 AM, Tim Sweeney wrote:
> Oleg,
>
> Thanks for the insight and reference. I've followed your writings for many years and found them inspiring.

So did I :), great slides on your last talk, Tim.
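Oleg's residualization point from earlier in the thread can be reproduced in a few lines. This is my own toy online partial evaluator (the tuple encoding and names are invented for illustration): static subexpressions are folded, an ill-typed static operation becomes an explicit error production, and an `if` with a dynamic test residualizes both branches. On Oleg's example `fun n -> if n then n+1 else n` with n dynamic, the residual program comes back unchanged and contains no error production, so step 3 of Tim's scheme would wrongly accept it; the error only surfaces once n becomes static.

```python
def pe(e, env):
    """Partially evaluate e: ('lit', v) | ('var', x) | ('add', a, b) | ('if', c, t, f).
    env maps the variables whose values are statically known."""
    tag = e[0]
    if tag == 'lit':
        return e
    if tag == 'var':
        return ('lit', env[e[1]]) if e[1] in env else e
    if tag == 'add':
        a, b = pe(e[1], env), pe(e[2], env)
        if a[0] == 'lit' and b[0] == 'lit':
            if isinstance(a[1], bool) or isinstance(b[1], bool):
                return ('error', 'add applied to a boolean')
            return ('lit', a[1] + b[1])
        return ('add', a, b)  # residualize
    if tag == 'if':
        c = pe(e[1], env)
        if c[0] == 'lit':
            return pe(e[2] if c[1] else e[3], env)
        return ('if', c, pe(e[2], env), pe(e[3], env))  # both branches residual
    return e

def has_error(e):
    # Step 3 of Tim's scheme: reject iff an error production survives.
    return e[0] == 'error' or any(has_error(s) for s in e if isinstance(s, tuple))
```

For prog = ('if', ('var', 'n'), ('add', ('var', 'n'), ('lit', 1)), ('var', 'n')), pe(prog, {}) returns prog itself with no error production despite the type clash, while pe(prog, {'n': True}) folds straight to the error. Catching the dynamic case needs the positive-information propagation Oleg describes, i.e. at least positive supercompilation.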
From gkuan at cs.uchicago.edu Tue Sep 8 18:50:01 2009 From: gkuan at cs.uchicago.edu (George Kuan) Date: Tue, 8 Sep 2009 17:50:01 -0500 Subject: [TYPES] System F and Type Generativity Message-ID:

ML datatypes and opaque sealing are the source of type generativity in the ML module system. This generativity is sometimes modeled using existential types. However, a more direct modeling of generativity would be to consider generativity as a special operator G in a variant of System F that takes a type expression and produces a new unique type unequal to all others. This would be essentially a type-level effect. Are there any variants of System F that include such an extension? - George

From blume at tti-c.org Wed Sep 9 18:13:26 2009 From: blume at tti-c.org (Matthias Blume) Date: Wed, 9 Sep 2009 17:13:26 -0500 Subject: [TYPES] System F and Type Generativity In-Reply-To: References: Message-ID: <199CCBE6-0D77-495F-AD9D-07FAF337F4E8@tti-c.org>

Have you looked at Derek Dreyer's RTG paper? (RTG = "Recursive Type Generativity") You might also want to look at "Principal Type Schemes for Modular Programs" by Derek and myself. It uses RTG in the kind of non-recursive setting that you probably need. Matthias

From monnier at iro.umontreal.ca Wed Sep 9 22:18:00 2009 From: monnier at iro.umontreal.ca (Stefan Monnier) Date: Wed, 09 Sep 2009 22:18:00 -0400 Subject: [TYPES] System F and Type Generativity In-Reply-To: (George Kuan's message of "Tue, 8 Sep 2009 17:50:01 -0500") References: Message-ID:

There's obviously Derek Dreyer's RTG (Recursive Type Generativity. Journal of Functional Programming, 17(4&5):433-471, 2007). Stefan

From oleg at okmij.org Wed Sep 9 22:15:05 2009 From: oleg at okmij.org (oleg@okmij.org) Date: Wed, 9 Sep 2009 19:15:05 -0700 (PDT) Subject: [TYPES] Constructive Law of Excluded Middle Message-ID: <20090910021505.D1ECC176DC@Adric.metnet.navy.mil>

Continuing the discussion of teaching constructive proofs by contradiction, we show how to program with the law of excluded middle. We specifically avoid call/cc, which is overrated.

Proofs by contradiction are not exclusive to classical logic. For example, weak reductio -- proving the negation of a proposition A by assuming A and exhibiting a contradiction -- is intuitionistically justified. Constructively, to prove the negation of A one has to build a witness of the type A->F, where F is a distinguished type with no constructors. Strong reductio -- proving A by contradiction with the assumption of NOT A -- is equivalent to the implication NOT NOT A -> A, or ((A->F)->F) -> A. This implication is equivalent to the law of excluded middle (LEM) (which can be shown by negating the law of contradiction) and is certainly not intuitionistically or constructively justified.

It might be surprising, therefore, that one can still write a non-divergent Haskell term whose type is isomorphic to

    forall a. ((a->F)->F) -> a

To be precise, the type is

    forall a. (forall m.
Monad m => ((a -> m F) -> m F)) -> a We will argue how all this isn't quite the double-negation translation and how call/cc and the related Cont monad can be avoided. In fact, call/cc is an overkill for the computational version of LEM. But first, a few preliminaries. > {-# LANGUAGE EmptyDataDecls, Rank2Types #-} > module LEM where > import Control.Monad.Identity > > data F -- no constructors We should justify that a function of the type forall m. Monad m => t1 -> m t2 is just as pure as the function t1 -> t2. (Meta-variable t denotes some type). The key is the universal quantification over m. Informally, a term of the type (forall m. Monad m => m t) cannot have any effect because types of that form can only be introduced by `return'. More precisely, forall m. Monad m => m t is isomorphic to t forall m. Monad m => t1 -> m t2 is isomorphic to t1 -> t2, etc. One direction of the isomorphism is easy to show, for example > purify :: (forall m. Monad m => ((a -> m b) -> m b)) -> ((a->b)->b) > purify f = \k -> runIdentity (f (return . k)) To show the other direction, we traverse a term, inserting return before each constant or variable and replacing applications with flip bind, aka (=<<). Thus, if a proof term for the proposition (NOTNOT a) isn't already in the form (forall m. Monad m => ((a -> m F) -> m F)), we can always make it so. As the second preliminary, we introduce a Sum type and make it an instance of Monad. We could have used the built-in Either type (however, its MonadError instance needs an Error constraint, which gets in the way). > data Sum a b = L a | R b > instance Monad (Sum e) where > return = R > R x >>= f = f x > L x >>= f = L x We are now ready to demonstrate the proof term for the proposition (NOT NOT a -> a). > h :: (forall m. Monad m => ((a -> m F) -> m F)) -> a > h g = case g (\x -> L x) of > R x -> error "Not reachable" > L x -> x One may be concerned about the `error' function (which can be replaced by any divergent term, e.g., undefined). 
We can prove however that the case R x is never taken, and so no divergence actually occurs. If the function g ever invokes its argument, an `exception' (the L alternative of Sum) is raised, which propagates up. The function g, whose type is polymorphic over m, has no way to intercept the exception. Hence, the L case will be taken in the function h. The R case can only be taken if g can return a value without invoking its argument. That is not possible however since no value of the type F can be constructed. We will demonstrate the strong reductio later; for now we show a simple illustration. First, we introduce > -- A proof term for a -> NOT NOT a > lift :: forall a m. Monad m => a -> ((a -> m F) -> m F) > lift x = \k -> k x Then *LEM> h $ lift "forty-two" "forty-two" In fact, we can show that h (lift x) evaluates to x for all x. One may ask if our constructive strong reductio works only for atomic propositions or for all propositions. If the proposition is an implication, one could try to be clever, defining > t3 k = k (\() -> k return >> return ()) Note the non-linear use of k. The evaluation h t3 will bind k to (\x -> L x) and so the `exception' L will `leak out' of h. Fortunately, this clever cheating does not work, again because of the polymorphism over m. The inferred type of t3 is t3 :: (Monad m) => ((() -> m ()) -> m a) -> m a and hence the application h t3 is ill-typed because `Quantified type variable `m' escapes'. The type system ensures that the result of h has a pure type (or isomorphic to a pure type), and so does indeed denote a proposition. We now show how to obtain a proof term that represents LEM directly. We represent the disjunction in LEM in its elimination form (that is, Church-encoded). We show two versions, without or with currying: > lem :: (forall m. 
Monad m => (a -> m w, (a -> m F) -> m w)) -> w > lem sum = case neg (\x -> L x) of > R y -> y > L y -> case pos y of > R z -> z > L _ -> error "unreachable" > where (pos,neg) = sum > lem1 :: (forall m. Monad m => (a -> m w)) -> > (forall m. Monad m => (a -> m F) -> m w) -> w > lem1 pos neg = case neg (\x -> L x) of > R y -> y > L y -> runIdentity $ pos y No matter what 'a' is, we can always produce either it or its negation. Here is a more interesting illustration: > ex_falso_quodlibet :: F -> a > ex_falso_quodlibet falsum = h (\k -> return falsum) > tlem = lem (return, \k -> ex_falso_quodlibet =<< k 42) > -- 42 > tlem1 = lem1 return (\k -> ex_falso_quodlibet =<< k 42) > -- 42 We have just proven the existence of 42 by contradiction, using strong reductio. Our terms lem and lem1 are not in the image of the double-negation translation of (A + NOT A). Indeed, the double-negation translation of the latter is NOTNOT (NOT A + NOT NOT A), which is ((Either (a->F) ((a->F)->F))->F)->F This is different from the type of our lem and lem1. Applying Glivenko's theorem directly to (A + NOT A) gives NOT (NOT a & NOTNOT a), or (a->F, (a->F)->F) -> F which is again different from the type of our lem and lem1. Furthermore, although our terms h and lem have `classical' types, we remain within constructive logic. The term h is total, but the function g may not necessarily be so. We know that if g ever invokes its argument (that is, it constructed an object of the type a), then (h g) does produce that object. NOTNOT A does imply A in this case. The function g cannot return without invoking its argument. But the function g may diverge before invoking its argument. We have thus confirmed the old result that LEM is intuitionistically justified for decidable propositions. We stress that we have never used the continuation monad. Our monad Sum is much weaker: it does not permit one to restart an exception, let alone restart the exception multiple times. 
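The pieces of the development above can be assembled into one self-contained module. This is a sketch, not Oleg's original file: `Identity` is defined inline instead of imported from Control.Monad.Identity, the `Functor`/`Applicative` instances that current GHC requires of every `Monad` are added, and `lift` is renamed `lift'` to avoid clashing with other liftings; the logic is otherwise exactly as described in the thread.

```haskell
{-# LANGUAGE EmptyDataDecls, RankNTypes #-}
module Main where

data F                          -- falsum: a type with no constructors

-- The Sum monad from the thread: L is an uncatchable `exception'.
data Sum a b = L a | R b

instance Functor (Sum e) where
  fmap _ (L x) = L x
  fmap f (R x) = R (f x)

instance Applicative (Sum e) where
  pure         = R
  L x <*> _    = L x
  R f <*> r    = fmap f r

instance Monad (Sum e) where
  return       = pure
  L x >>= _    = L x
  R x >>= f    = f x

-- The proof term h for NOT NOT a -> a: instantiating m to (Sum a),
-- any exception L raised by g propagates out, carrying a witness of a.
h :: (forall m. Monad m => (a -> m F) -> m F) -> a
h g = case g L of
        L x -> x
        R _ -> error "unreachable: no value of type F can be built"

-- A proof term for a -> NOT NOT a.
lift' :: Monad m => a -> (a -> m F) -> m F
lift' x = \k -> k x

main :: IO ()
main = putStrLn (h (lift' "forty-two"))  -- h (lift' x) evaluates to x
```

Note that `h g = case g L of ...` fixes the monad at `Sum a`, which is what lets the case analysis see the `L` constructor; the universally quantified type of `g` is what guarantees `g` itself could not have inspected it.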
It seems we can do a lot without call/cc. This subject is elaborated in Limit-Computable Mathematics (LCM). From rossberg at mpi-sws.org Thu Sep 10 06:40:16 2009 From: rossberg at mpi-sws.org (Andreas Rossberg) Date: Thu, 10 Sep 2009 12:40:16 +0200 Subject: [TYPES] System F and Type Generativity In-Reply-To: References: Message-ID: On Sep 9, 2009, at 00.50 h, George Kuan wrote: > ML datatypes and opaque sealing are the source of type generativity in > the ML module system. This generativity is sometimes modeled using > existential types. However, a more direct modeling of generativity > would be to consider generativity as a special operator G in a variant > of System F that takes a type expression and produces a new unique > type unequal to all others. This would be essentially a type-level > effect. Are there any variants of System F that include such an > extension? I described such a system in my PPDP'03 paper [1]. A similar type generation construct is used in [2]. An extension of [1] with singleton kinds appears in [3]. There also is Derek's RTG calculus [4], which targets recursive modules and hence is more flexible, but also more complex. A streamlined version of the system from [1], called G (sic!), is used in our recent ICFP paper [5], and is probably the most minimalist extension of F with type generation. - Andreas [1] Andreas Rossberg. Generativity and Dynamic Opacity for Abstract Types. PPDP'03 [2] Dimitrios Vytiniotis, Geoffrey Washburn, Stephanie Weirich. An open and shut typecase. TLDI'05 [3] Andreas Rossberg. Dynamic Translucency with Abstraction Kinds and Higher-Order Coercions, MFPS'08 [4] Derek Dreyer. Recursive Type Generativity. JFP 17(4+5) [5] Georg Neis, Derek Dreyer, Andreas Rossberg. Non-Parametric Parametricity. ICFP'09. 
From Xavier.Leroy at inria.fr Sun Sep 13 05:36:43 2009 From: Xavier.Leroy at inria.fr (Xavier Leroy) Date: Sun, 13 Sep 2009 11:36:43 +0200 Subject: [TYPES] System F and Type Generativity In-Reply-To: References: Message-ID: <4AACBD2B.7010603@inria.fr> George Kuan wrote: > ML datatypes and opaque sealing are the source of type generativity in > the ML module system. This generativity is sometimes modeled using > existential types. However, a more direct modeling of generativity > would be to consider generativity as a special operator G in a variant > of System F that takes a type expression and produces a new unique > type unequal to all others. This would be essentially a type-level > effect. Are there any variants of System F that include such an > extension? You might be interested in recent work by Benoît Montagu and Didier Rémy: "Modeling Abstract Types in Modules with Open Existential Types". In Proceedings of the 36th ACM Symposium on Principles of Programming Languages (POPL'09), pages 63-74, Savannah, Georgia, USA, January 2009. http://gallium.inria.fr/~remy/modules/Montagu-Remy@popl09:fzip.pdf - Xavier Leroy From ama08r at ecs.soton.ac.uk Wed Sep 16 10:02:55 2009 From: ama08r at ecs.soton.ac.uk (Austin Anderson) Date: Wed, 16 Sep 2009 15:02:55 +0100 Subject: [TYPES] static analyses for shared resource accesses in multi-threaded systems Message-ID: I would be interested to know if there is any existing work on static analyses to guarantee that threads in a multi-threaded system access some shared resources (such as state) according to some access policy. I have become interested in such a problem from reading about session typing, which is an analogous analysis used to guarantee that threads in a multi-threaded system communicate according to a specific protocol. Session typing makes several assumptions, however, e.g. a global communications queue with temporal ordering, which seem to be inappropriate for non-message-passing systems. 
It would, I think, be possible to apply the analyses used in session typing if we made use of a global read/write queue, but I think this is too much of a restriction for most shared state paradigms. Hence I am looking for analyses which guarantee that resources in a multi-threaded environment are accessed according to a protocol, in a system where we can have interleaving of actions. I am primarily interested in approaches which are purely static and do not involve model checking. One approach I have already discovered that does make use of model checking is the thesis: Typed Static Analysis for Concurrent, Policy-Based, Resource Access Control Nicholas Nguyen University of Sussex Any suggestions would be much appreciated. Austin Anderson University of Southampton From nbeckman at cs.cmu.edu Wed Sep 16 12:20:50 2009 From: nbeckman at cs.cmu.edu (Nels Beckman) Date: Wed, 16 Sep 2009 12:20:50 -0400 Subject: [TYPES] static analyses for shared resource accesses in multi-threaded systems Message-ID: <668bd4500909160920qc4a198ndd4646798a5dee50@mail.gmail.com> Hello Austin, What you are describing sounds quite a bit like my thesis research. This work could best be described as "static typestate checking for multi-threaded OO programs." Our analysis, which is presented as a type system, uses aliasing permissions, from Kevin Bierhoff's thesis work but inspired by Boyland's fractional permissions, to approximate thread-sharedness. There are various permission kinds, such as unique, immutable, and shared, which permit or prohibit modification through program references, and which allow for quite a bit of flexibility in aliasing. We track the state of an object in an object protocol statically, but discard the object state (to simulate thread interleaving) when the permission to an object indicates the object may be concurrently modified. The best reference is from OOPSLA 2008: N. Beckman, K. Bierhoff, J. Aldrich. 
Verifying Correct Usage of Atomic Blocks and Typestate. In Proceedings of ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications 2008 (OOPSLA '08) Nashville, TN, USA. October 19-23, 2008. http://doi.acm.org/10.1145/1449764.1449783 This paper used atomic blocks as the means of mutual exclusion, although we have recently extended the system to support synchronized blocks as well. Additionally, we've implemented our approach as a static analysis for Java called Plural. That tool is available, open source, on Google code: http://code.google.com/p/pluralism/ Thanks a lot, Nels -- -------------------------------------- Nels Beckman Carnegie Mellon University nbeckman at cs.cmu.edu http://www.nelsbeckman.com From cchristo+types at cs.cmu.edu Wed Sep 16 17:41:05 2009 From: cchristo+types at cs.cmu.edu (Ciera Jaspan) Date: Wed, 16 Sep 2009 17:41:05 -0400 Subject: [TYPES] static analyses for shared resource accesses in multi-threaded systems In-Reply-To: References: Message-ID: > I would be interested to know if there is any existing work on static > analyses to guarantee that threads in a multi-threaded system access some > shared resources (such as state) according to some access policy. Austin, You may also be interested in thread coloring, which is somewhat different in that it relates threads to shared data and code context, rather than states. The first reference below is a light read; Dean's thesis contains the details. Dean F. Sutherland, Aaron Greenhouse, and William L. Scherlis. The Code of Many Colors: Relating Threads to Code and Shared State. Program Analysis for Software Tools and Environments (PASTE) '02 at FSE-10, Charleston, SC, November 2002. Dean F. Sutherland. Ph.D. Thesis, CMU-ISR-08-112, The Code of Many Colors: Semi-Automated Reasoning about Multi-Thread Policy for Java. Carnegie Mellon University, 2008. ~Ciera -- Ciera N.C. 
Jaspan Institute for Software Research Carnegie Mellon University From koba at kb.ecei.tohoku.ac.jp Wed Sep 16 21:25:03 2009 From: koba at kb.ecei.tohoku.ac.jp (koba) Date: Thu, 17 Sep 2009 10:25:03 +0900 (JST) Subject: [TYPES] static analyses for shared resource accesses in multi-threaded systems In-Reply-To: References: Message-ID: <20090917.102503.104029052.koba@kb.ecei.tohoku.ac.jp> Hi, From: Ciera Jaspan Subject: Re: [TYPES] static analyses for shared resource accesses in multi-threaded systems Date: Wed, 16 Sep 2009 17:41:05 -0400 > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > I would be interested to know if there is any existing work on static > > analyses to guarantee that threads in a multi-threaded system access some > > shared resources (such as state) according to some access policy. > The following paper may also be related. Naoki Kobayashi and Kohei Suenaga and Lucian Wischik, Resource Usage Analysis for the Pi-Calculus, Logical Methods in Computer Science, 2(3:4), pp.1-42, 2006. It presents a static analysis to guarantee that shared resources are accessed according to declared policies, for the pi-calculus extended with primitives for dynamically creating and accessing resources. Naoki Kobayashi Department of Computer and Mathematical Sciences Graduate School of Information Sciences Tohoku University 6-3-9 Aoba, Aramaki, Aoba-ku, Sendai-shi, Miyagi 980-8579, Japan e-mail:koba at ecei.tohoku.ac.jp From lgreg.meredith at gmail.com Thu Sep 17 10:53:11 2009 From: lgreg.meredith at gmail.com (Meredith Gregory) Date: Thu, 17 Sep 2009 07:53:11 -0700 Subject: [TYPES] static analyses for shared resource accesses in multi-threaded systems In-Reply-To: <20090917.102503.104029052.koba@kb.ecei.tohoku.ac.jp> References: <20090917.102503.104029052.koba@kb.ecei.tohoku.ac.jp> Message-ID: <5de3f5ca0909170753q4ed5d191y2a55a444eeddb791@mail.gmail.com> Dear Austin, Are you aware of the work of Luis Caires? 
Here's a link to an overview paper; and here's a link to more of his publications on the work. Best wishes, --greg -- L.G. Meredith Managing Partner Biosimilarity LLC 1219 NW 83rd St Seattle, WA 98117 +1 206.650.3740 http://biosimilarity.blogspot.com From bpientka at cs.mcgill.ca Fri Oct 2 21:46:02 2009 From: bpientka at cs.mcgill.ca (Brigitte Pientka) Date: Fri, 2 Oct 2009 21:46:02 -0400 Subject: [TYPES] Release of Beluga prototype Message-ID: Dear all, we'd like to announce the first release of Beluga. 
The Beluga implementation provides an implementation of the logical framework LF, and in addition a functional programming environment which supports dependent types and analyzing LF data via pattern matching. It is a completely new implementation effort. Its implementation of the logical framework LF in OCaml shares, adapts and extends some fundamental ideas from the Twelf system. The functional programming environment which is built on top of the logical framework LF is a completely new implementation based on previous theoretical ideas presented in [Pientka:POPL08] and [PientkaDunfield:PPDP08]. At this point our prototype supports type reconstruction for LF signatures as well as type reconstruction for dependently typed functional programs over LF data. To get a feel for Beluga, you can download our prototype together with examples and the papers describing the foundation from http://complogic.cs.mcgill.ca/beluga/ Comments, questions, and bugs should be sent to beluga-dev at cs.mcgill.ca. Best, Beluga-Developers (Joshua Dunfield, Renaud Germain, Stefan Monnier, Brigitte Pientka) From brianh at metamilk.com Mon Oct 5 19:15:40 2009 From: brianh at metamilk.com (Brian Hulley) Date: Tue, 06 Oct 2009 00:15:40 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? Message-ID: <4ACA7E1C.9020000@metamilk.com> Hi everyone, with regards to the integration of Haskell-style typeclasses into an SML-style module system I've come across a really difficult problem caused by the interaction of GADTs and scoped type-class instances, and am wondering if anyone has any ideas on how it might be solved. Consider the following Haskell module that compiles under ghc 6.8.2 with -fglasgow-exts enabled: module Test where data G a where Ok :: forall a. Num a => a -> G () Problem :: forall a. Num a => a -> G a analyse :: forall a. 
(G a, G a) -> a analyse (Ok x, Ok y) = () analyse (Ok x, Problem y) = () analyse (Problem x, Ok y) = () analyse (Problem x, Problem y) = x + y $ ghci -fglasgow-exts test.hs GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help Loading package base ... linking ... done. [1 of 1] Compiling Test ( test.hs, interpreted ) Ok, modules loaded: Test. *Test> analyse (Problem (1 :: Int), Problem (2 :: Int)) 3 *Test> Leaving GHCi. Conceptually, the Problem constructor stores a Num dictionary for the type of its value argument, so in the above example each Problem constructor in the tuple stores a Num dictionary for the type Int (as well as an Int value). Because type class instances in Haskell are global, so that there can only be one instance per type, in the fourth clause of (analyse) even though pattern matching makes available *two* Num dictionaries for Int, the compiler knows that they must be the same and so can just forget about one of them (I assume), allowing the arithmetic operator (+) to be resolved for example to the function stored in the first dictionary. However in a modular setting, there is no such guarantee, so when compiling (x + y) there would be an ambiguity about which Num Int dictionary to use e.g.: structure P1 = struct instance Num Int where ... val p1 : G Int = Problem 1 structure P2 = struct instance Num Int where ... val p2 : G Int = Problem 2 (* different dictionary from p1 *) structure Client = struct val x = analyse (p1, p2) (* 2 different dicts for Num Int *) (I've written types the same way as in Haskell to avoid confusion and used layout to avoid having to write "end".) 
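The ambiguity can be made concrete by desugaring the class constraint into an explicitly passed dictionary record. This is a hypothetical rendering for illustration only, not GHC's actual dictionary representation; `NumDict`, `dictP1`, and `dictP2` are invented names standing in for the two scoped instances.

```haskell
-- A "Num dictionary" reduced to the one method the example needs.
data NumDict a = NumDict { plus :: a -> a -> a }

-- The Problem constructor packs a dictionary alongside the value.
data G a = Problem (NumDict a) a

-- Two scoped instances for Int that disagree:
dictP1, dictP2 :: NumDict Int
dictP1 = NumDict (+)                   -- ordinary addition
dictP2 = NumDict (\x y -> x * 10 + y)  -- a different "instance"

-- Pattern matching exposes *two* dictionaries of the same type;
-- nothing forces the choice of d1 over d2 in the body.
analyse :: (G a, G a) -> a
analyse (Problem d1 x, Problem _d2 y) = plus d1 x y

main :: IO ()
main = print (analyse (Problem dictP1 1, Problem dictP2 2))
-- with d1 this prints 3; had d2 been chosen it would print 12
```

With global instances the two dictionaries are provably identical, so the arbitrary choice of `d1` is harmless; with scoped instances, as the printed result shows, the choice is observable.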
I've sidestepped the issue of how type class instances are defined and made canonical since this does not seem to me to be relevant: the point is that we can have two incompatible dictionaries stored in values of the same type where the types associated with the dictionaries are visible in the scope of the client, because of the way GADTs allow us to refine a type by adding extra predicates. (In contrast, uses of the Ok constructor would be fine since the type associated with the dictionary (and value) is hidden.) Many similar examples could be constructed e.g. foo :: forall a. Num a => G a -> a -> a foo (Problem x) y = x + y ... Here the compiler must choose between the GADT dict and the dict sent into the (foo) function from the calling context. So far the only solution I can think of to the problem of designing a modular language with typeclasses is to simply forbid the use of user-defined predicates for GADT constructors (they are fine in existentials as pointed out above with the Ok constructor). This might be ok but it seems a bit draconian and also I'm wondering if it reveals a fundamental limitation with the whole concept of typeclasses as used in modern Haskell: are they fundamentally non-modular? Anyway any thoughts or pointers to literature addressing any of these issues would be welcome, Thanks, Brian. From crary at cs.cmu.edu Tue Oct 6 11:21:09 2009 From: crary at cs.cmu.edu (Karl Crary) Date: Tue, 06 Oct 2009 11:21:09 -0400 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACA7E1C.9020000@metamilk.com> References: <4ACA7E1C.9020000@metamilk.com> Message-ID: <4ACB6065.4090408@cs.cmu.edu> Hi Brian, Basically you're observing that a qualified GADT package includes an instance declaration, so unpacking it amounts to a using declaration. Accordingly, the "Problem" clause of your code would need to be treated as a using declaration, and it would be rejected for introducing overlapping instances. 
It strikes me that in GHC you might well have the same problem with the "Ok" constructor. A qualified existential package also includes an instance declaration, and unpacking it too amounts to a using declaration. That ought not be a problem because it's an instance for an abstract type, so it cannot overlap any existing instances. But, my understanding is that internally GHC does not really support abstract types, so that might not help. By the way, I think that this issue exposes a design flaw in GADTs. GADTs provide a convenient way to do dynamic type analysis by integrating type representations into the datatype mechanism. But, I think it's a mistake for GADTs to use actual types for those type representations. Conceptually, the indices for a GADT are data, not sets of terms. A better way to do GADTs would be to introduce a "datakind" mechanism, and use algebraic data kinds as the indices for a GADT. You can get a picture of what this would look like from my 1999 paper "Flexible Type Analysis" with Stephanie Weirich. (That paper didn't have the tie-in with datatypes, which was invented later.) The relevance to modular type classes is this: If the index for a GADT were an algebraic data kind, rather than an actual type, there would be no temptation to try to attach an instance to that index. The "Problem" constructor would be ill-kinded. Obviously, introducing a datakind mechanism and supporting machinery would be a much more substantial fix than simply checking for overlapping instances when unpacking a GADT, and it also might break existing code. But I do think it would be a better design looking forward. 
-- Karl From monnier at IRO.UMontreal.CA Tue Oct 6 13:27:33 2009 From: monnier at IRO.UMontreal.CA (Stefan Monnier) Date: Tue, 06 Oct 2009 13:27:33 -0400 Subject: [TYPES] A show-stopping problem for modular type classes? 
In-Reply-To: <4ACB6065.4090408@cs.cmu.edu> (Karl Crary's message of "Tue, 06 Oct 2009 11:21:09 -0400") References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> Message-ID: > It strikes me that in GHC you might well have the same problem with the > "Ok" constructor. A qualified existential package also includes an > instance declaration, and unpacking it too amounts to a using > declaration. That ought not be a problem because it's an instance for > an abstract type, so it cannot overlap any existing instances. But, my > understanding is that internally GHC does not really support abstract > types, so that might not help. Actually, since GHC uses a variant of System F internally, it should have no trouble supporting real abstract types. > By the way, I think that this issue exposes a design flaw in GADTs. > GADTs provide a convenient way to do dynamic type analysis by > integrating type representations into the datatype mechanism. But, I > think it's a mistake for GADTs to use actual types for those type > representations. Conceptually, the indices for a GADT are data, not > sets of terms. I presume you're talking about GHC's GADTs. As it turns out, they are not specifically indexed by types of kind * (aka ⋆), instead they can be indexed by type expressions of pretty much any kind. > A better way to do GADTs would be to introduce a "datakind" mechanism, > and use algebraic data kinds as the indices for a GADT. If GHC had datakinds, then its GADTs would support them as well. I.e. you're talking about a design flaw of GHC/Haskell rather than its GADTs which just happen to inherit it. > The relevance to modular type classes is this: If the index for a GADT > were an algebraic data kind, rather than an actual type, there would be > no temptation to try to attach an instance to that index. Why would you think so? 
Going back to the "type classes as SML modules" point of view, it's like saying you wouldn't want modules parameterized by types of other kinds than ⋆. It didn't take long for type classes to be generalized to constructor classes, so I can't see any reason why people wouldn't want to have classes over types of other kinds than ⋆ and ⋆→⋆. > The "Problem" constructor would be ill-kinded. Obviously, > introducing a datakind mechanism and supporting machinery would be > a much more substantial fix than simply checking for overlapping > instances when unpacking a GADT, and it also might break existing > code. But I do think it would be a better design looking forward. Sadly, I don't think it would be a fix at all. It might be a good new feature, tho. Stefan From brianh at metamilk.com Tue Oct 6 20:52:04 2009 From: brianh at metamilk.com (Brian Hulley) Date: Wed, 07 Oct 2009 01:52:04 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACB6065.4090408@cs.cmu.edu> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> Message-ID: <4ACBE634.5090301@metamilk.com> Karl Crary wrote: > Hi Brian, > > Basically you're observing that a qualified GADT package includes an > instance declaration, so unpacking it amounts to a using declaration. > Accordingly, the "Problem" clause of your code would need to be treated > as a using declaration, and it would be rejected for introducing > overlapping instances. Hi Karl, Thanks for pointing out that the compiler could just reject this clause: perhaps this would be an acceptable penalty to pay for the advantages of having scoped instances. However there are situations where it would not be so clear what the compiler should do eg: bar :: forall a. G a -> (a, Int) bar (Problem x) = (x + x, 1 + 2) bar (Ok _) = ((), 0) In the expression (x + x, 1 + 2), what instance should the compiler use for (1 + 2)? 
Since (bar) is polymorphic, the compiler, when compiling (bar), has no way of knowing whether or not the match against (Problem x) is going to introduce a Num Int dictionary. Of course this could be rejected on the grounds that the match *may* introduce an overlapping instance, or we could restrict (a) so that it may not be instantiated to Int eg: bar :: forall a. (NotEqual Int a) => G a -> (a, Int) where NotEqual is a built-in predicate, though this has the unfortunate effect of leaking information about the implementation into the function's type. For example we might choose to accept the following implementation, with no restrictions on the instantiation of (a), since evaluation of the rhs expression does not involve conflicting dictionaries (assuming that we already have a top-level binding of (three) to the integer 3 so that no dictionary is needed in the evaluation of the second component of the pair): bar' (Problem x) = (x + x, three) > > It strikes me that in GHC you might well have the same problem with the > "Ok" constructor. A qualified existential package also includes an > instance declaration, and unpacking it too amounts to a using > declaration. That ought not be a problem because it's an instance for > an abstract type, so it cannot overlap any existing instances. But, my > understanding is that internally GHC does not really support abstract > types, so that might not help. I'm not quite sure if I've understood what you mean here: GHC does support qualified existentials and in fact it is a common idiom to wrap a dictionary in a datatype together with a value, to simulate the effect of using an interface in a standard object oriented language e.g. for widgets: class Widget w where drawWidget :: w -> Canvas -> IO () data WrappedWidget where Wrap :: forall w. 
(Widget w) => w -> WrappedWidget draw :: WrappedWidget -> Canvas -> IO () draw (Wrap w) c = drawWidget w c Although arguably it might be better practice to pass a simple record of functions instead of a qualified existential in such cases. > > By the way, I think that this issue exposes a design flaw in GADTs. > GADTs provide a convenient way to do dynamic type analysis by > integrating type representations into the datatype mechanism. But, I > think it's a mistake for GADTs to use actual types for those type > representations. Conceptually, the indices for a GADT are data, not > sets of terms. A better way to do GADTs would be to introduce a > "datakind" mechanism, and use algebraic data kinds as the indices for a > GADT. You can get a picture of what this would look like from my 1999 > paper "Flexible Type Analysis" with Stephanie Weirich. (That paper > didn't have the tie-in with datatypes, which was invented later.) Thanks for the reference, which looks to be very interesting for issues related to typed compilation in general not just typeclasses. Also it's useful that you suggest that GADTs may be at fault rather than typeclasses per se since it indicates that perhaps the best way to proceed is simply to ignore GADTs for the moment rather than letting the problems their current design introduces block progress towards an implementation that would allow other uses of typeclasses in a modular setting. > > The relevance to modular type classes is this: If the index for a GADT > were an algebraic data kind, rather than an actual type, there would be > no temptation to try to attach an instance to that index. The "Problem" > constructor would be ill-kinded. Obviously, introducing a datakind > mechanism and supporting machinery would be a much more substantial fix > than simply checking for overlapping instances when unpacking a GADT, > and it also might break existing code. But I do think it would be a > better design looking forward. 
> > -- Karl Such a mechanism would also be useful in its own right, for example to allow a limited form of dependently-typed programming with indices as described in [1] (which references the paper you co-authored above). Cheers, Brian. [1] "A formulation of Dependent ML with explicit equality proofs" by Daniel R. Licata and Robert Harper, Dec 2005, CMU-CS-05-178. From brianh at metamilk.com Tue Oct 6 23:28:11 2009 From: brianh at metamilk.com (Brian Hulley) Date: Wed, 07 Oct 2009 04:28:11 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACB707A.8020408@sundresh.org> References: <4ACA7E1C.9020000@metamilk.com> <4ACB707A.8020408@sundresh.org> Message-ID: <4ACC0ACB.3050703@metamilk.com> Sameer Sundresh wrote: > Brian Hulley wrote: >> However in a modular setting, there is no such guarantee, so when >> compiling (x + y) there would be an ambiguity about which Num Int >> dictionary to use e.g.: >> >> >> structure P1 = struct >> instance Num Int where ... >> >> val p1 : G Int = Problem 1 >> >> structure P2 = struct >> instance Num Int where ... >> >> val p2 : G Int = Problem 2 (* different dictionary from p1 *) >> >> structure Client = struct >> >> val x = analyse (p1, p2) (* 2 different dicts for Num Int *) > > > When we're type checking the Client module, how do we know that Int has > been declared as an instance of Num? Are you assuming there is also a > global instance Num Int, and P1 and P2 are locally overriding it? > > - If yes, would it be acceptable to use the global instance declaration > for Num Int within module Client? > > - If no, would it be acceptable to require module Client to explicitly > declare or import an instance declaration for Int as a Num? The > alternative--implicitly importing instance declarations into the context > where the values p1 and p2 are used--introduces overlapping instances, > as Karl Crary pointed out. Hi Sameer, I think my example above may have been a bit unclear. 
It was supposed to be something like: structure P1 :> sig val p1 : G Int end = struct ... structure P2 :> sig val p2 : G Int end = struct ... structure A = struct val analyse : forall a. G a * G a -> a = fn (Ok x, Ok y) => () | (Ok x, Problem y) => () | (Problem x, Ok y) => () | (Problem x, Problem y) => x + y end structure Client = struct val x : Int = A.analyse (P1.p1, P2.p2) end so when typechecking the implementation of (analyse) all we know is that the fourth clause will introduce two conflicting instances for (Num _A) where (_A) is whatever type has been chosen by the caller to instantiate the type variable (a). These instances would be brought into the scope of the rhs of the fourth clause by pattern matching. For the particular call to (analyse) in the Client module, they would happen to be instances for Num Int, so the point of the example was not that this conflicts with say a standard instance for Num Int that's always in scope but that the two Num Int instances defined in P1 and P2 would conflict with each other. So in summary when compiling P1, we use the explicit instance for Num Int declared in P1 to construct the value p1 of type G Int, thus p1 contains a dictionary corresponding to this instance declaration, as well as the integer 1, and similarly for P2. The Client module then calls A.analyse, passing only p1 and p2. A.analyse does not take any dictionary arguments, so it doesn't matter what instances are in scope in the Client module. As for module A, it also doesn't really matter what instances are in scope since we don't know what type (a) will be instantiated to, though for other examples it would matter (see my reply to Karl for an example). > Something to think about is: what should the > scope be of implicitly imported instance declarations? Following Haskell, this would be the rhs of any pattern match clause that destructs a value whose constructor requires a qualified type. 
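The baked-in-dictionary reading of (Problem) can be simulated in ordinary Haskell by reifying the instance as an explicit record. This is only a sketch: NumDict, plus, and the single-constructor G below are invented names for illustration (the Ok constructor is omitted), not part of any proposal.

```haskell
-- A reified Num dictionary (just the one method we need).
data NumDict a = NumDict { plus :: a -> a -> a }

-- A value packaged with the dictionary chosen at creation time,
-- mimicking what the Problem constructor does implicitly.
data G a = Problem (NumDict a) a

-- Stands in for P1's instance Num Int: ordinary addition.
p1 :: G Int
p1 = Problem (NumDict (+)) 1

-- Stands in for P2's different instance: a deviant "addition".
p2 :: G Int
p2 = Problem (NumDict (\x y -> x * y)) 2

-- The fourth clause of the thread's analyse must pick one of the two
-- baked-in dictionaries; taking the left one, as here, is exactly the
-- kind of silent, arbitrary choice the type system should not make.
analyse :: (G a, G a) -> a
analyse (Problem d1 x, Problem _d2 y) = plus d1 x y
```

With p1's dictionary the result of analyse (p1, p2) is 1 + 2 = 3; had the right-hand dictionary been chosen it would be 1 * 2 = 2, which is the ambiguity in concrete form.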
I suppose one idea consistent with the MTC paper [1] would be to separate the revelation of instances from their adoption in the rhs, so that the implementor could explicitly choose to ignore one of them etc. However this still would not address the problem: what if the creator of (p2) really intended that the integer stored in (p2) should participate in arithmetic according to the interpretation of arithmetic specified by the instance decl in P2 instead of just any old interpretation of arithmetic? > Even if we can't > use p1 and p2 together, why shouldn't we be allowed to use p1 and p2 > separately in two different places within the same module? I think this should probably be fine since p1 and p2 each contain their own interpretation of integer arithmetic for the values they contain. The only issue is what to do if you try to add the integer wrapped in p1 to another integer assuming that there is already a standard Num Int instance in the enclosing scope - I've discussed one such example in my reply to Karl. > > A related question to ask is: should the module that uses a value be > able to choose the instance used for values of a given type? Or must > the dictionary be baked-in to a value at creation time, as in your example? > > - If yes, we would associate type tags with the values p1 and p2, and > look up the corresponding dictionary in an instance environment within > module Client. This introduces an additional level of dynamic indirection. > This in turn suggests that analyse would have to have type: forall a. (Num a) => G a * G a -> a to allow the caller to pass the relevant dictionary in. (In general if different variants of the GADT required different dictionaries they would all need to be passed in which could lead to a bit of a dictionary-passing explosion.) 
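The type suggested above, forall a. (Num a) => G a * G a -> a, is just the ordinary Haskell-98 discipline of passing the dictionary at the call site instead of baking it into the value. A minimal sketch (G' and analyse' are hypothetical renamings to avoid clashing with the thread's G):

```haskell
{-# LANGUAGE GADTs #-}

-- A variant of the thread's GADT whose Problem constructor carries
-- no Num dictionary of its own.
data G' a where
  Problem' :: a -> G' a

-- The single Num dictionary now arrives from the calling context,
-- so no two baked-in instances can ever disagree inside the body.
analyse' :: Num a => (G' a, G' a) -> a
analyse' (Problem' x, Problem' y) = x + y
```

Here analyse' (Problem' 1, Problem' 2) evaluates to 3 under whichever single Num instance is canonical at the call site.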
Interestingly this is very like the infamous "contexts on data decls" controversy in Haskell, where data (Num a) => Foo a = F a does not cause values of type (Foo Z) to carry a Num Z dictionary but instead requires all functions operating on such values to accept a (Num Z) dictionary, which in a modular setting actually begins to sound quite attractive. > - If no, the type checker should require that the instance corresponding > to a given value is in scope (and not in conflict with any other > instance in scope). Within the type checker, we would need to annotate > the type of a value with which instance declarations it requires, e.g., > p2 requires P2.(Num Int). If P2 imported its (Num Int) instance from > some other module P0, then p2 would require P0.(Num Int) instead > (otherwise P1 and P2 could both import from P0, but Client wouldn't know > they use the same definition). > > Taking this idea further we could have a special kind whose inhabitants represent instance environments so that we could store the dictionaries in the values of the datatype but require that both values were created in compatible instance environments: val analyse : forall (a : *) (i1 : I) (i2 : I). (Compatible i1 i2) => (G a)[i1] * (G a)[i2] -> a Anyway thanks, you've given me a lot of ideas to think about, Cheers, Brian. [1] "Modular Type Classes" by Derek Dreyer, Robert Harper, Manuel M. T. Chakravarty, and Gabriele Keller, 2006. From simonpj at microsoft.com Wed Oct 7 04:15:06 2009 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Wed, 7 Oct 2009 08:15:06 +0000 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACB6065.4090408@cs.cmu.edu> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> Message-ID: <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> I agree with Brian's analysis: the 'Problem' constructor is only ok because any way of satisfying a Num constraint is as good as any other. 
The ability to have a data constructor capture a constraint of a *non-existential* type variable is newish in GHC. I added it for several reasons. Firstly, it is more uniform. It's great to be able to say "You can give a constructor any old type, with arbitrary constraints". Secondly, it's useful: you can have the constructor for (Set a) capture the (Ord a) constraint, eg data Set a where S :: forall a. Ord a => [a] -> Set a so that union can have type union :: Set a -> Set a -> Set a. (Empty set is another matter!) Incidentally, the union function then illustrates Brian's point beautifully: union (S xs) (S ys) = S (merge xs ys) The Ord constraint needed by merge can be satisfied from either the (S xs) or the (S ys) -- and they'd better be the same Ord implementation! Thirdly, and most significantly, it's quite hard to *exclude* examples like 'Problem' if we allow general equality constraints (which we now do). For example, Problem2 :: forall a b. (Num b, a~b) => T a Here Num constrains an "existential" variable... but not really, because the (a~b) constraint makes it equal to 'a'. In general, it might not be precisely equal but somehow connected: Problem3 :: forall a b. (Num b, F a ~ G b) => T a where F and G are type functions. So it's getting complicated. Since Haskell *does* have global instances, I decided not to try to find the bottom of this particular tarpit, and instead adopted the simple story: you can give a data constructor an arbitrary set of constraints. GADTs are merely a special case, in which the type constraints are all equalities. All that said, I agree that the ice is thin here. For example, if you allow overlapping instances (which some people like) then I think you could cook up an example in which the choice mattered. So I am not arguing that GHC's story is the Right Thing. Karl suggests excluding such examples by indexing GADTs only by things of kind other than '*'. 
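For concreteness, the Problem2 constructor above is accepted by GHC once given a payload field; the match really does release a dictionary for the non-existential index. This sketch adds that field, and double is an invented example function:

```haskell
{-# LANGUAGE GADTs #-}

-- Problem2 as above, with a payload so the example runs.
data T a where
  Problem2 :: forall a b. (Num b, a ~ b) => b -> T a

-- Matching on Problem2 brings (Num b) and (a ~ b) into scope, in
-- effect a Num dictionary for the non-existential index a, just as
-- if it had been declared  Problem :: Num a => a -> T a.
double :: T a -> a
double (Problem2 x) = x + x
```

Here double (Problem2 (3 :: Int)) yields 6, with the dictionary coming from the constructor rather than the call site.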
I'm very sympathetic to datakinds, and am working with Conor McBride to, in effect, add datakinds to GHC. Lots of GADT-ish programs index GADTs with '*' when it would be much better to index with, say {Bool}. (See Conor's web page on the Strathclyde Haskell Extensions, SHE.) But I don't think this issue is directly relevant to Brian's problem. If we can index GADTs with things of kind {Bool}, say, then we should certainly be able to index type classes with things of this kind too, so the same issue arises. Simon | -----Original Message----- | From: types-list-bounces at lists.seas.upenn.edu [mailto:types-list- | bounces at lists.seas.upenn.edu] On Behalf Of Karl Crary | Sent: 06 October 2009 16:21 | To: Brian Hulley | Cc: types-list at lists.seas.upenn.edu | Subject: Re: [TYPES] A show-stopping problem for modular type classes? | | [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] | | Hi Brian, | | Basically you're observing that a qualified GADT package includes an | instance declaration, so unpacking it amounts to a using declaration. | Accordingly, the "Problem" clause of your code would need to be treated | as a using declaration, and it would be rejected for introducing | overlapping instances. | | It strikes me that in GHC you might well have the same problem with the | "Ok" constructor. A qualified existential package also includes an | instance declaration, and unpacking it too amounts to a using | declaration. That ought not be a problem because it's an instance for | an abstract type, so it cannot overlap any existing instances. But, my | understanding is that internally GHC does not really support abstract | types, so that might not help. | | By the way, I think that this issue exposes a design flaw in GADTs. | GADTs provide a convenient way to do dynamic type analysis by | integrating type representations into the datatype mechanism. But, I | think it's a mistake for GADTs to use actual types for those type | representations. 
Conceptually, the indices for a GADT are data, not | sets of terms. A better way to do GADTs would be to introduce a | "datakind" mechanism, and use algebraic data kinds as the indices for a | GADT. You can get a picture of what this would look like from my 1999 | paper "Flexible Type Analysis" with Stephanie Weirich. (That paper | didn't have the tie-in with datatypes, which was invented later.) | | The relevance to modular type classes is this: If the index for a GADT | were an algebraic data kind, rather than an actual type, there would be | no temptation to try to attach an instance to that index. The "Problem" | constructor would be ill-kinded. Obviously, introducing a datakind | mechanism and supporting machinery would be a much more substantial fix | than simply checking for overlapping instances when unpacking a GADT, | and it also might break existing code. But I do think it would be a | better design looking forward. | | -- Karl | | | Brian Hulley wrote: | > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list | ] | > | > Hi everyone, | > with regards to the integration of Haskell-style typeclasses into an SML- | style module system I've come across a really difficult problem caused by the | interaction of GADTs and scoped type-class instances, and am wondering if | anyone has any ideas on how it might be solved. | > | > Consider the following Haskell module that compiles under ghc 6.8.2 with - | fglasgow-exts enabled: | > | > module Test where | > data G a where | > Ok :: forall a. Num a => a -> G () | > Problem :: forall a. Num a => a -> G a | > | > analyse :: forall a. (G a, G a) -> a | > analyse (Ok x, Ok y) = () | > analyse (Ok x, Problem y) = () | > analyse (Problem x, Ok y) = () | > analyse (Problem x, Problem y) = x + y | > | > $ ghci -fglasgow-exts test.hs | > GHCi, version 6.8.2: http://www.haskell.org/ghc/ :? for help | > Loading package base ... linking ... done. 
| > [1 of 1] Compiling Test ( test.hs, interpreted ) | > Ok, modules loaded: Test. | > *Test> analyse (Problem (1 :: Int), Problem (2 :: Int)) | > 3 | > *Test> Leaving GHCi. | > | > Conceptually, the Problem constructor stores a Num dictionary for the type | of its value argument, so in the above example each Problem constructor in | the tuple stores a Num dictionary for the type Int (as well as an Int value). | > | > Because type class instances in Haskell are global, so that there can only | be one instance per type, in the fourth clause of (analyse) even though | pattern matching makes available *two* Num dictionaries for Int, the compiler | knows that they must be the same and so can just forget about one of them (I | assume), allowing the arithmetic operator (+) to be resolved for example to | the function stored in the first dictionary. | > | > However in a modular setting, there is no such guarantee, so when compiling | (x + y) there would be an ambiguity about which Num Int dictionary to use | e.g.: | > | > | > structure P1 = struct | > instance Num Int where ... | > | > val p1 : G Int = Problem 1 | > | > structure P2 = struct | > instance Num Int where ... | > | > val p2 : G Int = Problem 2 (* different dictionary from p1 *) | > | > structure Client = struct | > | > val x = analyse (p1, p2) (* 2 different dicts for Num Int *) | > | > (I've written types the same way as in Haskell to avoid confusion and used | layout to avoid having to write "end".) | > | > I've sidestepped the issue of how type class instances are defined and made | canonical since this does not seem to me to be relevant: the point is that we | can have two incompatible dictionaries stored in values of the same type | where the types associated with the dictionaries are visible in the scope of | the client, because of the way GADTs allow us to refine a type by adding | extra predicates. 
| > | > (In contrast, uses of the Ok constructor would be fine since the type | associated with the dictionary (and value) is hidden.) | > | > Many similar examples could be constructed e.g. | > | > foo :: forall a. Num a => G a -> a -> a | > foo (Problem x) y = x + y | > ... | > | > Here the compiler must choose between the GADT dict and the dict sent into | the (foo) function from the calling context. | > | > So far the only solution I can think of to the problem of designing a | modular language with typeclasses is to simply forbid the use of user-defined | predicates for GADT constructors (they are fine in existentials as pointed | out above with the Ok constructor). | > | > This might be ok but it seems a bit draconian and also I'm wondering if it | reveals a fundamental limitation with the whole concept of typeclasses as | used in modern Haskell: are they fundamentally non-modular? | > | > Anyway any thoughts or pointers to literature addressing any of these | issues would be welcome, | > | > Thanks, Brian. | > | > | > From dreyer at mpi-sws.org Wed Oct 7 10:16:35 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 7 Oct 2009 16:16:35 +0200 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACBE634.5090301@metamilk.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <4ACBE634.5090301@metamilk.com> Message-ID: <7fa251b70910070716pa4552ffu2be663f9b96a1d3b@mail.gmail.com> Hi, Brian. > Thanks for pointing out that the compiler could just reject this clause: perhaps this would be an acceptable penalty to pay for the advantages of having scoped instances. However there are situations where it would not be so clear what the compiler should do eg: > > bar :: forall a. G a -> (a, Int) > bar (Problem x) = (x + x, 1 + 2) > bar (Ok _) = ((), 0) > > In the expression (x + x, 1 + 2), what instance should the compiler use for (1 + 2)? 
Since (bar) is polymorphic, the compiler, when compiling (bar), has no way of knowing whether or not the match against (Problem x) is going to introduce a Num Int dictionary. I think you're getting confused about the problem Karl pointed out with the original example (at least why it would be a problem in the modular type class setting). The problem was *not* that by unpacking the GADT arguments you were introducing instances into scope that overlapped with Num Int. The problem was that you were unpacking *two* arguments of type G a, which introduced two overlapping instances of Num a. The "bar" example above is fine, because you're just unpacking one argument of type G a, and since "a" is a freshly bound type variable, it's fine to introduce an instance of Num a (as it can't overlap with anything else). The "1 + 2" part is a different story. In the MTC type system, because the type of 1 and 2 is concrete, you can only typecheck "1 + 2" if there is a Num Int instance in the static environment, i.e. if bar is typechecked under a "using" declaration for a Num Int instance. It doesn't matter if bar is invoked later on in the program in some other scope where a different Num Int instance has been made canonical. The meaning of "1 + 2" can only depend on the static context. In particular, it doesn't matter if bar's type parameter "a" is instantiated to Int. In contrast, since the type of x is universally quantified, the canonical instance of Num a *must* be determined dynamically (i.e. passed in as an argument). Best regards, Derek From brianh at metamilk.com Wed Oct 7 17:28:43 2009 From: brianh at metamilk.com (Brian Hulley) Date: Wed, 07 Oct 2009 22:28:43 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? 
In-Reply-To: <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> Message-ID: <4ACD080B.9090803@metamilk.com> Simon Peyton-Jones wrote: > > The ability to have a data constructor capture a constraint > of a *non-existential* type variable is newish in GHC. I added it > for several reasons. Firstly, it is more uniform. It's great to be > able to say "You can give a constructor any old type, with > arbitrary constraints". Secondly, it's useful: you can have > the constructor for (Set a) capture the (Ord a) constraint, eg > data Set a where > S :: forall a. Ord a => [a] -> Set a > so that union can have type union :: Set a -> Set a -> Set a. > (Empty set is another matter!) Incidentally, the union function > then illustrates Brian's point beautifully: > union (S xs) (S ys) = S (merge xs ys) > The Ord constraint needed by merge can be satisfied from either > the (S xs) or the (S ys) -- and they'd better be the same Ord implementation! > Actually even if we go back to plain Haskell 98 and consider the original types for the functions in Data.Set we can see that none of the types would be sufficient to describe the parameter requirements in a modular setting: data Set a = Set [a] -- example implementation insert :: Ord a => a -> Set a -> Set a union :: Ord a => Set a -> Set a -> Set a The problem is, for these type signatures to be correct in the sense that they would force the programmer to supply arguments that would lead to the expected results, it is essential that there can only ever be one global Ord instance per type, whereas in a modular setting it would be all too easy to construct sets in one module (or different modules) and then attempt to take the union in a third module i.e. using 3 different Ord_A dictionaries. 
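The failure mode can be made concrete by reifying Ord as an explicit record; OrdDict, leq, and the functions below are invented names for illustration, with sorted lists standing in for Data.Set's balanced trees. Sets built under two different orderings merge into something sorted under neither:

```haskell
-- A reified Ord dictionary (illustrative, not a real proposal).
newtype OrdDict a = OrdDict { leq :: a -> a -> Bool }

ascending, descending :: OrdDict Int
ascending  = OrdDict (<=)
descending = OrdDict (>=)

-- Insert into a list kept sorted with respect to the given dictionary.
insertD :: OrdDict a -> a -> [a] -> [a]
insertD _ x [] = [x]
insertD d x (y:ys)
  | leq d x y = x : y : ys
  | otherwise = y : insertD d x ys

fromListD :: OrdDict a -> [a] -> [a]
fromListD d = foldr (insertD d) []

-- A merging union that assumes both inputs were sorted under the one
-- dictionary it is given - the very assumption Data.Set's Haskell 98
-- types cannot enforce in a modular setting.
unionD :: OrdDict a -> [a] -> [a] -> [a]
unionD _ xs [] = xs
unionD _ [] ys = ys
unionD d (x:xs) (y:ys)
  | leq d x y = x : unionD d xs (y:ys)
  | otherwise = y : unionD d (x:xs) ys
```

For example, unionD ascending (fromListD ascending [2,1]) (fromListD descending [3,4]) produces [1,2,4,3], which respects neither ordering.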
In contrast, suppose we had a dependently typed language: type OrdDict a = { less : a -> a -> Bool } data Set : (a : *) -> (d : OrdDict a) -> * = List a insert : (a : *) -> (d : OrdDict a) -> a -> Set a d -> Set a d union : (a : *) -> (d : OrdDict a) -> Set a d -> Set a d -> Set a d The types above would ensure that (union) can only be passed sets created using the same dictionary. This suggests to me that instead of regarding typeclasses as a kind of automatic way of applying functors or selecting modules, we should perhaps instead regard typeclasses as a special syntactic sugar for the use of dependent types in the restricted setting where whenever type T depends on (v : V), V determines (v). In the example above, translated back into Haskell, (OrdDict _A) determines its inhabitant (d_A) hence we are allowed to say that (Set a d) depends only on (a) and (OrdDict a), hence the (d) parameter is redundant, hence dependent types are not needed. From this angle it is then easy to see that far from regarding the global nature of Haskell typeclass instances as a minor inconvenience to be circumvented by lexical scoping or some other mechanism, it is in fact the central core of the concept itself! Perhaps the way forward is simply to take the idea of starting from ML rather than Haskell, begun in the MTC paper, further, and not expect to be able to write Haskell-style programs in ML. For example the above considerations may lead us to reject qualified polymorphism via the translation into HS semantics but keep the idea of being able to generate canonical modules for a given signature, leading to a more modest, but still useful, extension. Brian. From brianh at metamilk.com Wed Oct 7 17:41:12 2009 From: brianh at metamilk.com (Brian Hulley) Date: Wed, 07 Oct 2009 22:41:12 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? 
In-Reply-To: <4ACD08F0.9030105@tiscali.co.uk> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> <4ACD080B.9090803@metamilk.com> <4ACD08F0.9030105@tiscali.co.uk> Message-ID: <4ACD0AF8.1000802@metamilk.com> [Apologies if you've received this twice - I sent it first from the wrong address] Brian Hulley wrote: > From this angle it is then easy to see that far from regarding the > global nature of Haskell typeclass instances as a minor inconvenience > to be circumvented by lexical scoping or some other mechanism, it is > in fact the central core of the concept itself! For some reason my email program made it look as if the above paragraph, which I wrote, was quoted from Simon's post. Apologies for the confusion, Brian. From crary at cs.cmu.edu Wed Oct 7 17:52:05 2009 From: crary at cs.cmu.edu (Karl Crary) Date: Wed, 07 Oct 2009 17:52:05 -0400 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACBE634.5090301@metamilk.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <4ACBE634.5090301@metamilk.com> Message-ID: <4ACD0D85.8040604@cs.cmu.edu> Brian Hulley wrote: > Thanks for pointing out that the compiler could just reject this > clause: perhaps this would be an acceptable penalty to pay for the > advantages of having scoped instances. However there are situations > where it would not be so clear what the compiler should do eg: > > bar :: forall a. G a -> (a, Int) > bar (Problem x) = (x + x, 1 + 2) > bar (Ok _) = ((), 0) 1. As Derek correctly pointed out, this works fine. In the "Problem" case, "a" is an abstract type that has an instance of Num. The addition for x is resolved using the instance for "a", and the other addition is resolved using the instance for Int. I wonder if you're forgetting that overloading is resolved statically? The type "a" is abstract, and it has a unique instance for Num. 
The possibility that "a" might ultimately be instantiated with Int is irrelevant. So one solution to your problem is to treat the unpacking of a GADT arm associated with an instance as a using declaration, with the usual test for overlapping instances. 2. I didn't know that Haskell allows GADTs to be built over kinds other than Type (aka *). That's nice. What I was suggesting went beyond that. My suggestion is that the index to a GADT, even when it happens to be a type, should not be associated with an instance. Under such a design, this problem would simply go away. 3. I'm surprised to hear that the GHC internals now support type abstraction, in the sense of indefinite references. Am I understanding you correctly? Not so long ago I was told that implementing functors in GHC would be a non-starter. -- Karl From brianh at metamilk.com Wed Oct 7 18:14:11 2009 From: brianh at metamilk.com (Brian Hulley) Date: Wed, 07 Oct 2009 23:14:11 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <7fa251b70910070716pa4552ffu2be663f9b96a1d3b@mail.gmail.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <4ACBE634.5090301@metamilk.com> <7fa251b70910070716pa4552ffu2be663f9b96a1d3b@mail.gmail.com> Message-ID: <4ACD12B3.3080006@metamilk.com> Derek Dreyer wrote: > Hi, Brian. > >> Thanks for pointing out that the compiler could just reject this clause: perhaps this would be an acceptable penalty to pay for the advantages of having scoped instances. However there are situations where it would not be so clear what the compiler should do eg: >> >> bar :: forall a. G a -> (a, Int) >> bar (Problem x) = (x + x, 1 + 2) >> bar (Ok _) = ((), 0) >> >> In the expression (x + x, 1 + 2), what instance should the compiler use for (1 + 2)? Since (bar) is polymorphic, the compiler, when compiling (bar), has no way of knowing whether or not the match against (Problem x) is going to introduce a Num Int dictionary. 
> > I think you're getting confused about the problem Karl pointed out > with the original example (at least why it would be a problem in the > modular type class setting). Hi Derek, Yes after I'd posted that example I realised that I'd moved on to a different issue without properly explaining what I was trying to illustrate. The point I wanted to make was that although for the reasons you explained the example could be considered to be acceptable (and would be accepted by MTC), since (a) is a fresh type variable and hence the type of (x + x) has nothing to do with (1 + 2), there is still something rather uneasy about the case where the caller instantiates (a) to (Int). What I'm trying to get at is not whether or not the formal system described in the MTC paper would accept the example but whether or not the example should be accepted from the programmer's point of view i.e. staying at the design-space level, since I think there is something slightly odd about the fact that we have a situation where there is an expression with 4 integers but 2 of them are being added using a different addition function. I.e. when a == Int it is a different sort of Int from 1... So I agree it does make perfect sense to hold the view that (1 + 2) would always be evaluated in the static environment, but equally one could hold the view that (1 + 2) should be evaluated in the same instance environment as (x + x), which in the case where (a == Int), would result in a conflict. Cheers, Brian. From brianh at metamilk.com Thu Oct 8 01:28:12 2009 From: brianh at metamilk.com (Brian Hulley) Date: Thu, 08 Oct 2009 06:28:12 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? 
In-Reply-To: <4ACD0D85.8040604@cs.cmu.edu> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <4ACBE634.5090301@metamilk.com> <4ACD0D85.8040604@cs.cmu.edu> Message-ID: <4ACD786C.4080401@metamilk.com> Karl Crary wrote: > > Brian Hulley wrote: >> Thanks for pointing out that the compiler could just reject this >> clause: perhaps this would be an acceptable penalty to pay for the >> advantages of having scoped instances. However there are situations >> where it would not be so clear what the compiler should do eg: >> >> bar :: forall a. G a -> (a, Int) >> bar (Problem x) = (x + x, 1 + 2) >> bar (Ok _) = ((), 0) > > 1. > As Derek correctly pointed out, this works fine. In the "Problem" case, > "a" is an abstract type that has an instance of Num. The addition for x > is resolved using the instance for "a", and the other addition is resolved > using the instance for Int. > > I wonder if you're forgetting that overloading is resolved statically? > The type "a" is abstract, and it has a unique instance for Num. The > possibility that "a" might ultimately be instantiated with Int is > irrelevant. So one solution to your problem is to treat the unpacking > of a GADT arm associated with an instance as a using declaration, with > the usual test for overlapping instances. Following on from my reply to Derek I think the problem is that I didn't explain enough what I meant by giving that example. I can see that there exists a way to formalize a type system such that the above example is acceptable, and as you and Derek have pointed out, MTC is one such system, but from the point of view of the design space of *a* modular type system following the general discussion at the beginning of the paper, not just the particular detailed design and formalization that was chosen, it isn't clear to me that the above is the only internally consistent way of looking at things, and therefore there is a corner of the design space that could perhaps be illuminated further. 
For example you say that (a) is an abstract type. But I could argue that (a) is actually a placeholder that represents any particular concrete type, and that the function definition above should be understood as representing a template for an infinite number of monomorphic functions, one of which will accept the argument type (G Int). This might seem like a really obtuse interpretation, but it would seem to match the natural expectation of behaviour if the programmer expects functions to be inlined at the call site wherever possible. Anyway, I understand it is not a problem for MTC as presented in the paper - I'm just being a devil's advocate to try to make sure no stone is left unturned ;-)

> 2.
> I didn't know that Haskell allows GADTs to be built over kinds other
> than Type (aka *). That's nice. What I was suggesting went beyond
> that. My suggestion is that the index to a GADT, even when it happens
> to be a type, should not be associated with an instance. Under such a
> design, this problem would simply go away.

This is also what I thought you meant, though if people are used to refining types (of whatever kind) by giving type class constraints, it is possible that someone doing some very advanced programming would find the restriction irksome, and in that sense the idea that the problem would go away might be likened to the idea that the Gordian Knot was solved when Alexander cut through it.

> 3.
> I'm surprised to hear that the GHC internals now support type
> abstraction, in the sense of indefinite references. Am I understanding
> you correctly? Not so long ago I was told that implementing functors in
> GHC would be a non-starter.

I'm not an expert on GHC internals. The only reference I can think of about implementing functors in GHC is [1], and of course Oleg's related post on the Haskell mailing list where he shows how to implement an applicative translucent functor using typeclasses [2]. Cheers, Brian.
[1] "Higher-order modules in system Fw and Haskell" by Chung-chieh Shan, Harvard University. (2004 ?) [2] Post to Haskell mailing list with title "Applicative translucent functors in Haskell" by oleg at pobox.com, Fri Aug 27 19:51:43 EDT 2004 From crary at cs.cmu.edu Thu Oct 8 14:16:47 2009 From: crary at cs.cmu.edu (Karl Crary) Date: Thu, 08 Oct 2009 14:16:47 -0400 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4ACD786C.4080401@metamilk.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <4ACBE634.5090301@metamilk.com> <4ACD0D85.8040604@cs.cmu.edu> <4ACD786C.4080401@metamilk.com> Message-ID: <4ACE2C8F.1000704@cs.cmu.edu> Brian Hulley wrote: > For example you say that (a) is an abstract type. But I could argue > that (a) is actually a placeholder that represents any particular > concrete type, and that the function definition above should be > understood as representing a template for an infinite number of > monomorphic functions, one of which will accept the argument type (G > Int). > Evidently you mean something different by "abstract type" than I do. I would say that a placeholder representing any concrete type is precisely an abstract type. -- Karl From brianh at metamilk.com Mon Oct 12 02:27:23 2009 From: brianh at metamilk.com (Brian Hulley) Date: Mon, 12 Oct 2009 07:27:23 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? 
In-Reply-To: <4ACD080B.9090803@metamilk.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> <4ACD080B.9090803@metamilk.com> Message-ID: <4AD2CC4B.7090500@metamilk.com>

Brian Hulley wrote:
> Actually even if we go back to plain Haskell 98 and consider the
> original types for the functions in Data.Set we can see that none
> of the types would be sufficient to describe the parameter
> requirements in a modular setting:
>
> data Set a = Set [a] -- example implementation
>
> insert :: Ord a => a -> Set a -> Set a
>
> union :: Ord a => Set a -> Set a -> Set a

Over the last few days I've been thinking a bit more about this example. I still stand by my observation that the above code would be unsafe (i.e. type-correct but bug-friendly) in a scoped-instance context, and that the safety of the above example in a global-instance context can lead to a new insight regarding the question "what *are* typeclasses?". However, it now seems to me that it would be an error to regard this as a justification for abandoning attempts to provide facilities for automatic type-directed synthesis of ML-style modules. Hence this post, to make sure my earlier doom-laden comments are sufficiently cancelled out, and especially to make sure that no future reader of the archive may be led astray! ;-)

For me now the central point is that the concept of "typeclass" needs to be clearly distinguished from "type-directed module synthesis", so that programmers can thoroughly understand what guarantees the type system will and will not give them when they use the latter feature in a scoped synthesis-rule context.

Thanks to everyone who took part in the discussion. It's given me a lot more clarity about the issues.

Best regards, Brian.
From brianh at metamilk.com Mon Oct 12 08:15:55 2009 From: brianh at metamilk.com (Brian Hulley) Date: Mon, 12 Oct 2009 13:15:55 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> <4ACD080B.9090803@metamilk.com> <4AD2CC4B.7090500@metamilk.com> Message-ID: <4AD31DFB.7050809@metamilk.com>

Philip Wadler wrote:
> On Mon, Oct 12, 2009 at 7:27 AM, Brian Hulley wrote:
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>> Brian Hulley wrote:
>>> Actually even if we go back to plain Haskell 98 and consider the
>>> original types for the functions in Data.Set we can see that none
>>> of the types would be sufficient to describe the parameter
>>> requirements in a modular setting:
>>>
>>> data Set a = Set [a] -- example implementation
>>>
>>> insert :: Ord a => a -> Set a -> Set a
>>>
>>> union :: Ord a => Set a -> Set a -> Set a
>> Over the last few days I've been thinking a bit more about this example, and although I still stand by my observation that the above code would be unsafe (i.e. type-correct but bug-friendly) in a scoped-instance context
>
> Why do you claim this would be bug-friendly? My understanding of how
> scoped-instance contexts work is that in the presence of a scoped
> instance one cannot export a function with a type class in its
> signature that overlaps the instance. I think that eliminates the
> bug-friendly case. An example that you consider bug-friendly would be
> welcome. Yours, -- P

The example I had in mind was (using layout + Haskell/ML mixed-up syntax):

module Set = struct
  data Set a = Set [a]
  val insert : Ord a => ...
  ...

data T = T Int

module S0 : sig val s : Set T end = struct
  instance Ord T where ...
  val s = Set.insert ...

module S1 : sig val s : Set T end = struct
  instance Ord T where ...
  val s = Set.insert ...
module U : sig val c : Set T end = struct
  instance Ord T where ...
  val c : Set T = Set.union S0.s S1.s

where both S0 and S1 construct sets of more than one element using different orderings, and the binding to U.c uses (perhaps) yet another ordering to implement the merge that is part of the implementation of Set.union.

The problem is that the particular ordering used to construct a set is not reflected in the type of the set itself, and indeed the fact that Ord is used at all is not reflected in the Set type, so the export of val s from S0 effectively causes the instance of Ord to travel (in its effects) outside its lexical scope without anyone knowing that this has happened.

If you meant, by the rule you gave, that a function with a type class in its signature that is exported by a library module (Set) cannot be used from within the scope of a local instance decl (e.g. within the body of S0), then this might solve the bug-friendly problem but would seem to limit the usefulness of scoped instances. For example it would no longer be possible to use any monadic library functions with a locally defined monad instance.

In section 7.1.2 of [1] there is an example (given in the language of "concepts" and "models") showing the usefulness of lexically scoped instances which I think is similar in structure to the above: there is a function taking a typeclass argument that is used in the scope of two separate instance declarations for the same type, so it seems that with scoped instances, bug-friendliness and enhanced utility are two sides of the same coin. (Btw I hadn't heard of that rule before, and indeed I haven't been able to find any paper that discusses possible bug-friendliness of scoped instances or gives any examples, so any pointers to literature would be welcome.)

Cheers, Brian.

[1] "A language for generic programming" by Jeremy G. Siek, Doctoral Dissertation, Indiana University, Computer Science, August 24, 2005.
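Brian's scenario can be simulated in plain Haskell by passing the ordering as an explicit dictionary, which is roughly what a scoped instance amounts to at runtime. This is an illustrative sketch (the names are invented, not from the thread); it shows how a set built under one ordering silently misbehaves when queried under another, precisely because the ordering leaves no trace in the set's type:

```haskell
import Data.List (insertBy)

type Cmp a = a -> a -> Ordering

-- a "set" is a list kept sorted by whichever comparator built it
insertW :: Cmp a -> a -> [a] -> [a]
insertW = insertBy

-- early-exit membership test: only correct when queried with the same
-- comparator the set was built with
memberW :: Cmp a -> a -> [a] -> Bool
memberW _   _ []     = False
memberW cmp x (y:ys) = case cmp x y of
  LT -> False
  EQ -> True
  GT -> memberW cmp x ys

-- built under a "local instance" that orders descending: [5,1]
s1 :: [Int]
s1 = insertW (flip compare) 1 (insertW (flip compare) 5 [])

-- queried under the ascending ordering: answers False although 1 is
-- in the set, with no type error anywhere
bug :: Bool
bug = memberW compare 1 s1
```

Nothing in the type [Int] (or in Set T in the mixed-syntax example) records which comparator built the value, which is exactly the leak described above.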
From dreyer at mpi-sws.org Mon Oct 12 11:57:50 2009 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Mon, 12 Oct 2009 17:57:50 +0200 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4AD31DFB.7050809@metamilk.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> <4ACD080B.9090803@metamilk.com> <4AD2CC4B.7090500@metamilk.com> <4AD31DFB.7050809@metamilk.com> Message-ID: <7fa251b70910120857w451e5211l5c74e653f12c94c0@mail.gmail.com> Brian, > The example I had in mind was (using layout + Haskell/ML mixed up syntax): > > ...[example elided]... > > val c : Set T = Set.union S0.s S1.s > > where both S0 and S1 construct sets of more than one element using different orderings, and the binding to U.c uses (perhaps) yet another ordering to implement the merge that is part of the implementation of Set.union. > > The problem is that the particular ordering used to construct a set is not reflected in the type of the set itself, and indeed the fact that Ord is used at all is not reflected in the Set type so the export of val s from S0 effectively causes the instance of Ord to travel (in its effects) outside its lexical scope without anyone knowing that this has happened. This is indeed a good observation. It is closely related to an issue that arises in the semantics of ML functors, in particular applicative functors. With applicative functors, you can write a type like "Set(IntLt).t" or "Set(IntGt).t", denoting the type of sets obtained by applying the Set functor to the IntLt or IntGt modules, which provide different orderings on integers. The question is when are types resulting from applications of the same functor equivalent. Under a number of applicative functor proposals, the answer is "whenever the argument modules are *statically* equivalent", meaning that their type components are equal.
Static equivalence is the most liberal possible answer one can give. But, as you've noted, this leads to "bug-friendliness" because in fact sets of integers ordered one way are not compatible with sets of integers ordered another way. OCaml answers the question using "syntactic equivalence", i.e. Set(IntLt).t only equals Set(IntLt).t. This has the benefit of not equating Set(IntLt).t and Set(IntGt).t, but it is also quite restrictive. In particular, if you happen to define a module IntLt2 by "module IntLt2 = IntLt", then Set(IntLt).t is considered distinct from Set(IntLt2).t even though they are obviously equivalent. Furthermore, if argument modules are allowed to be arbitrary paths involving functor applications, such as F(X), then syntactic equivalence is still not good enough. In particular, in the presence of effects, one application of F(X) could very well return IntLt, while another returns IntGt, thus returning us to the problem with static equivalence. (Of course, in practice, one is unlikely to write such code.) In Chapter 2 of my thesis, I argued that the right answer to the question is "contextual equivalence". Set(IntLt).t and Set(IntGt).t are compatible types so long as IntLt and IntGt are contextually equivalent. Of course, in general, contextual equivalence is undecidable, so we must look for a conservative approximation. But it's not hard to get a reasonable conservative approximation of it by attaching "stamps" (i.e. hidden abstract types) to each value declaration. If the language had real dependent types, that would be even better, as Set(IntLt).t is really a dependent type. But let's sidestep dependent types for now. Now, coming back to the modular type class scenario, you're completely right that the Haskell approach of defining Set.insert to be parameterized over an Ord class constraint doesn't work properly in the presence of scoped instances.
The reason, as you said, is precisely that the type "Set a" doesn't make sense if there can be more than one ordering in the program. What you really should do is parameterize the Set type constructor over (some static representation of) the ordering on the element type a. Of course, that's exactly what a correct (i.e. "non-bug-friendly") implementation of applicative functors would do as well. Unfortunately, the original modular type class paper (POPL'07) only supports generative functors, but for the reasons given above I think an applicative functor extension of it would be useful. For more detail, I would refer you to the slides from a talk I gave a couple years ago, in which I simultaneously proposed modular type classes as a good motivation for non-bug-friendly applicative functors, and also sketched how an applicative functor extension to the MTC paper would look: http://www.iai.uni-bonn.de/~ralf/WG2.8/24/slides/derek.pdf Some day soon, I will get around to writing up the details. Best regards, Derek From brianh at metamilk.com Mon Oct 12 21:11:00 2009 From: brianh at metamilk.com (Brian Hulley) Date: Tue, 13 Oct 2009 02:11:00 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <7fa251b70910120857w451e5211l5c74e653f12c94c0@mail.gmail.com> References: <4ACA7E1C.9020000@metamilk.com> <4ACB6065.4090408@cs.cmu.edu> <59543203684B2244980D7E4057D5FBC102E645@DB3EX14MBXC308.europe.corp.microsoft.com> <4ACD080B.9090803@metamilk.com> <4AD2CC4B.7090500@metamilk.com> <4AD31DFB.7050809@metamilk.com> <7fa251b70910120857w451e5211l5c74e653f12c94c0@mail.gmail.com> Message-ID: <4AD3D3A4.5080201@metamilk.com> Derek Dreyer wrote: > Now, coming back to the modular type class scenario, you're completely > right that the Haskell approach of defining Set.insert to be > parameterized over an Ord class constraint doesn't work properly in > the presence of scoped instances. 
> The reason, as you said, is precisely that the type "Set a" doesn't make sense if there can be more than one ordering in the program. What you really should do is parameterize the Set type constructor over (some static representation of) the ordering on the element type a. Of course, that's exactly what a correct (i.e. "non-bug-friendly") implementation of applicative functors would do as well.
>
> Unfortunately, the original modular type class paper (POPL'07) only supports generative functors, but for the reasons given above I think an applicative functor extension of it would be useful. For more detail, I would refer you to the slides from a talk I gave a couple years ago, in which I simultaneously proposed modular type classes as a good motivation for non-bug-friendly applicative functors, and also sketched how an applicative functor extension to the MTC paper would look:
>
> http://www.iai.uni-bonn.de/~ralf/WG2.8/24/slides/derek.pdf

Hi Derek, Thanks for the above explanation and the slides, which I think offer deeply satisfying solutions to all the issues I've been talking about. Best regards, Brian.

From oleg at okmij.org Tue Oct 13 03:35:34 2009 From: oleg at okmij.org (oleg@okmij.org) Date: Tue, 13 Oct 2009 00:35:34 -0700 (PDT) Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <4AD31DFB.7050809@metamilk.com> Message-ID: <20091013073534.4B957175E1@Adric.metnet.navy.mil>

The topic of local instance declarations has been discussed in Sec 6.1 of the HW 2004 paper http://www.cs.rutgers.edu/~ccshan/prepose/prepose.pdf We have proposed a restriction that a local instance must mention an opaque type. That means your example must be written as

data T a = T Int

module S0 : sig val s : Set T end = struct
  type t -- abstract
  instance Ord (T t) where ...
  val s = Set.insert ...

Now, S0.s will have type (T some_t1) and S1.s will have type (T some_t2), which are of course incompatible.
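The common remedy in both Derek's and Oleg's replies (record the ordering in the set's type) can be sketched in GHC with a phantom parameter. The class and names below are invented for illustration, not taken from either paper:

```haskell
{-# LANGUAGE ScopedTypeVariables, MultiParamTypeClasses, EmptyDataDecls #-}
import Data.List (insertBy)

-- the phantom parameter o records which ordering built the set
newtype Set o a = Set [a] deriving (Eq, Show)

class OrdBy o a where
  cmpBy :: o -> a -> a -> Ordering  -- first argument used only for its type

data Asc
data Desc
instance OrdBy Asc  Int where cmpBy _ = compare
instance OrdBy Desc Int where cmpBy _ = flip compare

insert :: forall o a. OrdBy o a => a -> Set o a -> Set o a
insert x (Set xs) = Set (insertBy (cmpBy (undefined :: o)) x xs)

-- both arguments must carry the *same* o, so unioning a Set Asc Int
-- with a Set Desc Int is a type error rather than a silent bug
union :: OrdBy o a => Set o a -> Set o a -> Set o a
union s (Set ys) = foldr insert s ys
```

Here insert 3 (insert 1 (Set [] :: Set Asc Int)) gives Set [1,3] while the Desc version gives Set [3,1], and the two results have incompatible types, mirroring (T some_t1) versus (T some_t2) above.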
Within module S0, one may freely use the locally-defined Ord, along with any globally defined Ord instances. There cannot be any overlap. One may regard the technique of our HW 2004 paper as giving a translation into ordinary Haskell (Haskell98 + rank-2 types).

From brianh at metamilk.com Tue Oct 13 11:40:24 2009 From: brianh at metamilk.com (Brian Hulley) Date: Tue, 13 Oct 2009 16:40:24 +0100 Subject: [TYPES] A show-stopping problem for modular type classes? In-Reply-To: <20091013073534.4B957175E1@Adric.metnet.navy.mil> References: <20091013073534.4B957175E1@Adric.metnet.navy.mil> Message-ID: <4AD49F68.9020409@metamilk.com>

oleg at okmij.org wrote:
> The topic of local instance declarations has been discussed
> in Sec 6.1 of the HW 2004 paper
> http://www.cs.rutgers.edu/~ccshan/prepose/prepose.pdf

Thanks, there's a lot in there for me to think about, Brian.

From oobles at gmail.com Thu Oct 15 07:15:11 2009 From: oobles at gmail.com (David Ryan) Date: Thu, 15 Oct 2009 22:15:11 +1100 Subject: [TYPES] Binary Reflective Type System for Communications Message-ID: <7f996c490910150415i4d234cc0g3d90f1a4f96de972@mail.gmail.com>

Hi, Over the past six years I have been developing an extensible type system for describing the format of binary data. The core concept of this type system is the definition of 35 data types which are used to define the same set of data types. This type system may then be extended with the ability to define new types, or to define new metadata for defining new types. I created this having done little type-system theory beyond compiler design, etc. Having created this self-referencing type system, I would now like to discover the type theory which best explains it. I am sure that a type system which has no external references will have been proposed, and that other examples of this kind of system will be available in computer science. Can anyone point me in the right direction of what I should look at first?
If anyone is interested, the following blog entry contains the 35 data types and corresponding encoding: http://blog.livemedia.com.au/2009/09/ietf-and-squeezing-meta-dictionary.html Additional information is available at www.einet.com.au Any help would be greatly appreciated. Thanks, David.

From osama.abdelwahed at gmail.com Wed Oct 14 22:24:41 2009 From: osama.abdelwahed at gmail.com (osama abdelwahed) Date: Thu, 15 Oct 2009 11:24:41 +0900 Subject: [TYPES] type systems for subset of Pascal Message-ID:

Hi all, I have a small project concerning the design of a type-checker generator for the Pascal-S language. How can I construct the type system for that language? I am using the SableCC tool to generate the parser, and the type checker of the Pascal-S compiler is also implemented using SableCC. My task is to construct the generator (in Java) for that type checker. As a first step in my design, I have to make the specification file for the user; the specification file should contain the type system for the Pascal-S compiler. Second, I use that file in cooperation with SableCC to generate the type checker (which is already implemented using SableCC). My second question: does anybody have experience implementing a generator? I mean, does anyone know how to generate code, for example generating a "hello.java" program, and so on? I.e. what is the mechanism behind constructing a generator, especially for a type checker? I really want to know how to generate type-checker code so that I can manage to do my task. I would appreciate anybody who knows a way to help me implement (in Java) my small project, so I need any hints. Thanks in advance.
Osama From kfisher at research.att.com Fri Oct 16 17:50:54 2009 From: kfisher at research.att.com (Kathleen Fisher) Date: Fri, 16 Oct 2009 14:50:54 -0700 Subject: [TYPES] Binary Reflective Type System for Communications In-Reply-To: <7f996c490910150415i4d234cc0g3d90f1a4f96de972@mail.gmail.com> References: <7f996c490910150415i4d234cc0g3d90f1a4f96de972@mail.gmail.com> Message-ID: <038A4A68-8BD5-4484-8941-43CFFC37D63E@research.att.com> Hi David, The work you are describing sounds a little bit like the PADS language I have been developing (along with a bunch of others.) You can find information on PADS on the project web site (www.padsproj.org). The 2006 POPL paper "The next 700 data description languages" defines a semantics for PADS and similar data description languages. The 2005 PLDI paper "PADS: Processing Arbitrary Data Streams" describes the design and implementation of the original PADS system. Many of the examples for PADS are for ASCII data sources, but PADS can also describe binary data by using base types specific to a binary encoding. If you have questions or comments, I'd be happy to hear them. Kathleen On Oct 15, 2009, at 4:15 AM, David Ryan wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Hi, > > Over the past six years I have been developing an extensible type > system for describing the format of binary data. The core concept of > this type system is the definition of 35 data types which are used to > define the same set of data types. This type system may then be > extended to with the ability to define new types or define new meta > data to define new types. This has been created with me having done > little type system theory other than compiler design, etc. > > Having created this self referencing type system I would now like to > discover the type theory which best explains it. 
I am sure that a > type system which has no external references will have been proposed > and that other examples of this type of system will be available in > computer science. Can anyone point me in the right direction of what > I should look at first? > > If anyone is interested, the following blog entry contains the 35 data > types and corresponding encoding: > > http://blog.livemedia.com.au/2009/09/ietf-and-squeezing-meta-dictionary.html > > Additional information is available at > > www.einet.com.au > > Any help would be greatly appreciated. > > Thanks, > David. From oobles at gmail.com Tue Oct 20 23:55:09 2009 From: oobles at gmail.com (David Ryan) Date: Wed, 21 Oct 2009 14:55:09 +1100 Subject: [TYPES] Binary Reflective Type System for Communications In-Reply-To: <038A4A68-8BD5-4484-8941-43CFFC37D63E@research.att.com> References: <7f996c490910150415i4d234cc0g3d90f1a4f96de972@mail.gmail.com> <038A4A68-8BD5-4484-8941-43CFFC37D63E@research.att.com> Message-ID: <7f996c490910202055s218a666cy74d786b92d388f8b@mail.gmail.com> Hi, Thanks for the response. I had a read of some of the padsproj manual to get a better understanding of PADS. The goals are different, although I think some of the underlying type system is similar. On Monday we submitted Argot under the name XPL to the 6lowapp working group. The IETF draft provides the best description of Argot/XPL. We would appreciate feedback. The draft is available at: http://bit.ly/1C5NS3 Regards, David. On Sat, Oct 17, 2009 at 8:50 AM, Kathleen Fisher wrote: > Hi David, > > The work you are describing sounds a little bit like the PADS language I > have been developing (along with a bunch of others.) You can find > information on PADS on the project web site (www.padsproj.org). > > The 2006 POPL paper "The next 700 data description languages" defines a > semantics for PADS and similar data description languages.
> > The 2005 PLDI paper "PADS: Processing Arbitrary Data Streams" describes the > design and implementation of the original PADS system. > > Many of the examples for PADS are for ASCII data sources, but PADS can also > describe binary data by using base types specific to a binary encoding. > > If you have questions or comments, I'd be happy to hear them. > > Kathleen > > On Oct 15, 2009, at 4:15 AM, David Ryan wrote: > >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Hi, >> >> Over the past six years I have been developing an extensible type >> system for describing the format of binary data. The core concept of >> this type system is the definition of 35 data types which are used to >> define the same set of data types. This type system may then be >> extended to with the ability to define new types or define new meta >> data to define new types. This has been created with me having done >> little type system theory other than compiler design, etc. >> >> Having created this self referencing type system I would now like to >> discover the type theory which best explains it. I am sure that a >> type system which has no external references will have been proposed >> and that other examples of this type of system will be available in >> computer science. Can anyone point me in the right direction of what >> I should look at first? >> >> If anyone is interested, the following blog entry contains the 35 data >> types and corresponding encoding: >> >> >> http://blog.livemedia.com.au/2009/09/ietf-and-squeezing-meta-dictionary.html >> >> Additional information is available at >> >> www.einet.com.au >> >> Any help would be greatly appreciated. >> >> Thanks, >> David.
> > From abc at cs.stevens.edu Mon Nov 2 17:44:35 2009 From: abc at cs.stevens.edu (Adriana Compagnoni) Date: Mon, 2 Nov 2009 18:44:35 -0400 Subject: [TYPES] Fwd: Sad news: Amir Pnueli dies In-Reply-To: <356ac4560911021440y66afba91m485ae49da9fcacf1@mail.gmail.com> References: <356ac4560911021440y66afba91m485ae49da9fcacf1@mail.gmail.com> Message-ID: <356ac4560911021444i5ecf2edega11e4330b02fdcd2@mail.gmail.com> Sad news. ---------- Forwarded message ---------- From: Mike Gordon Date: Mon, Nov 2, 2009 at 6:28 PM Subject: Sad news: Amir Pnueli dies To: HVG , theory at cl.cam.ac.uk This is indeed sad. -- Mike ---------- Forwarded message ---------- From: Edmund Clarke Date: Mon, Nov 2, 2009 at 10:14 PM Subject: Fwd: Amir To: Bob Kurshan , "mike.gordon at cl.cam.ac.uk Gordon" < Mike.Gordon at cl.cam.ac.uk> Very sad news. ---Ed Begin forwarded message: *From: *David Harel *Date: *November 2, 2009 2:52:52 PM EST *To: *David Harel *Subject: **Amir* *From:* Noga Pnueli [mailto:nogapnueli at yahoo.com ] *Sent:* Monday, November 02, 2009 21:26 *To:* Oded.Maler at imag.fr; goldberg at benjamingoldberg.com; david.harel at weizmann.ac.il; uriklein at gmail.com; lenorezuck at mac.com; wpr at informatik.uni-kiel.de *Subject:* Amir Hi Everyone, We are sad to inform you that Amir has suffered a serious brain hemorrhage which he could not recover from. He passed away today at noon. The funeral will be held in Israel on either Wednesday or Thursday. We will be in touch with the details. In the meantime, if you could notify all your colleagues that would be a great help. Thank you, -The Family. -- Adriana B. 
Compagnoni Associate Professor Computer Science Department Stevens Institute of Technology http://www.cs.stevens.edu/~abc From Sukyoung.Ryu at Sun.COM Thu Nov 5 16:02:59 2009 From: Sukyoung.Ryu at Sun.COM (Sukyoung Ryu) Date: Thu, 05 Nov 2009 16:02:59 -0500 Subject: [TYPES] Announcing a Fortress blog Message-ID: <4537B530-0EDA-4C51-802E-28BD7470B5E9@sun.com> The Fortress team has started a blog, to post a series of announcements and news items about Fortress. Our goal is to let people know about ongoing technical discussions and decisions, as well as the current status of the implementation. We will also post interesting examples of Fortress code. We hope to put up new posts at least weekly. So far we have four posts. The first and fourth posts discuss the new wiki markup for tables and images for use in Fortress comments; the second post discusses some changes to the typing rules for conditional expressions that will help them to interact better with coercion; the third post reports on an implementation of the "treap" data structure in Fortress. We also plan to report soon on the status of our efforts to construct a Fortress compiler. Please visit http://projectfortress.sun.com/Projects/Community/blog or click on the "Blog" item at the right-hand end of the menu bar on the main Wiki page. Best, -- Sukyoung Ryu Programming Language Research Group, Sun Microsystems Laboratories From marco.devillers at gmail.com Fri Dec 25 11:47:22 2009 From: marco.devillers at gmail.com (M.C.A. (Marco) Devillers) Date: Fri, 25 Dec 2009 17:47:22 +0100 Subject: [TYPES] Operational Semantics Preserving Translations Message-ID: I am looking for operational-semantics-preserving translations between eagerly and lazily evaluated LC, where the operational semantics is defined purely by the side effects performed during the evaluation of a term.
I assume this shouldn't be too hard: such translation functions should exist, since a lambda evaluator for an LC with one operational semantics can be written in an LC with another. At least, I hope I am correct in assuming so. Assume a lambda calculus with side effects/impure function calls:

M ::= c | v | \v . M | (M M) | call M*

Note that * is a sequence of terms. An example term: (\x. \y. call (+) x y) 2 3. I define the operational semantics of a term as the series of calls made to the system during the evaluation under a certain rewrite strategy. Two terms evaluated under two rewrite strategies are considered operationally equal if they make exactly the same calls in exactly the same order. What I am looking for are translation functions between lambda terms of the given form which map a lambda term given a certain rewrite strategy to an operationally equivalent lambda term given another rewrite strategy. I.e. say l makes a number of calls when lazily evaluated; then I am looking for a translation function which gives a term l' which, when evaluated eagerly, makes exactly the same calls in exactly the same order. Of course, I am hoping for a linear translation. Strangely, this is an opposing view to the normal operational semantics of LC, which is usually viewed as a mapping into a resulting value, whereas I, as a compiler writer, am mostly interested in what a term sequentially does to a system, and have hardly any interest in the resulting term. Rationale, with some hand waving: say you want a compiler for an (impure) variant of Haskell but you only have a Lisp compiler. Given an operational-semantics-preserving translation f, one could translate Haskell to a 'lazy' LC term l, translate that to an 'eager' LC term f(l), and translate that f(l) to Lisp. Thanks anyway, best wishes and a merry Christmas, Marco
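For the lazy-to-eager direction asked about above, the textbook answer is the call-by-name "thunking" transform: delay every function argument under a dummy abstraction and force every variable occurrence by applying it. Sketched below on a toy term datatype (the datatype and names are illustrative, not the calculus from the post). One caveat relevant to the requirement: plain thunking simulates call-by-name, so if the source strategy is call-by-need, a shared thunk's calls would be replayed at each use, and thunks must additionally be memoized to keep the call sequences identical. The transform itself is linear in the size of the term.

```haskell
-- Toy syntax for an LC with impure calls; this datatype is an
-- illustration, not the calculus from the post.
data Tm
  = Var String
  | Lam String Tm
  | App Tm Tm
  | Call String [Tm]   -- impure system call
  | Unit               -- dummy argument used to force a thunk
  deriving (Eq, Show)

-- Call-by-name thunking transform:
--   [[x]]    = x Unit             (variables are bound to thunks)
--   [[\x.M]] = \x. [[M]]
--   [[M N]]  = [[M]] (\_. [[N]])  (delay the argument)
thunk :: Tm -> Tm
thunk (Var x)     = App (Var x) Unit
thunk (Lam x m)   = Lam x (thunk m)
thunk (App m n)   = App (thunk m) (Lam "_" (thunk n))
thunk (Call f ms) = Call f (map thunk ms)
thunk Unit        = Unit
```

Evaluating thunk l eagerly performs a call exactly when lazy evaluation of l would demand the corresponding subterm, so the system calls come out in the same order (modulo the memoization caveat above).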