From wintersmind at gmail.com  Mon Jan 28 17:38:24 2013
From: wintersmind at gmail.com (Christian Skalka)
Date: Mon, 28 Jan 2013 17:38:24 -0500
Subject: [TYPES] Decidability of type reconstruction in predicative System-F?
Message-ID:

I am wondering if there are formal results related to the
(un?)decidability of type reconstruction in predicative System-F.  As I
understand it, Joe Wells' well-known undecidability result only applies
to impredicative System-F.

Pointers/citations appreciated.

Thanks,

-chris

--
Christian Skalka
Associate Professor
Department of Computer Science
University of Vermont
http://www.cs.uvm.edu/~skalka


From fp at cs.cmu.edu  Mon Jan 28 19:23:15 2013
From: fp at cs.cmu.edu (Frank Pfenning)
Date: Mon, 28 Jan 2013 19:23:15 -0500
Subject: [TYPES] Decidability of type reconstruction in predicative System-F?
In-Reply-To:
References:
Message-ID:

Hi Chris,

It depends on exactly how you define type reconstruction.  For one
definition, I proved undecidability for the predicative case in the
paper below, based on a suggestion by Bob Harper.

Frank Pfenning.  On the undecidability of partial polymorphic type
reconstruction.  Fundamenta Informaticae, 19(1,2):185-199, 1993.
Preliminary version available as Technical Report CMU-CS-92-105,
School of Computer Science, Carnegie Mellon University, January 1992.
http://www.cs.cmu.edu/~fp/papers/CMU-CS-92-105.pdf

Fujita and Schubert have results for other definitions, while remaining
predicative (RTA 2010, with a longer version in I&C 2012).

Best,
 - Frank


From fujita at cs.gunma-u.ac.jp  Tue Feb 12 08:06:53 2013
From: fujita at cs.gunma-u.ac.jp (Fujita Kenetsu)
Date: Tue, 12 Feb 2013 22:06:53 +0900
Subject: [TYPES] Decidability of type reconstruction in predicative System-F?
In-Reply-To:
References:
Message-ID: <511A3E6D.7050407@cs.gunma-u.ac.jp>

Dear Chris,

In some (or many) cases, already known proofs can work for predicative
versions of System F as well.  You can find one example in the paper
with Aleksy, as Frank Pfenning mentioned:
http://www.sciencedirect.com/science/article/pii/S0890540112001150

Fujita, Schubert: The undecidability of type related problems in the
type-free style System F with finitely stratified polymorphic types,
Information and Computation, Vol. 218, pp. 69--87, 2012.

With best regards,
Ken
---
Ken-etsu Fujita
Gunma University
From dreamingforward at gmail.com  Sun Apr 14 23:48:05 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Sun, 14 Apr 2013 20:48:05 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
Message-ID:

Hello,

I'm new to the list and hoping this might be the right place to
introduce something that has provoked a bit of an argument in my
programming community.

I'm from the Python programming community.  Python is an "interpreted"
language.  Since 2001, Python has migrated towards a "pure" Object
model (ref: http://www.python.org/download/releases/2.2/descrintro/).
Prior to then, it had both types and classes, and these types were
anchored to the underlying C code and the machine/hardware architecture
itself.  After the 2001 "type/class unification", it went towards Alan
Kay's ideal of "everything is an object".  From then on, every
user-defined class inherited from the abstract Object, rooted in
nothing but a pure abstract ideal.  The parser, lexer, and such spin
these abstractions into something that can be run on the actual
hardware.

As a contrast, this is very distinct from C++, where everything is
concretely rooted in the language's type model, which is *itself*
rooted (through its long history) in the CPU architecture.  The STL,
for example, has many container types, but each of them requires using
a single concrete type for homogeneous containers, or uses machine
pointers to hold arbitrary items in heterogeneous containers (caveat:
I haven't programmed in C++ for a long time, so it's possible this
might not be correct anymore).

My question is: Is there something in the Computer Science literature
that has noticed this distinction/development in programming language
design and history?

It's very significant to me, because as languages went higher and
higher towards this pure OOP model, the programmer+data ecosystem
tended towards very personal object hierarchies, because the hardware
no longer formed a common basis of interaction (note also that OOP's
promise of re-usable code never materialized).

It's not unlike LISP, where the power of its general language
architecture tended towards hyperpersonal mini macro languages --
making it hardly used in practice, though it was and is so powerful in
theory.

That all being said, the thrust of this whole effort is to possibly
advance Computer Science and language design, because in between the
purely concrete "object" architecture of the imperative programming
languages and the purely abstract object architecture of
object-oriented programming languages is a possible middle ground that
could unite them all.

Thank you for your time.

Mark Janssen
Tacoma, Washington
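P.S.  For readers outside the Python world, a minimal interpreter-prompt
sketch of what the 2001 unification means (the class name is invented,
purely for illustration):

    >>> class Account(object):       # an invented example class; user classes
    ...     def __init__(self, balance):   # now inherit from 'object'
    ...         self.balance = balance
    ...
    >>> isinstance(42, object)       # built-in values share the same root
    True
    >>> type(42) is int              # 'int' is itself a class...
    True
    >>> isinstance(Account, type)    # ...and classes are themselves objects
    True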
From delesley at gmail.com  Mon Apr 15 01:55:59 2013
From: delesley at gmail.com (DeLesley Hutchins)
Date: Sun, 14 Apr 2013 22:55:59 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID:

I'm not quite sure I understand your question, but I'll give it a shot.  :-)

The C/C++ model, in which the types are anchored to the machine
hardware, is the exception, not the rule.  In the academic literature,
"type theory" is almost entirely focused on studying abstract models of
computation that are purely mathematical and bear no resemblance to the
underlying hardware.  The lambda calculus is the most general, and most
commonly used, formalism, but there are many others; e.g. Featherweight
Java provides a formal model of objects and classes as they are used in
Java.

"Types and Programming Languages", by Benjamin Pierce, is an excellent
introductory textbook which describes how various language features,
including objects, can be formalized.  If you are interested in OOP,
Abadi and Cardelli's "A Theory of Objects" is the obvious place to
start, although I'd recommend reading Pierce's book first if you want
to understand it.  :-)  Abadi and Cardelli discuss both class-based
languages and pure object languages.  If you are interested in the
type/object distinction in particular, then I'll shamelessly plug my
own thesis: "Pure Subtype Systems" (available online), which describes
a formal model in which types are objects, and objects are types.  If
you are familiar with the Self language, then you can think of it as a
type system for Self.

Once you have a type system in place, it's usually fairly
straightforward to compile a language down to actual hardware.  An
interpreter, like that used in Python, is generally needed only for
untyped or "dynamic" languages.  There are various practical
considerations -- memory layout, boxed or unboxed data types, garbage
collection, etc. -- but the basic techniques are described in any
compiler textbook.  Research in the areas of "typed assembly languages"
and "proof carrying code" is concerned with ensuring that the
translation from high-level language to assembly language is sound and
well-typed at all stages.
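A toy illustration of that last point (invented code, not from any of
the books above): an untyped Python function runs on values of any type
because every value carries its own tag at run time, and the
interpreter consults that tag at each operation.

    def double(x):
        return x + x         # which "+" runs is decided at run time

    print(double(21))        # 42            -- integer addition
    print(double("ab"))      # abab          -- string concatenation
    print(double([1, 2]))    # [1, 2, 1, 2]  -- list concatenation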
-DeLesley


From u.s.reddy at cs.bham.ac.uk  Mon Apr 15 05:06:21 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Mon, 15 Apr 2013 10:06:21 +0100
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID: <20843.49933.673000.183228@gargle.gargle.HOWL>

Mark Janssen writes:

> After the 2001 "type/class unification", it went towards Alan Kay's
> ideal of "everything is an object"....
>
> As a contrast, this is very distinct from C++, where everything is
> concretely rooted in the language's type model, which is *itself*
> rooted (through its long history) in the CPU architecture. ...
>
> My question is: Is there something in the Computer Science literature
> that has noticed this distinction/development in programming language
> design and history?

In programming language theory, there is no law to the effect that
"everything" should be of one kind or another.  So, we would not go
with Alan Kay's ideal.

Having said that, theorists do want to unify concepts wherever possible
and wherever they make sense.  Imperative programming types, which I
will call "storage types", are semantically the same as classes.  Bare
storage types have predefined operations for 'getting' and 'setting',
whereas classes allow user-defined operations.  So, the distinction
made between them in typical programming languages is artificial and
implementation-focused.  C and C++ are especially prone to this problem
because they were designed for writing compilers and operating systems,
where proximity to the machine architecture seems quite necessary.  The
higher-level languages such as Java are moving towards abolishing the
distinction.  Scala might be the best model in this respect, though I
do not know its type system fully.
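To make the point concrete, here is a bare sketch (illustrative Python
with invented names, not from the papers below): a storage type is just
a class whose only operations are 'get' and 'set', and a class is the
same thing with further user-defined operations added.

    class IntCell:                   # a bare storage type: get/set only
        def __init__(self):
            self._contents = 0
        def get(self):
            return self._contents
        def set(self, value):
            self._contents = value

    class Counter(IntCell):          # a class: storage plus a user-defined operation
        def increment(self):
            self.set(self.get() + 1)

    c = Counter()
    c.increment()
    print(c.get())                   # 1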
Here are a couple of references to theoretical work that might be
helpful in understanding these connections:

- John Reynolds, The Essence of Algol, in de Bakker and van Vliet
  (eds), Algorithmic Languages, 1981.  Also published in O'Hearn and
  Tennent (eds), Algol-like Languages, Vol. A, 1997.

- Uday Reddy, Objects and Classes in Algol-like Languages, Information
  and Computation, 172:63-97, 2002.  (Previously in the FOOL workshop,
  1998.)  http://www.cs.bham.ac.uk/~udr/papers/classes.pdf

However, there are properties that are special to storage types, which
are not shared by all class types.  Sometimes, they simplify some
theoretical aspects.  It is not uncommon for authors to make a
distinction between storage types and general types.  An example is one
of our own papers:

- Swarup, Reddy and Ireland, Assignments for Applicative Languages,
  FPCA 1991.  http://www.cs.bham.ac.uk/~udr/papers/assign.pdf

Cheers,
Uday Reddy


From moezadel at outlook.com  Mon Apr 15 05:53:38 2013
From: moezadel at outlook.com (Moez AbdelGawad)
Date: Mon, 15 Apr 2013 04:53:38 -0500
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID:

> From: delesley at gmail.com
>
> I'm not quite sure I understand your question, but I'll give it a
> shot. :-)

I'm in this same camp too :)

> The C/C++ model, in which the types are anchored to the machine
> hardware, is the exception, not the rule. [...]
>
> If you are interested in the type/object distinction in particular,
> then I'll shamelessly plug my own thesis: "Pure Subtype Systems"
> (available online), which describes a formal model in which types are
> objects, and objects are types.

Offering a different view, I'd like to (also, shamelessly) plug my own
thesis: "NOOP: A Mathematical Model of OOP" (available online), in
which I denotationally model nominally-typed (i.e., statically-typed
class-based) OO languages such as Java, C#, C++ and Scala (but not
Python).

In agreement with the most common tradition in PL research, types in
NOOP are modeled abstractly as (certain) sets (of objects).  NOOP
largely mimics, for nominally-typed OO languages, what Cardelli, Cook,
and others earlier did for structurally-typed OO languages.

Regards,

-Moez
From matthias at ccs.neu.edu  Mon Apr 15 09:50:38 2013
From: matthias at ccs.neu.edu (Matthias Felleisen)
Date: Mon, 15 Apr 2013 09:50:38 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID: <1AD3C7F6-96F4-4E90-8849-84D6E53308E2@ccs.neu.edu>

On Apr 14, 2013, at 11:48 PM, Mark Janssen wrote:

> After the 2001 "type/class unification", it went towards Alan Kay's ideal

Are you sure?  Remember Kay's two motivations [*], which he so
elegantly describes with "[the] large scale one was to find a better
module scheme for complex systems involving hiding of details, and the
small scale one was to find a more flexible version of assignment, and
then to try to eliminate it altogether."

At least for me, this quote sends a signal to language designers that
is still looking for a receiver -- Matthias

[*] http://gagne.homedns.org/~tgagne/contrib/EarlyHistoryST.html


From dreamingforward at gmail.com  Tue Apr 16 19:16:42 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Tue, 16 Apr 2013 16:16:42 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID:

> I'm not quite sure I understand your question, but I'll give it a
> shot. :-)

Thank you, and my apologies for my late reply.

> The C/C++ model, in which the types are anchored to the machine
> hardware, is the exception, not the rule.  In the academic
> literature, "type theory" is almost entirely focused on studying
> abstract models of computation that are purely mathematical and bear
> no resemblance to the underlying hardware.  The lambda calculus is
> the most general, and most commonly used, formalism, but there are
> many others; e.g. Featherweight Java provides a formal model of
> objects and classes as they are used in Java.

Understood, but I feel this is where theory has gone too far away from
reality.  Wikipedia (admittedly not an authoritative resource) lists a
clear distinction between languages rooted in the Turing machine and
those rooted in lambda calculus.  From en.wikipedia.org,
Programming_paradigm:

  "A programming paradigm is a fundamental style of computer
  programming.  There are four main paradigms: object-oriented,
  imperative, functional and declarative.  Their foundations are
  distinct models of computation: Turing machine for object-oriented
  and imperative programming, lambda calculus for functional
  programming, and first order logic for logic programming."

While I understand the interest in purely theoretical models, I wonder
two things: 1) Are these distinct models of computation valid?  And, 2)
If so, shouldn't a theory of types announce what model of computation
it is working from?

You say the C/C++ model is the exception, but in the programmer
community (excepting web-based languages) it is the opposite.  The
machine model dominates.  In fact, I'm not even sure how Java operates,
but through some sorcery I don't want to take part in.

> "Types and Programming Languages", by Benjamin Pierce, is an
> excellent introductory textbook which describes how various language
> features, including objects, can be formalized.  If you are
> interested in OOP, Abadi and Cardelli's "A Theory of Objects" is the
> obvious place to start, although I'd recommend reading Pierce's book
> first if you want to understand it.
> :-)  Abadi and Cardelli discuss both class-based languages and pure
> object languages.  If you are interested in the type/object
> distinction in particular, then I'll shamelessly plug my own thesis:
> "Pure Subtype Systems" (available online), which describes a formal
> model in which types are objects, and objects are types.  If you are
> familiar with the Self language, then you can think of it as a type
> system for Self.

Thank you very much.  I will look for them.

> Once you have a type system in place, it's usually fairly
> straightforward to compile a language down to actual hardware.  An
> interpreter, like that used in Python, is generally needed only for
> untyped or "dynamic" languages.  There are various practical
> considerations -- memory layout, boxed or unboxed data types, garbage
> collection, etc. -- but the basic techniques are described in any
> compiler textbook.  Research in the areas of "typed assembly
> languages" and "proof carrying code" is concerned with ensuring that
> the translation from high-level language to assembly language is
> sound and well-typed at all stages.

Very interesting.  I appreciate those leads....

--
MarkJ
Tacoma, Washington


From dreamingforward at gmail.com  Tue Apr 16 19:40:16 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Tue, 16 Apr 2013 16:40:16 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20843.49933.673000.183228@gargle.gargle.HOWL>
References: <20843.49933.673000.183228@gargle.gargle.HOWL>
Message-ID:

On Mon, Apr 15, 2013 at 2:06 AM, Uday S Reddy wrote:
> In programming language theory, there is no law to the effect that
> "everything" should be of one kind or another.  So, we would not go
> with Alan Kay's ideal.

I understand.  I state Kay's points to show how the evolution of (this
part of) the programming language world *in practice* has gone in its
explorations.

> Having said that, theorists do want to unify concepts wherever
> possible and wherever they make sense.  Imperative programming types,
> which I will call "storage types", are semantically the same as
> classes.

I like that term "storage type"; it makes it much clearer what one is
referring to.

I feel like I'm having to "come up to speed" with the academic
community, but wonder how and why this large chasm happened between
the applied community and the theoretical.  In my mind, despite the
ideals of academia, students graduate and they inevitably come to work
on Turing machines of some kind (Intel hardware, for example, currently
dominates).  If this is not in some way part of some "ideal", why did
the business community adopt and deploy these most successfully?  Or is
it, in some *a priori* way, not possible to apply the abstract notions
of academia in the real world?

> Bare storage types have predefined operations for 'getting' and
> 'setting', whereas classes allow user-defined operations.  So, the
> distinction made between them in typical programming languages is
> artificial and implementation-focused.  C and C++ are especially
> prone to this problem because they were designed for writing
> compilers and operating systems, where proximity to the machine
> architecture seems quite necessary.  The higher-level languages such
> as Java are moving towards abolishing the distinction.

Right, same with Python, but IMO this is where the evolution of
programming languages is going awry.
As languages move away from the machine, they are being built on ever
more varied and disparate notions of types.  From a practical
standpoint, this makes interoperability recede, along with OOP's
promised re-usability.

> Here are a couple of references to theoretical work that might be
> helpful in understanding these connections:

Thank you for those references.  I will look into them.

--
MarkJ
Tacoma, Washington


From u.s.reddy at cs.bham.ac.uk  Wed Apr 17 05:10:58 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Wed, 17 Apr 2013 10:10:58 +0100
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20843.49933.673000.183228@gargle.gargle.HOWL>
Message-ID: <20846.26402.656000.31662@gargle.gargle.HOWL>

Mark Janssen writes:

> > Having said that, theorists do want to unify concepts wherever
> > possible and wherever they make sense.  Imperative programming
> > types, which I will call "storage types", are semantically the same
> > as classes.
>
> I like that term "storage type"; it makes it much clearer what one is
> referring to.

Indeed.  However, this is not the only notion of type in imperative
programming languages.  For example, a function type in C or its
descendants is not there to describe storage, but to describe the
interface of an abstraction.  I will use Reynolds's term "phrase types"
to refer to such types.

Reynolds's main point in "The Essence of Algol" was to say that phrase
types are much more general, and a language can be built around them in
a streamlined way.  Perhaps "Streamlining Algol" would have been a more
descriptive title for his paper.  Nobody should be designing an
imperative programming language without having read "The Essence of
Algol", but they do.

Whether storage types (and their generalization, class types) should be
there in a type system at all is an open question.  I can think of
arguments both ways.  In Java, classes are types.  So are interfaces
(i.e., phrase types).  I think Java does a pretty good job of combining
the two in a harmonious way.
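For the Python readers of this thread, a rough analogue of this split
can be sketched with abstract base classes (illustrative code only; the
names are invented):

    from abc import ABCMeta, abstractmethod

    class Readable(metaclass=ABCMeta):   # a "phrase type": an interface,
        @abstractmethod                  # describing behaviour, not storage
        def read(self):
            ...

    class Buffer(Readable):              # a class: commits to a representation
        def __init__(self, data):
            self._data = data            # the storage
        def read(self):
            return self._data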
If you have trouble getting hold of "The Essence of Algol", please
write to me privately and I can send you a scanned copy.  Handout 5B in
my "Principles of Programming Languages" lecture notes is a quick
summary of Reynolds's type system.
http://www.cs.bham.ac.uk/~udr/popl/index.html

> I feel like I'm having to "come up to speed" with the academic
> community, but wonder how and why this large chasm happened between
> the applied community and the theoretical.  In my mind, despite the
> ideals of academia, students graduate and they inevitably come to
> work on Turing machines of some kind (Intel hardware, for example,
> currently dominates).  If this is not in some way part of some
> "ideal", why did the business community adopt and deploy these most
> successfully?  Or is it, in some *a priori* way, not possible to
> apply the abstract notions of academia in the real world?

The chasms are too many, not only between the theoretical and applied
communities, but within each of them.  My feeling is that this is
inevitable.  Our field progresses too fast for people to sit back, take
stock of what we have, and reconcile the multiple points of view.

There is nothing wrong with "Turing machines".  But the question in
programming language design is how to integrate the Turing machine
concepts with all the other abstractions we need (functions/procedures,
modules, abstract data types etc.), i.e., how to fit the Turing machine
concepts into the "big picture".  That is not an easy question to
resolve, and there isn't a single way of doing it.  So you see multiple
approaches being used in the practical programming languages, some
cleaner than others.

The abstract notions of academia do make it into the real world, but
rather more slowly than one would hope.  Taking Java for example, the
initial versions of Java treated interfaces in a half-hearted way,
ignored generics/polymorphism, and ignored higher-order functions.  But
all of them are slowly making their way into Java, with pressure not
only from the academic community but also through competition from
other "practical" languages like Python, C# and Scala.  If this kind of
progress continues, that is the best we can hope for in a fast-paced
field like ours.

Cheers,
Uday Reddy

--
Prof. Uday Reddy                  Tel: +44 121 414 2740
Professor of Computer Science     Fax: +44 121 414 4281
School of Computer Science
University of Birmingham          Email: U.S.Reddy at cs.bham.ac.uk
Edgbaston
Birmingham B15 2TT                Web: http://www.cs.bham.ac.uk/~udr


From u.s.reddy at cs.bham.ac.uk  Wed Apr 17 05:30:36 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Wed, 17 Apr 2013 10:30:36 +0100
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID: <20846.27580.375000.899631@gargle.gargle.HOWL>

Mark Janssen writes:

> From en.wikipedia.org, Programming_paradigm:
>
>   "A programming paradigm is a fundamental style of computer
>   programming.  There are four main paradigms: object-oriented,
>   imperative, functional and declarative.  Their foundations are
>   distinct models of computation: Turing machine for object-oriented
>   and imperative programming, lambda calculus for functional
>   programming, and first order logic for logic programming."
>
> While I understand the interest in purely theoretical models, I
> wonder two things: 1) Are these distinct models of computation valid?
> And, 2) If so, shouldn't a theory of types announce what model of
> computation it is working from?

These distinctions are not fully valid.

- Functional programming, logic programming and imperative programming
  are three different *computational mechanisms*.

- Object-orientation and abstract data types are two different ways of
  building higher-level *abstractions*.

The authors of this paragraph did not understand that computational
mechanisms and higher-level abstractions are separate, orthogonal
dimensions in programming language design.  All six combinations,
obtained by picking a computational mechanism from the first bullet and
an abstraction mechanism from the second bullet, are possible.  It is a
mistake to put object-orientation in the first bullet.  Their idea of
"paradigm" is vague and ill-defined.

Cheers,
Uday Reddy


From nikhil at acm.org  Wed Apr 17 10:04:02 2013
From: nikhil at acm.org (Rishiyur Nikhil)
Date: Wed, 17 Apr 2013 10:04:02 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20846.27580.375000.899631@gargle.gargle.HOWL>
References: <20846.27580.375000.899631@gargle.gargle.HOWL>
Message-ID:

> If you have trouble getting hold of "The Essence of Algol", ...
There seems to be a downloadable copy at:

    www.cs.cmu.edu/~crary/819-f09/Reynolds81.ps

It's in PostScript, which is easily convertible to PDF if you wish.

Nikhil


From andreas.abel at ifi.lmu.de  Wed Apr 17 10:42:25 2013
From: andreas.abel at ifi.lmu.de (Andreas Abel)
Date: Wed, 17 Apr 2013 16:42:25 +0200
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20846.27580.375000.899631@gargle.gargle.HOWL>
References: <20846.27580.375000.899631@gargle.gargle.HOWL>
Message-ID: <516EB4D1.1020103@ifi.lmu.de>

On 17.04.2013 11:30, Uday S Reddy wrote:
> Mark Janssen writes:
>
>> From en.wikipedia.org, Programming_paradigm:
>>
>>   "A programming paradigm is a fundamental style of computer
>>   programming.  There are four main paradigms: object-oriented,
>>   imperative, functional and declarative.  Their foundations are
>>   distinct models of computation: Turing machine for object-oriented
>>   and imperative programming, lambda calculus for functional
>>   programming, and first order logic for logic programming."

I removed the second sentence, relating paradigms to computation
models, and put it on the talk page instead.  It does not make sense to
connect imperative programming to Turing machines in the way functional
programming is connected to the lambda calculus.  A better match would
be random access machines; but the whole idea of a connection between a
programming paradigm and a computation model is misleading.
> > - Object-orientation and abstract data types are two different ways of > building higher-level *abstractions*. > > The authors of this paragraph did not understand that computational > mechanisms and higher-level abstractions are separate, orthogonal dimensions > in programming language design. All six combinations, obtained by picking a > computational mechanism from the first bullet and an abstraction mechanism > from the second bullet, are possible. It is a mistake to put > object-orientation in the first bullet. Their idea of "paradigm" is vague > and ill-defined. > > Cheers, > Uday Reddy > -- Andreas Abel <>< Du bist der geliebte Mensch. Theoretical Computer Science, University of Munich Oettingenstr. 67, D-80538 Munich, GERMANY andreas.abel at ifi.lmu.de http://www2.tcs.ifi.lmu.de/~abel/ From jason.a.wilkins at gmail.com Thu Apr 18 14:48:20 2013 From: jason.a.wilkins at gmail.com (Jason Wilkins) Date: Thu, 18 Apr 2013 13:48:20 -0500 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <516EB4D1.1020103@ifi.lmu.de> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> Message-ID: Warning, this is a bit of a rant. That paragraph from Wikipedia seems to be confused. It gives the fourth paradigm as "declarative" but then says "first order logic for logic programming". It seems somebody did an incomplete replacement of "declarative" for "logic". Wikipedia is often schizophrenic like that. Personally, I think that object oriented and logical programming only became official paradigms because there was a certain level of hype for them in the 1980s and nobody has thought to strike them off the list after the hype died down. Object-oriented, as constituted today, is just a layer of abstraction over imperative programming (or imperative style programming in functional languages, because objects require side-effects). What "object-oriented" language actually in use now isn't just an imperative language with fancy abstraction mechanisms? The problem with having declarative languages as a paradigm (which logical languages would be a part) is that it feels like it should be a "miscellaneous" category. Being declarative doesn't tell you much except that some machine is going to turn your descriptions of something into some kind of action. In logical programming it is a set of predicates, but it could just as easily be almost anything else. In a way all languages are "declarative", it is just that we have some standard interpretations of what is declared that are very common (imperative and functional). My wish is that the idea of there being four paradigms would be abandoned the same we the idea of four food groups has been abandoned (which may surprise some of you). We have more than four different modes of thinking when programming and some are much more important than others and some are subsets of others. We should teach students a more sophisticated view. Ironically Wikipedia also shows us this complexity. The programming language paradigm side bar actually reveals the wealth of different styles that are available. There is simply no clean and useful way to overlay the four paradigms over what we see there, so it should be abandoned because it gives students a false idea. 
From rwh at cs.cmu.edu  Thu Apr 18 17:14:13 2013
From: rwh at cs.cmu.edu (Robert Harper)
Date: Thu, 18 Apr 2013 17:14:13 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de>
Message-ID:

In short, there is no such thing as a "paradigm".  I agree fully.  This
term is a holdover from the days when people spent time and space
trying to build taxonomies based on ill-defined superficialities.  See
Stephen Jay Gould's essay "What, If Anything, Is A Zebra?".  You'll
enjoy learning that there is, in fact, no such thing as a zebra --
there are, rather, three different striped horse-like mammals, two of
which are genetically related and one of which is not.  The propensity
to be striped, like the propensity to have five things (fingers,
segments, whatever), is a deeply embedded genetic artifact that
expresses itself in various ways.

Bob Harper
>>>> >>> >>> These distinctions are not fully valid. >>> >>> - Functional programming, logic programming and imperative programming are >>> three different *computational mechanisms*. >>> >>> - Object-orientation and abstract data types are two different ways of >>> building higher-level *abstractions*. >>> >>> The authors of this paragraph did not understand that computational >>> mechanisms and higher-level abstractions are separate, orthogonal >>> dimensions >>> in programming language design. All six combinations, obtained by >>> picking a >>> computational mechanism from the first bullet and an abstraction mechanism >>> from the second bullet, are possible. It is a mistake to put >>> object-orientation in the first bullet. Their idea of "paradigm" is vague >>> and ill-defined. >>> >>> Cheers, >>> Uday Reddy >>> >>> >> >> -- >> Andreas Abel <>< Du bist der geliebte Mensch. >> >> Theoretical Computer Science, University of Munich >> Oettingenstr. 67, D-80538 Munich, GERMANY >> >> andreas.abel at ifi.lmu.de >> http://www2.tcs.ifi.lmu.de/~**abel/ >> From rwh at cs.cmu.edu Thu Apr 18 17:15:15 2013 From: rwh at cs.cmu.edu (Robert Harper) Date: Thu, 18 Apr 2013 17:15:15 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> Message-ID: The term "declarative" never meant a damn thing, but was often used, absurdly, to somehow lump together functional programming with logic programming, and separate it from imperative programming. It never made a lick of sense; it's just a marketing term. Bob Harper On Apr 18, 2013, at 2:48 PM, Jason Wilkins wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Warning, this is a bit of a rant. > > That paragraph from Wikipedia seems to be confused. It gives the fourth > paradigm as "declarative" but then says "first order logic for logic > programming". It seems somebody did an incomplete replacement of > "declarative" for "logic". Wikipedia is often schizophrenic like that. > > Personally, I think that object oriented and logical programming only > became official paradigms because there was a certain level of hype for > them in the 1980s and nobody has thought to strike them off the list after > the hype died down. > > Object-oriented, as constituted today, is just a layer of abstraction over > imperative programming (or imperative style programming in functional > languages, because objects require side-effects). What "object-oriented" > language actually in use now isn't just an imperative language with fancy > abstraction mechanisms? > > The problem with having declarative languages as a paradigm (which logical > languages would be a part) is that it feels like it should be a > "miscellaneous" category. Being declarative doesn't tell you much except > that some machine is going to turn your descriptions of something into some > kind of action. In logical programming it is a set of predicates, but it > could just as easily be almost anything else. In a way all languages are > "declarative", it is just that we have some standard interpretations of > what is declared that are very common (imperative and functional). > > My wish is that the idea of there being four paradigms would be abandoned > the same we the idea of four food groups has been abandoned (which may > surprise some of you). 
From dreamingforward at gmail.com  Thu Apr 18 18:53:15 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Thu, 18 Apr 2013 15:53:15 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References:
Message-ID:

On Mon, Apr 15, 2013 at 2:53 AM, Moez AbdelGawad wrote:
>> I'm not quite sure I understand your question, but I'll give it a
>> shot. :-)
>
> I'm in this same camp too :)

I am very thankful for the references given by everyone.
Unfortunately, my library does not have the titles, and it will be some
time before I can acquire them.  I hope it is not too intrusive to
offer a few points that I've garnered from this conversation until I
can study the history further.

The main thing that I notice is that there is a heavy "bias" in
academia towards mathematical models.  I understand that Turing
machines, for example, were originally abstract computational concepts
before there was an implementation in hardware, so I have some sympathy
with that view; yet should not the "Science" of "Computer Science"
concern itself with how to map these abstract computational concepts
onto actual computational hardware?  Otherwise, why not keep the field
within mathematics and philosophy (where Logic traditionally has been)?
I find it remarkable, for example, that the simple continued
application of And/Or/Not gates can perform all the computation that
C.S. concerns itself with; these gates, along with Boolean logic, form
the basis of computer science in my mind.  (The implementation of
digital logic in physical hardware is where C.S. stops and Engineering
begins, I would argue.)
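(To make that composition concrete, a toy sketch -- ordinary Python
standing in for wiring, nothing more: a one-bit full adder, the basic
cell of hardware arithmetic, built from nothing but the three gates.)

    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a

    def XOR(a, b):                       # derived from the three primitives
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    def full_adder(a, b, carry_in):      # one bit of a hardware adder
        s = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return s, carry_out

    print(full_adder(1, 1, 0))           # (0, 1): 1 + 1 = binary 10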
But still, it seems that there are two ends, two poles, to the whole
computer science enterprise that haven't been sufficiently *separated*
so that they can be appreciated: logic gates vs. logical "calculus" and
symbols.  There is very little crossover as far as I can see.  Perhaps
the problem is the common use of the Greek root "logikos": in the
former it pertains to binary arithmetic, while in the latter it retains
its original Greek sense, pertaining to *speech* and symbols ("logos").
Further, one can notice that in the former the progression has been
towards more sophisticated data structures (hence the evolution towards
object-orientation), whereas in the latter (I'm guessing, since it's
not my area of expertise) the progression has been towards
sophistication of functions (where recursion seems to be paramount).

In any case, I look forward to diving into the books and references
you've all offered so generously, so that I can appreciate the field
and its history better.

Mark Janssen
Pacific Lutheran University
Tacoma, Washington


From jonathan.aldrich at cs.cmu.edu  Thu Apr 18 19:48:50 2013
From: jonathan.aldrich at cs.cmu.edu (Jonathan Aldrich)
Date: Thu, 18 Apr 2013 19:48:50 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de>
Message-ID: <51708662.3020909@cs.cmu.edu>

> Warning, this is a bit of a rant.

Rants are OK ;-).  But I wanted to respond to an issue that is
technical, which may shed light on the general subject.

> Object-oriented programming, as constituted today, is just a layer of
> abstraction over imperative programming (or over imperative-style
> programming in functional languages, because objects require
> side-effects).

The idea that objects require side-effects is a common misconception.
You can find a number of object-oriented abstractions that do not
involve mutable state in commonly-used libraries.  If most
object-oriented code is stateful, it's probably more or less because
most code, in general, is stateful.

William Cook's Onward 2009 essay makes an argument, which is convincing
to me and to many others in the OO community, that the distinguishing
characteristic of objects is dynamic dispatch (a.k.a. subtype
polymorphism).  In Cook's definition, "an object is a value exporting a
procedural interface to data or behavior."  Each object conceptually
carries its behavior with it as a set of procedures (methods), and the
only way to interact with the object is to call one of them (via
dynamic dispatch):

http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf
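(A few lines of Python make the definition concrete -- a sketch of the
idea, not code from the essay.  The object below is nothing but a
bundle of procedures closed over hidden data; note that no mutation is
involved.)

    def make_counter(n):
        # "a value exporting a procedural interface to data or behavior"
        def value():
            return n
        def increment():
            return make_counter(n + 1)   # a fresh object, not a side-effect
        return {'value': value, 'increment': increment}

    c = make_counter(0)
    c2 = c['increment']()['increment']()
    print(c2['value']())                 # 2 -- reached only via the interface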
In Cook's definition, "an object is a value exporting a procedural interface to data or behavior." Each object conceptually carries its behavior with it as a set of procedures (methods), and the only way to interact with the object is to call one of them (via dynamic dispatch): http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf You can write objects in just about any language (GTK+ and Microsoft's COM are good examples of objects in C) but it's quite awkward to do so. A good working definition of an "object-oriented language" is a language that makes it _easy_ to define values exporting a procedural interface to data or behavior. Likewise, you can do functional programming in C if you want, but it's a lot easier if you have language support in the form of lambda expressions, closures, etc. Does the nature of objects, as expressed in Cook's essay, matter in practice for engineering? I'm currently working on a paper exploring that question...it's not quite ready for public distribution, but if you are willing to look at a working draft and provide feedback, let me know privately. > What "object-oriented" language actually in use now isn't just > an imperative language with fancy abstraction mechanisms? As a potential answer to your last question, Scala is beginning to see significant use. It supports object-oriented programming, and has quite good support (and libraries) for both imperative and functional styles of programming. Cheers, Jonathan Aldrich From moezadel at outlook.com Fri Apr 19 02:09:50 2013 From: moezadel at outlook.com (Moez AbdelGawad) Date: Fri, 19 Apr 2013 01:09:50 -0500 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: , , , Message-ID: > Date: Thu, 18 Apr 2013 15:53:15 -0700 > From: dreamingforward at gmail.com > To: types-list at lists.seas.upenn.edu > Subject: Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages > > I am very thankful for the references given by everyone. > Unfortunately my library does not have the titles and it will be some > time before I can acquire them. The official version of my PhD thesis is available at https://scholarship.rice.edu/handle/1911/70199 (A version more suitable for electronic browsing and online distribution is available at http://sdrv.ms/15qsJ5x ) -Moez From jason.a.wilkins at gmail.com Fri Apr 19 02:31:08 2013 From: jason.a.wilkins at gmail.com (Jason Wilkins) Date: Fri, 19 Apr 2013 01:31:08 -0500 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: Message-ID: I don't quite think I understand what you are saying. Are you saying that mathematical models are not a good foundation for computer science because computers are really made out of electronic gates? All I need to do is show that my model reduces to some basic physical implementation (with perhaps some allowances for infinity) and then I can promptly forget about that messy business and proceed to use my clean mathematical model. The reason any model of computation exists is that it is easier to think about a problem in some terms than in others. By showing how to transform one model to another you make it possible to choose exactly how you wish to solve a problem. The reason we do not work directly in what are called "von Neumann machines" is that they are not convenient for all kinds of problems. 
However we can build a compiler to translate anything to anything else so I don't see why anybody would care. On Thu, Apr 18, 2013 at 5:53 PM, Mark Janssen wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > On Mon, Apr 15, 2013 at 2:53 AM, Moez AbdelGawad > wrote: > >> I'm not quite sure I understand your question, but I'll give it a shot. > >> :-) > > > > I'm in this same camp too :) > > I am very thankful for the references given by everyone. > Unfortunately my library does not have the titles and it will be some > time before I can acquire them. I hope it is not too intrusive to offer > a few points that I've garnered from this conversation until I can > study the history further. > > The main thing that I notice is that there is a heavy "bias" in > academia towards mathematical models. I understand that Turing > Machines, for example, were originally abstract computational concepts > before there was an implementation in hardware, so I have some > sympathies with that view, yet, should not the "Science" of "Computer > Science" concern itself with how to map these abstract computational > concepts into actual computational hardware? Otherwise, why not keep > the field within mathematics and philosophy (where Logic traditionally > has been)? I find it remarkable, for example, that the simple > continued application of And/Or/Not gates can perform all the > computation that C.S. concerns itself with, and these form the basis > for computer science in my mind, along with Boolean logic. (The > implementation of digital logic into physical hardware is where C.S. > stops and Engineering begins, I would argue.) > > But still, it seems that there are two ends, two poles, to the whole > computer science enterprise that haven't been sufficiently *separated* > so that they can be appreciated: logic gates vs. logical "calculus" > and symbols. There is very little crossover as far as I can see. Perhaps > the problem is the common use of the Greek root "logikos": in the > former, it pertains to binary arithmetic, while in the latter it > retains its original Greek sense pertaining to *speech* and symbols > ("logos"). Further, one can notice that in the former, the progression > has been towards more sophisticated Data Structures (hence the > evolution towards Object-Orientation), where in the latter (I'm > guessing, since it's not my area of expertise) the progression > has been towards more sophisticated functions (where recursion seems to be > paramount). > > In any case, I look forward to diving into the books and references > you've all offered so generously so that I can appreciate the field > and its history better. > > Mark Janssen > Pacific Lutheran University > Tacoma, Washington > From vijay at saraswat.org Fri Apr 19 02:32:08 2013 From: vijay at saraswat.org (Vijay Saraswat) Date: Fri, 19 Apr 2013 02:32:08 -0400 Subject: [TYPES] Declarative vs imperative In-Reply-To: References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> Message-ID: <5170E4E8.4040200@saraswat.org> On 4/18/13 5:15 PM, Robert Harper wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > The term "declarative" never meant a damn thing, but was often used, absurdly, to somehow lump together functional programming with logic programming, and separate it from imperative programming. It never made a lick of sense; it's just a marketing term. > > Why would you say this?
(I hesitate diving into a forty-year-old debate, but you have such a charming manner, it's hard to resist :-)) Declarative means a heck of a lot. (Though I do agree that distinctions between functional and logic programming are probably not that significant conceptually.) Declarative as in having an associated interpretation in terms of (probabilistic / first-order / ...) models, and reasoning that is valid wrt that interpretation. Declarative as in fundamentally representational. Hence understandable / accessible to people who are interested only in the real world phenomena that are being modeled (e.g. engineers, scientists, business users) and not necessarily how the reasoning is realized within modern computational hardware. Imperative as in focused on efficient representation and manipulation of (shared) state. I can show an HCC program to an engineer and he can see the differential equations he loves and understands, and why the query being asked makes sense. But there is very little chance he would understand the implementation of the reasoning engine (all imperative code). With declarative programming, the beauty is he does not need to care. The program *is* a fragment of logic and its consequences are perfectly describable in terms the user recognizes and understands, not in implementational terms. (Or a business user using ILOG JRules to represent decision trees for sales promotions... or...) Yes sub-structural logics complicate the situation, at some level of abstraction. But if computer science is about anything it is about building layers of abstractions (or "lies" as someone put it), and not understanding something solely through reductionism. From jason.a.wilkins at gmail.com Fri Apr 19 02:36:39 2013 From: jason.a.wilkins at gmail.com (Jason Wilkins) Date: Fri, 19 Apr 2013 01:36:39 -0500 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <51708662.3020909@cs.cmu.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> Message-ID: Thanks Jonathan for the information. I was not completely confident that I was correct about objects and side effects, but almost every definition of an "object" I've read heavily implies that they contain state. How else can you bind state and methods together if there is no state ;-) Perhaps the common definition is misleading. But I say this before reading your link. On Thu, Apr 18, 2013 at 6:48 PM, Jonathan Aldrich < jonathan.aldrich at cs.cmu.edu> wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > Warning, this is a bit of a rant. >> > > Rants are OK ;-). But I wanted to respond to an issue that is technical, > which may shed light on the general subject. > > > > Object-oriented, as constituted today, is just a layer of abstraction over >> imperative programming (or imperative style programming in functional >> languages, because objects require side-effects). >> > > The idea that objects require side-effects is a common misconception. You > can find a number of object-oriented abstractions that do not involve > mutable state in commonly-used libraries. If most object-oriented code is > stateful, it's probably more or less because most code, in general, is > stateful. > > > William Cook's Onward 2009 essay makes an argument, which is convincing to > me and to many others in the OO community, that the distinguishing > characteristic of objects is dynamic dispatch (a.k.a.
subtype > polymorphism). In Cook's definition, "an object is a value exporting a > procedural interface to data or behavior." Each object conceptually > carries its behavior with it as a set of procedures (methods), and the only > way to interact with the object is to call one of them (via dynamic > dispatch): > > http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf > > You can write objects in just about any language (GTK+ and Microsoft's COM > are good examples of objects in C) but it's quite awkward to do so. A good > working definition of an "object-oriented language" is a language that > makes it _easy_ to define values exporting a procedural interface to data > or behavior. Likewise, you can do functional programming in C if you want, > but it's a lot easier if you have language support in the form of lambda > expressions, closures, etc. > > > Does the nature of objects, as expressed in Cook's essay, matter in > practice for engineering? I'm currently working on a paper exploring that > question...it's not quite ready for public distribution, but if you are > willing to look at a working draft and provide feedback, let me know > privately. > > > > > What "object-oriented" language actually in use now isn't just > > an imperative language with fancy abstraction mechanisms? > > As a potential answer to your last question, Scala is beginning to see > significant use. It supports object-oriented programming, and has quite > good support (and libraries) for both imperative and functional styles of > programming. > > Cheers, > > Jonathan Aldrich > From u.s.reddy at cs.bham.ac.uk Fri Apr 19 03:21:09 2013 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Fri, 19 Apr 2013 08:21:09 +0100 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: Message-ID: <20848.61541.156000.365368@gargle.gargle.HOWL> Mark Janssen writes: > The main thing that I notice is that there is a heavy "bias" in > academia towards mathematical models. I understand that Turing > Machines, for example, were originally abstract computational concepts > before there was an implementation in hardware, so I have some > sympathies with that view, yet, should not the "Science" of "Computer > Science" concern itself with how to map these abstract computational > concepts into actual computational hardware? I think there is some misunderstanding here. Being "mathematical" in academic work is a way of making our ideas rigorous and precise, instead of trying to peddle woolly nonsense. Providing a mathematical description does not imply in any way that these ideas are not implementable on machines. In fact, very often these mathematical descriptions state precisely how to implement the concepts (called operational semantics), but using mathematical notation instead of program code. The mathematical notation used here is usually no more than high school set theory, used in a stylized way.
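For instance, a standard small-step rule for a while loop, in exactly this stylized notation (my example, not drawn from any particular text):

    <while b do c, s>  -->  <if b then (c; while b do c) else skip, s>

It is just a relation between configurations <program, state>, yet it tells an implementor precisely what a machine should do with a loop.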
In contrast, there are deeper "mathematical models" (called denotational semantics and axiomatic semantics) which are invented to describe how programming language features work in a deep, intrinsic way. This is similar to, for instance, how Physics invents mathematical models to capture how the nature around us works. Physicists don't need to "implement" nature. It has already been "implemented" for us before we are born. However, to understand how it works, and how to design systems using physical materials in a predictable way, we need the mathematical models that Physics has developed. Similarly, the mathematical models of programming languages help us to obtain a deep understanding of how languages work and how to build systems in a predictable, reliable way. It seems too much to expect, at the present stage of our field, that all programmers should understand the mathematical models. But I would definitely expect that programming language designers who are trying to build new languages should understand the mathematical models. Otherwise, they would be like automotive engineers trying to build cars without knowing any Mechanics. Cheers, Uday Reddy -- Prof. Uday Reddy Tel: +44 121 414 2740 Professor of Computer Science Fax: +44 121 414 4281 School of Computer Science University of Birmingham Email: U.S.Reddy at cs.bham.ac.uk Edgbaston Birmingham B15 2TT Web: http://www.cs.bham.ac.uk/~udr From u.s.reddy at cs.bham.ac.uk Fri Apr 19 03:30:10 2013 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Fri, 19 Apr 2013 08:30:10 +0100 Subject: [TYPES] Declarative vs imperative In-Reply-To: <5170E4E8.4040200@saraswat.org> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <5170E4E8.4040200@saraswat.org> Message-ID: <20848.62082.625000.991667@gargle.gargle.HOWL> Vijay Saraswat writes: >> Declarative means a heck of a lot. (Though I do agree that distinctions >> between functional and logic programming are probably not that significant >> conceptually.) Indeed, "declarative" means a lot. But, "declarative programming language" doesn't. All programming languages have a "declarative interpretation" and a "procedural interpretation" (to use terms invented in the logic programming community). If somebody claims that some language is not "declarative", it just means that they never thought about its declarative interpretation, not that it doesn't exist. Ignorance is peddled as a fact of reality. Cheers, Uday From u.s.reddy at cs.bham.ac.uk Fri Apr 19 03:41:23 2013 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Fri, 19 Apr 2013 08:41:23 +0100 Subject: [TYPES] Declarative vs imperative In-Reply-To: <5170E4E8.4040200@saraswat.org> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <5170E4E8.4040200@saraswat.org> Message-ID: <20848.62755.218000.195549@gargle.gargle.HOWL> Incidentally, Landin rejected the term "declarative" in 1966: http://dl.acm.org/citation.cfm?id=365257 and proposed "denotative" as a better description. At the time of his writing, imperative programming languages were not "denotative", i.e., no denotational semantics was known for them. Strachey fixed that problem soon afterwards. Cheers, Uday From claus.reinke at talk21.com Fri Apr 19 04:25:42 2013 From: claus.reinke at talk21.com (Claus Reinke) Date: Fri, 19 Apr 2013 10:25:42 +0200 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: Message-ID: <53160CDFC4344A68B946665DE7281152@VAIO> > The main thing that I notice is that there is a heavy "bias" in > academia towards mathematical models.
I understand that Turing > Machines, for example, were originally abstract computational concepts > before there was an implementation in hardware, so I have some > sympathies with that view, yet, should not the "Science" of "Computer > Science" concern itself with how to map these abstract computational > concepts into actual computational hardware? I prefer to think of Turing machines as an attempt to model existing and imagined hardware (at the time, mostly human computers, or groups of them with comparatively simple tools). See sections 1. and 9. in Turing, "On computable numbers, with an application to the Entscheidungsproblem", http://web.comlab.ox.ac.uk/oucl/research/areas/ieg/e-library/sources/tp2-ie.pdf Modeling existing systems, in order to be able to reason about them, is essential for science, as is translating models into experiments, in order to compare predictions to reality. Claus From greg at eecs.harvard.edu Fri Apr 19 09:34:09 2013 From: greg at eecs.harvard.edu (Greg Morrisett) Date: Fri, 19 Apr 2013 09:34:09 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <51708662.3020909@cs.cmu.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> Message-ID: <517147D1.7010303@eecs.harvard.edu> > William Cook's Onward 2009 essay makes an argument, which is convincing > to me and to many others in the OO community, that the distinguishing > characteristic of objects is dynamic dispatch (a.k.a. subtype > polymorphism). I fail to see what dynamic dispatch has to do with subtype polymorphism. They are completely orthogonal. (And even dynamic dispatch is an abused term.) Objects are a degenerate case of first-class modules (ADTs in Uday's terms.) It's true that it's natural to couple this with sub-type polymorphism. But it's also natural to couple this (a la OCaml's object system) with e.g., row polymorphism. I do agree that the abstraction mechanisms used have little or nothing to do with state and identity, which is also typically confused with the whole "OO" mantra. It's perfectly reasonable to do value-oriented programming with first class ADTs. Happens in both Haskell and OCaml all the time. -Greg From jonathan.aldrich at cs.cmu.edu Fri Apr 19 11:38:23 2013 From: jonathan.aldrich at cs.cmu.edu (Jonathan Aldrich) Date: Fri, 19 Apr 2013 11:38:23 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <517147D1.7010303@eecs.harvard.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> <517147D1.7010303@eecs.harvard.edu> Message-ID: <517164EF.2010208@cs.cmu.edu> I'll respond to both Jason and Greg, below. Jason Wilkins wrote: > Thanks Jonathan for the information. I was not completely confident > that I was correct about objects and side effects, but almost every > definition of an "object" I've read heavily implies that they contain > state. How else can you bind state and methods together if there is > no state ;-) Perhaps the common definition is misleading. Yes, it's probably more accurate to say that objects bind data and behavior (methods) together, and provide a procedural (method-based) interface to the abstraction. The data is often mutable, but not always. Greg Morrisett wrote: > I fail to see what dynamic dispatch has to do with subtype > polymorphism. They are completely orthogonal. 
(And even > dynamic dispatch is an abused term.) Ok, you're right to make a distinction between these; dynamic dispatch is the more fundamental concept with respect to objects. But subtype polymorphism is related, at least in practice: see below. > Objects are a degenerate > case of first-class modules (ADTs in Uday's terms.) In Cook's essay, "an object is a value exporting a procedural interface to data or behavior," which is consistent with my understanding of the term dynamic dispatch. I agree that an object is equivalent to a first-class module that exports a purely procedural interface. I disagree that ADTs are equivalent to either modules or objects. Read Cook's essay for a discussion of the latter; for the former, consider that it's often useful to define more than one ADT in a single module. > It's > true that it's natural to couple this with sub-type polymorphism. > But it's also natural to couple this (a la OCaml's object system) > with e.g., row polymorphism. > > I do agree that the abstraction mechanisms used have little > or nothing to do with state and identity, which is also > typically confused with the whole "OO" mantra. It's perfectly > reasonable to do value-oriented programming with first > class ADTs. Happens in both Haskell and OCaml all the time. True, but type classes (which is what I assume you mean by first-class ADTs in Haskell) use type abstraction and therefore do not directly give you the properties you get from objects. A critical property of objects, which is used architecturally in many OO systems, is support for heterogeneous data structures: e.g. putting several different implementations of an abstraction into a list. You can do this in Haskell only through a "slightly clumsy" encoding that wraps a type class in another data structure, thereby existentially quantifying over the type class used. See "Simulating objects" near the end of Simon Peyton Jones's talk: http://research.microsoft.com/en-us/um/people/simonpj/papers/haskell-retrospective/ecoop-july09.pdf Interestingly, if you use pure row polymorphism to hide certain fields of a record, the identity of those fields is still carried (abstractly) by the row variable. In this sense row polymorphism is also a form of type abstraction. With pure row polymorphism, just as with type classes in Haskell, you can't implement heterogeneous data structures unless you pack the row variable in an existential. OCaml supports objects better than Haskell because it makes existentially quantifying over the row variable easy--and because OCaml supports subtyping between object/row types! Thus there's an argument that subtype polymorphism is more closely tied to objects than row polymorphism. But dispatch, or more precisely "a value exporting a procedural interface to data or behavior" is more fundamental.
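For concreteness, the encoding I called "slightly clumsy" looks roughly like this (a sketch in the spirit of Simon's slides, not his exact code; it needs the ExistentialQuantification extension):

    {-# LANGUAGE ExistentialQuantification #-}

    class Shape a where
      area :: a -> Double

    data Circle = Circle Double
    instance Shape Circle where area (Circle r) = pi * r * r

    data Square = Square Double
    instance Shape Square where area (Square s) = s * s

    -- the wrapper that existentially quantifies over the type class
    data AnyShape = forall a. Shape a => AnyShape a

    shapes :: [AnyShape]   -- the heterogeneous list
    shapes = [AnyShape (Circle 1), AnyShape (Square 2)]

    total :: Double
    total = sum [ area s | AnyShape s <- shapes ]   -- unwrap at every use

The AnyShape wrap at construction and unwrap at every use is exactly the overhead that objects let you avoid.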
Best, Jonathan From psa at di.uminho.pt Fri Apr 19 12:51:23 2013 From: psa at di.uminho.pt (=?ISO-8859-1?Q?Paulo_S=E9rgio_Almeida?=) Date: Fri, 19 Apr 2013 17:51:23 +0100 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <517147D1.7010303@eecs.harvard.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> <517147D1.7010303@eecs.harvard.edu> Message-ID: <5171760B.9010007@di.uminho.pt> On 4/19/13 2:34 PM, Greg Morrisett wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >> William Cook's Onward 2009 essay makes an argument, which is convincing >> to me and to many others in the OO community, that the distinguishing >> characteristic of objects is dynamic dispatch (a.k.a. subtype >> polymorphism). ... > I do agree that the abstraction mechanisms used have little > or nothing to do with state and identity, which is also > typically confused with the whole "OO" mantra. It's perfectly > reasonable to do value-oriented programming with first > class ADTs. Happens in both Haskell and OCaml all the time. > > -Greg Hi all, Yes, abstraction mechanisms have little to do with state and identity. Yes, one can program with abstractions and values, mostly avoiding state and identity. This does not mean state and identity are not relevant characteristics of object oriented programming. There are several characteristics of objects and object oriented programming: One is object in the duality Object/ADT very well presented in Cook's essay "On Understanding Data Abstraction, Revisited". This paper should be read by anyone interested in the subject. But I would say it is not *the* distinguishing characteristic of objects, but *one* (although a very important) one. This is what most interests type theorists. But there is another characteristic: object in the duality Object/Value, as presented in, e.g., "Values and Objects in Programming Languages" by MacLennan, which has to do with the "state and identity" feature of objects. This lies in the "object as entity of a model" idea at the origin of OO in Simula, later reused in other OO languages. And this second characteristic of OO is not just about imperative programming or mutable state, it is about the promoting of *shareable mutable state*, where an object, with a given identity, after receiving a message and changing state, because it is known by other parts of the system, will affect those parts, possibly without that being immediately apparent. Some even say that the most pure form of OO is the actor model. One influence of OO is the promoting of parameter passing by "identity passing" as the norm, with the terrible effects due to surprising aliasing of shared state that occur. Even in classic imperative languages with mutable state, the norm was to pass values; when passing references to "objects", typically a more controlled stack discipline was the norm. Sharing mutable state in an unrestricted way was already known to be dangerous; languages like Euclid tried to do something about it; in many cases pointers to heap allocation were only used if recursive data structures were needed. Many pointers were only to "borrow" the object in a restricted scope. OO languages changed the norm to be: use references everywhere, start with null references, allocate on the heap. And this was also motivated by subtype polymorphism / heterogeneous containers.
Always having variables as references facilitated this. But this easy path neglected the dangers posed by the too easy way in which one can share mutable state, sometimes accidentally, and by the false sense of security given by having garbage collection, as opposed to what happens in functional languages where the user sees only values. I would say that one defining feature of OO was the way in which people ended up overusing objects and identities, when values would be more appropriate. In fact, people ended up thinking less about values than even in classic *imperative* languages like FORTRAN and C. This point of overusing identities is well made in some of Rich Hickey's talks about Clojure. This second characteristic of OO, and the failure of mainstream OO languages to provide mechanisms that properly support state encapsulation, can become its bane if nothing is done about it. Fortunately many people started doing something about object aliasing, state encapsulation, object ownership. (I did my 2 cents ... some years ago.) A language that aims for mainstream use but worries about this aspect is Rust. So, my point is that the OO paradigm, *de facto*, depicted in languages like Simula, Smalltalk, and Java, is the combination of at least these two aspects. Saying that only one is the defining characteristic is not an accurate description of reality. This does not mean that OO is a "good thing", nor that some of these things cannot be picked and mixed, obtaining languages that combine, e.g., the abstraction characteristic of objects with functional programming (but can one really call such a language OO?), or languages that are multi-paradigm (like Scala and Rust), that support both Objects and ADTs (in that aspect) and both Values and Objects (in the other aspect) and both functional and imperative programming styles. This is indeed the future and talking about paradigms is so passé ... Regards, Paulo From greg at eecs.harvard.edu Fri Apr 19 13:25:51 2013 From: greg at eecs.harvard.edu (Greg Morrisett) Date: Fri, 19 Apr 2013 13:25:51 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <517164EF.2010208@cs.cmu.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> <517147D1.7010303@eecs.harvard.edu> <517164EF.2010208@cs.cmu.edu> Message-ID: <5A15BB04-B7B1-48AC-9BAE-E81DD1A93A12@eecs.harvard.edu> >I disagree that ADTs are equivalent to either modules or objects. Read Cook's essay for a discussion of the latter; for the former, consider that it's often useful to define more than one ADT in a single module. That's why I said "degenerate case". Modules, as strong sums, are more general. Objects are excessively narcissistic because they are fundamentally restricted to weak sums. > True, but type classes (which is what I assume you mean by first-class ADTs in Haskell) use type abstraction and therefore do not directly give you the properties you get from objects. > > A critical property of objects, which is used architecturally in many OO systems, is support for heterogeneous data structures: e.g. putting several different implementations of an abstraction into a list. You can do this in Haskell only through a "slightly clumsy" encoding that wraps a type class in another data structure, thereby existentially quantifying over the type class used. See "Simulating objects" near the end of Simon Peyton Jones's talk: I don't see this as clumsy at all.
Rather, it's clumsy to abstract over operations that are uniform in most OO languages because they insist on conflating mechanisms that should be orthogonal. Haskell has a more proper separation of concerns. From ccshan at indiana.edu Fri Apr 19 13:51:53 2013 From: ccshan at indiana.edu (Chung-chieh Shan) Date: Fri, 19 Apr 2013 13:51:53 -0400 Subject: [TYPES] Declarative vs imperative In-Reply-To: <20848.62755.218000.195549@gargle.gargle.HOWL> <20848.62082.625000.991667@gargle.gargle.HOWL> <5170E4E8.4040200@saraswat.org> Message-ID: <20130419175153.GA21981@mantle.bostoncoop.net> I once defined "declarative programming" as "the division of _what_ to do and _how_ to do it into two modules that can be built and reused separately." A declarative programming language is one way to carry out such a division: the programmer says what and the language implementation says how. Of course, one person's "what" is another's "how". For example, maybe advertising revenue is "what" and machine learning is "how", but it's also possible that machine learning is "what" and floating point is "how". So "declarative" is meaningful insofar as we are clear about what counts as the "real world" in On 2013-04-19T02:32:08-0400, Vijay Saraswat wrote: > Hence understandable / accessible to people who > are interested only in the real world phenomena that is being > modeled (e.g. engineers, scientists, business users) and not > necessarily how the reasoning is realized within modern > computational hardware. So, saying On 2013-04-19T08:30:10+0100, Uday S Reddy wrote: > Indeed, "declarative" means a lot. But, "declarative programming language" > doesn't. All programming languages have a "declarative interpretation" and > a "procedural interpretation" (to use terms invented in the logic > programming community). If somebody claims that some language is not > "declarative", it just means that they never thought about its declarative > interpretation, not that it doesn't exist. Ignorance is peddled as a fact > of reality. is like saying that "tall" means a lot but every person is a "tall person" because every person is taller than something. In fact, each meaningful use of the phrase "tall person" implicitly refers to a standard of height, and probably not every person is taller than that standard. Similarly, each meaningful use of the phrase "declarative programming language" implicitly refers to a notion of "what" versus "how" -- a notion of "real world" versus "hardware" -- and probably not every programming language represents that "what"/"real world" in an "understandable / accessible" way. Thus, someone can meaningfully claim that some language is not "declarative" even if they know perfectly well about another, less relevant, standard according to which the same language is declarative. On 2013-04-19T08:41:23+0100, Uday S Reddy wrote: > Incidentally, Landin rejected the term "declarative" in 1966: > http://dl.acm.org/citation.cfm?id=365257 > and proposed "denotative" as a better description. At the time of his > writing, imperative programming languages were not "denotative", i.e., no > denotational semantics was known for them. Strachey fixed that problem soon > afterwards. I take `imperative programming languages were not "denotative"' above to mean `imperative programming languages were not known to be "denotative"'. Even if a language has a denotational semantics, those denotations might not be what we want to use the language for in the "real world". 
So Strachey didn't fix Landin's problem except if you work on domain-specific languages in the specific domain of partial orders. From rwh at cs.cmu.edu Fri Apr 19 14:22:47 2013 From: rwh at cs.cmu.edu (Robert Harper) Date: Fri, 19 Apr 2013 14:22:47 -0400 Subject: [TYPES] Declarative vs imperative In-Reply-To: <20848.62082.625000.991667@gargle.gargle.HOWL> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <5170E4E8.4040200@saraswat.org> <20848.62082.625000.991667@gargle.gargle.HOWL> Message-ID: <5E964FF0-7B1E-4AF7-AA7D-1BAB380E38CD@cs.cmu.edu> Happy to liven up the types list, which has turned into a billboard of late :). I am referring to the term "declarative programming language", and should have been more precise in saying that. It's died down now, mostly, but for a while there was an attempt to equate logic programming languages with functional programming languages under this term. If one wishes to use "declarative" as a description of a denotational semantics, that's fine, but I would point out that even Prolog can only be understood fully in operational terms, e.g. the "cut" operator !, which controls the proof search procedure. On Apr 19, 2013, at 3:30 AM, Uday S Reddy wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Vijay Saraswat writes: > >> Declarative means a heck of a lot. (Though I do agree that distinctions >> between functional and logic programming are probably not that significant >> conceptually.) > > Indeed, "declarative" means a lot. But, "declarative programming language" > doesn't. All programming languages have a "declarative interpretation" and > a "procedural interpretation" (to use terms invented in the logic > programming community). If somebody claims that some language is not > "declarative", it just means that they never thought about its declarative > interpretation, not that it doesn't exist. Ignorance is peddled as a fact > of reality. > > Cheers, > Uday From jonathan.aldrich at cs.cmu.edu Fri Apr 19 14:30:27 2013 From: jonathan.aldrich at cs.cmu.edu (Jonathan Aldrich) Date: Fri, 19 Apr 2013 14:30:27 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <5A15BB04-B7B1-48AC-9BAE-E81DD1A93A12@eecs.harvard.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> <517147D1.7010303@eecs.harvard.edu> <517164EF.2010208@cs.cmu.edu> <5A15BB04-B7B1-48AC-9BAE-E81DD1A93A12@eecs.harvard.edu> Message-ID: <51718D43.8040201@cs.cmu.edu> >> A critical property of objects, which is used architecturally in many OO systems, is support for heterogeneous data structures: e.g. putting several different implementations of an abstraction into a list. You can do this in Haskell only through a "slightly clumsy" encoding that wraps a type class in another data structure, thereby existentially quantifying over the type class used. See "Simulating objects" near the end of Simon Peyton Jones's talk: > > I don't see this as clumsy at all. Rather, it's clumsy to abstract over operations that are uniform in most OO languages because they insist on conflating mechanisms that should be orthogonal. Haskell has a more proper separation of concerns. Well, some things that are easy to express in Haskell are awkward in OO languages, and some things that are easy to express in OO languages are awkward to express in Haskell.
I think most neutral observers would agree that adding gratuitous wrap/unwrap operations whenever you want to access an object stored in a list is at least "slightly clumsy" (Simon's words, not mine). This matters in practice. If a language makes use of an idiom awkward, developers will use it only rarely, and that affects the design of software. If the requirements of a software system require using an idiom over and over again, and that idiom is awkward to express in a particular language, that language is in practice unsuitable for use in building that software system. It's my opinion (and I am in the process of gathering evidence for it) that this can explain a lot of the success of OO languages. Lots of systems in practice can't be built effectively without objects (e.g. to store heterogeneous implementations of an abstraction in a list), and those abstractions are too awkward to encode over and over again without built-in language support. The solution is not to complain about how weak objects are in comparison to one's preferred form of abstraction; it is rather to develop languages in which both forms of abstraction are well-supported (in practice, not just in theory). Jonathan From neelk at mpi-sws.org Fri Apr 19 16:17:56 2013 From: neelk at mpi-sws.org (Neelakantan R. Krishnaswami) Date: Fri, 19 Apr 2013 22:17:56 +0200 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <5A15BB04-B7B1-48AC-9BAE-E81DD1A93A12@eecs.harvard.edu> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> <517147D1.7010303@eecs.harvard.edu> <517164EF.2010208@cs.cmu.edu> <5A15BB04-B7B1-48AC-9BAE-E81DD1A93A12@eecs.harvard.edu> Message-ID: <5171A674.4070505@mpi-sws.org> On 04/19/2013 07:25 PM, Greg Morrisett wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > >> I disagree that ADTs are equivalent to either modules or objects. >> Read Cook's essay for a discussion of the latter; for the former, >> consider that it's often useful to define more than one ADT in a >> single module. > > That's why I said "degenerate case". Modules, as strong sums, are > more general. Objects are excessively narcissistic because they are > fundamentally restricted to weak sums. Hi Greg, I don't understand your point. Weak sums, strong sums, and functional objects seem strictly incomparable. Weak sums ∃α.A(α) have an equality principle that lets you choose different representation types for equal packages pack(A,e) and pack(B,t), as long as you can give a relation between A and B that is preserved by e and t. OTOH, strong sums Σα.A(α) don't support this principle, because if you have e = t : Σα.A(α), then you know that fst(e) = fst(t), implying that the representation types of equal elements must be equal. Functional objects are different again. They are typically described as recursive records, that is, interface types like μα.A(α) where A is a record of methods. If you try to convert such a recursive type to an existential, you'll get ∃α. α × (α → A(α)), which is a very different type from μα.A(α). It's not only different because of the hidden state, but also because object types need fully general recursive types with negative occurrences (e.g., a distance function on a Point type which takes another Point as an argument). If you attempt to model this with one of the usual encodings in System F:

   μα.F(α) ≅ ∀α. (F(α) → α) → α
   μα.F(α) ≅ ∃α. α × (α → F(α))

with a mixed-variance F, you don't get the recursive type you'd expect, and so it seems to me you can't encode functional objects.
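To see the negative occurrence concretely, here is the recursive-record reading written out in Haskell, whose general recursive data types supply exactly what the System F encodings above lack (a sketch added for illustration, not code from the original message):

    data Point = Point
      { getX :: Double
      , getY :: Double
      , dist :: Point -> Double   -- Point occurs negatively here
      }

    cartesian :: Double -> Double -> Point
    cartesian x y = Point
      { getX = x
      , getY = y
      , dist = \p -> sqrt ((getX p - x)^2 + (getY p - y)^2)
      }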
-- Neel P.S. Imperative objects seem different yet again, since many imperative OO designs (like the subject-observer pattern and MVC) rely on very wild aliasing, and reasoning about them has a rely/guarantee flavor. From u.s.reddy at cs.bham.ac.uk Fri Apr 19 17:34:00 2013 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Fri, 19 Apr 2013 22:34:00 +0100 Subject: [TYPES] Declarative vs imperative In-Reply-To: <20130419175153.GA21981@mantle.bostoncoop.net> References: <20848.62755.218000.195549@gargle.gargle.HOWL> <20848.62082.625000.991667@gargle.gargle.HOWL> <5170E4E8.4040200@saraswat.org> <20130419175153.GA21981@mantle.bostoncoop.net> Message-ID: <20849.47176.287000.658730@gargle.gargle.HOWL> Chung-chieh Shan writes: > I once defined "declarative programming" as "the division of _what_ > to do and _how_ to do it into two modules that can be built and > reused separately." Sorry, I don't buy it. The distinction of what vs how, i.e., external behaviour vs internal implementation, exists for *every* software system implemented in *every* conceivable programming language. I don't see what this has to do with being "declarative". > > Incidentally, Landin rejected the term "declarative" in 1966: > > http://dl.acm.org/citation.cfm?id=365257 > > and proposed "denotative" as a better description. At the time of his > > writing, imperative programming languages were not "denotative", i.e., no > > denotational semantics was known for them. Strachey fixed that problem soon > > afterwards. > > I take `imperative programming languages were not "denotative"' > above to mean `imperative programming languages were not known to be > "denotative"'. Even if a language has a denotational semantics, those > denotations might not be what we want to use the language for in the > "real world". So Strachey didn't fix Landin's problem except if you > work on domain-specific languages in the specific domain of partial > orders. I was taught programming when I was a second year undergrad, using Dijkstra & Wirth-style top-down design and stepwise refinement. So, I always thought of imperative programming in "denotative" terms. What vs how were always clear in my mind. I probably saw Landin's paper in the months and years that followed (as I looked through every CACM paper on programming languages from the 60's and 70's), and probably didn't understand the point of it. The distinction between denotative and procedural languages that Landin makes would have seemed to me to be a false dichotomy. It is perfectly feasible for something to be "procedural" as well as "denotative". So, what is he talking about? After maturing, I have learnt that many people really have a big mental block about thinking of things as being procedural as well as denotative. Simon Peyton Jones seems to have finally conquered this brain space by explaining the things belonging to the IO types as "actions". That seems to make sense to people. Three cheers to him! And, let us not get carried away with "real world" ideas. The "real world" has been using imperatives for millennia before we got into the game. We didn't invent imperatives. I believe Vijay was talking about a particular class of users and a particular class of applications where a declarative interface is the right layer of abstraction.
If he were to talk about a stock trading system or an on-board flight control system, the right layer of abstraction could very well be an imperative one. There is nothing "unreal world" about imperatives (or "actions" as Simon calls them). Cheers, Uday PS You were probably confusing Strachey with Dana Scott. Strachey worked on the semantics of imperative programs. Scott worked on the semantics of recursion, using partial orders (and he also collaborated with Strachey on semantics of imperative programs). A good place to read about Strachey's semantics is the book by Mike Gordon, a classic book that I also happened to read as an undergrad. It is easy to find used copies of it these days because a lot of stupid libraries around the world are getting rid of their copies. http://www.amazon.co.uk/The-Denotational-Description-Programming-Languages/dp/0387904336/ref=sr_1_1?ie=UTF8&qid=1366406702&sr=8-1&keywords=gordon+denotational+description From greg at eecs.harvard.edu Fri Apr 19 20:29:11 2013 From: greg at eecs.harvard.edu (Greg Morrisett) Date: Fri, 19 Apr 2013 20:29:11 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: <5171A674.4070505@mpi-sws.org> References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu> <517147D1.7010303@eecs.harvard.edu> <517164EF.2010208@cs.cmu.edu> <5A15BB04-B7B1-48AC-9BAE-E81DD1A93A12@eecs.harvard.edu> <5171A674.4070505@mpi-sws.org> Message-ID: <5171E157.3040403@eecs.harvard.edu> > I don't understand your point. Weak sums, strong sums, and > functional objects seem strictly incomparable. Yes, I shouldn't have used strong and weak. Rather, in most OO languages (and with the simple version of Haskell type classes) there is a unique type specified implicitly in the interface ("self"). This is the narcissism that I was referring to. Whereas, with modules, it's common to define multiple types in the interface (e.g., the type of maps, and the type of their keys and values.) In Haskell, we hack around this with associated types, functional dependencies and the like. It's much cleaner to just have multiple type definitions and true dependency (as well as computation at the type level.) > OTOH, strong sums Σα.A(α) don't support this principle, because if you > have e = t : Σα.A(α), then you know that fst(e) = fst(t), implying that > the representation types of equal elements must be equal. I didn't mean to suggest you don't want weak sums. (I am a huge fan.) > Functional objects are different again. They are typically described as > recursive records, that is, interface types like μα.A(α) where A > is a record of methods. If you try to convert such a recursive type to > an existential, you'll get ∃α. α × (α → A(α)), which is a very > different type from μα.A(α). Neel, I remember all of this stuff from the 80s-90's quite well. See my papers relating these to closure compilation techniques with Harper --- I do find the Pierce-Turner pattern: ∃α. α × (α → A(α)) or a la Haskell: ∃α. C α => α which is the essence of a closure, works extremely well in many situations where I want a heterogeneous collection. I particularly like that I can choose where to put that existential and class constraint so I can choose to have e.g., ∃α. C α => Set α and factor out the method table for a whole collection. So unlike Jonathan, I find this control crucial, and don't seem to mind the price of the wrap/unwrap.
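In code, the two placements look roughly like this (my sketch, with a hypothetical class Pretty standing in for C, and a plain list standing in for Set):

    {-# LANGUAGE ExistentialQuantification #-}

    class Pretty a where
      pretty :: a -> String

    -- one dictionary per element: ∃α. Pretty α => α
    data AnyPretty = forall a. Pretty a => AnyPretty a

    -- one dictionary for the whole collection: ∃α. Pretty α => [α]
    data PrettyList = forall a. Pretty a => PrettyList [a]

    renderAll :: PrettyList -> [String]
    renderAll (PrettyList xs) = map pretty xs

The second form factors the method table out of the elements, at the price that every element of the collection must share one representation type.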
As you noted, it doesn't work well when trying to perform operations *across* values (unless we've abstracted the type over a collection as I did above.) But the usual solution is to provide conversions to a common representation (e.g., provide a to_cartesian for your point interface, and then implement the distance operation using this method.) I'm not sure it works well in e.g., Java either without resorting to something like this. -Greg From andrews at csd.uwo.ca Fri Apr 19 21:16:39 2013 From: andrews at csd.uwo.ca (Jamie Andrews) Date: Fri, 19 Apr 2013 21:16:39 -0400 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> Message-ID: <5171EC77.8020208@csd.uwo.ca> Hi Bob and others... It seems to have been Floyd, in his Turing award lecture, who popularized the use of the term "paradigm" in reference to programming languages. Floyd cites Kuhn's _The Structure of Scientific Revolutions_ in the text of the lecture. Kuhn had a specific use for the word "paradigm" which is not particularly close to the original dictionary definition (in the book, he apologizes for the appropriation of the word). However, the way in which Floyd uses "paradigm" in his Turing award lecture is closer to the original meaning of the word, and is similar to what we would now call a "design pattern". Despite all this, most of the discussion of "programming paradigms" that I remember from the 1980s and 1990s seems to have taken place with Kuhn's definition lurking under the surface. This discussion seemed to assume a sort of striving between paradigms, that would result in one becoming the "dominant" paradigm. This may have reflected no more than the recognition that programming languages ranged from "wildly successful" to "virtually unused, even by the inventor", and the desire to be the inventor of a wildly successful language (or a co-author / student / colleague of the inventor). The current relative popularity of programming languages shows that even a poorly-designed language can be very popular, for no better reason than that it was once allied with a well-designed operating system popular in universities. So the word "paradigm", with its heavy Kuhnian overtones, is probably no longer useful. What Floyd seems to have intended to point out is that some paradigms (read: design patterns) are easier to use in some programming languages than in others. I think this is the really interesting point that the programming language community has actually been pursuing all along, and that the "paradigm battle" has been an annoying distraction. It is certainly possible to define programming languages that combine imperative, functional, logic-programming and object-oriented features. Many such languages have been defined in the past. Why is research into such languages not the dominant mode of programming language research? Perhaps it is partly that such languages tend to have large grammar definitions that extend over several pages (so does C++, but never mind). But partly it may be that the "paradigm battle" has led to people wanting to avoid seeming to encourage or accept a reconciliation between the supposed enemies. I really like new programming languages that give programmers the opportunity to use many different paradigms (again read: design patterns), to use new paradigms, and to mix and match paradigms in interesting new ways. 
I also like programming languages that do not have several-pages-long grammars. There is a certain dynamic tension between those two goals, of course, but it seems to me that some of the best programming language research balances those goals well. cheers --Jamie. On 18/04/13 5:14 PM, Robert Harper wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > In short, there is no such thing as a "paradigm". I agree fully. This term is a holdover from the days when people spent time and space trying to build taxonomies based on ill-defined superficialities. See Steve Gould's essay "What, If Anything, Is A Zebra?". You'll enjoy learning that there is, in fact, no such thing as a zebra---there are, rather, three different striped horse-like mammals, two of which are genetically related, and one of which is not. The propensity to be striped, like the propensity to have five things (fingers, segments, whatever) is a deeply embedded genetic artifact that expresses itself in various ways. > > Bob Harper > > On Apr 18, 2013, at 2:48 PM, Jason Wilkins wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Warning, this is a bit of a rant. >> >> That paragraph from Wikipedia seems to be confused. It gives the fourth >> paradigm as "declarative" but then says "first order logic for logic >> programming". It seems somebody did an incomplete replacement of >> "declarative" for "logic". Wikipedia is often schizophrenic like that. >> >> Personally, I think that object oriented and logical programming only >> became official paradigms because there was a certain level of hype for >> them in the 1980s and nobody has thought to strike them off the list after >> the hype died down. >> >> Object-oriented, as constituted today, is just a layer of abstraction over >> imperative programming (or imperative style programming in functional >> languages, because objects require side-effects). What "object-oriented" >> language actually in use now isn't just an imperative language with fancy >> abstraction mechanisms? >> >> The problem with having declarative languages as a paradigm (which logical >> languages would be a part) is that it feels like it should be a >> "miscellaneous" category. Being declarative doesn't tell you much except >> that some machine is going to turn your descriptions of something into some >> kind of action. In logical programming it is a set of predicates, but it >> could just as easily be almost anything else. In a way all languages are >> "declarative", it is just that we have some standard interpretations of >> what is declared that are very common (imperative and functional). >> >> My wish is that the idea of there being four paradigms would be abandoned >> the same we the idea of four food groups has been abandoned (which may >> surprise some of you). We have more than four different modes of thinking >> when programming and some are much more important than others and some are >> subsets of others. We should teach students a more sophisticated view. >> >> Ironically Wikipedia also shows us this complexity. The >> programming language paradigm side bar actually reveals the wealth >> of different styles that are available. There is simply no clean and >> useful way to overlay the four paradigms over what we see there, so it >> should be abandoned because it gives students a false idea. 
>> >> On Wed, Apr 17, 2013 at 9:42 AM, Andreas Abel wrote: >> >>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> On 17.04.2013 11:30, Uday S Reddy wrote: >>> >>>> Mark Janssen writes: >>>> >>>> From: en.wikipedia.org: Programming_paradigm: >>>>> >>>>> "A programming paradigm is a fundamental style of computer >>>>> programming. There are four main paradigms: object-oriented, >>>>> imperative, functional and declarative. Their foundations are distinct >>>>> models of computation: Turing machine for object-oriented and >>>>> imperative programming, lambda calculus for functional programming, >>>>> and first order logic for logic programming." >>>>> >>>> >>> I removed the second sentence relating paradigms to computation models >>> and put it on the talk page instead. It does not make sense to connect >>> imperative programming to Turing machines like functional programming to >>> lambda calculus. A better match would be random access machines, but the >>> whole idea of a connection between a programming paradigm and a computation >>> model is misleading. >>> >>> >>> While I understand the interest in purely theoretical models, I wonder >>>>> two things: 1) Are these distinct models of computation valid? And, >>>>> 2) If so, shouldn't a theory of types announce what model of >>>>> computation they are working from? >>>>> >>>> >>>> These distinctions are not fully valid. >>>> >>>> - Functional programming, logic programming and imperative programming are >>>> three different *computational mechanisms*. >>>> >>>> - Object-orientation and abstract data types are two different ways of >>>> building higher-level *abstractions*. >>>> >>>> The authors of this paragraph did not understand that computational >>>> mechanisms and higher-level abstractions are separate, orthogonal >>>> dimensions >>>> in programming language design. All six combinations, obtained by >>>> picking a >>>> computational mechanism from the first bullet and an abstraction mechanism >>>> from the second bullet, are possible. It is a mistake to put >>>> object-orientation in the first bullet. Their idea of "paradigm" is vague >>>> and ill-defined. >>>> >>>> Cheers, >>>> Uday Reddy >>>> >>>> >>> >>> -- >>> Andreas Abel <>< Du bist der geliebte Mensch. >>> >>> Theoretical Computer Science, University of Munich >>> Oettingenstr. 67, D-80538 Munich, GERMANY >>> >>> andreas.abel at ifi.lmu.de >>> http://www2.tcs.ifi.lmu.de/~abel/ >>> > From ccshan at indiana.edu Fri Apr 19 22:56:40 2013 From: ccshan at indiana.edu (Chung-chieh Shan) Date: Fri, 19 Apr 2013 22:56:40 -0400 Subject: [TYPES] Declarative vs imperative In-Reply-To: <20849.47176.287000.658730@gargle.gargle.HOWL> References: <20848.62755.218000.195549@gargle.gargle.HOWL> <20848.62082.625000.991667@gargle.gargle.HOWL> <5170E4E8.4040200@saraswat.org> <20130419175153.GA21981@mantle.bostoncoop.net> <20849.47176.287000.658730@gargle.gargle.HOWL> Message-ID: <20130420025640.GA21832@mantle.bostoncoop.net> On 2013-04-19T22:34:00+0100, Uday S Reddy wrote: > > I once defined "declarative programming" as "the division of _what_ > > to do and _how_ to do it into two modules that can be built and > > reused separately." > > Sorry, I don't buy it. The distinction of what vs how, i.e., external > behaviour vs internal implementation, exists for *every* software system > implemented in *every* conceivable programming language. I don't see what > this has to do with being "declarative".
I agree that the distinction exists all over the place, and indeed it's probably always clear in your mind. Being "declarative" means making the distinction not only in your mind but also in code (more generally in formal artifacts) so that you can easily change the "what" without reimplementing the "how" or change the "how" without reimplementing the "what". That's what I meant by "two modules that can be built and reused separately". In particular, for a programming language to be declarative (with respect to some "what"/"how" distinction) is for the distinction between programs written in the language and implementations of the language to follow the distinction you refer to. > I was taught programming when I was a second year undergrad, using Dijkstra > & Wirth-style top-down design and stepwise refinement. So, I always thought > of imperative programming in "denotative" terms. What vs how were always > clear in my mind. I probably saw Landin's paper in the months and years > that followed (as I looked through every CACM paper on programming languages > from the 60's and 70's), and probably didn't understand the point of it. > The distinction between denotative and procedural languages that Landin > makes would have seemed to me to be a false dichotomy. It is perfectly > feasible for something to be "procedural" as well as "denotative". So, what > is he talking about? > > After maturing, I have learnt that many people really have a big mental > block about thinking of things as being procedural as well as denotative. > Simon Peyton Jones seems to have finally conquered this brain space by > explaining the things belonging to the IO types as "actions". That seems to > make sense to people. Three cheers to him! With respect to a given "what"/"how" distinction, a software system or a programming language can be either declarative or not. The same thing can be declarative with respect to one "what"/"how" distinction yet not with respect to another, just as a person can be tall with respect to one height standard yet not with respect to another. It doesn't require any mental block to think of the same thing (such as me) as both "tall" and "short" (with respect to different height standards), and it doesn't require any mental block to think of the same thing (such as Haskell) as both "declarative" and "not declarative" (with respect to different "what"/"how" distinctions). > And, let us not get carried away with "real world" ideas. The "real world" > has been using imperatives for millennia before we got into the game. We > didn't invent imperatives. I believe Vijay was talking about a particular > class of users and a particular class of applications where a declarative > interface is the right layer of abstraction. If he were to talk about a > stock trading system or an on-board flight control system, the right layer > of abstraction could very well be an imperative one. There is nothing > "unreal world" about imperatives (or "actions" as Simon calls them). Indeed, your notion of "imperative" is entirely consistent with being declarative. If "what" we want to do is to perform IO, a language of IO action combinators can be declarative (and imperative). If "what" we want to do is to parse sentences, the very same language would probably not be declarative. > PS You were probably confusing Strachey with Dana Scott. Strachey worked on > the semantics of imperative programs. 
Scott worked on the semantics of > recursion, using partial orders (and he also collaborated with Strachey on > semantics of imperative programs). A good place to read about Strachey's > semantics is the book by Mike Gordon, a classic book that I also happened to > read as an undergrad. It is easy to find used copies of it these days > because a lot of stupid libraries around the world are getting rid of their > copies. > > http://www.amazon.co.uk/The-Denotational-Description-Programming-Languages/dp/0387904336/ref=sr_1_1?ie=UTF8&qid=1366406702&sr=8-1&keywords=gordon+denotational+description Indeed, the specific domain in which Strachey fixed Landin's problem is not even the domain of all partial orders, but only some of them. From oleg at okmij.org Sat Apr 20 01:26:08 2013 From: oleg at okmij.org (oleg at okmij.org) Date: 20 Apr 2013 05:26:08 -0000 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages Message-ID: <20130420052608.90065.qmail@www1.g3.pair.com> Jonathan Aldrich wrote: > A critical property of objects, which is used architecturally in many OO > systems, is support for heterogeneous data structures: e.g. putting > several different implementations of an abstraction into a list. You > can do this is Haskell only through a "slightly clumsy" encoding that > wraps a type class in another data structure, thereby existentially > quantifying over the type class used. > With pure row polymorphism, just as with type classes > in Haskell, you can't implement heterogeneous data structures unless you > pack the row variable in an existential. I'm afraid you are short-changing Haskell. There are ways to build heterogeneous collections and extensible (row-polymorphic) records that have nothing to do with existentials. It was possible back in 2004. With data kinds and kind polymorphism, the approach became much more convenient. The approach is used in practice, btw, for example, for embedding DSL of first-class attribute grammars. (See ``Attribute Grammars Fly First-Class: How to do aspect oriented programming in Haskell'' Marcos Viera, S. Doaitse Swierstra and Wouter S. Swierstra. ICFP 2009) The OOHaskell paper http://arxiv.org/abs/cs/0509027 describes how to do bona fide OOP in Haskell. Haskell lets us not only represent object hierarchies with identity, internal state and virtual methods, but also represent and investigate complex issues like self-types, co-variance a la Eiffel (and how to make it sound) -- and a particularly thorny problem of preventing constructors from calling (virtual) methods on the not-yet constructed object. Section 3 of the OOHaskell paper describes several different ways of representing extensible records. Existentials is one way -- and not the best one. OOHaskell paper also showed two different ways of putting differently-typed objects into a single list; one approach permits sound downcast. From dreamingforward at gmail.com Sat Apr 20 02:02:22 2013 From: dreamingforward at gmail.com (Mark Janssen) Date: Fri, 19 Apr 2013 23:02:22 -0700 Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages In-Reply-To: References: Message-ID: On Thu, Apr 18, 2013 at 11:31 PM, Jason Wilkins wrote: > I don't quite think I understand what you are saying. Are you saying that > mathematical models are not a good foundation for computer science because > computers are really made out of electronic gates? 
No, I'm really trying to point out that models based on Digital Logic vs. models based on Symbolic Logic are completely different -- they have different bases. They are both types of "Maths", and that you can interchange them as a demonstration doesn't actually help the practical issue of keeping the two domains separate -- they have differing logics. It's like the domain of Natural numbers vs. the Complex, or perhaps the Natural and the Real. Yes you can translate back and forth, but they are for all practical purposes distinct and can't be mixed.

> All I need to do is show that my model reduces to some basic physical implementation (with perhaps some allowances for infinity) and then I can promptly forget about that messy business and proceed to use my clean mathematical model.

If that's all you want to do, you can stick with Boolean Logic.

> The reason any model of computation exists is that it is easier to think about a problem in some terms than in others. By showing how to transform one model to another you make it possible to choose exactly how you wish to solve a problem.

Yes, and I'm attempting to provide an argument that the (historically?) dominant model of symbolic calculus is misinforming the practical domain of working out differences and arguments within my own domain of the programming community. Unfortunately, my inexperience with the literature is actually betraying the validity of my point.

> The reason we do not work directly in what are called "von Neumann machines" is that they are not convenient for all kinds of problems. However we can build a compiler to translate anything to anything else, so I don't see why anybody would care.

I'm trying to say that *I* care, because I can't seem to find the common ground that affects 1000's of people in the applied C.S. domain with the 1000's of people in the theoretical C.S. domain.

MarkJ
Tacoma

From dreamingforward at gmail.com Sat Apr 20 02:24:46 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Fri, 19 Apr 2013 23:24:46 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <51708662.3020909@cs.cmu.edu>
References: <20846.27580.375000.899631@gargle.gargle.HOWL> <516EB4D1.1020103@ifi.lmu.de> <51708662.3020909@cs.cmu.edu>
Message-ID:

>> Object-oriented, as constituted today, is just a layer of abstraction over imperative programming (or imperative style programming in functional languages, because objects require side-effects).
>
> The idea that objects require side-effects is a common misconception. You can find a number of object-oriented abstractions that do not involve mutable state in commonly-used libraries. If most object-oriented code is stateful, it's probably more or less because most code, in general, is stateful.

I think the reference to objects having side-effects refers to those used within functional languages. But here I would suggest the nomenclature ("object") is vague. If the wikiwikiweb "language wars" is an indication, this vagueness is part of the problem, as I noted in an earlier message.

> You can write objects in just about any language (GTK+ and Microsoft's COM are good examples of objects in C) but it's quite awkward to do so. A good working definition of an "object-oriented language" is a language that makes it _easy_ to define values exporting a procedural interface to data or behavior.
Likewise, you can do functional programming in C if you want, but it's a lot easier if you have language support in the form of lambda expressions, closures, etc.

I think a key "articulation point" of the confusion is the common use of the word "language" to refer to something in a computer vs. a tool for interaction. It's like two worlds collided.

>> What "object-oriented" language actually in use now isn't just an imperative language with fancy abstraction mechanisms?

Haha, but the keyword in that sentence is "fancy". With a fancy enough abstraction layer I can make *anything* look like anything else, to the point where definitions are rendered meaningless. But the point of *having a field* (like C.S.) and having publications is to make words more definitive, not less.

Thanks for the dialog, and I apologize to the types-list immensely for my continued responses despite my complete lack of credentials.

--
MarkJ
Tacoma, Washington

From dreamingforward at gmail.com Sat Apr 20 02:37:56 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Fri, 19 Apr 2013 23:37:56 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20848.61541.156000.365368@gargle.gargle.HOWL>
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
Message-ID:

> I think there is some misunderstanding here. Being "mathematical" in academic work is a way of making our ideas rigorous and precise, instead of trying to peddle wooly nonsense.

I'm sorry. I am responsible for the misunderstanding. I used the word "math" when I really meant symbolic logic (which, historically, was part of philosophy). My point is that the field is confusing because it seems to ignore binary logic in favor of symbolic logic. Is binary logic not "rigorous and precise" enough?

--
MarkJ
Tacoma, Washington

From dreamingforward at gmail.com Sat Apr 20 04:36:06 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Sat, 20 Apr 2013 01:36:06 -0700
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
Message-ID:

> I suggest you read more on sentential logic (which can be called "Zero-Order Logic", and which I think is what you mean by 'Boolean/Binary/Digital Logic')

While interesting (and thank you for the terms), I think by Boolean Logic, something different is meant. In fact, perhaps we have hit upon the exact point where the confusion lies. The logic I'm talking about does not come out of philosophy like the predicate calculi -- and it is not sentential *at all*. I'm going to refer to that with the more Greek spelling of "logik".

Boolean logic is distinguished by, I'll claim, an entirely different lexicon. Now, this word set can be readily mapped to that used in predicate calculus, but this ease is also the cause of the confusion -- they are two different realms. The primary difference in language to note is this one (put in analogical form with predicate logic first): true:false::1:0. That seems simple, but from there two completely different maths have been made which are orthogonal to each other. With the former, one never adds truth values together, for example, but with the latter, that is about all you do. Further, one never negates the "true" to get "false" in sentential logic, but with boolean logic, it is done routinely.
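To make that lexicon concrete, here is a minimal sketch in Haskell -- an illustration added for this point rather than code from the thread, with the names Bit, fullAdder, and the gate functions invented for the example. Bits here are symbols combined by gates, not propositions judged true or false:

    data Bit = O | I deriving (Eq, Show)

    xorG, andG, orG :: Bit -> Bit -> Bit
    xorG a b = if a == b then O else I
    andG I I = I
    andG _ _ = O
    orG  O O = O
    orG  _ _ = I

    -- "Adding truth values": the sum and carry-out of three input bits.
    fullAdder :: Bit -> Bit -> Bit -> (Bit, Bit)
    fullAdder a b cin = (s, cout)
      where s    = xorG (xorG a b) cin
            cout = orG (andG a b) (andG cin (xorG a b))

Chaining thirty-two of these, carry to carry, gives the kind of 32-bit adder mentioned next.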
Boolean logic is often done in a parallel fashion, hence one hears of 32-bit adders, but you would never hear or conceive of such in predicate calculus. The machines that people actually program on are almost entirely based on boolean logic. (LISP machines a possible exception?)

Boolean logic is distinguished by input and output. In between these two there is the predictable, consistent flow of logic. Predicate calculus seems to me to be distinguished by propositions and (what seems to me to be) human evaluation. The mapping at these high levels isn't clear at all, and I've only seen it performed in Prolog. ... I should really take a look at the underlying code of a Prolog interpreter to see how it maps onto the binary hardware, but I suspect it is some of the deep, dark magic that I'm not sure I should toy with.

Cordially,
Mark Janssen
Tacoma, Washington

> PS: Please could you take off my email address 'moezadel at live.com' from the list of recipients of your emails to the types list. Otherwise I get duplicate emails (because I am already a member of the types list). Thanks.

My sincerest apologies. Will give more attention to the matter...

From rossberg at mpi-sws.org Sat Apr 20 05:00:46 2013
From: rossberg at mpi-sws.org (Andreas Rossberg)
Date: Sat, 20 Apr 2013 11:00:46 +0200
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20848.61541.156000.365368@gargle.gargle.HOWL>
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
Message-ID:

On Apr 19, 2013, at 09:21 , Uday S Reddy wrote:
> Similarly, the mathematical models of programming languages help us to obtain a deep understanding of how languages work and how to build systems in a predictable, reliable way. It seems too much to expect, at the present stage of our field, that all programmers should understand the mathematical models. But I would definitely expect that programming language designers who are trying to build new languages should understand the mathematical models. Otherwise, they would be like automotive engineers trying to build cars without knowing any Mechanics.

Unfortunately, that has been the reality in the "Real World" for the last 40 years, and I'm pretty sure it will continue to be so for the next 40. In mainstream language design, formal training is the exception, not the rule. Most decision makers couldn't parse an operational semantics if their life depended on it. Let alone understand a denotational model or a type system.

I usually compare it to architects designing bridges without knowing the first thing about statics. It's sufficient qualification to have crossed bridges all your life. And if in doubt, a road sign always is an appropriate measure for preventing one from collapsing. Traffic participants are expected to bring parachutes.

To be fair, it took a few millennia before knowledge about statics prevailed among bridge builders, too.

/Andreas

From dmcgahon at gmail.com Sat Apr 20 05:53:27 2013
From: dmcgahon at gmail.com (Dermot McGahon)
Date: Sat, 20 Apr 2013 10:53:27 +0100
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
Message-ID:

Mark,

Like you, I'm a programming practitioner who has had an interest in types and PL research for some years now.
While it's great that you've woken the distinguished gentlemen up, stimulating a bit of harmless high-level discussion around terminology, precision and band-wagons: I think you may be missing some of the points already made. In a helpful spirit, having at least enough understanding of both camps to be dangerous, I will point out some of the specifics in that regard:

- binary logic may be rigorous and precise enough. What it's not is convenient enough. The kind professors have said this clearly.

- start reading their output. They very much DO make concepts clear and definitive. It's the Wikipedia dilettantes who muddy the waters.

- re: abstraction and meaninglessness. Check out the Peano numbers and how they can be implemented, for example in Scala, without reliance on primitive types. Soundly based, relies on abstraction, no meaninglessness introduced. I think you're missing some of the sub-textual points and questions being passed around between the researchers here, which is inevitable, because *they* do have the common ground that you seem to be yearning for.

- re: that common ground. CS researchers do implementation in spades. ML, Erlang and Haskell, all cases in point. They do for the most part understand our world. How many implementers do research? Not really enough. Or take the time to learn the basic terms and concepts? I was not taught these concepts during either Bachelors (1990-1995) or Masters studies (2006-2009). Yes, to being taught ADT's and logic programming. Yes, to breadth of knowledge. No, to depth and fundamentals. I don't believe lambda calculus was mentioned once during any of those years of study.

- of course you can't mix the two types of model in one model. What a silly thing to say! What is important IS that you can translate between the two. A compiler is a translator. Translators are useful. High-level abstractions are useful. Models based on digital logic would not necessarily be more useful. You don't think in terms of gates when you're programming, and neither should you. Why do you want to?

- Read lambda calculus papers and think about how to construct the same arguments in boolean logic. It's not a reasonable approach. What exactly is your argument?

- I can also recommend that you read more about categorical logic and domain theory (both mathematical subjects). As well as the many great references already mentioned, please do keep them coming gents. Like Mark, I will take the time to follow-up and continue to read, read, read.

Dermot.

On 20 April 2013 07:37, Mark Janssen wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> > I think there is some misunderstanding here. Being "mathematical" in academic work is a way of making our ideas rigorous and precise, instead of trying to peddle wooly nonsense.
>
> I'm sorry. I am responsible for the misunderstanding. I used the word "math" when I really meant symbolic logic (which, historically, was part of philosophy). My point is that the field is confusing because it seems to ignore binary logic in favor of symbolic logic. Is binary logic not "rigorous and precise" enough?
> --
> MarkJ
> Tacoma, Washington

From matthias at ccs.neu.edu Sat Apr 20 11:38:44 2013
From: matthias at ccs.neu.edu (Matthias Felleisen)
Date: Sat, 20 Apr 2013 11:38:44 -0400
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
Message-ID:

On Apr 20, 2013, at 5:00 AM, Andreas Rossberg wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> On Apr 19, 2013, at 09:21 , Uday S Reddy wrote:
>> Similarly, the mathematical models of programming languages help us to obtain a deep understanding of how languages work and how to build systems in a predictable, reliable way. It seems too much to expect, at the present stage of our field, that all programmers should understand the mathematical models. But I would definitely expect that programming language designers who are trying to build new languages should understand the mathematical models. Otherwise, they would be like automotive engineers trying to build cars without knowing any Mechanics.
>
> Unfortunately, that has been the reality in the "Real World" for the last 40 years, and I'm pretty sure it will continue to be so for the next 40. In mainstream language design, formal training is the exception, not the rule. Most decision makers couldn't parse an operational semantics if their life depended on it. Let alone understand a denotational model or a type system.
>
> I usually compare it to architects designing bridges without knowing the first thing about statics. It's sufficient qualification to have crossed bridges all your life. And if in doubt, a road sign always is an appropriate measure for preventing one from collapsing. Traffic participants are expected to bring parachutes.
>
> To be fair, it took a few millennia before knowledge about statics prevailed among bridge builders, too.

A specific point: I am not really sure about this. I do know that the construction of temples and churches and other such buildings has been a driving force of applied mathematics for a couple of thousand years. I'd be happy to supply citations, but since CS @ Rice was a descendant of App Math, I have heard this claim many times, I have read about it, and I have encountered it on tours (most recently in Istanbul last year concerning dome constructions). Also see the development of skyscrapers. We simply couldn't construct truly tall (greater than say a dozen or two dozen floors), stable buildings until 120 or so years ago. The desire to do so forced us to figure out the materials (steel) and the mathematics.

A general point, much more important. Even though I am as guilty as anyone on this list for working with mathematics and mathematical models of PLs, let me play the outsider for a moment. Let me raise three specific questions:

Are we (theoretical PLers) developing tools and mechanisms that programming language developers truly need the way builders needed steel and the constructors of domes needed mathematics/statics? I think the answer is obviously 'no' because few if any of our meta-tools make it into the tool set of working programming language designers. We may respond with "but they don't design languages the size of skyscrapers" and we'd be wrong again.
When I look at the specs of CL or C++, I see skyscrapers :-) -- I will say that sometimes I think that working programming language developers don't even seem to understand interpreters, but who knows, perhaps I don't understand theirs.

Are we (theoretical PL designers) developing linguistic mechanisms that programmers (software developers) truly need the way builders needed steel and the constructors of domes needed mathematics/statics? I think the answer is 'sometimes', but -- and this is where your answer may apply -- it takes 20-30 years before most of our new mechanisms make it into the tool set of, say, 1,000 working programmers. With good ones, it's faster. In this dimension, we're not the only ones who are guilty of making our ideas real, but we should question on occasion why it takes so long to get an idea from 'our side' to 'theirs'.

Are we (theoretical PLers) using our tools and mechanisms the right way? Since very few languages have language designers on board who understand our work well (Meijer @ VB, Steele @ Java and Fortress are definitely exceptions) -- or perhaps the right phrase is 'appreciate it well enough' -- perhaps it is our obligation to show them how to use it on their languages.

-- We have done some of this work on our own production (reduction semantics for corners of Racket) and we find this exercise extremely useful (finding bugs, inconsistencies, etc).

-- Shriram Krishnamurthi has applied this idea to JavaScript and Python, and he has also shown how to use these models to get useful work done for programmers. Also see his POPL 2013 talk on "programming languages as a natural phenomenon" or something like that.

-- The K people around Grigore Rosu have tackled C and a range of other languages.

I think if more of us did this kind of work, we would see two developments:

1. Working programming language developers may figure out that our tools are useful and use them.

2. We would figure out what kind of tools working language developers really need, and we might develop/change tools so that they are useful in the real world.

I am sure there are other ways to go about closing the gap (e.g., working directly on commercial compilers/debuggers/bug finders), but we should share these ideas and work on them.

-- Matthias

From u.s.reddy at cs.bham.ac.uk Sat Apr 20 14:58:44 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Sat, 20 Apr 2013 19:58:44 +0100
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
Message-ID: <20850.58724.221000.32567@gargle.gargle.HOWL>

Matthias Felleisen writes:

> I think if more of us did this kind of work, we would see two developments:
>
> 1. Working programming language developers may figure out that our tools are useful and use them.
>
> 2. We would figure out what kind of tools working language developers really need, and we might develop/change tools so that they are useful in the real world.

I would happily endorse these ideas. In fact, this is what I think would constitute "applied PL research": understanding, evaluating, critiquing, and shepherding new programming languages, which I believe will necessarily come from outside our community because it is the application developers who understand best where the needs are for new languages. However, I still see considerable problems because it is not clear to me what "our tools" are.
Our field has suffered from early fragmentation. Right in the 60's, people divided themselves in operational, denotational, and axiomatic camps and these camps have never come together, perhaps only diverged more in the succeeding decades. Further fragmentation in the language styles, e.g., functional programming vs imperative programming, later, ADTs vs objects, statically typed vs dynamically typed, correctness by verification vs correctness by construction and so on, ensued. There is no core body of knowledge that we all accept as being fundamental and essential to our field. No two programming language text books have the same content, no two programming language courses teach the same material, and we even have difficulty picking just one of the published text books as being good enough for our own courses. As a result, even when we force our students to take our courses, there is no common base of knowledge that our graduates share when they go out into the real world and perhaps become PL designers one day. Dermot McGahon mentioned just earlier today that he was not taught "these concepts" during either his Bachelor's or Master's degrees, which is telling.

I really find it hard to see how we will be able to have any more influence on the outside world until we are able to put our own house in order. And, that means figuring out what our core body of knowledge is, integrating all the various approaches we have developed over the years, and ironing out our differences to the point that we see the merits in each other's approaches.

Perhaps there is a useful role that SIGPLAN can play in generating such a consensus?

Cheers,
Uday

From avik at cs.umd.edu Sat Apr 20 16:16:54 2013
From: avik at cs.umd.edu (Avik Chaudhuri)
Date: Sat, 20 Apr 2013 13:16:54 -0700
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20850.58724.221000.32567@gargle.gargle.HOWL>
References: <20848.61541.156000.365368@gargle.gargle.HOWL> <20850.58724.221000.32567@gargle.gargle.HOWL>
Message-ID:

On Apr 20, 2013, at 11:58 AM, Uday S Reddy wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Matthias Felleisen writes:
>
>> I think if more of us did this kind of work, we would see two developments:
>>
>> 1. Working programming language developers may figure out that our tools are useful and use them.
>>
>> 2. We would figure out what kind of tools working language developers really need, and we might develop/change tools so that they are useful in the real world.
>
> I would happily endorse these ideas. In fact, this is what I think would constitute "applied PL research": understanding, evaluating, critiquing, and shepherding new programming languages, which I believe will necessarily come from outside our community because it is the application developers who understand best where the needs are for new languages.

I humbly contest this position. It is not the responsibility of others to pick up ideas from "our community" and apply it to the "real languages." At least some of us should (and do) devote enough time to popularize these ideas. We desperately need more people like this: http://www.scottaaronson.com/blog/.

> However, I still see considerable problems because it is not clear to me what "our tools" are. Our field has suffered from early fragmentation.
> Right in the 60's, people divided themselves in operational, denotational, and axiomatic camps and these camps have never come together, perhaps only diverged more in the succeeding decades. Further fragmentation in the language styles, e.g., functional programming vs imperative programming, later, ADTs vs objects, statically typed vs dynamically typed, correctness by verification vs correctness by construction and so on, ensued. There is no core body of knowledge that we all accept as being fundamental and essential to our field. No two programming language text books have the same content, no two programming language courses teach the same material, and we even have difficulty picking just one of the published text books as being good enough for our own courses. As a result, even when we force our students to take our courses, there is no common base of knowledge that our graduates share when they go out into the real world and perhaps become PL designers one day.

I don't see variety as a bad thing: it is an indication of how rich our field is, and reflects how much of an art programming is, as much as it is a science. That said, sure, some consolidation would be great. In particular, I hate seeing similar ideas reinvented over and over again; I'd love to see connections being made across sub-disciplines, so that techniques and results in one can be applied to the other.

-Avik.

> I really find it hard to see how we will be able to have any more influence on the outside world until we are able to put our own house in order. And, that means figuring out what our core body of knowledge is, integrating all the various approaches we have developed over the years, and ironing out our differences to the point that we see the merits in each other's approaches.
>
> Perhaps there is a useful role that SIGPLAN can play in generating such a consensus?
>
> Cheers,
> Uday

From u.s.reddy at cs.bham.ac.uk Sat Apr 20 17:35:04 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Sat, 20 Apr 2013 22:35:04 +0100
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL> <20850.58724.221000.32567@gargle.gargle.HOWL>
Message-ID: <20851.2568.619000.23847@gargle.gargle.HOWL>

Avik Chaudhuri writes:

> I would happily endorse these ideas. In fact, this is what I think would constitute "applied PL research": understanding, evaluating, critiquing, and shepherding new programming languages, which I believe will necessarily come from outside our community because it is the application developers who understand best where the needs are for new languages.
>
> I humbly contest this position. It is not the responsibility of others to pick up ideas from "our community" and apply it to the "real languages." At least some of us should (and do) devote enough time to popularize these ideas. We desperately need more people like this: http://www.scottaaronson.com/blog/.

Perhaps I didn't express myself very clearly. I meant to say that interesting new languages will necessarily come from outside our community.
But *we* should engage with them as part of our "applied PL research" to understand, analyze, critique and perhaps help their language designs.

A strong example that comes to my mind is Dijkstra's efforts during the DoD's requisitioning for the language that eventually became Ada. Dijkstra closely studied all the DoD requirements and every language proposal that was submitted, and critiqued them. His critiques were published in SIGPLAN Notices and should be available in the ACM Digital Library (as well as the EWD collections). Even though you might think this was a rather negative form of engagement, it was still useful for the world to understand how programming language principles impact or should impact programming language design.

I am also happy to recollect an oft-repeated aphorism of John Reynolds: "Programming language semanticists should be the obstetricians of programming languages, not their coroners."

Cheers,
Uday

From vl at cs.utexas.edu Sun Apr 21 10:09:55 2013
From: vl at cs.utexas.edu (Vladimir Lifschitz)
Date: Sun, 21 Apr 2013 09:09:55 -0500
Subject: [TYPES] Declarative vs imperative
Message-ID: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>

> I would point out that even Prolog can only be understood fully in operational terms, e.g. the "cut" operator !, which controls the proof search procedure.

Yes, and consequently Prolog is not fully declarative. But there are other flavors of logic programming, for instance, answer set programming (ASP, see http://en.wikipedia.org/wiki/Answer_set_programming). ASP does not include the cut operator. On the other hand, it incorporates constructs not allowed in Prolog, such as choice rules. It does not have an operational semantics. In fact, different implementations use very different algorithms, but they produce the same result for the same program. The programmer doesn't need to know which implementation is going to be used. ASP is fully declarative, like functional programming.

Here is how I would characterize the difference between declarative and imperative programming. A program in an imperative language describes an algorithm. A program in a declarative language describes a specification.

--
Vladimir Lifschitz                   Department of Computer Science
Office: (512) 471-9564               University of Texas at Austin
Fax: (512) 471-8885                  2317 Speedway, Stop D9500
E-mail: vl at cs.utexas.edu          Austin, TX 78712-1757, USA
WWW: http://www.cs.utexas.edu/~vl

From jason.a.wilkins at gmail.com Mon Apr 22 08:35:49 2013
From: jason.a.wilkins at gmail.com (Jason Wilkins)
Date: Mon, 22 Apr 2013 07:35:49 -0500
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: <20851.2568.619000.23847@gargle.gargle.HOWL>
References: <20848.61541.156000.365368@gargle.gargle.HOWL> <20850.58724.221000.32567@gargle.gargle.HOWL> <20851.2568.619000.23847@gargle.gargle.HOWL>
Message-ID:

It was implied that the designers of C++ are not using the abstract mathematical tools provided by research, but that simply isn't true. Making tractable extensions to C++'s generic programming capabilities requires solid theory, and the guys down the hall from me here at Texas A&M simply blow my mind with the kinds of things they are working on.

On Sat, Apr 20, 2013 at 4:35 PM, Uday S Reddy wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Avik Chaudhuri writes:
>
> > I would happily endorse these ideas.
> > In fact, this is what I think would constitute "applied PL research": understanding, evaluating, critiquing, and shepherding new programming languages, which I believe will necessarily come from outside our community because it is the application developers who understand best where the needs are for new languages.
> >
> > I humbly contest this position. It is not the responsibility of others to pick up ideas from "our community" and apply it to the "real languages." At least some of us should (and do) devote enough time to popularize these ideas. We desperately need more people like this: http://www.scottaaronson.com/blog/.
>
> Perhaps I didn't express myself very clearly. I meant to say that interesting new languages will necessarily come from outside our community. But *we* should engage with them as part of our "applied PL research" to understand, analyze, critique and perhaps help their language designs.
>
> A strong example that comes to my mind is Dijkstra's efforts during the DoD's requisitioning for the language that eventually became Ada. Dijkstra closely studied all the DoD requirements and every language proposal that was submitted, and critiqued them. His critiques were published in SIGPLAN Notices and should be available in the ACM Digital Library (as well as the EWD collections). Even though you might think this was a rather negative form of engagement, it was still useful for the world to understand how programming language principles impact or should impact programming language design.
>
> I am also happy to recollect an oft-repeated aphorism of John Reynolds: "Programming language semanticists should be the obstetricians of programming languages, not their coroners."
>
> Cheers,
> Uday

From nikhil at acm.org Mon Apr 22 11:53:04 2013
From: nikhil at acm.org (Rishiyur Nikhil)
Date: Mon, 22 Apr 2013 11:53:04 -0400
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
Message-ID:

> ... fully declarative, like functional programming.
> Here is how I would characterize the difference between declarative and imperative programming. A program in an imperative language describes an algorithm. A program in a declarative language describes a specification.

But one man's specification is another man's algorithm. Even in Haskell, one writes a sorting program typically by choosing a particular algorithm (heap sort, quick sort, ...). Sure, these can be considered specifications of sorting, but they are hardly the most abstract spec for sorting.

Similarly, a C program is a specification for a hundred different machine code algorithms that manage memory, register allocation etc.

Nikhil

From kthielen at liquidnet.com Mon Apr 22 12:16:18 2013
From: kthielen at liquidnet.com (Kalani Thielen)
Date: Mon, 22 Apr 2013 12:16:18 -0400
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
Message-ID:

At the risk of proving that this subject is just bikeshedding ...

Is it possible to say that "declarative" knowledge is the type-level-N+1 expression describing a level-N term (so it's meaningful relative to a particular term -- as in your example, machine code is itself "declarative" with respect to physical machines)? Maybe you'd have to import dependent types / staged-programming to make that meaningful, but I think that's the right direction.
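As a rough illustration of that level shift -- a sketch in plain Haskell, added here for concreteness, with the names Expr and eval invented for the example -- the same artifact reads declaratively at level N+1 and runs at level N:

    -- A level-(N+1) description of arithmetic terms: pure data, no "how".
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    -- The machine one level down, which gives the description its meaning.
    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add x y) = eval x + eval y
    eval (Mul x y) = eval x * eval y

    -- eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))) == 7

Relative to eval, an Expr is "declarative"; relative to whatever machine runs it, eval itself is just another description.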
The prototypical example that I think of when I hear "declarative" is the square-root problem from SICP. If you look at it now, it almost looks like they give a dependent type to Newton's method.

-----Original Message-----
From: types-list-bounces at lists.seas.upenn.edu [mailto:types-list-bounces at lists.seas.upenn.edu] On Behalf Of Rishiyur Nikhil
Sent: Monday, April 22, 2013 11:53 AM
To: Vladimir Lifschitz
Cc: types-list at lists.seas.upenn.edu
Subject: Re: [TYPES] Declarative vs imperative

[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

> ... fully declarative, like functional programming.
> Here is how I would characterize the difference between declarative and imperative programming. A program in an imperative language describes an algorithm. A program in a declarative language describes a specification.

But one man's specification is another man's algorithm. Even in Haskell, one writes a sorting program typically by choosing a particular algorithm (heap sort, quick sort, ...). Sure, these can be considered specifications of sorting, but they are hardly the most abstract spec for sorting.

Similarly, a C program is a specification for a hundred different machine code algorithms that manage memory, register allocation etc.

Nikhil

From lkuper at cs.indiana.edu Mon Apr 22 13:25:15 2013
From: lkuper at cs.indiana.edu (Lindsey Kuper)
Date: Mon, 22 Apr 2013 13:25:15 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
Message-ID:

> Date: Sat, 20 Apr 2013 10:53:27 +0100
> From: Dermot McGahon
> To: types-list at lists.seas.upenn.edu
> Subject: Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
> Message-ID:

> Like you, I'm a programming practitioner who has had an interest in types and PL research for some years now. While it's great that you've woken the distinguished gentlemen up [...]

> - I can also recommend that you read more about categorical logic and domain theory (both mathematical subjects). As well as the many great references already mentioned, please do keep them coming gents. Like Mark, I will take the time to follow-up and continue to read, read, read.
>
> Dermot.

As an aside to an otherwise great discussion: although I'm sure you intend no harm by saying things like "keep them coming gents", please be aware that not everyone on this mailing list is male, and that making blanket assumptions about the gender of subscribers to the list may discourage people with worthwhile points to make from contributing.

Lindsey Kuper

From vl at cs.utexas.edu Mon Apr 22 13:26:20 2013
From: vl at cs.utexas.edu (Vladimir Lifschitz)
Date: Mon, 22 Apr 2013 12:26:20 -0500
Subject: [TYPES] Declarative vs imperative
In-Reply-To: (message from Rishiyur Nikhil on Mon, 22 Apr 2013 11:53:04 -0400)
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
Message-ID: <201304221726.r3MHQKuI006162@betta.cs.utexas.edu>

> > A program in an imperative language describes an algorithm.
> > A program in a declarative language describes a specification.
>
> But one man's specification is another man's algorithm. Even in Haskell, one writes a sorting program typically by choosing a particular algorithm (heap sort, quick sort, ...). Sure, these can be considered specifications of sorting, but they are hardly the most abstract spec for sorting.
The way I see it, the difference between declarative and imperative programs is as clear-cut as the difference between declarative and imperative sentences in natural languages: "I like it" vs. "Sit down". A declarative sentence can be true or false; an imperative sentence can be executed or not executed.

Here is how sorting is done in answer set programming (I learned this from Roland Kaminski):

    order(X,Z)               % X is the predecessor of Z
       :- p(X;Z),            % if both X and Z belong to p,
          X<Z,
          #false : p(Y), X<Y, Y<Z.

A declarative program may look similar to a procedural program if they use similar data structures. Still, the difference is clear: a (truly) declarative program describes what is counted as a solution; a procedural program tells us which operations to perform.

Vladimir

From bcpierce at cis.upenn.edu Mon Apr 22 13:51 2013
From: bcpierce at cis.upenn.edu (Benjamin C. Pierce)
Date: Mon, 22 Apr 2013 13:51 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: References: Message-ID:

Thanks for taking the trouble to point that out, Lindsay! It's good to be reminded how easy it is for any of us (well, many of us, anyway :-) to slip into unintentionally hurtful or excluding figures of speech.

- Benjamin

On Apr 22, 2013, at 1:25 PM, Lindsey Kuper wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
>> Date: Sat, 20 Apr 2013 10:53:27 +0100
>> From: Dermot McGahon
>> To: types-list at lists.seas.upenn.edu
>> Subject: Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
>> Message-ID:
>
>> Like you, I'm a programming practitioner who has had an interest in types and PL research for some years now. While it's great that you've woken the distinguished gentlemen up [...]
>>
>> - I can also recommend that you read more about categorical logic and domain theory (both mathematical subjects). As well as the many great references already mentioned, please do keep them coming gents. Like Mark, I will take the time to follow-up and continue to read, read, read.
>>
>> Dermot.
>
> As an aside to an otherwise great discussion: although I'm sure you intend no harm by saying things like "keep them coming gents", please be aware that not everyone on this mailing list is male, and that making blanket assumptions about the gender of subscribers to the list may discourage people with worthwhile points to make from contributing.
>
> Lindsey Kuper

From bcpierce at cis.upenn.edu Mon Apr 22 14:07:35 2013
From: bcpierce at cis.upenn.edu (Benjamin C. Pierce)
Date: Mon, 22 Apr 2013 14:07:35 -0400
Subject: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To: References: Message-ID:

(... and sorry for mis-typing your name in the process of thanking you!)

On Apr 22, 2013, at 1:51 PM, "Benjamin C. Pierce" wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Thanks for taking the trouble to point that out, Lindsay! It's good to be reminded how easy it is for any of us (well, many of us, anyway :-) to slip into unintentionally hurtful or excluding figures of speech.
>
> - Benjamin
>
> On Apr 22, 2013, at 1:25 PM, Lindsey Kuper wrote:
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>>> Date: Sat, 20 Apr 2013 10:53:27 +0100
>>> From: Dermot McGahon
>>> To: types-list at lists.seas.upenn.edu
>>> Subject: Re: [TYPES] The type/object distinction and possible synthesis of OOP and imperative programming languages
>>> Message-ID:
>>
>>> Like you, I'm a programming practitioner who has had an interest in types and PL research for some years now. While it's great that you've woken the distinguished gentlemen up [...]
>>>
>>> - I can also recommend that you read more about categorical logic and domain theory (both mathematical subjects). As well as the many great references already mentioned, please do keep them coming gents.
>>> Like Mark, I will take the time to follow-up and continue to read, read, read.
>>>
>>> Dermot.
>>
>> As an aside to an otherwise great discussion: although I'm sure you intend no harm by saying things like "keep them coming gents", please be aware that not everyone on this mailing list is male, and that making blanket assumptions about the gender of subscribers to the list may discourage people with worthwhile points to make from contributing.
>>
>> Lindsey Kuper

From u.s.reddy at cs.bham.ac.uk Mon Apr 22 14:15:36 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Mon, 22 Apr 2013 19:15:36 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <201304221726.r3MHQKuI006162@betta.cs.utexas.edu>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <201304221726.r3MHQKuI006162@betta.cs.utexas.edu>
Message-ID: <20853.32328.475000.501346@gargle.gargle.HOWL>

Vladimir Lifschitz writes:

> The way I see it, the difference between declarative and imperative programs is as clear-cut as the difference between declarative and imperative sentences in natural languages: "I like it" vs. "Sit down". A declarative sentence can be true or false; an imperative sentence can be executed or not executed.

Indeed, "Sit down" is an imperative sentence. "To sit down" is a noun. "It is nice to sit down" is a declarative sentence. In an imperative programming language, you will find all three. So, to me, nothing is clear-cut about the distinctions that people make!

> A declarative program may look similar to a procedural program if they use similar data structures. Still, the difference is clear: a (truly) declarative program describes what is counted as a solution; a procedural program tells us which operations to perform.

Indeed, high-level programming languages that can automate a lot of the programming work are always brilliant. However, I don't see what this has to do with the declarative-procedural spectrum. There are high-level design tools using UML notations that also achieve a high degree of automation in program generation, and a good proportion of those notations are imperative. As I mentioned previously, the highest level programming systems that we might imagine for applications like stock-market trading systems or on-board flight control systems might indeed be imperative.

So, this is a false dichotomy, as far as I am concerned.

Cheers,
Uday

From wcook at cs.utexas.edu Mon Apr 22 15:38:28 2013
From: wcook at cs.utexas.edu (Will Cook)
Date: Mon, 22 Apr 2013 14:38:28 -0500
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <20853.32328.475000.501346@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <201304221726.r3MHQKuI006162@betta.cs.utexas.edu> <20853.32328.475000.501346@gargle.gargle.HOWL>
Message-ID: <3D233953-4E9F-4457-8571-009338B7CEB5@cs.utexas.edu>

I have been thinking about the use of the words "declarative" and "imperative" for some time. This is my understanding of how they are commonly used in computer science today:

Declarative: describing "what" is to be computed rather than "how" to compute the result/behavior

Imperative: a description of a computation that involves implicit effects, usually mutable state and input/output.

As others have pointed out, our usage originally came from the distinction between imperative and declarative statements in natural languages. However, I think that our usage has diverged significantly from this origin, so that the words no longer form a dichotomy.
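The two senses can be put side by side in a few lines of Haskell (an invented illustration, not from the message; the names sumD and sumI are made up): sumD states what is computed, with no implicit effects, while sumI states how, through a mutable accumulator updated in sequence.

    import Data.IORef

    -- "What": the sum of the list; no implicit effects.
    sumD :: [Int] -> Int
    sumD = foldr (+) 0

    -- "How", with implicit effects: a mutable cell updated element by element.
    sumI :: [Int] -> IO Int
    sumI xs = do
      acc <- newIORef 0
      mapM_ (\x -> modifyIORef acc (+ x)) xs
      readIORef acc

Both compute the same result, which already suggests that the two words classify descriptions rather than the things described.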
For example, one might argue that a finite state machine is both declarative and imperative in the computer science senses. I think Uday made a similar suggestion. There is also Wadler's classic paper on "How to declare an imperative".

Another hint that "declarative" and "imperative" are not antonyms is that the definitions don't have any significant words in common. The antonym of "imperative" in common usage is "pure functional". I don't know of a widely-used word that acts as an antonym for "declarative", although "operational" might work. It may be that "imperative" has a connotation of "step-by-step", which hints at it being the opposite of "declarative", but this connotation is fairly weak at this point. If we wanted to we could try to force "declarative" and "imperative" to be antonyms, possibly by redefining what we mean by "imperative", but I'm not sure that would be an improvement.

I agree with those who say that "declarative" is a spectrum. For example, some people say that Haskell is a declarative language, but in my view Haskell programs are very much about *how* to compute a result. It is true that many details about how are left out (memory management, order of operations, etc). But if you compare a Haskell program with a logical specification (pre/post conditions), they are quite different. Thus while I would say Haskell is more declarative than many other programming languages, Haskell is not a declarative language in the strongest sense of the word. Haskell programs are not specifications, they are computations, in the sense that they say how to compute an answer.

Here is a quick shot at a spectrum between "how" and "what". Each level has a quick summary of the "how" that is involved, and it also includes all the "hows" listed below them. I suspect that many of you might disagree with the placement or absence of various languages, so I cannot claim that this list is definitive.

** More "How" **
How the machine works
    Assembly
How memory is managed
    C, C++, etc
Order of operations
    Java, Smalltalk, Python, Ruby, etc
How data flows (with issues like nontermination and cut)
    Haskell, Prolog, Lambda Calculus (in various forms)
----split between Programming and Specification Languages---
Restricted Specification Languages
    BNF, SQL, Excel, Statecharts
Logical specification languages
    VDM, Z, B, Alloy
*** More "What" ***

The idea that a specification language (by definition) cannot be executed is widely held but false. I consider BNF to be a simple counterexample. BNF is clearly a specification language, and it clearly has efficient execution (parsing) strategies.

As for the objects/type distinction that sparked this discussion, I think the discussion was pretty reasonable, and I would like to thank Jonathan for presenting my views so articulately. I agree with Uday that we need to get our own house in order. These kinds of discussions are a good start.

William

On Apr 22, 2013, at 1:15 PM, Uday S Reddy wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Vladimir Lifschitz writes:
>
>> The way I see it, the difference between declarative and imperative programs is as clear-cut as the difference between declarative and imperative sentences in natural languages: "I like it" vs. "Sit down". A declarative sentence can be true or false; an imperative sentence can be executed or not executed.
>
> Indeed, "Sit down" is an imperative sentence. "To sit down" is a noun. "It is nice to sit down" is a declarative sentence.
> In an imperative programming language, you will find all three. So, to me, nothing is clear-cut about the distinctions that people make!
>
>> A declarative program may look similar to a procedural program if they use similar data structures. Still, the difference is clear: a (truly) declarative program describes what is counted as a solution; a procedural program tells us which operations to perform.
>
> Indeed, high-level programming languages that can automate a lot of the programming work are always brilliant. However, I don't see what this has to do with the declarative-procedural spectrum. There are high-level design tools using UML notations that also achieve a high degree of automation in program generation, and a good proportion of those notations are imperative. As I mentioned previously, the highest level programming systems that we might imagine for applications like stock-market trading systems or on-board flight control systems might indeed be imperative.
>
> So, this is a false dichotomy, as far as I am concerned.
>
> Cheers,
> Uday

From adam at adamsmith.as Mon Apr 22 17:27:38 2013
From: adam at adamsmith.as (Adam Smith)
Date: Mon, 22 Apr 2013 14:27:38 -0700
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <201304221726.r3MHQKuI006162@betta.cs.utexas.edu>
Message-ID:

(resending after actually signing up for types-list)

Vladimir's way of distinguishing declarative vs imperative programs is indeed clear-cut, but let me offer an explanation why it's not going to convince every programmer or language designer. The key idea is that languages of one type are usually rich enough to support an embedded language of the other type (or possibly many layers like this), and the experience of the programmer/designer may so strongly focus on the embedded languages that the properties of the outer language become largely inconsequential. Concretely, if you come across a large collection of declarative statements that are semantically about imperative commands, you start to simply read through the declarative aspects and see the imperative program as the real one.

Here are a bunch of declarative statements (that are simply true):

~ The first step of the algorithm is to set a variable called "maxlen" to "0".
~ The second step is to enact a for-loop over the variable "j" from "0" to "2".
~ The body of this loop is the action of setting "maxlen" to the maximum of the value of "maxlen" and the length of input value "j".

The following example is from an on-going project involving using answer set programming to reason about the different possible execution paths through imperative code. At the surface level, it's just one giant declarative AnsProlog fact that states that "add(....) is a procedure". For all other purposes (including capturing an idea in code and debugging its representation on the basis of its example executions), it's a pile of quite imperative code.

    procedure(
      add(
        set(maxlen, 0),
        for(j,0,2, do(
          set(maxlen, max(maxlen,input(len(j)))))),
        set(c, 0),
        for(i,0,maxlen, do(
          set(k,0),
          for(j,0,2, do(
            if(lt(i, input(len(j))), then(
              set(k, plus(k,input(n(j,i)))))))),
          if(lt(0, c), then(
            set(k, plus(k,c)))),
          set(c, 0),
          if(lt(9, k), then(
            set(c, hi(k)))),
          set(output, digit(i,lo(k))))),
        if(lt(0, c), then(
          set(output, digit(i,c)),
          die(special))),
        die(end_of_procedure)
      )).
This specification of a concrete algorithm is interpreted by a compiler (or
is it just a specification for a compiler?) that transforms it into another
specification cast in terms of assembly-like primitives:
http://pastie.org/7698516 (following one particular compilation strategy
amongst many). Finally, it is executed (in a sense) by an abstract processor
with a program counter, registers, and application-specific ALU pathways. The
result, in terms of the answer sets of the overall answer set program,
describes the set of all branching pathways that could be explored by the
algorithm on the basis of unseen (nondeterministically chosen) inputs.

At one level, I've only ever entered facts and rules that I trust the solver
will consider as a collection (but in no particular order). At another level,
I've built a procedural pipeline: compile, assemble, generate processor,
generate program inputs, execute program on processor, summarize observed
pathways. At yet another level, I've written a program in a (made up)
language that legitimately has its own operational semantics.

In traditional Prolog, every rule you write can be read as simply a statement
of a fact in the :-/2 predicate (clause/2). As a programmer, however, you
might read your rules as imperative lists of execution steps for backwards
chaining -- because that's what the :-/2 rules most clearly seem to be
talking about. Meanwhile, once you get into making meta-interpreters (quite
common, I believe), you quickly go back to treating instances of :-/2 as
declarative statements of the structure of some data, a specification of a
case in some formal structure.

At every level, whether the language is imperative or declarative is quite
clear-cut; what's not at all clear is which level is the relevant one to talk
about.

> On Mon, Apr 22, 2013 at 10:26 AM, Vladimir Lifschitz wrote:
>
>> > > A program in an imperative language describes an algorithm.
>> > > A program in a declarative language describes a specification.
>> >
>> > But one man's specification is another man's algorithm. Even in
>> > Haskell, one writes a sorting program typically by choosing a particular
>> > algorithm (heap sort, quick sort, ...). Sure, these can be considered
>> > specifications of sorting, but they are hardly the most abstract spec
>> > for sorting.
>>
>> The way I see it, the difference between declarative and imperative
>> programs is as clear-cut as the difference between declarative and
>> imperative sentences in natural languages: "I like it" vs. "Sit down".
>> A declarative sentence can be true or false; an imperative sentence
>> can be executed or not executed.
>>
>> Here is how sorting is done in answer set programming (I learned this
>> from Roland Kaminski):
>>
>> order(X,Z)                        % X is the predecessor of Z
>>   :- p(X;Z),                      % if both X and Z belong to p,
>>      X < Z,                       % X is less than Z,
>>      #false : p(Y), X < Y, Y < Z. % and no Y in p lies strictly between.
>>
>> A declarative program may look similar to a procedural program if they
>> use similar data structures. Still, the difference is clear: a (truly)
>> declarative program describes what is counted as a solution; a procedural
>> program tells us which operations to perform.
>>
>> Vladimir

From dreamingforward at gmail.com Mon Apr 22 17:49:44 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Mon, 22 Apr 2013 14:49:44 -0700
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
Message-ID:

> But one man's specification is another man's algorithm.
>[...]
> Similarly, a C program is a specification for a hundred different
> machine code algorithms that manage memory, register allocation etc.

It is so odd that the field is so imprecise about its terms. It mirrors some
of the turmoil I've been experiencing in the programming arena. The c2.com
wiki shows this also. It seems a new "renaissance" is needed where the field
can re-evaluate where it's been and start staking down terms which have been
bandied about and either diluted or misused.

A C program is not a specification in any sense (if it's even appropriate to
call C source text a "program", prior to compilation into an executable).
Only the theorist who has never gotten "close to the machine" could ever say
that. I strongly suggest actually trying simple assembly language programming
on a common CPU architecture to see the path between Programming Languages
and actual implementation, and to get a clear understanding of the very
concrete relationships between source code, machine instructions, memory
pointers, and execution.

A C program is something that is actually *executed*, after the compiler
transforms it, per the language definition/specification, into hardware
instructions specific to the CPU (e.g., an Intel 386), on hardware
implementing a binary logic architecture. One never executes a specification;
at most, one applies it. Further, unless one is a hardware engineer, one
never talks about "machine code algorithms". In the software world, one only
talks at most about machine code instructions. Outside of the days when
programmers flipped switch banks to make a computer do things, there is a
very clear boundary between *hardware* and *software* and they are completely
orthogonal.

I feel this is where the theorists of lambda calculus have crossed a boundary
into machine execution, where things are completely different: governed by
strict binary logic (not parsers or lexical definitions) and implemented by
electrical circuits which must obey the *laws of physics*. The relationship
between programming language definition and actual hardware is important and
not something that can just be ignored. It is something informed by the math
of binary logic (AND, OR, and NOT gates) and governed by the laws of physics.

One does not have to sacrifice the pure world of math and theory. It's like
confusing the trees for the forest, or the program on a Turing Machine with
the linear symbols on the tape. They are both perfectly clear (at least to
the programmer), but also completely different from each other.

Sorry if my words are a bit blunt, but I'm trying to make clarity out of
chaos.

--
MarkJ
Tacoma, Washington

From dreamingforward at gmail.com Mon Apr 22 18:00:16 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Mon, 22 Apr 2013 15:00:16 -0700
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
 <20850.58724.221000.32567@gargle.gargle.HOWL>
 <20851.2568.619000.23847@gargle.gargle.HOWL>
Message-ID:

> It was implied that the designers of C++ are not using the abstract
> mathematical tools provided by research, but that simply isn't true.
> Making tractable extensions to C++'s generic programming capabilities
> requires solid theory and the guys down the hall from me here at Texas A&M
> simply blow my mind with the kinds of things they are working on.
Although the C++ folks have managed to create a mini Turing Machine in the
preprocessing/generic language of C++, it should not be equated to the
theoretical realm of lambda calculus; it is like conflating the domain C of
the complex numbers with the domain R of the reals. The fact that one shares
some similarities doesn't imply that they can be placed in the same group
together.

--
MarkJ
Tacoma, Washington

From micinski at cs.umd.edu Mon Apr 22 18:21:21 2013
From: micinski at cs.umd.edu (Kristopher Micinski)
Date: Mon, 22 Apr 2013 18:21:21 -0400
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
 <20850.58724.221000.32567@gargle.gargle.HOWL>
 <20851.2568.619000.23847@gargle.gargle.HOWL>
Message-ID:

http://matt.might.net/articles/c++-template-meta-programming-with-lambda-calculus/

Why not?... :-)

Kris

On Mon, Apr 22, 2013 at 6:00 PM, Mark Janssen wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
>> It was implied that the designers of C++ are not using the abstract
>> mathematical tools provided by research, but that simply isn't true.
>> Making tractable extensions to C++'s generic programming capabilities
>> requires solid theory and the guys down the hall from me here at Texas A&M
>> simply blow my mind with the kinds of things they are working on.
>
> Although the C++ folks have managed to create a mini Turing Machine in
> the preprocessing/generic language of C++, it should not be equated to
> the theoretical realm of lambda calculus; it is like conflating the
> domain C of the complex numbers with the domain R of the reals.
> The fact that one shares some similarities doesn't imply that they can
> be placed in the same group together.
> --
> MarkJ
> Tacoma, Washington

From jason.a.wilkins at gmail.com Mon Apr 22 19:37:13 2013
From: jason.a.wilkins at gmail.com (Jason Wilkins)
Date: Mon, 22 Apr 2013 18:37:13 -0500
Subject: [TYPES] the possibly uselessness of semantics, was -- The type/object distinction and possible synthesis of OOP and imperative programming languages
In-Reply-To:
References: <20848.61541.156000.365368@gargle.gargle.HOWL>
 <20850.58724.221000.32567@gargle.gargle.HOWL>
 <20851.2568.619000.23847@gargle.gargle.HOWL>
Message-ID:

I'm not sure how your reply addresses my point at all. I'm here at the
epicenter of C++ and these guys know their theory quite well. It is far from
the ad hoc hacking that was implied.

On Monday, April 22, 2013, Mark Janssen wrote:

> > It was implied that the designers of C++ are not using the abstract
> > mathematical tools provided by research, but that simply isn't true.
> > Making tractable extensions to C++'s generic programming capabilities
> > requires solid theory and the guys down the hall from me here at Texas A&M
> > simply blow my mind with the kinds of things they are working on.
>
> Although the C++ folks have managed to create a mini Turing Machine in
> the preprocessing/generic language of C++, it should not be equated to
> the theoretical realm of lambda calculus; it is like conflating the
> domain C of the complex numbers with the domain R of the reals.
> The fact that one shares some similarities doesn't imply that they can
> be placed in the same group together.
> --
> MarkJ
> Tacoma, Washington
>

From u.s.reddy at cs.bham.ac.uk Mon Apr 22 19:38:56 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Tue, 23 Apr 2013 00:38:56 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
Message-ID: <20853.51728.281000.952320@gargle.gargle.HOWL>

Mark Janssen writes:

> > But one man's specification is another man's algorithm.
> >[...]
> > Similarly, a C program is a specification for a hundred different
> > machine code algorithms that manage memory, register allocation etc.
> > ...
> A C program is not a specification in any sense (if it's even
> appropriate to call C source text a "program", prior to compilation
> into an executable). Only the theorist who has never gotten "close to
> the machine" could ever say that.

Nikhil's point was that a C program is a "specification" in the sense that it
has details left out which are filled in by the compiler and the run-time
system automatically. He is pointing out the *relativity* of the notions of
specification and implementation.

May I also point out that this message of yours was way too patronizing?
Please take it for granted that we are all computer scientists here. We have
all written assembly language programs, perhaps small compilers and fragments
of operating systems as part of our education. So, we don't need to be
lectured on those topics.

However, almost all of us on this list would agree that a compiler does not
constitute the "meaning" of a programming language. Rather, the compiler is
an implementation tool that must preserve the meaning of programs, with a
notion of "meaning" that is independently defined.

Cheers,
Uday

From dreamingforward at gmail.com Mon Apr 22 20:52:24 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Mon, 22 Apr 2013 17:52:24 -0700
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <20853.51728.281000.952320@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
Message-ID:

> Nikhil's point was that a C program is a "specification" in the sense that
> it has details left out which are filled in by the compiler and the
> run-time system automatically. He is pointing out the *relativity* of the
> notions of specification and implementation.

But there are no details left out. Neither the computer nor compiler "fills
in the gaps". What computing devices are you talking about? At every step, at
the various levels of abstraction, from the high-level source code to the
binary executable, there is a complete and detailed "transformation" logic.
It will compile down to the same machine code *every* time, if it's working
properly. That is the beauty or power of the machine -- it is completely and
rigorously predictable; if it isn't, it's not a computer.

> May I also point out that this message of yours was way too patronizing?
> Please take it for granted that we are all computer scientists here. We
> have all written assembly language programs, perhaps small compilers and
> fragments of operating systems as part of our education. So, we don't need
> to be lectured on those topics.

I really thought so, but these continued confusions leave me little else to
conclude. What machines do you use, where a C compiler fills in the gaps of
your source code? Quantum computers?

> However, almost all of us on this list would agree that a compiler does not
> constitute the "meaning" of a programming language. Rather, the compiler is
> an implementation tool that must preserve the meaning of programs, with a
> notion of "meaning" that is independently defined.

Now you have pointed out an important issue. Source code has no meaning.
Computers are inert and not capable of determining meaning. Its meaning is
determined by the *programmer* who is following the intentions and
specification of the *language designer*. I don't believe the word "meaning"
needs to be ambiguous because there should be no disagreement that machines
are not conscious.

I do appreciate the dialog and respect your experiences, but these confusions
and ambiguities are seriously debilitating.

--
MarkJ
Tacoma, Washington

From vl at cs.utexas.edu Mon Apr 22 23:51:31 2013
From: vl at cs.utexas.edu (Vladimir Lifschitz)
Date: Mon, 22 Apr 2013 22:51:31 -0500
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <20853.51728.281000.952320@gargle.gargle.HOWL> (message from Uday S Reddy on Tue, 23 Apr 2013 00:38:56 +0100)
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
Message-ID: <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>

Nikhil writes,

> a C program is a specification for a hundred different machine code
> algorithms that manage memory, register allocation etc.

According to Mark,

> A C program is not a specification in any sense

They obviously use the same word, "specification," in different ways. Mark
talks about formally specifying the computational problem to be solved (and
so do I). Of course a C program is not a specification in this sense.
Algebraic and differential equations are specifications, as well as pure LISP
programs and ASP programs. Nikhil and Uday talk about specifying a class of
computational procedures that differ from each other by low-level details
such as register allocation. That usage indeed makes the notions of
specification and implementation relative. What is a specification will
depend then on how large the classes are, what level of detail we are willing
to disregard.

--Vladimir

From dreamingforward at gmail.com Tue Apr 23 00:23:54 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Mon, 22 Apr 2013 21:23:54 -0700
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
Message-ID:

On Mon, Apr 22, 2013 at 8:51 PM, Vladimir Lifschitz wrote:

> They obviously use the same word, "specification," in different ways.
> Mark talks about formally specifying the computational problem to be
> solved (and so do I). Of course a C program is not a specification in
> this sense. Algebraic and differential equations are specifications, as
> well as pure LISP programs and ASP programs. Nikhil and Uday talk about
> specifying a class of computational procedures that differ from each other
> by low-level details such as register allocation. That usage indeed makes
> the notions of specification and implementation relative. What is a
> specification will depend then on how large the classes are, what level of
> detail we are willing to disregard.

Yes, I think the key "articulation" point for that word/concept (of
"specification") is the distinction where one can say that one is
*commanding* the computer (i.e. imperatively) vs. merely *telling* the
computer something ("procedurally") -- a subtle but significant difference.
The latter always has a layer of specification/definition where there is a
translation from a higher-level domain to another, lower-level domain.

The key point for me (within the realm of physical hardware and Turing
Machines) is that there is a point where this "ladder" of going from
high-level to lower-level "bottoms out" and there is no more room for
interpretation -- the electrical signals which are governed by the laws of
physics. The symbolic computing crowd seems to be completely different from
all of that and is not addressed in the above. There the "low-level" ends at
the symbols themselves, not logic gates.

So yes, there is a "bottom" where this bottoms out -- the machine
instructions themselves. Once one sends the binary codes to the CPU, one is
commanding; there is no room for interpretation.

> --Vladimir

--
MarkJ
Tacoma, Washington

From u.s.reddy at cs.bham.ac.uk Tue Apr 23 04:20:40 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Tue, 23 Apr 2013 09:20:40 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
Message-ID: <20854.17496.640000.44309@gargle.gargle.HOWL>

Mark Janssen writes:

> The symbolic computing crowd seems to be completely different from all
> of that and is not addressed in the above. There the "low-level" ends
> at the symbols themselves, not logic gates.

Indeed, you understand the point perfectly now.

One of Dijkstra's greatest aphorisms was:

   It is not the job of our programs to instruct the computer.
   Rather, it is the job of the computer to execute our programs.

"Our programs" come first and last. The computer is there to dutifully
execute what we want done. Whether the program achieves its goals or not is a
black-and-white question, which is settled even before the computer gets into
the game.

That is philosophically speaking. In practice, we may have no practical way
to settle the question ahead of time. We may need to test things out on the
computer to see whether our programs behave the way we want them to. However,
in principle, if we spent the time and effort to think through every step and
check every detail, we could have settled the question. We use the computer
merely as a substitute for the lack of diligence on our part.
Cheers,
Uday

From adam at adamsmith.as Tue Apr 23 04:57:03 2013
From: adam at adamsmith.as (Adam Smith)
Date: Tue, 23 Apr 2013 01:57:03 -0700
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <20854.17496.640000.44309@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
 <20854.17496.640000.44309@gargle.gargle.HOWL>
Message-ID:

> It is not the job of our programs to instruct the computer.
> Rather, it is the job of the computer to execute our programs.

In the case of programming by example and related program synthesis
technologies, it really is our job to instruct the computer. We provide it
with a possibly underdetermined set of functional constraints or an
inconsistent set of training examples. The computer's job is to find any one
(of potentially many with equal scores) program, filling in the details, that
best matches the request.

I suppose one could argue that this isn't the activity of programming a
computer -- it's the activity of using some software that lets you make
queries against an implicitly defined database of possible structures that
could later be treated as executable programs (or hardware designs in other
applications). The physical machine is enacting some specific algorithm to
search the space of potential outputs and filter them by the constraints, but
details of this algorithm sink below the level of our perception in the same
way the microcode in modern CPUs is invisible even to the assembly
programmer.

Whatever this activity is called, however, it is the hallmark of the
programmer/modeler/domain-engineer's job when using a (programming) paradigm
like answer set programming. We write documents in formal languages that we
call programs, and we "run" them to get output with controllable properties
that we expect. The implementers of the search software have conspired with
the hardware to offer us the illusion of programming against an inherently
nondeterministic machine in which nature finds a way such that we only
observe it guessing valid solutions. Certainly, this is an illusion offered
by abstraction levels, squinting, and convention, but so are all of the
potential levels one might program at below it, all the way down to the
quantum level. In any nondeterministic programming setting, it's the
programmer's intent to enlist support of something outside of their own code
(somewhere between a hardware random number generator and a SAT solver) to
carry out some form of "filling in the details".

Meanwhile, I'll agree that this phenomenon isn't too relevant when describing
the nature of a C compiler (superoptimizing compilers written as
nondeterministic programs aside).

From bgeron at gmail.com Tue Apr 23 07:55:23 2013
From: bgeron at gmail.com (Bram Geron)
Date: Tue, 23 Apr 2013 13:55:23 +0200
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
Message-ID: <517676AB.7020804@gmail.com>

On 04/23/2013 02:52 AM, Mark Janssen wrote:
> What machines do you use, where a C compiler fills in the gaps of your
> source code? Quantum computers?

Well, the C standard does not fully define the meaning of all C programs,
e.g. when a program divides by zero. The actual program output may depend on
the compiler options. The machine code is usually strictly defined, so in a
sense the C compiler fills in gaps.
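A tiny concrete illustration (my own toy example, not from the standard):

/* Division by zero is undefined behavior in C; the standard gives this
   program no meaning, so the compiler is free to fill the gap however
   it likes. */
#include <stdio.h>

int divide(int n, int d) {
    return n / d;              /* undefined if d == 0 */
}

int main(void) {
    printf("%d\n", divide(10, 0));
    return 0;
}

Compiled without optimization this will typically trap at run time on x86
(SIGFPE); with optimization the compiler may constant-fold the call away or
emit anything at all, since it may assume undefined behavior never occurs.
Either outcome is "correct" relative to the standard.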
The LLVM people have an interesting series of blog posts on this:
http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html

In an attempt to better define declarative vs. imperative, or specification
vs. implementation, it may be valuable to distinguish these categories of
specification:

1.  Functional specification
2.  Asymptotic running time
3a. Asymptotic memory use
3b. Memory use up to a factor 2
4.  Register use and instruction selection

This list is by no means complete. Categories 2 and 3a are typically upper
bounds: if the compiler can optimize stack usage from O(n) to O(1) by
eliminating tail calls, that's fine but optional; the compiler does, in fact,
compile according to specification. Furthermore, category 1 is typically a
lower bound: the compiler may make undefined behavior defined.

In my view, programs in these languages are specifications in these
categories:

   ASP/SAT:  1
   Prolog:   1, 2
   Haskell:  1, 2, 3a
   Java:     1, 2, 3a
   C:        1, 2, 3b
   Assembly: 1, 2, 3a, 3b, 4

Typically, the more "declarative" languages specify fewer categories.

Apart from this numeric "axis", it may be interesting to consider how
programs in a language correspond to the specification. For instance,
programs in C explicitly have to free heap memory. In Java, objects are freed
when they can no longer be reached from the stack. In Haskell, I guess thunks
outside of global variables are freed when they can no longer be reached from
the active thunk, but it is harder to make a mental model of this.

One could thus argue that programs in "declarative" languages correspond
closely to the induced functional spec, but loosely to the induced
extrafunctional spec. Reversely, programs in "imperative" languages
correspond closely to the extrafunctional spec, but more loosely to the
functional spec.

Cheers,
Bram

From adamprocter at mail.missouri.edu Tue Apr 23 11:51:28 2013
From: adamprocter at mail.missouri.edu (Procter, Adam M. (MU-Student))
Date: Tue, 23 Apr 2013 15:51:28 +0000
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
Message-ID: <2EDB69B8-CCED-4E51-AF02-C968D2EA8F90@mail.missouri.edu>

[re-sending now that I'm subscribed with the proper return address;
hopefully it won't auto-reject my post this time :)]

On Apr 23, 2013, at 2:21 AM, "Mark Janssen" wrote:

> But there are no details left out. Neither the computer nor compiler
> "fills in the gaps". What computing devices are you talking about?
> At every step, at the various levels of abstraction, from the
> high-level source code, to the binary executable, there is a
> complete and detailed "transformation" logic. It will compile down
> to the same machine code *every* time, if it's working properly.

Are you claiming, then, that all those fancy optimization flags I can pass to
my C compiler don't actually do anything? Or that (say)
-fomit-frame-pointer is unfaithful to the "complete and detailed
transformation logic"? Better file a bug report.

Really, I just don't understand what you're trying to say here.
Adam

From dreamingforward at gmail.com Tue Apr 23 12:49:43 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Tue, 23 Apr 2013 09:49:43 -0700
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <517676AB.7020804@gmail.com>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <517676AB.7020804@gmail.com>
Message-ID:

>> What machines do you use, where a C compiler fills in the gaps of your
>> source code? Quantum computers?
>
> Well, the C standard does not fully define the meaning of all C
> programs, e.g. when a program divides by zero. The actual program output
> may depend on the compiler options. The machine code is usually strictly
> defined, so in a sense the C compiler fills in gaps.

Right, re: division by zero, but the program output is still deterministic --
the gap was filled before you even wrote your program, by the compiler
vendor/writer. A given compiler (say gcc vs. TurboC) could handle the case of
division by zero differently, but the compiler writer still has to make the
design decision what his/her compiler will do. Once that decision is made, it
will do the same thing every time. *There is no ambiguity, the gap was filled
by the compiler designer *prior* to your code.*

Cheers,
--
MarkJ
Tacoma, Washington

From dreamingforward at gmail.com Tue Apr 23 13:02:59 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Tue, 23 Apr 2013 10:02:59 -0700
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
Message-ID:

>> But there are no details left out. Neither the computer nor compiler
>> "fills in the gaps". What computing devices are you talking about?
>> At every step, at the various levels of abstraction, from the
>> high-level source code, to the binary executable, there is a
>> complete and detailed "transformation" logic. It will compile down
>> to the same machine code *every* time, if it's working properly.
>
> Are you claiming, then, that all those fancy optimization flags I can pass
> to my C compiler don't actually do anything? Or that (say)
> -fomit-frame-pointer is unfaithful to the "complete and detailed
> transformation logic"? Better file a bug report.

I am well aware of compiler flags and was not implying at all that they don't
do anything. What I was saying is that the compiler's output is
deterministic. If you use the same flags and the same source, you will get
the same output -- unless you're suggesting some *magic happens here* event.

MarkJ
Tacoma, Washington

From lkuper at cs.indiana.edu Tue Apr 23 13:17:43 2013
From: lkuper at cs.indiana.edu (Lindsey Kuper)
Date: Tue, 23 Apr 2013 13:17:43 -0400
Subject: [TYPES] Declarative vs imperative
Message-ID:

> Date: Mon, 22 Apr 2013 17:52:24 -0700
> From: Mark Janssen
> To: Uday S Reddy
> Cc: types-list at lists.seas.upenn.edu
> Subject: Re: [TYPES] Declarative vs imperative
>
>>> A C program is not a specification in any sense (if it's even
>>> appropriate to call C source text a "program", prior to compilation
>>> into an executable). Only the theorist who has never gotten "close to
>>> the machine" could ever say that.
>>
>> Nikhil's point was that a C program is a "specification" in the sense that
>> it has details left out which are filled in by the compiler and the
>> run-time system automatically.
>> He is pointing out the *relativity* of the notions of
>> specification and implementation.
>
> But there are no details left out. Neither the computer nor compiler
> "fills in the gaps". What computing devices are you talking about?

Mark, do you really feel that there are no details left out of a C program?
Let's take a toy program as an example:

#include <stdio.h>

int main() {
  int a = 3;
  int b = 0;
  printf("%d\n", a + b);
}

Here are some of the details I left out, and that the compiler will have to
fill in for me:

* Whether, how, and when to optimize out the addition of 0.
* What registers `a` and `b` are stored in.
* Which instructions should carry out the addition.
* ...

Different compilers are going to fill in these details in different ways --
perhaps not very different ways in the case of this toy program. But for a
more sophisticated program, given five different C compilers all compiling to
the same architecture, I doubt that the resulting five compiled programs will
be byte-identical! This is as it should be, and it doesn't necessarily mean
that any of them are "wrong"; rather, it means that there are various correct
ways of implementing the "specification" -- I use the term loosely -- that
the C program provides.

You could argue that they would be identical in the ways that "matter". And
that's one reason why we study programming language semantics, to allow us to
precisely characterize what aspects of a program "matter" (in a given
setting), which gives us tools to *prove* that those five programs are in
fact identical (or not!) with respect to those aspects that "matter".

On the other hand, here are some things I *did* include in the program above,
but that I may or may not actually care about. These include:

* The order in which `a` and `b` are declared.
* The order in which `a` and `b` are initialized.
* The order of arguments to `+`.
* ...

In the first chapter of _Optimizing Compilers for Modern Architectures_,
Allen and Kennedy write, "Sequential languages introduce constraints that are
not critical to preserving the meaning of a computation." The goal of an
optimizing compiler, then, might be to determine which constraints *are*
critical in a source program so that it can throw away the rest. (This is not
such a big deal for my toy program, of course, but it gets a lot more
interesting when we are dealing with programs that could benefit considerably
from automatic parallelization, which they go on to discuss in the book.)
Here, again, we can turn to PL semantics to help us state and prove
properties of program transformations.

> Now you have pointed out an important issue. Source code has no
> meaning. Computers are inert and not capable of determining meaning.
> Its meaning is determined by the *programmer* who is following the
> intentions and specification of the *language designer*. I don't
> believe the word "meaning" needs to be ambiguous because there should
> be no disagreement that machines are not conscious.

Although you could say that a piece of code on its own doesn't have a meaning
until we ascribe it one, it does have a structure that can be analyzed. The
idea of the field of programming language semantics is that programs and
programming languages are mathematical objects, making it possible to study
them using mathematical techniques. There are lots of ways to give a
semantics to a program or a language.

This is not to say that the intent of the programmer or language designer
doesn't matter -- those things do matter, a lot.
Rather, I mean to say that programmers and language designers can use the
tools of programming language semantics to make their intent clear. Not using
those tools leaves both language designers and programmers with a heavy
burden -- language designers would have to try to think about every program
that a programmer might write in advance (and many programs aren't written by
humans anyway), and programmers would have to try to guess the intent of some
designer. It doesn't have to be that way.

Lindsey

From adamprocter at mail.missouri.edu Tue Apr 23 13:35:09 2013
From: adamprocter at mail.missouri.edu (Procter, Adam M. (MU-Student))
Date: Tue, 23 Apr 2013 17:35:09 +0000
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
Message-ID: <88009DE1-94D1-4FAA-89FF-45CD9A620BAB@mail.missouri.edu>

On Apr 23, 2013, at 12:03 PM, "Mark Janssen" wrote:

>>> But there are no details left out. Neither the computer nor compiler
>>> "fills in the gaps". What computing devices are you talking about?
>>> At every step, at the various levels of abstraction, from the
>>> high-level source code, to the binary executable, there is a
>>> complete and detailed "transformation" logic. It will compile down
>>> to the same machine code *every* time, if it's working properly.
>>
>> Are you claiming, then, that all those fancy optimization flags I can pass
>> to my C compiler don't actually do anything? Or that (say)
>> -fomit-frame-pointer is unfaithful to the "complete and detailed
>> transformation logic"? Better file a bug report.
>
> I am well aware of compiler flags and was not implying at all that they
> don't do anything. What I was saying is that the compiler's output is
> deterministic.
> If you use the same flags and the same source, you will get the same
> output -- unless you're suggesting some *magic happens here* event.

So you meant to say that given a source program P, a compiler K, and a set of
compiler options O, K executed with flags O will produce the same
object/target code from P every time, if K is a correct compiler. I'm not so
sure that's true -- smarter people than me have used genetic algorithms to
determine more effective combinations/orderings of optimization phases, for
example (see [1]). But in any case, you would admit that there are *multiple*
target programs -- call that set T(P) -- that a compiler may emit from the
same source program and still be correct. P determines T(P), but usually
|T(P)| > 1. Therefore I do not think it is so fuzzy-headed to call P a
specification. It's not the word I would use in everyday speech, but I
thought you were the one who wanted to shake up the terminology of the
field! ;)

Adam

[1] http://www.cs.rice.edu/~keith/Promo/LACSI2001.pdf.gz

From u.s.reddy at cs.bham.ac.uk Tue Apr 23 14:32:38 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Tue, 23 Apr 2013 19:32:38 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
Message-ID: <20854.54214.994000.652798@gargle.gargle.HOWL>

Mark Janssen writes:

> I am well aware of compiler flags and was not implying at all that they
> don't do anything. What I was saying is that the compiler's output is
> deterministic. If you use the same flags and the same source, you
> will get the same output -- unless you're suggesting some *magic
> happens here* event.

Since we seem to have given up talking about languages and want to talk about
compilers instead, here is a point. There is no law that requires that a
compiler's output should be deterministic. The compiler is quite within its
rights to produce a different object code program during every compilation,
as long as it accepts all legal programs and preserves their meaning as per
the language definition.

In fact, at least in one case, I would welcome a compiler that does that. In
the C standard, the order of evaluation of arguments in function calls is
unspecified. However, if I were stupid enough to depend on a particular
evaluation order that my compiler chooses deterministically, I would have
hidden bugs in my programs that I wouldn't be able to notice. If I paid good
money for a C compiler, I would definitely demand that it should have an
option to randomize the evaluation order of arguments.

Cheers,
Uday

From m.escardo at cs.bham.ac.uk Tue Apr 23 16:38:29 2013
From: m.escardo at cs.bham.ac.uk (Martin Escardo)
Date: Tue, 23 Apr 2013 21:38:29 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <20854.54214.994000.652798@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
 <20854.54214.994000.652798@gargle.gargle.HOWL>
Message-ID: <5176F145.3010804@cs.bham.ac.uk>

On 23/04/13 19:32, Uday S Reddy wrote:
> If I paid
> good money for a C compiler, I would definitely demand that it should have
> an option to randomize the evaluation order of arguments.

You can compliantly randomize code generation for security purposes, as is
well known --- see e.g.

https://wiki.ubuntu.com/Security/Features#Userspace_Hardening
http://en.wikipedia.org/wiki/Buffer_overflow_protection

So, yes, as Uday says, a compiler can be non-deterministic in practice, even
deliberately (and I suspect non-deliberately too). I haven't seen
randomization for the resolution of unspecified evaluation order in the
standard definition of C (or any underspecified language) yet, as Uday
suggests, but I would be surprised if people working on (applied or
theoretical) program verification didn't think of that already.

M.

From mailinglists at robbertkrebbers.nl Tue Apr 23 18:39:12 2013
From: mailinglists at robbertkrebbers.nl (Robbert Krebbers)
Date: Wed, 24 Apr 2013 00:39:12 +0200
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <20854.54214.994000.652798@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
 <20854.54214.994000.652798@gargle.gargle.HOWL>
Message-ID: <51770D90.5040903@robbertkrebbers.nl>

Hello Uday,

On 04/23/2013 08:32 PM, Uday S Reddy wrote:
> In the C standard, the order of evaluation of arguments in function calls is
> unspecified. However, if I were stupid enough to depend on a particular
> evaluation order that my compiler chooses deterministically, I would have
> hidden bugs in my programs that I wouldn't be able to notice. If I paid
> good money for a C compiler, I would definitely demand that it should have
> an option to randomize the evaluation order of arguments.

Let me then notice that in the case of C, it is worse than just
non-determinism.
There are also so called sequence point violations, which happen if you
modify an object more than once (or read after you've modified it) in between
two sequence points. For example:

#include <stdio.h>

int x = 0;
int main() {
  printf("%d ", (x = 3) + (x = 4));
  printf("%d\n", x);
  return 0;
}

not just randomly prints "7 3" or "7 4", but instead gives rise to undefined
behavior, and could print arbitrary nonsense. When compiled with gcc on my
machine, it for example prints "8 4".

> In fact, at least in one case, I would welcome a compiler that does that.

If you would like to explore non-determinism in C, you should take a look at
the executable C semantics by Chucky Ellison and Grigore Rosu
http://code.google.com/p/c-semantics/ It provides an interpreter that can be
used for exactly what you want to do. Also the interpreter of CompCert is
able to explore all possible evaluation orders. A patch by me
http://gallium.inria.fr/blog/non-determinism-and-sequence-points-in-c/ allows
it to be used to detect sequence point violations as well.

Best,
Robbert

From tjark.weber at gmx.de Wed Apr 24 05:57:56 2013
From: tjark.weber at gmx.de (Tjark Weber)
Date: Wed, 24 Apr 2013 11:57:56 +0200
Subject: [TYPES] Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <517676AB.7020804@gmail.com>
Message-ID: <1366797476.1748.19.camel@weber>

Mark,

On Tue, 2013-04-23 at 09:49 -0700, Mark Janssen wrote:
>> Well, the C standard does not fully define the meaning of all C
>> programs, e.g. when a program divides by zero. The actual program
>> output may depend on the compiler options. The machine code is
>> usually strictly defined, so in a sense the C compiler fills in
>> gaps.
>
> Right, re: division by zero, but the program output is still
> deterministic -- the gap was filled before you even wrote your
> program, by the compiler vendor/writer. A given compiler (say gcc
> vs. TurboC) could handle the case of division by zero differently, but
> the compiler writer still has to make the design decision what his/her
> compiler will do. Once that decision is made, it will do the same
> thing every time. *There is no ambiguity, the gap was filled by the
> compiler designer *prior* to your code.*

What you describe here is called *implementation-defined behavior* in the C
standard. There is quite a list of things that are implementation-defined,
see, e.g., http://gcc.gnu.org/onlinedocs/gcc/C-Implementation.html

In contrast, division by 0 causes *undefined behavior* according to the C
standard. This is a quite different category that gives you no guarantees
about your program's behavior whatsoever. Your compiler might very well do
one thing today and another tomorrow.

Best,
Tjark

From u.s.reddy at cs.bham.ac.uk Wed Apr 24 07:16:19 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Wed, 24 Apr 2013 12:16:19 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <51770D90.5040903@robbertkrebbers.nl>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <4DB2AE46-8E26-4B2C-A1AB-BD4FA3CC3F90@mail.missouri.edu>
 <20854.54214.994000.652798@gargle.gargle.HOWL>
 <51770D90.5040903@robbertkrebbers.nl>
Message-ID: <20855.48899.187000.533015@gargle.gargle.HOWL>

Robbert Krebbers writes:

> Let me then notice that in the case of C, it is worse than just
> non-determinism.

I do not believe that "non-determinism" is the right take on it.
C was designed to be a deterministic programming language, at least in its
sequential programming fragment, but various aspects of behaviour are left
"undefined" or "unspecified". So, the programmer is required to avoid all the
undefined/unspecified aspects of behaviour. Any apparent non-determinism in a
C program constitutes a programming bug.

For people that think that a compiler determines the language rather than the
language definition, this might come as a huge shock. That is precisely how I
intended it to be.

> There are also so called sequence point violations,
> which happen if you modify an object more than once (or read after
> you've modified it) in between two sequence points. For example:
>
> #include <stdio.h>
>
> int x = 0;
> int main() {
>   printf("%d ", (x = 3) + (x = 4));
>   printf("%d\n", x);
>   return 0;
> }
>
> not just randomly prints "7 3" or "7 4", but instead gives rise to
> undefined behavior, and could print arbitrary nonsense.

So, when the C definition tells you that "x = 3" is an "expression" rather
than a "statement", it is a huge lie. Expressions are not supposed to change
the state, despite the fact that a lot of library functions of C do precisely
that. Those of us that stopped teaching C in our undergraduate curricula did
so for very good reasons.

Cheers,
Uday

From marc.denecker at cs.kuleuven.be Fri Apr 26 13:04:34 2013
From: marc.denecker at cs.kuleuven.be (Marc Denecker)
Date: Fri, 26 Apr 2013 19:04:34 +0200
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
Message-ID: <517AB3A2.2060508@cs.kuleuven.be>

Hi,

Here is another conception of the notion of "declarative" logic and its
difference with imperative languages.

   This chair has three legs.
   It is nice to sit in it.
   All students attended the lecture this morning.
   Each lecture takes place in exactly one room.
   ...

They are declarative sentences, pieces of information. They are not commands,
not procedures, not descriptions of problems.

In this view, a theory in a declarative logic is a bag of information. It is
not a command, not a program. It is not even a representation of a
computational problem. Consequently, a theory does not "do" anything. It
cannot be executed. It has no solutions. It provides information, and that is
it.

That does not mean that it cannot be useful. In the first place, it can be
used by human experts to communicate information about a domain, in the same
way as currently a UML domain specification is used. It is like a UML domain
specification, but in a formal logic. In the second place, this information
can be used to solve problems, see below.

In this conception, logic is a clear and precise formal language to express
information. The role of model theory is to give a formal account of the
information content of expressions of the declarative language.

There is no inherent form of inference to such a logic. Therefore, theories
in such a logic are not representations of problems. However, the information
in a theory can be used to solve computational problems or perform tasks, by
applying suitable forms of inference. In fact, it may be possible to use the
same theory, the same information, to solve many types of problems, by
application of DIFFERENT forms of inference.
For example, a theory of a scheduling domain can be used for the task of
checking the correctness of a given schedule (by model checking), but also of
generating schedules (by model generation).

This conception of logic differs from the view in almost every area of
computational logic. Computational "declarative" logics such as logic
programming (e.g. Prolog), functional programming languages (e.g. Haskell),
query languages (e.g. SQL), deductive database languages, answer set
programming, abductive logic programming, description logics, temporal
logics, constraint logic programming, "deductive" logic, ...: they have in
common that the language is associated to a unique "inherent" form of
inference. They are "uni-inferential". This has advantages and disadvantages.
In any case, it invites human experts to view a theory as a description of a
computational problem (to be solved by applying the inherent form of
inference).

In some cases, like in Prolog or Haskell, when not only the type of inference
but also its implementation is specified together with the logic, a (unique)
operational semantics is imposed on the language. From then on, theories can
be viewed as procedural programs. E.g., under SLDNF resolution, Prolog
"theories" can be understood as a sort of procedural program (Kowalski's
procedural interpretation) with uncommon procedural features such as
backtracking and unification. We see that by imposing a unique form of
inference and, next, by imposing an implementation of that inference,
"declarative" languages gradually shift in the direction of procedural
languages. I think that this is the reason why the notion of "declarative
language" and the difference with imperative languages has become so blurred.

On the other hand, if no inherent form of inference is attached to the logic,
a theory cannot be viewed as a problem and certainly not as a program. It is
a specification, a bag of information and not more than that.

This does not mean that declarative logic as defined above is not useful for
computational purposes. Quite on the contrary. The above conception of logic
suggests to build Knowledge Base Systems: multi-inference engines that
support different forms of inference on the same theories (the "knowledge
base"). So, perhaps surprisingly, logics without an inherent form of
inference are actually "multi-inferential". This view of logic and of
computing with logic was called the "Knowledge Base System paradigm" by me
and Joost Vennekens in a position paper at ICLP2008.

The research project of my group is to build a KBS system called IDP, for a
"declarative" language in the above sense. This language (FO(.)) is a rich
extension of classical logic. An example of the knowledge base paradigm in
operation can be seen in a prototype interactive configuration tool built
with IDP and available at http://dtai.cs.kuleuven.be/krr/software (See
KB-System demo). This tool supports 5 different forms of inference on the
same theory, providing different functionalities for the user: model
checking, propagation, model generation/expansion, model generation with
optimisation, explanation. In this application, the computations to be
performed are not computationally hard, but what counts is the principle.
This system displays a unique form of reuse of the theory that is excluded in
principle in procedural languages and uni-inferential declarative programming
paradigms. Here the conceptual difference between a declarative theory and a
program is clear-cut.
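To make the multi-inferential idea concrete, here is a toy illustration of my
own (not taken from the IDP demo). Consider a graph-colouring "knowledge
base" consisting of a single sentence:

   \forall x\, \forall y\, (Edge(x,y) \rightarrow Colour(x) \neq Colour(y))

Given complete interpretations of both Edge and Colour, model checking
verifies a proposed colouring; given only Edge, model expansion searches for
an interpretation of Colour that makes the sentence true; and propagation can
already derive, before any search, that two adjacent vertices cannot receive
the same colour. One theory, three tasks, three forms of inference.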
About imperative and procedural languages:

A program in an imperative language can be viewed as a (compound) command to
the computer, but it also contains a precise description of computer
processes (the operational semantics). As such, I do not think there is an
important difference between imperative and procedural languages.

It also means that C, C++, Java programs, ... declaratively describe
something: namely these computer processes. In this respect, these languages
can be seen as process logics. So, for me C is indeed a declarative logic.

However, it is obviously a very different logic than, say, classical logic.
One aspect is related to "universality": C++ can only describe a very
restricted class of (computer executable) processes. Classical logic can be
used (although not so conveniently) to describe such processes as well (e.g.,
using some methodology for temporal knowledge representation such as
situation or event calculus). But classical logic can also be used to
describe other domains: temporal or non-temporal. E.g., to describe the
theory of groups, or the theory of correct university programs. C++ can not
be used for that, simply because what is described in such a theory are not
computer processes. So C++ is a domain specific logic with a very narrow
domain.

Another difference is that C++, seen as a declarative process logic, is
(currently) associated with a unique form of inference: "execution inference"
of the computer process.

Marc Denecker

--
Marc Denecker (prof)                KU Leuven
Departement Computerwetenschappen   tel: ++32 (0)16/32.75.57
Celestijnenlaan 200A Room A02.145   fax: ++32 (0)16/32.79.96
B-3001 Heverlee, Belgium            email: Marc.Denecker at cs.kuleuven.be
http://people.cs.kuleuven.be/~marc.denecker/
........................................................................
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From u.s.reddy at cs.bham.ac.uk Fri Apr 26 17:44:18 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Fri, 26 Apr 2013 22:44:18 +0100
Subject: [TYPES] Declarative vs imperative
In-Reply-To: <517AB3A2.2060508@cs.kuleuven.be>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu>
 <20853.51728.281000.952320@gargle.gargle.HOWL>
 <201304230351.r3N3pVf0014326@betta.cs.utexas.edu>
 <517AB3A2.2060508@cs.kuleuven.be>
Message-ID: <20858.62770.434000.984824@gargle.gargle.HOWL>

Marc Denecker writes:

> There is no inherent form of inference to such a logic. Therefore,
> theories in such a logic are not representations of problems.
>
> However, the information in a theory can be used to solve
> computational problems or perform tasks, by applying suitable forms of
> inference. ...
>
> It also means that C, C++, Java programs, ... declaratively describe
> something: namely these computer processes. In this respect, these
> languages can be seen as process logics. So, for me C is indeed a
> declarative logic.
>
> However, it is obviously a very different logic than say classical
> logic. One aspect is related to "universality": C++ can only describe a
> very restricted class of (computer executable) processes. Classical
> logic can be used (although not so conveniently)
> to describe such processes as well (e.g., using some
> methodology for temporal knowledge representation such as situation or
> event calculus).
>
> Another difference is that C++, seen as a declarative process logic,
> is (currently) associated with a unique form of inference: "execution
> inference" of the computer process.

Thank you for your long and thoughtful post.
I am in agreement with much of what you say. But some key points of
difference remain.

First of all, there is no law that says that declarative languages need to be
executable. However, there is a law to the effect that programming languages
need to have a declarative reading. That is so that we can reason about the
programs written in them. ("Declarative" and "procedural" were the terms
coined in the logic programming community. The corresponding terms in the
programming language community are "denotational semantics" and "operational
semantics".)

The attractiveness of logic programming, when it was first launched, was that
it had a ready-made declarative reading, which was expected to make a big
difference for programming. Unfortunately, Prolog also had a bad procedural
reading that people struggled with. So, in practice, Prolog programmers spent
almost all of their effort on the procedural meaning, and the declarative
meaning went out of the window. In the end, Prolog was a good dream, but a
bad reality.

Coming to classical logic per se, I believe it is ill-fit for describing
computer programs or "processes" as you call them. First of all, classical
logic doesn't have any well-developed notion of time or dynamics, and it has
a nasty existential quantifier which assumes that all things exist once and
for all. In computer systems, new things come into being all the time, well
beyond the time of the original development or deployment. We know how to
write computer programs that deal with that. Classical logic doesn't. (LEJ
Brouwer made this criticism a long time ago in the context of mathematics.
There it might have been just a philosophical problem. But in Computer
Science, it is a *practical* problem.)

I believe logicians of philosophical inclination are prone to be enamored
with what they have and lose sight of what they don't have. For a good part
of two millennia, they kept debating Aristotelian syllogisms, without
realizing that classical logic was yet to be discovered. Finally, it was left
to the mathematicians to formulate classical logic. The logicians of today
are similarly enamored with classical logic without much of an understanding
of what it lacks. We would be ill-advised to listen to them. Or, we would be
stuck for another two millennia, God forbid. [By the way, the Wikipedia page
on Classical Logic is in a pitiful state. I hope somebody will pay attention
to it.]

Brilliant new things are happening in Logic.

- Mathematicians have formulated Toposes (a generalization of Kripke models),
  which give us a great new variety of models for intuitionistic logic. There
  are deep mathematical facts buried in them that continue to be discovered.
  Toposes and intuitionistic logic are appropriate for modeling computer
  programs, which live in a growing dynamic world rather than a static one.

- Girard has formulated Linear Logic, which broadens our idea of what kind of
  "things" a logic can talk about. David Pym and Peter O'Hearn invented
  Bunched Implication Logic, extending Linear Logic with a beautiful
  model-theoretic basis. These logics applied to imperative programming
  (which go by the name of "Separation Logic") are revolutionizing the
  development of technology for imperative programs.

It is time to leave classical logic behind. In fact, we should have done it a
long time ago.
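To give a flavour of what the bunched/linear approach buys (a textbook-style
sketch of my own, not an excerpt from Pym or O'Hearn): Separation Logic lets
one write a local specification for a heap update,

   \{ x \mapsto - \} \; [x] := 7 \; \{ x \mapsto 7 \}

and then lift it to any larger heap with the frame rule,

   \frac{\{P\}\; C \;\{Q\}}{\{P * R\}\; C \;\{Q * R\}}
   \quad \text{(provided } C \text{ does not modify the variables free in } R\text{)}

where * is the separating conjunction from Bunched Implication Logic: P * R
holds of a heap that splits into two disjoint parts satisfying P and R
respectively. It is this rule that makes reasoning about pointer programs
compositional.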
Cheers,
Uday

From u.s.reddy at cs.bham.ac.uk Mon Apr 29 15:39:48 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Mon, 29 Apr 2013 20:39:48 +0100
Subject: [TYPES] John Reynolds
Message-ID: <20862.52356.77000.290092@gargle.gargle.HOWL>

Dear friends and colleagues,

I am sorry to bring the sad news that John Reynolds, whose work and insights I have mentioned and recounted in my recent exchanges, has passed away on Sunday in Pittsburgh.

John has not only done seminal work on a wide range of programming language topics, but he was also a guiding spirit and a great friend to many of us. We will dearly miss him.

At this juncture, we might take a moment to reflect on his lifetime of accomplishment:

http://www.cs.cmu.edu/~jcr/

Uday

--
Prof. Uday Reddy                 Tel: +44 121 414 2740
Professor of Computer Science    Fax: +44 121 414 4281
School of Computer Science       Email: U.S.Reddy at cs.bham.ac.uk
University of Birmingham
Edgbaston
Birmingham B15 2TT               Web: http://www.cs.bham.ac.uk/~udr

From naumann at cs.stevens.edu Tue Apr 30 12:09:14 2013
From: naumann at cs.stevens.edu (David Naumann)
Date: Tue, 30 Apr 2013 12:09:14 -0400 (EDT)
Subject: [TYPES] John Reynolds
In-Reply-To: <20862.52356.77000.290092@gargle.gargle.HOWL>
References: <20862.52356.77000.290092@gargle.gargle.HOWL>
Message-ID:

A guiding spirit indeed. Another opportunity for reflection is this recent interview:

http://link.cs.cmu.edu/article.php?a=763

From urzy at mimuw.edu.pl Wed May 1 08:02:37 2013
From: urzy at mimuw.edu.pl (Paweł Urzyczyn)
Date: Wed, 01 May 2013 14:02:37 +0200
Subject: [TYPES] Future of TLCA
Message-ID: <5181045D.8000101@mimuw.edu.pl>

This message is addressed to everyone interested in the future of the conference TLCA (Typed Lambda Calculi and Applications). The bi-annual conference started in 1993 and will have its 11th edition this June in Eindhoven as part of RDP. After 20 years it seems reasonable to ask some questions about the future. In particular there is an emerging discussion on whether TLCA should merge with RTA or perhaps develop into a new conference of a broader scope. In order to gain opinions from as large a part of the TLCA community as possible, we have created a Google Group:

https://groups.google.com/forum/#!forum/tlca-list

We invite everybody who feels part of the TLCA community to contribute to the discussion with their opinions and suggestions.
Samson Abramsky, Pierre-Louis Curien,
Mariangiola Dezani-Ciancaglini,
Masahito Hasegawa, Luke Ong,
Simona Ronchi Della Rocca,
Paweł Urzyczyn

From drl at cs.cmu.edu Wed May 1 10:22:18 2013
From: drl at cs.cmu.edu (Dan Licata)
Date: Wed, 1 May 2013 10:22:18 -0400
Subject: [TYPES] Future of TLCA
In-Reply-To: <5181045D.8000101@mimuw.edu.pl>
References: <5181045D.8000101@mimuw.edu.pl>
Message-ID: <20130501142218.GB4652@cs.cmu.edu>

In case anyone else would like to follow this discussion using a non-gmail address, you can subscribe an email address to the group by sending an email to

tlca-list+subscribe at googlegroups.com

It just took me a few minutes to figure out how to do this, so I thought I'd share.

-Dan

From marc.denecker at cs.kuleuven.be Fri May 3 09:09:13 2013
From: marc.denecker at cs.kuleuven.be (Marc Denecker)
Date: Fri, 03 May 2013 15:09:13 +0200
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <20858.62770.434000.984824@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL>
Message-ID: <5183B6F9.3080504@cs.kuleuven.be>

On 04/26/2013 11:44 PM, Uday S Reddy wrote:
> Coming to classical logic per se, I believe it is ill-fit for describing computer programs or "processes" as you call them. [...]
>
> It is time to leave behind the classical logic. In fact, we should have done it a long time ago.
"It is time to leave behind the classical logic. In fact, we should have done it a long time ago."

To me, that sounds like a total and unconditional rejection. You mention "(classical logic) is ill-fit for describing computer programs or processes" and your references to scientific progress are all in the context of modelling (the executions of) computer programs ("processes"). I am not an expert in logics for modelling programs and processes, and it was not my intention with my previous email nor with this one to defend classical logic for this purpose. But there are plenty of applications of logic outside modelling computer programs.

I certainly have reservations about FO. I would agree that FO's syntax needs to be improved, that FO is not expressive enough and needs to be extended. For example, I think you have a good point that classical logic's existential quantifier is not well suited for expressing "dynamic creation" of objects, and that such an operator might somehow be added. However, I would not throw away the standard existential quantifier; in many applications it is exactly the one that we need. In my opinion, that is the case with all the connectives and quantifiers of FO (\land, \lor, \neg, \forall, \exists): they are all fundamentally important information composition operators and their semantics in FO is essentially correct. If your rejection is as total as it sounded, you will disagree with that.

Let me give you a potential argument for your case, which would really pull me over to your side. Consider the following information.

A    if in a semester no student registered for a course, then this course does not take place in that semester.

In class we represent it in FO as:

B    ! c ! s : Semester(s) & Course(c) & ~ ? st: Registered(st,c,s) => ~ TakesPlace(c,s)

or as

B'   ! c ! s : Semester(s) & Course(c) & TakesPlace(c,s) => ? st: Registered(st,c,s)

(! is shorthand for forall, ? for exists, ~ for not)

I point my students to the precision of B and B' as representations of A, and to the correctness of B's FO connectives and quantifiers to capture exactly the information that was to be expressed. The example is one of a dime a dozen. I argue to them that standard conjunction, disjunction, (objective) negation, universal and existential quantifiers are fundamentally important information composition operators and their FO model semantics is the right one.

We see computational applications of such sentences using our KBS system as a didactic tool, for querying in the context of a database, for searching course plannings in the context of scheduling, and for some other sorts of applications. Note that the sentence contains the existential quantifier. But I think it is a clear case where the standard FO existential quantifier is appropriate, and not your "dynamic creation" one.

If my strong claims above are right, then I would argue that FO is indeed a base language (in the sense that \land, \lor, \neg, \forall, \exists are base composition operators, and FO's semantics for them is correct!). On the other hand, here is a very convincing way to show me that I am wrong: it suffices to show me ONE database in which the informal proposition A is true and the formal sentence B is false or vice versa.
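To make the closing challenge concrete, here is a small executable sketch in Haskell (an editorial illustration: the toy database and every name in it are invented, not taken from the thread) that evaluates the sentence B over a finite database:

    -- Toy database; every name here is invented for the illustration.
    semesters, coursesDB :: [String]
    semesters = ["fall", "spring"]
    coursesDB = ["logic101"]

    registered :: [(String, String, String)]   -- (student, course, semester)
    registered = [("ann", "logic101", "fall")]

    takesPlace :: [(String, String)]           -- (course, semester)
    takesPlace = [("logic101", "fall")]

    -- B:  ! c ! s : Course(c) & Semester(s) & ~ ? st : Registered(st,c,s)
    --               =>  ~ TakesPlace(c,s)
    holdsB :: Bool
    holdsB = and [ not ((c, s) `elem` takesPlace)
                 | c <- coursesDB, s <- semesters
                 , not (any (\(_, c', s') -> c' == c && s' == s) registered) ]

    main :: IO ()
    main = print holdsB   -- True

On this database the informal proposition A holds (the only unregistered course/semester pair, logic101 in spring, indeed does not take place), and the formal sentence B evaluates to true as well; a counterexample to the challenge would be a database on which exactly one of the two holds.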
Best wishes,
Marc

--
Marc Denecker (prof)                  KU Leuven
Departement Computerwetenschappen     tel: ++32 (0)16/32.75.57
Celestijnenlaan 200A Room A02.145     fax: ++32 (0)16/32.79.96
B-3001 Heverlee, Belgium              email: Marc.Denecker at cs.kuleuven.be
http://people.cs.kuleuven.be/~marc.denecker/
........................................................................
Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From u.s.reddy at cs.bham.ac.uk Fri May 3 17:22:28 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Fri, 3 May 2013 22:22:28 +0100
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <5183B6F9.3080504@cs.kuleuven.be>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be>
Message-ID: <20868.10900.397000.669371@gargle.gargle.HOWL>

Marc Denecker writes:

> "It is time to leave behind the classical logic. In fact, we should have done it a long time ago."
>
> To me, that sounds like a total and unconditional rejection.

No, what I meant is that the classical logic represents a stage in the development of logic. It cannot be taken as the final answer. In fact, we cannot accept that we have a final answer until the entire natural language has been formalized, which might take a very very long time indeed! (The view I take, following Quine, is that logic is a regimentation of natural language. We can perfectly well circumscribe various regimens for various purposes.)

I am entirely happy with the characterization of logical connectives as "information composition" operators. But we can only accept it as a good, but vague, intuition. We do not know what this "information" is. Neither do we know what the information is about. So, in order to claim that classical logic is a canonical information composition calculus, somebody would need to formalize those notions.

Even though Vladimir has omitted the word "programming" in titling this subthread, the discussion has been about "declarative" and "imperative" as paradigms of programming. So, I would rather not divorce myself from programming concerns in discussing these issues.

Cheers,
Uday

PS. I will try to respond to your more detailed points a little later. For now, I just wanted to set the record straight about what you called my "total and unconditional rejection" of classical logic, which it wasn't.

From dreamingforward at gmail.com Fri May 3 18:28:01 2013
From: dreamingforward at gmail.com (Mark Janssen)
Date: Fri, 3 May 2013 15:28:01 -0700
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <20868.10900.397000.669371@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL>
Message-ID:

> > To me, that sounds like a total and unconditional rejection.
>
> No, what I meant is that the classical logic represents a stage in the development of logic. It cannot be taken as the final answer. In fact, we cannot accept that we have a final answer until the entire natural language has been formalized, which might take a very very long time indeed!
> (The view I take, following Quine, is that logic is a regimentation of natural language. We can perfectly well circumscribe various regimens for various purposes.)

But if we're going to be in the Computer Science department, can we get away from the idea of "logic as a regimentation of natural language" (which is fine for the Philosophy department) and move to the idea of logic as equations of Binary Arithmetic and Boolean Algebra?

--
MarkJ
Tacoma, Washington

From flippa at flippac.org Fri May 3 20:02:57 2013
From: flippa at flippac.org (Philippa Cowderoy)
Date: Sat, 04 May 2013 01:02:57 +0100
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL>
Message-ID: <51845031.1020701@flippac.org>

On 05/03/13 23:28, Mark Janssen wrote:

> But if we're going to be in the Computer Science department, can we get away from the idea of "logic as a regimentation of natural language" (which is fine for the Philosophy department) and move to the idea of logic as equations of Binary Arithmetic and Boolean Algebra?

We must do no such thing! Booleans are not especially fundamental to computing, however much they are part of our hardware implementations. To talk about how things map onto binary arithmetic and boolean algebra, we must talk about the things we are mapping onto them: this means accepting that some logics talk about other objects of study.

--
flippa at flippac.org

From tadeusz.litak at gmail.com Fri May 3 20:23:49 2013
From: tadeusz.litak at gmail.com (Tadeusz Litak)
Date: Sat, 04 May 2013 02:23:49 +0200
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <20868.10900.397000.669371@gargle.gargle.HOWL>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL>
Message-ID: <51845515.9040200@gmail.com>

If I may chime in. The original point made by Uday re classical logic:

> Coming to classical logic per se, I believe it is ill-fit for describing computer programs or "processes"

is certainly worthy of attention. But it does not seem to imply the conclusion of that mail:

> It is time to leave behind the classical logic. In fact, we should have done it a long time ago.

(even if it wasn't intended, it does indeed sound "like a total and unconditional rejection"...
such things happen in the fervor of a discussion :-)

"Logical pluralism" is a position rather well-established in the philosophy of logic. I would think that in the context of Computer Science, it is even more tempting.

[incidentally and perhaps contrary to established views, even Brouwer himself could perhaps be seen as one of the first logical pluralists. While he very strongly rejected Fregean-Russellian logicism in *foundations of mathematics*, he has always held the work of Boole and the whole algebraic tradition in logic in high regard... But this is an aside]

It might even happen to be Uday's own position, if I understand correctly the remark that "we can perfectly well circumscribe various regimens for various purposes." Most of my email will elaborate on this.

I would simply say that whenever one wants, needs or has to think of all propositional formulas (also those possibly involving implication, and also those involving fusion, "tensor product" or what have you) as *rewritable to a conjunctive-disjunctive normal form without loss of information*, then the underlying domain logic is essentially classical.

It is hard to imagine whole areas of Theoretical CS if rewriting formulas to CNF or proofs by contradiction/contraposition/excluded middle are suddenly deemed outdated and/or illegal... I mean not only and not even primarily logic programming, but also finite model theory, complexity theory, ontologies/description logics or the whole PODS/VLDB community...

[actually, as a curious aside, the logic of database theorists, while certainly not constructive, is not fully classical either. They dread the top constant and unrestricted negation, preferring instead relative complement. This has to do with assumptions such as "closed world", "active domain" and the demand that queries are "domain independent". In short, their logic is rather that of Boolean rings without identity, which---funnily enough---also happen to be the setting of Stone's original work. It is just a contemporary and ahistorical misrepresentation to say that Stone was working with "Boolean algebras". But this, again, is an aside...]

And even in the context of Curry-Howard correspondence, classical logic is a legitimate setting to discuss languages with control operators, first-class continuations, static catch/throw a la Scheme etc. (a concrete sketch follows this message). There is so much stunningly beautiful work in that community that deserves to be better known...

But, equally obviously, not all the programming languages have such constructs. Furthermore, as linear logicians (already mentioned by Uday) will be happy to tell you, there are contexts when even the intuitionistic notion of implication (so also the one of topos-theorists or proof-assistants, for example) is way too coarse-grained. Particularly when one wants, needs or has to be resource-aware. Also, the recent work of Wadler, Pfenning and other authors suggests that the Curry-Howard correspondence for concurrency will have to do with linear rather than intuitionistic logic.

[And as substructural logicians will be happy to tell you, there are contexts where even linear logicians may seem coarse-grained, thick-skinned, corner-cutting brutes. :-) But this, yet again, is an aside.]

But where I most likely would part ways with Uday is when he claims (if I understand correctly) that we are approaching or even should approach "a final answer" of any kind. To me, searching for one logic valid in all CS-relevant contexts seems a rather misguided enterprise.
Especially or at least when we talk about logic understood as a formal inference system.

What we perhaps need is more introductory logic courses---and also handbooks and monographs---for budding CS undergraduates and graduates (and perhaps also some postgraduates) which would make them understand the subtlety and complexity of the picture. And the benefits and costs of adopting specific inference rules.

Proof-assistant based courses seem to go in just the right direction. I am teaching right now one based on that excellent "Software Foundations" material of Benjamin Pierce et al. I think it changes and sharpens not only the thinking of students, but also that of the teacher himself (or herself :-).

But even this only goes so far---after all, the underlying logic is essentially intuitionistic... on the other hand, any weaker one could quickly become a nightmare for actually discussing things as demanding as semantics of programming languages (with bangs and exclamation marks in every second lemma... :-)

To conclude, a few minor points:

> In fact, we cannot accept that we have a final answer until the entire natural language has been formalized

We'll wait for this only a little longer than for the invention of perpetuum mobile and the heat death of the universe... :-)

And which "natural language" are we talking about? Sometimes I think the only reason why, e.g., Chomsky ever came up with the idea of "universal grammar" was that he did not speak too many languages in the first place (although Hebrew seems reasonably distant from English)...

> (The view I take, following Quine, is that logic is a regimentation of natural language.

Same objection as above, and this is just to begin with.

[The only redeeming features of Quine were that he wrote well and had a certain logical culture. As a philosopher, in my opinion, he had a rather destructive influence on the development of logic, particularly in philosophy departments, even if nowhere near as disastrous as the neopositivists or the majority of "analytic philosophers". But this is just one more aside...]

> We can perfectly well circumscribe various regimens for various purposes.

As said above, I'm perfectly in agreement with this statement.

> I am entirely happy with the characterization of logical connectives as "information composition" operators. But we can only accept it as a good, but vague, intuition. We do not know what this "information" is. Neither do we know what the information is about. So, in order to claim that classical logic is a canonical information composition calculus, somebody would need to formalize those notions.

I think I can agree with every word here. Perhaps the difference then is not so big...

I guess then that "leaving classical logic behind" meant rather "stop presenting it to students as the only, final and >>real<< formalism for Computer Scientists, everything else being a marginal pathology, if mentioned at all"... and if this was indeed intended by this remark, I would have a hard time disagreeing.

Okay... back, then, to popcorn and a comfortable seat in the audience...

Best,
t.
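Tadeusz's Curry-Howard remark about control operators can be made quite concrete: the type of callCC is exactly Peirce's law, ((a -> b) -> a) -> a, a classically valid principle with no intuitionistic proof. A minimal Haskell sketch (assuming the mtl package; the names peirce and escape are editorial):

    import Control.Monad.Cont (Cont, callCC, runCont)

    -- Peirce's law, read inside the continuation monad:
    -- callCC inhabits exactly this shape.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC

    -- Using the captured continuation k to escape a computation early:
    escape :: Int
    escape = runCont (callCC (\k -> k 42 >> return 0)) id   -- evaluates to 42

This is the sense in which adding first-class continuations to a language moves its Curry-Howard reading from intuitionistic towards classical logic.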
From luis.caires at di.fct.unl.pt Sat May 4 07:37:59 2013
From: luis.caires at di.fct.unl.pt (Luis Caires)
Date: Sat, 4 May 2013 12:37:59 +0100
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL>
Message-ID:

On Fri, May 3, 2013 at 11:28 PM, Mark Janssen wrote:

> But if we're going to be in the Computer Science department, can we get away from the idea of "logic as a regimentation of natural language" (which is fine for the Philosophy department) and move to the idea of logic as equations of Binary Arithmetic and Boolean Algebra?

Hi, about this point: perhaps it is helpful to recall some very basic stuff, put in very simple terms, even if it may sound quite elementary...

While what (I think) you call boolean logic may be useful for explaining (basic) computing operations, and provide an adequate foundation for (basic) hardware design, it just does not scale up to higher levels of abstraction, as needed to talk and reason about complex computing systems. It really takes quite a bit to build up from bit-level operations as actually performed by the hardware up to concepts such as "ADTs", "algorithms", "objects", "functions", "modules", "types", "abstract machines" - ideas such as "code as data", "atomicity", "fairness", "levels of interpretation" - and all the great stuff that belongs to the world of (both practical and theoretical) computer science. To be able to talk about all this we need much more than basic "binary arithmetic and boolean algebra".

For our purposes, it is perhaps more convenient to think of "symbolic logic" as just the convenient language to describe properties and formally reason about objects in some domain. Currently, no single unified logic is yet available; we need several logics: different logics describe different kinds of properties of CS objects, useful for different purposes, all based, we hope, on a common trunk of deep principles.

For example, I guess every programmer appreciates the usefulness of types in programming languages. But a type system is nothing more than a certain kind of symbolic logic. It does not calculate with concrete data values or bits, but instead uses logic to deduce and reason about the type of things around, allowing the compiler to prove, at compile time, without executing the generated code, that a program satisfies a set of properties. Namely that, when actually executed, it will not suffer from basic runtime errors, such as invalid operations on data, etc. This is just an example (a small illustration follows below).

Each particular logic provides reasoning rules, allowing us to know how properties expressible in the logic relate, and how the objects the logic talks about satisfy a property or not. In some cases, algorithms can be provided to check whether an object satisfies a given property (automated verification). These basic concepts are essential for computer science, as no artifact (processors, algorithms, programming languages, compilers, etc) can be precisely and fully understood without reference to the various properties it should satisfy.
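Picking up the point above that a type system is a small symbolic logic run by the compiler, here is a hedged Haskell illustration (editorial; the function name is invented):

    -- The signature is a proposition; the definition is its proof.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- Without running anything, the compiler now proves that every
    -- caller must handle the empty-list case: an expression such as
    --   safeHead ([] :: [Int]) + 1
    -- is rejected at compile time, since Maybe Int supports no addition.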
If you want to check that a given code piece really returns a sorted vector, you cannot do this conveniently just with (what I think you call) boolean logic (even if some simple verification problems can be "compiled" down to boolean logic). As another example, some of us may teach (separation) logic to students to empower them with solid techniques for checking the absence of races in concurrent Java programs. As yet another example, we (with colleagues) have recently discovered how to use (linear) logic to reason about the safety of session protocols in distributed systems.

So there is this idea that (symbolic) logic is actually the "calculus of computer science" (see e.g. http://www.cs.rice.edu/~vardi/logic/). While symbolic logic has roots in philosophy, I guess it is fair to recognize that it is now being developed mostly in connection with (theoretical) computer science and math, often driven by the deep relation between logic and computation. So I would really encourage you to look into all this.

Logical concepts may also perhaps be useful to discuss other issues in this thread such as "specs versus programs", and "declarative versus imperative". Let me copy some remarks by Hoare (ACM, 2009):

"So I believe there is now a better scope than ever for pure research in computer science. The research must be motivated by curiosity about the fundamental principles of computer programming, and the desire to answer the basic questions common to all branches of science: what does this program do; how does it work; why does it work; and what is the evidence for believing the answers to all these questions? We know in principle how to answer them. It is the specifications that describes what a program does; it is assertions and other internal interface contracts between component modules that explain how it works; it is programming language semantics that explains why it works; and it is mathematical and logical proof, nowadays constructed and checked by computer, that ensures mutual consistency of specifications, interfaces, programs, and their implementations."

So I guess this view of logic actually belongs to the CSD ...

Thanks,
Luis

--
Best regards,
Luis Caires
http://ctp.di.fct.unl.pt/~lcaires/
Departamento de Informática
FCT Universidade Nova de Lisboa

From kthielen at gmail.com Sat May 4 11:49:48 2013
From: kthielen at gmail.com (Kalani Thielen)
Date: Sat, 4 May 2013 11:49:48 -0400
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <51845515.9040200@gmail.com>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com>
Message-ID: <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com>

> And even in the context of Curry-Howard correspondence, classical logic is a legitimate setting to discuss languages with control operators, first-class continuations, static catch/throw a la Scheme etc.

I am not an academic or qualified in any way to speak on this, I'm just an uneducated, small town, country programmer, but on this point you've raised I wonder if the list could clear up a bit of confusion that I have.

I'm familiar with thinking of types as sets and logical connectives (type constructors) as operations on sets. So the type A*B has size |A|*|B|, the product of the sizes of two sets.
A->B has size |B|^|A| and this works out well (e.g.: Bool->Bool has size 2^2 = 4: not, id, const True, const False).

So like you say, type negation corresponds to a continuation on that type (where a continuation doesn't return any value at all, satisfying the empty type). So ~A=A->_|_. That interpretation works out really well too, because identities like A+B=~A->B can be read as compilation techniques for variants (with the obvious construction and destruction approaches).

But I'm not sure that I've got a straight story on this interpretation of negation, quite. I think that it's suggesting that the size of the set of continuations A->_|_ is |_|_|^|A|, or 0^|A|, which should be 0, right? So there are 0 continuations -- they're impossible to construct?

I appreciate any explanations y'all can offer on this point.

Regards.

On May 3, 2013, at 8:23 PM, Tadeusz Litak wrote: [...]
From u.s.reddy at cs.bham.ac.uk Sat May 4 12:08:55 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Sat, 4 May 2013 17:08:55 +0100
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To:
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL>
Message-ID: <20869.12951.281000.800976@gargle.gargle.HOWL>

Mark Janssen writes:

> But if we're going to be in the Computer Science department, can we get away from the idea of "logic as a regimentation of natural language" (which is fine for the Philosophy department) and move to the idea of logic as equations of Binary Arithmetic and Boolean Algebra?

Oh, no! Wherever you have learnt logic from, I am afraid they didn't do a very good job of explaining what logic is about (which is not unusual). As Marc Denecker alluded to, logic is essentially a study of the information content of statements. The "information content" idea is formalized in terms of consequence: which statement is a consequence of which other statements. If B is a consequence of A, i.e., if B necessarily happens to be true in all situations where A is true, we understand that the information contained in B is already contained in A.

Programming languages are also regimentations of natural language, albeit a different part of natural language than the one classical logic has traditionally concerned itself with. When I was taught programming in the 70's, I was encouraged to use natural language for writing algorithms because it helps you to think better. (However, using natural language correctly, i.e., precisely and unambiguously, requires quite a bit of mathematical training, which is becoming increasingly rare in recent times.)

Programming logic is the logic used for reasoning about programs. There, we are interested not only in the consequences of statements, but also the consequences of commands (or functions, operations or whatever).
We use natural language to communicate (and to think in) every day. It is obviously not meant for just the philosophers!

Cheers,
Uday

From lukstafi at gmail.com Sat May 4 12:10:28 2013
From: lukstafi at gmail.com (Lukasz Stafiniak)
Date: Sat, 4 May 2013 18:10:28 +0200
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com>
Message-ID:

On Sat, May 4, 2013 at 5:49 PM, Kalani Thielen wrote:

> So like you say, type negation corresponds to a continuation on that type (where a continuation doesn't return any value at all, satisfying the empty type). So ~A=A->_|_. [...] So there are 0 continuations -- they're impossible to construct?

My wild guess is that a continuation type ~A is a co-inductive type, and the correspondence with cardinality of sets works only for inductive types. Inductive types are the smallest sets built according to the specification, while co-inductive types are the largest sets meeting some constraints. Therefore the latter have infinite cardinality.

From u.s.reddy at cs.bham.ac.uk Sat May 4 14:17:08 2013
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Sat, 4 May 2013 19:17:08 +0100
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <51845515.9040200@gmail.com>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com>
Message-ID: <20869.20644.390000.837851@gargle.gargle.HOWL>

Tadeusz Litak writes:

> > It is time to leave behind the classical logic. In fact, we should have done it a long time ago.
>
> (even if it wasn't intended, it does indeed sound "like a total and unconditional rejection"... such things happen in the fervor of a discussion :-)
>
> "Logical pluralism" is a position rather well-established in the philosophy of logic. I would think that in the context of Computer Science, it is even more tempting.

Dear Tadeusz, thanks for your input.
When I visited your new home page this morning, I was greeted with a quote from Richard Feynman to the effect: "a scientist speaking about a non-scientific subject is just as dumb as the next guy." which was a good warning to me. So, I won't stray too far into the philosophy of logic.

I am happy to be labelled a logical pluralist. If anything, my position is probably even broader than the pluralists'. As Beall and Restall clarify in Section 7 of their article:

http://homepages.uconn.edu/~jcb02005/papers/defplur.pdf

their pluralism is the "one-many answer", which I already find too restrictive. My view is that logic is a piece of "technology" that we employ for specific purposes. There are many purposes. So there will be many pieces of technology. There is no "one true logic".

As I said in my earlier response to Mark Janssen, I take logic to be the study of "information content" of statements. Since we do not yet have a good idea of what that means precisely, I take logical consequence to be a working definition, but I also believe it will need to be supplanted eventually with a definition based on "information content". Philosophy of information has been taking large leaps in recent years. See this handbook for example:

http://www.amazon.com/Philosophy-Information-Handbook-Science/dp/044451726X

Computer scientists are very much part of this enterprise because logic is but our sister discipline in studying "information".

---

Let me steer the discussion in a different direction. While we are paying a tribute to John Reynolds's lifelong work on "logical foundations of programming", we should take a look at Reynolds's Specification Logic [1,2]. This is a typed "first-order intuitionistic theory" in Tennent's description, where the intuitionistic part goes to account for the dynamic nature of Algol programs. As and when you allocate new variables, you extend the "world", and intuitionistic logic is quite comfortable dealing with such dynamics.

At the same time, Specification Logic has a base type called "assert" for state assertions. Hoare-triple specifications {P} C {Q} are atomic formulas in specification logic. Write them as Spec(P, C, Q) if you wish, with the typing Spec : assert x comm x assert -> o. Now, the assertions are formulas in a subsidiary logic, which is *classical*. Since individual states do not have any dynamics, classical logic works fine for those.

While this setting of an outer logic in one formalism (intuitionistic logic) and an inner logic in a second formalism (classical logic) seems very natural to those of us studying Algol-like languages, most others find their heads reeling with such a structure. To make them feel a bit more comfortable, I tell them that they should think of it as really being a "higher-order" logic. But, it is not a true higher-order logic, where there would be functions of type o -> o, say, which act on propositions in the outer logic. Instead, we have here a second type of propositions, 'assert', which has a different meaning, and a different notion of consequence.

Now, my question is, are there other logics of this kind anywhere?

Cheers,
Uday

[1] Reynolds, J. C., Idealized Algol and its Specification Logic, in D. Neel (ed) Tools and Notions of Program Construction, p. 121-161, Cambridge U. Press, 1982. Also in P.W. O'Hearn and R.D. Tennent (eds) Algol-like Languages, Vol. 1, p. 125-156, Birkhauser, 1997.

[2] Tennent, R. D., Denotational semantics, in S. Abramsky, D.M. Gabbay, T.S.E. Maibaum (eds) Handbook of Logic in Computer Science Vol. 3, p. 169-322, Oxford University Press, 1994.
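To make the two-level structure concrete, here is a rough sketch of the grammar Uday describes, rendered as Haskell datatypes (entirely editorial; Reynolds's actual formulation is in [1], and all constructor names here are invented):

    -- Inner logic: classical assertions about individual states
    -- (the type 'assert').
    data Expr   = Var String | Lit Int
    data Assert = ATrue | AEq Expr Expr | ANot Assert | AAnd Assert Assert

    -- Commands (the type 'comm').
    data Comm   = Skip | Assign String Expr | Seq Comm Comm

    -- Outer logic: intuitionistic specifications (the type 'o'), with
    -- the Hoare triple Spec(P, C, Q) as an atomic formula.
    data Spec   = Triple Assert Comm Assert   -- Spec : assert x comm x assert -> o
                | SImpl Spec Spec
                | SConj Spec Spec
                | SForall String Spec

The point of the sketch is just that Assert and Spec are two distinct syntactic categories of propositions, each with its own notion of consequence, rather than one being a subcase of the other.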
From moezadel at outlook.com Sat May 4 18:19:12 2013
From: moezadel at outlook.com (Moez AbdelGawad)
Date: Sat, 4 May 2013 17:19:12 -0500
Subject: [TYPES] [tag] Re: Declarative vs imperative
In-Reply-To: <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com>
References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com>
Message-ID:

Hi Kalani,

Here are my two cents.

_|_ (bottom/divergence) is a member of every type, including the "empty" type. The cardinality of the empty type is thus 1, not 0. Also, for the same reason, the cardinality of the Bool type is 3, not 2.

-> is the constructor of (lazy/call-by-name) continuous functions (ie, not of all possible mathematical functions). 'Continuous' here is in a domain-theoretic sense, sometimes also called 'Scott-continuity'. The cardinality |A->B| is thus not necessarily |B|^|A|. Noting that A and B contain _|_ as noted above, |A->B| is |B|^|A| in case A and B are 'flat' domains/types. Without much ado, Bool and the empty type are flat.

Also, note that -o-> usually denotes the strict/eager/call-by-value version of ->. If, again, A and B are flat, |A-o->B| = |B|^(|A|-1), where |A|-1 is the cardinality of non-bottom elements in A, because -o-> can map _|_ in A only to _|_ in B, not to other elements of B. As such, if I assume you meant eager functions over booleans, |Bool-o->Bool| = 3^2 = 9, not 4 (I believe you can easily figure out these nine functions if you note that a function, in addition to returning true or false, may also diverge/"return" _|_).

Similarly, we have |A->_|_| = 1.

Regards,

-Moez
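Moez's counting of the strict functions can be checked mechanically. A small editorial Haskell sketch (all names invented), modelling _|_ on the flat booleans as Nothing and a strict function by its graph:

    flatBool :: [Maybe Bool]
    flatBool = [Nothing, Just True, Just False]   -- |Bool| = 3, counting _|_

    -- A strict function (-o->) must send _|_ to _|_, so it is determined
    -- by its action on the two proper elements: 3^2 = 9 such functions.
    strictFns :: [[(Maybe Bool, Maybe Bool)]]
    strictFns = [ [ (Nothing,    Nothing)
                  , (Just True,  t)
                  , (Just False, f) ]
                | t <- flatBool, f <- flatBool ]

    main :: IO ()
    main = print (length strictFns)   -- 9, i.e. |Bool -o-> Bool| = 3^2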
Furthermore, as linear logicians (already mentioned by Uday) will be happy to tell you, there are contexts when even intuitionistic notion of implication (so also the one of topos-theorists or proof-assistants, for example) is way too coarse-grained. Particularly when one wants, needs or has to be resource-aware. Also, the recent work of Wadler, Pfenning and other authors suggests that Curry-Howard correspondence for concurrency will have to do with linear rather than intuitionistic logic. > > > > > > [And as substructural logicians will be happy to tell you, there are contexts where even linear logicians may seem coarse-grained, thick-skinned, corner-cutting brutes. :-) But this, yet again, is an aside.] > > > > But where I most likely would part ways with Uday is when he claims (if I understand correctly) that we are approaching or even should approach "a final answer" of any kind. To me, searching for one logic valid in all CS-relevant contexts seems a rather misguided enterprise. Especially or at least when we talk about logic understood as a formal inference system. > > > > What we perhaps need is more introductory logic courses---and also handbooks and monographs---for budding CS undergraduates and graduates (and perhaps also some postgraduates) which would make them understand the subtlety and complexity of the picture. And the benefits and costs of adopting specific inference rules. > > > > Proof-assistant based courses seem to go in just the right direction. I am teaching right now one based on that excellent "Software Foundations" material of Benjamin Pierce et al. I think it changes and sharpens not only the thinking of students, but also that of the teacher himself (or herself :-). > > > > But even this only goes so far---after all, the underlying logic is essentially intuitionistic... on the other hand, any weaker one could quickly become a nightmare for actually discussing things as demanding as semantics of programming languages (with bangs and exclamation marks in every second lemma... :-) > > > > > > To conclude, a few minor points: > > > > > > > In fact, we cannot accept that we have a final answer until the entire natural language has been formalized > > > > We'll wait for this only a little longer than for the invention of perpetuum mobile and heat death of the universe... :-) > > > > And which "natural language" are we talking about? Sometimes I think the only reason why, e.g., Chomsky ever came up with the idea of "universal grammar" was that he did not speak too many languages in the first place (although Hebrew seems reasonably distant from English)... > > > > > > > (The view I take, following Quine, is that logic is a regimentation of natural language. > > > > Same objection as above, and this is just to begin with. > > > > [The only redeeming features of Quine were that he wrote well and had a certain logical culture. As a philosopher, in my opinion, he had a rather destructive influence on development of logic, particularly in philosophy departments, even if nowhere near as disastrous as the neopositivists or the majority of "analytic philosophers". But this is just one more aside...] > > > > > > > We can perfectly well circumscribe various regimens for various purposes. > > > > As said above, I'm perfectly in agreement with this statement. > > > > > > > > > I am entirely happy with the characterization of logical connectives as "information composition" operators. But we can only accept it as a good, but vague, intuition. 
We do not know what this "information" is. Neither do we know what the information is about. So, in order to claim that classical logic is a canonical information composition calculus, somebody would need to formalize those notions. > > I think I can agree with every word here. Perhaps the difference then is not so big... > > I guess then that "leaving classical logic behind" meant rather "stop presenting it to students as the only, final and >>real<< formalism for Computer Scientists, everything else being a marginal pathology, if mentioned at all"... and if this was indeed intended by this remark, I would have a hard time disagreeing. > > Okay... back then to popcorn and a comfortable seat in the audience... > > Best, > > t. > > From u.s.reddy at cs.bham.ac.uk Sun May 5 05:37:16 2013 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Sun, 5 May 2013 10:37:16 +0100 Subject: [TYPES] [tag] Re: Declarative vs imperative In-Reply-To: <51845515.9040200@gmail.com> References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> Message-ID: <20870.10316.578000.956376@gargle.gargle.HOWL> Tadeusz Litak writes: >> It is time to leave behind the classical logic. In fact, we should have >> done it a long time ago. > (even if it wasn't intended, it does indeed sound "like a total and > unconditional rejection"... such things happen in the fervor of a > discussion :-) Having thought about why it sounds like a "total and unconditional rejection", I believe the difference is in the perspective of what logic is about. Logic consists of the principles of "reasoned discourse", as per Aristotle. Our reasoned discourse happens in natural language, which is a humongous ocean. We may never be able to understand fully all the principles of logic that are there. But it is clear that the logic we do understand (all the known logics put together) represents only a minuscule proportion of the vast ocean of "logic" that is employed in reasoned discourse. So, it seems to me that a great deal of humility is warranted in talking about "logic" in general. In contrast, people who wax lyrical about classical logic seem to have the presumption that classical logic has it all cased. They seem to think that it represents the sum total of all reasonable principles of reasoned discourse (even if they are willing to admit modal logics of one kind or another as being reasonable *extensions* of classical logic). Hence, anybody who talks about alternative logics is seen to be mounting an attack on classical logic, denying the supreme position of classical logic as the one true logic. We, the non-believers, of course deny that classical logic is supreme in any sense. However, that is not an attack on classical logic itself. It is just an attack on the *presumption* that classical logic is supreme. All that we can say about classical logic is that it seems to be the canonical logic for present-day mathematics. Given that mathematics is a very conservative discipline, with the bar of entry for new ideas set very high, it has an abundance of depth but not so much breadth. Thus, a canonical logic for mathematics in no way represents a canonical logic for all of human thought.
In particular, in a young and dynamic discipline like Computer Science, which has none of the mathematical conservatism, we should be free to explore all possible logics and invent new ones. In fact, devising logics is our very main business. We should be very wary of any presumptions about "the canonical logic" of any kind. Cheers, Uday From soloviev at irit.fr Sun May 5 07:23:03 2013 From: soloviev at irit.fr (Sergei SOLOVIEV) Date: Sun, 05 May 2013 13:23:03 +0200 Subject: [TYPES] [tag] Re: Declarative vs imperative In-Reply-To: <20870.10316.578000.956376@gargle.gargle.HOWL> References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <20870.10316.578000.956376@gargle.gargle.HOWL> Message-ID: <51864117.5000508@irit.fr> To add a bit in support of Uday's remark about the "presumption of supremacy" of classical logic: it is well known that classical logic can be embedded in intuitionistic logic using, for example, the negative interpretation (classical "exists" becomes \not\forall\not, etc.). From the point of view of provability, the interpretation of a classical theorem is provable intuitionistically if and only if the original theorem was provable classically. So what is bad, rationally, if instead of a respectable "exists" we say "it is not true that for all x there does not exist..."? Clearly this is "bad publicity", a less "convincing" way of saying it, and bad for "supremacy", but it has nothing to do with scientific rationality itself. (Constructive mathematics is just more subtle concerning existence.) Best regards Sergei Soloviev Uday S Reddy wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > [...]
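Under a propositions-as-types reading, the negative rendering of "exists x. p x" that Sergei mentions is "not (forall x. not (p x))", and the witness-embedding direction is intuitionistically valid. A minimal Haskell sketch, assuming GHC's RankNTypes (the name encode is illustrative, not from any cited work):

    {-# LANGUAGE RankNTypes #-}
    import Data.Void (Void)

    -- The intuitionistically provable half of the negative translation of
    -- "exists x. p x": a concrete witness refutes "for all y, not (p y)".
    encode :: p x -> (forall y. p y -> Void) -> Void
    encode w k = k w

    main :: IO ()
    main = putStrLn "encode typechecks; no refuter k can ever be supplied"

The converse, extracting a witness back out of the encoding, is precisely what needs classical reasoning (or a control operator, as later messages in this thread discuss).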
From kthielen at gmail.com Sun May 5 17:04:24 2013 From: kthielen at gmail.com (Kalani Thielen) Date: Sun, 5 May 2013 17:04:24 -0400 Subject: [TYPES] [tag] Re: Declarative vs imperative In-Reply-To: References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com> Message-ID: <2F25A8E4-A57D-43D2-9BD9-FCDA731B1F2F@gmail.com> Thanks for the response. Maybe I should have stated my question at the start instead of at the end, because I think that I might have been a bit unclear. Let me try again. If |A->B| = |B|^|A| and ~A = A->_|_ then e.g. |~Bool| (a continuation on a Bool) = |Bool->_|_| = 0^2 = 0. This seems to imply that there are 0 continuations on Bool values (or any other non-empty set, though there's still the identity function _|_->_|_ = 0^0 = 1). I'm sure that my interpretation or expectation is wrong somewhere (I'm just an uneducated country programmer, like I said) and I'd appreciate somebody setting me straight. I don't think that we need to worry about induction/coinduction, as there's no reference to recursive types here. The considerations about CBV/CBN and Turing-completeness forcing _|_ into all types can be set aside too, I believe, by just considering the translation of classical logic (as described in e.g. Timothy Griffin's "A Formulae-as-Types Notion of Control") in the simply-typed lambda calculus. Consider just CPS-transformed boolean functions then, where the only source of "non-termination" is in your expectation that invoking a continuation produces 0 values (i.e.: it never returns). Are there 0 continuations, or am I miscounting them somehow (or should counting not apply here)? I'm just very curious to develop intuition about "classical types" a bit more, since I've found them to be very useful and reasonable in other respects (and as I said, I'm sure that it's just my intuition that's wrong or confused here). Thanks. On May 4, 2013, at 6:19 PM, Moez AbdelGawad wrote: > Hi Kalani, > Here are my two cents. > _|_ (bottom/divergence) is a member of every type, including the "empty" type. The cardinality of the empty type is thus 1, not 0. Also, for the same reason, the cardinality of the Bool type is 3, not 2. > [...]
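Moez's count of |Bool -o-> Bool| = 3^2 = 9 above can be checked mechanically. A small Haskell sketch (BoolBot and strictFuns are illustrative names), modelling the flat domain of booleans-with-bottom as Maybe Bool, with Nothing playing the role of _|_:

    type BoolBot = Maybe Bool   -- Nothing plays _|_, Just b is a proper boolean

    -- A strict (eager, -o->) map must send _|_ to _|_, so it is determined
    -- by a table on the two proper inputs, each sent to one of 3 elements.
    strictFuns :: [[(Bool, BoolBot)]]
    strictFuns = [ [(False, oF), (True, oT)] | oF <- outs, oT <- outs ]
      where outs = [Nothing, Just False, Just True]

    main :: IO ()
    main = print (length strictFuns)   -- prints 9 = 3^2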
From Guillaume.Munch at pps.univ-paris-diderot.fr Sun May 5 18:10:45 2013 From: Guillaume.Munch at pps.univ-paris-diderot.fr (Guillaume Munch-Maccagnoni) Date: Mon, 06 May 2013 00:10:45 +0200 Subject: [TYPES] [tag] Re: Declarative vs imperative In-Reply-To: <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com> References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com> Message-ID: <1367791845.3578.181.camel@gm-desktop> Kalani Thielen wrote: > But I'm not sure that I've got a straight story on this interpretation > of negation, quite. I think that it's suggesting that the size of the > set of continuations A->_|_ is |_|_|^|A|, or 0^|A|, which should be 0, > right? So there are 0 continuations -- they're impossible to > construct? Dear Kalani, The cardinality of A->empty is 0 only in case A is non-empty. Otherwise, there is exactly one function. Therefore, by interpreting continuations as you do with sets, you do obtain an interpretation of classical logic, but with boolean truth values and no structure of proofs. Continuations are not interpreted as arrows A->_|_ but as A->R for some chosen R. To witness the difference, let me take a proof of a Sigma^0_1 formula P (roughly speaking, a datatype containing no arrow type). Through the appropriate translation, this gives an intuitionistic proof of (P->R)->R with R not appearing in P. Now I may choose R=P and deduce an intuitionistic proof of P by applying to \x.x. From the computational perspective, this corresponds to the fact that the reduction of a program containing control operators is only defined with respect to an initial context (here the empty context corresponding to \x.x). (Your question is relevant. There have been many arguments against the possibility of a model of classical logic that discriminates proofs, and they can be reduced to a generalization of your remark: there is at most one arrow A->_|_ in any model of intuitionistic logic that has an initial object (a bottom). However, this only indicates that in intuitionistic logic, negation and falsity are not obvious to define.) With this explanation in mind I wish to go back to the following message: Uday S Reddy wrote: > Coming to classical logic per se, I believe it is ill-fit for describing > computer programs or "processes" as you call them. First of all, classical > logic doesn't have any well-developed notion of time or dynamics, and it has > a nasty existential quantifier which assumes that all things exist once and > for all. In computer systems, new things come into being all the time, well > beyond the time of the original development or deployment. We know how to > write computer programs that deal with that. Classical logic doesn't. (LEJ > Brouwer made this criticism a long time ago in the context of mathematics. > There it might have been just a philosophical problem. But in Computer > Science, it is a *practical* problem.) > > - Girard has formulated Linear Logic, which broadens our idea of what kind > of "things" a logic can talk about. David Pym and Peter O'Hearn invented > Bunched Implication Logic, extending Linear Logic with a beautiful > model-theoretic basis.
These logics applied to imperative programming > (which go by the name of "Separation Logic") are revolutionizing the > development of technology for imperative programs. > > It is time to leave behind the classical logic. In fact, we should have > done it a long time ago. Yet Girard, whom Uday mentions, and others have theorized how classical logic can be given a nice notion of "dynamics", all the while being as strong as intuitionistic logic in terms of constructiveness over certain formulae. In the example above, a classical proof of P yields an intuitionistic one. There are restrictions on the shape of P (a datatype containing no arrow), but not on the theorems used to prove P. So we may argue that the constructive content was already in the classical proofs involved. Constructiveness is of course relaxed compared to intuitionistic logic. We may use a direct analogy with programming languages that are not purely functional, but allow effects. Function types are opaque, because it makes no sense to retrieve the source code of a function created at runtime, in the presence of call/cc, storage, I/O, etc. This does not make such programming languages non-constructive or lacking dynamics: the thing still computes values. This makes a nice refinement (along with other examples Uday mentions) of the intuitionist's take on constructiveness, which showed, more than twenty years ago already, that a computational interpretation of logic may give up referential transparency in favor of interactivity with the context/opponent. (On a side note, I know "logicians of philosophical inclination" who are very well aware of such developments regarding the issue of constructiveness.) Sergei SOLOVIEV wrote: > To add a bit in support of Uday's remark about the "presumption of supremacy" of classical logic: [...] > (Constructive mathematics is just more subtle concerning existence.) I do not exactly understand the claim here. However, the particular translation used in my example is already "more subtle concerning existence" than the one sketched by Sergei. Moreover, the negative interpretation does not reduce the problem of understanding the constructive contents of classical logic to that of understanding intuitionistic logic, because problems are converted modulo some translation which is not compositional on proofs. Thus the issue becomes to understand the translations as much as it is to understand intuitionistic logic. Sergei's remark is against the "presumption of supremacy" of classical logic, but it should not be taken to the other extreme: redundancy of classical logic with respect to intuitionistic logic. (It seemed to me that so far, the discussion was indeed in favour of plurality in logic.) Best regards, -- Guillaume Munch-Maccagnoni
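Guillaume's "choose R=P and apply to \x.x" step transcribes almost literally into Haskell. A minimal sketch (DN, ret and extract are illustrative names, not from the cited papers): the answer type R stays abstract in ret, and extract fixes R to the proposition itself and feeds in the identity continuation.

    newtype DN r a = DN { runDN :: (a -> r) -> r }

    ret :: a -> DN r a          -- the intuitionistically valid half: A -> ~~A
    ret x = DN (\k -> k x)

    extract :: DN a a -> a      -- choose R = A and apply to \x.x
    extract m = runDN m id

    main :: IO ()
    main = print (extract (ret (42 :: Int)))   -- prints 42

This also illustrates the remark about initial contexts: a CPS-style program only produces a value once an initial continuation (here id) is supplied.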
From m.escardo at cs.bham.ac.uk Sun May 5 18:31:44 2013 From: m.escardo at cs.bham.ac.uk (Martin Escardo) Date: Sun, 05 May 2013 23:31:44 +0100 Subject: [TYPES] [tag] Re: Declarative vs imperative In-Reply-To: <2F25A8E4-A57D-43D2-9BD9-FCDA731B1F2F@gmail.com> References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <699E7608-2196-4B3F-97DD-5A983F178CE8@gmail.com> <2F25A8E4-A57D-43D2-9BD9-FCDA731B1F2F@gmail.com> Message-ID: <5186DDD0.4030604@cs.bham.ac.uk> In real analysis, it may be contentious what 0^0 ought to be, if anything at all. In set theory (classical or constructive), X^0 is always 1, even when X=0. And so it is in type theory. This is because there is only one function 0->X (that with empty graph, or vacuously defined by zero cases). So if you imagine 1=unit type="true" (and more generally any inhabited set/type X=true) and 0=_|_=false (the type with no inhabitants), then the "classical truth table" agrees with the constructive meaning of (X->Y) = Y^X. The constructive meaning gives you more, in that you are not restricted to 0 and 1 for the possibilities of what X and Y can be. Of course, one should carefully distinguish the meanings of the notation _|_ invoked in the discussions in this list. It clearly has different meanings in e.g. domain theory and logic. In domain theory, it is not a type, but an element of any domain. I hope this explains the confusion. M. On 05/05/13 22:04, Kalani Thielen wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > If |A->B| = |B|^|A| and ~A = A->_|_ then e.g. |~Bool| (a continuation on a Bool) = |Bool->_|_| = 0^2 = 0. > [...]
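Martin's "vacuously defined by zero cases" is directly expressible in Haskell, where the empty type is Data.Void.Void. A small sketch, assuming the EmptyCase extension (fromEmpty and strip are illustrative names; the standard library calls the first one absurd):

    {-# LANGUAGE EmptyCase #-}
    import Data.Void (Void)

    -- the unique function 0 -> X, defined by zero cases; hence X^0 = 1
    fromEmpty :: Void -> a
    fromEmpty v = case v of {}

    -- e.g., discharging an impossible alternative of a sum type
    strip :: Either Void a -> a
    strip (Left v)  = fromEmpty v
    strip (Right a) = a

    main :: IO ()
    main = print (strip (Right True))

By contrast, no total function Bool -> Void can be written, matching the count 0^2 = 0 for inhabited domains.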
-- Martin Escardo http://www.cs.bham.ac.uk/~mhe From marc.denecker at cs.kuleuven.be Mon May 13 06:35:33 2013 From: marc.denecker at cs.kuleuven.be (Marc Denecker) Date: Mon, 13 May 2013 12:35:33 +0200 Subject: [TYPES] [tag] Re: Declarative vs imperative In-Reply-To: <20870.10316.578000.956376@gargle.gargle.HOWL> References: <201304211409.r3LE9tC9010314@betta.cs.utexas.edu> <20853.51728.281000.952320@gargle.gargle.HOWL> <201304230351.r3N3pVf0014326@betta.cs.utexas.edu> <517AB3A2.2060508@cs.kuleuven.be> <20858.62770.434000.984824@gargle.gargle.HOWL> <5183B6F9.3080504@cs.kuleuven.be> <20868.10900.397000.669371@gargle.gargle.HOWL> <51845515.9040200@gmail.com> <20870.10316.578000.956376@gargle.gargle.HOWL> Message-ID: <5190C1F5.4010707@cs.kuleuven.be> On 05/03/2013 11:22 PM, Uday S Reddy wrote: > Marc Denecker writes: > >> "It is time to leave behind the classical logic. In fact, we should >> have done it a long time ago." >> >> To me, that sounds like a total and unconditional rejection. > > No, what I meant is that the classical logic represents a stage in the > development of logic. It cannot be taken as the final answer. In fact, we > cannot accept that we have a final answer until the entire natural language > has been formalized, which might take a very very long time indeed! (The > view I take, following Quine, is that logic is a regimentation of natural > language. We can perfectly well circumscribe various regimens for various > purposes.) > > I am entirely happy with the characterization of logical connectives as > "information composition" operators. We then agree on this point. This view goes back to Frege and even to Leibniz, right? But do we agree that the standard logical connectives of FO correctly implement an important set of basic information composition operators, that is, conjunction, disjunction, negation, and the quantifiers? If so, then FO's connectives will have to be present in other languages as well. We agree that these operators are not enough, but the more universal languages that, in your opinion and mine, the scientific community should strive for will have to contain FO's information composition operators (modulo syntactic sugar) and hence be extensions of classical logic. I'm confused about your position on this right now. On the one hand, you do not (unconditionally) reject FO; on the other hand, in your reaction to my first email, you seemed to react against the fact that I proposed FO as a base language. If you agree that FO's connectives correctly implement a set of basic information composition operators, then you should agree with me on that claim. But if you think FO's connectives are not correct or not basic, then please consider the challenge in my previous email (or see below). > But we can only accept it as a good, > but vague, intuition. We do not know what this "information" is. Neither > do we know what the information is about. So, in order to claim that > classical logic is a canonical information composition calculus, somebody > would need to formalize those notions. I'm sure Tarski would disagree with you; I think this would be his answer. The notion of "information" that you claim is vague is actually formalized in model semantics in a very precise way, albeit slightly implicitly. The information content of a logic expression/theory T is formalized by the class of its models (i.e., the structures in which T is true).
Models are formal representations of possible states of affairs; non-models are formal representations of impossible states of affairs. This set is characteristic of the information expressed in T. This is obvious, e.g., when we look at the notion of logical equivalence: two expressions T, T' are equivalent iff the classes of their models are identical. The logical connectives and quantifiers correspond to operators on these classes of models. Their definition is implicit in the definition of the satisfaction relation. E.g., the class of models of a conjunction is the intersection of the classes of models of the conjuncts. From this formal concept (the class of models), we can derive formal concepts of entailment, validity, and equivalence. All this is mathematical and precise. This formal notion of "information" needs to be connected to the informal notion of "information", which is the interpretation that we, human experts, give to a logical expression. Logic does not know and is not supposed to know what the informal information (in a logic sentence) is about, because this depends on the meaning that we, human experts, give to the uninterpreted non-logical symbols. But once a precise intended interpretation is given, then I would say that the informal information content of a logic sentence is precise as well. For example, if we interpret the nonlogical symbols Semester/1, Course/1, Registered/3, TakesPlace/2 in the way the names suggest, then the intuitive information content of the sentence B: ∀c ∀s: Semester(s) & Course(c) & ~∃st: Registered(st,c,s) => ~TakesPlace(c,s) is perfectly clear and is given by the equally precise informal sentence A: if in a semester no student registered for a course, then this course does not take place in that semester. If anything is vague or imprecise, then I guess it should be easy to come up with, e.g., an example of a database representing a situation where B and A disagree, or at least where it is not clear whether A or B is true or not. This was the challenge in my previous email. I think the informal language sentence A is precise, at least in any context where the underlying relations semester, course, etc. are precise, and I don't think one can find a database where A and B would disagree. I would love to see a counterexample.
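Marc's "information content = class of models" reading can be made concrete for propositional logic in a few lines. A toy Haskell sketch (all names illustrative; valuations are represented as the set of atoms made true), in which conjunction visibly acts as intersection of model classes:

    import Data.List (subsequences)

    data Fm = Atom String | Neg Fm | And Fm Fm | Or Fm Fm

    type Valuation = [String]                 -- atoms assigned True

    -- the information content of f: the class of valuations satisfying it
    models :: [String] -> Fm -> [Valuation]
    models atoms f = [ v | v <- subsequences atoms, holds v f ]
      where
        holds v (Atom a)  = a `elem` v
        holds v (Neg g)   = not (holds v g)
        holds v (And g h) = holds v g && holds v h
        holds v (Or g h)  = holds v g || holds v h

    main :: IO ()
    main = do
      let atoms = ["p","q"]
      print (models atoms (And (Atom "p") (Atom "q")))  -- intersection: [["p","q"]]
      print (models atoms (Or  (Atom "p") (Atom "q")))  -- union of the two classes

Equivalence of two formulas is then literally equality of their model classes, matching the iff stated above.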
> > Even though Vladimir has omitted the word "programming" in titling this > subthread, the discussion has been about "declarative" and "imperative" as > paradigms of programming. So, I would rather not divorce myself from > programming concerns in discussing these issues. One can only view a logic theory as a program if one associates an inherent form of inference to the logic. As I argued in my first email, this is the first step in messing up the difference between declarative logic and programming languages. It is the association of a unique inherent form of inference to a logic that blurs the distinction between declarative theories and procedural programs. Cheers, Marc > > Cheers, > Uday > > PS. I will try to respond to your more detailed points a little later. For > now, I just wanted to set the record straight about what you called my > "total and unconditional rejection" of classical logic, which it wasn't. -- Marc Denecker (prof) KU Leuven Departement Computerwetenschappen tel: ++32 (0)16/32.75.57 Celestijnenlaan 200A Room A02.145 fax: ++32 (0)16/32.79.96 B-3001 Heverlee, Belgium email: Marc.Denecker at cs.kuleuven.be http://people.cs.kuleuven.be/~marc.denecker/ From Mario.Frank at uni-potsdam.de Tue Jul 2 04:40:22 2013 From: Mario.Frank at uni-potsdam.de (Mario Frank) Date: Tue, 02 Jul 2013 10:40:22 +0200 Subject: [TYPES] Types and Theorem Proving Message-ID: <51D291F6.2000708@uni-potsdam.de> Dear list members, I am a (nearly graduated) CS student at the University of Potsdam (Germany). Currently I am trying to focus my plans for my PhD. During the last years, I have been researching in the field of theorem proving and have attended the CADE ATP Systems Competition twice (IJCAR 2012 and CADE 24). Since theorem provers partially support typed logics, this mailing list could be a good forum to get some impression of experiences with theorem provers. In order to focus my plans for my PhD, I am trying to get some impressions about the experiences of implementers and users of theorem provers (interactive and automated) and theorem-proving assistants (e.g., preprocessors for theorem proving). The scope is to collect some information about the use of theorem provers and the strengths and weaknesses seen by users. Commonly missing functionality and gaps are also in my scope. Thus, I created a small survey which I would like to distribute. Of course, all information given is treated as confidential, and no information about the people who fill in the survey will be disclosed. The survey can be found at: http://apache.cs.uni-potsdam.de/de/profs/ifi/theorie/deduction/theorem-proving-survey Sincerely, Mario Frank Institute for Computer Science University of Potsdam From jonathan.aldrich at cs.cmu.edu Fri Aug 2 05:45:47 2013 From: jonathan.aldrich at cs.cmu.edu (Jonathan Aldrich) Date: Fri, 02 Aug 2013 02:45:47 -0700 Subject: [TYPES] essay on technical factors behind the success of objects Message-ID: <51FB7FCB.3040407@cs.cmu.edu> Dear Types members, There is occasional, continuing speculation in the language design and type system community regarding the industrial success of objects. The "type/object distinction" discussion on the types list this spring touched on this issue, for example. Many industrially-successful programming languages are object-oriented; is this just marketing, or a psychological phenomenon, or does it have a technical basis?
I have written a manuscript, to appear in the Onward! 2013 essay proceedings, exploring a possible technical explanation for the success of objects. The essay argues that objects (and object types, in contrast for example to ADTs) facilitate interoperability between different implementations of an abstraction, and that interoperability is a critical requirement both for achieving architectural (large-scale, framework-style) reuse, and for supporting software ecosystems. The current draft of the manuscript is available at: http://www.cs.cmu.edu/~aldrich/papers/objects-essay.pdf I would welcome feedback on the essay--I realize that its thesis may be controversial, but even if some do not agree with it, I hope I will at least have expressed the essay's argument clearly. Comments received by this coming Monday, August 5th, are particularly appreciated, as I will be able to incorporate them into the published version of the essay. Cheers, Jonathan Aldrich From twilson at csufresno.edu Mon Aug 19 02:47:53 2013 From: twilson at csufresno.edu (Todd Wilson) Date: Sun, 18 Aug 2013 23:47:53 -0700 Subject: [TYPES] ICS calendar of conferences? Message-ID: Has anyone thought of maintaining an .ics calendar of programming-language conferences and other conferences of interest to the readers of this group? I'm imagining that calendar entries would include the kind of information and links you find in the LICS Newsletters, have separate entries for the various deadlines, be kept up to date as CFPs are announced and deadlines (inevitably) change, and be published on a publicly accessible server so that it could be easily imported into one's favorite calendar application. I would be willing to do the initial work in setting up such a calendar if it could eventually become a community-managed resource, with conference organizers being responsible for keeping entries up to date. Please send me any thoughts or suggestions you have for organizing and/or implementing this resource, and I'll post a summary in a week or so and ruminate on next steps -- all assuming, of course, that this hasn't already been done and I just didn't look hard enough before volunteering to do this! Thanks, Todd Wilson From rajagopal.pankajakshan at live.com Fri Aug 30 03:15:24 2013 From: rajagopal.pankajakshan at live.com (Rajagopal Pankajakshan) Date: Fri, 30 Aug 2013 07:15:24 +0000 Subject: [TYPES] Questions regarding a clock inference system for data-flow languages Message-ID: Esteemed type theory experts, During my internship at Google this summer I was assigned to study type theory applied to stream-processing languages. One approach we considered was that of the so-called clock calculi, as found for instance in Lucid-Synchrone [3]. Lucid-Synchrone defines a type inference system to analyse synchronisation between data-flow streams. In this calculus, a stream clock s can be the base execution clock, the sample of another clock s on some condition (a name X or a variable n), or a variable α. The paper claims that clock analysis can be performed using Hindley-Milner type inference. The basic mechanism of type polymorphism is to universally quantify the free clocks α1,...,αn and names X1,...,Xm of a definition in its context H, here: gen(H, cl) = ∀α1...αn.∀X1...Xm.cl where {α1,...,αn} = FV(cl) - FV(H) and {X1,...,Xm} = FN(cl) - FN(H) This allows a definition to be reused at each use site, by considering an instantiation relation: cl[s1,...,sn/α1,...,αn][c1,...,cm/X1,...,Xm] ≤ ∀α1...αn.∀X1...Xm.cl
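The gen(H, cl) rule just quoted can be prototyped in a few lines, which shows how clock variables and carrier names are generalized in the same way. A toy Haskell sketch (Clock, Scheme, gen are illustrative names; this is not the Lucid Synchrone implementation):

    import Data.List (nub, (\\))

    data Clock = CVar String            -- a clock variable (alpha)
               | Arrow Clock Clock      -- function clocks
               | On Clock String        -- sampling: cl on X, X a carrier name
      deriving Show

    data Scheme = Forall [String] [String] Clock  -- quantified vars, names, body
      deriving Show

    fv, fn :: Clock -> [String]
    fv (CVar a)    = [a]
    fv (Arrow c d) = nub (fv c ++ fv d)
    fv (On c _)    = fv c
    fn (CVar _)    = []
    fn (Arrow c d) = nub (fn c ++ fn d)
    fn (On c x)    = nub (fn c ++ [x])

    -- generalize whatever is free in the clock but not in the environment
    gen :: [Clock] -> Clock -> Scheme
    gen env cl = Forall (fv cl \\ concatMap fv env) (fn cl \\ concatMap fn env) cl

    main :: IO ()
    main = print (gen [] (Arrow (CVar "s") (On (CVar "s") "m")))
    -- Forall ["s"] ["m"] ..., i.e. the scheme of flip in the derivation below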
∀α1...αn.∀X1...Xm.cl

My first question arises from the understanding that names X do not seem to have the same status as variables α in this type system. They are generalised and instantiated in the same manner. For instance, consider the following Lucid program:

node flip (x) = x when yes where clock yes = true

The boolean stream function flip samples all occurrences of x according to a local stream yes. According to the type system [3, figure 5], it has clock ∀α.∀X.α -> α on X, as the following derivation shows:

|- true : s
_________________________________________________________________________
|- clock yes = true : (m:s)
_________________________________________________________________________
[yes : (m:s), x : s] |- x when yes : s on m
_________________________________________________________________________
[x : s] |- x when yes where clock yes = true : s on m   (m not in [x : s])
_________________________________________________________________________
|- node flip (x) = x when yes where clock yes = true : [flip : s->s on m]

Gen([],s->s on m) = ∀s∀m.s->s on m

The opposite stream function flop, below, can also be given type ∀t∀n.t->t on n.

node flop (x) = x when no where clock no = false : [flop : ∀t∀n.t->t on n]

Now, if we let H = [flip : ∀s∀m.s->s on m, flop : ∀t∀n.t->t on n] then, by rule (Inst) in [3, Figure 5], we can assign the same clock to flip and flop with the following type derivation

u -> u on o ≤ ∀s.∀m.s->s on m     u -> u on o ≤ ∀t.∀n.t->t on n
_____________________________     _____________________________
H[x:s] |- flip : u -> u on o      H[x:s] |- flop : u -> u on o      H[x:s] |- x : s
________________________________________________________________________________
H[x:s] |- flip (x) : u on o       H[x:s] |- flop (x) : u on o
______________________________________________________________
H[x:s] |- flip (x) & flop (x) : u on o
______________________________________________________________ (1)
H |- node sum (x) = flip (x) & flop (x) : [sum : u -> u on o]

Doesn't function sum deadlock, or does this mean that it needs a buffer? I tried to figure this out by testing the example with the implementation of Lucid Synchrone available on the web [2]. Actually, it behaves differently from what the paper says. For instance, if we try to define function flip as above, the compiler says:

"This expression has clock 'c but is used with clock (__c1:'d)"

It refuses to consider the assumption 'c as a sampled clock (__c1:'d) generated by the when operator. Shouldn't it infer (__c1:'d) instead? However, if we add a clock declaration to do that:

let node sample (x, clock y) = x when y
let node yes (x) = sample (x, true)
let node no (x) = sample (x, false)
let node yesandno (x) = yes x & no x

then the compiler replies:

# lucyc -i yesandno.ls
val sample : 'a * bool => 'a
val sample :: 'a * 'a -> 'a on ?_y0
val yes : 'a => 'a
val yes :: 'a -> 'a on ?_y0
val no : 'a => 'a
val no :: 'a -> 'a on ?_y0
val yesandno : bool => bool
val yesandno :: 'a -> 'a on ?_y0

and produces a Caml program. But there are so many clocks in it that I can't really figure out what it does. Instead, I looked at the user manual [1] to determine which of [2] or [3] to focus on. Concerning instantiation, page 67, it says something different:

"The clock (carrier-clock : stream-clock) of a stream e means that e has value carrier-clock which has the stream clock stream-clock. The value may be either a global value (defined at top-level) or an identifier. This identifier is unique and is an abstraction of the actual value of e."
I don't understand how this uniqueness rule is specified in the paper [3] and, more generally, how it relates to type polymorphism. Doesn't it instead refer to existential quantification? Also, in which cases does it apply in the implementation?

In the end, I'm a bit confused and wonder whether polymorphism matters at all. As long as one is not using streams of functions, doesn't a simple sub-typing rule suffice (to handle a base clock and samples "c on x")?

REFERENCES
[1] Marc Pouzet. Lucid Synchrone Release, version 3.0. Tutorial and Reference Manual. April 2006.
[2] Lucid-Synchrone v3.0b byte-code. URL http://www.di.ens.fr/~pouzet/lucid-synchrone/lucid-synchrone-3.0b.byte.tar.gz (md5 87ffdb559ad882a0d92a1213466c2a3c)
[3] Jean-Louis Colaço, Alain Girault, Grégoire Hamon, and Marc Pouzet. Towards a Higher-order Synchronous Data-flow Language. In ACM Fourth International Conference on Embedded Software (EMSOFT'04), Pisa, Italy, September 2004.

From adrien.guatto at laposte.net Fri Aug 30 12:01:42 2013
From: adrien.guatto at laposte.net (Adrien Guatto)
Date: Fri, 30 Aug 2013 18:01:42 +0200
Subject: [TYPES] Questions regarding a clock inference system for data-flow languages
In-Reply-To: References: Message-ID: <20130830180142.5e97c001@simmons>

Hello Rajagopal and TYPES list,

I am doing my PhD under the supervision of prof. M. Pouzet and A. Cohen, working on extended clock calculi for data-flow languages. I'll try to answer your questions to the best of my knowledge.

From: Rajagopal Pankajakshan
To: "types-list at lists.seas.upenn.edu"
Subject: [TYPES] Questions regarding a clock inference system for data-flow languages
Date: Fri, 30 Aug 2013 07:15:24 +0000
Sender: "Types-list"

> During my internship at Google this summer I was assigned to study
> type theory applied to stream-processing languages. One approach we
> considered was that of the so-called clock calculi, as found for
> instance in Lucid Synchrone [3].

First, you may want to read "Clocks as First Class Abstract Types" (Colaço and Pouzet, EMSOFT'03), because the section dedicated to clock typing in EMSOFT'04 is indeed very terse. However, the full system as implemented in Lucid Synchrone has never been described anywhere, as far as I know.

> Lucid Synchrone defines a type inference system to analyse
> synchronisation between data-flow streams. In this calculus, a stream
> clock s can be the base execution clock, the sample of another clock
> s on some condition (a name X or a variable n), or a clock variable α. The
> paper claims that clock analysis can be performed using
> Hindley-Milner type inference.

It is indeed HM-like, but it needs to be extended with first-class abstract datatypes (restricted existential types) à la Laufer & Odersky. See EMSOFT'03 above for references.

> The basic mechanism of type polymorphism is to universally quantify
> the free clock variables α1,...,αn and names X1,...,Xm of a definition in its
> context H, here:
>
> gen(H, cl) = ∀α1...αn.∀X1...Xm.cl where {α1,...,αn} = FV(cl) - FV(H)
> and {X1,...,Xm} = FN(cl) - FN(H)
>
> This allows that definition to be reused where it is used, by considering
> an instantiation relation:
>
> cl[s1/α1,...,sn/αn][c1/X1,...,cm/Xm] ≤ ∀α1...αn.∀X1...Xm.cl
>
> My first question arises from the understanding that names X do not
> seem to have the same status as variables α in this type system. They
> are generalised and instantiated in the same manner.
> For instance,
> consider the following Lucid program:
> node flip (x) = x when yes where clock yes = true
>
> The boolean stream function flip samples all occurrences of x
> according to a local stream yes. According to the type system [3,
> figure 5], it has clock ∀α.∀X.α -> α on X, as the following derivation
> shows:
>
> |- true : s
> _________________________________________________________________________
> |- clock yes = true : (m:s)
> _________________________________________________________________________
> [yes : (m:s), x : s] |- x when yes : s on m
> _________________________________________________________________________
> [x : s] |- x when yes where clock yes = true : s on m   (m not in [x : s])
> _________________________________________________________________________
> |- node flip (x) = x when yes where clock yes = true : [flip : s->s on m]
>
> Gen([],s->s on m) = ∀s∀m.s->s on m

If I'm not mistaken, in your example m is a name, not a condition variable X. Thus it does not get generalized by Gen(). Indeed this program gets rejected by Lucid v3, as you note: a condition name may not escape its scope.

> The opposite stream function flop, below, can also be given type
> ∀t∀n.t->t on n.
>
> node flop (x) = x when no where clock no = false : [flop : ∀t∀n.t->t on n]
>
> Now, if we let H = [flip : ∀s∀m.s->s on m, flop : ∀t∀n.t->t on n] then,
> by rule (Inst) in [3, Figure 5], we can assign the same clock to
> flip and flop with the following type derivation
>
> u -> u on o ≤ ∀s.∀m.s->s on m     u -> u on o ≤ ∀t.∀n.t->t on n
> _____________________________     _____________________________
> H[x:s] |- flip : u -> u on o      H[x:s] |- flop : u -> u on o      H[x:s] |- x : s
> ________________________________________________________________________________
> H[x:s] |- flip (x) : u on o       H[x:s] |- flop (x) : u on o
> ______________________________________________________________
> H[x:s] |- flip (x) & flop (x) : u on o
> ______________________________________________________________ (1)
> H |- node sum (x) = flip (x) & flop (x) : [sum : u -> u on o]
>
> Doesn't function sum deadlock, or does this mean that it needs a
> buffer?

Indeed, from the pure data-flow point of view, this program either needs an infinite buffer on the left input of (&), or deadlocks if (&) is synchronous (i.e. has no buffers on its inputs and outputs).

> I tried to figure this out by testing the example with the
> implementation of Lucid Synchrone available on the web [2].
> Actually, it behaves differently from what the paper says. For
> instance, if we try to define function flip as above, the compiler
> says:
>
> "This expression has clock 'c but is used with clock (__c1:'d)"
>
> It refuses to consider the assumption 'c as a sampled clock (__c1:'d)
> generated by the when operator. Shouldn't it infer (__c1:'d) instead?
> However, if we add a clock declaration to do that:
>
> let node sample (x, clock y) = x when y
> let node yes (x) = sample (x, true)
> let node no (x) = sample (x, false)
> let node yesandno (x) = yes x & no x
>
> then the compiler replies:
>
> # lucyc -i yesandno.ls
> val sample : 'a * bool => 'a
> val sample :: 'a * 'a -> 'a on ?_y0
> val yes : 'a => 'a
> val yes :: 'a -> 'a on ?_y0
> val no : 'a => 'a
> val no :: 'a -> 'a on ?_y0
> val yesandno : bool => bool
> val yesandno :: 'a -> 'a on ?_y0
>
> and produces a Caml program. But there are so many clocks in it that
> I can't really figure out what it does.

Strange, this code fails to compile on my machine with the same version of the compiler.
The clock types that you get are clearly incorrect. This bug seems to be triggered by the "clock" keyword on inputs. If you remove it, you get saner clock signatures and "yes" and "no" are rejected:

val sample : 'a * clock => 'a
val sample :: 'a * (__c0:'a) -> 'a on __c0
val yes : 'a => 'a
File "test.ls", line 2, characters 27-34:
>let node yes (x) = sample (x, true)
>                           ^^^^^^^
This expression has clock 'b * 'c, but is used with clock 'b * (__c1:'b).

> Instead, I looked at the user manual [1] to
> determine which of [2] or [3] to focus on. Concerning instantiation,
> page 67, it says something different:
>
> "The clock (carrier-clock : stream-clock) of a stream e means that
> e has value carrier-clock which has the stream clock stream-clock.
> The value may be either a global value (defined at top-level) or an
> identifier. This identifier is unique and is an abstraction of the
> actual value of e."
>
> I don't understand how this uniqueness rule is specified in the paper
> [3] and, more generally, how it relates to type polymorphism.
> Doesn't it instead refer to existential quantification? Also, in
> which cases does it apply in the implementation?

You are right, and I have a hard time figuring it out from EMSOFT'04 too. It appears in a clearer form in EMSOFT'03, but I believe the final system of LSv3 is still quite a bit different.

The basic idea is simple: you may not have an expression with a clock "'a on ?_c" if the condition ?_c is not present in the current scope. Thus, when clocking "e where D", one basically checks that the conditions "?_c_i" appearing in the clock type of "e" are still present in the current environment.

It may help to consider this property in relation with the code generation scheme. In the generated code, equations get translated to imperative statements, each one conditioned according to its clock type interpreted as a boolean formula. The current values of both clock variables and conditions thus have to be accessible from the current scope, at run-time.

The compiler could automatically pass condition variables around, but it is a design choice that prof. Pouzet deliberately rejected.

> In the end, I'm a bit confused and wonder whether polymorphism
> matters at all. As long as one is not using streams of functions,
> doesn't a simple sub-typing rule suffice (to handle a base clock and
> samples "c on x")?

I am not sure I follow what you have in mind, but note that in Lucid Synchrone, a node does not have a single "base clock" in general. Please consider the following example, where f is parametric in two clock variables that get instantiated to two different clock types in g:

# cat when.ls
let node f x y = (0 fby x, 0 fby y)
let node g x c = f x (x when c)
# lucyc -i when.ls
val f : int -> int => int * int
val f :: 'a -> 'b -> 'a * 'b
val g : int -> clock => int * int
val g :: 'a -> (__c0:'a) -> 'a * 'a on __c0

Also, and although experts on this list are far more knowledgeable than me on this matter, it seems to be common knowledge that type inference for HM-style parametric polymorphism is simpler (and thus easier to implement) and more scalable than (non-structural?) subtyping. An implementation of clock typing following the principles outlined above and using destructive unification takes less than a thousand lines of ML code and is very efficient.

To conclude, you may be interested in the heavily simplified version of clock typing that appears in "Clock-Directed Modular Code Generation for Synchronous Data-Flow Languages" by Biernacki et al.
(LCTES'08), even if it is restricted to the first-order setting. The paper may also serve as a nice introduction to the code generation machinery.

Hope that helps, do not hesitate to send me follow-up questions.

-- Adrien Guatto http://www.di.ens.fr/~guatto/index_en.html

From rajagopal.pankajakshan at live.com Tue Sep 3 04:53:07 2013
From: rajagopal.pankajakshan at live.com (Rajagopal Pankajakshan)
Date: Tue, 3 Sep 2013 08:53:07 +0000
Subject: [TYPES] Questions regarding a clock inference system for data-flow languages
In-Reply-To: <20130830180142.5e97c001@simmons>
References: , <20130830180142.5e97c001@simmons>
Message-ID:

Thanks a lot for your explanations Adrien,

> Date: Fri, 30 Aug 2013 18:01:42 +0200
> From: adrien.guatto at laposte.net
> To: rajagopal.pankajakshan at live.com
> CC: types-list at lists.seas.upenn.edu
> Subject: Re: [TYPES] Questions regarding a clock inference system for data-flow languages
>
> Hello Rajagopal and TYPES list,
>
> I am doing my PhD under the supervision of prof. M. Pouzet and A.
> Cohen, working on extended clock calculi for data-flow languages. I'll
> try to answer your questions to the best of my knowledge.
>
> From: Rajagopal Pankajakshan
> To: "types-list at lists.seas.upenn.edu"
> Subject: [TYPES] Questions regarding a clock inference system for data-flow languages
> Date: Fri, 30 Aug 2013 07:15:24 +0000
> Sender: "Types-list"
>
> > During my internship at Google this summer I was assigned to study
> > type theory applied to stream-processing languages. One approach we
> > considered was that of the so-called clock calculi, as found for
> > instance in Lucid Synchrone [3].
>
> First, you may want to read "Clocks as First Class Abstract
> Types" (Colaço and Pouzet, EMSOFT'03), because the section dedicated to
> clock typing in EMSOFT'04 is indeed very terse. However, the full
> system as implemented in Lucid Synchrone has never been described
> anywhere, as far as I know.

I had a look at the suggested [EMSOFT'03] paper, and there seems to be the same problem there in Figure 3. Since a stream type s is also a clock cl, you can use the (let) rule to generalise its type the same way, even though the intended rule is (let-clock).

> > Lucid Synchrone defines a type inference system to analyse
> > synchronisation between data-flow streams. In this calculus, a stream
> > clock s can be the base execution clock, the sample of another clock
> > s on some condition (a name X or a variable n), or a clock variable α. The
> > paper claims that clock analysis can be performed using
> > Hindley-Milner type inference.
>
> It is indeed HM-like, but it needs to be extended with first-class
> abstract datatypes (restricted existential types) à la Laufer &
> Odersky. See EMSOFT'03 above for references.
>
> > The basic mechanism of type polymorphism is to universally quantify
> > the free clock variables α1,...,αn and names X1,...,Xm of a definition in its
> > context H, here:
> >
> > gen(H, cl) = ∀α1...αn.∀X1...Xm.cl where {α1,...,αn} = FV(cl) - FV(H)
> > and {X1,...,Xm} = FN(cl) - FN(H)
> >
> > This allows that definition to be reused where it is used, by considering
> > an instantiation relation:
> >
> > cl[s1/α1,...,sn/αn][c1/X1,...,cm/Xm] ≤ ∀α1...αn.∀X1...Xm.cl
> >
> > My first question arises from the understanding that names X do not
> > seem to have the same status as variables α in this type system. They
> > are generalised and instantiated in the same manner.
> > For instance,
> > consider the following Lucid program:
> > node flip (x) = x when yes where clock yes = true
> >
> > The boolean stream function flip samples all occurrences of x
> > according to a local stream yes. According to the type system [3,
> > figure 5], it has clock ∀α.∀X.α -> α on X, as the following derivation
> > shows:
> >
> > |- true : s
> > _________________________________________________________________________
> > |- clock yes = true : (m:s)
> > _________________________________________________________________________
> > [yes : (m:s), x : s] |- x when yes : s on m
> > _________________________________________________________________________
> > [x : s] |- x when yes where clock yes = true : s on m   (m not in [x : s])
> > _________________________________________________________________________
> > |- node flip (x) = x when yes where clock yes = true : [flip : s->s on m]
> >
> > Gen([],s->s on m) = ∀s∀m.s->s on m
>
> If I'm not mistaken, in your example m is a name, not a condition
> variable X. Thus it does not get generalized by Gen(). Indeed this
> program gets rejected by Lucid v3, as you note: a condition name may
> not escape its scope.

The names m and n are really arbitrary in this example; I presume the intended meaning was probably to use names, but the judgment equally holds with variables, as it should, shouldn't it:

|- true : s
_________________________________________________________________________
|- clock yes = true : (X:s)
_________________________________________________________________________
[yes : (X:s), x : s] |- x when yes : s on X
_________________________________________________________________________
[x : s] |- x when yes where clock yes = true : s on X   (X not in [x : s])
_________________________________________________________________________
|- node flip (x) = x when yes where clock yes = true : [flip : s->s on X]

Gen([],s->s on X) = ∀s∀X.s->s on X

> > The opposite stream function flop, below, can also be given type
> > ∀t∀n.t->t on n.
> >
> > node flop (x) = x when no where clock no = false : [flop : ∀t∀n.t->t on n]
> >
> > Now, if we let H = [flip : ∀s∀m.s->s on m, flop : ∀t∀n.t->t on n] then,
> > by rule (Inst) in [3, Figure 5], we can assign the same clock to
> > flip and flop with the following type derivation
> >
> > u -> u on o ≤ ∀s.∀m.s->s on m     u -> u on o ≤ ∀t.∀n.t->t on n
> > _____________________________     _____________________________
> > H[x:s] |- flip : u -> u on o      H[x:s] |- flop : u -> u on o      H[x:s] |- x : s
> > ________________________________________________________________________________
> > H[x:s] |- flip (x) : u on o       H[x:s] |- flop (x) : u on o
> > ______________________________________________________________
> > H[x:s] |- flip (x) & flop (x) : u on o
> > ______________________________________________________________ (1)
> > H |- node sum (x) = flip (x) & flop (x) : [sum : u -> u on o]
>
> > Doesn't function sum deadlock, or does this mean that it needs a
> > buffer?
>
> Indeed, from the pure data-flow point of view, this program either
> needs an infinite buffer on the left input of (&), or deadlocks if (&)
> is synchronous (i.e. has no buffers on its inputs and outputs).

Right, so the clocking rules fail to check that the program is synchronous at the (where) rule, in fact.

> > I tried to figure this out by testing the example with the
> > implementation of Lucid Synchrone available on the web [2].
> > Actually, it behaves differently from what the paper says.
> > For instance, if we try to define function flip as above, the compiler
> > says:
> >
> > "This expression has clock 'c but is used with clock (__c1:'d)"
> >
> > It refuses to consider the assumption 'c as a sampled clock (__c1:'d)
> > generated by the when operator. Shouldn't it infer (__c1:'d) instead?
> > However, if we add a clock declaration to do that:
> >
> > let node sample (x, clock y) = x when y
> > let node yes (x) = sample (x, true)
> > let node no (x) = sample (x, false)
> > let node yesandno (x) = yes x & no x
> >
> > then the compiler replies:
> >
> > # lucyc -i yesandno.ls
> > val sample : 'a * bool => 'a
> > val sample :: 'a * 'a -> 'a on ?_y0
> > val yes : 'a => 'a
> > val yes :: 'a -> 'a on ?_y0
> > val no : 'a => 'a
> > val no :: 'a -> 'a on ?_y0
> > val yesandno : bool => bool
> > val yesandno :: 'a -> 'a on ?_y0
> >
> > and produces a Caml program. But there are so many clocks in it that
> > I can't really figure out what it does.
>
> Strange, this code fails to compile on my machine with the same version
> of the compiler. The clock types that you get are clearly incorrect.
> This bug seems to be triggered by the "clock" keyword on inputs. If you
> remove it, you get saner clock signatures and "yes" and "no" are
> rejected:
>
> val sample : 'a * clock => 'a
> val sample :: 'a * (__c0:'a) -> 'a on __c0
> val yes : 'a => 'a
> File "test.ls", line 2, characters 27-34:
> >let node yes (x) = sample (x, true)
> >                           ^^^^^^^
> This expression has clock 'b * 'c,
> but is used with clock 'b * (__c1:'b).

I just used the bytecode distribution available from your website: http://www.di.ens.fr/~pouzet/lucid-synchrone/lucid-synchrone-3.0b.byte.tar.gz (md5 87ffdb559ad882a0d92a1213466c2a3c). It would be kind if you could update it.

> > Instead, I looked at the user manual [1] to
> > determine which of [2] or [3] to focus on. Concerning instantiation,
> > page 67, it says something different:
> >
> > "The clock (carrier-clock : stream-clock) of a stream e means that
> > e has value carrier-clock which has the stream clock stream-clock.
> > The value may be either a global value (defined at top-level) or an
> > identifier. This identifier is unique and is an abstraction of the
> > actual value of e."
> >
> > I don't understand how this uniqueness rule is specified in the paper
> > [3] and, more generally, how it relates to type polymorphism.
> > Doesn't it instead refer to existential quantification? Also, in
> > which cases does it apply in the implementation?
>
> You are right, and I have a hard time figuring it out from EMSOFT'04 too.
> It appears in a clearer form in EMSOFT'03, but I believe the final
> system of LSv3 is still quite a bit different.
>
> The basic idea is simple: you may not have an expression with a clock
> "'a on ?_c" if the condition ?_c is not present in the current scope.
> Thus, when clocking "e where D", one basically checks that the
> conditions "?_c_i" appearing in the clock type of "e" are still present
> in the current environment.
>
> It may help to consider this property in relation with the code
> generation scheme. In the generated code, equations get translated to
> imperative statements, each one conditioned according to its clock type
> interpreted as a boolean formula. The current values of both clock
> variables and conditions thus have to be accessible from the
> current scope, at run-time.
>
> The compiler could automatically pass condition variables around, but
> it is a design choice that prof. Pouzet deliberately rejected.
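That makes things much clearer. To check my understanding of this code generation scheme, here is a rough OCaml sketch I wrote of what an equation like "y = 0 fby (x when c)" might turn into (the names and details are my own guesses, not actual lucyc output):

(* Hand-written approximation of clock-directed compilation: the
   equation's statement only runs at instants where the clock
   condition c is true, so c must be in scope at run-time. *)
type state = { mutable pre_y : int }

let init () = { pre_y = 0 }

(* One call per instant of the base clock; y is present (Some) exactly
   when c is true, and absent (None) otherwise. *)
let step (st : state) (x : int) (c : bool) : int option =
  if c then begin
    let y = st.pre_y in   (* current value of the fby register *)
    st.pre_y <- x;        (* the register advances only on clock c *)
    Some y
  end else
    None

Is that roughly the shape of it?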
>
> > In the end, I'm a bit confused and wonder whether polymorphism
> > matters at all. As long as one is not using streams of functions,
> > doesn't a simple sub-typing rule suffice (to handle a base clock and
> > samples "c on x")?
>
> I am not sure I follow what you have in mind, but note that in Lucid
> Synchrone, a node does not have a single "base clock" in general. Please
> consider the following example, where f is parametric in two clock
> variables that get instantiated to two different clock types in g:
>
> # cat when.ls
> let node f x y = (0 fby x, 0 fby y)
> let node g x c = f x (x when c)
> # lucyc -i when.ls
> val f : int -> int => int * int
> val f :: 'a -> 'b -> 'a * 'b
> val g : int -> clock => int * int
> val g :: 'a -> (__c0:'a) -> 'a * 'a on __c0
>
> Also, and although experts on this list are far more knowledgeable than
> me on this matter, it seems to be common knowledge that type inference
> for HM-style parametric polymorphism is simpler (and thus easier to
> implement) and more scalable than (non-structural?) subtyping. An
> implementation of clock typing following the principles outlined above
> and using destructive unification takes less than a thousand lines of
> ML code and is very efficient.
>
> To conclude, you may be interested in the heavily simplified version of
> clock typing that appears in "Clock-Directed Modular Code Generation
> for Synchronous Data-Flow Languages" by Biernacki et al. (LCTES'08),
> even if it is restricted to the first-order setting. The paper may also
> serve as a nice introduction to the code generation machinery.

Thanks a lot for these detailed explanations; I will certainly look at the LCTES'08 paper.

> Hope that helps, do not hesitate to send me follow-up questions.

Sure, regards.

--- Raj

> -- Adrien Guatto
> http://www.di.ens.fr/~guatto/index_en.html

From ryan at cs.harvard.edu Fri Sep 20 16:36:41 2013
From: ryan at cs.harvard.edu (Ryan Wisnesky)
Date: Fri, 20 Sep 2013 16:36:41 -0400
Subject: [TYPES] decidability of BCCCs?
Message-ID:

Hello,

I am trying to find a reference that states whether or not the free bi-cartesian closed category is decidable. That is, I am wondering if beta-eta equality of the simply typed lambda calculus extended with strong 0,1,+,* types is decidable. (In particular, I am interested in the full calculus, including the eliminator for 0, introduction for 1, and eta for all types).

So far, my search through the literature says that the answer to this question is "almost" or "maybe" :

- the STLC with strong 1,+,* types is decidable (omits 0, the empty type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf

- the STLC with strong 0,1,+,* types has normal forms, but equivalent terms may have different such normal forms: http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf

- work on mechanized proofs of correctness for deciding co-product equality may or may not include the "absurd" eliminator for the empty type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf

I feel like the answer to this question is probably well-known, but I can't seem to find it in the literature. Any help would be appreciated.

Thanks,
Ryan

From tadeusz.litak at gmail.com Fri Sep 20 19:55:44 2013
From: tadeusz.litak at gmail.com (Tadeusz Litak)
Date: Fri, 20 Sep 2013 16:55:44 -0700
Subject: [TYPES] decidability of BCCCs?
In-Reply-To: References: Message-ID: <523CE080.5040708@gmail.com>

Hello Ryan,

it's not exactly my area, but the LiCS'01 reference that you quote mentions earlier work of Neil Ghani:

>>A decision procedure for cartesian closed categories with binary coproducts has been presented in Ghani's thesis [Gh95a] (see [Gh95b] for a summary) <<

[Gh95b] comments on the addition of the empty type at the beginning of Section 3.

And [Gh95a], i.e., Neil's PhD Thesis available at this link: http://hdl.handle.net/1842/404 discusses the subject in conclusions on p. 145. Also see "Initial objects" subsection on p. 104.

Although I cannot find the reference "Inconsistency and extensionality" anywhere: not sure if it materialized.

Anyway, HTH ...
Best,
t.

On 9/20/13 1:36 PM, Ryan Wisnesky wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hello,
>
> I am trying to find a reference that states whether or not the free bi-cartesian closed category is decidable. That is, I am wondering if beta-eta equality of the simply typed lambda calculus extended with strong 0,1,+,* types is decidable. (In particular, I am interested in the full calculus, including the eliminator for 0, introduction for 1, and eta for all types).
>
> So far, my search through the literature says that the answer to this question is "almost" or "maybe" :
>
> - the STLC with strong 1,+,* types is decidable (omits 0, the empty type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf
>
> - the STLC with strong 0,1,+,* types has normal forms, but equivalent terms may have different such normal forms: http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>
> - work on mechanized proofs of correctness for deciding co-product equality may or may not include the "absurd" eliminator for the empty type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf
>
> I feel like the answer to this question is probably well-known, but I can't seem to find it in the literature. Any help would be appreciated.
>
> Thanks,
> Ryan

From adahmad at cs.cmu.edu Sat Sep 21 06:05:00 2013
From: adahmad at cs.cmu.edu (Arbob Ahmad)
Date: Sat, 21 Sep 2013 06:05:00 -0400
Subject: [TYPES] decidability of BCCCs?
In-Reply-To: <523D6630.4050705@cs.cmu.edu>
References: <523CE080.5040708@gmail.com> <523D6630.4050705@cs.cmu.edu>
Message-ID: <523D6F4C.8060507@cs.cmu.edu>

Hello Ryan,

Dan Licata, Bob Harper, and I designed an algorithm for deciding beta-eta equality for the simply typed lambda calculus with 0, 1, +, and * but without unspecified base types. The basic idea is that all of these types are finitely inhabited so we can enumerate their inhabitants to uniquely identify a member of each type. As you mention, finding a unique canonical form is the tricky part. We essentially used a form of multi-focusing that focused on everything in the context it possibly could simultaneously and then performed all necessary inversions simultaneously as well. This means if there is a variable of type bool -> bool in the context, a canonical form in this context must apply this function to both true and false simultaneously to uniquely determine which of the four possible functions of this type the variable represents. We haven't published this result, but I can send you more information if you're interested.

I would also point you to the work of Lindley in Typed Lambda Calculus and Applications 2007. This gives a decision procedure based on extensional rewriting for the simply typed lambda calculus with + and *.
It does not include 0 and 1 but suggests these wouldn't be too hard to add.

Arbob Ahmad

Tadeusz Litak wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hello Ryan,
>
> it's not exactly my area, but the LiCS'01 reference that you quote mentions earlier work of Neil Ghani:
> >>A decision procedure for cartesian closed categories with binary coproducts has been presented in Ghani's thesis [Gh95a] (see [Gh95b] for a summary) <<
>
> [Gh95b] comments on the addition of the empty type at the beginning of Section 3.
>
> And [Gh95a], i.e., Neil's PhD Thesis available at this link:
> http://hdl.handle.net/1842/404
> discusses the subject in conclusions on p. 145. Also see "Initial objects" subsection on p. 104.
>
> Although I cannot find the reference "Inconsistency and extensionality" anywhere: not sure if it materialized.
>
> Anyway, HTH ...
> Best,
> t.
>
> On 9/20/13 1:36 PM, Ryan Wisnesky wrote:
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>> Hello,
>>
>> I am trying to find a reference that states whether or not the free bi-cartesian closed category is decidable. That is, I am wondering if beta-eta equality of the simply typed lambda calculus extended with strong 0,1,+,* types is decidable. (In particular, I am interested in the full calculus, including the eliminator for 0, introduction for 1, and eta for all types).
>>
>> So far, my search through the literature says that the answer to this question is "almost" or "maybe" :
>>
>> - the STLC with strong 1,+,* types is decidable (omits 0, the empty type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf
>>
>> - the STLC with strong 0,1,+,* types has normal forms, but equivalent terms may have different such normal forms: http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>>
>> - work on mechanized proofs of correctness for deciding co-product equality may or may not include the "absurd" eliminator for the empty type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf
>>
>> I feel like the answer to this question is probably well-known, but I can't seem to find it in the literature. Any help would be appreciated.
>>
>> Thanks,
>> Ryan
>

From psztxa at exmail.nottingham.ac.uk Sat Sep 21 07:45:10 2013
From: psztxa at exmail.nottingham.ac.uk (Altenkirch Thorsten)
Date: Sat, 21 Sep 2013 12:45:10 +0100
Subject: [TYPES] decidability of BCCCs?
In-Reply-To: Message-ID:

The issue with empty types is that you have to decide propositional consistency, that is

G |- true = false if and only if G is propositionally inconsistent

(i.e. you can derive an inhabitant of the empty type). I believe that using this you get decidability of all coproducts including the empty one.

This also strongly suggests that the calculus with dependent types and empty coproducts is undecidable, because you would need to decide the consistency of a context in predicate logic.

Cheers,
Thorsten

On 20/09/2013 21:36, "Ryan Wisnesky" wrote:

>[ The Types Forum,
>http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
>Hello,
>
>I am trying to find a reference that states whether or not the free
>bi-cartesian closed category is decidable. That is, I am wondering if
>beta-eta equality of the simply typed lambda calculus extended with
>strong 0,1,+,* types is decidable. (In particular, I am interested in
>the full calculus, including the eliminator for 0, introduction for 1,
>and eta for all types).
>
>So far, my search through the literature says that the answer to this
>question is "almost" or "maybe" :
>
>- the STLC with strong 1,+,* types is decidable (omits 0, the empty
>type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf
>
>- the STLC with strong 0,1,+,* types has normal forms, but equivalent
>terms may have different such normal
>forms: http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>
>- work on mechanized proofs of correctness for deciding co-product
>equality may or may not include the "absurd" eliminator for the empty
>type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf
>
>I feel like the answer to this question is probably well-known, but I
>can't seem to find it in the literature. Any help would be appreciated.
>
>Thanks,
>Ryan

From roberto at dicosmo.org Sat Sep 21 09:34:48 2013
From: roberto at dicosmo.org (Roberto Di Cosmo)
Date: Sat, 21 Sep 2013 15:34:48 +0200
Subject: [TYPES] decidability of BCCCs?
In-Reply-To: References: Message-ID:

Dear Ryan,

beta-eta equality in the presence of strong sums is a tricky subject that has been the object of some attention for more than 20 years, both "per se" in the rewriting community, that looked for an elegant confluent rewriting system, and as a "tool" for researchers interested in type theory and semantics, that wanted to know for example whether one can characterise object isomorphism in a BiCCC, a problem that one can try to attack by studying invertible terms in the corresponding lambda calculus with strong types.

If one stays in the 1,*,-> fragment, all is wonderful: you get a nice confluent and normalising rewriting system for the lambda calculus based on eta expansion (see a blazingly short proof in http://www.dicosmo.org/Articles/POD.ps), the type isomorphisms are finitely axiomatizable and exactly correspond to the equations one knows from high school for natural numbers equipped with product, unit and exponentiation, related to Tarski's High School algebra problem (a nice proof through Finite Sets is due to Sergei Soloviev in 1988, I gave one based on invertible terms in my PhD thesis back in 1992).

You may add weak sums to the calculus, and still get a nice confluent and normalising rewriting system, that can also accommodate bounded recursion (http://www.pps.univ-paris-diderot.fr/~kesner/papers/icalp93.ps.gz).
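(As an aside, for readers who have not seen it spelled out: the finite axiomatization of isomorphism in this 1,*,-> fragment consists, up to notation, of exactly the seven "high school" identities, read over types instead of natural numbers:

A * B ≅ B * A
A * (B * C) ≅ (A * B) * C
A * 1 ≅ A
1 -> A ≅ A
A -> 1 ≅ 1
(A * B) -> C ≅ A -> (B -> C)
A -> (B * C) ≅ (A -> B) * (A -> C)

This is the list the Tarski connection refers to.)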
As soon as you add strong sums, though, things get very hairy: Neil Ghani proposed in his PhD thesis back in 1995 a decision procedure for lambda terms with strong 1,+,*,-> (no zero, see https://personal.cis.strath.ac.uk/neil.ghani/papers/yellowthesis.ps.gz): a rewriting system is used to bring terms in this calculus to a normal form which is not unique, and then it is shown that all such normal forms are equivalent modulo a sort of commuting conversions, so to decide equality of two terms, you first normalise them, and then check whether they are equivalent w.r.t. conversion (the equivalence classes being finite, this is decidable). The NBE approach of Thorsten Altenkirch (et al.) of your LICS01 reference seems to provide a more straightforward way to decide equality, even if somebody coming from the rewriting world might like a more intuitive description of what the normal form actually looks like.

If you add the zero, you are in for more surprises: looking at the special BiCCC formed by the natural numbers equipped with 0,1,+,* and exponentiation, you discover that equality is no longer finitely axiomatisable, and yet a decision procedure does exist (see http://www.dicosmo.org/Papers/zeroisnfa.pdf), which solves in the negative, but with a positive note, Tarski's problem for this system.

But then, in BiCCCs, the connection between numerical equalities and type isomorphisms breaks down and the nice result above says nothing about what happens for the general case. In http://www.dicosmo.org/Papers/lics02.pdf we proved that isos are not finitely axiomatisable in BiCCC, but we know nothing on decidability.

As for deciding beta-eta equality, the work of Marcelo Fiore and Vincent Balat http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf shows how to compute normal forms in full BiCCCs, and I kind of remember that we were convinced at some point that these normal forms must be unique up to some sort of commuting conversions (you can see this in the examples of the paper), but we did not prove it at that moment, and it is true that one would like to see this last step done (or a proof that it cannot be done).

I wonder whether Marcelo, Neil and Thorsten might shed some light on this (Vincent and I started working on quite different areas a while ago).

By the way, out of curiosity, would you mind telling us how this result would be useful for your research?

-- Roberto

P.S.: for those interested in type isomorphism, http://www.dicosmo.org/Paper/mscs-survey.pdf provides a concise overview, even if no longer up to date.

2013/9/20 Ryan Wisnesky

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hello,
>
> I am trying to find a reference that states whether or not the free
> bi-cartesian closed category is decidable. That is, I am wondering if
> beta-eta equality of the simply typed lambda calculus extended with strong
> 0,1,+,* types is decidable. (In particular, I am interested in the full
> calculus, including the eliminator for 0, introduction for 1, and eta for
> all types).
> > So far, my search through the literature says that the answer to this > question is "almost" or "maybe" : > > - the STLC with strong 1,+,* types is decidable (omits 0, the empty type): > http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf > > - the STLC with strong 0,1,+,* types has normal forms, but equivalent > terms may have different such normal forms: > http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf > > - work on mechanized proofs of correctness for deciding co-product > equality may or may not include the "absurd" eliminator for the empty type: > http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf > > I feel like the answer to this question is probably well-known, but I > can't seem to find it in the literature. Any help would be appreciated. > > Thanks, > Ryan > -- Roberto Di Cosmo ------------------------------------------------------------------ Professeur En delegation a l'INRIA PPS E-mail: roberto at dicosmo.org Universite Paris Diderot WWW : http://www.dicosmo.org Case 7014 Tel : ++33-(0)1-57 27 92 20 5, Rue Thomas Mann F-75205 Paris Cedex 13 Identica: http://identi.ca/rdicosmo FRANCE. Twitter: http://twitter.com/rdicosmo ------------------------------------------------------------------ Attachments: MIME accepted, Word deprecated http://www.gnu.org/philosophy/no-word-attachments.html ------------------------------------------------------------------ Office location: Bureau 320 (3rd floor) Batiment Sophie Germain Avenue de France Metro Bibliotheque Francois Mitterrand, ligne 14/RER C ----------------------------------------------------------------- GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3 From ryan at cs.harvard.edu Sat Sep 21 10:10:56 2013 From: ryan at cs.harvard.edu (Ryan Wisnesky) Date: Sat, 21 Sep 2013 10:10:56 -0400 Subject: [TYPES] decidability of BCCCs? In-Reply-To: <523D6630.4050705@cs.cmu.edu> References: <523CE080.5040708@gmail.com> <523D6630.4050705@cs.cmu.edu> Message-ID: <05F80358-3B52-407C-81F2-3A07E479C44B@cs.harvard.edu> Hi Arbob, I would very much like to know more about this work. If you handle the full calculus, including the eliminator for 0, it would answer my question in the affirmative. Unfortunately, I was unable to tell if this was the case just by looking at the publicly available material. Thanks, Ryan On Sep 21, 2013, at 5:26 AM, Arbob Ahmad wrote: > Hello Ryan, > > Dan Licata, Bob Harper, and I designed an algorithm for deciding beta-eta equality for the simply typed lambda calculus with 0, 1, +, and * but without unspecified base types. The basic idea is that all of these types are finitely inhabited so we can enumerate their inhabitants to uniquely identify a member of each type. As you mention, finding a unique canonical form is the tricky part. We essentially used a form of multi-focusing that focused on everything in the context it possibly could simultaneously and then performed all necessary inversions simultaneously as well. This means if there is a variable of type bool -> bool in the context, a canonical form in this context must apply this function to both true and false simultaneously to uniquely determine which of the four possible functions of this type the variable represents. We haven't published this result, but I can send you more information if you're interested. > > I would also point you to the work of Lindley in Typed Lambda Calculus and Applications 2007. This gives a decision procedure based on extensional rewriting for the simply typed lambda calculus with + and *. 
It does not include 0 and 1 but suggests these wouldn't be too hard to add.
>
> Arbob Ahmad
>
> Tadeusz Litak wrote:
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>> Hello Ryan,
>>
>> it's not exactly my area, but the LiCS'01 reference that you quote mentions earlier work of Neil Ghani:
>> >>A decision procedure for cartesian closed categories with binary coproducts has been presented in Ghani's thesis [Gh95a] (see [Gh95b] for a summary) <<
>>
>> [Gh95b] comments on the addition of the empty type at the beginning of Section 3.
>>
>> And [Gh95a], i.e., Neil's PhD Thesis available at this link:
>> http://hdl.handle.net/1842/404
>> discusses the subject in conclusions on p. 145. Also see "Initial objects" subsection on p. 104.
>>
>> Although I cannot find the reference "Inconsistency and extensionality" anywhere: not sure if it materialized.
>>
>> Anyway, HTH ...
>> Best,
>> t.
>>
>> On 9/20/13 1:36 PM, Ryan Wisnesky wrote:
>>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>>
>>> Hello,
>>>
>>> I am trying to find a reference that states whether or not the free bi-cartesian closed category is decidable. That is, I am wondering if beta-eta equality of the simply typed lambda calculus extended with strong 0,1,+,* types is decidable. (In particular, I am interested in the full calculus, including the eliminator for 0, introduction for 1, and eta for all types).
>>>
>>> So far, my search through the literature says that the answer to this question is "almost" or "maybe" :
>>>
>>> - the STLC with strong 1,+,* types is decidable (omits 0, the empty type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf
>>>
>>> - the STLC with strong 0,1,+,* types has normal forms, but equivalent terms may have different such normal forms: http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>>>
>>> - work on mechanized proofs of correctness for deciding co-product equality may or may not include the "absurd" eliminator for the empty type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf
>>>
>>> I feel like the answer to this question is probably well-known, but I can't seem to find it in the literature. Any help would be appreciated.
>>>
>>> Thanks,
>>> Ryan
>>

From ryan at cs.harvard.edu Sat Sep 21 10:25:44 2013
From: ryan at cs.harvard.edu (Ryan Wisnesky)
Date: Sat, 21 Sep 2013 10:25:44 -0400
Subject: [TYPES] decidability of BCCCs?
In-Reply-To: References: Message-ID: <29DC2B99-808C-4CBC-A3F2-512E0E85E405@cs.harvard.edu>

Hi Roberto,

BCCCs seem to come up a lot at the intersection between database theory and programming language theory. For example, if you say that a database schema is a finitely presented category, then the category of schemas and mappings between them is a BCCC. For this particular BCCC, I believe that equality is semi-decidable, so I would like to say that its semi-decidability comes from its particular nature, and is not inherited from the free BCCC. BCCCs also come up when you try to connect higher-order logic to the nested relational calculus via their categorical (topos) semantics. I'd be happy to talk more about it offline.
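To make the first point a bit more concrete, here is the kind of toy schema I have in mind, presented as a finitely presented category (my own illustrative example, not taken from any particular paper or system):

objects:    Emp, Dept
arrows:     worksIn : Emp -> Dept
            manager : Emp -> Emp
equations:  manager ; worksIn = worksIn
            (every employee works in the same department as their manager;
             ";" is diagrammatic composition)

A schema mapping is then a functor between two such categories, and checking that two mappings are equal amounts to deciding equality of arrows in the target schema -- a word problem, which is where the semi-decidability I mentioned comes from.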
Thanks,
Ryan

On Sep 21, 2013, at 9:34 AM, Roberto Di Cosmo wrote:

> Dear Ryan,
> beta-eta equality in the presence of strong sums is a tricky subject that has been the object of some attention for more than 20 years, both "per se" in the rewriting community, that looked for an elegant confluent rewriting system, and as a "tool" for researchers interested in type theory and semantics, that wanted to know for example whether one can characterise object isomorphism in a BiCCC, a problem that one can try to attack by studying invertible terms in the corresponding lambda calculus with strong types.
>
> If one stays in the 1,*,-> fragment, all is wonderful: you get a nice confluent and normalising rewriting system for the lambda calculus based on eta expansion (see a blazingly short proof in http://www.dicosmo.org/Articles/POD.ps), the type isomorphisms are finitely axiomatizable
> and exactly correspond to the equations one knows from high school for natural numbers equipped with product, unit and exponentiation, related to Tarski's High School algebra problem
> (a nice proof through Finite Sets is due to Sergei Soloviev in 1988, I gave one based on invertible terms in my PhD thesis back in 1992).
>
> You may add weak sums to the calculus, and still get a nice confluent and normalising rewriting system, that can also accommodate bounded recursion (http://www.pps.univ-paris-diderot.fr/~kesner/papers/icalp93.ps.gz).
>
> As soon as you add strong sums, though, things get very hairy: Neil Ghani proposed in his PhD thesis back in 1995 a decision procedure for lambda terms with strong 1,+,*,-> (no zero, see
> https://personal.cis.strath.ac.uk/neil.ghani/papers/yellowthesis.ps.gz): a rewriting system is used to bring terms in this calculus to a normal form which is not unique, and then it is shown that all such normal forms are equivalent modulo a sort of commuting conversions, so to decide equality of two terms, you first normalise them, and then check whether they are equivalent w.r.t. conversion (the equivalence classes being finite, this is decidable). The NBE approach of Thorsten Altenkirch (et al.) of your LICS01 reference seems to provide a more straightforward way to decide equality, even if somebody coming from the rewriting world might like a more intuitive description of what the normal form actually looks like.
>
> If you add the zero, you are in for more surprises: looking at the special BiCCC formed by the natural numbers equipped with 0,1,+,* and exponentiation, you discover that equality is no longer finitely axiomatisable, and yet a decision procedure does exist (see http://www.dicosmo.org/Papers/zeroisnfa.pdf),
> which solves in the negative, but with a positive note, Tarski's problem
> for this system.
>
> But then, in BiCCCs, the connection between numerical equalities and type isomorphisms breaks down and the nice result above says nothing about what happens for the general case. In http://www.dicosmo.org/Papers/lics02.pdf we proved that isos are not finitely axiomatisable in BiCCC, but we know
> nothing on decidability.
>
> As for deciding beta-eta equality, the work of Marcelo Fiore and Vincent Balat http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf shows how to compute normal forms in full BiCCCs, and I kind of remember that we were convinced at some point that these normal forms must be unique up to some sort of commuting conversions (you can see this in the examples of the paper), but we did not prove it at that moment, and it is true that one would like to see this last step done (or a proof that it cannot be done).
>
> I wonder whether Marcelo, Neil and Thorsten might shed some light on this (Vincent and I started working on quite different areas a while ago).
>
> By the way, out of curiosity, would you mind telling us how this result would be useful for your research?
>
> --
> Roberto
>
> P.S.: for those interested in type isomorphism, http://www.dicosmo.org/Paper/mscs-survey.pdf provides a concise overview, even if no longer up to date.
>
>
> 2013/9/20 Ryan Wisnesky
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hello,
>
> I am trying to find a reference that states whether or not the free bi-cartesian closed category is decidable. That is, I am wondering if beta-eta equality of the simply typed lambda calculus extended with strong 0,1,+,* types is decidable. (In particular, I am interested in the full calculus, including the eliminator for 0, introduction for 1, and eta for all types).
>
> So far, my search through the literature says that the answer to this question is "almost" or "maybe" :
>
> - the STLC with strong 1,+,* types is decidable (omits 0, the empty type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf
>
> - the STLC with strong 0,1,+,* types has normal forms, but equivalent terms may have different such normal forms: http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf
>
> - work on mechanized proofs of correctness for deciding co-product equality may or may not include the "absurd" eliminator for the empty type: http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf
>
> I feel like the answer to this question is probably well-known, but I can't seem to find it in the literature. Any help would be appreciated.
>
> Thanks,
> Ryan
>
>
>
> --
> Roberto Di Cosmo
>
> ------------------------------------------------------------------
> Professeur En delegation a l'INRIA
> PPS E-mail: roberto at dicosmo.org
> Universite Paris Diderot WWW : http://www.dicosmo.org
> Case 7014 Tel : ++33-(0)1-57 27 92 20
> 5, Rue Thomas Mann
> F-75205 Paris Cedex 13 Identica: http://identi.ca/rdicosmo
> FRANCE. Twitter: http://twitter.com/rdicosmo
> ------------------------------------------------------------------
> Attachments:
> MIME accepted, Word deprecated
> http://www.gnu.org/philosophy/no-word-attachments.html
> ------------------------------------------------------------------
> Office location:
>
> Bureau 320 (3rd floor)
> Batiment Sophie Germain
> Avenue de France
> Metro Bibliotheque Francois Mitterrand, ligne 14/RER C
> -----------------------------------------------------------------
> GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3

From roberto at dicosmo.org Sat Sep 21 11:54:58 2013
From: roberto at dicosmo.org (Roberto Di Cosmo)
Date: Sat, 21 Sep 2013 17:54:58 +0200
Subject: [TYPES] decidability of BCCCs?
In-Reply-To: References: Message-ID:

The link to the survey on isomorphisms does not work; there was a missing "s". I apologize for the mistyping; here is the correct one: http://www.dicosmo.org/Papers/mscs-survey.pdf

By the way, all mentioned works are actually published, and the official versions can be found by looking for example at the DBLP listings for Altenkirch, Dezani, Di Cosmo, Fiore, Ghani, Kesner, etc.

2013/9/21 Roberto Di Cosmo

> Dear Ryan,
> beta-eta equality in the presence of strong sums is a tricky subject
> that has been the object of some attention for more than 20 years, both "per
> se" in the rewriting community, that looked for an elegant confluent
> rewriting system, and as a "tool" for researchers interested in type theory
> and semantics, that wanted to know for example whether one can characterise
> object isomorphism in a BiCCC, a problem that one can try to attack by
> studying invertible terms in the corresponding lambda calculus with strong
> types.
>
> If one stays in the 1,*,-> fragment, all is wonderful: you get a nice
> confluent and normalising rewriting system for the lambda calculus based on
> eta expansion (see a blazingly short proof in
> http://www.dicosmo.org/Articles/POD.ps), the type isomorphisms are
> finitely axiomatizable
> and exactly correspond to the equations one knows from high school for
> natural numbers equipped with product, unit and exponentiation, related to
> Tarski's High School algebra problem
> (a nice proof through Finite Sets is due to Sergei Soloviev in 1988, I
> gave one based on invertible terms in my PhD thesis back in 1992).
>
> You may add weak sums to the calculus, and still get a nice confluent and
> normalising rewriting system, that can also accommodate bounded recursion
> (http://www.pps.univ-paris-diderot.fr/~kesner/papers/icalp93.ps.gz).
>
> As soon as you add strong sums, though, things get very hairy: Neil Ghani
> proposed in his PhD thesis back in 1995 a decision procedure for lambda
> terms with strong 1,+,*,-> (no zero, see
> https://personal.cis.strath.ac.uk/neil.ghani/papers/yellowthesis.ps.gz):
> a rewriting system is used to bring terms in this calculus to a normal form
> which is not unique, and then it is shown that all such normal forms are
> equivalent modulo a sort of commuting conversions, so to decide equality of
> two terms, you first normalise them, and then check whether they are
> equivalent w.r.t. conversion (the equivalence classes being finite, this is
> decidable). The NBE approach of Thorsten Altenkirch (et al.) of your LICS01
> reference seems to provide a more straightforward way to decide equality,
> even if somebody coming from the rewriting world might like a more
> intuitive description of what the normal form actually looks like.
>
> If you add the zero, you are in for more surprises: looking at the special
> BiCCC formed by the natural numbers equipped with 0,1,+,* and
> exponentiation, you discover that equality is no longer finitely
> axiomatisable, and yet a decision procedure does exist (see
> http://www.dicosmo.org/Papers/zeroisnfa.pdf),
> which solves in the negative, but with a positive note, Tarski's problem
> for this system.
>
> But then, in BiCCCs, the connection between numerical equalities and type
> isomorphisms breaks down and the nice result above says nothing about what
> happens for the general case. In http://www.dicosmo.org/Papers/lics02.pdf
> we proved that isos are not finitely axiomatisable in BiCCC, but we know
> nothing on decidability.
> > As for deciding beta-eta equality, the work of Marcelo Fiore and Vincent > Balat http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf shows how to > compute normal forms in full BiCCCs, and I kind of remember that we were > convinced at some point that these normal forms must be unique up to some > sort of commuting conversions (you can see this in the examples of the > paper), but we did not prove it at that moment, and it is true that one > would like to see this last step done (or a proof that it cannot be done). > > I wonder whether Marcelo, Neil and Thorsten might shed some light on this > (Vincent and I started working on quite different areas a while ago). > > By the way, out of curiosity, would you mind telling us how this result > would be useful to your research? > > -- > Roberto > > P.S.: for those interested in type isomorphism, > http://www.dicosmo.org/Paper/mscs-survey.pdf provides a concise overview, > even if no longer up to date. > > > > 2013/9/20 Ryan Wisnesky > >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Hello, >> >> I am trying to find a reference that states whether or not the free >> bi-cartesian closed category is decidable. That is, I am wondering if >> beta-eta equality of the simply typed lambda calculus extended with strong >> 0,1,+,* types is decidable. (In particular, I am interested in the full >> calculus, including the eliminator for 0, introduction for 1, and eta for >> all types.) >> >> So far, my search through the literature says that the answer to this >> question is "almost" or "maybe": >> >> - the STLC with strong 1,+,* types is decidable (omits 0, the empty >> type): http://www.cs.nott.ac.uk/~txa/publ/lics01.pdf >> >> - the STLC with strong 0,1,+,* types has normal forms, but equivalent >> terms may have different such normal forms: >> http://www.cl.cam.ac.uk/~mpf23/papers/Types/tdpe.pdf >> >> - work on mechanized proofs of correctness for deciding co-product >> equality may or may not include the "absurd" eliminator for the empty type: >> http://www.cis.upenn.edu/~sweirich/wmm/wmm07/programme/licata.pdf >> >> I feel like the answer to this question is probably well-known, but I >> can't seem to find it in the literature. Any help would be appreciated. >> >> Thanks, >> Ryan >> > > > > -- > Roberto Di Cosmo
-- Roberto Di Cosmo ------------------------------------------------------------------ Professeur En delegation a l'INRIA PPS E-mail: roberto at dicosmo.org Universite Paris Diderot WWW : http://www.dicosmo.org Case 7014 Tel : ++33-(0)1-57 27 92 20 5, Rue Thomas Mann F-75205 Paris Cedex 13 Identica: http://identi.ca/rdicosmo FRANCE. Twitter: http://twitter.com/rdicosmo ------------------------------------------------------------------ GPG fingerprint 2931 20CE 3A5A 5390 98EC 8BFC FCCA C3BE 39CB 12D3 From rm27 at cornell.edu Mon Oct 28 07:09:46 2013 From: rm27 at cornell.edu (Dr. Rod Moten) Date: Mon, 28 Oct 2013 07:09:46 -0400 Subject: [TYPES] type theory and Big Data Message-ID: <526E45FA.5000803@cornell.edu> Do you think type theory has a role to play in providing the mathematics needed for Big Data? https://www.simonsfoundation.org/quanta/20131004-the-mathematical-shape-of-things-to-come/ From ryan at cs.harvard.edu Mon Oct 28 17:58:49 2013 From: ryan at cs.harvard.edu (Ryan Wisnesky) Date: Mon, 28 Oct 2013 17:58:49 -0400 Subject: [TYPES] type theory and Big Data In-Reply-To: <526E45FA.5000803@cornell.edu> References: <526E45FA.5000803@cornell.edu> Message-ID: Hi, Collection types, typically monadic, were instrumental in the development of 'functional query languages'. This line of work started in the early 90s, and a classic paper is Tannen, Buneman, and Wong's "Naturally Embedded Query Languages": http://repository.upenn.edu/cgi/viewcontent.cgi?article=1536&context=cis_reports Such languages continue to be proposed as interfaces for big-data systems like MapReduce: http://cacm.acm.org/magazines/2011/4/106584-a-co-relational-model-of-data-for-large-shared-data-banks/fulltext The implications of other type-theoretic constructions for information management are a topic of current research. Regards, Ryan On Oct 28, 2013, at 7:09 AM, Dr. Rod Moten wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Do you think type theory has a role to play in providing the mathematics needed for Big Data?
> https://www.simonsfoundation.org/quanta/20131004-the-mathematical-shape-of-things-to-come/ From Barry.Jay at uts.edu.au Mon Oct 28 22:24:38 2013 From: Barry.Jay at uts.edu.au (Barry Jay) Date: Tue, 29 Oct 2013 13:24:38 +1100 Subject: [TYPES] type theory and Big Data In-Reply-To: References: <526E45FA.5000803@cornell.edu> Message-ID: <526F1C66.3040302@uts.edu.au> Another typed approach to big data is to use the generic queries of pattern calculus http://www.springer.com/computer/theoretical+computer+science/book/978-3-540-89184-0 and bondi http://bondi.it.uts.edu.au/ Generic queries can be applied to data structures of arbitrary type, without adding any apparatus for collections or monads, etc. In principle, this allows a single, strongly typed query to be executed across a wide variety of databases with varying schemas. Yours, Barry On 29/10/13 08:58, Ryan Wisnesky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hi, > > Collection types, typically monadic, were instrumental in the development of 'functional query languages'. This line of work started in the early 90s, and a classic paper is Tannen, Buneman, and Wong's "Naturally Embedded Query Languages": > > http://repository.upenn.edu/cgi/viewcontent.cgi?article=1536&context=cis_reports > > Such languages continue to be proposed as interfaces for big-data systems like MapReduce: > > http://cacm.acm.org/magazines/2011/4/106584-a-co-relational-model-of-data-for-large-shared-data-banks/fulltext > > The implications of other type-theoretic constructions for information management are a topic of current research. > > Regards, > Ryan > > > On Oct 28, 2013, at 7:09 AM, Dr. Rod Moten wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Do you think type theory has a role to play in providing the mathematics needed for Big Data? >> https://www.simonsfoundation.org/quanta/20131004-the-mathematical-shape-of-things-to-come/
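To give the flavour of a generic query in more familiar clothing, here is a rough analogue, not in bondi's own pattern-calculus syntax but in Haskell's scrap-your-boilerplate style; the Person and Dept types and the over30 query are invented for illustration, and listify comes from the syb package:

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics (Data, Typeable, listify)

-- Two invented record types standing in for differently shaped databases.
data Person = Person { name :: String, age :: Int }
  deriving (Show, Data, Typeable)
data Dept = Dept { dname :: String, staff :: [Person] }
  deriving (Show, Data, Typeable)

-- One strongly typed query, written once, applicable to any Data instance:
-- collect every Person over 30, however deeply nested they occur.
over30 :: Data a => a -> [Person]
over30 = listify (\p -> age p > 30)

main :: IO ()
main = do
  print (over30 [Dept "CS" [Person "Ada" 36, Person "Bob" 25]]) -- nested schema
  print (over30 (Person "Cleo" 41, "unrelated payload"))        -- flat schema

The same over30 runs unchanged over both shapes of data, which illustrates the point about a single typed query executing across varying schemas.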
From veronique.benzaken at lri.fr Tue Oct 29 10:47:57 2013 From: veronique.benzaken at lri.fr (Veronique Benzaken) Date: Tue, 29 Oct 2013 15:47:57 +0100 Subject: [TYPES] type theory and Big Data In-Reply-To: <526F1C66.3040302@uts.edu.au> References: <526E45FA.5000803@cornell.edu> <526F1C66.3040302@uts.edu.au> Message-ID: <526FCA9D.4040407@lri.fr> Hello, In the line of research mentioned by Ryan, you should also have a look at a recent paper on the topic: http://www.pps.univ-paris-diderot.fr/~gc/papers/popl13.pdf On 10/29/2013 03:24 AM, Barry Jay wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Another typed approach to big data is to use the generic queries of > pattern calculus > > http://www.springer.com/computer/theoretical+computer+science/book/978-3-540-89184-0 > > and bondi http://bondi.it.uts.edu.au/ > > Generic queries can be applied to data structures of arbitrary type, > without adding any apparatus for collections or monads, etc. In > principle, this allows a single, strongly typed query to be executed > across a wide variety of databases with varying schemas. > > Yours, > Barry > > On 29/10/13 08:58, Ryan Wisnesky wrote: >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Hi, >> >> Collection types, typically monadic, were instrumental in the >> development of 'functional query languages'. This line of work >> started in the early 90s, and a classic paper is Tannen, Buneman, and >> Wong's "Naturally Embedded Query Languages": >> >> http://repository.upenn.edu/cgi/viewcontent.cgi?article=1536&context=cis_reports >> >> Such languages continue to be proposed as interfaces for big-data >> systems like MapReduce: >> >> http://cacm.acm.org/magazines/2011/4/106584-a-co-relational-model-of-data-for-large-shared-data-banks/fulltext >> >> The implications of other type-theoretic constructions for information >> management are a topic of current research. >> >> Regards, >> Ryan >> >> On Oct 28, 2013, at 7:09 AM, Dr. Rod Moten wrote: >> >>> [ The Types Forum, >>> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> Do you think type theory has a role to play in providing the >>> mathematics needed for Big Data? >>> https://www.simonsfoundation.org/quanta/20131004-the-mathematical-shape-of-things-to-come/ -- Prof. Véronique Benzaken www.lri.fr/~benzaken Vice Présidente Recherche du Département d'Informatique Équipe Vals - Verification, Algorithms, Languages and Systems tel : +33(0)1 6915 6628 fax : +33(0)1 6915 6586 Université Paris Sud L.R.I (UMR 8623, C.N.R.S) Bat 650, 91405 Orsay Cedex "Summum jus, summa injuria"
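For readers following this subthread without a database background, here is a minimal sketch of the comprehension/monad view of queries developed in the references above, in Haskell over the list monad; the tables and their fields are invented for illustration:

-- Tables as lists of tuples; a select-from-where query is a comprehension.
employees :: [(String, String)]   -- (employee, department)
employees = [("Ada", "CS"), ("Bob", "EE"), ("Cleo", "CS")]

buildings :: [(String, String)]   -- (department, building)
buildings = [("CS", "Turing Hall"), ("EE", "Maxwell Wing")]

-- Roughly: SELECT e, b FROM employees, buildings WHERE the departments match.
query :: [(String, String)]
query = [ (e, b) | (e, d) <- employees, (d', b) <- buildings, d == d' ]

main :: IO ()
main = mapM_ print query  -- ("Ada","Turing Hall"), ("Bob","Maxwell Wing"), ...

Swapping the list monad for other collection monads (sets, bags) and restricting attention to such comprehensions is, roughly, what yields the query languages of the Tannen-Buneman-Wong line and, much later, LINQ-style language-integrated query.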
From james.cheney at gmail.com Wed Oct 30 10:41:08 2013 From: james.cheney at gmail.com (James Cheney) Date: Wed, 30 Oct 2013 14:41:08 +0000 Subject: [TYPES] type theory and Big Data In-Reply-To: <526E45FA.5000803@cornell.edu> References: <526E45FA.5000803@cornell.edu> Message-ID: Hi, I think so, though I don't accept the implicit premise of your question, that type theory / type systems / PL concepts do not already play such a role in databases / data analysis / big data (whatever that actually means :). A few illustrative data points, besides the work already mentioned by others, include: * Malcolm P. Atkinson and O. Peter Buneman. 1987. Types and persistence in database programming languages. ACM Comput. Surv. 19, 2 (June 1987), 105-170. * Language-integrated query (which, as Ryan pointed out, in part grows out of work by Wadler, and later Buneman and Tannen, on comprehensions, monads and querying): http://en.wikipedia.org/wiki/Language_Integrated_Query * The PLAN-X (programming languages and XML) workshop and DBPL (database programming languages) symposium have many papers on the use of types for databases: http://www.informatik.uni-trier.de/~ley/db/conf/planx/ http://www.informatik.uni-trier.de/~ley/db/conf/dbpl/ * Recent / upcoming workshops where this type of work has been presented include: - RADICAL 2010 http://research.microsoft.com/en-us/um/people/adg/RADICAL2010/ - XLDI in 2012 http://workshops.inf.ed.ac.uk/xldi2012/ - DDFP in 2013 http://research.microsoft.com/en-us/events/ddfp2013/ - DCP in 2014 http://research.microsoft.com/en-us/events/dcp2014/ --James On Mon, Oct 28, 2013 at 11:09 AM, Dr. Rod Moten wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Do you think type theory has a role to play in providing the mathematics needed for Big Data? > https://www.simonsfoundation.org/quanta/20131004-the-mathematical-shape-of-things-to-come/ > From lukstafi at gmail.com Wed Dec 11 16:29:33 2013 From: lukstafi at gmail.com (Lukasz Stafiniak) Date: Wed, 11 Dec 2013 22:29:33 +0100 Subject: [TYPES] [ANN] InvarGenT: GADTs-based invariant/postcondition generation Message-ID: Hello, I am pleased to release the first version of InvarGenT, a system that performs full type inference for a type system with GADTs, and also generates new GADTs to serve as existential types. In addition to algebraic types, the first version handles linear arithmetic constraints. https://github.com/lukstafi/invargent/releases/tag/v1.0 Regards, Łukasz Stafiniak From dreyer at mpi-sws.org Tue Dec 24 18:41:11 2013 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 25 Dec 2013 00:41:11 +0100 Subject: [TYPES] *** Coq survey *** (DEADLINE January 15th) Message-ID: [Matthieu Sozeau asked me to post this to the Types list on behalf of the Coq team. The results of the survey linked below will play a role in determining the future directions for development of Coq, so if you care about that, I encourage you to take the time to fill it out. -Derek] Dear all, on behalf of the Coq developers, I'd like to invite everyone to respond to a survey on their usage of Coq, its programming and proving environment, development model, shortcomings and future directions. This survey aims at gathering important information about Coq's users and uses. The results will be used to better understand users' needs and help decide in which direction Coq's development should go. It is really important for us to get as many answers as possible before *** January 15th 2014 ***.
We are also taking this occasion to collaboratively build a bibliography of Coq-related papers. The survey's results will be synthesized, anonymized and made publicly available in February. The estimated time to complete the survey is around 30 minutes. The survey is available online at: https://sondages.inria.fr/index.php/276926/lang-en This survey was mainly prepared by Thomas Braibant, assisted by Enrico Tassi; thanks to them for giving us all an occasion to take a step back and reflect on Coq's future. Happy holidays, - Matthieu, for the Coq team.