From stefan at vectorfabrics.com Tue Jan 7 03:17:50 2014 From: stefan at vectorfabrics.com (Stefan Holdermans) Date: Tue, 7 Jan 2014 09:17:50 +0100 Subject: [TYPES] Decidable intersection-type inference for general recursion: state of the art? Message-ID: <7CF9F7A8-632A-4FE7-AE1F-2AC5DD79866D@vectorfabrics.com> Hi all, This is a question about principal typings and decidable type inference for general recursion in the context of type systems like System I, System E, ... . In [KW04] and [KWW02] type inference for recursive definitions is left as future work and marked as challenging as it interferes with the polar nature of the inference algorithm. Also, the work on System E does not seem to make any mention of general recursion. Can anyone tell me what is today's state of the art with respect to principality and (rank-restricted) intersection-type inference for recursive definitions? Thanks, Stefan [KW04] A.J. Kfoury and J.B. Wells: Principality and type inference for intersection types using expansion variables. Theor. Comput. Sci. 311(1-3): 1-70 (2004) [KWW02] A.J. Kfoury, G. Washburn, J.B. Wells: Implementing compositional analysis using intersection types with expansion variables. Electr. Notes Theor. Comput. Sci. 70(1): 124-148 (2002) From siraaj at khandkar.net Tue Jan 7 22:11:54 2014 From: siraaj at khandkar.net (Siraaj Khandkar) Date: Tue, 7 Jan 2014 22:11:54 -0500 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? Message-ID: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> We're having a discussion with a friend regarding the most accurate way to describe the typing situation in Python. His view is that Python data are typed and variables un-typed, moreover, he proposes that the terms "un-typed" and "uni-typed" are practically equivalent. At first it seemed somewhat reasonable to me, but the more I thought about it, the more my mind rejected both, the equivalence and the phrasing. The idea of uni-typing is that there's a set of types that the runtime supports and expressions can be composed of any members of that set, thus forming a single type, which is that set. This idea seems to describe the situation in a useful (for analysis) and an enlightening way, while the term "un-typed" does not seem to say anything useful. I'm also feeling uneasy about the phrasing: un-typed _variables_. That is, data and _expressions_ have types, but individual variables are just not something you can make a claim about outside of a context of an expression. We'd appreciate very much if the enlightened folks of this list would provide some input on this. -- Siraaj Khandkar .o. o.o ..o o.. .o. ..o .oo o.o .oo ..o ooo .o. .oo oo. ooo From smcdirm at microsoft.com Wed Jan 8 05:09:19 2014 From: smcdirm at microsoft.com (Sean McDirmid) Date: Wed, 8 Jan 2014 10:09:19 +0000 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> Message-ID: <8D23692C5F6EC3479277B1CA9770B19261F0C084@SINEX14MBXC416.southpacific.corp.microsoft.com> It is difficult to sort any type system into a simple easily named bucket. I see Python (and Ruby, Javascript) as dynamically and structurally typed. Static structural typing and dynamic nominal typing are completely sensical for other languages (especially the former, the latter is possible even if it lacks examples). 
I reserve "untyped" for languages where you don't even get a decent message-not-understood run-time error (e.g. a memory corruption in C on doing something bad to a void*). Untyped is much weaker than a safe dynamic language that crashes more elegantly (and more easily debugged) when a type error occurs. I agree that the concept of uni-typing isn't very useful; it doesn't tell us very much about the real typing properties of the language. -----Original Message----- From: Types-list [mailto:types-list-bounces at lists.seas.upenn.edu] On Behalf Of Siraaj Khandkar Sent: Wednesday, January 8, 2014 11:12 AM To: types-list at lists.seas.upenn.edu Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] We're having a discussion with a friend regarding the most accurate way to describe the typing situation in Python. His view is that Python data are typed and variables un-typed, moreover, he proposes that the terms "un-typed" and "uni-typed" are practically equivalent. At first it seemed somewhat reasonable to me, but the more I thought about it, the more my mind rejected both, the equivalence and the phrasing. The idea of uni-typing is that there's a set of types that the runtime supports and expressions can be composed of any members of that set, thus forming a single type, which is that set. This idea seems to describe the situation in a useful (for analysis) and an enlightening way, while the term "un-typed" does not seem to say anything useful. I'm also feeling uneasy about the phrasing: un-typed _variables_. That is, data and _expressions_ have types, but individual variables are just not something you can make a claim about outside of a context of an expression. We'd appreciate very much if the enlightened folks of this list would provide some input on this. -- Siraaj Khandkar .o. o.o ..o o.. .o. ..o .oo o.o .oo ..o ooo .o. .oo oo. ooo From gavin.mendel.gleason at gmail.com Wed Jan 8 08:58:50 2014 From: gavin.mendel.gleason at gmail.com (Gavin Mendel-Gleason) Date: Wed, 8 Jan 2014 13:58:50 +0000 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: <8D23692C5F6EC3479277B1CA9770B19261F0C084@SINEX14MBXC416.southpacific.corp.microsoft.com> References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> <8D23692C5F6EC3479277B1CA9770B19261F0C084@SINEX14MBXC416.southpacific.corp.microsoft.com> Message-ID: Sure what's wrong with uni-typing as a description? If you want to mimic dynamic typing in a typed programming language, it's trivial to do with a single type embedding all the types available with a wrapping constructor. Unboxing is simply a compiler optimisation removing the constructor through some sort of partial computation. Type errors are really just exceptions that give assertions on the typing constructors allowed in a given code path. On 8 January 2014 10:09, Sean McDirmid wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > It is difficult to sort any type system into a simple easily named bucket. > > I see Python (and Ruby, Javascript) as dynamically and structurally typed. > Static structural typing and dynamic nominal typing are completely sensical > for other languages (especially the former, the latter is possible even if > it lacks examples). > > I reserve "untyped" for languages where you don't even get a decent > message-not-understood run-time error (e.g. 
a memory corruption in C on > doing something bad to a void*). Untyped is much weaker than a safe dynamic > language that crashes more elegantly (and more easily debugged) when a type > error occurs. > > I agree that the concept of uni-typing isn't very useful; it doesn't tell > us very much about the real typing properties of the language. > > -----Original Message----- > From: Types-list [mailto:types-list-bounces at lists.seas.upenn.edu] On > Behalf Of Siraaj Khandkar > Sent: Wednesday, January 8, 2014 11:12 AM > To: types-list at lists.seas.upenn.edu > Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or > "uni-typed"? > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > We're having a discussion with a friend regarding the most accurate way to > describe the typing situation in Python. > > His view is that Python data are typed and variables un-typed, moreover, > he proposes that the terms "un-typed" and "uni-typed" are practically > equivalent. > > At first it seemed somewhat reasonable to me, but the more I thought about > it, the more my mind rejected both, the equivalence and the phrasing. > > The idea of uni-typing is that there's a set of types that the runtime > supports and expressions can be composed of any members of that set, thus > forming a single type, which is that set. This idea seems to describe the > situation in a useful (for analysis) and an enlightening way, while the > term "un-typed" does not seem to say anything useful. > > I'm also feeling uneasy about the phrasing: un-typed _variables_. That is, > data and _expressions_ have types, but individual variables are just not > something you can make a claim about outside of a context of an expression. > > We'd appreciate very much if the enlightened folks of this list would > provide some input on this. > > -- > Siraaj Khandkar > .o. o.o ..o o.. .o. > ..o .oo o.o .oo ..o > ooo .o. .oo oo. ooo > > > From smcdirm at microsoft.com Wed Jan 8 09:30:41 2014 From: smcdirm at microsoft.com (Sean McDirmid) Date: Wed, 8 Jan 2014 14:30:41 +0000 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> <8D23692C5F6EC3479277B1CA9770B19261F0C084@SINEX14MBXC416.southpacific.corp.microsoft.com>, Message-ID: Pragmatically, the programmer is still able to talk about "ducks" in Python even if the check to see if a value can be used as a duck occurs dynamically. Explaining to the programmer that Python only has one type would be very unfriendly and confusing when clearly one can talk about ducks that quack in the language without much problems. I guess it depends on your definition of type system and types. For a narrower definition oriented purely around static type checking, it might make sense to say Python has one type. On Jan 8, 2014, at 21:59, "Gavin Mendel-Gleason" > wrote: Sure what's wrong with uni-typing as a description? If you want to mimic dynamic typing in a typed programming language, it's trivial to do with a single type embedding all the types available with a wrapping constructor. Unboxing is simply a compiler optimisation removing the constructor through some sort of partial computation. Type errors are really just exceptions that give assertions on the typing constructors allowed in a given code path. 
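In OCaml-like notation, the single-type embedding described above might be sketched as follows (an illustrative sketch only; the constructor, function and exception names are not from any particular library):

    (* one "universal" type wrapping every run-time value *)
    type univ =
      | Int of int
      | Str of string
      | Fun of (univ -> univ)

    exception Type_error of string   (* the run-time "message not understood" error *)

    let apply f x = match f with
      | Fun g -> g x
      | _ -> raise (Type_error "not a function")

    let add a b = match a, b with
      | Int m, Int n -> Int (m + n)
      | _ -> raise (Type_error "not integers")

    (* "unboxing" would then amount to the compiler erasing the constructors
       wherever it can prove which one is present *)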
On 8 January 2014 10:09, Sean McDirmid > wrote: [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] It is difficult to sort any type system into a simple easily named bucket. I see Python (and Ruby, Javascript) as dynamically and structurally typed. Static structural typing and dynamic nominal typing are completely sensical for other languages (especially the former, the latter is possible even if it lacks examples). I reserve "untyped" for languages where you don't even get a decent message-not-understood run-time error (e.g. a memory corruption in C on doing something bad to a void*). Untyped is much weaker than a safe dynamic language that crashes more elegantly (and more easily debugged) when a type error occurs. I agree that the concept of uni-typing isn't very useful; it doesn't tell us very much about the real typing properties of the language. -----Original Message----- From: Types-list [mailto:types-list-bounces at lists.seas.upenn.edu] On Behalf Of Siraaj Khandkar Sent: Wednesday, January 8, 2014 11:12 AM To: types-list at lists.seas.upenn.edu Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] We're having a discussion with a friend regarding the most accurate way to describe the typing situation in Python. His view is that Python data are typed and variables un-typed, moreover, he proposes that the terms "un-typed" and "uni-typed" are practically equivalent. At first it seemed somewhat reasonable to me, but the more I thought about it, the more my mind rejected both, the equivalence and the phrasing. The idea of uni-typing is that there's a set of types that the runtime supports and expressions can be composed of any members of that set, thus forming a single type, which is that set. This idea seems to describe the situation in a useful (for analysis) and an enlightening way, while the term "un-typed" does not seem to say anything useful. I'm also feeling uneasy about the phrasing: un-typed _variables_. That is, data and _expressions_ have types, but individual variables are just not something you can make a claim about outside of a context of an expression. We'd appreciate very much if the enlightened folks of this list would provide some input on this. -- Siraaj Khandkar .o. o.o ..o o.. .o. ..o .oo o.o .oo ..o ooo .o. .oo oo. ooo From samth at cs.indiana.edu Wed Jan 8 10:37:12 2014 From: samth at cs.indiana.edu (Sam Tobin-Hochstadt) Date: Wed, 8 Jan 2014 10:37:12 -0500 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> Message-ID: Pierce, in the introduction to TaPL, writes: "A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute." According to this, Python (and Ruby, JavaScript, Scheme, etc) doesn't do this, and thus does not have a type system, and is therefore usefully classified as "untyped". Reynolds [1], writes "Type structure is a syntactic discipline for enforcing levels of abstraction." Python has a variety of methods for enforcing abstraction levels, but they are not syntactic, with the possible exception of scope. It might be possible to view scopes in Python or other languages as type systems, but that would probably not prove very enlightening. 
The slogan the "untyped is uni-typed", often propounded by Harper [2] is somewhat different. In the general case, the implementation of a language like Python must be implemented so that there's exactly one "type" internally, and all Python values fit into that type. You can see this if you look at the Python C API -- there's the `PyObject` type, and it represents anything in Python [3]. Other languages have similar internal implementation types. In contrast, the C API for OCaml doesn't have a such a type. [4] In this sense, the "uni-typed" claim is clearly true. However, if you want to _understand_ why people find untyped languages useful, and how they use them, I think that the uni-typed perspective doesn't let us answer the questions we want. This is similar to the perspective that human thinking operates at the level of neurons -- it's clearly true, but it doesn't help us figure out what other people will do in practice. Instead, I find it useful to think about how programmers in languages like Python reason about their programs -- often it turns out to be via unformalized reasoning that bears a close resemblance to Pierce and Reynolds' definitions, usually often not to particular type systems in existing languages. Much of our work on Typed Racket has involved exploring exactly this question. Sam [1] Reynolds, _Types, Abstraction, and Parametric Polymorphism_, Information Processing, 1983 [2] http://existentialtype.wordpress.com/2011/03/19/dynamic-languages-are-static-languages/ [3] http://docs.python.org/2/c-api/structures.html [4] If you look at the OCaml C API, this will appear to be wrong, but the `value` type is really a pointer type, and the pointed-to value in the heap is not tagged in the way a `PyObject` is. On Tue, Jan 7, 2014 at 10:11 PM, Siraaj Khandkar wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > We're having a discussion with a friend regarding the most accurate way > to describe the typing situation in Python. > > His view is that Python data are typed and variables un-typed, moreover, > he proposes that the terms "un-typed" and "uni-typed" are practically > equivalent. > > At first it seemed somewhat reasonable to me, but the more I thought > about it, the more my mind rejected both, the equivalence and the > phrasing. > > The idea of uni-typing is that there's a set of types that the runtime > supports and expressions can be composed of any members of that set, > thus forming a single type, which is that set. This idea seems to > describe the situation in a useful (for analysis) and an enlightening > way, while the term "un-typed" does not seem to say anything useful. > > I'm also feeling uneasy about the phrasing: un-typed _variables_. That > is, data and _expressions_ have types, but individual variables are just > not something you can make a claim about outside of a context of an > expression. > > We'd appreciate very much if the enlightened folks of this list would > provide some input on this. > > -- > Siraaj Khandkar > .o. o.o ..o o.. .o. > ..o .oo o.o .oo ..o > ooo .o. .oo oo. ooo > > From rwh at cs.cmu.edu Wed Jan 8 10:45:52 2014 From: rwh at cs.cmu.edu (Prof. Robert Harper) Date: Wed, 8 Jan 2014 10:45:52 -0500 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? 
In-Reply-To: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> Message-ID: <1ABBF971-FD43-407D-8B78-113E12D24C13@cs.cmu.edu> Please see my Practical Foundations book for a thorough discussion of this needlessly vexed issue. The word "type" should be reserved for structural purposes, they codify the "comprehension principles" that constitute the language. Things like "duck types" are "refinements" or "predicates" that are descriptive of the (typed) programs that have a pre-determined operational meaning. So, of course, Python and its relatives are and always have been unityped. Some also have refinements to express verification conditions. One side benefit of calling a type a type is that one immediately sees that the whole idea of having only one type is, um, questionable. First off, why confine yourself a priori to such a severe limitation? Second off, you will see that the language really has half-hearted stabs at having types by having, for example, "multiple arguments", which exposes in an ad hoc way the type "thing * thing * ... * thing" or "thing sequence" without really admitting it (and using pattern matching syntax to cover it up). Bob Harper Sent from tablet > On Jan 7, 2014, at 22:11, Siraaj Khandkar wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > We're having a discussion with a friend regarding the most accurate was > to describe the typing situation in Python. > > His view is that Python data are typed and variables un-typed, moreover, > he proposes that the terms "un-typed" and "uni-typed" are practically > equivalent. > > At first it seemed somewhat reasonable to me, but the more I thought > about it, the more my mind rejected both, the equivalence and the > phrasing. > > The idea of uni-typing is that there's a set of types that the runtime > supports and expressions can be composed of any members of that set, > thus forming a single type, which is that set. This idea seems to > describe the situation in a useful (for analysis) and an enlightening > way, while the term "un-typed" does not seem to say anything useful. > > I'm also feeling uneasy about the phrasing: un-typed _variables_. That > is, data and _expressions_ have types, but individual variables are just > not something you can make a claim about outside of a context of an > expression. > > We'd appreciate very much if the enlightened folks of this list would > provide some input on this. > > -- > Siraaj Khandkar > .o. o.o ..o o.. .o. > ..o .oo o.o .oo ..o > ooo .o. .oo oo. ooo > > From u.s.reddy at cs.bham.ac.uk Wed Jan 8 11:37:21 2014 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Wed, 8 Jan 2014 16:37:21 +0000 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> <8D23692C5F6EC3479277B1CA9770B19261F0C084@SINEX14MBXC416.southpacific.corp.microsoft.com> Message-ID: <21197.32449.14000.857678@gargle.gargle.HOWL> Sean McDirmid writes: > Pragmatically, the programmer is still able to talk about "ducks" in > Python even if the check to see if a value can be used as a duck occurs > dynamically. Explaining to the programmer that Python only has one type > would be very unfriendly and confusing when clearly one can talk about > ducks that quack in the language without much problems. I think this is an excellent point. 
There can be other ways of enforcing type abstractions in a programming language than by static typing rules. For that reason, "untyped programming language" is a misnomer. (Perhaps it was imported thoughtlessly from mathematics, where something like the Zermelo-Fraenkel set theory is genuinely untyped.) I always refer to such languages as "dynamically typed" languages rather than "untyped" languages. Please see this discussion on Stack Exchange which is closely related: http://cs.stackexchange.com/questions/2301/categorisation-of-type-systems-strong-weak-dynamic-static/ Cheers, Uday From noam.zeilberger at gmail.com Wed Jan 8 12:00:21 2014 From: noam.zeilberger at gmail.com (Noam Zeilberger) Date: Wed, 8 Jan 2014 18:00:21 +0100 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: <1ABBF971-FD43-407D-8B78-113E12D24C13@cs.cmu.edu> References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> <1ABBF971-FD43-407D-8B78-113E12D24C13@cs.cmu.edu> Message-ID: Robert Harper" wrote: > The word "type" should be reserved for structural purposes, they codify the "comprehension principles" that constitute the language. Things like "duck types" are "refinements" or "predicates" that are descriptive of the (typed) programs that have a pre-determined operational meaning. The word "type" is definitely overloaded, but adapting from Reynolds [1], I prefer to distinguish these two uses as "intrinsic types" and "extrinsic types" respectively (or i-types and e-types for short). Historically you can find many examples of people using the word type to mean either i-type or e-type, so it is unrealistic to proscribe one or the other usage -- and personally, I really believe that the *tension* between these two perspectives is at the heart of type theory. I agree, though, that a lot of mental energy is wasted because people fail to resolve this linguistic ambiguity. Noam [1] see John C. Reynolds, "The meaning of types -- from intrinsic to extrinsic semantics", BRICS Report RS-00-32. From laurie at tratt.net Wed Jan 8 12:29:36 2014 From: laurie at tratt.net (Laurence Tratt) Date: Wed, 8 Jan 2014 17:29:36 +0000 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? Message-ID: <20140108172936.GC30939@phase.tratt.net> Sam Tobin-Hochstadt wrote: > In the general case, the implementation of a language like Python must be > implemented so that there's exactly one "type" internally, and all Python > values fit into that type. You can see this if you look at the Python C API > -- there's the `PyObject` type, and it represents anything in Python [3]. In CPython (and at least a couple of other VMs whose internals I know), the full description of what's going on can give one a different impression. It's not that the above is wrong but there's more than one way of looking at this. In essence, such VMs use C's type system to implement a sort of structural typing system, so that different primitive thingies (e.g. ints, floats, file handles, network sockets etc.) can be dealt with [1]. The basic property of C that makes this work is that elements in a struct are never reordered. PyObject has a corresponding macro PyObject_HEAD which expands to the same struct elements as PyObject. You can therefore make a subtype / extend PyObject [use whichever language makes you most happy!] 
with the following idiom:

    typedef struct {
        PyObject_HEAD
        // Everything after here is the extra information of the subtype
        int blah;
    } New_Type;

So, yes, every "user object in memory" can be referenced as a PyObject*; but the object may have extra data in it (of differing sizes) which means it can also be referenced as a New_Object* (users have to know when to cast things appropriately). This could be taken to mean there are two different static types. The fact that this gives a subtyping flavour to things muddies the waters considerably: is there one static type or an arbitrary number of them in the VM? does the fact that this is semi-hidden from the end-programmer affect how we should count things? do FFIs which enable the user to deal with arbitrary chunks of memory mean that they can add primitive types, and that, even dynamically, the number of types isn't fixed? I don't know that there are good clean answers to these questions. In summary, I'm not sure this gives any weight for or against the uni-typed argument. And since no-one can enforce a specific terminology on others, I, and many others, will continue to use the terms "statically typed" and "dynamically typed" to try and sidestep ambiguity. Either way, I'm reasonably sure that the world will keep turning, programs will keep running, and this debate will reappear on an annual basis. Laurie [1] There are various other ways VMs could implement the same behaviour as the above; this is merely one way, albeit a convenient and efficient one. -- Personal http://tratt.net/laurie/ Software Development Team http://soft-dev.org/ https://github.com/ltratt http://twitter.com/laurencetratt From u.s.reddy at cs.bham.ac.uk Wed Jan 8 13:24:12 2014 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Wed, 8 Jan 2014 18:24:12 +0000 Subject: [TYPES] Types of expressions in dynamic languages: "un-typed" or "uni-typed"? In-Reply-To: <1ABBF971-FD43-407D-8B78-113E12D24C13@cs.cmu.edu> References: <3EE96975-7EFE-4BA0-BE1C-E4BC94578497@khandkar.net> <1ABBF971-FD43-407D-8B78-113E12D24C13@cs.cmu.edu> Message-ID: <21197.38860.689000.36240@gargle.gargle.HOWL> Prof. Robert Harper writes: > The word "type" should be reserved for structural purposes, they codify > the "comprehension principles" that constitute the language. Things like > "duck types" are "refinements" or "predicates" that are descriptive of the > (typed) programs that have a pre-determined operational meaning. So, of > course, Python and its relatives are and always have been unityped. Some > also have refinements to express verification conditions. I agree that there is something intuitive about the "structural" vs "descriptive" distinction. But, if you try to formalize it, you are very likely to end up with the static vs dynamic distinction. Almost all mathematicians define a category as consisting of a collection of objects and a collection of arrows, with operations called "domain" and "codomain" and say that two arrows are "composable" if dom(g) = cod(f). So, is this "composability" structural or descriptive? [I myself don't define a category that way. I say that you have a collection of objects, and for every pair of objects A and B, a collection Hom(A,B) of arrows from A to B. Being a structuralist, you know exactly why I do it that way.] But does it matter one way or another? All that matters is whether the type abstractions are being respected, so that you don't compose the wrong things and expect to get some sense out of it.
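In OCaml-like notation, the Hom-family presentation can be sketched directly (an illustrative signature only, not from any library); composability is then enforced by the typing itself rather than stated as a side condition:

    module type CATEGORY = sig
      type ('a, 'b) hom                      (* an arrow from 'a to 'b *)
      val id      : ('a, 'a) hom
      (* compose g f is only well-typed when the codomain of f matches the domain of g *)
      val compose : ('b, 'c) hom -> ('a, 'b) hom -> ('a, 'c) hom
    end

    (* e.g. the category of types and functions, with hom instantiated to the arrow type *)
    module FunCat : CATEGORY = struct
      type ('a, 'b) hom = 'a -> 'b
      let id x = x
      let compose g f = fun x -> g (f x)
    end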
Cheers, Uday From eijiro.sumii at gmail.com Sat Jan 11 04:02:10 2014 From: eijiro.sumii at gmail.com (Eijiro Sumii) Date: Sat, 11 Jan 2014 18:02:10 +0900 Subject: [TYPES] Do "phantom types" mean types like ('a, 'n)sized_list as a whole, or dummy parameters like 'n itself? Message-ID: Dear Types people, I am (again) asking a question about terminology: recently, a student of mine and I have been developing a library extensively using so-called phantom types. As far as we understand, phantom types are concerned with polymorphic data types whose type parameter does not appear in the r.h.s. of the definition, e.g. (in OCaml syntax),

    type ('a, 'n) sized_list = 'a list
    (* should be made abstract by module signature *)

where

    type zero    (* no constructor *)
    type 'n succ (* no constructor *)
    let nil : ('a, zero) sized_list = []
    let cons ((a : 'a), (d : ('a, 'n) sized_list))
      : ('a, 'n succ) sized_list = a :: d
    let car (x : ('a, 'n succ) sized_list)
      : 'a = List.hd x
    let cdr (x : ('a, 'n succ) sized_list)
      : ('a, 'n) sized_list = List.tl x

(this example is for pedagogy only). My question is: do phantom types mean types like ('a,'n)sized_list as a whole, or dummy parameters like 'n itself? The majority of the literature (and Web pages) means the former, while some seem to mean the latter. For me, the latter (the minority) seems to be more "logical" since there is no constructor for the types substituted with 'n (hence the terminology "phantom"), while ('a,'n)sized_list itself *does* have flesh and blood (i.e., values). Does anybody know any historical reason for the former (the major) terminology? N.B. Yes, the types zero and 'n succ are like type void (in the sense of type theory, not C), but they are distinguished from each other by the names. Best regards, Eijiro From dreyer at mpi-sws.org Sat Jan 11 11:26:14 2014 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Sat, 11 Jan 2014 17:26:14 +0100 Subject: [TYPES] Do "phantom types" mean types like ('a, 'n)sized_list as a whole, or dummy parameters like 'n itself? In-Reply-To: References: Message-ID: Hi, Eijiro. I believe that the phrase "phantom types" is mostly used in the literature to refer to the entire *technique* of programming with phantom types, not to the actual types themselves. Used in that way, it is unambiguous. I would agree with your intuition and assessment that when it comes to the types themselves, it is the dummy type variables (like 'n in your example) that are phantom-like, not the type constructors that are parameterized by them. But on the other hand, I can imagine practical reasons for the alternative (and apparently more common) usage. For example, if phantom types refer to the type constructors, one can say, "And now let's define some phantom types," whereas if phantom types refer to the dummy type variables one is forced to more awkwardly say, "And now let's define some types that are parameterized by phantom types." Best regards, Derek On Sat, Jan 11, 2014 at 10:02 AM, Eijiro Sumii wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Dear Types people, > > I am (again) asking a question about terminology: recently, a student > of mine and I have been developing a library extensively using > so-called phantom types. As far as we understand, phantoms types are > concerned with polymorphic data types whose type parameter does not > appear in the r.h.s. of the definition, e.g.
(in OCaml syntax), > > type ('a, 'n) sized_list = 'a list > (* should be made abstract by module signature *) > > where > > type zero (* no constructor *) > type 'n succ (* no constructor *) > let nil : ('a, zero) sized_list = [] > let cons ((a : 'a), (d : ('a, 'n) sized_list)) > : ('a, 'n succ) sized_list = a :: d > let car (x : ('a, 'n succ) sized_list) > : 'a = List.hd x > let cdr (x : ('a, 'n succ) sized_list) > : ('a, 'n) sized_list = List.tl x > > (this example is for pedagogy only). > > My question is: do phantom types mean types like ('a,'n)sized_list as > a whole, or dummy parameters like 'n itself? The majority of the > literature (and Web pages) means the former, while some seem to mean > the latter. > > For me, the latter (the minority) seems to be more "logical" since > there is no constructor for the types substituted with 'n (hence the > terminology "phantom"), while ('a,'n)sized_list itself *does* have > flesh and blood (i.e., values). Does anybody know any historical > reason for the former (the major) terminology? > > N.B. Yes, the types zero and 'n succ are like type void (in the sense > of type theory, not C), but they are distinguished from each other by > the names. > > Best regards, > > Eijiro From trupill at gmail.com Thu Feb 27 05:18:26 2014 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Thu, 27 Feb 2014 11:18:26 +0100 Subject: [TYPES] OutsideIn(X) question Message-ID: Dear Types List, I have a small question related to the "OutsideIn(X): Modular type inference with local assumptions" and was hoping that you could help me a bit. My question related about the proof of soundness and principality, specifically Lemma 7.2 (to be found in page 67). In that lemma, it's stated that QQ and \phi' Q_q ||- \phi Q_w <-> \phi' Q_w'. I'm trying to recover the proof (which is omitted in the text), but I stumble upon a wall when trying to work out what happens in the case an axiom is applied. In particular, I'm playing with an example where QQ (the set of axioms) = { forall. C a => D a } (where C and D are one-parameter type classes) Q_q = { } Q_w = { D Int } Thus, if I apply the rule DINSTW (to be found in page 65), I get a new Q_w' = { C Int } Now, if the lemma 7.2 is true, it should be the case that (1) QQ ||- C Int <-> D Int which in particular means that I have the two implications (2) { forall. C a => D a, C Int } ||- D Int (3) { forall. C a => D a, D Int } ||- C Int (2) follows easily by applying the AXIOM rule of ||- (as shown in page 54). However, I don't see how to make (3) work :( I think that understanding this example will be key for my understanding of the whole system. Anybody could point to the error in my reasoning or to places where I could find more information? Thanks in advance. From dimitris at microsoft.com Mon Mar 3 09:58:49 2014 From: dimitris at microsoft.com (Dimitrios Vytiniotis) Date: Mon, 3 Mar 2014 14:58:49 +0000 Subject: [TYPES] OutsideIn(X) question References: Message-ID: Dear Alejandro, thanks for your notice! You are right, the entailment relation in the JFP paper is too weak for type classes. As we formulated it, the simplifier can take a type class reduction step but there is no way to show that the new set of constraints is equivalent to the original class constraint. There is nothing very deep going on, it's an oversight in the definition of entailment. The fix -- at least in the case of non-overlapping instances and vanilla multi-parameter type classes -- is to ensure that all axioms for type class instances form bi-implications. 
In fact in the constraint handling rules formulation of type classes, class instances always gave rise to such bi-implications. I will post the correction to the article on my web page in the next couple of weeks and let you know, but I thought I should send an update. Best regards, Dimitrios -----Original Message----- From: Types-list [mailto:types-list-bounces at lists.seas.upenn.edu] On Behalf Of Alejandro Serrano Mena Sent: Thursday, February 27, 2014 10:18 AM To: types-list at lists.seas.upenn.edu Subject: [TYPES] OutsideIn(X) question [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] Dear Types List, I have a small question related to the "OutsideIn(X): Modular type inference with local assumptions" and was hoping that you could help me a bit. My question related about the proof of soundness and principality, specifically Lemma 7.2 (to be found in page 67). In that lemma, it's stated that QQ and \phi' Q_q ||- \phi Q_w <-> \phi' Q_w'. I'm trying to recover the proof (which is omitted in the text), but I stumble upon a wall when trying to work out what happens in the case an axiom is applied. In particular, I'm playing with an example where QQ (the set of axioms) = { forall. C a => D a } (where C and D are one-parameter type classes) Q_q = { } Q_w = { D Int } Thus, if I apply the rule DINSTW (to be found in page 65), I get a new Q_w' = { C Int } Now, if the lemma 7.2 is true, it should be the case that (1) QQ ||- C Int <-> D Int which in particular means that I have the two implications (2) { forall. C a => D a, C Int } ||- D Int (3) { forall. C a => D a, D Int } ||- C Int (2) follows easily by applying the AXIOM rule of ||- (as shown in page 54). However, I don't see how to make (3) work :( I think that understanding this example will be key for my understanding of the whole system. Anybody could point to the error in my reasoning or to places where I could find more information? Thanks in advance. From u.s.reddy at cs.bham.ac.uk Tue Apr 8 12:19:47 2014 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Tue, 8 Apr 2014 17:19:47 +0100 Subject: [TYPES] Logical relations and parametricity (Reynolds memorial paper) Message-ID: <21316.8611.577000.479836@gargle.gargle.HOWL> We would like to announce a Reynolds memorial paper that has just been published in a volume edited by John Power and Cai Wingfield: Logical Relations and Parametricity - A Reynolds Programme for Category Theory and Programming Languages (Dedicated to the memory of John C. Reynolds, 1935-2013) Abstract In his seminal paper on "Types, Abstraction and Parametric Polymorphism," John Reynolds called for homomorphisms to be generalized from functions to relations. He reasoned that such a generalization would allow type-based "abstraction" (representation independence, information hiding, naturality or parametricity) to be captured in a mathematical theory, while accounting for higher-order types. However, after 30 years of research, we do not yet know fully how to do such a generalization. In this article, we explain the problems in doing so, summarize the work carried out so far, and call for a renewed attempt at addressing the problem.
Electronic Notes in Theoretical Computer Science Volume 303, 28 March 2014, Pages 149?180 Proceedings of the Workshop on Algebra, Coalgebra and Topology (WACT 2013) The ENTCS version may be found here: http://www.sciencedirect.com/science/article/pii/S1571066114000346 and a preprint here http://www.cs.bham.ac.uk/~udr/papers/logical-relations-and-parametricity.pdf --- Part of our intent in writing this article has been to regenerate interest in the problem of parametricity, which some people perceive as having died down after sustained efforts in the 90's, and to provide clean definitions for new entrants to the area. So, if you have wondered what parametricity is all about, we hope you will find the answers here. We continue to work on parametricity ourselves and we would be very glad to hear from other people who would like to get involved. If you have work that we have missed, please let us know about it as well. (We couldn't cite all the work we would have liked to cite due to page limitations.) Claudio Hermida Uday Reddy Edmund Robinson From dreyer at mpi-sws.org Wed May 7 11:13:00 2014 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Wed, 7 May 2014 17:13:00 +0200 Subject: [TYPES] New moderator: Dimitrios Vytiniotis Message-ID: Dear Types and Types/announce members, It has been my pleasure to serve as moderator of the Types and Types/announce lists for 5 years now, but it's time to hand over the reins to someone else. Dimitrios Vytiniotis of Microsoft Research Cambridge has kindly agreed to take over as moderator. I want to thank him for agreeing to moderate the list, and I'm sure he'll do a great job! In fact, he and I have discussed it, and one of the first things he would like to do as new moderator is to solicit feedback from the community on whether the Types list is serving its members as well as it could, and if not, how best to evolve it going forward. He will be sending mail about that shortly. Best regards, Derek Dreyer From dimitris at microsoft.com Thu May 8 06:26:02 2014 From: dimitris at microsoft.com (Dimitrios Vytiniotis) Date: Thu, 8 May 2014 10:26:02 +0000 Subject: [TYPES] future of types-list (call for feedback) Message-ID: <8A4BE71AF969544FA98E6D49C10DE5A60CC54E@DB3PRD3001MB019.064d.mgd.msft.net> Dear all, (dropping types-announce) First, many thanks to Derek for moderating the list over the past years. I am excited to take up this role. Moving forward: A. It appears (to me, but shout if you disagree) that the TYPES/announce list is serving its purpose reasonably well, with a constant flow of conference, job-related news, and other announcements in the general area of types, programming languages, semantics, logic, formal methods etc. B. On the other hand, the TYPES discussion list is quiet. The past few years we've had around 10-12 discussion threads a year. This might mean several things: - Relevant discussion is happening, but happening elsewhere. E.g. you see category theory posts in the Haskell mailing lists, core type theory posts in the Coq mailing lists, and general questions and discussions in lambda-the-ultimate. - Mailing lists are too old fashioned and discussions have moved to blogs and wikis. - TYPES is potentially a slightly intimidating place for newcomers to post questions or have conversations with world-class experts? - Somewhat contradicting the first possibility: TYPES is perceived as a very specialized theoretical list (e.g. only about types) so people who don't have a specialized question would not post here. - Other? (e.g. 
fewer people interested in the core TYPES topics?) The questions really are: - Is this situation problematic for you or is TYPES serving you ("you" the seasoned researchers, as well as "you" the newcomers or students) just fine and we should not worry? - If you think there is a problem, what is your diagnosis? - Should we do anything about it? Would you like to see TYPES becoming a more active forum and do you have concrete ideas about how to do it? Please do email me (dimitris at microsoft.com) with feedback or any thoughts on the future of this list. I will collect responses and compile answers, conclusions and try to identify action items. Kind regards, Dimitrios Vytiniotis -----Original Message----- From: Types-list [mailto:types-list-bounces at lists.seas.upenn.edu] On Behalf Of Derek Dreyer Sent: Wednesday, May 7, 2014 4:13 PM To: types-announce at lists.seas.upenn.edu; Types list Subject: [TYPES] New moderator: Dimitrios Vytiniotis [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] Dear Types and Types/announce members, It has been my pleasure to serve as moderator of the Types and Types/announce lists for 5 years now, but it's time to hand over the reins to someone else. Dimitrios Vytiniotis of Microsoft Research Cambridge has kindly agreed to take over as moderator. I want to thank him for agreeing to moderate the list, and I'm sure he'll do a great job! In fact, he and I have discussed it, and one of the first things he would like to do as new moderator is to solicit feedback from the community on whether the Types list is serving its members as well as it could, and if not, how best to evolve it going forward. He will be sending mail about that shortly. Best regards, Derek Dreyer From vladimir at ias.edu Mon May 12 13:47:26 2014 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Mon, 12 May 2014 19:47:26 +0200 Subject: [TYPES] types Message-ID: Hello, I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little to do with how types were perceived by Russell or Church. For them types were a restriction mechanism. As Russell and Whitehead write: "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. I am looking for historical advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? Vladimir. From matthias at ccs.neu.edu Mon May 12 17:24:32 2014 From: matthias at ccs.neu.edu (Matthias Felleisen) Date: Mon, 12 May 2014 17:24:32 -0400 Subject: [TYPES] types In-Reply-To: References: Message-ID: On May 12, 2014, at 1:47 PM, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism.
As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? Viewing types as restrictive or enabling mechanisms is simply a matter of perspective, not intrinsic to the idea/language itself. One man's "types rule out X" is another man's "with types you can say that you can't get X" in a program. Furthermore, you can use typed thinking to organize the design of programs even in the absence of a type language and type checking. I have run courses for over 20 years this way and successfully so from the middle school level to the MS level. More concretely, you can program with types in "Scheme"; you don't need to be in "Ocaml" (quotes to remind readers that I mean the idea, not the exact language). Indeed, given the huge demand for "untyped" programmers, I argue that a responsible instructor must expose students to just such scenarios to prepare his students for this world. -- Matthias From dagitj at gmail.com Mon May 12 16:30:44 2014 From: dagitj at gmail.com (Jason Dagit) Date: Mon, 12 May 2014 13:30:44 -0700 Subject: [TYPES] types In-Reply-To: References: Message-ID: On Mon, May 12, 2014 at 10:47 AM, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it > seems to me that the concept of types as we use it today has very little > with how types where perceived by Russell or Church. > > For them types were a restriction mechanism. As Russell and Whitehead > write: > > "It should be observed that the whole effect of the doctrine of types is > negative: it forbids certain inferences which would otherwise be valid, but > does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types > in Ocaml are a device without which writing many programs would be > extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in > programming languages which were enabling rather than forbidding in nature? > My understanding, which may be wrong/incomplete, is that types started to appear as a powerful mechanism in computer science with Milner and ML. For instance: Robin Milner. A theory of type polymorphism in programming. Journal of Computer and System Sciences, 17:348-375, August 1978. I got that paper from this list, which contains many other great papers to help you build up an historical context: http://www.cis.upenn.edu/~bcpierce/courses/670Fall04/GreatWorksInPL.shtml I hope that helps! From rwh at cs.cmu.edu Mon May 12 17:40:43 2014 From: rwh at cs.cmu.edu (Prof Robert Harper) Date: Mon, 12 May 2014 17:40:43 -0400 Subject: [TYPES] types In-Reply-To: References: Message-ID: hi vladimir, i think the main thing we took from russell is the principle that "a type is the range of significance of a variable". 
their focus was, i think, on the avoidance of paradoxes such as russell had found in frege's system, and it was thought that some form of stratification would be necessary to avoid "vicious circles". russell was, in particular, critical of impredicative definitions of any kind, exactly because of their quasi-circular nature. reynolds's dictum pertaining to types for programming languages is that "a type system is a syntactic discipline for enforcing levels of abstraction". i like this definition, because it emphasizes that the power of a type system arises from its strictures, which can be selectively relaxed, not its affordances, which sacrifice the ability to draw sharp distinctions. the limiting case of the latter are uni-typed languages whose one type is better viewed as one of many types in a richer language. by using the one distinguished type among many possibilities in a multi-typed language, one can recover the freedoms of the uni-typed setting, but not the other way around. a similar situation obtains with respect to laziness: if all types are pointed, then there are no unpointed types, but a pointed type is an example of a (not necessarily pointed) type. as to the first use of types in a programming language, i suspect one could argue for algol-60, but i am not a historian of these topics, so perhaps there were earlier examples. even fortran had types (for different numbers and for arrays), but algol took them a lot more seriously, it seems to me. much later, pascal popularized the use of types, but many regarded it as a backward step because the type system was so primitive (though rather advanced for its time) that it could be hard to manage. similar ideas were used in simula, which later led to languages such as java, and in algol-68, which made an impressive use of types, but which was not very successful in practice. the big breakthrough came with the discovery/invention of type inference, milner's algorithm being the classic result in this area, which showed that in a purely functional language of considerable expressive power one could write typed programs without ever mentioning a type. this was the game-changer, because it eliminated fruitless arguments about the "clutter" and "annoyance" of having to write down types all the time, and focused attention on the role of types in the semantics of the situation. although it wasn't considered such at the time (afaik), lisp was in fact one of the first practical typed languages that greatly increased the expressive power available to programmers by having a sufficiently rich type that included symbols, pairing, and a form of higher-order functions (the early dialects didn't get this quite right, but it was on the right track). it was at the time almost magical that one could just write down a list and manipulate it without having to worry about how it would be allocated in memory. martin-loef's paper "constructive mathematics and computer programming" was the first paper to draw the explicit connection between type theory (viewed as a formulation of constructive mathematics) and programming. it is one of the most influential papers ever written in computer science for that reason. it directly inspired nuprl, coq, alf, agda, and numerous other languages and provers. 
best, bob harper On May 12, 2014, at 13:47, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism. As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > Vladimir. > > > > > From cody.roux at andrew.cmu.edu Mon May 12 17:54:30 2014 From: cody.roux at andrew.cmu.edu (Cody Roux) Date: Mon, 12 May 2014 17:54:30 -0400 Subject: [TYPES] types In-Reply-To: References: Message-ID: <53714316.1060407@andrew.cmu.edu> As others have noted, tracing historical shifts in points of view is quite difficult. I somehow feel that the "Lawvere" part of the "Curry-Howard-Lawvere-isomorphism" is at the heart of the real shift from a "prohibitive" point of view to a "prescriptive" point of view. Of course it's hard not to cite Martin-Löf "On the Meaning of Logical Constants" (the Siena lectures): http://www.ae-info.org/attach/User/Martin-L%C3%B6f_Per/OtherInformation/article.pdf It explains the philosophy behind the modern attitude towards type theory. Best, Cody On 05/12/2014 01:47 PM, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism. As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > Vladimir. > > > > > From martini at cs.unibo.it Mon May 12 18:14:23 2014 From: martini at cs.unibo.it (Simone Martini) Date: Tue, 13 May 2014 00:14:23 +0200 Subject: [TYPES] types In-Reply-To: References: Message-ID: <7D5C0DD6-0B13-4CB3-AF4E-80797D4CAEA4@cs.unibo.it> There is not a real tension between "types as restriction" (Russell) and "types as enablers" (modern programming languages)--these two views live at two different levels. From the syntactical point of view, what Russell says is an obvious truth, for their types and for our types. There are programs which would be correct according to a BNF grammar which become illegal after typing.
And, if we insist that typing be decidable, there will be programs which will not cause any type error during execution, and still will be declared illegal by typing. The essential difference with respect to Russell (and Church, and most of the "types-as-a-foundation-of-mathematics" authors) is that for them types were never supposed to be actually used by the working mathematician (with the debatable exception of Russell himself). It was sufficient that *in principle* most of the mathematics could be done in typed languages. Types in programming languages, on the contrary, while being restrictive in the same sense, are used every day by the working computer programmer. And hence, from the very beginning in Algol and Pascal, computer science had to face the problem of making types more "expressive", "flexible", "enabling". One of the first explicit attempts to combine these two views is certainly the early work on LCF/ML---Damas-Milner type inference providing a powerful mechanism of enforcing type restrictions while allowing more liberal (but principled!) reasoning. The crucial point, here and in most computer science applications of mathematical logic concepts and techniques, is that CS never used ideological glasses (types per se; constructive mathematics per se; etc.), but exploited what it found useful for the design of more elegant, economical, usable, etc. artifacts. -s. On 12/mag/2014, at 19.47, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism. As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > Vladimir. > > > > > From smcdirm at microsoft.com Mon May 12 18:33:46 2014 From: smcdirm at microsoft.com (Sean McDirmid) Date: Mon, 12 May 2014 22:33:46 +0000 Subject: [TYPES] types In-Reply-To: References: Message-ID: <4C297619-89DA-4B58-9539-DFF5B88586A9@microsoft.com> There are two places where I have seen types used prescriptively rather than restrictively. (1) Type-based implicit resolutions in languages like Scala. (2) Code completion in IDEs that is enabled by type information; types are typically used to filter choices for the programmer's "next move." (2) is more of a tooling concern, but it is important enough to be a first class concern in production languages (i.e. language features are evaluated based on how they help/hurt code completion). > On May 13, 2014, at 4:28 AM, "Vladimir Voevodsky" wrote: > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism.
As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > Vladimir. > > > > > From u.s.reddy at cs.bham.ac.uk Mon May 12 19:34:37 2014 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Tue, 13 May 2014 00:34:37 +0100 Subject: [TYPES] types In-Reply-To: References: Message-ID: <21361.23181.236000.319702@gargle.gargle.HOWL> Vladimir Voevodsky writes: > For them types were a restriction mechanism. As Russell and Whitehead > write: > > "It should be observed that the whole effect of the doctrine of types is > negative: it forbids certain inferences which would otherwise be valid, > but does not permit any which would otherwise be invalid." I am afraid the situation was very much the same in early programming language theory as well. Milner's "theory of type polymorphism in programming", which was mentioned by somebody earlier, defines an untyped calculus and then a typed version of it (ML) as a restriction of the untyped calculus. The selling point is that the programs that belong to the typed subset don't go "wrong". The first place where I have seen a contrary view was in: Dana Scott, Relating theories of the lambda calculus, in "To H.B. Curry: Essays in Combinator Logic, Lambda Calculus and Formalism", Hindley and Seldin (eds), Academic Press, 1980. Here, Scott argues that the typed lambda calculus is conceptually prior to the untyped calculus, and that the untyped calculus is just a special case of the typed calculus with a single type. That was quite remarkable to me at that time, coming as it did from the master of the untyped lambda calculus himself! I don't know how widely this paper was read, but when I went to the "Semantics of Data Types" conference in 1984, I noticed that almost everybody was talking about typed calculi without worrying much about the untyped lambda calculus. By 1984, the polymorphic lambda calculus (System F) was a hot topic (frequently called the "Girard-Reynolds calculus" at that time). Both Girard and Reynolds defined their calculus without any reference to any untyped calculus. So, all other people that worked with the polymorphic lambda calculus also followed the same approach. So, it seems to me that Scott, Girard and Reynolds, somehow jointly killed the untyped lambda calculus as a foundation by 1984. Cheers, Uday From gershomb at gmail.com Tue May 13 00:10:26 2014 From: gershomb at gmail.com (Gershom Bazerman) Date: Tue, 13 May 2014 00:10:26 -0400 Subject: [TYPES] types In-Reply-To: <53714316.1060407@andrew.cmu.edu> References: <53714316.1060407@andrew.cmu.edu> Message-ID: <53719B32.6090603@gmail.com> On 5/12/14, 5:54 PM, Cody Roux wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > As others have noted, tracing historical shifts in points of view is > quite difficult. I somehow feel that the "Lawvere" part of the > "Curry-Howard-Lawvere-isomorphism" is at the heart of the real shift > from a "prohibitive" point of view to a "prescriptive" point of view. > Thanks, Vladimir for your very interesting question. 
It is really a shame that (as far as I know) there is not a comprehensive article or book that we could turn to on this material. To follow up on the "Lawvere part", Goguen and the ADJ group developed initial algebra semantics as an algebraic theory of abstract data types in the early-mid 1970s, directly inspired by Lawvere's work in semantics. Goguen and Burstall went on to develop the CLEAR specification language, which elaborated these ideas, and as far as I know algebraic data types proper were first implemented in HOPE (described in Burstall, MacQueen, Sannella, 1980). It seems to me that the shift from data-types-as-hiding (the "abstract" notion of modularity) to data-types-as-presenting-actions (algebraic data types) is an example of precisely that shift from restriction to possibility that we're looking for. (However, I am confused by the reference to "Curry-Howard-Lawvere" as I thought the isomorphism proper [via cartesian closed categories] was due to Lambek? Is this a mistype, or was Lawvere more involved in that portion of the story than I realize?) In skimming papers looking for ideas here, I ran into a very funny counterexample as well. Reynolds' 1972 "Definitional Interpreters for Higher-Order Programming Languages" praises Scott semantics as follows: "The central technical problem that Scott has solved is to de?ne functions that are not only higher-order, but also _typeless_, so that any function may be applied to any other function, including itself." From a modern perspective, this of course reads as very strange praise. But, I suppose (since I am too young to _know_), at the time it must have seemed that being typeless was essential to allowing general recursion, higher order functions, etc. Cheers, Gershom From dreyer at mpi-sws.org Tue May 13 04:12:00 2014 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Tue, 13 May 2014 10:12:00 +0200 Subject: [TYPES] types In-Reply-To: References: Message-ID: Hi Vladimir. I would argue that types are enabling *because* they are forbidding. That sounds very zen, but I have a concrete point in mind. As Bob mentioned already, it was Reynolds who observed that types are useful for syntactically enforcing levels of abstraction, and what that really meant to Reynolds is that one can locally establish invariants for -- and change private representation of -- abstract data types (ADTs), without affecting the observable behavior of code that relies on those types. From my perspective, the crucial point here is that types *enable* programmers to write more reliable code because the burden of establishing invariants on an ADT is kept *local* to the module that defines it. The restrictions of the type system then guarantee that the rest of the program, by virtue of being forced to respect the abstraction of the ADT, is also forced to respect the ADT's locally-established invariants. Thus, the very restrictiveness of types is what enables program *correctness* to scale. Notice I said program correctness, not verification. What I mean is that programs written in mainstream typed languages often behave correctly due to complex invariants on ADTs that the programmer has in their head but that have not been written down or verified. These invariants are often really complex -- we are only now in a stage of verification research where we can even begin to express them formally! 
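A minimal OCaml sketch of the kind of locally established invariant described here (the module and its names are invented purely for illustration):

    module NonEmpty : sig
      type 'a t                     (* abstract: clients cannot forge values *)
      val make : 'a -> 'a list -> 'a t
      val head : 'a t -> 'a         (* safe: the empty-list case cannot arise *)
    end = struct
      type 'a t = 'a list           (* private representation *)
      (* Invariant, established locally: the list is never empty. *)
      let make x xs = x :: xs
      let head = function x :: _ -> x | [] -> assert false
    end

Because the representation is abstract, every client must build values with make, so the invariant reasoned about informally inside this module continues to hold however large the surrounding program becomes.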
But even ignoring formal verification, types have the practical benefit of enabling the programmer to do some complex informal reasoning locally but have confidence that it holds good when the ADT is used in a much larger program. Best regards, Derek On Mon, May 12, 2014 at 7:47 PM, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism. As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > Vladimir. > > > > > From psztxa at exmail.nottingham.ac.uk Tue May 13 04:52:42 2014 From: psztxa at exmail.nottingham.ac.uk (Altenkirch Thorsten) Date: Tue, 13 May 2014 09:52:42 +0100 Subject: [TYPES] types In-Reply-To: <21361.23181.236000.319702@gargle.gargle.HOWL> Message-ID: Thank you Uday. My understanding of Type Theory is that types are first and values come later, they are only justified with reference to a type. The view that types are only a way to classify untyped entities is outdated because it focusses on the mechanism instead of the functionality and hence hinders abstraction. This is also the fundamental issue with set theory, where we can talk about elements and membership independently (and we can ask stupid questions such as "is the empty set an element of the natural numbers?"). While it is interesting as Vladimir suggests to look into the historic origins of the strongly typed approach and I think you are right to identify Scott, Girard and Reynolds but should not forget Martin-Loef, I think it is important to overcome the low level view that untyped entities are first, which is is still prevalent among many people. Thorsten On 13/05/2014 00:34, "Uday S Reddy" wrote: >[ The Types Forum, >http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >Vladimir Voevodsky writes: > >> For them types were a restriction mechanism. As Russell and Whitehead >> write: >> >> "It should be observed that the whole effect of the doctrine of types is >> negative: it forbids certain inferences which would otherwise be valid, >> but does not permit any which would otherwise be invalid." > >I am afraid the situation was very much the same in early programming >language theory as well. Milner's "theory of type polymorphism in >programming", which was mentioned by somebody earlier, defines an untyped >calculus and then a typed version of it (ML) as a restriction of the >untyped >calculus. The selling point is that the programs that belong to the typed >subset don't go "wrong". > >The first place where I have seen a contrary view was in: > > Dana Scott, Relating theories of the lambda calculus, in "To H.B. >Curry: > Essays in Combinator Logic, Lambda Calculus and Formalism", Hindley >and > Seldin (eds), Academic Press, 1980. 
> >Here, Scott argues that the typed lambda calculus is conceptually prior to >the untyped calculus, and that the untyped calculus is just a special case >of the typed calculus with a single type. That was quite remarkable to me >at that time, coming as it did from the master of the untyped lambda >calculus himself! > >I don't know how widely this paper was read, but when I went to the >"Semantics of Data Types" conference in 1984, I noticed that almost >everybody was talking about typed calculi without worrying much about the >untyped lambda calculus. > >By 1984, the polymorphic lambda calculus (System F) was a hot topic >(frequently called the "Girard-Reynolds calculus" at that time). Both >Girard and Reynolds defined their calculus without any reference to any >untyped calculus. So, all other people that worked with the polymorphic >lambda calculus also followed the same approach. > >So, it seems to me that Scott, Girard and Reynolds, somehow jointly killed >the untyped lambda calculus as a foundation by 1984. > >Cheers, >Uday
From wadler at inf.ed.ac.uk Tue May 13 06:43:25 2014 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 13 May 2014 11:43:25 +0100 Subject: [TYPES] types In-Reply-To: <53719B32.6090603@gmail.com> References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> Message-ID: Vladimir's intriguing question reminds me of a related question: Why do we call them 'types' at all? That is, how did the concept named by Russell end up as a term in computing? Perhaps someone on this list knows the answer. Turing published papers on type theory, so we know Turing read Russell (citation available), and we know Backus used 'type' in FORTRAN (citation required---I just tried scanning for an early FORTRAN paper to confirm it uses the word 'type', but did not succeed in ten minutes). What is the line between these two? Did Turing use the term 'type' in connection with computers? Did von Neumann or the other early architects? What source or sources influenced Backus? Yours, -- P . \ Philip Wadler, Professor of Theoretical Computer Science . /\ School of Informatics, University of Edinburgh . / \ http://homepages.inf.ed.ac.uk/wadler/ On 13 May 2014 05:10, Gershom Bazerman wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > On 5/12/14, 5:54 PM, Cody Roux wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/ >> mailman/listinfo/types-list ] >> >> As others have noted, tracing historical shifts in points of view is >> quite difficult. I somehow feel that the "Lawvere" part of the >> "Curry-Howard-Lawvere-isomorphism" is at the heart of the real shift >> from a "prohibitive" point of view to a "prescriptive" point of view. >> >> Thanks, Vladimir for your very interesting question. 
It is really a > shame that (as far as I know) there is not a comprehensive article or book > that we could turn to on this material. To follow up on the "Lawvere part", > Goguen and the ADJ group developed initial algebra semantics as an > algebraic theory of abstract data types in the early-mid 1970s, directly > inspired by Lawvere's work in semantics. Goguen and Burstall went on to > develop the CLEAR specification language, which elaborated these ideas, and > as far as I know algebraic data types proper were first implemented in HOPE > (described in Burstall, MacQueen, Sannella, 1980). It seems to me that the > shift from data-types-as-hiding (the "abstract" notion of modularity) to > data-types-as-presenting-actions (algebraic data types) is an example of > precisely that shift from restriction to possibility that we're looking for. > > (However, I am confused by the reference to "Curry-Howard-Lawvere" as I > thought the isomorphism proper [via cartesian closed categories] was due to > Lambek? Is this a mistype, or was Lawvere more involved in that portion of > the story than I realize?) > > In skimming papers looking for ideas here, I ran into a very funny > counterexample as well. Reynolds' 1972 "Definitional Interpreters for > Higher-Order Programming Languages" praises Scott semantics as follows: > "The central technical problem that Scott has solved is to define functions > that are not only higher-order, but also _typeless_, so that any function > may be applied to any other function, including itself." From a modern > perspective, this of course reads as very strange praise. But, I suppose > (since I am too young to _know_), at the time it must have seemed that > being typeless was essential to allowing general recursion, higher order > functions, etc. > > Cheers, > Gershom
From wadler at inf.ed.ac.uk Tue May 13 06:52:03 2014 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 13 May 2014 11:52:03 +0100 Subject: [TYPES] types In-Reply-To: References: Message-ID: Another landmark paper in the development of types as enablers is: Jim Morris, Types are not sets, POPL 1973. (That's the first POPL.) http://dl.acm.org/citation.cfm?id=512938 Yours, -- P . \ Philip Wadler, Professor of Theoretical Computer Science . /\ School of Informatics, University of Edinburgh . / \ http://homepages.inf.ed.ac.uk/wadler/ On 13 May 2014 09:12, Derek Dreyer wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > Hi Vladimir. > > I would argue that types are enabling *because* they are forbidding. > That sounds very zen, but I have a concrete point in mind. > > As Bob mentioned already, it was Reynolds who observed that types are > useful for syntactically enforcing levels of abstraction, and what > that really meant to Reynolds is that one can locally establish > invariants for -- and change private representation of -- abstract > data types (ADTs), without affecting the observable behavior of code > that relies on those types. From my perspective, the crucial point > here is that types *enable* programmers to write more reliable code > because the burden of establishing invariants on an ADT is kept > *local* to the module that defines it. 
The restrictions of the type > system then guarantee that the rest of the program, by virtue of being > forced to respect the abstraction of the ADT, is also forced to > respect the ADT's locally-established invariants. Thus, the very > restrictiveness of types is what enables program *correctness* to > scale. > > Notice I said program correctness, not verification. What I mean is > that programs written in mainstream typed languages often behave > correctly due to complex invariants on ADTs that the programmer has in > their head but that have not been written down or verified. These > invariants are often really complex -- we are only now in a stage of > verification research where we can even begin to express them > formally! But even ignoring formal verification, types have the > practical benefit of enabling the programmer to do some complex > informal reasoning locally but have confidence that it holds good when > the ADT is used in a much larger program. > > Best regards, > Derek > > On Mon, May 12, 2014 at 7:47 PM, Vladimir Voevodsky > wrote: > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Hello, > > > > I am reading Russell's texts and the more I investigate them the more it > seems to me that the concept of types as we use it today has very little > with how types where perceived by Russell or Church. > > > > For them types were a restriction mechanism. As Russell and Whitehead > write: > > > > "It should be observed that the whole effect of the doctrine of types is > negative: it forbids certain inferences which would otherwise be valid, but > does not permit any which would otherwise be invalid." > > > > The types which we use today are a constructive tool. For example, types > in Ocaml are a device without which writing many programs would be > extremely inconvenient. > > > > I am looking for a historic advice - when and where did types appear in > programming languages which were enabling rather than forbidding in nature? > > > > Vladimir. > > > > > > > > > > > > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From carette at mcmaster.ca Tue May 13 07:57:43 2014 From: carette at mcmaster.ca (Jacques Carette) Date: Tue, 13 May 2014 07:57:43 -0400 Subject: [TYPES] types In-Reply-To: References: Message-ID: <537208B7.4070706@mcmaster.ca> The history of type theory is actually rather well laid out at http://en.wikipedia.org/wiki/History_of_type_theory. I have not read this book http://www.amazon.com/History-Philosophy-Constructive-Synthese-Library/dp/9048154030 but it appears topical. However, I am quite surprised that no one has yet mentioned "A Modern Perspective on Type Theory", http://www.springer.com/mathematics/book/978-1-4020-2334-7 which takes a historical path through type theory. Perhaps because the types mailing list is inhabited by mainly PL people? Having said that, I want to focus on a particular aspect of the question: On 2014-05-12 1:47 PM, Vladimir Voevodsky wrote: > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. While many have been quite philosophical about what this means, I think something quite simple is being forgotten here: pattern matching. Without types, eliminators for sums are very awkward to deal with. 
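To make the contrast concrete, an illustrative OCaml fragment (added for this point, not taken from the message): the same function written with a declared sum type and pattern matching, and again in a Church-style encoding where each value carries its own eliminator.

    type shape = Circle of float | Rect of float * float

    (* With a sum type, the case analysis is direct: *)
    let area = function
      | Circle r -> 3.14159 *. r *. r
      | Rect (w, h) -> w *. h

    (* Without the sum type, constructors are their own eliminators and
       every case analysis becomes continuation passing: *)
    let circle r = fun on_circle _on_rect -> on_circle r
    let rect w h = fun _on_circle on_rect -> on_rect w h
    let area' s = s (fun r -> 3.14159 *. r *. r) (fun w h -> w *. h)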
While one can argue how much benefit there is in types for those whose values are given largely through their introduction form, types really shine when types are largely given by their elimination forms. Oh, it is possible to program entirely in terms of eliminators, but the results are "extremely inconvenient" indeed! To me, types enable a nice concept of sum types, which in turn enable a nice notion of pattern matching . [I use 'enable' here, knowing that one can do pattern matching without sums and without types, but the results are not as compelling.] And pattern matching allows programs to be written much more succinctly and clearly than without. Jacques From historium at gmail.com Tue May 13 08:04:11 2014 From: historium at gmail.com (Tamreen Khan) Date: Tue, 13 May 2014 08:04:11 -0400 Subject: [TYPES] types In-Reply-To: References: Message-ID: Is it possible that calling them "types" in FORTRAN was just a coincidence in that it used the same term used by Russell? I was under the impression that types as used in that language didn't really have as much of a mathematical basis as they do today. Tamreen On Tue, May 13, 2014 at 6:52 AM, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > > Another landmark paper in the development of types as enablers is: > > Jim Morris, Types are nots sets, POPL 1973. (That's the first POPL.) > http://dl.acm.org/citation.cfm?id=512938 > > Yours, -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science > . /\ School of Informatics, University of Edinburgh > . / \ http://homepages.inf.ed.ac.uk/wadler/ > > > On 13 May 2014 09:12, Derek Dreyer wrote: > > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > > > Hi Vladimir. > > > > I would argue that types are enabling *because* they are forbidding. > > That sounds very zen, but I have a concrete point in mind. > > > > As Bob mentioned already, it was Reynolds who observed that types are > > useful for syntactically enforcing levels of abstraction, and what > > that really meant to Reynolds is that one can locally establish > > invariants for -- and change private representation of -- abstract > > data types (ADTs), without affecting the observable behavior of code > > that relies on those types. From my perspective, the crucial point > > here is that types *enable* programmers to write more reliable code > > because the burden of establishing invariants on an ADT is kept > > *local* to the module that defines it. The restrictions of the type > > system then guarantee that the rest of the program, by virtue of being > > forced to respect the abstraction of the ADT, is also forced to > > respect the ADT's locally-established invariants. Thus, the very > > restrictiveness of types is what enables program *correctness* to > > scale. > > > > Notice I said program correctness, not verification. What I mean is > > that programs written in mainstream typed languages often behave > > correctly due to complex invariants on ADTs that the programmer has in > > their head but that have not been written down or verified. These > > invariants are often really complex -- we are only now in a stage of > > verification research where we can even begin to express them > > formally! But even ignoring formal verification, types have the > > practical benefit of enabling the programmer to do some complex > > informal reasoning locally but have confidence that it holds good when > > the ADT is used in a much larger program. 
> > > > Best regards, > > Derek > > > > On Mon, May 12, 2014 at 7:47 PM, Vladimir Voevodsky > > wrote: > > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > Hello, > > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > > For them types were a restriction mechanism. As Russell and Whitehead write: > > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. > > > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > > Vladimir.
From patrick.browne at dit.ie Tue May 13 08:09:38 2014 From: patrick.browne at dit.ie (PATRICK BROWNE) Date: Tue, 13 May 2014 13:09:38 +0100 Subject: [TYPES] Types as theories Message-ID: Dear list I am studying Goguen's paper Types as theories [1]. Below are two examples coded in CafeOBJ (any other algebraic specification language would do). Based on Goguen's paper, are the following true? 1) Subsort inheritance provides a classification of values, every value of the sub-sort is a value of the super-sort. (MONOID1) 2) Module import inheritance provides classification of models, each model of the class of models of the importing module contains a model of the imported module. (MONOID2) My intention is that MONOID1 illustrates 1) and 2) MONOID2 illustrates 2).
Assumptions:
 the CafeOBJ module is the software realization of a theory
 loose semantics, so any model of TRIV will do
 the term type is a synonym for sort
 the < symbol indicates subsort
 a monoid is associative and has a neutral element.

mod* MONOID1 {
  [ Monoid < Elt ]
  -- Signature in terms of super type
  op e : -> Elt
  op _._ : Elt Elt -> Elt
  -- equations in terms of subtype
  vars A B C : Monoid
  eq A . e = A .
  eq e . A = A .
  eq A . (B . C) = ((A . B) . C) .
}

mod* MONOID2 {
  -- The TRIV theory is represented by the CafeOBJ built-in loose theory TRIV,
  -- which has Elt as its principal sort and no operations.
  -- I used protecting mode for import, perhaps it should be extending?
  pr(TRIV)
  op e : -> Elt
  op _._ : Elt Elt -> Elt
  vars A B C : Elt
  eq A . e = A .
  eq e . A = A .
  eq A . (B . C) = ((A . B) . C) .
  -- Associativity could be specified as a property
}

Regards, Pat
[1] Goguen, J. (1991). Types as Theories. Topology and Category in Computer Science. G. M. Reed, A. W. Roscoe and R. F. Wachter, Oxford University Press: 357-390. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.2241
From wadler at inf.ed.ac.uk Tue May 13 07:53:53 2014 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 13 May 2014 12:53:53 +0100 Subject: [TYPES] types In-Reply-To: References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> Message-ID: Thanks, Tony, but that answer is incomplete. What is the linkage from Church to the early computing machines? Turing is an obvious link, but I don't know in which paper, or whether, he uses types in connection to computing machinery. I had thought FORTRAN referred to types, but I am halfway through downloading an early paper and it appears I may be mistaken. So Church may be part of the path, but what is the rest? Yours, -- P . \ Philip Wadler, Professor of Theoretical Computer Science . /\ School of Informatics, University of Edinburgh . / \ http://homepages.inf.ed.ac.uk/wadler/ On 13 May 2014 12:45, Tony Dekker wrote: > Why do we use Russell's term "types"? Church's classic 1940 paper "A > Formulation of the Simple Theory of Types" explicitly cites Russell's work > as an influence. Because of Church's great influence, I presume this paper > provides the "linkage." > > -- Anthony Dekker / dekker at acm.org > > > On Tue, May 13, 2014 at 8:43 PM, Philip Wadler wrote: > >> [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> >> Vladimir's intriguing question reminds me of a related question: Why do we >> call them 'types' at all?. That is, how did the concept named by Russell >> end up as a term in computing? Perhaps someone on this list knows the >> answer. >> >> Turing published papers on type theory, so we know Turing read Russell >> (citation available), and we know Backus used 'type' in FORTRAN (citation >> required---I just tried scanning for an early FORTRAN paper to confirm it >> uses the word 'type', but did not succeed in ten minutes). What is the >> line >> between these two? Did Turing use the term 'type' in connection with >> computers? Did von Neumann or the other early architects? What source or >> sources influenced Backus? >> >> Yours, -- P >> >> . \ Philip Wadler, Professor of Theoretical Computer Science >> . /\ School of Informatics, University of Edinburgh >> . / \ http://homepages.inf.ed.ac.uk/wadler/ >> >> >> On 13 May 2014 05:10, Gershom Bazerman wrote: >> >> > [ The Types Forum, >> http://lists.seas.upenn.edu/mailman/listinfo/types-list] >> > >> > On 5/12/14, 5:54 PM, Cody Roux wrote: >> > >> >> [ The Types Forum, http://lists.seas.upenn.edu/ >> >> mailman/listinfo/types-list ] >> >> >> >> As others have noted, tracing historical shifts in points of view is >> >> quite difficult. I somehow feel that the "Lawvere" part of the >> >> "Curry-Howard-Lawvere-isomorphism" is at the heart of the real shift >> >> from a "prohibitive" point of view to a "prescriptive" point of view. >> >> >> >> Thanks, Vladimir for your very interesting question. It is really a >> > shame that (as far as I know) there is not a comprehensive article or >> book >> > that we could turn to on this material. 
To follow up on the "Lawvere >> part", >> > Goguen and the ADJ group developed initial algebra semantics as an >> > algebraic theory of abstract data types in the early-mid 1970s, directly >> > inspired by Lawvere's work in semantics. Goguen and Burstall went on to >> > develop the CLEAR specification language, which elaborated these ideas, >> and >> > as far as I know algebraic data types proper were first implemented in >> HOPE >> > (described in Burstall, MacQueen, Sannella, 1980). It seems to me that >> the >> > shift from data-types-as-hiding (the "abstract" notion of modularity) to >> > data-types-as-presenting-actions (algebraic data types) is an example of >> > precisely that shift from restriction to possibility that we're looking >> for. >> > >> > (However, I am confused by the reference to "Curry-Howard-Lawvere" as I >> > thought the isomorphism proper [via cartesian closed categories] was >> due to >> > Lambek? Is this a mistype, or was Lawvere more involved in that portion >> of >> > the story than I realize?) >> > >> > In skimming papers looking for ideas here, I ran into a very funny >> > counterexample as well. Reynolds' 1972 "Definitional Interpreters for >> > Higher-Order Programming Languages" praises Scott semantics as follows: >> > "The central technical problem that Scott has solved is to de?ne >> functions >> > that are not only higher-order, but also _typeless_, so that any >> function >> > may be applied to any other function, including itself." From a modern >> > perspective, this of course reads as very strange praise. But, I suppose >> > (since I am too young to _know_), at the time it must have seemed that >> > being typeless was essential to allowing general recursion, higher order >> > functions, etc. >> > >> > Cheers, >> > Gershom >> > >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> >> > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From tony.dekker at gmail.com Tue May 13 08:42:37 2014 From: tony.dekker at gmail.com (Tony Dekker) Date: Tue, 13 May 2014 22:42:37 +1000 Subject: [TYPES] types In-Reply-To: References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> Message-ID: Philip, That's true -- I can't trace an exact path, but Turing does at least cite that 1940 paper by Church [Newman and Turing (1942), "A Formal Theorem in Church's Theory of Types" / Turing (1948), "Practical Forms of Type Theory"]. More significantly, ALGOL 60 was influenced by Church, and Landin explicitly acknowledges that in some of his papers. I'm *guessing* this influence extended to ALGOL's type system, but I haven't checked. -- Anthony Dekker / dekker at acm.org On Tue, May 13, 2014 at 9:53 PM, Philip Wadler wrote: > Thanks, Tony, but that answer is incomplete. What is the linkage from > Church to the early computing machines? Turing is an obvious link, but I > don't know in which paper, or whether, he uses types in connection to > computing machinery. I had thought FORTRAN referred to types, but I am > halfway through downloading an early paper and it appears I may be > mistaken. So Church may be part of the path, but what is the rest? Yours, > -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science > . /\ School of Informatics, University of Edinburgh > . / \ http://homepages.inf.ed.ac.uk/wadler/ > > > On 13 May 2014 12:45, Tony Dekker wrote: > >> Why do we use Russell's term "types"? 
Church's classic 1940 paper "A >> Formulation of the Simple Theory of Types" explicitly cites Russell's work >> as an influence. Because of Church's great influence, I presume this paper >> provides the "linkage." >> >> -- Anthony Dekker / dekker at acm.org >> >> >> On Tue, May 13, 2014 at 8:43 PM, Philip Wadler wrote: >> >>> [ The Types Forum, >>> http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >>> >>> >>> Vladimir's intriguing question reminds me of a related question: Why do >>> we >>> call them 'types' at all?. That is, how did the concept named by Russell >>> end up as a term in computing? Perhaps someone on this list knows the >>> answer. >>> >>> Turing published papers on type theory, so we know Turing read Russell >>> (citation available), and we know Backus used 'type' in FORTRAN (citation >>> required---I just tried scanning for an early FORTRAN paper to confirm it >>> uses the word 'type', but did not succeed in ten minutes). What is the >>> line >>> between these two? Did Turing use the term 'type' in connection with >>> computers? Did von Neumann or the other early architects? What source or >>> sources influenced Backus? >>> >>> Yours, -- P >>> >>> . \ Philip Wadler, Professor of Theoretical Computer Science >>> . /\ School of Informatics, University of Edinburgh >>> . / \ http://homepages.inf.ed.ac.uk/wadler/ >>> >>> >>> On 13 May 2014 05:10, Gershom Bazerman wrote: >>> >>> > [ The Types Forum, >>> http://lists.seas.upenn.edu/mailman/listinfo/types-list] >>> > >>> > On 5/12/14, 5:54 PM, Cody Roux wrote: >>> > >>> >> [ The Types Forum, http://lists.seas.upenn.edu/ >>> >> mailman/listinfo/types-list ] >>> >> >>> >> As others have noted, tracing historical shifts in points of view is >>> >> quite difficult. I somehow feel that the "Lawvere" part of the >>> >> "Curry-Howard-Lawvere-isomorphism" is at the heart of the real shift >>> >> from a "prohibitive" point of view to a "prescriptive" point of view. >>> >> >>> >> Thanks, Vladimir for your very interesting question. It is really a >>> > shame that (as far as I know) there is not a comprehensive article or >>> book >>> > that we could turn to on this material. To follow up on the "Lawvere >>> part", >>> > Goguen and the ADJ group developed initial algebra semantics as an >>> > algebraic theory of abstract data types in the early-mid 1970s, >>> directly >>> > inspired by Lawvere's work in semantics. Goguen and Burstall went on to >>> > develop the CLEAR specification language, which elaborated these >>> ideas, and >>> > as far as I know algebraic data types proper were first implemented in >>> HOPE >>> > (described in Burstall, MacQueen, Sannella, 1980). It seems to me that >>> the >>> > shift from data-types-as-hiding (the "abstract" notion of modularity) >>> to >>> > data-types-as-presenting-actions (algebraic data types) is an example >>> of >>> > precisely that shift from restriction to possibility that we're >>> looking for. >>> > >>> > (However, I am confused by the reference to "Curry-Howard-Lawvere" as I >>> > thought the isomorphism proper [via cartesian closed categories] was >>> due to >>> > Lambek? Is this a mistype, or was Lawvere more involved in that >>> portion of >>> > the story than I realize?) >>> > >>> > In skimming papers looking for ideas here, I ran into a very funny >>> > counterexample as well. 
Reynolds' 1972 "Definitional Interpreters for >>> > Higher-Order Programming Languages" praises Scott semantics as follows: >>> > "The central technical problem that Scott has solved is to de?ne >>> functions >>> > that are not only higher-order, but also _typeless_, so that any >>> function >>> > may be applied to any other function, including itself." From a modern >>> > perspective, this of course reads as very strange praise. But, I >>> suppose >>> > (since I am too young to _know_), at the time it must have seemed that >>> > being typeless was essential to allowing general recursion, higher >>> order >>> > functions, etc. >>> > >>> > Cheers, >>> > Gershom >>> > >>> >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> >>> >> > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > From dan at ghica.net Tue May 13 08:18:53 2014 From: dan at ghica.net (Dan Ghica) Date: Tue, 13 May 2014 13:18:53 +0100 Subject: [TYPES] Types Message-ID: On 12 May 2014, at 22:24, Matthias Felleisen wrote: > Viewing types as restrictive or enabling mechanisms is simply a matter of perspective, not intrinsic to the idea/language itself. One man's "types rule out X" is another man's "with types you can say that you can't get X" in a program. Since you said ?intrinsic?, I will mention another classic: The Meaning of Types : From Intrinsic to Extrinsic Semantics , by Reynolds http://repository.cmu.edu/cgi/viewcontent.cgi?article=2290&context=compsci I think it addresses the issue Vladimir raised head on. From the abstract: "A definition of a typed language is said to be ?intrinsic? if it assigns meanings to typings rather than arbitrary phrases, so that ill-typed phrases are meaningless. In contrast, a definition is said to be ?extrinsic? if all phrases have meanings that are independent of their typings, while typings represent properties of these meanings.? drg Dr. Dan R. Ghica Reader in Semantics of Programming Languages University of Birmingham, School of Computer Science From samth at cs.indiana.edu Tue May 13 09:09:01 2014 From: samth at cs.indiana.edu (Sam Tobin-Hochstadt) Date: Tue, 13 May 2014 09:09:01 -0400 Subject: [TYPES] types In-Reply-To: <25974_1399983731_s4DCM4oX012338_537208B7.4070706@mcmaster.ca> References: <25974_1399983731_s4DCM4oX012338_537208B7.4070706@mcmaster.ca> Message-ID: On Tue, May 13, 2014 at 7:57 AM, Jacques Carette wrote: > [I use 'enable' here, knowing that one can do pattern matching without sums > and without types, but the results are not as compelling.] Jacques, I'm not sure what you mean by compelling, but any pair of these three is available without the other. For example, patterns matching and sums (but without types) are used extensively in Friedman and Wand's Essentials of Programming Languages textbook, and pattern matching with types but without sums can be seen in Typed Racket (for example, our paper on functional data structures [1]). Types and sums without pattern matching is easy to define, but as you say, awkward to program with. 
Sam Tobin-Hochstadt [1] http://www.ccs.neu.edu/racket/pubs/sfp10-kth.pdf with source at https://github.com/takikawa/tr-pfds/ From wadler at inf.ed.ac.uk Tue May 13 09:32:31 2014 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Tue, 13 May 2014 14:32:31 +0100 Subject: [TYPES] types In-Reply-To: <537208B7.4070706@mcmaster.ca> References: <537208B7.4070706@mcmaster.ca> Message-ID: I finally downloaded a scan of "The FORTRAN automatic coding system". The word type appears several times, for instance to distinguish "IF-type statement" from "DO-type statement", but is not used to distinguish fixed-point from floating-point. Yours, -- P J. W. Backus, et al, The FORTRAN automatic coding system, Papers presented at the 26-28 February 1957, Western Joint Computer Conference. http://dl.acm.org/citation.cfm?id=1455599 . \ Philip Wadler, Professor of Theoretical Computer Science . /\ School of Informatics, University of Edinburgh . / \ http://homepages.inf.ed.ac.uk/wadler/ On 13 May 2014 12:57, Jacques Carette wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > The history of type theory is actually rather well laid out at > http://en.wikipedia.org/wiki/History_of_type_theory. > > I have not read this book http://www.amazon.com/History- > Philosophy-Constructive-Synthese-Library/dp/9048154030 but it appears > topical. > > However, I am quite surprised that no one has yet mentioned "A Modern > Perspective on Type Theory", > http://www.springer.com/mathematics/book/978-1-4020-2334-7 > which takes a historical path through type theory. Perhaps because the > types mailing list is inhabited by mainly PL people? > > Having said that, I want to focus on a particular aspect of the question: > > > On 2014-05-12 1:47 PM, Vladimir Voevodsky wrote: > >> The types which we use today are a constructive tool. For example, types >> in Ocaml are a device without which writing many programs would be >> extremely inconvenient. >> > > While many have been quite philosophical about what this means, I think > something quite simple is being forgotten here: pattern matching. Without > types, eliminators for sums are very awkward to deal with. While one can > argue how much benefit there is in types for those whose values are given > largely through their introduction form, types really shine when types are > largely given by their elimination forms. Oh, it is possible to program > entirely in terms of eliminators, but the results are "extremely > inconvenient" indeed! > > To me, types enable a nice concept of sum types, which in turn enable a > nice notion of pattern matching . [I use 'enable' here, knowing that one > can do pattern matching without sums and without types, but the results are > not as compelling.] And pattern matching allows programs to be written > much more succinctly and clearly than without. > > Jacques > > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: not available URL: From antti-juhani at kaijanaho.fi Tue May 13 10:11:22 2014 From: antti-juhani at kaijanaho.fi (Antti-Juhani Kaijanaho) Date: Tue, 13 May 2014 17:11:22 +0300 Subject: [TYPES] types In-Reply-To: Message-ID: <20140513141120.GB3005@teralehti.kaijanaho.fi> On Mon, May 12, 2014 at 05:40:43PM -0400, Prof Robert Harper wrote: > as to the first use of types in a programming language, i suspect one could > argue for algol-60, but i am not a historian of these topics, so perhaps > there were earlier examples. 
even fortran had types (for different numbers > and for arrays), but algol took them a lot more seriously, it seems to me. > much later, pascal popularized the use of types, but many regarded it as a > backward step because the type system was so primitive (though rather > advanced for its time) that it could be hard to manage. similar ideas were > used in simula, which later led to languages such as java, and in algol-68, > which made an impressive use of types, but which was not very successful in > practice. There's too little proper historical research on the history of programming languages. I did some digging for a section on the history of types in my PhLic thesis manuscript (to be submitted, hopefully, very soon), but since it was a side issue, I too did not use a proper historical research approach on it. However, the following notes are based on what I found. FORTRAN appears to be the first language to use a compile-time mechanism to distinguish between integers and floating point values. ALGOL 58 introduced the idea of type declarations for variables (types were still only used to distinguish between integers and floats). Note that in ALGOL 58 and ALGOL 60, arrays were not included in the type concept. In the meanwhile, FLOW-MATIC pioneered the idea of separating data description from the algorithm, which directly influenced the data description capabilities of COBOL and through it PL/I. The key paper in my opinion is Hoare's "Record handling" (ALGOL Bulletin 21, 1965, http://dl.acm.org/citation.cfm?id=1061041). In prior compiled languages, types were just a way to distinguish between integers and floats (them having different representations and thus needed to be distinguished for the compiler's convenience), but Hoare combined the ideas of structured values and reference values into the now-familiar idea of record types (and also mentions tagged union types as a possible extension). This paper directly influenced at least ALGOL 68, PASCAL, and, I believe, the class concept of SIMULA 67. On Tue, May 13, 2014 at 11:43:25AM +0100, Philip Wadler wrote: > Vladimir's intriguing question reminds me of a related question: Why do we > call them 'types' at all?. That is, how did the concept named by Russell > end up as a term in computing? Perhaps someone on this list knows the > answer. > > Turing published papers on type theory, so we know Turing read Russell > (citation available), and we know Backus used 'type' in FORTRAN (citation > required---I just tried scanning for an early FORTRAN paper to confirm it > uses the word 'type', but did not succeed in ten minutes). What is the line > between these two? Did Turing use the term 'type' in connection with > computers? Did von Neumann or the other early architects? What source or > sources influenced Backus? Backus et al considered the design of FORTRAN 0 an easy problem and did not spend much time on it. I doubt they considered formal logic much. The following quote from the HOPL I paper (p. 32 in the book) is, I think, illuminating: "In our naive unawareness of language design problems ? of course we knew nothing of many issues which were later thought to be important, e.g., block structure, conditional expressions, type declarations ? 
it seemed to us that once one had the notions of the assignment statement, the subscripted variable, and the DO statement in hand (and these were among our earliest ideas), then the remaining problems of language design were trivial" The October 1956 manual for FORTRAN uses the word "type" a lot, talking about "[t]wo types of constant" (p. 9) and "[t]wo types of variable" (p.9; in both cases, integer and floating point) but also of "32 types of statement" (p. 8) including "IF-type [and] GO TO-type" (p. 21). It seems to me their use of type follows from ordinary English, not from the theory of types in logic. The programming language community did not seem to be aware of the development of logic in general much, at least until the end of the 1970s. McCarthy was aware of the lambda calculus when he came up with LISP but had not studied it in detail. Landin (1965, doi:10.1145/ 363744.363749) used an untyped lambda calculus to describe Algol 60, which is typed. The first use of the simply typed lambda calculus I have been able to find was in Morris' PhD thesis (1969, http://dspace.mit.edu/handle/1721.1/64850), which does not cite any literature on it; I get the impression he was not aware of the previous work and independently rediscovered it. Reynolds (1974, doi:10.1007/3-540-06859-7_148) cited Morris for the idea of a typed lambda calculus and no source from the logic side of things so far as I can see. Milner (1978) did not cite Girard although he did cite Reynolds; his only citation from the logic side is to Hindley, unknown to him while he was working on the ideas of the paper (and Scott, if one counts him as being on the logic side). -- Antti-Juhani Kaijanaho From grosu at illinois.edu Tue May 13 10:17:10 2014 From: grosu at illinois.edu (Rosu, Grigore) Date: Tue, 13 May 2014 14:17:10 +0000 Subject: [TYPES] Types as theories In-Reply-To: References: Message-ID: <0A9D1F138D97BC4F9ACD513FCDC40017401AEDD4@CITESMBX4.ad.uillinois.edu> Hi Pat, Your two claims are correct, but note that in the second case, model M containing another model M' can mean many different things, depending upon the importing mode. For example, in your second example, you are stating that you protect TRIV, which actually makes your MONOID2 spec inconsistent: indeed, there is no model of MONOID2 whose reduct to the signature of TRIV is the initial model of TRIV, because the former will always contain at least one element (the unit) while the latter is the empty model. So yes, you should use extending instead of protecting mode. However, both your algebraic specifications are unusual for the OBJ/CafeOBJ/Maude language families. The most natural definition of a monoid is to just have a sort Monoid, the unit, the binary operation, and the three axioms, without getting the TRIV theory or the Elt sort involved at all. Grigore ________________________________________ From: Types-list [types-list-bounces at lists.seas.upenn.edu] on behalf of PATRICK BROWNE [patrick.browne at dit.ie] Sent: Tuesday, May 13, 2014 7:09 AM To: types-list at lists.seas.upenn.edu Subject: [TYPES] Types as theories [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] Dear list I am studying Goguen's paper Types as theories [1]. Below are two examples coded in CafeOBJ (any other algebraic specification language would do). Based on Goguen's paper, are the following true? 
1) Subsort inheritance provides a classification of values, every value of the sub-sort is a value of the super-sort. (MONOID1) 2) Module import inheritance provides classification of models, each model of the class of models of the importing module contains a model of the imported module. (MONOID2) My intention is that MONOID1 illustrates 1) and 2) MONOID2 illustrates 2).
Assumptions:
 the CafeOBJ module is the software realization of a theory
 loose semantics, so any model of TRIV will do
 the term type is a synonym for sort
 the < symbol indicates subsort
 a monoid is associative and has a neutral element.

mod* MONOID1 {
  [ Monoid < Elt ]
  -- Signature in terms of super type
  op e : -> Elt
  op _._ : Elt Elt -> Elt
  -- equations in terms of subtype
  vars A B C : Monoid
  eq A . e = A .
  eq e . A = A .
  eq A . (B . C) = ((A . B) . C) .
}

mod* MONOID2 {
  -- The TRIV theory is represented by the CafeOBJ built-in loose theory TRIV,
  -- which has Elt as its principal sort and no operations.
  -- I used protecting mode for import, perhaps it should be extending?
  pr(TRIV)
  op e : -> Elt
  op _._ : Elt Elt -> Elt
  vars A B C : Elt
  eq A . e = A .
  eq e . A = A .
  eq A . (B . C) = ((A . B) . C) .
  -- Associativity could be specified as a property
}

Regards, Pat
[1] Goguen, J. (1991). Types as Theories. Topology and Category in Computer Science. G. M. Reed, A. W. Roscoe and R. F. Wachter, Oxford University Press: 357-390. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.2241
From greg at eecs.harvard.edu Tue May 13 10:25:15 2014 From: greg at eecs.harvard.edu (Greg Morrisett) Date: Tue, 13 May 2014 10:25:15 -0400 Subject: [TYPES] types In-Reply-To: References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> Message-ID: <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> One thing this whole conversation ignores is the role of types in compilation. Even early FORTRAN compilers needed to make distinctions (integral versus floating-point) in order to (a) determine how much space to allocate, (b) [possibly] align values in memory, and (c) dispatch to appropriate instructions for doing arithmetic (i.e., "+" was overloaded.) These distinctions are still relevant and important today. Indeed, types have played a central role in efficient compilation of languages, and in that sense, are extremely enabling. A uni-typed language, in the parlance of Harper, requires a universal representation, which usually means a level of indirection, perhaps additional tags so that we can discriminate, etc. Today, a good compiler for a uni-typed language first reconstructs the underlying "Scott-types" (or some refinement thereof) in the hopes of avoiding the costs of universal representations. (See e.g., the JVML, LLVM, GHC's IR, etc.) 
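As a rough illustration of what such a universal representation amounts to, here is an OCaml sketch written for this point (it is not code from any of the compilers just mentioned):

    (* A single type of run-time values: the constructors are the tags,
       and boxing supplies the level of indirection. *)
    type univ =
      | Int of int
      | Str of string
      | Fun of (univ -> univ)

    (* Every primitive must inspect tags and may fail at run time. *)
    let add u v =
      match u, v with
      | Int m, Int n -> Int (m + n)
      | _ -> failwith "+ applied to a non-integer"

    let apply f x =
      match f with
      | Fun g -> g x
      | _ -> failwith "application of a non-function"

Recovering more precise types is what lets a compiler replace these tagged values and run-time checks with unboxed integers and direct calls.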
So the practice and theory are quite aligned here: types form the foundations, and the illusion of a dynamically-typed setting is there for convenience of the user. Even in the uni-typed languages, not all operations are allowed on all values (e.g., trying taking the car/head of a lambda), even though these operations are certainly defined at the bit-level. So there is some imposition of abstraction before the fact, enabling reasoning at the level of the language's abstract machine, instead of the concrete bit-level. I suppose if you turn your head around backwards enough, you see this as an "imposition" on your God-given right to do what you want with the bits, but others see this as a constructive way to define a language. -Greg From u.s.reddy at cs.bham.ac.uk Tue May 13 10:42:59 2014 From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy) Date: Tue, 13 May 2014 15:42:59 +0100 Subject: [TYPES] types In-Reply-To: References: <537208B7.4070706@mcmaster.ca> Message-ID: <21362.12147.869000.393429@gargle.gargle.HOWL> Philip Wadler writes: > I finally downloaded a scan of "The FORTRAN automatic coding system". The > word type appears several times, for instance to distinguish "IF-type > statement" from "DO-type statement", but is not used to distinguish > fixed-point from floating-point. Yours, -- P > > J. W. Backus, et al, The FORTRAN automatic coding system, Papers presented > at the 26-28 February 1957, Western Joint Computer Conference. > http://dl.acm.org/citation.cfm?id=1455599 Thanks for clearing that up! I do believe Fortran's distinction between INTEGER and REAL was bascially carried over from assembly languages. In the computers of the day, the instruction set worked on one word-long integers and two word-long floating point numbers. So, even in assembly, one had to reserve different amounts of storage for variables of different types, which amounted to "variable declarations" in assembly. Fortran's variable declarations were basically high-level counterparts of these assembly language incantations. Algol 60 report had a section 2.8 titled "Values and Types" in the preamble where it is stated: The various "types" (integer, real, Boolean) basically denote properties of values. The types associated with syntactic units refer to the values of these units. Note the "types" in quotes, indicating that this was a novel usage of the term in the Algol report. The formal parameters of procedures were declared with "specifiers" and the syntax of specifiers was given as: ::= string | | array | array | label | switch | procedure | procedure These are closer to what we call types today. In the History of Programming Languages discussion on Algol 60, Peter Naur indicated that Algol 60 adopted call-by-name parameter passing essentially to provide a single rule of evaluation that covered all these types/specifiers, instead of a separate rule for each specifier (which were apparently considered in earlier drafts). Cheers, Uday From nikhil at acm.org Tue May 13 10:45:41 2014 From: nikhil at acm.org (Rishiyur Nikhil) Date: Tue, 13 May 2014 10:45:41 -0400 Subject: [TYPES] types In-Reply-To: <20140513141120.GB3005@teralehti.kaijanaho.fi> References: <20140513141120.GB3005@teralehti.kaijanaho.fi> Message-ID: > FORTRAN appears to be the first language to use a compile-time mechanism to > distinguish between integers and floating point values. > ... > Note that in ALGOL 58 and ALGOL 60, arrays were not included in the type concept. Depressingly, even in 2014, this view is still extant. 
Take a look at the following web pages at Mathworks, which has such wide influence amongst scientists and engineers: http://www.mathworks.com/help/simulink/ug/working-with-data-types.html About Data Types The term data type refers to the way in which a computer represents numbers in memory. A data type determines the amount of storage allocated to a number, the method used to encode the number's value as a pattern of binary digits, and the operations available for manipulating the type. http://www.mathworks.com/help/stateflow/ug/typing-stateflow-data.html What Is Data Type? The term data type refers to the way computers represent numbers in memory. The type determines the amount of storage allocated to data, the method of encoding a data value as a pattern of binary digits, and the operations available for manipulating the data. (No mention of arrays and matrices as types, even though they are central to Matlab/Simulink/Stateflow). Nikhil From m.escardo at cs.bham.ac.uk Tue May 13 11:00:15 2014 From: m.escardo at cs.bham.ac.uk (Martin Escardo) Date: Tue, 13 May 2014 16:00:15 +0100 Subject: [TYPES] types In-Reply-To: <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> Message-ID: <5372337F.7070505@cs.bham.ac.uk> I seem to vaguely recall that in the remote past, types in some programming languages were called "modes" (Algol?). So to find the literature before types in logic and in programming languages "converged" one probably has to look for that keyword. M. On 13/05/14 15:25, Greg Morrisett wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > One thing this whole conversation ignores is the role of types > in compilation. Even early FORTRAN compilers needed to make > distinctions (integral versus floating-point) in order to > (a) determine how much space to allocate, (b) [possibly] align > values in memory, and (c) to dispatch to appropriate instructions > for doing arithmetic (i.e., "+" was overloaded.) These > distinctions are still relevant and important today. > > Indeed, types have played a central role in efficient > compilation of languages, and in that sense, are extremely > enabling. A uni-typed language, in the parlance of Harper, > requires a universal representation, which usually means > a level of indirection, perhaps additional tags so that > we can discriminate, etc. > > Today, a good compiler for a uni-typed language first reconstructs > the underlying "Scott-types" (or some refinement thereof) in > the hopes of avoid the costs of universal representations. > (See e.g., the JVML, LLVM, GHC's IR, etc.) So the practice > and theory are quite aligned here: types form the foundations, > and the illusion of a dynamically-typed setting is there for > convenience of the user. > > Even in the uni-typed languages, not all operations are allowed > on all values (e.g., trying taking the car/head of a lambda), > even though these operations are certainly defined at the > bit-level. So there is some imposition of abstraction before > the fact, enabling reasoning at the level of the language's > abstract machine, instead of the concrete bit-level. > > I suppose if you turn your head around backwards enough, you > see this as an "imposition" on your God-given right to > do what you want with the bits, but others see this as a > constructive way to define a language. 
> > -Greg > > > -- Martin Escardo http://www.cs.bham.ac.uk/~mhe From gbuday at gmail.com Tue May 13 11:03:14 2014 From: gbuday at gmail.com (Gergely Buday) Date: Tue, 13 May 2014 17:03:14 +0200 Subject: [TYPES] types In-Reply-To: References: Message-ID: Matthias Felleisen wrote: > Furthermore, you can use typed thinking to organize the design of programs > even in the absence of a type language and type checking. I have run courses > for over 20 years this way and successfully so from the middle school level to > the MS level. More concretely, you can program with types in "Scheme"; you > don't need to be in "Ocaml" (quotes to remind readers that I mean the idea, not > the exact language). Indeed, given the huge demand for "untyped" programmers, > I argue that a responsible instructor must expose students to just such > scenarios to prepare his students for this world. Greg Morrisett wrote: > Today, a good compiler for a uni-typed language first reconstructs > the underlying "Scott-types" (or some refinement thereof) in > the hopes of avoid the costs of universal representations. > (See e.g., the JVML, LLVM, GHC's IR, etc.) So the practice > and theory are quite aligned here: types form the foundations, > and the illusion of a dynamically-typed setting is there for > convenience of the user. What is strange here that even if a language is untyped, and I cannot put enough quoting marks here, the programmer is thinking using types, even in e.g. Javascript, and it is more difficult to teach it not mentioning types. One should refer to the compiler that makes something dubious with types. Of course Javascript is more written than read and that is easier without explicit types. - Gergely From vladimir at ias.edu Tue May 13 11:06:55 2014 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Tue, 13 May 2014 17:06:55 +0200 Subject: [TYPES] types In-Reply-To: <64E66C1E-E50F-4910-B839-9F1539FA161B@mariusnita.com> References: <64E66C1E-E50F-4910-B839-9F1539FA161B@mariusnita.com> Message-ID: <26F002D2-C72B-44E6-9F0E-43A61946337A@ias.edu> On May 12, 2014, at 11:51 PM, Marius Nita wrote: > As far as I can tell, that quote accurately describes types in ML or Haskell, which disallow all "invalid" behaviors along with some valid ones. It's not clear what you mean by "constructive". "match" and "case". V. From vladimir at ias.edu Tue May 13 11:44:51 2014 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Tue, 13 May 2014 17:44:51 +0200 Subject: [TYPES] types In-Reply-To: References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> Message-ID: <279F1939-B5BB-4900-8E72-B0846AED3423@ias.edu> > Turing published papers on type theory, so we know Turing read Russell > (citation available), and we know Backus used 'type' in FORTRAN (citation > required---I just tried scanning for an early FORTRAN paper to confirm it > uses the word 'type', but did not succeed in ten minutes). What is the line > between these two? Generally speaking, Bob Harper might have answered this question by saying: > i think the main thing we took from russell is the principle that "a type is the range of significance of a variable". but it would be interesting to find a reference. Vladimir. PS The precise quote from Russell 1908 is: > A type is defined as the range of significance of a propositional function, i.e., > as the collection of arguments for which the said function has values. 
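Voevodsky's terse reply above, "match" and "case", presumably refers to the fact that declaring a datatype is what brings both its constructors and its case analysis into existence; that is the enabling, rather than forbidding, use of types he is asking about. A minimal OCaml sketch of that reading (my illustration, not taken from the thread):

    (* The type declaration introduces the constructors (how to build values)
       and licenses "match" on them (how to take values apart). *)
    type shape =
      | Circle of float             (* radius *)
      | Rect   of float * float     (* width, height *)

    let area s =
      match s with
      | Circle r    -> 3.14159 *. r *. r
      | Rect (w, h) -> w *. h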
On May 13, 2014, at 12:43 PM, Philip Wadler wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Vladimir's intriguing question reminds me of a related question: Why do we > call them 'types' at all?. That is, how did the concept named by Russell > end up as a term in computing? Perhaps someone on this list knows the > answer. > > Turing published papers on type theory, so we know Turing read Russell > (citation available), and we know Backus used 'type' in FORTRAN (citation > required---I just tried scanning for an early FORTRAN paper to confirm it > uses the word 'type', but did not succeed in ten minutes). What is the line > between these two? Did Turing use the term 'type' in connection with > computers? Did von Neumann or the other early architects? What source or > sources influenced Backus? > > Yours, -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science > . /\ School of Informatics, University of Edinburgh > . / \ http://homepages.inf.ed.ac.uk/wadler/ > > > On 13 May 2014 05:10, Gershom Bazerman wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] >> >> On 5/12/14, 5:54 PM, Cody Roux wrote: >> >>> [ The Types Forum, http://lists.seas.upenn.edu/ >>> mailman/listinfo/types-list ] >>> >>> As others have noted, tracing historical shifts in points of view is >>> quite difficult. I somehow feel that the "Lawvere" part of the >>> "Curry-Howard-Lawvere-isomorphism" is at the heart of the real shift >>> from a "prohibitive" point of view to a "prescriptive" point of view. >>> >>> Thanks, Vladimir for your very interesting question. It is really a >> shame that (as far as I know) there is not a comprehensive article or book >> that we could turn to on this material. To follow up on the "Lawvere part", >> Goguen and the ADJ group developed initial algebra semantics as an >> algebraic theory of abstract data types in the early-mid 1970s, directly >> inspired by Lawvere's work in semantics. Goguen and Burstall went on to >> develop the CLEAR specification language, which elaborated these ideas, and >> as far as I know algebraic data types proper were first implemented in HOPE >> (described in Burstall, MacQueen, Sannella, 1980). It seems to me that the >> shift from data-types-as-hiding (the "abstract" notion of modularity) to >> data-types-as-presenting-actions (algebraic data types) is an example of >> precisely that shift from restriction to possibility that we're looking for. >> >> (However, I am confused by the reference to "Curry-Howard-Lawvere" as I >> thought the isomorphism proper [via cartesian closed categories] was due to >> Lambek? Is this a mistype, or was Lawvere more involved in that portion of >> the story than I realize?) >> >> In skimming papers looking for ideas here, I ran into a very funny >> counterexample as well. Reynolds' 1972 "Definitional Interpreters for >> Higher-Order Programming Languages" praises Scott semantics as follows: >> "The central technical problem that Scott has solved is to de?ne functions >> that are not only higher-order, but also _typeless_, so that any function >> may be applied to any other function, including itself." From a modern >> perspective, this of course reads as very strange praise. But, I suppose >> (since I am too young to _know_), at the time it must have seemed that >> being typeless was essential to allowing general recursion, higher order >> functions, etc. 
>> >> Cheers, >> Gershom >> > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. From patrick.browne at dit.ie Tue May 13 12:34:08 2014 From: patrick.browne at dit.ie (PATRICK BROWNE) Date: Tue, 13 May 2014 17:34:08 +0100 Subject: [TYPES] Types as theories In-Reply-To: <0A9D1F138D97BC4F9ACD513FCDC40017401AEDD4@CITESMBX4.ad.uillinois.edu> References: <0A9D1F138D97BC4F9ACD513FCDC40017401AEDD4@CITESMBX4.ad.uillinois.edu> Message-ID: Grigore, Thanks for your clarification. Yes, MONOID2 is convoluted. It was the simplest example that I could think of that illustrated the sub-model idea and was similar to Goguen's example (he actually used monoids and groups). Thanks, Pat On 13 May 2014 15:17, Rosu, Grigore wrote: > Hi Pat, > > Your two claims are correct, but note that in the second case, model M > containing another model M' can mean many different things, depending upon > the importing mode. For example, in your second example, you are stating > that you protect TRIV, which actually makes your MONOID2 spec inconsistent: > indeed, there is no model of MONOID2 whose reduct to the signature of TRIV > is the initial model of TRIV, because the former will always contain at > least one element (the unit) while the latter is the empty model. So yes, > you should use extending instead of protecting mode. > > However, both your algebraic specifications are unusual for the > OBJ/CafeOBJ/Maude language families. The most natural definition of a > monoid is to just have a sort Monoid, the unit, the binary operation, and > the three axioms, without getting the TRIV theory or the Elt sort involved > at all. > > Grigore > > > > ________________________________________ > From: Types-list [types-list-bounces at lists.seas.upenn.edu] on behalf of > PATRICK BROWNE [patrick.browne at dit.ie] > Sent: Tuesday, May 13, 2014 7:09 AM > To: types-list at lists.seas.upenn.edu > Subject: [TYPES] Types as theories > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list] > > Dear list > I am studying Goguen's paper Types as theories [1]. > Below are two examples coded in CafeOBJ (any other algebraic specification > language would do). > Based on Goguen's paper, are the following true? > > 1) Subsort inheritance provides a classification of values, every value of > the sub-sort is a value of the super-sort.(MONOID1) > > 2) Module import inheritance provides classification of models, each model > of the class of models of the importing module contains a model of the > imported module.(MONOID2) > > > My intention is that MONOID1 illustrates 1) and 2) MONOID2 illustrates 2). > Assumptions > > the CafeOBJ module is the software realization of a theory > > loose semantics, so any model of TRIV will do > > the term type is synonym for sort > > the < symbol indicates subsort > > a monoid is associative and has a neutral element. > > mod* MONOID1 { > [ Monoid < Elt ] > -- Signature in terms of super type > op e : -> Elt > op _._ : Elt Elt -> Elt > -- equations in terms of subtype > vars A B C : Group > eq A . e = A . > eq e . A = A . > eq A . (B . C) = ((A . B) . C) . > } > > > mod* MONOID2 { > -- > -- The TRIV theory represented by the CafeOBJ > -- built in loose theory TRIV which has Elt as its principal sort and no > operations. > -- I used protecting mode for import, perhaps it should be extending? > pr(TRIV) > op e : -> Elt > op _._ : Elt Elt -> Elt > vars A B C : Elt > eq A . e = A . > eq e . A = A . > eq A . (B . C) = ((A . B) . C) . 
-- Associativity could be specified as a > property > } > > Regards, Pat > > [1]Goguen, J. (1991). Types as Theories. Topology and Category in Computer > Science. G. M. Reed, A. W. Roscoe and R. F. Wachter, Oxford University > Press: 357-390. > http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.2241 > > -- > > > This email originated from DIT. If you received this email in error, please > delete it from your system. Please note that if you are not the named > addressee, disclosing, copying, distributing or taking any action based on > the contents of this email or attachments is prohibited. www.dit.ie > > Is ? ITB?C a th?inig an r?omhphost seo. M? fuair t? an r?omhphost seo tr? > earr?id, scrios de do ch?ras ? le do thoil. Tabhair ar aird, mura t? an > seola? ainmnithe, go bhfuil dianchosc ar aon nochtadh, aon ch?ipe?il, aon > d?ileadh n? ar aon ghn?omh a dh?anfar bunaithe ar an ?bhar at? sa > r?omhphost n? sna hiat?in seo. www.dit.ie > > T? ITB?C ag aistri? go Gr?inseach Ghorm?in ? DIT is on the move to > Grangegorman > -- This email originated from DIT. If you received this email in error, please delete it from your system. Please note that if you are not the named addressee, disclosing, copying, distributing or taking any action based on the contents of this email or attachments is prohibited. www.dit.ie Is ? ITB?C a th?inig an r?omhphost seo. M? fuair t? an r?omhphost seo tr? earr?id, scrios de do ch?ras ? le do thoil. Tabhair ar aird, mura t? an seola? ainmnithe, go bhfuil dianchosc ar aon nochtadh, aon ch?ipe?il, aon d?ileadh n? ar aon ghn?omh a dh?anfar bunaithe ar an ?bhar at? sa r?omhphost n? sna hiat?in seo. www.dit.ie T? ITB?C ag aistri? go Gr?inseach Ghorm?in ? DIT is on the move to Grangegorman From Sergei.Soloviev at irit.fr Tue May 13 12:45:21 2014 From: Sergei.Soloviev at irit.fr (Sergei Soloviev) Date: Tue, 13 May 2014 18:45:21 +0200 Subject: [TYPES] types In-Reply-To: Message-ID: <2a19-53724c00-15-20e0b100@222359994> Hi, Vladimir, I was reading quite a lot Cantor. When he suggested his own notion of set, he was quite unsure how "laxist" the definition may be - essentially he would not admit that any elements (without unifying principle or construction) can be taken together to form a set. Much more laxist view was taken by Frege and some others, and Cantor himself started to take more risky approach - and there result some paradoxes concerning ordinals that much discouraged him. With respect to all this, Russel's types are a restriction. Constructive types (as in programming) are restrictive in the sense that they require some common constructing principle (or principles) behind. All the best Sergei Soloviev Le Lundi 12 Mai 2014 19:47 CEST, Vladimir Voevodsky a ?crit: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am reading Russell's texts and the more I investigate them the more it seems to me that the concept of types as we use it today has very little with how types where perceived by Russell or Church. > > For them types were a restriction mechanism. As Russell and Whitehead write: > > "It should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any which would otherwise be invalid." > > The types which we use today are a constructive tool. For example, types in Ocaml are a device without which writing many programs would be extremely inconvenient. 
> > I am looking for a historic advice - when and where did types appear in programming languages which were enabling rather than forbidding in nature? > > Vladimir. > > > > > From joshua.guttman at gmail.com Tue May 13 13:21:30 2014 From: joshua.guttman at gmail.com (Joshua Guttman) Date: Tue, 13 May 2014 13:21:30 -0400 Subject: [TYPES] types In-Reply-To: References: Message-ID: On Mon, May 12, 2014 at 1:47 PM, Vladimir Voevodsky wrote: > The types which we use today are a constructive tool. For example, types > in Ocaml are a device without which writing many programs would be > extremely inconvenient. > > I am looking for a historic advice - when and where did types appear in > programming languages which were enabling rather than forbidding in nature? > I'm not really sure whether this was historically influential when it comes to programming languages, but I think an interesting aspect of the types-as-enabling story goes back to Goedel's System T, from the Dialectica paper of the early 1950s. Having the structure of simple types, together with the primitive recursion operators of all types to fill in the functions in a somewhat finitist way: This seems to me an interesting start on a notion of computation where the types are central, and the means for generating computational activity tightly hew to the type structure. By the 70s, I guess a lot of people were aware that with pairing types added, it was a pretty appealing framework. And the addition of sum types or primitive recursive sum types seems pretty obvious. But I don't know whether Reynolds or Milner etc were aware of this corner of the proof theory literature. It was certainly not available in English in the late 70s. Joshua From carette at mcmaster.ca Tue May 13 13:31:17 2014 From: carette at mcmaster.ca (Jacques Carette) Date: Tue, 13 May 2014 13:31:17 -0400 Subject: [TYPES] types In-Reply-To: References: <25974_1399983731_s4DCM4oX012338_537208B7.4070706@mcmaster.ca> Message-ID: <537256E5.80401@mcmaster.ca> On 14-05-13 09:09 AM, Sam Tobin-Hochstadt wrote: > On Tue, May 13, 2014 at 7:57 AM, Jacques Carette wrote: >> [I use 'enable' here, knowing that one can do pattern matching without sums >> and without types, but the results are not as compelling.] > Jacques, > > I'm not sure what you mean by compelling, but any pair of these three > is available without the other. For example, patterns matching and > sums (but without types) are used extensively in Friedman and Wand's > Essentials of Programming Languages textbook, and pattern matching > with types but without sums can be seen in Typed Racket (for example, > our paper on functional data structures [1]). Types and sums without > pattern matching is easy to define, but as you say, awkward to program > with. > > Sam Tobin-Hochstadt > > [1] http://www.ccs.neu.edu/racket/pubs/sfp10-kth.pdf with source at > https://github.com/takikawa/tr-pfds/ What I mean by 'compelling' is most influenced by my (recent) experience with agda-mode in Emacs: given a (dependant!) sum type, one can ask the IDE to give all valid case-split on a particular variable of that type. One is insured coverage, by the language, by construction. This feature allows one to write folds over large ASTs in just a few keystrokes, with little chance for error. It is really the confluence of types, sum and pattern-matching which hits a 'sweet spot' quite similar to what Hindley-Milner does for inference. To me, 'compelling' usually means faster production (by a human) of correct *readable* code. 
I want my compiler to verify certain parts of "correct" for me. Jacques From rossberg at mpi-sws.org Tue May 13 13:36:33 2014 From: rossberg at mpi-sws.org (Andreas Rossberg) Date: Tue, 13 May 2014 19:36:33 +0200 Subject: [TYPES] types In-Reply-To: <5372337F.7070505@cs.bham.ac.uk> References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> <5372337F.7070505@cs.bham.ac.uk> Message-ID: On May 13, 2014, at 17:00 , Martin Escardo wrote: > I seem to vaguely recall that in the remote past, types in some programming languages were called "modes" (Algol?). So to find the literature before types in logic and in programming languages "converged" one probably has to look for that keyword. M. That was Algol-68, which invented all sorts of new terminology, most of which is rather obscure from today?s perspective. But indeed, I always wondered why in particular they departed from the ?type? terminology of earlier Algol. Maybe that?s an indicator that ?type? wasn?t yet a standard notion in the PL world at the time? /Andreas From m at mariusnita.com Tue May 13 16:51:24 2014 From: m at mariusnita.com (Marius Nita) Date: Tue, 13 May 2014 14:51:24 -0600 Subject: [TYPES] types In-Reply-To: <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> Message-ID: <7139199B-259D-435E-9C57-FC6BC520D527@mariusnita.com> It seems to me like "types" can be many things. From a static type system perspective, types are something very specific. This is how we reach the description of Javascript as "untyped" or "unityped" even though from another perspective it absolutely does have types, and programmers use these types to structure and reason about their programs. The static type system of ML does some reasonable/decidable amount of checking, after which it hands everything else to a dynamic checker. So is it not appropriate to use the word "types" when reasoning about expression classifications that cannot be captured by static types? Programmers constantly categorize computations in terms of "fast", "slow", "runs in less than a second", "computes on the fly", "serves precomputed data", "in a dirty state", "sensitive to logged-in state", "touches the DOM", let alone more basic things like "lowercase string", "positive number", "very large number", and so on. Web programmers will routinely split an architecture into code that "might touch the DOM" and "DOM-independent code". It seems to me that if we talk about types as "enabling", these yet-unverifiable types are the most enabling ? the most useful in building large architectures and the most prevalent in programming culture and in discussion around programs. I can bet that the most relevant discussion about "types of data" and "types of code" in the Linux kernel, say, is not about whether the right kind of arguments are being passed into functions, but rather about much more more complex and important classifications of code that have to do with performance, security, code design and clarity, and so on, and are necessarily enforced dynamically or by practicing a disciplined style. (And these would have to be enforced in the same way in ML as well.) I think of these as "types" even though they are yet largely out of reach for programming language researchers. 
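Marius's point stands that most of these classifications are enforced only by discipline, but some of the more basic ones he lists can already be pushed into the type system. A small OCaml sketch (my own illustration; the module and function names are invented) of promoting "lowercase string" to a checked type:

    (* An abstract type with a smart constructor: values of Lowercase.t can
       only be built by a function that actually checks the property. *)
    module Lowercase : sig
      type t = private string
      val make : string -> t option    (* None if the string is not lowercase *)
    end = struct
      type t = string
      let make s =
        if String.equal s (String.lowercase_ascii s) then Some s else None
    end

    (* Code that demands a Lowercase.t cannot accidentally be handed a raw,
       unchecked string; the classification is now machine-enforced. *)
    let greet (name : Lowercase.t) = "hello, " ^ (name :> string)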
Marius On May 13, 2014, at 8:25 AM, Greg Morrisett wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > One thing this whole conversation ignores is the role of types > in compilation. Even early FORTRAN compilers needed to make > distinctions (integral versus floating-point) in order to > (a) determine how much space to allocate, (b) [possibly] align > values in memory, and (c) to dispatch to appropriate instructions > for doing arithmetic (i.e., "+" was overloaded.) These > distinctions are still relevant and important today. > > Indeed, types have played a central role in efficient > compilation of languages, and in that sense, are extremely > enabling. A uni-typed language, in the parlance of Harper, > requires a universal representation, which usually means > a level of indirection, perhaps additional tags so that > we can discriminate, etc. > > Today, a good compiler for a uni-typed language first reconstructs > the underlying "Scott-types" (or some refinement thereof) in > the hopes of avoid the costs of universal representations. > (See e.g., the JVML, LLVM, GHC's IR, etc.) So the practice > and theory are quite aligned here: types form the foundations, > and the illusion of a dynamically-typed setting is there for > convenience of the user. > > Even in the uni-typed languages, not all operations are allowed > on all values (e.g., trying taking the car/head of a lambda), > even though these operations are certainly defined at the > bit-level. So there is some imposition of abstraction before > the fact, enabling reasoning at the level of the language's > abstract machine, instead of the concrete bit-level. > > I suppose if you turn your head around backwards enough, you > see this as an "imposition" on your God-given right to > do what you want with the bits, but others see this as a > constructive way to define a language. > > -Greg > > > From andrews at csd.uwo.ca Sun Jun 1 20:50:14 2014 From: andrews at csd.uwo.ca (Jamie Andrews) Date: Sun, 01 Jun 2014 17:50:14 -0700 Subject: [TYPES] types In-Reply-To: <7139199B-259D-435E-9C57-FC6BC520D527@mariusnita.com> References: <53714316.1060407@andrew.cmu.edu> <53719B32.6090603@gmail.com> <0A460EF6-41C7-4886-B8B6-A58065115403@eecs.harvard.edu> <7139199B-259D-435E-9C57-FC6BC520D527@mariusnita.com> Message-ID: <538BCA46.2090504@csd.uwo.ca> Coming a little late to the party, with apologies. My impression is that the early use of types in programming languages, in languages such as Fortran, Cobol, PL/I and the Algols, mostly concentrated on distinctions between primitive types, such as integers and floating-point numbers, and on arrays. By the early 1970s, with B, BCPL, C and Pascal, this had expanded to pointers and "structs" ("records" in Pascal). "Type" here was just a natural English word to apply to this concept, as it had been for Russell to apply to a somewhat different concept. Note that none of these kinds of types has much in common with the types that had been studied in type theory up to that point. Church's T system was built on a set of base types, which play the role that primitive types play in programming languages, but I don't think that the type theorists gave them much thought. The types that they worked with most were types of functions, predicates, connectives and so on. Independent of programming languages and preceding them, the genesis of types seems to be Russell's use of types to resolve the set-theoretic paradoxes. This led (after some false starts) to Church's T system. 
From there, the chain of discoveries seems to go as follows. Dana Scott's PPLAMBDA paper does not refer explicitly to Church, but I think it's safe to assume from the paper that he knew about system T. Milner et al., in their Edinburgh LCF book, refer to the PPLAMBDA paper as the source of the idea of types being applied to classify functions. From there, Church-style types (usefully extended) made their way into ML. It was only after ML that people started really trying to unify the old Pascal-style types and the type theory-based types. I think of Cardelli's name in this context, and I believe that he had a lot to do with it, but that he wasn't the only one. "structs" correspond to products and "unions" to sums; if we want to pass around functions, or objects "containing" functions, then we need to assign types to functions; and so on. Cardelli's big monograph "Type Systems" contains (for me) the clearest exposition of those ideas. It is interesting that Christopher Strachey knew Dana Scott, and was involved in the development of B and BCPL in the late 1960s, but (to my knowledge) did not make the connection between type-theoretic types and programming-language types. But I might just not be familiar with all the evidence in that regard. I imagine that some smart people at the interface between type theory and programming languages must have noticed a connection in a general way, but they don't seem to have made any solid progress until ML came on the scene. cheers --Jamie. From dimitris at microsoft.com Fri Jun 27 20:27:40 2014 From: dimitris at microsoft.com (Dimitrios Vytiniotis) Date: Sat, 28 Jun 2014 00:27:40 +0000 Subject: [TYPES] FW: Joachim (Jim) Lambek In-Reply-To: References: Message-ID: <8A4BE71AF969544FA98E6D49C10DE5A603830C74@DB3PRD3001MB019.064d.mgd.msft.net> Forwarding on behalf of Christian Retor?. -----Original Message----- From: retore [mailto:christian.retore at labri.fr] Sent: Friday, June 27, 2014 11:13 PM To: mol at cs.earlham.edu; types-announce at lists.seas.upenn.edu Subject: Joachim (Jim) Lambek Dear Colleagues, Below is a repost of a message by Michael Barr on the category theory list. I regret to inform you that Jim Lambek passed away last Monday at the age of 91. (for those who are not familiar with his work, see e.g. http://en.wikipedia.org/wiki/Joachim_Lambek http://ncatlab.org/nlab/show/Joachim+Lambek) > Michael Barr math.mcgill.ca> > 2014-06-24 01:55:48 GMT > I regret to inform you all that Jim died this afternoon. His son says > it was congestive heart failure which is as good a way as any to > describe dying of old age. He was still coming to seminar last fall > and celebrated his 91st birthday in December in pretty good shape, but > has been gradually going downhill since. I don't believe he came to > the office since late fall. He had a good run. Others may be more entitled than i am to publish this news --- but no one tells this on the lists i know, although the news is from Monday. I met Jim in 1988 in Paris at a logic and category theory seminar, then in Italy and at LACL conferences... I invited him at a workshop in Nantes in 2001 (on learning categorial grammars) he broke his arm and thereafter he was reluctant to cross the ocean (e.g. for my habilitation he send a report but he did no attend). 
The book for his 90th birthday: http://link.springer.com/book/10.1007/978-3-642-54789-8 His latest book: https://www.novapublishers.com/catalog/product_info.php?products_id=48156&osCsid=4ae8245ae55e45a5746d463e8be4f53d As you can see he was active until recently, and as far as it make sense, according to his son, he passed away peacefully surrounded by his family. Best regards, -- Christian Retor? http://www.labri.fr/perso/retore From vladimir at ias.edu Wed Jul 23 23:29:50 2014 From: vladimir at ias.edu (Vladimir Voevodsky) Date: Thu, 24 Jul 2014 07:29:50 +0400 Subject: [TYPES] type systems in terms of contexts and judgements Message-ID: Hello, I am looking for a reference to a paper or book where intuitionistic type theory (Martin-Lof type theory) was first formulated in terms of contexts and judgements. The foundational paper by Martin-Lof (1972) does not use contexts. Vladimir. From rwh at cs.cmu.edu Thu Jul 24 12:45:22 2014 From: rwh at cs.cmu.edu (Prof Robert Harper) Date: Thu, 24 Jul 2014 12:45:22 -0400 Subject: [TYPES] type systems in terms of contexts and judgements In-Reply-To: References: Message-ID: I don't know what would be the first occurrence, but the term "judgement" and the use of hypothetical judgments (for which he refers to de Bruijn's AUTOMATH) appears in Martin-Loef's "Constructive Mathematics and Computer Programming" from 1978. He addressed the concepts head-on in his two papers on philosophical logic, called "On the meaning of the logical constants and the justifications of the logical laws" and "Truth of a proposition, evidence for a judgment". ----------------------------- Prof Robert Harper Carnegie Mellon CSD rwh at cs.cmu.edu ----------------------------- On Jul 23, 2014, at 23:29, Vladimir Voevodsky wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > I am looking for a reference to a paper or book where intuitionistic type theory (Martin-Lof type theory) was first formulated in terms of contexts and judgements. The foundational paper by Martin-Lof (1972) does not use contexts. > > Vladimir. > > From P.B.Levy at cs.bham.ac.uk Fri Aug 22 19:42:10 2014 From: P.B.Levy at cs.bham.ac.uk (Paul Levy) Date: Sat, 23 Aug 2014 00:42:10 +0100 Subject: [TYPES] decidability vs completeness Message-ID: Dear colleagues, I would like to present a polemic for your consideration. Let T : (nat * nat) -> bool be the primitive recursive predicate where T(m,n) means that Turing machine number m (with no arguments) executes for at least n steps. Say that a number m is "terminating" if for some n, not T(m,n), and "HA-divergent" if it is provable in Heyting arithmetic that for all n, T(m,n). (This happens to coincide with provability in Peano arithmetic but I am using a constructive system for the sake of the argument.) In a dependent type system with boolean type and natural number type, for every number m let t_m be the term lambda x:nat. if T(m,x) then true else 7 Say that a system is "typing HA-complete" if |- t_m : nat -> bool is provable for all HA-divergent m. Say that a system is "typing decidable" if the judgement |- t : A, where t is a closed term and A a closed term, is decidable. A system cannot be both typing decidable and typing HA-complete, because termination and HA-divergence are recursively inseparable properties. Martin-L?f's intensional type theory is typing decidable by design and therefore not typing HA-complete. 
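For readers skimming the thread, the definitions so far compress as follows (a restatement in my own notation, with one observation that is implicit rather than stated in the message):

    m terminating    iff  there is an n with not T(m,n)
    m HA-divergent   iff  HA |- forall n. T(m,n)
    t_m               =   lambda x:nat. if T(m,x) then true else 7

If m really never halts, then T(m,x) holds for every x, the else-branch of t_m is never reached, and t_m is extensionally the constant true function. That is the sense in which |- t_m : nat -> bool is semantically reasonable for every HA-divergent m, even though the else-branch by itself has type nat; typing HA-completeness asks the system to accept all such judgements, and the polemic below concerns the price of refusing them in the name of decidability.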
Now the polemic: In my opinion typing HA-completeness is a very minimal completeness property to require of a dependently typed system (with nat), and I cannot see the interest of a system that lacks it. Advocates of typing decidability will of course disagree. A curious feature of this disagreement is the contrast with traditional arguments over the acceptability of individual judgements. (Is the Axiom of Choice valid? Is excluded middle valid? Is this large cardinal hypothesis valid? etc.) Here, typing decidability advocates are apparently comfortable with the judgement |- t_m : nat -> bool for any *particular* HA-divergent m, but object to including it for *all* HA- divergent m. If I am wrong, please enlighten me by providing a particular HA- divergent m and a philosophical or mathematical case why t_m should not have type nat -> bool. Best regards, Paul -- Paul Blain Levy School of Computer Science, University of Birmingham +44 121 414 4792 http://www.cs.bham.ac.uk/~pbl From P.B.Levy at cs.bham.ac.uk Sat Aug 23 12:01:22 2014 From: P.B.Levy at cs.bham.ac.uk (Paul Levy) Date: Sat, 23 Aug 2014 17:01:22 +0100 Subject: [TYPES] decidability vs completeness In-Reply-To: References: Message-ID: On 23 Aug 2014, at 00:42, Paul Levy wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Dear colleagues, > > I would like to present a polemic for your consideration. > > Let T : (nat * nat) -> bool be the primitive recursive predicate > where T(m,n) means that Turing machine number m (with no arguments) > executes for at least n steps. > > Say that a number m is "terminating" if for some n, not T(m,n), and > "HA-divergent" if it is provable in Heyting arithmetic that for all > n, T(m,n). > > (This happens to coincide with provability in Peano arithmetic but I > am using a constructive system for the sake of the argument.) > > In a dependent type system with boolean type and natural number > type, for every number m let t_m be the term > > lambda x:nat. if T(m,x) then true else 7 > > Say that a system is "typing HA-complete" if |- t_m : nat -> bool is > provable for all HA-divergent m. > > Say that a system is "typing decidable" if the judgement |- t : A, > where t is a closed term and A a closed term Sorry, that should be "where t is a closed term and A a closed type" > , is decidable. > > A system cannot be both typing decidable and typing HA-complete, > because termination and HA-divergence are recursively inseparable > properties. Martin-L?f's intensional type theory is typing > decidable by design and therefore not typing HA-complete. > > Now the polemic: > > In my opinion typing HA-completeness is a very minimal completeness > property to require of a dependently typed system (with nat), and I > cannot see the interest of a system that lacks it. Advocates of > typing decidability will of course disagree. A curious feature of > this disagreement is the contrast with traditional arguments over > the acceptability of individual judgements. (Is the Axiom of Choice > valid? Is excluded middle valid? Is this large cardinal hypothesis > valid? etc.) Here, typing decidability advocates are apparently > comfortable with the judgement |- t_m : nat -> bool for any > *particular* HA-divergent m, but object to including it for *all* HA- > divergent m. > > If I am wrong, please enlighten me by providing a particular HA- > divergent m and a philosophical or mathematical case why t_m should > not have type nat -> bool. 
> > Best regards, > Paul From rwh at cs.cmu.edu Sat Aug 23 15:34:58 2014 From: rwh at cs.cmu.edu (Prof Robert Harper) Date: Sat, 23 Aug 2014 15:34:58 -0400 Subject: [TYPES] decidability vs completeness In-Reply-To: References: Message-ID: hi paul, this sort of example well exemplifies the fundamental conceptual difference between nuprl and coq. the core of coq is martin-lof's itt. the system ett is, by definition, itt extended with equality reflection and uniqueness of identity proofs. it is often misrepresented that nuprl is ett, usually on passage to a "critique" of nuprl on the grounds that the typing judgment in nuprl is not decidable, and supposedly this is a cardinal principle that ought to hold for all type theories. but nuprl is not ett. say what you want about ett, but it has no bearing on nuprl. the reason is that the nuprl typing relation (written "in" in ascii, written "\in" in print) is not at all intended to be, and never was intended to be, the coq typing relation (written ":" in both ascii and in print). comparing the two to each other is not really sensible; each is trying to express a different thing. in nuprl it would be the case that t_m in (nat->bool), but as you mention it would not be the case that in coq t_m : (nat->bool). the reason is that the typing judgment in nuprl represents a statement about the a priori given execution behavior of a program, whereas in coq the analogous statement is about the grammar of programs themselves, before they ever get to be thought of as having execution behavior. put in other terms, nuprl is a theory of TRUTH, and is based on the semantic propositions-as-types principle; it can be seen as an attempt to formulate brouwer's notion of truth in intuitionistic mathematics. coq, on the other hand, is a theory of FORMAL PROOF, and is based on the syntactic propositions-as-types principle; it is an account of proof, not an account of truth. it can be seen as a full development of heyting's formalization of intuitionistic mathematics, which brouwer in fact rejected, but that's neither here nor there. you are saying, if i understood correctly, that certain facts are true, and these facts can be independent of a particular proof theory. eg, the goedel sentence is true, but is not provable in HA. (all the axioms of HA are true, and that fact cannot be doubted without simply doubting everything in mathematics.) if one starts with a partial structural (syntactic) type theory with general recursive types, then one can consider nuprl's types as relations over the terms of a particular recursive type, essentially mu t.(t->t) or elaborations thereof. but the syntactic type theory cannot be construed as a logic, or rather only as an inconsistent logic. in nuprl partiality is handled using "bar types", which predate the lifting monad, but which are essentially that concept phrased in an operational setting. there is a type of unevaluated computations of a type, so that partial functions A -` B are total functions A -> [B] (written \overline{B} in nuprl). hope this helps, and i hope i haven't misconstrued your question. 
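The "bar types" mentioned at the end, i.e. the lifting monad phrased operationally, can be sketched in a few lines of OCaml (my own illustration, not Nuprl's actual formulation): a possibly non-terminating computation of type 'a is a value that is either finished or asks for one more step.

    (* Now v is a finished computation; Later k still needs more steps. *)
    type 'a delay =
      | Now   of 'a
      | Later of (unit -> 'a delay)

    let return v = Now v

    let rec bind m f =
      match m with
      | Now v   -> f v
      | Later k -> Later (fun () -> bind (k ()) f)

    (* Run for at most [fuel] steps; None means "no answer yet". *)
    let rec run fuel m =
      match m, fuel with
      | Now v,   _ -> Some v
      | Later _, 0 -> None
      | Later k, n -> run (n - 1) (k ())

    (* The everywhere-divergent computation is still a perfectly good value. *)
    let rec loop () : 'a delay = Later (fun () -> loop ())

In this reading, a partial function from A to B is exactly a total function from A to the delayed type over B, which is the shape of the "A -> [B]" encoding described above.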
bob ----------------------------- Prof Robert Harper Carnegie Mellon CSD rwh at cs.cmu.edu ----------------------------- On Aug 23, 2014, at 12:01, Paul Levy wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > On 23 Aug 2014, at 00:42, Paul Levy wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Dear colleagues, >> >> I would like to present a polemic for your consideration. >> >> Let T : (nat * nat) -> bool be the primitive recursive predicate where T(m,n) means that Turing machine number m (with no arguments) executes for at least n steps. >> >> Say that a number m is "terminating" if for some n, not T(m,n), and "HA-divergent" if it is provable in Heyting arithmetic that for all n, T(m,n). >> >> (This happens to coincide with provability in Peano arithmetic but I am using a constructive system for the sake of the argument.) >> >> In a dependent type system with boolean type and natural number type, for every number m let t_m be the term >> >> lambda x:nat. if T(m,x) then true else 7 >> >> Say that a system is "typing HA-complete" if |- t_m : nat -> bool is provable for all HA-divergent m. >> >> Say that a system is "typing decidable" if the judgement |- t : A, where t is a closed term and A a closed term > > Sorry, that should be "where t is a closed term and A a closed type" > >> , is decidable. >> >> A system cannot be both typing decidable and typing HA-complete, because termination and HA-divergence are recursively inseparable properties. Martin-L?f's intensional type theory is typing decidable by design and therefore not typing HA-complete. >> >> Now the polemic: >> >> In my opinion typing HA-completeness is a very minimal completeness property to require of a dependently typed system (with nat), and I cannot see the interest of a system that lacks it. Advocates of typing decidability will of course disagree. A curious feature of this disagreement is the contrast with traditional arguments over the acceptability of individual judgements. (Is the Axiom of Choice valid? Is excluded middle valid? Is this large cardinal hypothesis valid? etc.) Here, typing decidability advocates are apparently comfortable with the judgement |- t_m : nat -> bool for any *particular* HA-divergent m, but object to including it for *all* HA-divergent m. >> >> If I am wrong, please enlighten me by providing a particular HA-divergent m and a philosophical or mathematical case why t_m should not have type nat -> bool. >> >> Best regards, >> Paul From jon at jonmsterling.com Sat Aug 23 22:15:39 2014 From: jon at jonmsterling.com (Jon Sterling) Date: Sat, 23 Aug 2014 19:15:39 -0700 Subject: [TYPES] decidability vs completeness In-Reply-To: References: Message-ID: <1408846539.1978655.156047153.72E0E3CF@webmail.messagingengine.com> Bob, That was a wonderful answer; thank you for drawing the distinction between H >> M:A and H >> M in A. Previously, I'd always considered those just two notations for the same idea, and nuprl/ctt just had additional axioms the judgement synthetic. But it is much more elegant to just consider them two very different judgements. After reading this, I have been thinking about what it would look like to have a decidable type theory which keeps the cheap syntactic typing judgement, but also has a notion of semantic membership. 
This could be done by tweaking observational type theory (Altenkirch/McBride) to have more interesting rules for its propositional equality, which could then in its reflexive case represent membership. Of course, the fact that the derivations of realizers get reified in the terms means that this is no different from plain ott (a syntactic mere approximation of the Truth), but in the same way that ott allows you to express (with no small degree of difficulty) the true equality of things propositionally, it might also allow you to express (with an equal degree of difficulty) non-trivial membership for subset types, intersection types, etc. In order for any of this to work, the equality type has to be reformulated to be defined over untyped terms like in nuprl, and if this were done, I think it might also be possible to express something like the bar types in this setting, but I am not certain of that. Of course, the desire to do any of this syntactic stuff seems to derive from a conflation of the logical framework with the type theory, but it might work for those who have already agreed to pay the full price for itt. Kind regards, Jon -- Jon Sterling http://jonmsterling.com/ On Sat, Aug 23, 2014, at 12:34 PM, Prof Robert Harper wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > hi paul, > > this sort of example well exemplifies the fundamental conceptual > difference between nuprl and coq. the core of coq is martin-lof's itt. > the system ett is, by definition, itt extended with equality reflection > and uniqueness of identity proofs. it is often misrepresented that nuprl > is ett, usually on passage to a "critique" of nuprl on the grounds that > the typing judgment in nuprl is not decidable, and supposedly this is a > cardinal principle that ought to hold for all type theories. > > but nuprl is not ett. say what you want about ett, but it has no bearing > on nuprl. the reason is that the nuprl typing relation (written "in" in > ascii, written "\in" in print) is not at all intended to be, and never > was intended to be, the coq typing relation (written ":" in both ascii > and in print). comparing the two to each other is not really sensible; > each is trying to express a different thing. > > in nuprl it would be the case that t_m in (nat->bool), but as you mention > it would not be the case that in coq t_m : (nat->bool). the reason is > that the typing judgment in nuprl represents a statement about the a > priori given execution behavior of a program, whereas in coq the > analogous statement is about the grammar of programs themselves, before > they ever get to be thought of as having execution behavior. > > put in other terms, nuprl is a theory of TRUTH, and is based on the > semantic propositions-as-types principle; it can be seen as an attempt to > formulate brouwer's notion of truth in intuitionistic mathematics. coq, > on the other hand, is a theory of FORMAL PROOF, and is based on the > syntactic propositions-as-types principle; it is an account of proof, not > an account of truth. it can be seen as a full development of heyting's > formalization of intuitionistic mathematics, which brouwer in fact > rejected, but that's neither here nor there. > > you are saying, if i understood correctly, that certain facts are true, > and these facts can be independent of a particular proof theory. eg, the > goedel sentence is true, but is not provable in HA. 
(all the axioms of > HA are true, and that fact cannot be doubted without simply doubting > everything in mathematics.) > > if one starts with a partial structural (syntactic) type theory with > general recursive types, then one can consider nuprl's types as relations > over the terms of a particular recursive type, essentially mu t.(t->t) or > elaborations thereof. but the syntactic type theory cannot be construed > as a logic, or rather only as an inconsistent logic. in nuprl partiality > is handled using "bar types", which predate the lifting monad, but which > are essentially that concept phrased in an operational setting. there is > a type of unevaluated computations of a type, so that partial functions A > -` B are total functions A -> [B] (written \overline{B} in nuprl). > > hope this helps, and i hope i haven't misconstrued your question. > > bob > > ----------------------------- > Prof Robert Harper > Carnegie Mellon CSD > rwh at cs.cmu.edu > ----------------------------- > > > > On Aug 23, 2014, at 12:01, Paul Levy wrote: > > > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > > > On 23 Aug 2014, at 00:42, Paul Levy wrote: > > > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > >> > >> Dear colleagues, > >> > >> I would like to present a polemic for your consideration. > >> > >> Let T : (nat * nat) -> bool be the primitive recursive predicate where T(m,n) means that Turing machine number m (with no arguments) executes for at least n steps. > >> > >> Say that a number m is "terminating" if for some n, not T(m,n), and "HA-divergent" if it is provable in Heyting arithmetic that for all n, T(m,n). > >> > >> (This happens to coincide with provability in Peano arithmetic but I am using a constructive system for the sake of the argument.) > >> > >> In a dependent type system with boolean type and natural number type, for every number m let t_m be the term > >> > >> lambda x:nat. if T(m,x) then true else 7 > >> > >> Say that a system is "typing HA-complete" if |- t_m : nat -> bool is provable for all HA-divergent m. > >> > >> Say that a system is "typing decidable" if the judgement |- t : A, where t is a closed term and A a closed term > > > > Sorry, that should be "where t is a closed term and A a closed type" > > > >> , is decidable. > >> > >> A system cannot be both typing decidable and typing HA-complete, because termination and HA-divergence are recursively inseparable properties. Martin-L?f's intensional type theory is typing decidable by design and therefore not typing HA-complete. > >> > >> Now the polemic: > >> > >> In my opinion typing HA-completeness is a very minimal completeness property to require of a dependently typed system (with nat), and I cannot see the interest of a system that lacks it. Advocates of typing decidability will of course disagree. A curious feature of this disagreement is the contrast with traditional arguments over the acceptability of individual judgements. (Is the Axiom of Choice valid? Is excluded middle valid? Is this large cardinal hypothesis valid? etc.) Here, typing decidability advocates are apparently comfortable with the judgement |- t_m : nat -> bool for any *particular* HA-divergent m, but object to including it for *all* HA-divergent m. > >> > >> If I am wrong, please enlighten me by providing a particular HA-divergent m and a philosophical or mathematical case why t_m should not have type nat -> bool. 
> >> > >> Best regards, > >> Paul > From streicher at mathematik.tu-darmstadt.de Sun Aug 24 04:38:39 2014 From: streicher at mathematik.tu-darmstadt.de (Thomas Streicher) Date: Sun, 24 Aug 2014 10:38:39 +0200 Subject: [TYPES] decidability vs completeness In-Reply-To: References: Message-ID: <20140824083838.GA8641@mathematik.tu-darmstadt.de> Dear Paul, what you observe is a variation of the fact that type theory cannot prove all true \Pi^0_1 sentences. The only variation is that you reformulate it as a type checking problem. As we know from Goedel's 1931 Incompleteness Theorem any reasonably strong system fails to prove true \Pi^0_1 sentences. This applies in particular to realizability models of type theory which are complete in the sense that for every closed type expression A either A or A -> False is inhabited. One point of view of type theory is that it approximates realizability which is a semantic notion. But realizability is a notion whose logical complexity is beyond decidability. Thus, this approximation is bound to be just an approximation. In ordinary logic one is at least free to compensate for this incompleteness by adding axioms. That is not so with type theory where there is also the constraint that the realizers for these axioms have also computational meaning. I agree with you that there is a sort of "formalist" tradition in type theory which identifies validity with provability. Of course, that's not said explicitly but sort of insinuated. Thomas From P.B.Levy at cs.bham.ac.uk Mon Aug 25 12:17:38 2014 From: P.B.Levy at cs.bham.ac.uk (Paul Levy) Date: Mon, 25 Aug 2014 17:17:38 +0100 Subject: [TYPES] boolean conversation Message-ID: A set theorist, an ordinary mathematician and an intensional type theorist are discussing the booleans. The set theorist says: "I define bool to be the set {{},{{}}}, calling {} false and {{}} true." The mathematician replies: "Why be so specific? bool can be any set with a specified isomorphism to {{},{{}}}. That's why I reject your theorem that the empty set is an element of the union-set of bool." The intensional type theorist says to the mathematician: "Why be so specific? bool can be any omega-groupoid with a specified equivalence to {{},{{}}}. That's why I reject your theorems that x : bool, y : id(x,true), z: if x then bool else nat |- z : bool x : bool, y : id(x,true) |- y : id(if x then true else false, true) (a consequence of judgemental eta). " Having overheard this conversation, I ask the type theorist: "But why do you reject the judgements x : bool, y : if x then 1 else 0 |- if x then true else 7 : bool x : 0 |- 7 : bool (a consequence of judgemental eta) ? After all, these are semantically valid no matter which omega- groupoid and specified equivalence to {{},{{}}} you use to interpret bool." The type theorist replies: "Because either of these judgements would make typing undecidable." I ask: "Then you agree with my position that decidable typing is woefully incomplete typing?" The type theorist replies: "Yes, but (like all intensional type theorists) I am ambiguous about whether I am doing type theory or logic. With my logician hat on I care only about completeness of inhabitation, not about completeness of typing." A final question for the type theorist and anyone still reading: is it the case that any closed type of ITT + function extensionality that's inhabited in ETT is also inhabited in ITT + function extensionality? Or at least in homotopy type theory? 
Paul -- Paul Blain Levy School of Computer Science, University of Birmingham +44 121 414 4792 http://www.cs.bham.ac.uk/~pbl From ionut.g.stan at gmail.com Thu Aug 28 22:24:22 2014 From: ionut.g.stan at gmail.com (=?UTF-8?B?IklvbnXImyBHLiBTdGFuIg==?=) Date: Fri, 29 Aug 2014 05:24:22 +0300 Subject: [TYPES] Types in distributed systems Message-ID: <53FFE456.1030003@gmail.com> Hello, First, let me acknowledge upfront that I'm a complete dilettante when it comes to type theory (I'm still working through the first chapters of TaPL). That means that what I'm about to ask will be either stupid or extremely inexact. Nevertheless, I'm curious. Now, onto my question... The TL;DR version is: how does one specify types in a distributed programming model, like actors? And how much can we trust these types? I was debating today, on Twitter [1], how to type the sets of messages two actors can exchange; a producer (P) and a consumer (C). Let's say that the possible set of messages produced by P is M. We would like, by means of a type system, to know that C can handle: 1) just messages of type M and 2) all possible messages of type M. M could be represented by a sum type. Let's consider this particular one: M1 = A | B | C In a closed world this makes complete sense (to me, at least) and it's easy to verify statically. But in an open world setting, like a distributed system, where P and C are on different machines that may be upgraded separately, things look harder to me. You may guarantee statically that the two properties are met, but at runtime there may appear race conditions that violate the second property. For example, we deploy P1 and C1, both agreeing on M1. Next, we add a new variant to M1: M2 = A | B | C | D P1 and C1 are updated accordingly and we get P2 and C2, which we try to deploy, but a race condition appears and P2 send message D to C1. Obviously, C1 does not understand it. Even though the type system told us that C1 can handle all variants of M, it can't actually. A similar scenario appears when removing one of the variants of M1. Is there any typing approach to this kind of problem? It looks to me that types would have to include a version as well and all runtime communication between different versions should be prohibited by having versioned communication channels. Does anyone have any insights or pointers to articles, papers or books that discuss this sort of problem? Thank you for reading this! [1]: https://twitter.com/shajra/status/504568858967953408 -- Ionu? G. Stan | http://igstan.ro From harley.eades at gmail.com Fri Aug 29 08:57:14 2014 From: harley.eades at gmail.com (Harley Eades III) Date: Fri, 29 Aug 2014 08:57:14 -0400 Subject: [TYPES] Types in distributed systems In-Reply-To: <53FFE456.1030003@gmail.com> References: <53FFE456.1030003@gmail.com> Message-ID: Hi, Stan. Just a quick response. A busy Friday for me. This reminds me of this tech. report: http://www.dcs.gla.ac.uk/~simon/publications/TR-2003-131.pdf I know that linear types as session types are also very promosing. Take a look at some of the papers here: http://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.html http://www.cs.cmu.edu/~fp/publications.html You might also be interested in looking at the pi-calculus and its type systems. A quick google using ?pi-calculus type system? will bring up a bunch of options including papers related to session types. I hope this helps. .\ Harley Eades On Aug 28, 2014, at 10:24 PM, Ionu? G. 
Stan wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > First, let me acknowledge upfront that I'm a complete dilettante when it comes to type theory (I'm still working through the first chapters of TaPL). That means that what I'm about to ask will be either stupid or extremely inexact. Nevertheless, I'm curious. > > Now, onto my question... > > The TL;DR version is: how does one specify types in a distributed programming model, like actors? And how much can we trust these types? > > I was debating today, on Twitter [1], how to type the sets of messages two actors can exchange; a producer (P) and a consumer (C). > > Let's say that the possible set of messages produced by P is M. We would like, by means of a type system, to know that C can handle: 1) just messages of type M and 2) all possible messages of type M. > > M could be represented by a sum type. Let's consider this particular one: > > M1 = A | B | C > > In a closed world this makes complete sense (to me, at least) and it's easy to verify statically. But in an open world setting, like a distributed system, where P and C are on different machines that may be upgraded separately, things look harder to me. You may guarantee statically that the two properties are met, but at runtime there may appear race conditions that violate the second property. > > For example, we deploy P1 and C1, both agreeing on M1. Next, we add a new variant to M1: > > M2 = A | B | C | D > > P1 and C1 are updated accordingly and we get P2 and C2, which we try to deploy, but a race condition appears and P2 send message D to C1. Obviously, C1 does not understand it. Even though the type system told us that C1 can handle all variants of M, it can't actually. > > A similar scenario appears when removing one of the variants of M1. > > Is there any typing approach to this kind of problem? > > It looks to me that types would have to include a version as well and all runtime communication between different versions should be prohibited by having versioned communication channels. > > Does anyone have any insights or pointers to articles, papers or books that discuss this sort of problem? > > Thank you for reading this! > > > [1]: https://twitter.com/shajra/status/504568858967953408 > > -- > Ionu? G. Stan | http://igstan.ro From fp at cs.cmu.edu Fri Aug 29 09:49:40 2014 From: fp at cs.cmu.edu (Frank Pfenning) Date: Fri, 29 Aug 2014 09:49:40 -0400 Subject: [TYPES] Types in distributed systems In-Reply-To: <53FFE456.1030003@gmail.com> References: <53FFE456.1030003@gmail.com> Message-ID: Session types were invented to address well-typed communication between processes. I believe Honda's CONCUR 1993 paper "Types for Dyadic Interaction" started this line of development, with much subsequent work I will not try to review, alongside other foundational work on directly assigning typed to versions of Milner's pi-calculus. More recently, it was discovered that session types arise from a Curry-Howard interpretation of linear logic, where linear propositions correspond to session types, sequent proofs correspond to processes, and cut reduction corresponds to communication. In this kind of system, deadlock and race freedom are guaranteed by typing. See Caires & Pf. (CONCUR 2010) for a pure calculus, and Toninho et al. (ESOP 2013) for a full functional language with an encapsulation of session-typed concurrency in a monad-like construct. 
There is a natural notion of subtyping that arises in the system if you have labeled forms of internal and external choice. This permits some forms of backward-compatible upgrades of services, but may not address everything you had in mind. - Frank On Thu, Aug 28, 2014 at 10:24 PM, "Ionu? G. Stan" wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Hello, > > First, let me acknowledge upfront that I'm a complete dilettante when it > comes to type theory (I'm still working through the first chapters of > TaPL). That means that what I'm about to ask will be either stupid or > extremely inexact. Nevertheless, I'm curious. > > Now, onto my question... > > The TL;DR version is: how does one specify types in a distributed > programming model, like actors? And how much can we trust these types? > > I was debating today, on Twitter [1], how to type the sets of messages two > actors can exchange; a producer (P) and a consumer (C). > > Let's say that the possible set of messages produced by P is M. We would > like, by means of a type system, to know that C can handle: 1) just > messages of type M and 2) all possible messages of type M. > > M could be represented by a sum type. Let's consider this particular one: > > M1 = A | B | C > > In a closed world this makes complete sense (to me, at least) and it's > easy to verify statically. But in an open world setting, like a distributed > system, where P and C are on different machines that may be upgraded > separately, things look harder to me. You may guarantee statically that the > two properties are met, but at runtime there may appear race conditions > that violate the second property. > > For example, we deploy P1 and C1, both agreeing on M1. Next, we add a new > variant to M1: > > M2 = A | B | C | D > > P1 and C1 are updated accordingly and we get P2 and C2, which we try to > deploy, but a race condition appears and P2 send message D to C1. > Obviously, C1 does not understand it. Even though the type system told us > that C1 can handle all variants of M, it can't actually. > > A similar scenario appears when removing one of the variants of M1. > > Is there any typing approach to this kind of problem? > > It looks to me that types would have to include a version as well and all > runtime communication between different versions should be prohibited by > having versioned communication channels. > > Does anyone have any insights or pointers to articles, papers or books > that discuss this sort of problem? > > Thank you for reading this! > > > [1]: https://twitter.com/shajra/status/504568858967953408 > > -- > Ionu? G. Stan | http://igstan.ro > -- Frank Pfenning, Professor and Head Department of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 http://www.cs.cmu.edu/~fp +1 412 268-6343 GHC 7019 From aravara at fct.unl.pt Fri Aug 29 14:52:02 2014 From: aravara at fct.unl.pt (Antonio Ravara) Date: Fri, 29 Aug 2014 19:52:02 +0100 Subject: [TYPES] Types in distributed systems In-Reply-To: <53FFE456.1030003@gmail.com> References: <53FFE456.1030003@gmail.com> Message-ID: <5400CBD2.7000206@fct.unl.pt> Behavioural types (of which session types are an instance) handle the scenario described in several ways, providing guarantees ranging from protocol compatibility to progress. An European project (BETTY - www.behavioural-types.eu) is dedicated to the topic, and produced recently state-of-the-art reports covering the (huge!) bibliography on the subject. 
You can find them at http://www.behavioural-types.eu/publications. Hope this helps. antonio On 29/08/14 03:24, "Ionu? G. Stan" wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > First, let me acknowledge upfront that I'm a complete dilettante when it > comes to type theory (I'm still working through the first chapters of > TaPL). That means that what I'm about to ask will be either stupid or > extremely inexact. Nevertheless, I'm curious. > > Now, onto my question... > > The TL;DR version is: how does one specify types in a distributed > programming model, like actors? And how much can we trust these types? > > I was debating today, on Twitter [1], how to type the sets of messages > two actors can exchange; a producer (P) and a consumer (C). > > Let's say that the possible set of messages produced by P is M. We would > like, by means of a type system, to know that C can handle: 1) just > messages of type M and 2) all possible messages of type M. > > M could be represented by a sum type. Let's consider this particular one: > > M1 = A | B | C > > In a closed world this makes complete sense (to me, at least) and it's > easy to verify statically. But in an open world setting, like a > distributed system, where P and C are on different machines that may be > upgraded separately, things look harder to me. You may guarantee > statically that the two properties are met, but at runtime there may > appear race conditions that violate the second property. > > For example, we deploy P1 and C1, both agreeing on M1. Next, we add a > new variant to M1: > > M2 = A | B | C | D > > P1 and C1 are updated accordingly and we get P2 and C2, which we try to > deploy, but a race condition appears and P2 send message D to C1. > Obviously, C1 does not understand it. Even though the type system told > us that C1 can handle all variants of M, it can't actually. > > A similar scenario appears when removing one of the variants of M1. > > Is there any typing approach to this kind of problem? > > It looks to me that types would have to include a version as well and > all runtime communication between different versions should be > prohibited by having versioned communication channels. > > Does anyone have any insights or pointers to articles, papers or books > that discuss this sort of problem? > > Thank you for reading this! > > > [1]: https://twitter.com/shajra/status/504568858967953408 > From oobles at gmail.com Fri Aug 29 18:54:38 2014 From: oobles at gmail.com (David Ryan) Date: Sat, 30 Aug 2014 08:54:38 +1000 Subject: [TYPES] Types in distributed systems In-Reply-To: References: <53FFE456.1030003@gmail.com> Message-ID: Hi Stan, First off, let me say that I would know less about the theory of type systems than you. However, I've spent many years being interested in the application of data in distributed systems. I developed Argot ( www.argot-sdk.org) to address the issues you've raised. Argot uses a concept analogous to a dictionary. Each system contains a dictionary of versioned data types. When communication begins, first a small set of core data types are compared to ensure the base dictionary types are compatible (these are data types used to define other data types). Following this, the two systems compare each message to be communicated. The peers will then only communicate with the messages that are commonly defined. Argot is designed to allow individual systems to change over time, yet continue to communicate. 
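To make the dictionary idea concrete, here is a minimal OCaml sketch of that style of negotiation (hypothetical names; this is not Argot's actual API or wire format): each peer advertises the versioned message names it understands, and the pair restricts itself to the intersection.

  module StringSet = Set.Make (String)

  (* Each peer's dictionary, reduced to the versioned message names it understands. *)
  let p2_supports = StringSet.of_list ["A.v1"; "B.v1"; "C.v1"; "D.v1"]
  let c1_supports = StringSet.of_list ["A.v1"; "B.v1"; "C.v1"]

  (* The only messages this pair may exchange are the ones both declare. *)
  let common = StringSet.inter p2_supports c1_supports

  let () =
    (* P2 sees that "D.v1" is missing from the common set, so it must not send D to C1. *)
    StringSet.iter print_endline common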
Anyone working with XML type systems will know the difficulty of keeping systems in alignment. Regards, David. On Fri, Aug 29, 2014 at 10:57 PM, Harley Eades III wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Hi, Stan. > > Just a quick response. A busy Friday for me. > > This reminds me of this tech. report: > > http://www.dcs.gla.ac.uk/~simon/publications/TR-2003-131.pdf > > I know that linear types as session types are also very promosing. Take > a look at some of the papers here: > http://homepages.inf.ed.ac.uk/wadler/topics/linear-logic.html > http://www.cs.cmu.edu/~fp/publications.html > > You might also be interested in looking at the pi-calculus and its type > systems. A quick google using ?pi-calculus type system? will bring up > a bunch of options including papers related to session types. > > I hope this helps. > > .\ Harley Eades > > On Aug 28, 2014, at 10:24 PM, Ionu? G. Stan > wrote: > > > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > > > Hello, > > > > First, let me acknowledge upfront that I'm a complete dilettante when it > comes to type theory (I'm still working through the first chapters of > TaPL). That means that what I'm about to ask will be either stupid or > extremely inexact. Nevertheless, I'm curious. > > > > Now, onto my question... > > > > The TL;DR version is: how does one specify types in a distributed > programming model, like actors? And how much can we trust these types? > > > > I was debating today, on Twitter [1], how to type the sets of messages > two actors can exchange; a producer (P) and a consumer (C). > > > > Let's say that the possible set of messages produced by P is M. We would > like, by means of a type system, to know that C can handle: 1) just > messages of type M and 2) all possible messages of type M. > > > > M could be represented by a sum type. Let's consider this particular one: > > > > M1 = A | B | C > > > > In a closed world this makes complete sense (to me, at least) and it's > easy to verify statically. But in an open world setting, like a distributed > system, where P and C are on different machines that may be upgraded > separately, things look harder to me. You may guarantee statically that the > two properties are met, but at runtime there may appear race conditions > that violate the second property. > > > > For example, we deploy P1 and C1, both agreeing on M1. Next, we add a > new variant to M1: > > > > M2 = A | B | C | D > > > > P1 and C1 are updated accordingly and we get P2 and C2, which we try to > deploy, but a race condition appears and P2 send message D to C1. > Obviously, C1 does not understand it. Even though the type system told us > that C1 can handle all variants of M, it can't actually. > > > > A similar scenario appears when removing one of the variants of M1. > > > > Is there any typing approach to this kind of problem? > > > > It looks to me that types would have to include a version as well and > all runtime communication between different versions should be prohibited > by having versioned communication channels. > > > > Does anyone have any insights or pointers to articles, papers or books > that discuss this sort of problem? > > > > Thank you for reading this! > > > > > > [1]: https://twitter.com/shajra/status/504568858967953408 > > > > -- > > Ionu? G. 
Stan | http://igstan.ro > > From andru at cs.cornell.edu Fri Aug 29 22:41:34 2014 From: andru at cs.cornell.edu (Andrew Myers) Date: Fri, 29 Aug 2014 22:41:34 -0400 Subject: [TYPES] Types in distributed systems In-Reply-To: <53FFE456.1030003@gmail.com> References: <53FFE456.1030003@gmail.com> Message-ID: <03808F6B-27A8-402B-8F0D-657BD14BF5F5@cs.cornell.edu> The notion that you can't trust types in distributed computations shows up in some prior work in a fairly explicit way. Riely and Hennessey's paper "Trust and Partial Typing in Open Systems of Mobile Agents" captures this idea with their notion of partial typing. Security type systems with a notion of integrity, such as that in Jif, can also protect against agents that lie about types. We have used that approach in our Fabric system for distributed computation, which also deals with untrusted mobile code (see our Oakland'12 paper). -- Andrew On 28 Aug 2014, at 22:24, Ionu? G. Stan wrote: > [ The Types Forum, > http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Hello, > > First, let me acknowledge upfront that I'm a complete dilettante when > it comes to type theory (I'm still working through the first chapters > of TaPL). That means that what I'm about to ask will be either stupid > or extremely inexact. Nevertheless, I'm curious. > > Now, onto my question... > > The TL;DR version is: how does one specify types in a distributed > programming model, like actors? And how much can we trust these types? > > I was debating today, on Twitter [1], how to type the sets of messages > two actors can exchange; a producer (P) and a consumer (C). > > Let's say that the possible set of messages produced by P is M. We > would like, by means of a type system, to know that C can handle: 1) > just messages of type M and 2) all possible messages of type M. > > M could be represented by a sum type. Let's consider this particular > one: > > M1 = A | B | C > > In a closed world this makes complete sense (to me, at least) and it's > easy to verify statically. But in an open world setting, like a > distributed system, where P and C are on different machines that may > be upgraded separately, things look harder to me. You may guarantee > statically that the two properties are met, but at runtime there may > appear race conditions that violate the second property. > > For example, we deploy P1 and C1, both agreeing on M1. Next, we add a > new variant to M1: > > M2 = A | B | C | D > > P1 and C1 are updated accordingly and we get P2 and C2, which we try > to deploy, but a race condition appears and P2 send message D to C1. > Obviously, C1 does not understand it. Even though the type system told > us that C1 can handle all variants of M, it can't actually. > > A similar scenario appears when removing one of the variants of M1. > > Is there any typing approach to this kind of problem? > > It looks to me that types would have to include a version as well and > all runtime communication between different versions should be > prohibited by having versioned communication channels. > > Does anyone have any insights or pointers to articles, papers or books > that discuss this sort of problem? > > Thank you for reading this! > > > [1]: https://twitter.com/shajra/status/504568858967953408 > > -- > Ionu? G. 
Stan | http://igstan.ro -- Andrew Andrew Myers Professor, Department of Computer Science Cornell University http://www.cs.cornell.edu/andru From wadler at inf.ed.ac.uk Sat Aug 30 07:26:31 2014 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Sat, 30 Aug 2014 12:26:31 +0100 Subject: [TYPES] Types in distributed systems In-Reply-To: <53FFE456.1030003@gmail.com> References: <53FFE456.1030003@gmail.com> Message-ID: See the Acute system by Sewell and others. http://www.cl.cam.ac.uk/~pes20/acute/index.html -- P . \ Philip Wadler, Professor of Theoretical Computer Science . /\ School of Informatics, University of Edinburgh . / \ http://homepages.inf.ed.ac.uk/wadler/ From ionut.g.stan at gmail.com Sat Aug 30 16:44:05 2014 From: ionut.g.stan at gmail.com (Ionuț G. Stan) Date: Sat, 30 Aug 2014 23:44:05 +0300 Subject: [TYPES] Types in distributed systems In-Reply-To: References: <53FFE456.1030003@gmail.com> Message-ID: <54023795.4000900@gmail.com> Thank you all for the given pointers.
Much more than I had initially anticipated. I skimmed over some of the mentioned papers and they all look great. This should keep me busy for a little while. On 30/08/14 14:26, Philip Wadler wrote: > See the Acute system by Sewell and others. > > http://www.cl.cam.ac.uk/~pes20/acute/index.html > > -- P > > . \ Philip Wadler, Professor of Theoretical Computer Science > . /\ School of Informatics, University of Edinburgh > . / \ http://homepages.inf.ed.ac.uk/wadler/ > > > On 29 August 2014 03:24, "Ionu? G. Stan" > wrote: > > [ The Types Forum, > http://lists.seas.upenn.edu/__mailman/listinfo/types-list > ] > > Hello, > > First, let me acknowledge upfront that I'm a complete dilettante > when it comes to type theory (I'm still working through the first > chapters of TaPL). That means that what I'm about to ask will be > either stupid or extremely inexact. Nevertheless, I'm curious. > > Now, onto my question... > > The TL;DR version is: how does one specify types in a distributed > programming model, like actors? And how much can we trust these types? > > I was debating today, on Twitter [1], how to type the sets of > messages two actors can exchange; a producer (P) and a consumer (C). > > Let's say that the possible set of messages produced by P is M. We > would like, by means of a type system, to know that C can handle: 1) > just messages of type M and 2) all possible messages of type M. > > M could be represented by a sum type. Let's consider this particular > one: > > M1 = A | B | C > > In a closed world this makes complete sense (to me, at least) and > it's easy to verify statically. But in an open world setting, like a > distributed system, where P and C are on different machines that may > be upgraded separately, things look harder to me. You may guarantee > statically that the two properties are met, but at runtime there may > appear race conditions that violate the second property. > > For example, we deploy P1 and C1, both agreeing on M1. Next, we add > a new variant to M1: > > M2 = A | B | C | D > > P1 and C1 are updated accordingly and we get P2 and C2, which we try > to deploy, but a race condition appears and P2 send message D to C1. > Obviously, C1 does not understand it. Even though the type system > told us that C1 can handle all variants of M, it can't actually. > > A similar scenario appears when removing one of the variants of M1. > > Is there any typing approach to this kind of problem? > > It looks to me that types would have to include a version as well > and all runtime communication between different versions should be > prohibited by having versioned communication channels. > > Does anyone have any insights or pointers to articles, papers or > books that discuss this sort of problem? > > Thank you for reading this! > > > [1]: https://twitter.com/shajra/__status/504568858967953408 > > > -- > Ionu? G. Stan | http://igstan.ro > > > > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > -- Ionu? G. Stan | http://igstan.ro From Peter.Sewell at cl.cam.ac.uk Tue Sep 2 04:55:31 2014 From: Peter.Sewell at cl.cam.ac.uk (Peter Sewell) Date: Tue, 2 Sep 2014 09:55:31 +0100 Subject: [TYPES] Types in distributed systems In-Reply-To: References: <53FFE456.1030003@gmail.com> Message-ID: > The TL;DR version is: how does one specify types in a distributed > programming model, like actors? And how much can we trust these types? 
As you can see from the earlier responses, there's been a lot of work addressing different aspects of this problem. To quickly (and partially!) summarise, we have: - typing to ensure message-passing endpoints conform to some protocols (session types etc.), as several people have mentioned - partially typed systems, in which type-checking part of the system ensures that its interactions with some other untrusted (and hence not necessarily type-correct) part will not be disastrous. Andrew mentioned James Riely and Matthew Hennessy's work on this, Jan Vitek and I dabbled in this direction (http://www.cl.cam.ac.uk/~pes20/wrapall.ps), and there's a lot of more recent work, especially in non-distributed PL contexts (e.g. Richards et al.'s OOPSLA 13 paper, like types, etc.). - typing that ensures safety properties in the face of version change of some of the endpoints. Phil mentioned our Acute language, and there's also the HashCaml follow-up (http://www.cl.cam.ac.uk/~pes20/hashcaml/index.html). These explore how one can preserve type abstraction boundaries using only runtime checking of channel-name identity - basically implicitly versioned communication channels. - typing to ensure that dynamic updates of endpoints will work, as explored by Mike Hicks and others. Peter From rossberg at mpi-sws.org Tue Sep 2 05:55:31 2014 From: rossberg at mpi-sws.org (Andreas Rossberg) Date: Tue, 2 Sep 2014 11:55:31 +0200 Subject: [TYPES] Types in distributed systems In-Reply-To: <54023795.4000900@gmail.com> References: <53FFE456.1030003@gmail.com> <54023795.4000900@gmail.com> Message-ID: <4EC38E5B-9C3B-4ED9-A3B6-4756FB51BFEC@mpi-sws.org> In addition to the good answers you already got, let me add a few clarifications perhaps addressing your question. First, it is worth noting that "distributed" and "open" are quite different properties. A distributed program isn't necessarily open, and an open program isn't necessarily distributed. In terms of type system support, they manifest as very different problems. Openness typically necessitates some amount of runtime type checking (or more than that, depending on the possible complexity of the data and the degree of trust), to reconfirm static assumptions at runtime. Type systems for distribution are potentially concerned with notions like locality and mobility. "Open" does not necessarily imply "upgradable". A system with the ability to e.g. plug in modules at runtime, or communicate with statically unknown principals, is already open. In these scenarios, it should be sufficient to just perform a one-time check when a module is loaded or a connection is established. After that, it stays fixed. Upgrading adds a dimension of _mutability_ to the mix. It will typically be necessary to re-check each upgrade or reconnect. An even more involved scenario is to allow upgrading with potentially _incompatible_ types. If that is desired, a single check is no longer possible. You need to check every single interaction, at least in those places where the types are incompatible (which you can define as "the provided type is not a subtype of the expected type"). This can be realised by having the language or runtime insert checked (higher-order) type coercions between incompatible interacting program parts, which will reject unexpected data as a runtime error (or take other measures). The scenario you describe seems to be yet another case.
IIUC, you want to upgrade _multiple_ components, in a way that is incompatible with the previous version, but upgrades _all_ involved parts, so that in the end, types are consistent again. Obviously, there is no real problem if the system supports atomic update of multiple components. But if it doesn?t, then this problem may turn out to be a variant of the previous one: the first update is incompatible initially, so a type checking layer must be inserted at the boundaries. However, when the second component is also updated in a consistent manner, the assumptions match again, and the checks can be eliminated. Such a scheme would also work with more than two components. /Andreas On Aug 30, 2014, at 22:44 , Ionu? G. Stan wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > Thank you all for the given pointers. Much more than I had initially anticipated. I skimmed over some of the mentioned papers and they all look great. This should keep me busy for a little while. > > > On 30/08/14 14:26, Philip Wadler wrote: >> See the Acute system by Sewell and others. >> >> http://www.cl.cam.ac.uk/~pes20/acute/index.html >> >> -- P >> >> . \ Philip Wadler, Professor of Theoretical Computer Science >> . /\ School of Informatics, University of Edinburgh >> . / \ http://homepages.inf.ed.ac.uk/wadler/ >> >> >> On 29 August 2014 03:24, "Ionu? G. Stan" > > wrote: >> >> [ The Types Forum, >> http://lists.seas.upenn.edu/__mailman/listinfo/types-list >> ] >> >> Hello, >> >> First, let me acknowledge upfront that I'm a complete dilettante >> when it comes to type theory (I'm still working through the first >> chapters of TaPL). That means that what I'm about to ask will be >> either stupid or extremely inexact. Nevertheless, I'm curious. >> >> Now, onto my question... >> >> The TL;DR version is: how does one specify types in a distributed >> programming model, like actors? And how much can we trust these types? >> >> I was debating today, on Twitter [1], how to type the sets of >> messages two actors can exchange; a producer (P) and a consumer (C). >> >> Let's say that the possible set of messages produced by P is M. We >> would like, by means of a type system, to know that C can handle: 1) >> just messages of type M and 2) all possible messages of type M. >> >> M could be represented by a sum type. Let's consider this particular >> one: >> >> M1 = A | B | C >> >> In a closed world this makes complete sense (to me, at least) and >> it's easy to verify statically. But in an open world setting, like a >> distributed system, where P and C are on different machines that may >> be upgraded separately, things look harder to me. You may guarantee >> statically that the two properties are met, but at runtime there may >> appear race conditions that violate the second property. >> >> For example, we deploy P1 and C1, both agreeing on M1. Next, we add >> a new variant to M1: >> >> M2 = A | B | C | D >> >> P1 and C1 are updated accordingly and we get P2 and C2, which we try >> to deploy, but a race condition appears and P2 send message D to C1. >> Obviously, C1 does not understand it. Even though the type system >> told us that C1 can handle all variants of M, it can't actually. >> >> A similar scenario appears when removing one of the variants of M1. >> >> Is there any typing approach to this kind of problem? 
>> >> It looks to me that types would have to include a version as well >> and all runtime communication between different versions should be >> prohibited by having versioned communication channels. >> >> Does anyone have any insights or pointers to articles, papers or >> books that discuss this sort of problem? >> >> Thank you for reading this! >> >> >> [1]: https://twitter.com/shajra/__status/504568858967953408 >> >> >> -- >> Ionu? G. Stan | http://igstan.ro >> >> >> >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. >> > > > -- > Ionu? G. Stan | http://igstan.ro From ionut.g.stan at gmail.com Tue Sep 2 07:31:08 2014 From: ionut.g.stan at gmail.com (=?UTF-8?B?IklvbnXImyBHLiBTdGFuIg==?=) Date: Tue, 02 Sep 2014 14:31:08 +0300 Subject: [TYPES] Types in distributed systems In-Reply-To: <4EC38E5B-9C3B-4ED9-A3B6-4756FB51BFEC@mpi-sws.org> References: <53FFE456.1030003@gmail.com> <54023795.4000900@gmail.com> <4EC38E5B-9C3B-4ED9-A3B6-4756FB51BFEC@mpi-sws.org> Message-ID: <5405AA7C.1020304@gmail.com> Thank you very much for this short classification. It does, indeed, add some order to my thoughts. I have added a few comments inline. On 02/09/14 12:55, Andreas Rossberg wrote: > In addition to the good answers you already got, let me add a few clarifications perhaps addressing your question. > > First, it is worth noting that ?distributed? and ?open? are quite different properties. A distributed program isn?t necessarily open, and an open program isn?t necessarily distributed. In terms of type system support, they manifest as very different problems. Openness typically necessitates some amount of runtime type checking (or more than that, depending on the possible complexity of the data and the degree of trust), to reconfirm static assumptions at runtime. Type systems for distribution are potentially concerned with notions like locality and mobility. > > ?Open? does not necessarily imply ?upgradable?. A system with the ability to e.g. plug in modules at runtime, or communicate with statically unknown principals, is already open. In these scenarios, it should be sufficient to just perform a one-time check when a module is loaded or a connection is established. After that, it stays fixed. Upgrading adds a dimension of _mutability_ to the mix. It will typically be necessary to re-check each upgrade or reconnect. Right. Part of the same stream of questions I had initially was where would the Java Virtual Machine be situated in this picture. A single JVM process isn't distributed, and yet, one may still encounter similar situations to those that I imagined in original email, but due to classpath conflicts this time. I'm pretty confident now that the JVM can be classified as an open system. Case in point. On the JVM, one may end up in a situation where two libraries I'm depending upon, depend in turn on the same third library, *but* different versions of it. My code will compile just fine, because transitive dependencies aren't verified statically. At runtime, though, only one of those two versions of the third library will win and one of the possible outcomes is the dreaded java.lang.NoSuchMethodException. Although I cannot articulate it precisely, I feel there's a gap between, e.g., having a runtime check for out of bounds array indeces, on one hand, and missing method definitions at runtime on the other hand. 
As a library author, if I'm calling a function or method from an external module and my code compiles then I have a static *guarantee* that that method will be there at runtime, isn't it? Am I missing something or is this not a useful difference to make in practice/theory? > An even more involved scenario is to allow upgrading with potentially _incompatible_ types. If that is desired, a single check is no longer possible. You need to check every single interaction, at least in those places where the types are incompatible (which you can define as ?the provided type is not a subtype of the expected type?). This can be realised by having the language or runtime insert checked (higher-order) type coercions between incompatible interacting program parts, which will reject unexpected data as a runtime error (or take other measures). > > The scenario you describe seems to be yet another case. IIUC, you want to upgrade _multiple_ components, in a way that is incompatible with the previous version, but upgrades _all_ involved parts, so that in the end, types are consistent again. Obviously, there is no real problem if the system supports atomic update of multiple components. But if it doesn?t, then this problem may turn out to be a variant of the previous one: the first update is incompatible initially, so a type checking layer must be inserted at the boundaries. However, when the second component is also updated in a consistent manner, the assumptions match again, and the checks can be eliminated. Such a scheme would also work with more than two components. > > /Andreas > > On Aug 30, 2014, at 22:44 , Ionu? G. Stan wrote: > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] >> >> Thank you all for the given pointers. Much more than I had initially anticipated. I skimmed over some of the mentioned papers and they all look great. This should keep me busy for a little while. >> >> >> On 30/08/14 14:26, Philip Wadler wrote: >>> See the Acute system by Sewell and others. >>> >>> http://www.cl.cam.ac.uk/~pes20/acute/index.html >>> >>> -- P >>> >>> . \ Philip Wadler, Professor of Theoretical Computer Science >>> . /\ School of Informatics, University of Edinburgh >>> . / \ http://homepages.inf.ed.ac.uk/wadler/ >>> >>> >>> On 29 August 2014 03:24, "Ionu? G. Stan" >> > wrote: >>> >>> [ The Types Forum, >>> http://lists.seas.upenn.edu/__mailman/listinfo/types-list >>> ] >>> >>> Hello, >>> >>> First, let me acknowledge upfront that I'm a complete dilettante >>> when it comes to type theory (I'm still working through the first >>> chapters of TaPL). That means that what I'm about to ask will be >>> either stupid or extremely inexact. Nevertheless, I'm curious. >>> >>> Now, onto my question... >>> >>> The TL;DR version is: how does one specify types in a distributed >>> programming model, like actors? And how much can we trust these types? >>> >>> I was debating today, on Twitter [1], how to type the sets of >>> messages two actors can exchange; a producer (P) and a consumer (C). >>> >>> Let's say that the possible set of messages produced by P is M. We >>> would like, by means of a type system, to know that C can handle: 1) >>> just messages of type M and 2) all possible messages of type M. >>> >>> M could be represented by a sum type. Let's consider this particular >>> one: >>> >>> M1 = A | B | C >>> >>> In a closed world this makes complete sense (to me, at least) and >>> it's easy to verify statically. 
But in an open world setting, like a >>> distributed system, where P and C are on different machines that may >>> be upgraded separately, things look harder to me. You may guarantee >>> statically that the two properties are met, but at runtime there may >>> appear race conditions that violate the second property. >>> >>> For example, we deploy P1 and C1, both agreeing on M1. Next, we add >>> a new variant to M1: >>> >>> M2 = A | B | C | D >>> >>> P1 and C1 are updated accordingly and we get P2 and C2, which we try >>> to deploy, but a race condition appears and P2 send message D to C1. >>> Obviously, C1 does not understand it. Even though the type system >>> told us that C1 can handle all variants of M, it can't actually. >>> >>> A similar scenario appears when removing one of the variants of M1. >>> >>> Is there any typing approach to this kind of problem? >>> >>> It looks to me that types would have to include a version as well >>> and all runtime communication between different versions should be >>> prohibited by having versioned communication channels. >>> >>> Does anyone have any insights or pointers to articles, papers or >>> books that discuss this sort of problem? >>> >>> Thank you for reading this! >>> >>> >>> [1]: https://twitter.com/shajra/__status/504568858967953408 >>> >>> >>> -- >>> Ionu? G. Stan | http://igstan.ro >>> >>> >>> >>> >>> The University of Edinburgh is a charitable body, registered in >>> Scotland, with registration number SC005336. >>> >> >> >> -- >> Ionu? G. Stan | http://igstan.ro > -- Ionu? G. Stan | http://igstan.ro From ionut.g.stan at gmail.com Tue Sep 2 07:37:54 2014 From: ionut.g.stan at gmail.com (=?UTF-8?B?IklvbnXImyBHLiBTdGFuIg==?=) Date: Tue, 02 Sep 2014 14:37:54 +0300 Subject: [TYPES] Types in distributed systems In-Reply-To: References: <53FFE456.1030003@gmail.com> Message-ID: <5405AC12.9030808@gmail.com> On 02/09/14 11:55, Peter Sewell wrote: >> The TL;DR version is: how does one specify types in a distributed >> programming model, like actors? And how much can we trust these types? > > As you can see from the earlier responses, there's been a lot of work > addressing different aspects of this problem. To quickly (and > partially!) summarise, we have: > > - typing to ensure message-passing endpoints conform to some protocols > (session types etc.), as several people have mentioned > > - partially typed systems, in which type-checking part of the system > ensures that its interactions with some other untrusted (and hence not > necessarily type-correct) part will not be disastrous. Andrew > mentioned James Riely and Matthew Hennessy's work on this, Jan Vitek > and I dabbled in this direction > (http://www.cl.cam.ac.uk/~pes20/wrapall.ps), and there's a lot of more > recent work, especially in non-distributed PL contexts (eg Richards et > al.'s OOPSLA 13 paper, like types, etc.). > > - typing that ensures safety properties in the face of version change > of some of the endpoints. Phil mentioned our Acute language, and > there's also the HashCaml follow-up > (http://www.cl.cam.ac.uk/~pes20/hashcaml/index.html). These explore > how one can preserve type abstraction boundaries using only runtime > checking of channel-name identity - basically implicitly versioned > communication channels. > > - typing to ensure that dynamic updates of endpoints will work, as > explored by Mike Hicks and others. Thank you. This is a very useful summary to have by when studying this stuff. It looks to me that the 3rd point is a subset of the 2nd one? 
By virtue of seeing different versions of a type as different types altogether? -- Ionu? G. Stan | http://igstan.ro From oobles at gmail.com Thu Sep 4 02:08:15 2014 From: oobles at gmail.com (David Ryan) Date: Thu, 4 Sep 2014 16:08:15 +1000 Subject: [TYPES] Types in distributed systems In-Reply-To: <5405AA7C.1020304@gmail.com> References: <53FFE456.1030003@gmail.com> <54023795.4000900@gmail.com> <4EC38E5B-9C3B-4ED9-A3B6-4756FB51BFEC@mpi-sws.org> <5405AA7C.1020304@gmail.com> Message-ID: On Tue, Sep 2, 2014 at 9:31 PM, "Ionu? G. Stan" wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list > ] > > Thank you very much for this short classification. It does, indeed, add > some order to my thoughts. I have added a few comments inline. > > > On 02/09/14 12:55, Andreas Rossberg wrote: > >> In addition to the good answers you already got, let me add a few >> clarifications perhaps addressing your question. >> >> First, it is worth noting that ?distributed? and ?open? are quite >> different properties. A distributed program isn?t necessarily open, and an >> open program isn?t necessarily distributed. In terms of type system >> support, they manifest as very different problems. Openness typically >> necessitates some amount of runtime type checking (or more than that, >> depending on the possible complexity of the data and the degree of trust), >> to reconfirm static assumptions at runtime. Type systems for distribution >> are potentially concerned with notions like locality and mobility. >> >> ?Open? does not necessarily imply ?upgradable?. A system with the ability >> to e.g. plug in modules at runtime, or communicate with statically unknown >> principals, is already open. In these scenarios, it should be sufficient to >> just perform a one-time check when a module is loaded or a connection is >> established. After that, it stays fixed. Upgrading adds a dimension of >> _mutability_ to the mix. It will typically be necessary to re-check each >> upgrade or reconnect. >> > > Right. Part of the same stream of questions I had initially was where > would the Java Virtual Machine be situated in this picture. A single JVM > process isn't distributed, and yet, one may still encounter similar > situations to those that I imagined in original email, but due to classpath > conflicts this time. I'm pretty confident now that the JVM can be > classified as an open system. > > Case in point. On the JVM, one may end up in a situation where two > libraries I'm depending upon, depend in turn on the same third library, > *but* different versions of it. My code will compile just fine, because > transitive dependencies aren't verified statically. At runtime, though, > only one of those two versions of the third library will win and one of the > possible outcomes is the dreaded java.lang.NoSuchMethodException. > That's an interesting point. I've always thought that types in a distributed system are confined to anywhere the only communication between systems is via binary buffer. However, the same problem of contract negotiation occurs between any system that does runtime linking. This is also done in binary library linking when loading dynamic libraries. > > Although I cannot articulate it precisely, I feel there's a gap between, > e.g., having a runtime check for out of bounds array indeces, on one hand, > and missing method definitions at runtime on the other hand. 
As a library > author, if I'm calling a function or method from an external module and my > code compiles then I have a static *guarantee* that that method will be > there at runtime, isn't it? > It's actually worse than just a missing a method. If you allow object references to be passed as parameter to a function then to do complete type checking requires that every method and referenced object the passed object connects is also checked against the expected library. This is the same in distributed system allowing complex types (e.g. XML). Every sub type needs to match the expected version. The limitations in XML for instance have given rise to rules about what you can and can't do when changing versions (e.g http://www.subbu.org/blog/2005/03/versioning-xml-schemas). There's plenty of interesting problems associated with distributed type systems. I've always been surprised it hasn't been covered as well as it should in literature. I'm also yet to find a mailing list or forum with enough people interested in investigating it further. If anyone has any pointers to such a place let me know. Regards, David. From rossberg at mpi-sws.org Thu Sep 4 04:55:20 2014 From: rossberg at mpi-sws.org (Andreas Rossberg) Date: Thu, 4 Sep 2014 10:55:20 +0200 Subject: [TYPES] Types in distributed systems In-Reply-To: <5405AA7C.1020304@gmail.com> References: <53FFE456.1030003@gmail.com> <54023795.4000900@gmail.com> <4EC38E5B-9C3B-4ED9-A3B6-4756FB51BFEC@mpi-sws.org> <5405AA7C.1020304@gmail.com> Message-ID: <7BB390C6-3A03-4304-AD28-BF9CD4B0F2A1@mpi-sws.org> On Sep 2, 2014, at 13:31 , Ionu? G. Stan wrote: > Part of the same stream of questions I had initially was where would the Java Virtual Machine be situated in this picture. A single JVM process isn't distributed, and yet, one may still encounter similar situations to those that I imagined in original email, but due to classpath conflicts this time. I'm pretty confident now that the JVM can be classified as an open system. Yes, indeed. > Case in point. On the JVM, one may end up in a situation where two libraries I'm depending upon, depend in turn on the same third library, *but* different versions of it. My code will compile just fine, because transitive dependencies aren't verified statically. At runtime, though, only one of those two versions of the third library will win and one of the possible outcomes is the dreaded java.lang.NoSuchMethodException. > > Although I cannot articulate it precisely, I feel there's a gap between, e.g., having a runtime check for out of bounds array indeces, on one hand, and missing method definitions at runtime on the other hand. As a library author, if I'm calling a function or method from an external module and my code compiles then I have a static *guarantee* that that method will be there at runtime, isn't it? > > Am I missing something or is this not a useful difference to make in practice/theory? Right, as others have observed in the past, Java effectively is an untyped language. Whatever its type system pretends to check is not a static guarantee of anything, at least not for object types. That is because there is no sound type checking when linking against a referenced class. In general, the classes you see at runtime time bear no relation to whatever you had imported at compile time (other than their syntactic name). But this is a failure of Java and the JVM. It can be fixed, e.g. by doing a structural, link-time type check for every import edge. That is e.g. 
the approach we used in Alice ML, which is a version of SML extended with dynamic modules and a programmable "component manager", very much in the spirit of Java's class loader, but without the soundness breakage (e.g. described in http://www.mpi-sws.org/~rossberg/papers/missing-link.pdf, if you'll pardon the self-reference :) ). /Andreas From sheldon at alum.mit.edu Thu Sep 4 10:33:37 2014 From: sheldon at alum.mit.edu (Mark Sheldon) Date: Thu, 4 Sep 2014 10:33:37 -0400 Subject: [TYPES] Types in distributed systems In-Reply-To: <7BB390C6-3A03-4304-AD28-BF9CD4B0F2A1@mpi-sws.org> References: <53FFE456.1030003@gmail.com> <54023795.4000900@gmail.com> <4EC38E5B-9C3B-4ED9-A3B6-4756FB51BFEC@mpi-sws.org> <5405AA7C.1020304@gmail.com> <7BB390C6-3A03-4304-AD28-BF9CD4B0F2A1@mpi-sws.org> Message-ID: <03590D46-72C9-42B8-9DD2-18EA6B5FF3B8@alum.mit.edu> I'm enjoying this discussion, so thank you for raising the issues! The issues related to dynamic linking in the JVM really take me back... We ran into these problems of dynamic linking way back in the 1980s/90s when working on a language (FX) with first-class modules. Modules in FX defined/implemented abstract types, and having them be first-class meant they were dynamically loaded, but we wanted static type checking. A fun result is that such a language acts as its own linking language (or, I suppose you could say, there is no longer a need for a separate linking language). Our solution was this: We used a unique ID for each version of a module/library. That ID could in principle be the source code for the library, a unique hash of that, etc. We used a system with names and time stamps, which was ok in a non-distributed setting. Compiled code retained the IDs of modules it was compiled against, and loading the modules involved checking that the library you're loading matched the one used at compilation. If you kept multiple versions of a module around, you could have different versions loaded and successfully manipulate instances of the different versions. The static type checker would ensure that you could not send an instance of module M version 1 to an operation expecting an instance of module M version 2: the two modules would be considered mutually incompatible. We had no notion of specifying anything like backward compatibility or induced subtype relationships, which is really something you want for distributed protocols, for example. This is surely superseded by more recent work, some cited in this thread, but if you're interested, we had a paper in the 1990 Lisp and Functional Programming Conference. It was our little version of DLL hell :-) -Mark Mark Sheldon msheldon at cs.tufts.edu On Sep 4, 2014, at 4:55 AM, Andreas Rossberg wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > On Sep 2, 2014, at 13:31, Ionuț G. Stan wrote: >> Part of the same stream of questions I had initially was where would the Java Virtual Machine be situated in this picture. A single JVM process isn't distributed, and yet, one may still encounter similar situations to those that I imagined in original email, but due to classpath conflicts this time. I'm pretty confident now that the JVM can be classified as an open system. > > Yes, indeed. > >> Case in point. On the JVM, one may end up in a situation where two libraries I'm depending upon, depend in turn on the same third library, *but* different versions of it. My code will compile just fine, because transitive dependencies aren't verified statically.
At runtime, though, only one of those two versions of the third library will win and one of the possible outcomes is the dreaded java.lang.NoSuchMethodException. >> >> Although I cannot articulate it precisely, I feel there's a gap between, e.g., having a runtime check for out of bounds array indeces, on one hand, and missing method definitions at runtime on the other hand. As a library author, if I'm calling a function or method from an external module and my code compiles then I have a static *guarantee* that that method will be there at runtime, isn't it? >> >> Am I missing something or is this not a useful difference to make in practice/theory? > > Right, as others have observed in the past, Java effectively is an untyped language. Whatever its type system pretends to check is not a static guarantee of anything, at least not for object types. That is because there is no sound type checking when linking against a referenced class. In general, the classes you see at runtime time bear no relation to whatever you had imported at compile time (other than their syntactic name). > > But this is a failure of Java and the JVM. It can be fixed, e.g. by doing a structural, link-time type check for every import edge. That is e.g. the approach we used in Alice ML, which is a version of SML extended with dynamic modules and a programmable ?component manager", very much in the spirit of Java?s class loader, but without the soundness breakage (e.g. described in http://www.mpi-sws.org/~rossberg/papers/missing-link.pdf, if you apologise the self reference :) ). > > /Andreas > From abc.deaf.xyz at gmail.com Wed Nov 5 05:55:59 2014 From: abc.deaf.xyz at gmail.com (=?UTF-8?Q?Eduardo_Le=C3=B3n?=) Date: Wed, 5 Nov 2014 05:55:59 -0500 Subject: [TYPES] Purely functional programming with reference cells Message-ID: Hello: I am trying to reconcile "impure" language features with the benefits of purely functional programming: equational reasoning, getting theorems for free from parametric types, just-flip-a-switch parallelism, etc. Right now, the feature I am focusing on is ML's reference cells. I would like some feedback on whether my approach makes sense. As far as I can tell, the essence of purely functional programming is captured by referential transparency: For values "f", "g", "x", "y", whenever "f = g" and "x = y", "f x" and "g y" must both evaluate to equal values or both diverge, if they type check. (I am really tempted to consider non-total functions to be non-values here.) In a language like Haskell, or rather, some idealized subset of it (no seq, no bottom), we begin with a fixed notion of term equality (completely unrelated to the Eq type class) and then guarantee by construction that referential transparency holds as a metatheorem. However, this approach to defining equality is too inflexible to be useful. For example, if I implement priority queues as binomial heaps, it is perfectly possible for two heaps to be balanced differently internally, yet have the same elements and thus be equal *for all I care*. So I want to take a different approach: Rather than committing myself to a fixed notion of equality, let's start with the notion of referential transparency as *definitionally* true in the metatheory. Then, equality is the smallest equivalence relation between values that makes referential transparency hold. 
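As a tiny OCaml sketch of the priority-queue point (a deliberately naive list-backed queue rather than a binomial heap, and purely an illustration): the two values below differ in their internal representation, yet no client of the signature can tell them apart.

  module PQ : sig
    type t
    val empty : t
    val insert : int -> t -> t
    val min_elt : t -> int option
  end = struct
    type t = int list                  (* elements kept in insertion order *)
    let empty = []
    let insert x q = x :: q
    let min_elt = function
      | [] -> None
      | x :: xs -> Some (List.fold_left min x xs)
  end

  let q1 = PQ.(empty |> insert 1 |> insert 2 |> insert 3)   (* internally [3; 2; 1] *)
  let q2 = PQ.(empty |> insert 3 |> insert 2 |> insert 1)   (* internally [1; 2; 3] *)
  (* q1 and q2 have the same elements and the same observable behaviour through PQ,
     so they are equal "for all I care", even though their representations differ. *)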
I think this gives us a beautiful account of reference cells, one that does not force us to call them "non-functional features": Every time we update a reference cell, we are asserting that the old and new values are equal. (In ML, with the caveat that we do not provide any more justification for this assertion than "because the programmer says so".) Furthermore, this also subsumes laziness as a special case. Does this make sense? This idea of "mutation as an identification between old and new values" somehow reminds me a lot of the role of paths in homotopy type theory. Is this intuition correct? -- Eduardo León From Francois.Pottier at inria.fr Wed Nov 5 10:48:26 2014 From: Francois.Pottier at inria.fr (Francois Pottier) Date: Wed, 5 Nov 2014 16:48:26 +0100 Subject: [TYPES] Purely functional programming with reference cells In-Reply-To: References: Message-ID: <20141105154826.GA7881@yquem.inria.fr> Hello, On Wed, Nov 05, 2014 at 05:55:59AM -0500, Eduardo León wrote: > I think this gives us a beautiful account of reference cells, one that does > not force us to call them "non-functional features": Every time we update a > reference cell, we are asserting that the old and new values are > equal. [...] Furthermore, this also subsumes laziness as a special > case. Just a little remark. I am not sure what you mean by "this subsumes laziness". In fact, it seems that if you want to implement "laziness" (that is, thunks) using reference cells, then you need to be able to write a value into a previously empty cell. So, you need to relax your condition and allow the new value to be "more defined" (in some sense) than the old value. More generally, I think you might be interested in the paper "LVars: Lattice-based Data Structures for Deterministic Parallelism", by Lindsey Kuper and Ryan R. Newton. Best regards, -- François Pottier Francois.Pottier at inria.fr http://gallium.inria.fr/~fpottier/ From jon at jonmsterling.com Wed Nov 5 12:01:30 2014 From: jon at jonmsterling.com (Jon Sterling) Date: Wed, 05 Nov 2014 09:01:30 -0800 Subject: [TYPES] Purely functional programming with reference cells In-Reply-To: <20141105154826.GA7881@yquem.inria.fr> References: <20141105154826.GA7881@yquem.inria.fr> Message-ID: <1415206890.3382100.187440277.4363EE1E@webmail.messagingengine.com> Eduardo, I would say that the kind of thing you are thinking about works perfectly fine when you start from a typed computation system and then classify it with a semantical type system, in the style of Nuprl. Then I conjecture that a non-referentially-transparent operation is just that: you can define it in the syntax of the computation system, but because it is (by definition) not extensional, you cannot give it a useful type. For instance, consider an operation "X := λx. let y = @my-cell in my-cell := x; y"; essentially, this operation spits out whatever input you gave it *previously*. In order to give the type N -> N to this operation, you have to show that "X = X : N -> N"; to prove that, you demonstrate that "n : N |- [n/x]X = [n/x]X : N". But this is false, because [n/x]X equals one thing on one side, and another thing on the other side. Therefore you have, in the sense of Bishop, an operation which is not a function. A function is not just a lambda term, a function is a lambda term which is "functional", and this is captured in the reflexive case of the member equality judgement for the function type.
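Roughly the same operation, rendered in OCaml as a small sketch (assuming the cell is just an int ref initialised to 0):

  let my_cell : int ref = ref 0

  (* returns whatever argument it was given on the previous call *)
  let x (n : int) : int =
    let y = !my_cell in
    my_cell := n;
    y

  (* x 1 evaluates to 0, and calling x 1 again evaluates to 1: equal arguments,
     different results, so x is an operation but not a function in Bishop's sense,
     even though ML assigns it the type int -> int. *)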
Another effect which is inevitable in an untyped computation system is non-termination; and this also works fine in this semantical typing discipline, since you show by induction that at each n, X(n) terminates, and this is part of the typing derivation that you construct, not part of the term.

I would say that the main point is that because each type in a computational type theory is literally a PER over the terms of the untyped computation system, then you can put whatever primitive effects you want in the computation system, so long as you have given them an operational semantics, and then define your type system over those such that you have referential transparency in the typed terms.

A paper which may prove useful is Neel Krishnaswami's recent "Integrating Linear and Dependent Types" (http://semantic-domain.blogspot.com/2014/11/integrating-linear-and-dependent-types.html). Also see Bob Harper's "Constructing Type Systems over an Operational Semantics" (https://www.cs.uoregon.edu/research/summerschool/summer10/lectures/Harper-JSC92.pdf), which seems to have inspired some of the choices that Neel made.

Kind regards,
Jon

On Wed, Nov 5, 2014, at 07:48 AM, Francois Pottier wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hello,
>
> On Wed, Nov 05, 2014 at 05:55:59AM -0500, Eduardo León wrote:
> > I think this gives us a beautiful account of reference cells, one that does not force us to call them "non-functional features": Every time we update a reference cell, we are asserting that the old and new values are equal. [...] Furthermore, this also subsumes laziness as a special case.
>
> Just a little remark. I am not sure what you mean by "this subsumes laziness". In fact, it seems that if you want to implement "laziness" (that is, thunks) using reference cells, then you need to be able to write a value into a previously empty cell. So, you need to relax your condition and allow the new value to be "more defined" (in some sense) than the old value.
>
> More generally, I think you might be interested by the paper "LVars: Lattice-based Data Structures for Deterministic Parallelism", by Lindsey Kuper and Ryan R. Newton.
>
> Best regards,
>
> --
> François Pottier
> Francois.Pottier at inria.fr
> http://gallium.inria.fr/~fpottier/

From jon at jonmsterling.com Wed Nov 5 12:02:09 2014
From: jon at jonmsterling.com (Jon Sterling)
Date: Wed, 05 Nov 2014 09:02:09 -0800
Subject: [TYPES] Purely functional programming with reference cells
In-Reply-To: <1415206890.3382100.187440277.4363EE1E@webmail.messagingengine.com>
References: <20141105154826.GA7881@yquem.inria.fr> <1415206890.3382100.187440277.4363EE1E@webmail.messagingengine.com>
Message-ID: <1415206929.3382198.187446817.7F253F86@webmail.messagingengine.com>

(Sorry, please read the first sentence as "from an *untyped* computation system", not "typed". -JMS)

On Wed, Nov 5, 2014, at 09:01 AM, Jon Sterling wrote:
> Eduardo,
>
> I would say that the kind of thing you are thinking about works perfectly fine when you start from a typed computation system and then classify it with a semantical type system, in the style of Nuprl. Then I conjecture that a non-referentially-transparent operation is just that: you can define it in the syntax of the computation system, but because it is (by definition) not extensional, you cannot give it a useful type.
>
> For instance, consider an operation "X := λx. let y = @my-cell in my-cell := x; y"; essentially, this operation spits out whatever input you gave it *previously*. In order to give the type N -> N to this operation, you have to show that "X = X : N -> N"; to prove that, you demonstrate that "n:N !- [n/x]X = [n/x]X : N". But this is false, because [n/x]X equals one thing on one side, and another thing on another side. Therefore you have, in the sense of Bishop, an operation which is not a function. A function is not just a lambda term, a function is a lambda term which is "functional", and this is captured in the reflexive case of the member equality judgement for the function type.
>
> Another effect which is inevitable in an untyped computation system is non-termination; and this also works fine in this semantical typing discipline, since you show by induction that at each n, X(n) terminates, and this is part of the typing derivation that you construct, not part of the term.
>
> I would say that the main point is that because each type in a computational type theory is literally a PER over the terms of the untyped computation system, then you can put whatever primitive effects you want in the computation system, so long as you have given them an operational semantics, and then define your type system over those such that you have referential transparency in the typed terms.
>
> A paper which may prove useful is Neel Krishnaswami's recent "Integrating Linear and Dependent Types" (http://semantic-domain.blogspot.com/2014/11/integrating-linear-and-dependent-types.html). Also see Bob Harper's "Constructing Type Systems over an Operational Semantics" (https://www.cs.uoregon.edu/research/summerschool/summer10/lectures/Harper-JSC92.pdf), which seems to have inspired some of the choices that Neel made.
>
> Kind regards,
> Jon
>
> On Wed, Nov 5, 2014, at 07:48 AM, Francois Pottier wrote:
> > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> >
> > Hello,
> >
> > On Wed, Nov 05, 2014 at 05:55:59AM -0500, Eduardo León wrote:
> > > I think this gives us a beautiful account of reference cells, one that does not force us to call them "non-functional features": Every time we update a reference cell, we are asserting that the old and new values are equal. [...] Furthermore, this also subsumes laziness as a special case.
> >
> > Just a little remark. I am not sure what you mean by "this subsumes laziness". In fact, it seems that if you want to implement "laziness" (that is, thunks) using reference cells, then you need to be able to write a value into a previously empty cell. So, you need to relax your condition and allow the new value to be "more defined" (in some sense) than the old value.
> >
> > More generally, I think you might be interested by the paper "LVars: Lattice-based Data Structures for Deterministic Parallelism", by Lindsey Kuper and Ryan R. Newton.
> >
> > Best regards,
> >
> > --
> > François Pottier
> > Francois.Pottier at inria.fr
> > http://gallium.inria.fr/~fpottier/

From dreyer at mpi-sws.org Wed Nov 5 12:51:30 2014
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Wed, 5 Nov 2014 18:51:30 +0100
Subject: [TYPES] origin of the term "hom-set"?
Message-ID:

Hi. Does anyone know the origin of the term "hom-set"? It came up yesterday in class, and after a little googling, I have not turned up the answer. I always assumed it stood for "homomorphism set" (?)
but even that much I have not been able to verify, and it doesn't explain why we are talking about "homomorphisms" as opposed to "morphisms" (or "sets" rather than "collections", since the hom-set is not always a set). I presume there is some historical reason for this?

Thanks,
Derek

From abc.deaf.xyz at gmail.com Wed Nov 5 13:23:54 2014
From: abc.deaf.xyz at gmail.com (Eduardo León)
Date: Wed, 5 Nov 2014 13:23:54 -0500
Subject: [TYPES] Purely functional programming with reference cells
In-Reply-To: <20141105154826.GA7881@yquem.inria.fr>
References: <20141105154826.GA7881@yquem.inria.fr>
Message-ID:

On Wed, Nov 5, 2014 at 10:48 AM, Francois Pottier wrote:
> Just a little remark. I am not sure what you mean by "this subsumes laziness". In fact, it seems that if you want to implement "laziness" (that is, thunks) using reference cells, then you need to be able to write a value into a previously empty cell. So, you need to relax your condition and allow the new value to be "more defined" (in some sense) than the old value.

I do not see why the cell would ever have to be empty. Why would the following implementation not work?

type 'a cell = Susp of (unit -> 'a) | Unsusp of 'a
type 'a t = 'a cell ref

let susp f x = ref (Susp (fun () -> f x))
let unsusp x = ref (Unsusp x)

let force l =
  match !l with
  | Susp f -> let x = f () in l := Unsusp x; x
  | Unsusp x -> x

At no point in time is the cell empty, assuming the operation that updates it is atomic. It always contains exactly the information required (either a function or the result of evaluating it) to compute its only observable result (the result of forcing the thunk).

> More generally, I think you might be interested by the paper "LVars: Lattice-based Data Structures for Deterministic Parallelism", by Lindsey Kuper and Ryan R. Newton.

I have come across that paper before, but I have not read it particularly carefully. If I recall correctly, the main idea is that a result whose type is the carrier of a join semilattice can be computed beginning from a state of no information (the semilattice's bottom) and spawning parallel tasks that add pieces of information (the semilattice's join, although so far only a commutative monoid is required). Furthermore, at any point, any of the parallel tasks can query whether a particular piece of information has already been added (this is the part that requires a semilattice). Is there anything I have missed? In any case, I will give it another read. Thanks for the pointer!

--
Eduardo León

From ianj at ccs.neu.edu Wed Nov 5 13:27:00 2014
From: ianj at ccs.neu.edu (J. Ian Johnson)
Date: Wed, 5 Nov 2014 13:27:00 -0500 (EST)
Subject: [TYPES] origin of the term "hom-set"?
In-Reply-To: <32912910.874651415211820481.JavaMail.root@zimbra>
Message-ID: <24268540.874731415212020176.JavaMail.root@zimbra>

It dates back to at least the original paper on category theory by Mac Lane and Eilenberg (General theory of natural equivalences), on page 14: "For abelian groups there is a similar functor "Hom." Specifically, let G be a locally compact regular topological group, H a topological abelian group. We construct the set Hom (G, H) of all (continuous) homomorphisms phi of G into H. The sum of two such homomorphisms phi1 and phi2 is defined by setting (phi1 + phi2)g = phi1 g + phi2 g, for each g in G; this sum is itself a homomorphism because H is abelian."
-Ian

----- Original Message -----
From: "Derek Dreyer"
To: "Types list"
Sent: Wednesday, November 5, 2014 12:51:30 PM GMT -05:00 US/Canada Eastern
Subject: [TYPES] origin of the term "hom-set"?

[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

Hi. Does anyone know the origin of the term "hom-set"? It came up yesterday in class, and after a little googling, I have not turned up the answer. I always assumed it stood for "homomorphism set" (?) but even that much I have not been able to verify, and it doesn't explain why we are talking about "homomorphisms" as opposed to "morphisms" (or "sets" rather than "collections", since the hom-set is not always a set). I presume there is some historical reason for this?

Thanks,
Derek

From harley.eades at gmail.com Wed Nov 5 13:27:37 2014
From: harley.eades at gmail.com (Harley Eades III)
Date: Wed, 5 Nov 2014 13:27:37 -0500
Subject: [TYPES] origin of the term "hom-set"?
In-Reply-To:
References:
Message-ID: <94EC2722-7A99-4F74-8AFE-EEBEA272023C@gmail.com>

Hi, Derek.

I am not sure of this myself, but I am forwarding this to the categories list as well. Category theorists: Do you happen to know the answer to Derek's question? See below. I am willing to bet someone there knows.

Very best,
.\ Harley Eades

On Nov 5, 2014, at 12:51 PM, Derek Dreyer wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hi. Does anyone know the origin of the term "hom-set"? It came up yesterday in class, and after a little googling, I have not turned up the answer. I always assumed it stood for "homomorphism set" (?) but even that much I have not been able to verify, and it doesn't explain why we are talking about "homomorphisms" as opposed to "morphisms" (or "sets" rather than "collections", since the hom-set is not always a set). I presume there is some historical reason for this?
>
> Thanks,
> Derek

From u.s.reddy at cs.bham.ac.uk Wed Nov 5 13:52:15 2014
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Wed, 5 Nov 2014 18:52:15 +0000
Subject: [TYPES] origin of the term "hom-set"?
In-Reply-To:
References:
Message-ID: <21594.29151.93000.529884@gargle.gargle.HOWL>

Derek Dreyer writes:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hi. Does anyone know the origin of the term "hom-set"? It came up yesterday in class, and after a little googling, I have not turned up the answer. I always assumed it stood for "homomorphism set" (?) but even that much I have not been able to verify, and it doesn't explain why we are talking about "homomorphisms" as opposed to "morphisms" (or "sets" rather than "collections", since the hom-set is not always a set). I presume there is some historical reason for this?

Originally, "morphism" was a shortened form of "homomorphism". However, some mathematicians these days restrict "homomorphism" to mean a structure-preserving map (as in algebra), whereas morphisms don't have any such requirement. With that terminology, the two are distinct concepts.

As for why it should be called a "hom-set", recall that Mac Lane works in a set theory with universes. So, what we call a collection or class is a "set" for him and what we call a set is a "small set".
Cheers,
Uday

From meckler at uchicago.edu Sun Nov 30 17:05:58 2014
From: meckler at uchicago.edu (Izaak Meckler)
Date: Sun, 30 Nov 2014 16:05:58 -0600
Subject: [TYPES] Topology of the type T = T -> T
Message-ID:

Hi all,

Does anyone know any resources describing the topology of the (say, Haskell 98) type given by

data T = T (T -> T)

This is something like the type of untyped lambda calculus programs. The topology I have in mind is defined as follows. A subset X of T is an open set if there is a Haskell term

f :: T -> ()

with f x = () for x in X, and f x = undefined otherwise. The topology on T is the one generated by these open sets (generated in the classical sense, although I'd be interested also in the case where the unions are required to be definable functions out of Nat as described by Martin Escardo here, p. 33).

It is not hard to show that T has non-trivial topology (for example, there are open sets separating the two Church booleans) and it seems it's not compact. I'd appreciate any other info people have on this topology.

Thanks,
Izzy

From G.A.McCusker at bath.ac.uk Mon Dec 1 04:57:19 2014
From: G.A.McCusker at bath.ac.uk (Guy McCusker)
Date: Mon, 1 Dec 2014 09:57:19 +0000
Subject: [TYPES] Topology of the type T = T -> T
In-Reply-To:
References:
Message-ID:

On 30 Nov 2014, at 22:05, Izaak Meckler wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hi all,
>
> Does anyone know any resources describing the topology of the (say, Haskell 98) type given by
>
> data T = T (T -> T)
>
> This is something like the type of untyped lambda calculus programs.

The semantics of this type were studied quite closely by Abramsky and Ong in their work on the lazy lambda-calculus. I don't know if this work will answer your question but it is surely worth a look. Some references:

S. Abramsky, C.H.L. Ong, Full Abstraction in the Lazy Lambda Calculus, Information and Computation, Volume 105, Issue 2, August 1993, Pages 159-267, ISSN 0890-5401, http://dx.doi.org/10.1006/inco.1993.1044. (http://www.sciencedirect.com/science/article/pii/S0890540183710448)

Samson Abramsky. 1990. The lazy lambda calculus. In Research topics in functional programming, David A. Turner (Ed.). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 65-116. (http://www.cs.ox.ac.uk/samson.abramsky/lazy.pdf)

C.-H. L. Ong, Lazy Lambda Calculus: Theories, Models and Local Structure Characterization. In Proceedings of 19th International Colloquium on Automata, Programming and Languages (ICALP 1992), Springer-Verlag, Lecture Notes in Computer Science Volume 623, 1992. (ftp://ftp.comlab.ox.ac.uk/pub/Documents/techpapers/Luke.Ong/pse.ps.gz)

Guy.

From m.escardo at cs.bham.ac.uk Mon Dec 1 05:54:51 2014
From: m.escardo at cs.bham.ac.uk (Martin Escardo)
Date: Mon, 01 Dec 2014 10:54:51 +0000
Subject: [TYPES] Topology of the type T = T -> T
In-Reply-To:
References:
Message-ID: <547C48FB.9030007@cs.bham.ac.uk>

On 30/11/14 22:05, Izaak Meckler wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Hi all,
>
> Does anyone know any resources describing the topology of the (say, Haskell 98) type given by
>
> data T = T (T -> T)
>
> This is something like the type of untyped lambda calculus programs. The topology I have in mind is defined as follows. A subset X of T is an open set if there is a Haskell term
>
> f :: T -> ()
>
> with f x = () for x in X, and f x = undefined otherwise.
> The topology on T is the one generated by these open sets (generated in the classical sense, although I'd be interested also in the case where the unions are required to be definable functions out of Nat as described by Martin Escardo here, p. 33).
>
> It is not hard to show that T has non-trivial topology (for example, there are open sets separating the two Church booleans) and it seems it's not compact. I'd appreciate any other info people have on this topology.

The references Guy gave you should help you to describe the topology more explicitly.

Just one comment: the topology is compact, simply because there is a bottom element in the domain. The same is true for the type of integers: it is compact because it has a bottom. What is not true for the type of integers is that the total elements form a compact set (related to the Halting Problem).

You could ask whether the total elements of T form a compact set. But we first have to agree what the total elements of T are/should be. Back in 1997, in a domains workshop in Munich, Gordon Plotkin showed that there is no reasonable notion of totality for such a domain, if I remember correctly. But he didn't write this up, as far as I know.

Best wishes,
Martin

From bcpierce at cis.upenn.edu Thu Dec 18 11:22:54 2014
From: bcpierce at cis.upenn.edu (Benjamin C. Pierce)
Date: Thu, 18 Dec 2014 11:22:54 -0500
Subject: [TYPES] FP and the Chomsky hierarchy
Message-ID:

A colleague asked me an interesting question recently:

Can the Chomsky hierarchy (regular grammars, context-free, context-sensitive, Turing-complete) be phrased in terms that functional programmers would find natural, e.g. in terms of typed or syntactically restricted lambda-calculi?

Pointers appreciated...

- Benjamin

From gabriel.scherer at gmail.com Thu Dec 18 16:46:54 2014
From: gabriel.scherer at gmail.com (Gabriel Scherer)
Date: Thu, 18 Dec 2014 22:46:54 +0100
Subject: [TYPES] FP and the Chomsky hierarchy
In-Reply-To:
References:
Message-ID:

In "The safe lambda-calculus", Blum and Ong, 2009 (http://arxiv.org/abs/0901.2399), a restriction of the simply-typed lambda calculus is presented as an adaptation of an existing restriction on *higher-order* grammars. The corresponding subset of lambda-calculus has the interesting property that one can define reduction without having to take care of variable capture at all -- it can be ruled out statically thanks to the order restriction.

On Thu, Dec 18, 2014 at 5:22 PM, Benjamin C. Pierce wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> A colleague asked me an interesting question recently:
>
> Can the Chomsky hierarchy (regular grammars, context-free, context-sensitive, Turing-complete) be phrased in terms that functional programmers would find natural, e.g. in terms of typed or syntactically restricted lambda-calculi?
>
> Pointers appreciated...
>
> - Benjamin

From fp at cs.cmu.edu Thu Dec 18 21:29:59 2014
From: fp at cs.cmu.edu (Frank Pfenning)
Date: Thu, 18 Dec 2014 21:29:59 -0500
Subject: [TYPES] FP and the Chomsky hierarchy
In-Reply-To:
References:
Message-ID:

Hi Benjamin,

Categorial grammars are defined via the Lambek Calculus, which is an intuitionistic substructural logic. They generate the same languages as context-free grammars. A proof term assignment to the Lambek Calculus yields a type system for a lambda-calculus with interesting properties which is coherent with linear and intuitionistic logic. See [Polakow01], which also includes some interesting applications.
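To make the categorial-grammar connection a bit more concrete, here is a minimal OCaml sketch (not from this message or from [Polakow01]; it implements only the two application rules of a classical AB categorial grammar, not the full Lambek calculus, and the tiny lexicon is invented for illustration):

(* Categories: a/b looks for a b on its right, b\a looks for a b on its left. *)
type cat =
  | S
  | NP
  | Slash of cat * cat        (* Slash (a, b)     stands for  a/b  *)
  | Backslash of cat * cat    (* Backslash (b, a) stands for  b\a  *)

(* The two application rules:  a/b, b => a   and   b, b\a => a. *)
let combine left right =
  match left, right with
  | Slash (a, b), b' when b = b' -> Some a
  | b', Backslash (b, a) when b = b' -> Some a
  | _ -> None

(* A transitive verb such as "sees" gets the category (NP\S)/NP. *)
let tv = Slash (Backslash (NP, S), NP)

(* "Alice sees Bob": combine the verb with its object NP, then the subject
   NP with the resulting verb phrase, yielding a sentence S. *)
let () =
  match combine tv NP with
  | Some vp -> assert (combine NP vp = Some S)
  | None -> assert false

Running the two combinations above corresponds to deriving S from the category sequence NP, (NP\S)/NP, NP.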
Dependent ordered types allow one to model a strong form of context-sensitivity. Regular grammars are a natural sublanguage, although I don't know of a reference where this has been characterized precisely.

Another connection is by looking at type systems. For example, regular tree grammars are the foundations of datasort refinements of functional languages [Davies05], which originated much earlier in the logic programming community. You probably know better than me the extensions to type languages for XML schemas, which come in some ways closer to classes of string languages than to tree languages. Unfortunately, when you try to generalize to context-free grammars, too many basic questions about typing become undecidable.

[Polakow01] Jeff Polakow. Ordered Linear Logic and Applications. PhD Thesis, Carnegie Mellon University, August 2001. Available as Technical Report CMU-CS-01-152.

[Davies05] Rowan Davies. Practical Refinement-Type Checking. PhD Thesis, Carnegie Mellon University, May 2005. Available as Technical Report CMU-CS-05-110.

On Thu, Dec 18, 2014 at 11:22 AM, Benjamin C. Pierce wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> A colleague asked me an interesting question recently:
>
> Can the Chomsky hierarchy (regular grammars, context-free, context-sensitive, Turing-complete) be phrased in terms that functional programmers would find natural, e.g. in terms of typed or syntactically restricted lambda-calculi?
>
> Pointers appreciated...
>
> - Benjamin

--
Frank Pfenning, Professor and Head
Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3891
http://www.cs.cmu.edu/~fp
+1 412 268-6343
GHC 7019

From oleg at okmij.org Fri Dec 19 03:35:43 2014
From: oleg at okmij.org (oleg at okmij.org)
Date: Fri, 19 Dec 2014 03:35:43 -0500 (EST)
Subject: [TYPES] FP and the Chomsky hierarchy
In-Reply-To:
Message-ID: <20141219083543.2D746C3833@www1.g3.pair.com>

Representing the Chomsky hierarchy in the way `natural' for functional programmers (as a type-restricted linear lambda-calculus) is one of the attractions of the so-called Abstract Categorial Grammar (ACG) formalism, which is being developed by Philippe de Groote and collaborators. One can find many pointers on the ACG home page
http://www.loria.fr/equipes/calligramme/acg/

The specific question about the Chomsky hierarchy and the orders of types in ACG is answered on slide 44 (PDF page 44) of the following course
http://www.loria.fr/equipes/calligramme/acg/publications/esslli-09/2009-esslli-acg-week-1-day-3.pdf
(Of course, one has to read the previous slides to understand the L(n,m) notation.) Slide 56 talks about the hierarchy of tree languages (rather than string languages) and the open problems.

Incidentally, ACG turns out to be closely related to the Tagless-Final approach (as Philippe de Groote himself told me).
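For readers who have not met the tagless-final style mentioned in the last message, here is a minimal OCaml sketch (an illustration of the general style only, with invented module names; it does not reproduce any ACG machinery): the object language is given as a signature of combinators, and each interpretation of that language is a structure implementing the signature.

(* The object language: string literals and concatenation. *)
module type SYM = sig
  type repr
  val lit : string -> repr
  val cat : repr -> repr -> repr
end

(* One interpretation: evaluate a term to the string it denotes. *)
module Eval : SYM with type repr = string = struct
  type repr = string
  let lit s = s
  let cat = ( ^ )
end

(* Another interpretation of the same terms: compute the length. *)
module Size : SYM with type repr = int = struct
  type repr = int
  let lit = String.length
  let cat = ( + )
end

(* A term of the object language, parametric in its interpretation. *)
module Example (S : SYM) = struct
  let phrase = S.(cat (lit "abstract ") (cat (lit "categorial ") (lit "grammar")))
end

let () =
  let module E = Example (Eval) in
  let module N = Example (Size) in
  print_endline E.phrase;         (* prints: abstract categorial grammar *)
  Printf.printf "%d\n" N.phrase   (* prints: 27 *)

The intended analogy, loosely stated: an ACG abstract vocabulary plays a role similar to the signature, and a lexicon plays a role similar to an interpreting structure; the sketch is only meant to convey the flavour of "one term, many interpretations".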