From carsten at itu.dk Wed Jan 6 08:13:37 2010
From: carsten at itu.dk (Carsten Schuermann)
Date: Wed, 6 Jan 2010 14:13:37 +0100
Subject: [TYPES] Implementations using explicit substitutions
Message-ID: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>

Dear colleagues,

We are currently writing a paper on explicit substitution calculi, and we
would like to include references to implementations of proof assistants,
theorem provers, and programming languages that use them. If you are
aware of any such system please send us an email.

Best regards,
-- Anders Schack-Nielsen and Carsten Schuermann

From dan.doel at gmail.com Wed Jan 6 08:45:28 2010
From: dan.doel at gmail.com (Dan Doel)
Date: Wed, 6 Jan 2010 08:45:28 -0500
Subject: [TYPES] Implementations using explicit substitutions
In-Reply-To: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
References: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
Message-ID: <201001060845.28748.dan.doel@gmail.com>

This is not a programming language or theorem prover per se, but James
Chapman's thesis, Type checking and normalisation*, uses a term
representation with explicit substitutions for doing the titular
activities in a (total) dependently typed functional language. The
thesis is written mainly in Epigram, but it could be translated into
plenty of other languages.

Cheers,
-- Dan

* http://www.cs.ioc.ee/~james/PUBLICATION.html

From Michael.Norrish at nicta.com.au Wed Jan 6 18:26:53 2010
From: Michael.Norrish at nicta.com.au (Michael Norrish)
Date: Thu, 07 Jan 2010 10:26:53 +1100
Subject: [TYPES] Implementations using explicit substitutions
In-Reply-To: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
References: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
Message-ID: <4B451C3D.4020302@nicta.com.au>

Carsten Schuermann wrote:
> We are currently writing a paper on explicit substitution calculi, and
> we would like to include references to implementations of proof
> assistants, theorem provers, and programming languages that use them.
> If you are aware of any such system please send us an email.

Bruno Barras implemented an explicit substitution mechanism for the HOL4
kernel. This, and its advantages, are described in

@InProceedings{Barras:2000:HOL-EVAL,
  author    = {Bruno Barras},
  title     = {Programming and Computing in {HOL}},
  crossref  = {tphols2000},
  pages     = {17--37},
  year      = 2000,
  doi       = {10.1007/3-540-44659-1_2},
}

Bruno's code is still part of the system.

Michael.

From avik at cs.umd.edu Wed Jan 6 20:48:50 2010
From: avik at cs.umd.edu (Avik Chaudhuri)
Date: Wed, 6 Jan 2010 20:48:50 -0500
Subject: [TYPES] Implementations using explicit substitutions
In-Reply-To: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
References: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
Message-ID:

This is perhaps a non-standard application of explicit substitutions: in
the following paper we show how to formalize and enforce a data-flow
security property using explicit substitutions to track taints in the
underlying language. (The language is intended to provide an abstract
model of a particular operating system's security environment.)

http://portal.acm.org/citation.cfm?doid=1513443.1513447

-Avik.

From ajeffrey at bell-labs.com Thu Jan 7 10:28:27 2010
From: ajeffrey at bell-labs.com (Alan Jeffrey)
Date: Thu, 7 Jan 2010 09:28:27 -0600
Subject: [TYPES] Implementations using explicit substitutions
In-Reply-To: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
References: <7E0646BD-7605-4949-89FC-A84C458C33F0@itu.dk>
Message-ID: <4B45FD9B.3050902@bell-labs.com>

Back in the mists of time (when the design of OO languages with generics
was the new new thing) I ran a course on the design of OO languages. The
interpreter used explicit substitutions as the model of variable binding,
which provides a nice bridge between the formal model of substitution and
an interpreter based on dictionaries. The course description, including
the sources (you'd be after hobbes.transform.Substitution) is at:

http://fpl.cs.depaul.edu/ajeffrey/se580/

or you can just run the applet (yes, an applet, did I mention the class
was a while back?) at:

http://fpl.cs.depaul.edu/ajeffrey/se580/applet/

which shows explicit substitutions in action.

Alan Jeffrey.
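To fix ideas, here is a minimal sketch in Haskell of what such a term
representation can look like: substitutions are part of the term syntax
(closures) and are pushed through terms lazily, roughly in the spirit of
the lambda-sigma calculus. It is not drawn from any of the systems
mentioned in this thread; the datatypes, the names, and the call-by-name
strategy are all illustrative choices only.

  -- Terms with explicit substitutions: substitution is syntax, not a
  -- meta-operation performed all at once.
  data Term
    = Var Int             -- de Bruijn index
    | Lam Term
    | App Term Term
    | Sub Term Subst      -- explicit substitution t[s], i.e. a closure
    deriving Show

  data Subst
    = Id                  -- identity substitution
    | Shift               -- add one to every free index
    | Cons Term Subst     -- index 0 maps to the term; n+1 is n in s
    | Comp Subst Subst    -- composition: apply the first, then the second
    deriving Show

  -- Looking a variable up in a substitution is dictionary lookup.
  lookupSub :: Int -> Subst -> Term
  lookupSub n Id           = Var n
  lookupSub n Shift        = Var (n + 1)
  lookupSub 0 (Cons t _)   = t
  lookupSub n (Cons _ s)   = lookupSub (n - 1) s
  lookupSub n (Comp s1 s2) = Sub (lookupSub n s1) s2

  -- Weak head reduction; substitutions move one constructor at a time.
  whnf :: Term -> Term
  whnf (App f a) = case whnf f of
    Lam b         -> whnf (Sub b (Cons a Id))   -- beta, no meta-level subst
    Sub (Lam b) s -> whnf (Sub b (Cons a s))    -- applying a closure
    f'            -> App f' a
  whnf (Sub t s) = case t of
    Var n     -> whnf (lookupSub n s)
    App f a   -> whnf (App (Sub f s) (Sub a s))
    Sub t' s' -> whnf (Sub t' (Comp s' s))
    Lam _     -> Sub t s                        -- a closure is a value
  whnf t = t

For example, whnf (App (Lam (Var 0)) (Lam (Var 0))) reduces the identity
function applied to itself without ever performing a meta-level
substitution; the pending Subst plays exactly the role of the dictionary
in the interpreters Alan Jeffrey describes.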
From purush.iyer at gmail.com Sat Jan 16 17:04:36 2010
From: purush.iyer at gmail.com (Purush Iyer)
Date: Sat, 16 Jan 2010 17:04:36 -0500
Subject: [TYPES] Grand Challenge Problems?
Message-ID: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com>

Colleagues

Now that the lek (er, POPL community) has decided on vexing questions
about papers at POPL, I thought it best to consult this group on a
harder problem. In my current job at a funding agency I am confronted by
the question, from Physicists and Control Theoreticians, "What are the
main *scientific* grand challenge problems in Programming Languages and
Systems?" I have not been too successful to date; hence this e-mail.

If y'all can come up with a list of five to ten challenges I would
appreciate it very much. I don't take this question lightly; I had a
long chat with Jens Palsberg over the summer and I don't think we were
able to make much headway.

Sincerely
Purush Iyer

From matthias at ccs.neu.edu Sat Jan 16 17:25:38 2010
From: matthias at ccs.neu.edu (Matthias Felleisen)
Date: Sat, 16 Jan 2010 17:25:38 -0500
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com>
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com>
Message-ID: <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu>

Dear Purush Iyer,

what is a "grand challenge"? I naturally know the funding agency answer:

  a tool for extracting a huge amount of money from politicians
  who need something graphic to convince voters that their spending
  is acceptable and produces 'good' things

but I'd like to know whether there's a "scientific" definition
that would make it worthwhile discussing the idea on a science
mailing list.
Thanks -- Matthias

From andru at cs.cornell.edu Sat Jan 16 17:37:49 2010
From: andru at cs.cornell.edu (Andrew Myers)
Date: Sat, 16 Jan 2010 17:37:49 -0500
Subject: [TYPES] Two-tier reviewing process
In-Reply-To: <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com>
References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com>
Message-ID: <4B523FBD.8020907@cs.cornell.edu>

By popular request, the following is a more detailed description of the
reviewing process that Dave Evans and I used for IEEE Security and
Privacy 2009.

The reviewing process used by Oakland 2009 was adapted from a two-tier
process used successfully by a few conferences in previous years. It was
pioneered by Tom Anderson for SIGCOMM 2006, and used subsequently by
SOSP 2007 and OSDI 2008.

Unlike most conference review processes, we had a two-tier PC, and three
rounds of reviewing. I believe that this structure helped us make more
informed decisions, led to better discussions at the PC meeting, gave
authors more feedback, and resulted in a better product overall. We had
77 days to review 253 submissions. This may sound like a lot of time,
but reviewing for Oakland stretches across Christmas and other winter
holidays.

The PC of 50 people was divided 25/25 into 'heavy' and 'light' groups.
Despite the names, these PC members did similar amounts of work. The
heavy members did a few more reviews and attended the PC meeting; the
light members participated in electronic discussion before the meeting.
Dividing the PC in half meant that we had a smaller group at the PC
meeting and more effective discussions than in previous years. A
two-tier PC also helped us recruit some PC members who preferred not to
travel. We did not distinguish between heavy and light members in any
external documents such as the proceedings. I think this helped us
recruit light members.

Reviewing proceeded in three rounds, seen pictorially at
http://www.cs.cornell.edu/andru/oakland09/reviewing-slides.pdf. We
started round 1 with 249 credible papers. Each paper received one heavy
and one light reviewer. Reviewers had 35 days to complete up to 12
reviews. Based on these initial reviews, the chairs rejected 36 papers
and marked 33 papers as probable rejects.

In round 2, we had 180 papers considered fully live, each of which
received an additional heavy and light review. Papers considered
probable rejects were assigned just one additional reviewer.
Round 2 started just after Christmas, and reviewers had 20 days to
complete up to 12 reviews. After round 2, we had 3-4 reviews per live
paper. Papers all of whose reviews were negative were rejected at this
point, with some electronic discussion to make sure everyone involved
agreed.

By round 3, we were down to 68 papers, most of which were pretty good
papers. Each live paper now received one additional heavy review,
ensuring that there were three reviewers present at the PC meeting for
each discussed paper. Reviewers received up to five papers to review, in
ten days. Based on these reviews and more electronic discussion, we
rejected four more papers. All papers with some support at this point
made it to the PC meeting. The chairs actively worked to resolve papers
through electronic discussion, which was important in achieving closure.

The PC meeting was a day and a half long, and resulted in 26 of the 68
papers being chosen for the program. Each paper was assigned a lead
reviewer ahead of time. The lead reviewer presented not only their own
view, but also those of the light reviewers who were not present. Where
possible, we chose lead reviewers who were positive and confident about
their reviews. At some points, we had breakout sessions for small groups
of reviewers to discuss papers in parallel. However, no paper was
accepted without the whole PC hearing the reasons for acceptance. This
seems important for a broad conference like Oakland (or POPL).

One benefit of multiple rounds of reviewing was that we could do a
better job of assigning reviewers in later rounds, for three reasons:
first, the reviews helped us understand what the key issues were;
second, we asked reviewers explicitly for suggestions; third, we could
identify the problematic papers where all the reviews were
low-confidence and do hole-filling. We also asked external experts to
help review papers where we didn't have enough expertise in-house.

In the end, all papers received between 2 and 8 reviews, and accepted
papers received between 5 and 8 reviews. The multiround structure meant
that reviewing effort was concentrated on the stronger papers, and
authors of accepted papers got more feedback, and often more expert
feedback, than they had in previous years. The reviewing load was
increased slightly over previous years for heavy reviewers (~23), but
decreased slightly (~20) for light reviewers. Keeping load mostly
constant was possible because we had a larger PC than in the past. The
two-tier structure meant that despite a larger PC, we could have a
smaller PC meeting.

Filtering out weak papers early helped keep the reviewing load
manageable. Papers were rejected after round 1 only when they had two
confident, strongly negative reviews. The chairs did this in
consultation with each other. Papers with very negative reviews but
without high confidence, or confident reviews that were not as negative,
were considered probable rejects and assigned a third review in round 2.
If that review was positive, the paper received three reviews in round 3
instead of the usual one, ensuring that it made it to the PC meeting
(this only happened in a couple of cases). PC members did not report any
concerns to us that good papers might have been filtered out early.

Assigning the right reviewers in round 1 makes both filtering and
assignment of additional reviewers more effective.
To be able to assign round-1 reviewers efficiently, it is important for
the chairs to get as much information from the PC as possible about what
papers they would like to review and about what topics they are expert
in.

A final issue we put thought into was the rating scale. While the rating
scale might not seem that important, in past years the Oakland committee
had found that a badly designed rating scale could cause problems. The
four-point Identify the Champion scale (A-D) used by many PL conferences
works fine for single-round reviewing. But for multiple rounds with
early filtering, it's helpful to distinguish the papers that are truly
weak from the ones that merely don't make the grade. Therefore, ratings
came from the following scale:

1: Strong reject. Will argue strongly to reject.
2: Reject. Will argue to reject (Identify the Champion's D)
3: Weak reject. Will not argue to reject (C)
4: Weak accept. Will not argue to accept (B)
5: Accept. Will argue to accept. (A)
6: Strong accept. Will argue strongly to accept.

As in Identify the Champion, giving the ratings meaningful semantics
helped ensure consistency across reviewers. Papers that received 1's and
2's were easy to filter out after round 1; we rejected papers with
confident 1/1 or 1/2 ratings, and some 2/2's. Having the extreme ratings
of 1 and 6 also seemed to give reviewers a little more excuse to use 2
and 5 as ratings, staking out stronger positions than they might have
otherwise. The absence of a middle 'neutral' point usefully forced
reviewers to lean one way or the other.

Overall, this reviewing process probably involved somewhat more total
work for the chairs than a conventional reviewing process, but it was
also spread out more over the reviewing period. Problems could be
identified and addressed much earlier. Total work for PC members was
comparable to a conventional process. Some PC members appreciated that
the multiple intermediate deadlines prevented a last-minute rush to get
reviews done, and that the average quality of reviewed papers was
higher.

Hope this helps,

-- Andrew
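The correspondence with Identify the Champion is mechanical enough to
write down as code. A toy Haskell rendering (mapping the endpoints 1 and
6 to D and A is my own extrapolation, since Identify the Champion has no
counterpart for them):

  -- Oakland 2009's six-point scale, in ascending order, so that
  -- fromEnum gives 0..5 for ratings 1..6.
  data Rating = StrongReject | Reject | WeakReject
              | WeakAccept   | Accept | StrongAccept
    deriving (Show, Eq, Ord, Enum, Bounded)

  -- Points 2..5 correspond to D, C, B, A; the endpoint letters are
  -- extrapolated, not part of either scheme.
  champion :: Rating -> Char
  champion r = "DDCBAA" !! fromEnum r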
From dreyer at mpi-sws.org Sat Jan 16 17:58:23 2010
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sat, 16 Jan 2010 23:58:23 +0100
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To: <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu>
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu>
Message-ID: <7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com>

A note from the moderator:

I agree with Matthias that Purush's question is a vague and extremely
broad one, and it's worth asking if there's a more "scientific"
definition of "grand challenge".

However, given that at last year's POPL there was a panel session
devoted precisely to the topic of "PL Grand Challenges", I believe the
question (even in its vague form) is not out of bounds for a scientific
forum like the Types-list.

FWIW, to address Purush's question, some notes from that POPL'09 panel
session on PL Grand Challenges are available here:

http://plgrand.blogspot.com/

Thanks,
Derek

From purush.iyer at gmail.com Sat Jan 16 22:14:55 2010
From: purush.iyer at gmail.com (Purush Iyer)
Date: Sat, 16 Jan 2010 22:14:55 -0500
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To: <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com>
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu> <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com>
Message-ID: <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com>

Derek

Thanks for the pointer. I do remember looking at these discussions after
09 POPL. But they did not, unfortunately, solve my quandary.

By "scientific" I am referring to problems that need either new
mathematics to be invented or perhaps a new method. It is hard to define
what is scientific, though I am sure all of us would be able to say when
presented with an example whether it is a scientific problem or not.

I consider problems posed by Alan Mycroft's 1980 paper (on call-by-need
and value) and his 1984 paper (on recursive type schemes) as posing
scientific questions. The former led to ten years of papers in POPL and
other places (with nifty ideas by Hudak, Mishra, Jensen and others). The
latter led to the notion of semi-unification (by Henglein, Kfoury and
others). At the risk of sounding vain, I consider my 1997 paper in
TAPSOFT on quantitative reasoning in Probabilistic Lossy Channel Systems
as raising a nice scientific question in Formal Methods; it took the
discovery of the notion of attractors in infinite Markov chains (by Alex
Rabinovitch, Parosh Abdulla and others) to solve the problem
effectively.

My question is simple: Show me problems that are scientific, that need
new mathematics or blindingly new insights to solve. I am stumped while
explaining what the deep problems in our field are to Physicists and
Control Theoreticians. In all of this I realize that work in PLS is
synthetic in nature -- part science, part engineering -- and that,
maybe, we typically make up a problem and solve it right away. Perhaps
there are no scientific grand challenges; but, I dared to ask anyway.
Thanks for hearing me out.
sincerely
Purush Iyer

From sreeram at tachyontech.net Sun Jan 17 08:29:50 2010
From: sreeram at tachyontech.net (KS Sreeram)
Date: Sun, 17 Jan 2010 18:59:50 +0530
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To: <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com>
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu> <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com> <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com>
Message-ID:

Here is a grand challenge: Make a clean, expressive, high-level
programming language which has performance that _equals_ C++.

Many programming languages have features that make them
safer/cleaner/more expressive than C++. But every one of those languages
sacrifices performance in the process of achieving those features. The
following line of research doesn't seem to have been explored much: How
far can we go in terms of safety and expressiveness without sacrificing
even an iota of performance?
---
KS Sreeram
Tachyon Technologies

From geoff at knauth.org Sun Jan 17 11:03:59 2010
From: geoff at knauth.org (Geoffrey S. Knauth)
Date: Sun, 17 Jan 2010 11:03:59 -0500
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To:
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu> <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com> <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com>
Message-ID: <57C67E29-5AB3-4000-A2EE-8BEDF58A4555@knauth.org>

On Jan 17, 2010, at 08:29, KS Sreeram wrote:
> Make a clean, expressive, high-level programming language which
> has performance that _equals_ C++.

Does the Stalin Scheme compiler rate?

From sreeram at tachyontech.net Sun Jan 17 12:20:14 2010
From: sreeram at tachyontech.net (KS Sreeram)
Date: Sun, 17 Jan 2010 22:50:14 +0530
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To: <57C67E29-5AB3-4000-A2EE-8BEDF58A4555@knauth.org>
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu> <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com> <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com> <57C67E29-5AB3-4000-A2EE-8BEDF58A4555@knauth.org>
Message-ID:

Stalin is certainly a great piece of work. But it does have a few
shortcomings:

1) Painfully long build times
2) Difficulty in reasoning about the performance. It's not very clear
   how to _design_ for performance.

---
KS Sreeram
Tachyon Technologies

From dreyer at mpi-sws.org Sun Jan 17 12:45:59 2010
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sun, 17 Jan 2010 18:45:59 +0100
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To:
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu> <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com> <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com> <57C67E29-5AB3-4000-A2EE-8BEDF58A4555@knauth.org>
Message-ID: <7fa251b71001170945h5e688fe4rea14e400276ef736@mail.gmail.com>

A note from the moderator:

To give this thread some focus and prevent it from devolving into
endless arguments about particular languages or systems, I will restrict
future posts to those directly concerning the original question about
grand challenges in PL. Feel free to take other arguments off-line.

Best regards,
Derek

From rfuhrer at watson.ibm.com Mon Jan 18 12:19:09 2010
From: rfuhrer at watson.ibm.com (Robert M. Fuhrer)
Date: Mon, 18 Jan 2010 12:19:09 -0500
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To:
References:
Message-ID:

I think that the PL community has been giving inadequate attention to a
critical aspect of its work for too long: that of defining a reliable
empirical process or theoretical model for assessing the effectiveness
of a given nugget of language design.

It seems striking that many of the programming languages in mainstream
use today fare poorly when viewed rigorously from perspectives such as
safety, modularity, or maintainability. I think that's a sign that the
models we use to evaluate languages fail to capture one or more critical
real-world aspects of what makes a language design good (or practical?).
Such a model could help identify designs that, say, introduce a feature
intended to improve program safety, but in so doing require developers
to engage in reasoning even more subtle than that of the original
problem.

Note I'm *not* saying that mainstream acceptance implies a good design.
I *am* saying that if a language's "theoretical superiority" was
*obvious* to practitioners (and not only to language design experts),
then the mainstream would *ultimately* adopt it, given enough time for
economics to catch up. I don't see that happening -- I see a rift where
apparently better designs fail to be adopted, even after decades. In
short, I strongly suspect something deeper is going on.

If "obvious superiority implies ultimate adoption" is wrong in the
domain of programming languages (it clearly doesn't always work
elsewhere), then the "goodness" model doesn't also predict adoption.
That's ok - a "goodness" model is still valuable to us as scientists. We
just need a different one to predict market acceptance (which is also
valuable).

So, my PL Grand Challenge: come up with a *much* more principled
approach to evaluating programming language designs.

[... now donning flame-retardant outerwear ...]

--
Cheers,
 - Bob

-------------------------------------------------
Robert M. Fuhrer
Research Staff Member
Programming Technologies Dept.
IBM T.J. Watson Research Center

IMP Project Lead (http://www.eclipse.org/imp)
X10: Productivity for High-Performance Parallel Programming
(http://x10-lang.org)
From mwh at cs.umd.edu Sat Jan 23 16:10:40 2010
From: mwh at cs.umd.edu (Michael Hicks)
Date: Sat, 23 Jan 2010 22:10:40 +0100
Subject: [TYPES] Two-tier reviewing process
In-Reply-To: <4B523FBD.8020907@cs.cornell.edu>
References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu>
Message-ID:

A quick follow-up on Andrew's post. I attended the discussion at POPL on
Wednesday (the 20th), and it seemed to focus on how/whether to accept
more papers. I observe that the original proposal posted to this list
(which was soundly rejected by straw poll at the meeting) was aimed at
improving the review process, to be more fair and effective. Simon's
amended proposal for accepting more papers starts by saying we should
have a "quality bar" but does not suggest how to set this bar, and then
assumes that whatever the method, more papers would be/should be
accepted.

I find the lack of discussion on the integrity/quality of the review
process a bit surprising, and unfortunate. My sense was that the
agreement to relax current standards for paper acceptance is based
somewhat on a resignation that reviewing, whatever the process, is
destined to be flawed. To the contrary, I think the review process can
be improved, and is worth improving. If we improve the process, we will
feel much more confident about the papers we accept, and I suspect we
will be more confident to accept more of them.

To this end, I think that a tiered process can really help. To add a bit
more information about it (Andrew's full description appears earlier in
this thread): At the POPL discussion, one goal that was raised was to
improve the number of "expert" reviews per paper. People are
dissatisfied when their paper is rejected by self-proclaimed
non-experts. I believe that Jens pointed out that this year's POPL had
77% papers with one "X" review. I asked Andrew what the related metric
was for the Oakland tiered review process, and he said:

> A little hard to compare. We had a 4-point confidence scale, where 3
> was "confident" and 4 was "this is my area".
>
> All papers had at least one 3 or 4 reviewer. 88% had two or more 3
> or 4 reviewers. A little over half the papers had a 4 reviewer.

I was happy with this year's POPL program so I agree we could probably
keep accepting more papers (that is, more than 39). But I think it would
be great to improve the review process, too.

-Mike

From simonpj at microsoft.com Mon Jan 25 05:32:04 2010
From: simonpj at microsoft.com (Simon Peyton-Jones)
Date: Mon, 25 Jan 2010 10:32:04 +0000
Subject: [TYPES] Two-tier reviewing process
In-Reply-To:
References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu>
Message-ID: <59543203684B2244980D7E4057D5FBC10AFC9D45@DB3EX14MBXC306.europe.corp.microsoft.com>

I've been enjoying the POPL reviewing debate, although not contributing
much because I thought I'd used up my bandwidth allocation on my two
posts. But Mike raises a new question here, about review quality.

| I find the lack of discussion on the integrity/quality of the review
| process a bit surprising, and unfortunate. My sense was that the
| agreement to relax current standards for paper acceptance is based
| somewhat on a resignation that reviewing, whatever the process, is
| destined to be flawed.

In my original post I argued, to the contrary, that our current review
process represents excellent value to authors, and is a very effective
use of reviewers' time. Poor judgements are sometimes made, and I'm all
for discussing ways to improve that, *provided* they do not do so by
assuming more reviewing effort. Finding ways to eliminate rejected
papers early, to focus more reviewing effort on better papers -- fine!

| process, to be more fair and effective. Simon's amended proposal for
| accepting more papers starts by saying we should have a "quality bar"
| but does not suggest how to set this bar, and then assumes that
| whatever the method, more papers would be/should be accepted.

Setting the quality bar is hard, and is bound to be a judgement call. My
main point is that I think the bar is too high at the moment. If that is
true, then the problem *cannot* be solved by refining the reviewing
process, no matter how well.

Simon

From cbj at it.uts.edu.au Tue Jan 26 19:26:36 2010
From: cbj at it.uts.edu.au (Barry Jay)
Date: Wed, 27 Jan 2010 11:26:36 +1100
Subject: [TYPES] Grand Challenge Problems?
In-Reply-To: <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com>
References: <4176823f1001161404o64ec4f21g17d50afcef77c8d3@mail.gmail.com> <7EA9DE1E-7577-466E-B7FC-DF202048FB8E@ccs.neu.edu> <28003_1263683523_o0GNC275000048_7fa251b71001161458q72afd2d1g945e71cd598920d2@mail.gmail.com> <4176823f1001161914j3176fdb2nec04fc53e57af3a6@mail.gmail.com>
Message-ID: <4B5F883C.8070204@it.uts.edu.au>

On Jan 16, 2010, at 5:04 PM, Purush Iyer wrote:
> If y'all can come up with a list of five to ten challenges I would
> appreciate it very much.

Hi Purush

Here is my suggestion for a grand challenge: Automate the Scientific
Method.

Many of us think of computers as logic machines: input is trivial, so
inference rules! However, computers now have access to unbounded amounts
of data from the net, sensor networks and other logical machines. This
*experience* of the world leads to inductive knowledge (think data
mining), but the best results combine this with deduction (logic) using
the scientific method.
One way of meeting this grand challenge would be to discover some useful
theory in astronomy or particle physics or biology, much as automated
theorem-proving was promoted by the first proof of the four-colour
theorem. Such systems could then be used to analyse markets or legal
systems, to discover web-services, etc.

Many of the components of such a system already exist, e.g. data miners
and theorem-provers. Also, object-oriented languages are able to
incorporate experience while functional programming languages automate
logic. The challenge requires us to develop a programming paradigm that
embraces both experience and logic. One way of combining these uses our
pattern calculus
(http://www.springer.com/computer/foundations/book/978-3-540-89184-0):
simple observations and hypotheses can be represented as cases built
from a pattern and a result, and these can be combined within
pattern-matching functions to produce complex hypotheses.

The ideas above have been elaborated in an informal essay
(http://www-staff.it.uts.edu.au/~cbj/Publications/psm.pdf).

Barry Jay
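As a toy illustration of "cases built from a pattern and a result", here
is a small Haskell sketch. It deliberately does not capture the pattern
calculus itself, in which patterns are first-class terms; the datatypes
and every name in it are illustrative only.

  -- A simple data domain and a fixed pattern language over it.
  data Tree = Leaf String | Node Tree Tree
    deriving Show

  data Pattern = PAny String           -- match anything, bind a name
               | PLeaf String          -- match one particular leaf
               | PNode Pattern Pattern

  type Bindings = [(String, Tree)]

  -- Matching either fails or yields bindings for the pattern's names.
  match :: Pattern -> Tree -> Maybe Bindings
  match (PAny x)    t          = Just [(x, t)]
  match (PLeaf s)   (Leaf s')  | s == s' = Just []
  match (PNode p q) (Node l r) = (++) <$> match p l <*> match q r
  match _           _          = Nothing

  -- A case is a pattern together with a result built from the bindings.
  data Case r = Case Pattern (Bindings -> r)

  -- Cases compose into a pattern-matching function; earlier cases win.
  matchCases :: [Case r] -> Tree -> Maybe r
  matchCases cases t = foldr try Nothing cases
    where try (Case p f) rest = maybe rest (Just . f) (match p t)

  -- Two simple "hypotheses" combined into one classifier.
  classify :: Tree -> Maybe String
  classify = matchCases
    [ Case (PNode (PLeaf "a") (PAny "rest")) (const "starts with a")
    , Case (PAny "whole")                    (const "something else")
    ]

Each Case is a little hypothesis of its own; matchCases assembles a list
of them into a single pattern-matching function, which is one reading of
how simple observations might be combined into complex hypotheses.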
From nr at cs.tufts.edu Wed Jan 27 21:11:22 2010
From: nr at cs.tufts.edu (Norman Ramsey)
Date: Wed, 27 Jan 2010 21:11:22 -0500
Subject: [TYPES] Two-tier reviewing process
In-Reply-To: (sfid-H-20100126-162439-+179.30-1@multi.osbf.lua)
References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> (sfid-H-20100126-162439-+179.30-1@multi.osbf.lua)
Message-ID: <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu>

> At the POPL discussion, one goal that was raised was to improve the
> number of "expert" reviews per paper. People are dissatisfied when
> their paper is rejected by self-proclaimed non-experts. I believe
> that Jens pointed out that this year's POPL had 77% papers with one
> "X" review.

I went back and got archival data for ICFP 2007. ICFP is a significantly
smaller conference which that year had only 120 submissions. 110 of 120
submissions (91%) received at least one X review. When comparing these
data, here are some points to keep in mind:

- ICFP reviewing was double-blind that year.
- Otherwise ICFP used substantially the same review process that POPL
  uses now.
- POPL is probably a broader conference than ICFP, which may make it
  more difficult to find expert external reviewers.

I remember great difficulty in finding external reviewers for papers
involving functional programming and XML---many were multi-author
papers, and this is a small community with a lot of cross-fertilization,
so there were quite a few papers for which all the obvious expert
reviewers had conflicts. (One of the problems with double-blind review
is that it makes a prudent program chair more cautious about conflicts
of interest.)

Norman

From nr at cs.tufts.edu Wed Jan 27 21:29:58 2010
From: nr at cs.tufts.edu (Norman Ramsey)
Date: Wed, 27 Jan 2010 21:29:58 -0500 (EST)
Subject: [TYPES] Improving the quality of POPL reviews?
Message-ID: <20100128022958.9408D60A30B2F@labrador.cs.tufts.edu>

Mike Hicks writes to ask that we not forget to consider the quality of
reviews we receive from POPL and from other SIGPLAN conferences.
Regarding the quality of the reviews I myself receive, I lie between
Simon Peyton Jones (gold dust) and Matthias Felleisen (not much
improvement). I am often impressed by the number of reviews my
submissions receive, and I am usually happy enough with the quality.
Occasionally I get a real gem from a reviewer who teaches me a better
way to explain my work, but this is the exception, not the rule.

The stated goals of the (probably dead) proposal from the Steering
Committee did not include providing better reviews. And while the
gradual review process described by Andrew Myers sounds attractive (more
and better reviews for better submissions), it's not obvious to me how
it might affect the quality of the reviews overall.

For those who might be interested in better reviews, I propose the
following cheap and cheerful experiment: during the author-response
period (which is now nearly standard), allow each author to rate each
review anonymously, on a 3-point scale:

+1  This review was clear and helpful
 0  (do nothing) Typical review, not noteworthy
-1  I was disappointed in the quality of this review

Fearful authors could deposit their votes in escrow with a trusted third
party. Aggregate data could be published in the proceedings. Truly brave
committees could make their scores public.

I expect that evaluation by authors would make reviews slightly better,
but that authors would quickly adjust their expectations, so in the long
run scores would be stable. Reviews would be just slightly better than
they are now---not a bad outcome. I hope that reviewers who were
repeatedly and consistently downrated would stop receiving invitations
to serve on program committees.

Norman Ramsey

From bcpierce at cis.upenn.edu Thu Jan 28 20:47:57 2010
From: bcpierce at cis.upenn.edu (Benjamin Pierce)
Date: Thu, 28 Jan 2010 20:47:57 -0500
Subject: [TYPES] Two-tier reviewing process
In-Reply-To: <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu>
References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> (sfid-H-20100126-162439-+179.30-1@multi.osbf.lua) <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu>
Message-ID: <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu>

I'm suspicious of statistics about number of X reviews. I know what X is
supposed to mean ("I am an expert in the topic") but in practice I see
many people (including myself) acting as if it means "I understood the
paper completely," and therefore often falling back to Y to indicate
things like "Although I'm an expert, the paper was poorly explained and
I couldn't completely understand it in a reasonable amount of time."

This isn't to say that comparing figures for POPL and ICFP is not
worthwhile -- just that the numbers themselves should be taken with a
grain of salt.

- Benjamin
When comparing these data, here are some points to keep in > mind: > > - ICFP reviewing was double-blind that year. > - Otherwise ICFP used substantially the same review process that > POPL uses now. > - POPL is probably a broader conference than ICFP, which may make it > more difficult to find expert external reviewers. > > I remember great difficulty in finding external reviewers for papers > involving functional programming and XML---many were multi-author > papers, and this is a small community with a lot of cross- > fertilization, so there were quite a few papers for which all the > obvious expert reviewers had conflicts. (One of the problems with > double-blind review is that it makes a prudent program chair more > cautious about conflicts of interest.) > > > > Norman From stevez at cis.upenn.edu Fri Jan 29 03:01:11 2010 From: stevez at cis.upenn.edu (Stephan Zdancewic) Date: Fri, 29 Jan 2010 08:01:11 +0000 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> (sfid-H-20100126-162439-+179.30-1@multi.osbf.lua) <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> Message-ID: <4B6295C7.1060906@cis.upenn.edu> Following up on Benjamin's comments about "X" reviews. There are two different axes that are important when understanding a review. One is the reviewer's *expertise* in the subject of the paper. Another is the reviewer's *confidence* in his or her assessment carried out in the review. Using only one score to indicate both leads to some confusion, since the two properties get conflated. As Benjamin suggests, I often find myself wanting to indicate confidence when I'm not an expert, and sometimes, even though I'm an expert in terms of the related work, I still don't have high confidence in my review (perhaps because it's a really intricate paper). --Steve Benjamin Pierce wrote: > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] > > I'm suspicious of statistics about number of X reviews. I know what X > is supposed to mean ("I am an expert in the topic") but in practice I > see many people (including myself) acting as if it means "I understood > the paper completely," and therefore often falling back to Y to > indicate things like "Although I'm an expert, the paper was poorly > explained and I couldn't completely understand it in a reasonable > amount of time." > > This isn't to say that comparing figures for POPL and ICFP is not > worthwhile -- just that the numbers themselves should be taken with a > grain of salt. > > - Benjamin > > > On Jan 27, 2010, at 9:11 PM, Norman Ramsey wrote: > > >> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list >> ] >> >> >>> At the POPL discussion, one goal that was raised was to improve the >>> number of "expert" reviews per paper. People are dissatisfied when >>> their paper is rejected by self-proclaimed non-experts. I believe >>> that Jens pointed out that this year's POPL had 77% papers with one >>> "X" review. >>> >> I went back and got archival data for ICFP 2007. ICFP is a >> significantly smaller conference which that year had only 120 >> submissions. 
From wadler at inf.ed.ac.uk Fri Jan 29 07:04:41 2010 From: wadler at inf.ed.ac.uk (Philip Wadler) Date: Fri, 29 Jan 2010 12:04:41 +0000 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <4B6295C7.1060906@cis.upenn.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> Message-ID:

Just a personal plea here.

The meaning of X, Y, Z was fixed clearly in Nierstrasz's paper:

X: I am an expert in the subject area of this paper.
Y: I am knowledgeable in the area, though not an expert.
Z: I am not an expert. My evaluation is that of an informed outsider.

This was a huge improvement over the 'confidence' scores we used to give, and which were often used to form a weighted average of numerical scores, a horrid scheme which led to the spurious impression that a 6.7 paper must be better than a 6.3, say.

Please don't go back to the bad old days of rating for confidence. Of course you should say how confident you are in your rating, but the place to do that is in the text. The only things we need scores for are overall rating (ABCD) and expertise (XYZ). Everything else of import can be dealt with in the text of the referee's report.

If you are an expert and you are not confident because the paper is intricate, the best service you can render to the PC chair, to the PC, to the conference, and to the author is to give an X rating---and then explain your confidence level and the reason for it in the review.

Note also that while the quest to find an X rating for every paper is good, the best possibility is for a paper to receive both X and Z reviews. (Preferably both high!)

Cheers, -- P

On Fri, Jan 29, 2010 at 8:01 AM, Stephan Zdancewic wrote:
> Following up on Benjamin's comments about "X" reviews. There are two different axes that are important when understanding a review. One is the reviewer's *expertise* in the subject of the paper. Another is the reviewer's *confidence* in his or her assessment carried out in the review. Using only one score to indicate both leads to some confusion, since the two properties get conflated. As Benjamin suggests, I often find myself wanting to indicate confidence when I'm not an expert, and sometimes, even though I'm an expert in terms of the related work, I still don't have high confidence in my review (perhaps because it's a really intricate paper).
>
> --Steve

-- 
.\ Philip Wadler, Professor of Theoretical Computer Science
./\ School of Informatics, University of Edinburgh
/  \ http://homepages.inf.ed.ac.uk/wadler/

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
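[A small worked example may make Wadler's weighted-average complaint above concrete. The weighting rule, scores, and confidences below are hypothetical illustrations -- nothing here is an actual POPL or ICFP formula -- and the sketch is in Python:

    def weighted_average(reviews):
        # Each review is a (score, confidence) pair, with confidence in (0, 1].
        total = sum(score * conf for score, conf in reviews)
        weight = sum(conf for _, conf in reviews)
        return total / weight

    # Paper 1: one very confident champion plus one lukewarm, low-confidence review.
    paper1 = [(9, 1.0), (4, 0.2)]
    # Paper 2: three knowledgeable reviewers in broad agreement.
    paper2 = [(7, 0.8), (6, 0.8), (7, 0.6)]

    print(round(weighted_average(paper1), 1))  # 8.2
    print(round(weighted_average(paper2), 1))  # 6.6

The decimals invite a ranking (8.2 "beats" 6.6) even though the first number rests largely on a single opinion and the second on a solid consensus; keeping expertise (XYZ) apart from the overall rating (ABCD), as Wadler urges, avoids manufacturing that spurious precision.]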
From matthias at ccs.neu.edu Fri Jan 29 09:28:47 2010 From: matthias at ccs.neu.edu (Matthias Felleisen) Date: Fri, 29 Jan 2010 09:28:47 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> Message-ID: <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu>

Of course, Benjamin's and Steve's discussion about Expertise vs Confidence indicates that we simply have way too little time to evaluate conference submissions fairly and robustly -- so why does this community insist on using this brittle and unreliable mechanism to evaluate scientific work?

-- Matthias

On Jan 29, 2010, at 7:04 AM, Philip Wadler wrote:

> Just a personal plea here.
>
> The meaning of X, Y, Z was fixed clearly in Nierstrasz's paper:
>
> X: I am an expert in the subject area of this paper.
> Y: I am knowledgeable in the area, though not an expert.
> Z: I am not an expert. My evaluation is that of an informed outsider.
>
> This was a huge improvement over the 'confidence' scores we used to give, and which were often used to form a weighted average of numerical scores, a horrid scheme which led to the spurious impression that a 6.7 paper must be better than a 6.3, say.
>
> Please don't go back to the bad old days of rating for confidence. Of course you should say how confident you are in your rating, but the place to do that is in the text. The only things we need scores for are overall rating (ABCD) and expertise (XYZ). Everything else of import can be dealt with in the text of the referee's report.
>
> If you are an expert and you are not confident because the paper is intricate, the best service you can render to the PC chair, to the PC, to the conference, and to the author is to give an X rating---and then explain your confidence level and the reason for it in the review.
>
> Note also that while the quest to find an X rating for every paper is good, the best possibility is for a paper to receive both X and Z reviews. (Preferably both high!)
>
> Cheers, -- P
From d.r.ghica at cs.bham.ac.uk Fri Jan 29 09:55:42 2010 From: d.r.ghica at cs.bham.ac.uk (Dan Ghica) Date: Fri, 29 Jan 2010 14:55:42 +0000 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> Message-ID: <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk>

On 29 Jan 2010, at 14:28, Matthias Felleisen wrote:

> Of course, Benjamin's and Steve's discussion about Expertise vs Confidence indicates that we simply have way too little time to evaluate conference submissions fairly and robustly -- so why does this community insist on using this brittle and unreliable mechanism to evaluate scientific work?

I think the lack of time is only one problem.

For me there are two problems at least as serious:

1. Omitting proofs because of 'lack of space'. I see no reason why the PC does not demand that all details of all proofs are included in an appendix, not to appear in the printed proceedings but to be part of the refereeing process (perhaps even to be included in the online proceedings). Maybe we cannot check the proofs in detail, but we can check a lot of things about the proofs, especially their existence. There are important papers out there, with high citation counts and fairly important results, where the full proofs have never appeared. This is a problem.

2. Non-reproducible experiments. I had to referee papers about tools that could not be downloaded, running on examples that could not be released. Even if the experimental data looks interesting, is it meaningful? This can be solved as well. For a paper that I submitted once, I set up a virtual machine image with my tool running, loaded with all the examples, which could be reproduced at the push of a button. It's not that hard.

Thanks to all the contributors for an interesting and rather therapeutic discussion.

Dan

Dan Ghica Lecturer / ARF d.r.ghica at cs.bham.ac.uk

From matthias at ccs.neu.edu Fri Jan 29 10:24:54 2010 From: matthias at ccs.neu.edu (Matthias Felleisen) Date: Fri, 29 Jan 2010 10:24:54 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> Message-ID:

Dan, not to be contrarian, but how would the submission of additional materials solve the lack of time problem that prevents high-quality reviewing in the first place? (Just to clarify: I actually accept that some submissions get high quality reviews at some conferences. My problem is with the 'average' of reviews, not those few who deliver outstanding work under these extreme circumstances.)

-- Matthias

On Jan 29, 2010, at 9:55 AM, Dan Ghica wrote:

> I think the lack of time is only one problem.
>
> For me there are two problems at least as serious:
>
> 1. Omitting proofs because of 'lack of space'.
>
> 2. Non-reproducible experiments.

From dreyer at mpi-sws.org Fri Jan 29 10:28:16 2010 From: dreyer at mpi-sws.org (Derek Dreyer) Date: Fri, 29 Jan 2010 16:28:16 +0100 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> Message-ID: <7fa251b71001290728l5bf48253qb0ee737774b2620e@mail.gmail.com>

Hi, Matthias. I know how you feel about the CS overemphasis on conferences, but I am nonetheless a bit mystified by your interpretation of the expertise vs. confidence discussion. Just because experts are not always confident about their conference reviews does not per se imply anything about anything. Experts are not always confident about their journal reviews either.

When I give an expert review for a conference paper, I spend a significant amount of time reviewing it, typically several days. For a journal paper, I may spend more time checking details and reading it over carefully and so forth, but at a high level my expert conference reviews and my journal reviews are roughly similar in quality. At the end of either kind of review period, I may or may not be confident about my review, usually depending on how well the paper was written. If the paper was clearly written, I am usually more confident in my review. Thus, I feel that the confidence of my expert reviews reflects (at least to some extent) the quality of the paper.

Am I some weird outlier? Maybe so. There are certainly expert reviewers who clearly "phone it in" due to time constraints or because they don't care.
But in my (comparatively limited) experience reading other people's reviews (both of my own and other people's papers), I have seen a lot of really thorough expert reviews.

It would be nice if program committees reliably gave more weight to those *thorough* reviews. My impression is that sometimes they do and sometimes they don't, and especially when the only thorough review is an external review, the PC has a tendency to avoid abdicating their judgmental authority to some external reviewer who cannot engage in the PC discussion. (The PC's behavior in such cases is understandable, and may in some cases be the right thing to do, but it nonetheless means that external reviews have second-class status.) I view that as a real problem, and frankly I'm not sure what the right solution is.

Derek

On Fri, Jan 29, 2010 at 3:28 PM, Matthias Felleisen wrote:
> Of course, Benjamin's and Steve's discussion about Expertise vs Confidence indicates that we simply have way too little time to evaluate conference submissions fairly and robustly -- so why does this community insist on using this brittle and unreliable mechanism to evaluate scientific work?
>
> -- Matthias
From alan.schmitt at polytechnique.org Fri Jan 29 11:16:14 2010 From: alan.schmitt at polytechnique.org (Alan Schmitt) Date: Fri, 29 Jan 2010 17:16:14 +0100 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <7fa251b71001290728l5bf48253qb0ee737774b2620e@mail.gmail.com> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <7fa251b71001290728l5bf48253qb0ee737774b2620e@mail.gmail.com> Message-ID: <25ec8ca61001290816rc2a293v3214780def844c5e@mail.gmail.com>

On Fri, Jan 29, 2010 at 4:28 PM, Derek Dreyer wrote:
> It would be nice if program committees reliably gave more weight to those *thorough* reviews. My impression is that sometimes they do and sometimes they don't, and especially when the only thorough review is an external review, the PC has a tendency to avoid abdicating their judgmental authority to some external reviewer who cannot engage in the PC discussion. (The PC's behavior in such cases is understandable, and may in some cases be the right thing to do, but it nonetheless means that external reviews have second-class status.) I view that as a real problem, and frankly I'm not sure what the right solution is.

This is a great argument for an electronic (pre-)meeting: invite the external reviewer to participate in the discussion of the paper.

Alan

From kim at cs.pomona.edu Fri Jan 29 12:21:38 2010 From: kim at cs.pomona.edu (Kim Bruce) Date: Fri, 29 Jan 2010 09:21:38 -0800 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> Message-ID: <91E731DE-E494-4CB8-8FC2-15DA4CD504BB@cs.pomona.edu>

I agree with Matthias that we place too much emphasis on conference submissions, and agree with those who recently have been making more of a plea for computer scientists to move more toward journal publications. I usually tell people that conference submissions are refereed for interest (novelty, impact, etc.) and "likely correctness" -- with high variance in acceptance due to a restricted pool of referees and the time pressure. On the other hand, journal publications are refereed with more weight on correctness (after all, these are usually the only places you get to see the details necessary to verify correctness). We should be moving to a hybrid system where results are announced at conferences, but are only considered settled when they appear in journals (with several conference papers likely being coalesced into a more significant journal article). I'm not saying we should move away from refereed conferences, but they should not be the be-all and end-all.

I don't know how best to get there, but it seems like a good idea to start moving in that direction. Unless we change incentives for researchers, we will continue on the same path we are on now.

Kim

On Jan 29, 2010, at 6:28 AM, Matthias Felleisen wrote:
> Of course, Benjamin's and Steve's discussion about Expertise vs Confidence indicates that we simply have way too little time to evaluate conference submissions fairly and robustly -- so why does this community insist on using this brittle and unreliable mechanism to evaluate scientific work?
>
> -- Matthias

From wand at ccs.neu.edu Fri Jan 29 13:15:31 2010 From: wand at ccs.neu.edu (Mitchell Wand) Date: Fri, 29 Jan 2010 13:15:31 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> Message-ID: <1bd18ad51001291015j13ea1795sb168bb97f51c0313@mail.gmail.com>

On Fri, Jan 29, 2010 at 7:04 AM, Philip Wadler wrote:
> The meaning of X, Y, Z was fixed clearly in Nierstrasz's paper:
>
> X: I am an expert in the subject area of this paper.
> Y: I am knowledgeable in the area, though not an expert.
> Z: I am not an expert. My evaluation is that of an informed outsider.

I think the problem that Benjamin points out is really two problems: I may be an expert in the general subject area of the paper, but not in the particular corner that this paper is exploring. Or it may be that I am an expert in the subject area, but the paper is sufficiently unclear that I may have gotten it wrong. I suspect that these often coincide.

Perhaps it might be helpful to subdivide the X rating into two:

X1: I am an expert in the general subject area of this paper, and I have good confidence in my evaluation.
X2: I am an expert in the general subject area of this paper, but I do not have good confidence in my evaluation.

This is not an issue for Y and Z reviews, which already announce themselves as less important.

Just a thought.

--Mitch

From alan.schmitt at polytechnique.org Fri Jan 29 13:41:06 2010 From: alan.schmitt at polytechnique.org (Alan Schmitt) Date: Fri, 29 Jan 2010 19:41:06 +0100 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <91E731DE-E494-4CB8-8FC2-15DA4CD504BB@cs.pomona.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <91E731DE-E494-4CB8-8FC2-15DA4CD504BB@cs.pomona.edu> Message-ID: <25ec8ca61001291041o2fb7047ckb63c995353076951@mail.gmail.com>

On Fri, Jan 29, 2010 at 6:21 PM, Kim Bruce wrote:
> We should be moving to a hybrid system where results are announced at conferences, but are only considered settled when they appear in journals (with several conference papers likely being coalesced into a more significant journal article). I'm not saying we should move away from refereed conferences, but they should not be the be-all and end-all.

Unfortunately journals often require new results. I have tried several times to publish coalesced papers with proofs, and invariably I was asked where the new, unpublished result was. But maybe I was unlucky.

Alan

From matthias at ccs.neu.edu Fri Jan 29 14:47:57 2010 From: matthias at ccs.neu.edu (Matthias Felleisen) Date: Fri, 29 Jan 2010 14:47:57 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <25ec8ca61001291041o2fb7047ckb63c995353076951@mail.gmail.com> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <91E731DE-E494-4CB8-8FC2-15DA4CD504BB@cs.pomona.edu> <25ec8ca61001291041o2fb7047ckb63c995353076951@mail.gmail.com> Message-ID: <500F8764-9ED2-4CBE-8D44-65515F09C938@ccs.neu.edu>

On Jan 29, 2010, at 1:41 PM, Alan Schmitt wrote:
> Unfortunately journals often require new results. I have tried several times to publish coalesced papers with proofs, and invariably I was asked where the new, unpublished result was. But maybe I was unlucky.

It's much more likely that the culture of publication is all wrong already. People may lack the time, training, and tools to perform the journal reviewing tasks properly -- and they may just lack the incentives given our emphasis on the haphazard conference process. To some extent, it's a chicken-and-egg problem, which is why it is so difficult to change things.
-- Matthias

From alur at seas.upenn.edu Fri Jan 29 16:02:54 2010 From: alur at seas.upenn.edu (Rajeev Alur) Date: Fri, 29 Jan 2010 16:02:54 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> Message-ID: <4B634CFE.1060107@seas.upenn.edu>

>> we simply have way too little time to evaluate conference submissions fairly and robustly

Implicit in this statement, and much of the discussion so far about the review process, is the assumption that conference reviewers are somehow failing. The real problem is with the current submission process that is causing enormous overload: there are too many conferences, everyone (experts and non-experts) is writing and submitting too many papers, but there are too few (so-called) expert reviewers (the challenge of getting serious reviews in a timely manner is a headache for journal editors also). If the total number of papers that need to be reviewed in a given year across programming languages decreases, the quality of the review process (and the quality of the research, too) will improve.

In the current set-up, there is no "cost" associated with submitting work that is not yet quite ready, or resubmitting a rejected paper repeatedly, or trying to make 2 papers out of one. One proposal to fix this is to enforce a rule such as: You can be a co-author on at most 2 submissions to top-tier SIGPLAN sponsored conferences in a given calendar year. Such a rule would put some burden on authors to submit their best work to conferences, and reviewing would be less onerous and better. We can always have lots of workshops where authors can present their ongoing work, or papers that got rejected from POPL.

--rajeev

From avik at cs.umd.edu Fri Jan 29 16:52:59 2010 From: avik at cs.umd.edu (Avik Chaudhuri) Date: Fri, 29 Jan 2010 16:52:59 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <4B634CFE.1060107@seas.upenn.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> <4B634CFE.1060107@seas.upenn.edu> Message-ID:

> One proposal to fix this is to enforce a rule such as: You can be a co-author on at most 2 submissions to top-tier SIGPLAN sponsored conferences in a given calendar year.

I don't want to add anything to this discussion (because I don't consider myself qualified to do so), but the above rule just seems wrong to me. What if an advisor has 3 students working on their own PL projects, and all of them want to submit their work to such a conference? Either the advisor can't be a co-author on one of these papers, or one of the students has to back out: in either case, unfair!

-Avik.
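[To see concretely how Rajeev's proposed cap would bind in exactly Avik's scenario, here is a minimal sketch in Python; the cap value, the author names, and the data are made-up illustrations, not any actual SIGPLAN policy:

    from collections import Counter

    CAP = 2  # hypothetical: at most 2 top-tier submissions per co-author per year

    # One entry per submission: its list of co-authors.
    submissions = [
        ["advisor", "student1"],
        ["advisor", "student2"],
        ["advisor", "student3"],
    ]

    counts = Counter(a for authors in submissions for a in authors)
    print({a: n for a, n in counts.items() if n > CAP})  # {'advisor': 3}

Each student is well under the cap, but the shared advisor reaches 3, so one paper must lose either its submission slot or an author name; that is the unfairness Avik points out.]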
From nr at cs.tufts.edu Fri Jan 29 17:40:12 2010 From: nr at cs.tufts.edu (Norman Ramsey) Date: Fri, 29 Jan 2010 17:40:12 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> Message-ID: <20100129224012.EBB09603FE73B@labrador.cs.tufts.edu>

> ... we simply have way too little time to evaluate conference submissions fairly and robustly -- so why does this community insist on using this brittle and unreliable mechanism to evaluate scientific work?

Because the trains arrive on time.

I can't be the only one who routinely sees *years* elapse between submission and publication of a journal paper. Indeed, although it is an unusual case, I had one paper *accepted* at a respected journal quite some years ago, but the paper has not appeared, and I have no idea when it is going to appear. As they say in the software biz, shipping is also a feature.

Norman

From nr at cs.tufts.edu Fri Jan 29 17:48:30 2010 From: nr at cs.tufts.edu (Norman Ramsey) Date: Fri, 29 Jan 2010 17:48:30 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <4B6295C7.1060906@cis.upenn.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <7BD6B94C-9036-4899-85BF-79E5B1315537@ccs.neu.edu> <4B4C9BED.6020209@cs.cornell.edu> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> Message-ID: <20100129224830.2E734603FE73B@labrador.cs.tufts.edu>

> Following up on Benjamin's comments about "X" reviews. There are two different axes that are important when understanding a review. One is the reviewer's *expertise* in the subject of the paper. Another is the reviewer's *confidence* in his or her assessment carried out in the review. Using only one score to indicate both leads to some confusion, since the two properties get conflated.

I second this argument loudly. It is a major flaw in the "Identify the Champion" model that Oscar Nierstrasz has so widely publicized. While "Identify the Champion" is far superior to what it replaced, reviewers have a strong tendency to conflate expertise with confidence. The program chair can provide detailed guidelines to the effect that A and D indicate high-confidence reviews, and that low-confidence reviews should receive B and C scores, but busy PC members cannot keep these subtleties in mind.

The problem with "Identify the Champion" is that there aren't enough independent axes for a review to express:

 - this is how much I know about the subject area
 - this is the degree to which I want the submission in (or out)
 - this is how confident I am in my recommendations

Those poor people who reviewed papers for me may remember receiving rather complex, confusing instructions about the scores I was looking for. Here is a brief excerpt for the 'B' score, sometimes called 'weak accept':

 B: I think this paper should be accepted, but I am not sufficiently confident in my own judgment to want to argue for it at the meeting. I am hoping someone more confident or more knowledgeable will make the case.

 - or -

 B: I see both good and bad in this paper, and I don't want to take a strong stand. At the meeting, if others want to accept it, great; if not, a good argument against it will either convince me to reject it or will help me discover that I want to argue for it after all.

I would be delighted if someone were to step forward and propose an alternative to or refinement of ABCD XYZ. God knows we have been using it long enough that collectively, we should be well aware of its flaws.

Norman

P.S. I agree with Benjamin not to take the fraction of 'X' reviews too seriously.

From nr at cs.tufts.edu Fri Jan 29 17:57:34 2010 From: nr at cs.tufts.edu (Norman Ramsey) Date: Fri, 29 Jan 2010 17:57:34 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <25ec8ca61001291041o2fb7047ckb63c995353076951@mail.gmail.com> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <91E731DE-E494-4CB8-8FC2-15DA4CD504BB@cs.pomona.edu> <25ec8ca61001291041o2fb7047ckb63c995353076951@mail.gmail.com> Message-ID: <20100129225734.340FD603FE73B@labrador.cs.tufts.edu>

> Unfortunately journals often require new results. I have tried several times to publish coalesced papers with proofs, and invariably I was asked where the new, unpublished result was. But maybe I was unlucky.

I've had similar experiences. This is another cultural problem: if a result has already appeared in a workshop, then it is difficult to publish a closely related paper in a conference. And if a result has already appeared in a conference, it is difficult to publish a closely related paper in a journal. I'd be unsurprised to learn that any number of important results have never been validated by the kind of thorough review that only a refereed journal can provide. As an author, why should I bother preparing my work for journal publication when the outcome is uncertain and there is much greater incentive for me to invest the same effort in the next conference submission?

Norman
From nr at cs.tufts.edu Fri Jan 29 18:03:42 2010 From: nr at cs.tufts.edu (Norman Ramsey) Date: Fri, 29 Jan 2010 18:03:42 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <4B634CFE.1060107@seas.upenn.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> <4B634CFE.1060107@seas.upenn.edu> Message-ID: <20100129230344.9104E603FE73B@labrador.cs.tufts.edu>

> In the current set-up, there is no "cost" associated with submitting work that is not yet quite ready, or resubmitting a rejected paper repeatedly, or trying to make 2 papers out of one.

This observation reminds me of a proposal Todd Proebsting made years ago (tongue planted firmly in cheek): anyone submitting a paper to PLDI must pay a $500 submission fee. If the committee agrees your submission is not a complete waste of the reviewers' time, you get your $500 back. Otherwise it goes to charity.

I don't take such a proposal seriously, but I will share some advice I got from a senior scientist once: if you are given a paper to review, and the paper is obviously junk, return the paper to the editor or program chair with a short note saying that the paper is of such low quality that it does not merit a full review. I play this card rarely, but it always saves me a great deal of time, since it is difficult to write a decent review for a very weak paper, especially for a journal.

Norman

From vijay at saraswat.org Fri Jan 29 20:24:19 2010 From: vijay at saraswat.org (Vijay Saraswat) Date: Fri, 29 Jan 2010 20:24:19 -0500 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <4B634CFE.1060107@seas.upenn.edu> References: <59543203684B2244980D7E4057D5FBC10AF9F075@DB3EX14MBXC306.europe.corp.microsoft.com> <4B4F4A99.8080304@cs.cornell.edu> <59543203684B2244980D7E4057D5FBC10AFA2C3F@DB3EX14MBXC306.europe.corp.microsoft.com> <4B523FBD.8020907@cs.cornell.edu> <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> <4B634CFE.1060107@seas.upenn.edu> Message-ID: <4B638A43.3080503@saraswat.org>

Rajeev Alur wrote:
> One proposal to fix this is to enforce a rule such as: You can be a co-author on at most 2 submissions to top-tier SIGPLAN sponsored conferences in a given calendar year. Such a rule would put some burden on authors to submit their best work to conferences, and reviewing would be less onerous and better.

Hi Rajeev --

The problem of too many poor papers circulating is definitely real. But this seems like a particularly bad way to fix this -- people can be involved in many different lines of work that come to fruition in the same year, a professor may be guiding many students, a person may be contributing code to many systems, etc.
Here is an alternate suggestion picking up on your observation that currently there is no cost of submission. The only currency relevant is reputation.

What if each of the SIGPLAN conferences also published a tally of who submitted how many papers(*). This would make public the knowledge that A submitted X papers and got Y accepted this year. I think just the social stigma associated with submitting 20 papers and getting 2 accepted would keep people from submitting too many papers :-).

(*) The conference should not publish any other information about rejected papers, e.g. title of submitted paper, who co-authored with whom, etc.

From roberto at dicosmo.org Sat Jan 30 05:01:52 2010 From: roberto at dicosmo.org (Roberto Di Cosmo) Date: Sat, 30 Jan 2010 11:01:52 +0100 Subject: [TYPES] Two-tier reviewing process In-Reply-To: <4B638A43.3080503@saraswat.org> References: <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> <4B634CFE.1060107@seas.upenn.edu> <4B638A43.3080503@saraswat.org> Message-ID: <20100130100152.GA23474@traveler>

On Fri, Jan 29, 2010 at 08:24:19PM -0500, Vijay Saraswat wrote:
> The only currency relevant is reputation.

Dear Vijay, thanks for attracting attention to this point... I agree that in our small world reputation is one of the most relevant components of 'currency'.

Let me humbly offer some food for thought, though.

Reputation is a sophisticated object, and IMHO it does not or should not come just from the number of papers published: in our field, we know well that designing a powerful abstraction is an accomplishment worth hundreds of incremental improvements of the state of the art, so one breakthrough paper that subsumes dozens of previous works should not be counted as 'just one more paper'.

Unfortunately, the current system does not encourage investing time in breakthrough papers, and linking reputation to the number of papers published is part of the broken mechanism that should be fixed.

Your proposal of publishing a track record of rejects seems at first to go in the right direction: the very same bad habit that links reputation to the number of papers published will probably end up linking bad reputation to the number of rejects one author gets.

But if one thinks a bit about it, things are more complex than that: if, as has been advocated at large previously, there are not enough slots in our conferences today to accommodate all the good papers, your proposal will end up with double punishment for those good papers that do not go through... not only do you not get accepted (bad enough), but you also get a blame mark (exposed to the public pillory of authors who got rejected).

A natural result would be high resentment from those authors who believe their work is really good, and yet got dumped because of what they feel is a faulty selection process...

I am not sure this is what the community wants...

Yours,

--Roberto
-- 
--Roberto Di Cosmo
------------------------------------------------------------------
Professeur               En delegation a l'INRIA
PPS                      E-mail: roberto at dicosmo.org
Universite Paris Diderot WWW   : http://www.dicosmo.org
Case 7014                Tel   : ++33-(0)1-44 27 86 55
5, Rue Thomas Mann       Fax   : ++33-(0)1-44 27 86 54
F-75205 Paris Cedex 13 FRANCE.
------------------------------------------------------------------
Attachments: MIME accepted Word deprecated, http://www.rfc1149.net/documents/whynotword
------------------------------------------------------------------
Office location: Bureau 6C15 (6th floor) 175, rue du Chevaleret, XIII Metro Chevaleret, ligne 6
------------------------------------------------------------------

From soloviev at irit.fr Sat Jan 30 07:08:00 2010 From: soloviev at irit.fr (soloviev@irit.fr) Date: Sat, 30 Jan 2010 13:08:00 +0100 (CET) Subject: [TYPES] Two-tier reviewing process In-Reply-To: <20100130100152.GA23474@traveler> References: <20100128021124.86EA860A30B2F@labrador.cs.tufts.edu> <8F6DE8C1-4FFC-4C96-BF65-5FFD49906D24@cis.upenn.edu> <4B6295C7.1060906@cis.upenn.edu> <3511DD76-17D3-4399-BCB7-C33A165DBAE9@ccs.neu.edu> <97CA0B7E-4FA2-4891-B85F-45E14121BEEB@cs.bham.ac.uk> <4B634CFE.1060107@seas.upenn.edu> <4B638A43.3080503@saraswat.org> <20100130100152.GA23474@traveler> Message-ID: <08c733d01edfdf9a26ca4ef846f9a9b4.squirrel@websecu.irit.fr>

Dear members of the list,

I would like to express my agreement with Roberto. Surely, we have to think about what the -real- interests of the community are, and not merely follow the current. Let us be frank: the idea to make all evaluation and even reputation quantitative is mostly imposed from outside, by various administrations that want to have an easy way to control scientific activities. The flaws of these quantitative methods and their basic imprecision have been discussed a lot. But if the evaluation method is flawed, it is open to manipulation, to group politics, etc. Should we add one more possibility of abuse? So...

All the best,

Sergei Soloviev

> On Fri, Jan 29, 2010 at 08:24:19PM -0500, Vijay Saraswat wrote:
> >> The only currency relevant is reputation.
>
> Dear Vijay, thanks for attracting attention to this point... I agree that in our small world reputation is one of the most relevant components of 'currency'. Let me humbly offer some food for thought, though.
>
> Reputation is a sophisticated object, and IMHO it does not or should not come just from the number of papers published: in our field, we know well that designing a powerful abstraction is an accomplishment worth hundreds of incremental improvements of the state of the art, so one breakthrough paper that subsumes dozens of previous works should not be counted as 'just one more paper'.

From Lists at Alessio.Guglielmi.name Sat Jan 30 16:01:24 2010 From: Lists at Alessio.Guglielmi.name (Alessio Guglielmi) Date: Sat, 30 Jan 2010 22:01:24 +0100 Subject: [TYPES] Two-tier reviewing process Message-ID: <20100130210127.5DF7C14CDA8@janeway.inf.tu-dresden.de>

Hello,

At 09:21 -0800 29/1/10, Kim Bruce wrote:
>I'm not saying we should move away from refereed conferences, but
>they should not be the be-all and end-all.

At 14:47 -0500 29/1/10, Matthias Felleisen wrote:
>To some extent, it's a chicken-and-egg problem, which is why
>it is so difficult to change things.

At 16:02 -0500 29/1/10, Rajeev Alur wrote:
>The real problem is with the current submission process that is
>causing enormous overload: there are too many conferences, everyone
>(experts and non-experts) is writing and submitting too many papers,
>but there are too few (so-called) expert reviewers (the challenge of
>getting serious reviews in a timely manner is a headache for journal
>editors also).

At 11:01 +0100 30/1/10, Roberto Di Cosmo wrote:
>Reputation is a sophisticated object, and IMHO it does not or should
>not come just from the number of papers published: in our field, we
>know well that designing a powerful abstraction is an accomplishment
>worth hundreds of incremental improvements of the state of the art,
>so one breakthrough paper that subsumes dozens of previous works
>should not be counted as 'just one more paper'.

Just a few examples of criticism I have recently read on the list (and I also got personal messages along the same lines). There seem to be many who agree on the points above, and in general on the feeling that our over-reliance on conferences, especially for hiring and tenure, is grotesque. It reminds me of the game of Canabalt (try it if you didn't already: ).

I wonder whether we can make our dissatisfaction more visible than just some complaining on the list, for example by agreeing on a little manifesto and adopting some healthy behaviour, to be publicized on our web pages. I suggest the compromise that I adopt: refusal to deal, in any way (papers, committees, refereeing, invitations), with conferences that publish proceedings, except for papers with colleagues that are seeking jobs/tenure. The objective is not to kill conferences, just to correct the imbalance.

For those who might think that this can slow their career: of course the freedom from the frenzy will make you have better, non-incremental ideas! Be bold, eventually you'll be better off.

For the manifesto, I propose to keep it simple and to the point: you don't discover America with a canoe, you don't go to the moon on a firework, etc. I'm happy to collect ideas and make a synthesis.
-Alessio

From soloviev at irit.fr  Sun Jan 31 06:09:33 2010
From: soloviev at irit.fr (soloviev@irit.fr)
Date: Sun, 31 Jan 2010 12:09:33 +0100 (CET)
Subject: [TYPES] Two-tier reviewing process
In-Reply-To: <20100130210127.5DF7C14CDA8@janeway.inf.tu-dresden.de>
References: <20100130210127.5DF7C14CDA8@janeway.inf.tu-dresden.de>
Message-ID: <2353713a43d08a382220d2648d232fac.squirrel@websecu.irit.fr>

Dear Alessio,

your suggestion is not very clear to me. Do you mean that conferences
that publish proceedings are not good? I was recently disappointed a few
times with certain conferences (well, organized by philosophers of
mathematics) where we made the usual effort to write a reasonably
detailed conference contribution and they published only very short
abstracts, and put the full texts on some ephemeral web site.
(It was not clear from their "call for papers"!)

Also the phrase seems ambiguous:

> refusal to deal... with conferences that publish proceedings, except
> for papers with colleagues that are seeking jobs/tenure.

Who is making an exception - you (for papers with colleagues?) or the
organizers (for the papers with their colleagues - and you don't
approve?)

It is difficult to agree with such a position, if that is indeed what
you suggest putting in the manifesto.

Best regards,

Sergei Soloviev

> I suggest the compromise that I adopt:
> refusal to deal, in any way (papers, committees, refereeing,
> invitations), with conferences that publish proceedings, except for
> papers with colleagues that are seeking jobs/tenure. The objective is
> not to kill conferences, just to correct the imbalance.
>
> -Alessio

From Lists at Alessio.Guglielmi.name  Sun Feb 7 05:34:22 2010
From: Lists at Alessio.Guglielmi.name (Alessio Guglielmi)
Date: Sun, 7 Feb 2010 11:34:22 +0100
Subject: [TYPES] Dealing with conferences for the environment conscious
In-Reply-To: <2353713a43d08a382220d2648d232fac.squirrel@websecu.irit.fr>
References: <20100130210127.5DF7C14CDA8@janeway.inf.tu-dresden.de>
	<2353713a43d08a382220d2648d232fac.squirrel@websecu.irit.fr>
Message-ID: <20100207103429.510D414CD8B@janeway.inf.tu-dresden.de>

Dear Sergei and list members,

I clarify the proposal I had made last week about participating in
conferences.

The problem to address is publication inflation, which is favoured by
the abundance of conferences and their abnormal relevance in hiring and
tenure decisions. The only conferences that pose a problem in this
respect are the `refereed' ones, whose proceedings are published (and so
they constitute valid beans). Clearly, there are good aspects to
conferences, so I propose a compromise, in the hope of gently pushing
the system towards a more reasonable balance between the well pondered
and the frantic.

Come to think of it, my proposal, which only concerns conferences with
`refereed' and published proceedings, could be presented in three
varieties:

1 - PESCETARIAN: Only submit papers where some author other than oneself
is in need of a job or tenure (and they speak at the conference), and
decline invitations to be an invited speaker.

2 - VEGETARIAN: In addition to 1, do not `review' papers for those
conferences (and compensate with very accurate reviews of journal
papers) and do not participate in their committees.

3 - VEGAN: Do not submit to, participate in, review for, or organize
conferences with published proceedings.

(I can imagine fruitarianism: only read and cite journal papers, the
good fruits of research, no living creature harmed; but that seems
extreme.)
If that is of any interest, I'm a vegetarian, and had been one for
thirteen years before getting a permanent job.

There are many who seem to agree with the general principles of my
proposal, although perhaps not on the details. I was wondering if we can
reach a consensus and state our position in an organised and visible
fashion, so as to have an impact.

Ciao,

-Alessio

At 12:09 +0100 31/1/10, soloviev at irit.fr wrote:
>Dear Alessio,
>
>your suggestion is not very clear to me. Do you mean that conferences
>that publish proceedings are not good? I was recently disappointed a
>few times with certain conferences (well, organized by philosophers of
>mathematics) where we made the usual effort to write a reasonably
>detailed conference contribution and they published only very short
>abstracts, and put the full texts on some ephemeral web site.
>(It was not clear from their "call for papers"!)
>
>Also the phrase seems ambiguous:
>> refusal to deal... with conferences that publish proceedings, except
>> for papers with colleagues that are seeking jobs/tenure.
>Who is making an exception - you (for papers with colleagues?)
>or the organizers (for the papers with their colleagues -
>and you don't approve?)
>
>It is difficult to agree with such a position, if that is indeed what
>you suggest putting in the manifesto.
>
>Best regards,
>
>Sergei Soloviev
>
>> I suggest the compromise that I adopt:
>> refusal to deal, in any way (papers, committees, refereeing,
>> invitations), with conferences that publish proceedings, except for
>> papers with colleagues that are seeking jobs/tenure. The objective is
>> not to kill conferences, just to correct the imbalance.
>>
>> -Alessio

From dreyer at mpi-sws.org  Sun Feb 7 06:41:42 2010
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sun, 7 Feb 2010 12:41:42 +0100
Subject: [TYPES] Dealing with conferences for the environment conscious
In-Reply-To: <20100207103429.510D414CD8B@janeway.inf.tu-dresden.de>
References: <20100130210127.5DF7C14CDA8@janeway.inf.tu-dresden.de>
	<2353713a43d08a382220d2648d232fac.squirrel@websecu.irit.fr>
	<20100207103429.510D414CD8B@janeway.inf.tu-dresden.de>
Message-ID: <7fa251b71002070341r37ec259cid2b0532df7bbdce4@mail.gmail.com>

With all due respect, Alessio, none of the options you've set forth
makes much sense to me (although I do appreciate the nomenclature).
Specifically:

> 1 - PESCETARIAN: Only submit papers where some author other than
> oneself is in need of a job or tenure (and they speak at the
> conference), and decline invitations to be an invited speaker.

For most people, I sense this would ultimately be such a minor
restriction that it's hardly worth talking about.  The vast majority
of papers published in, say, POPL, have multiple authors, very often
including a student author (who will eventually seek a job), or a
junior faculty member seeking tenure, or a member of a research lab
who needs to publish in POPL to maintain their standing.  This
pescetarian option would not prevent the authors of that very large
subset of papers from continuing to submit to POPL or other major
refereed conferences.

> 2 - VEGETARIAN: In addition to 1, do not `review' papers for those
> conferences (and compensate with very accurate reviews of journal
> papers) and do not participate in their committees.

I understand this is your current position, but to me it is untenable
and clearly the worst of the three.  If you are going to submit papers
to a conference, you ought to be willing to review papers for it.
Only seems fair.  End of story.
> 3 - VEGAN: Do not submit to, participate in, review for, or organize
> conferences with published proceedings.

This position is undeniably consistent.  However, if it is even too
extreme for you, Alessio, I suspect it is unlikely to catch on.

I'm going to invoke my moderator's prerogative to cut the discussion
at this point concerning the best way to subvert the conference
system.  If you have constructive suggestions to add, feel free to
send them to me or Alessio, and we can compile them into a single post
at a later date.

Best regards,
Derek

From dreyer at mpi-sws.org  Sun Feb 7 07:33:49 2010
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sun, 7 Feb 2010 13:33:49 +0100
Subject: [TYPES] Dealing with conferences for the environment conscious
In-Reply-To: <7fa251b71002070341r37ec259cid2b0532df7bbdce4@mail.gmail.com>
References: <20100130210127.5DF7C14CDA8@janeway.inf.tu-dresden.de>
	<2353713a43d08a382220d2648d232fac.squirrel@websecu.irit.fr>
	<20100207103429.510D414CD8B@janeway.inf.tu-dresden.de>
	<7fa251b71002070341r37ec259cid2b0532df7bbdce4@mail.gmail.com>
Message-ID: <7fa251b71002070433q4d0bab29v1db9bda15125e7de@mail.gmail.com>

A final word about this (for now): Alessio asked me to clarify that he
requires his `seeking job' coauthors to referee as many conference
papers as they get reviews for, so that there is no imbalance.  Usually,
when he gets a review request from a conference, he suggests one of his
coauthors who might be `in debt'.  He feels that my accusation that the
"vegetarian" option is unfair was based on a misrepresentation of what
it is.

Actually, I don't feel that I've misrepresented anything, and I don't
see how Alessio's clarification changes anything.  I still think the
"vegetarian" option is unfair and inconsistent.  But there you have it.

Derek

On Sun, Feb 7, 2010 at 12:41 PM, Derek Dreyer wrote:
> With all due respect, Alessio, none of the options you've set forth
> makes much sense to me (although I do appreciate the nomenclature).
> Specifically:
>
>> 1 - PESCETARIAN: Only submit papers where some author other than
>> oneself is in need of a job or tenure (and they speak at the
>> conference), and decline invitations to be an invited speaker.
>
> For most people, I sense this would ultimately be such a minor
> restriction that it's hardly worth talking about.  The vast majority
> of papers published in, say, POPL, have multiple authors, very often
> including a student author (who will eventually seek a job), or a
> junior faculty member seeking tenure, or a member of a research lab
> who needs to publish in POPL to maintain their standing.  This
> pescetarian option would not prevent the authors of that very large
> subset of papers from continuing to submit to POPL or other major
> refereed conferences.
>
>> 2 - VEGETARIAN: In addition to 1, do not `review' papers for those
>> conferences (and compensate with very accurate reviews of journal
>> papers) and do not participate in their committees.
>
> I understand this is your current position, but to me it is untenable
> and clearly the worst of the three.  If you are going to submit papers
> to a conference, you ought to be willing to review papers for it.
> Only seems fair.  End of story.
>
>> 3 - VEGAN: Do not submit to, participate in, review for, or organize
>> conferences with published proceedings.
>
> This position is undeniably consistent.  However, if it is even too
> extreme for you, Alessio, I suspect it is unlikely to catch on.
> I'm going to invoke my moderator's prerogative to cut the discussion
> at this point concerning the best way to subvert the conference
> system.  If you have constructive suggestions to add, feel free to
> send them to me or Alessio, and we can compile them into a single post
> at a later date.
>
> Best regards,
> Derek

From rwh at cs.cmu.edu  Thu Mar 4 11:26:51 2010
From: rwh at cs.cmu.edu (Robert Harper)
Date: Thu, 4 Mar 2010 11:26:51 -0500
Subject: [TYPES] voevodsky lecture on type theory and homotopy theory
Message-ID: <5D7F61C4-3F87-4373-8FE4-DBDBDF8F8FDB@cs.cmu.edu>

Hello everyone,

The video of a lecture by Vladimir Voevodsky entitled "The Equivalence
Axiom and Univalent Models of Type Theory" given at Carnegie Mellon on
Feb 4 is available online at
http://wms2.andrew.cmu.edu:81/nmvideo/SCS_lecture2b-4-10-hint.mov .
Some related notes and papers may be found on Vladimir's home page:
http://www.math.ias.edu/~vladimir/Site3/home.html .

Bob Harper

From gdp at inf.ed.ac.uk  Mon Mar 22 09:51:22 2010
From: gdp at inf.ed.ac.uk (Gordon Plotkin)
Date: Mon, 22 Mar 2010 13:51:22 +0000
Subject: [TYPES] Very sad news
Message-ID: 

Dear Colleagues,

I am deeply saddened to pass on the following message from Barney and
Chloé Milner: "We are sorry to announce that Robin Milner died on
Saturday 20th March, in Cambridge, just three days after the funeral of
his wife, Lucy. He will be greatly missed by his family and friends, as
well as the academic community."

Gordon Plotkin

From kyagrd at gmail.com  Mon May 17 17:57:59 2010
From: kyagrd at gmail.com (Ahn, Ki Yung)
Date: Mon, 17 May 2010 14:57:59 -0700
Subject: [TYPES] Conversion elimination lemma for Pure Type Systems?
Message-ID: <4BF1BBE7.5050102@gmail.com>

While I'm trying to prove a property of a calculus which is a variation
of PTS, I needed a conversion elimination lemma for PTS, which can be
stated as follows:

"When a PTS typing judgment |- M : A is provable and the term M and its
type A are both in normal form, then there exists a derivation that
does not use any conversion rule."

Of course, I am assuming the equality in the conversion rule of PTS to
be beta-equality only, since other definitional equalities (e.g. s1 = s2
for some sorts s1 and s2) can obviously require the conversion rule even
for judgments about terms and types in normal form.

This seems like quite an obvious property, so I am wondering whether
there have been any proofs of such a lemma in the literature?

Thanks in advance.

From drl at cs.cmu.edu  Mon May 17 20:22:38 2010
From: drl at cs.cmu.edu (Dan Licata)
Date: Mon, 17 May 2010 20:22:38 -0400
Subject: [TYPES] Conversion elimination lemma for Pure Type Systems?
In-Reply-To: <4BF1BBE7.5050102@gmail.com>
References: <4BF1BBE7.5050102@gmail.com>
Message-ID: <20100518002238.GD6131@cs.cmu.edu>

Hi Ki Yung,

If I've understood you correctly, then I don't think the theorem that
you want is true.  Consider the following example in LF (which is a PTS
with Pi-kinds and Pi-types), written in Twelf notation:

signature:

  o : type.
  t : o -> type.
  k : o.
  r : {f : o -> o} t (f k).

  example : t k = r ([ x ] x).

Both the type of example (t k) and the term (r ([ x ] x)) are normal.
However, to show the typing, you have to use conversion to show that
t (([ x ] x) k) = t k:

                                              x : o |- x : o
                                            ------------------
   r : {f : o -> o} t (f k)                 ([ x ] x) : o -> o
   -----------------------------------------------------------
    r ([ x ] x) : t (([ x ] x) k)      t (([ x ] x) k) = t k
   -----------------------------------------------------------
                      r ([ x ] x) : t k

A technique called "hereditary substitution" might be helpful to you, as
it builds the uses of conversion into the substitution operation, so you
get better control over where in the derivation conversion happens.
Some introductions to it are here:

http://www.cs.cmu.edu/~fp/papers/andrews08.pdf
http://www.cs.cmu.edu/~drl/pubs/hl07mechanizing/hl07mechanizing.pdf
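To give the flavor in code: below is a toy rendering of hereditary
substitution for the simply typed lambda-calculus, in Haskell.  This is
my own sketch, not the formulation in the papers above, and it assumes
all bound names are distinct, so it does no freshness or
capture-avoidance bookkeeping:

  -- Normal forms (Nf) and neutral terms (Ne); hereditary substitution
  -- maps normal forms to normal forms by construction.
  data Ty = Base | Arr Ty Ty
  data Nf = Lam String Nf | Ne Ne
  data Ne = Var String | App Ne Nf

  -- substNf u x a m: substitute u (of type a) for x in the normal form m.
  substNf :: Nf -> String -> Ty -> Nf -> Nf
  substNf u x a (Lam y m)
    | y == x    = Lam y m                   -- x is shadowed: nothing to do
    | otherwise = Lam y (substNf u x a m)   -- assumes y is not free in u
  substNf u x a (Ne n) =
    case substNe u x a n of
      Left (m, _) -> m      -- the head variable was x; result is normal
      Right n'    -> Ne n'  -- the head was some other variable

  -- Left (m, b) means the head of the neutral term was x, so any redexes
  -- created by plugging in u have been reduced on the fly; b is the type
  -- of the result, and it shrinks at each such reduction, which is what
  -- makes the whole thing terminate.
  substNe :: Nf -> String -> Ty -> Ne -> Either (Nf, Ty) Ne
  substNe u x a (Var y)
    | y == x    = Left (u, a)
    | otherwise = Right (Var y)
  substNe u x a (App n v) =
    let v' = substNf u x a v in
    case substNe u x a n of
      Right n'                -> Right (App n' v')
      Left (Lam y m, Arr s t) -> Left (substNf v' y s m, t)  -- reduce the
                                 -- new redex, hereditarily, at smaller type s
      Left (Ne n',   Arr s t) -> Left (Ne (App n' v'), t)
      Left _                  -> error "ill-typed input"

The case relevant to your question is the (Lam y m, Arr s t) one: the
conversions your lemma wants to eliminate correspond exactly to the
reductions performed there, so in this style you can see precisely where
conversion is and is not needed.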
-Dan

On May 17, Ahn, Ki Yung wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> While I'm trying to prove a property of a calculus which is a variation
> of PTS, I needed a conversion elimination lemma for PTS, which can be
> stated as follows:
>
> "When a PTS typing judgment |- M : A is provable and the term M and its
> type A are both in normal form, then there exists a derivation that
> does not use any conversion rule."
>
> Of course, I am assuming the equality in the conversion rule of PTS to
> be beta-equality only, since other definitional equalities (e.g. s1 = s2
> for some sorts s1 and s2) can obviously require the conversion rule even
> for judgments about terms and types in normal form.
>
> This seems like quite an obvious property, so I am wondering whether
> there have been any proofs of such a lemma in the literature?
>
> Thanks in advance.

From xhli06 at 163.com  Thu May 20 22:56:55 2010
From: xhli06 at 163.com (Xuhui Li)
Date: Fri, 21 May 2010 10:56:55 +0800 (CST)
Subject: [TYPES] what does a negative proposition represent in type theory?
Message-ID: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>

Hi, All

In type theory there is a proper correspondence between compound
propositions and types, such as A & B, A -> B. However, what does a
negative proposition like "¬A" mean? What is an element of the type
"¬A"? Further, suppose x:A and y:¬A; are there certain rules to reduce
x and y, just as one reduces A&¬A in a logic system?

Thanks.

Xuhui

From kthielen at liquidnet.com  Fri May 21 09:48:02 2010
From: kthielen at liquidnet.com (Kalani Thielen)
Date: Fri, 21 May 2010 09:48:02 -0400
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
Message-ID: 

I've seen negation interpreted as continuations (not A = A -> bottom).
Maybe this paper will be helpful:

http://www.cs.rutgers.edu/~ccshan/polar/paper.pdf

With your example, where x:A and y:not A, the pair (x,y) of type
A&(not A) = A&(A -> bottom) can be "reduced" in the way that you suggest
by applying y to x (yielding bottom -- i.e.: giving the continuation
what it wants).

Hope this helps (and/or more qualified people can explain in more detail).
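To make that concrete, here is a minimal sketch in Haskell -- the names
are mine, chosen just for illustration, and Void can be any empty type
(the standard library's Data.Void works):

  import Data.Void (Void, absurd)

  -- "not A" is read as "A -> bottom"; Void plays the role of bottom.
  type Not a = a -> Void

  -- From x : A and y : Not A we can always manufacture a contradiction:
  contradiction :: a -> Not a -> Void
  contradiction x y = y x   -- give the continuation what it wants

  -- ...and from bottom anything follows (ex falso quodlibet):
  anything :: a -> Not a -> b
  anything x y = absurd (contradiction x y)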
-----Original Message-----
From: types-list-bounces at lists.seas.upenn.edu
[mailto:types-list-bounces at lists.seas.upenn.edu] On Behalf Of Xuhui Li
Sent: Thursday, May 20, 2010 10:57 PM
To: types-list at lists.seas.upenn.edu
Subject: [TYPES] what does a negative proposition represent in type theory?

[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

Hi, All

In type theory there is a proper correspondence between compound
propositions and types, such as A & B, A -> B. However, what does a
negative proposition like "¬A" mean? What is an element of the type
"¬A"? Further, suppose x:A and y:¬A; are there certain rules to reduce
x and y, just as one reduces A&¬A in a logic system?

Thanks.

Xuhui

From u.s.reddy at cs.bham.ac.uk  Fri May 21 17:11:35 2010
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Fri, 21 May 2010 22:11:35 +0100
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
Message-ID: <19446.63239.839000.470771@gargle.gargle.HOWL>

Xuhui Li writes:

> In type theory there is a proper correspondence between compound
> propositions and types, such as A & B, A -> B. However, what does a
> negative proposition like "¬A" mean? What is an element of the type
> "¬A"? Further, suppose x:A and y:¬A; are there certain rules to reduce
> x and y, just as one reduces A&¬A in a logic system?

If you are talking about intuitionistic type theory, then not(A) is just
shorthand for A -> false.  Everything else follows from that.

Cheers,
Uday Reddy

From noam.zeilberger at gmail.com  Fri May 21 17:25:33 2010
From: noam.zeilberger at gmail.com (Noam Zeilberger)
Date: Fri, 21 May 2010 23:25:33 +0200
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: <19656f.a8c2.128bb2b44d5.Coremail.xhli06@163.com>
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
	<19656f.a8c2.128bb2b44d5.Coremail.xhli06@163.com>
Message-ID: 

Well yes, you're right that there are other interpretations of negation
-- I wasn't sure what you were looking for!  With this clarification,
I think the question you are asking is really at the "extrinsic" or
"refinement type" level (related to intersection and union types,
etc.).

Before we can get there, we need to start at the intrinsic level.

Let's say that a type A denotes some set of values.
At this level, the standard interpretation of negation is as building
the set of continuations, i.e., the type ~A denotes the set of
continuations for A with answer type _|_.

Given a value V : A and a continuation K : ~A, we can combine them to
produce a "contradiction" K $ V : _|_, in other words just a program
that computes an answer (of type _|_) by passing V to K.

But you're interested in a different notion of
negation-as-complementation, so let's continue...

At the extrinsic level, we consider *refinements* S,T of A, which can
be interpreted as subsets, or properties of A-values.  For example, if
A = naturals, we can define S = primes, T = evens, etc.  And now, yes,
it makes sense to consider an additional, very different form of
negation (which I'll write -) as complementation.  The refinement type
-S denotes a set of values, rather than a set of continuations: the
set of A-values which don't satisfy S.

Now, how does non-contradiction fit in?  Well, there is certainly no
contradiction about a type like S \times -S: as you said, e.g., this
could simply denote pairs of prime and non-prime numbers.  However, at
the extrinsic level we can also consider "logical conjunction" as
intersection rather than product.  And the type S \cap -S is
contradictory in a very concrete sense: it denotes the set of A-values
which both satisfy S and don't satisfy S -- in other words the empty
set.

Noam

On Fri, May 21, 2010 at 4:01 PM, Xuhui Li wrote:
> Hi Noam,
>
> Thanks for your patient reply. Sorry for the wrong "negation" symbol
> that confused you.
>
> I know that ~A can be defined as A->_|_. However, this interpretation
> is not "intuitive".
> Firstly, A and ~A are interpreted as complementary sets in common
> models, but here ~A is a function.
> Secondly, for x:A and y:(~A), (x,y):(A & ~A) = y(x)?
> Is this equation applied just to A&~A, or to any type A & B?
> For example, if we treat ~A as the complementary proposition of A,
> let's assume A is the type of prime numbers; then x:A means x is a
> prime number, and y:~A means y is not a prime number. (x,y) means a
> pair of a prime number with any object other than a prime number. Can
> this lead to a contradiction?
>
> Best Regards
> Xuhui
>
>> -----Original Message-----
>> From: "Noam Zeilberger"
>> Sent: Friday, May 21, 2010
>> To: "Xuhui Li"
>> Cc:
>> Subject: Re: [TYPES] what does a negative proposition represent in
>> type theory?
>>
>> Hello Xuhui,
>>
>> I am unfamiliar with your notation: what is the "?" connective?
>> If you mean negation (which I've usually seen written ~ or ¬), then
>> the standard interpretation of an element of type ~A is as a
>> "continuation".  There is a close link between the different
>> double-negation translations of classical into intuitionistic/minimal
>> logic (by Kolmogorov, Gödel, etc.), and different
>> "continuation-passing transformations" on the lambda calculus.  I'd
>> be happy to point you to references if you're interested (and
>> apologies if this was not what you were talking about).
>>
>> Noam
>>
>> On Fri, May 21, 2010 at 4:56 AM, Xuhui Li wrote:
>> > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>> >
>> > Hi, All
>> >
>> > In type theory there is a proper correspondence between compound
>> > propositions and types, such as A & B, A -> B. However, what does a
>> > negative proposition like "¬A" mean? What is an element of the type
>> > "¬A"? Further, suppose x:A and y:¬A; are there certain rules to
>> > reduce x and y, just as one reduces A&¬A in a logic system?
>> >
>> > Thanks.
>> >
>> > Xuhui

From u.s.reddy at cs.bham.ac.uk  Fri May 21 19:06:03 2010
From: u.s.reddy at cs.bham.ac.uk (Uday S Reddy)
Date: Sat, 22 May 2010 00:06:03 +0100
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: 
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
	<19656f.a8c2.128bb2b44d5.Coremail.xhli06@163.com>
Message-ID: <19447.4571.556000.938374@gargle.gargle.HOWL>

This exchange suggests that you are not talking about intuitionistic
type theory, but something else.  But it is not clear what.

> > Secondly, for x:A and y:(~A), (x,y):(A & ~A) = y(x)? Is this
> > equation applied just to A&~A, or to any type A & B?
> > For example, if we treat ~A as the complementary proposition of A,
> > let's assume A is the type of prime numbers; then x:A means x is a
> > prime number, and y:~A means y is not a prime number. (x,y) means a
> > pair of a prime number with any object other than a prime number.
> > Can this lead to a contradiction?

This example is misleading.  "prime number" is not a proposition, but
rather a predicate.  You can regard "there exists a prime number" as a
proposition, and a particular prime number x as its witness.  However,
note that ~A is then "there does not exist a prime number", for which
there is no proof.

If you are trying to produce a logic where A & ~A has a proof, it is
not likely to be a very good logic, is it?

Cheers,
Uday Reddy

> > Best Regards
> > Xuhui
> >
> >> -----Original Message-----
> >> From: "Noam Zeilberger"
> >> Sent: Friday, May 21, 2010
> >> To: "Xuhui Li"
> >> Cc:
> >> Subject: Re: [TYPES] what does a negative proposition represent in
> >> type theory?
> >>
> >> Hello Xuhui,
> >>
> >> I am unfamiliar with your notation: what is the "?" connective?  If
> >> you mean negation (which I've usually seen written ~ or ¬), then
> >> the standard interpretation of an element of type ~A is as a
> >> "continuation".  There is a close link between the different
> >> double-negation translations of classical into
> >> intuitionistic/minimal logic (by Kolmogorov, Gödel, etc.), and
> >> different "continuation-passing transformations" on the lambda
> >> calculus.  I'd be happy to point you to references if you're
> >> interested (and apologies if this was not what you were talking
> >> about).
> >>
> >> Noam
> >>
> >> On Fri, May 21, 2010 at 4:56 AM, Xuhui Li wrote:
> >> > Hi, All
> >> >
> >> > In type theory there is a proper correspondence between compound
> >> > propositions and types, such as A & B, A -> B. However, what does
> >> > a negative proposition like "¬A" mean? What is an element of the
> >> > type "¬A"? Further, suppose x:A and y:¬A; are there certain rules
> >> > to reduce x and y, just as one reduces A&¬A in a logic system?
> >> >
> >> > Thanks.
> >> >
> >> > Xuhui

From avik at cs.umd.edu  Fri May 21 23:22:27 2010
From: avik at cs.umd.edu (Avik Chaudhuri)
Date: Fri, 21 May 2010 23:22:27 -0400
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: 
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
	<19656f.a8c2.128bb2b44d5.Coremail.xhli06@163.com>
Message-ID: 

Following up on Noam's comments, complementary types indeed seem to be
quite useful for logical reasoning about programs (e.g., in scripting
languages with typecase).

A subtyping of the form A < ~B can be particularly useful.  This
logically means A \cap B = \emptyset, and is equivalent to B < ~A.  In
"program analysis" terminology, it means that values of type A cannot
alias values of type B.

-Avik.

On May 21, 2010, at 5:25 PM, Noam Zeilberger wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Well yes, you're right that there are other interpretations of negation
> -- I wasn't sure what you were looking for!  With this clarification,
> I think the question you are asking is really at the "extrinsic" or
> "refinement type" level (related to intersection and union types,
> etc.).
>
> Before we can get there, we need to start at the intrinsic level.
>
> Let's say that a type A denotes some set of values.
> At this level, the standard interpretation of negation is as building
> the set of continuations, i.e., the type ~A denotes the set of
> continuations for A with answer type _|_.
>
> Given a value V : A and a continuation K : ~A, we can combine them to
> produce a "contradiction" K $ V : _|_, in other words just a program
> that computes an answer (of type _|_) by passing V to K.
>
> But you're interested in a different notion of
> negation-as-complementation, so let's continue...
>
> At the extrinsic level, we consider *refinements* S,T of A, which can
> be interpreted as subsets, or properties of A-values.  For example, if
> A = naturals, we can define S = primes, T = evens, etc.  And now, yes,
> it makes sense to consider an additional, very different form of
> negation (which I'll write -) as complementation.  The refinement type
> -S denotes a set of values, rather than a set of continuations: the
> set of A-values which don't satisfy S.
>
> Now, how does non-contradiction fit in?
> Well, there is certainly no
> contradiction about a type like S \times -S: as you said, e.g., this
> could simply denote pairs of prime and non-prime numbers.  However, at
> the extrinsic level we can also consider "logical conjunction" as
> intersection rather than product.  And the type S \cap -S is
> contradictory in a very concrete sense: it denotes the set of A-values
> which both satisfy S and don't satisfy S -- in other words the empty
> set.
>
> Noam
>
> On Fri, May 21, 2010 at 4:01 PM, Xuhui Li wrote:
>> Hi Noam,
>>
>> Thanks for your patient reply. Sorry for the wrong "negation" symbol
>> that confused you.
>>
>> I know that ~A can be defined as A->_|_. However, this interpretation
>> is not "intuitive".
>> Firstly, A and ~A are interpreted as complementary sets in common
>> models, but here ~A is a function.
>> Secondly, for x:A and y:(~A), (x,y):(A & ~A) = y(x)? Is this equation
>> applied just to A&~A, or to any type A & B?
>> For example, if we treat ~A as the complementary proposition of A,
>> let's assume A is the type of prime numbers; then x:A means x is a
>> prime number, and y:~A means y is not a prime number. (x,y) means a
>> pair of a prime number with any object other than a prime number.
>> Can this lead to a contradiction?
>>
>> Best Regards
>> Xuhui

From gc at pps.jussieu.fr  Sun May 23 11:19:38 2010
From: gc at pps.jussieu.fr (Giuseppe Castagna)
Date: Sun, 23 May 2010 23:19:38 +0800
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: 
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
	<19656f.a8c2.128bb2b44d5.Coremail.xhli06@163.com>
Message-ID: <4BF9478A.7030906@pps.jussieu.fr>

Le 21/05/2010 23:25, Noam Zeilberger a écrit :
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Well yes, you're right that there are other interpretations of negation
> -- I wasn't sure what you were looking for!
> With this clarification, I think the question you are asking is really
> at the "extrinsic" or "refinement type" level (related to intersection
> and union types, etc.).
>
> Before we can get there, we need to start at the intrinsic level.
>
> Let's say that a type A denotes some set of values.
> At this level, the standard interpretation of negation is as building
> the set of continuations, i.e., the type ~A denotes the set of
> continuations for A with answer type _|_.

Or, you can take the "semantic subtyping" setting, where types are sets
of values, and union, intersection, containment *and* negation types are
interpreted set-theoretically.  So in particular not(A) is the set of
all values that do not have type A.  However, you end up in a more
syntactic world, since for instance the implication A -> B is the set of
well-typed lambda abstractions such that, when applied to an expression
of type A, if they return a value, then this value is in B; but for
cardinality reasons it cannot denote the set of functions from A to B.
(For more details see JACM vol. 55, n. 4, pp. 1-64, 2008.)

Beppe

From noam.zeilberger at gmail.com  Mon May 24 07:40:55 2010
From: noam.zeilberger at gmail.com (Noam Zeilberger)
Date: Mon, 24 May 2010 13:40:55 +0200
Subject: [TYPES] what does a negative proposition represent in type theory?
In-Reply-To: <4BF9478A.7030906@pps.jussieu.fr>
References: <21e59c.2742.128b8cac1ca.Coremail.xhli06@163.com>
	<19656f.a8c2.128bb2b44d5.Coremail.xhli06@163.com>
	<4BF9478A.7030906@pps.jussieu.fr>
Message-ID: 

Hello Giuseppe,

On Sun, May 23, 2010 at 5:19 PM, Giuseppe Castagna wrote:
> Le 21/05/2010 23:25, Noam Zeilberger a écrit :
>> Well yes, you're right that there are other interpretations of
>> negation -- I wasn't sure what you were looking for!  With this
>> clarification, I think the question you are asking is really at the
>> "extrinsic" or "refinement type" level (related to intersection and
>> union types, etc.).
>> [...]
> Or, you can take the "semantic subtyping" setting, where types are sets
> of values, and union, intersection, containment *and* negation types
> are interpreted set-theoretically.

Yes, I meant "extrinsic" in the general sense, which includes settings
like semantic subtyping.  The terminology is from Reynolds, and is
roughly what people sometimes call the "Curry view", opposed to the
"Church view".  In "The Meaning of Types: From Intrinsic to Extrinsic
Semantics" he writes:

   In an _intrinsic semantics_, only phrases that satisfy typing
   judgements have meanings.  Indeed, meanings are assigned to the
   typing judgements, rather than to the phrases themselves, so that a
   phrase that satisfies several judgements will have several meanings.
   [...]
   In contrast, in an _extrinsic semantics_, the meaning of each phrase
   is the same as it would be in an untyped language, regardless of its
   typing properties.  In this view, a typing judgement is an assertion
   that the meaning of a phrase possesses some property.

The extrinsic view is not incompatible with the intrinsic view -- it
can be seen as a refinement, adding another layer of "semantic types"
or "refinement types" to an already intrinsically-typed language.
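As a toy model of that extrinsic layer -- only a sketch, which ignores
decidability entirely and pretends refinements are ordinary Boolean
predicates -- one could write, in Haskell:

  -- A "refinement" of a is modelled crudely as a predicate on a-values.
  type Ref a = a -> Bool

  comp :: Ref a -> Ref a             -- complementation:  -S
  comp s = not . s

  inter :: Ref a -> Ref a -> Ref a   -- conjunction as intersection
  inter s t = \x -> s x && t x

  isPrime :: Integer -> Bool         -- a sample refinement of the naturals
  isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. n - 1]

  -- inter isPrime (comp isPrime) is the everywhere-false predicate: the
  -- refinement S /\ -S denotes the empty subset of values, while the
  -- pair type S \times -S lives at pairs and is perfectly inhabited.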
So with Xuhui's example, we can consider "is prime" as a property of
untyped values, or more precisely of intrinsically-typed natural numbers
-- both approaches make sense, but with the latter, when we take the
complement property we get the slightly more manageable set of non-prime
naturals, rather than the set of non-prime naturals together with all
untyped values which are not natural numbers (of course we could cut
down the latter by intersection with the property "is natural").

But in any case, my point was just that Xuhui's question was a bit
confusing, but makes sense theoretically under the view of
types-as-properties.  (This is brushing aside important questions such
as: Does it make sense to negate *any* property? And how do you design a
decidable type system with complementation?)

Noam

From rpollack at inf.ed.ac.uk  Mon May 24 08:05:36 2010
From: rpollack at inf.ed.ac.uk (Randy Pollack)
Date: Mon, 24 May 2010 13:05:36 +0100
Subject: [TYPES] Conversion elimination lemma for Pure Type Systems?
Message-ID: <19450.27536.741465.48641@locatelli.inf.ed.ac.uk>

There has been some thinking about where conversion is necessary in PTS
derivations.  I suggest you look at:

@article{Jutting93,
  author  = "L.S. van Benthem Jutting",
  title   = "Typing in {P}ure {T}ype {S}ystems",
  journal = ic,
  year    = 1993,
  pages   = "30--41",
  volume  = "105",
  number  = "1",
  month   = jul
}

@InProceedings{JuttingMcKinnaPollack93,
  author    = "L.S. van Benthem Jutting and J. McKinna and R. Pollack",
  title     = "Checking Algorithms for {P}ure {T}ype {S}ystems",
  booktitle = "TYPES'93: Workshop on Types for Proofs and Programs,
               Selected Papers",
  editor    = "Barendregt and Nipkow",
  pages     = "19--61",
  volume    = 806,
  series    = lncs,
  publisher = springer,
  year      = 1994
}

Best,
Randy Pollack

From francois.garillot at inria.fr  Wed Jun 2 05:37:46 2010
From: francois.garillot at inria.fr (François Garillot)
Date: Wed, 02 Jun 2010 11:37:46 +0200
Subject: [TYPES] References from TYPES '94 (& surrounding documents)
Message-ID: <87r5kpojv9.fsf@ks27096.kimsufi.com>

Over the last couple of weeks, I have been trying to get my hands on a
couple of hard-to-find references from Peter Aczel. They're mentioned in
(among others) the bibliography of _Typing Algorithm in Type Theory with
Inheritance_, by Amokrane Saibi (POPL '97, doi:10.1145/263699.263742),
and are relevant to the coercion system of the Coq theorem prover:

  P. Aczel. _A notion of class for theory development in algebra (in a
  predicative type theory)_. Presented at the Workshop for Proofs and
  Programs, Bastad, Sweden, June 1994.

  P. Aczel. _Simple overloading for type theories_. Privately circulated
  notes, 1994.

I would be very grateful if any of the denizens of the mailing list
could provide any help as to how I might get a copy of those. My
attempts at finding them online, on the shelves of some of my older
colleagues, or by directly contacting the author have thus far been
unsuccessful.

-- 
François Garillot

From nr at cs.tufts.edu  Thu Jun 10 14:00:20 2010
From: nr at cs.tufts.edu (Norman Ramsey)
Date: Thu, 10 Jun 2010 14:00:20 -0400
Subject: [TYPES] seeking correct term for a quasi-dependent type
Message-ID: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu>

A new feature of the Glasgow Haskell Compiler, called 'indexed type
families', makes it possible to write something that isn't a true
dependent type but isn't parametrically polymorphic either.
I'd like help figuring out what to call it. Here's an example that is pretty close to what we're actually doing: outputFact :: forall a . a -> Info a Although 'Info' has the same syntax as a parametric type *constructor*, it is actually a *partial* *function* on types: data Fact = ... data Middle = ... data Last = ... newtype Label = Label Int -- here follows the definition of 'Info': type family Info a :: * type instance Info Middle = Fact type instance Info Last = [(Label, Fact)] How do I describe the function 'outputFact'? It's not a classic dependent function, because the type of the result doesn't depend on the value of the argument. But the type of the result *does* depend on the static *type* of the argument. And outputFact is less polymorphic than it might appear, since the bound type variable 'a' cannot be instantiated at any type, but only at a type in the domain of the type function 'Info'. I'm a little reluctant to call 'outputFact' a "dependent function" or to say that it has a "dependent-function type", because those words mean something else. But I'm at a loss as to how to describe it. What sort of type is the type forall a . a -> Info a ? Norman From Tim.Sweeney at epicgames.com Thu Jun 10 15:46:37 2010 From: Tim.Sweeney at epicgames.com (Tim Sweeney) Date: Thu, 10 Jun 2010 15:46:37 -0400 Subject: [TYPES] seeking correct term for a quasi-dependent type In-Reply-To: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu> References: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu> Message-ID: Norman, I'd say Info is a "type-indexed type", so outputFact is a "type-indexed function". I see a type-indexed type is weaker than: * A "type-dependent type" -- which implies a richer mechanism for expressing dependencies beyond case decomposition by exact-match of types. * A "value-dependent type" -- e.g. natural numbers less than i, for a non-constant natural number variable i. * A dependent type, allowing mixing and matching dependency on types and values. Tim Sweeney Epic Games ________________________________________ From: types-list-bounces at lists.seas.upenn.edu [types-list-bounces at lists.seas.upenn.edu] On Behalf Of Norman Ramsey [nr at cs.tufts.edu] Sent: Thursday, June 10, 2010 2:00 PM To: types-list at lists.seas.upenn.edu Cc: nr at cs.tufts.edu Subject: [TYPES] seeking correct term for a quasi-dependent type [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ] A new feature of the Glasgow Haskell Compiler, called 'indexed type families', makes it possible to write something that isn't a true dependent type but isn't parametrically polymorphic either. I'd like help figuring out what to call it. Here's an example that is pretty close to what we're actually doing: outputFact :: forall a . a -> Info a Although 'Info' has the same syntax as a parametric type *constructor*, it is actually a *partial* *function* on types: data Fact = ... data Middle = ... data Last = ... newtype Label = Label Int -- here follows the definition of 'Info': type family Info a :: * type instance Info Middle = Fact type instance Info Last = [(Label, Fact)] How do I describe the function 'outputFact'? It's not a classic dependent function, because the type of the result doesn't depend on the value of the argument. But the type of the result *does* depend on the static *type* of the argument. 
And outputFact is less polymorphic than it might appear, since the bound type variable 'a' cannot be instantiated at any type, but only at a type in the domain of the type function 'Info'. I'm a little reluctant to call 'outputFact' a "dependent function" or to say that it has a "dependent-function type", because those words mean something else. But I'm at a loss as to how to describe it. What sort of type is the type forall a . a -> Info a ? Norman From jgmorris at cs.pdx.edu Thu Jun 10 15:56:23 2010 From: jgmorris at cs.pdx.edu (J. Garrett Morris) Date: Thu, 10 Jun 2010 12:56:23 -0700 Subject: [TYPES] seeking correct term for a quasi-dependent type In-Reply-To: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu> References: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu> Message-ID: On Thu, Jun 10, 2010 at 11:00 AM, Norman Ramsey wrote: > A new feature of the Glasgow Haskell Compiler, called 'indexed type > families', makes it possible to write something that isn't a true > dependent type but isn't parametrically polymorphic either. This is not new with type families; exactly the same sort of type can be achieved with functional dependencies. For instance: class F t u | t -> u -- requires that 'u' be functionally dependent on 't' instance F Int Bool instance F Bool Char foo :: F t u => t -> u foo = ... In this case, the type of foo cannot range freely in both t and u; while Int -> Bool is a valid typing, Int -> Char is not. It might seem that this case is not as tricky, since there is an explicit qualifier limiting the type of foo. However, the differences between this version and the version with type families are purely syntactic. There have been various suggestions for allowing classes with functional dependencies to be written as functions, which would allow the declaration of foo to be written: foo :: t -> F t Perhaps doing the reverse would help to explain the type in your message? If instead of writing outputFact :: a -> Info a we wrote outputFact :: b ~ Info a => a -> b then it looks more like a "conventional" qualified type. /g From dan.doel at gmail.com Thu Jun 10 16:06:10 2010 From: dan.doel at gmail.com (Dan Doel) Date: Thu, 10 Jun 2010 16:06:10 -0400 Subject: [TYPES] seeking correct term for a quasi-dependent type In-Reply-To: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu> References: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu> Message-ID: <201006101606.10448.dan.doel@gmail.com> On Thursday 10 June 2010 2:00:20 pm Norman Ramsey wrote: > A new feature of the Glasgow Haskell Compiler, called 'indexed type > families', makes it possible to write something that isn't a true > dependent type but isn't parametrically polymorphic either. > I'd like help figuring out what to call it. Here's an > example that is pretty close to what we're actually doing: > > outputFact :: forall a . a -> Info a > > Although 'Info' has the same syntax as a parametric type > *constructor*, it is actually a *partial* *function* on types: > > data Fact = ... > data Middle = ... > data Last = ... > newtype Label = Label Int > > -- here follows the definition of 'Info': > type family Info a :: * > type instance Info Middle = Fact > type instance Info Last = [(Label, Fact)] > > How do I describe the function 'outputFact'? > It's not a classic dependent function, because the type of the result > doesn't depend on the value of the argument. But the type of the > result *does* depend on the static *type* of the argument. 
>
> And outputFact is less polymorphic than it might appear, since the
> bound type variable 'a' cannot be instantiated at any type, but only
> at a type in the domain of the type function 'Info'.
>
> I'm a little reluctant to call 'outputFact' a "dependent function"
> or to say that it has a "dependent-function type", because those
> words mean something else.  But I'm at a loss as to how to describe it.
>
> What sort of type is the type
>
>    forall a . a -> Info a

Are you certain functions with that type are not parametric?  I can
only think of one (or two, depending on seq):

  undefined   -- \_ -> undefined

You will get errors if you try to instantiate 'a' at a type without an
instance, but the definition is parametric.  The only ways you can
create non-trivial functions of the type are:

1) Type classes

  class OutputFact a where
    outputFact :: a -> Info a

  instance OutputFact Middle where
    ...

2) GADTs

  data T a where
    M :: T Middle
    L :: T Last

  outputFact :: forall a. T a -> a -> Info a
  outputFact M x = ...
  outputFact L x = ...

But neither of those functions has the type 'forall a. a -> Info a'.
The former has a constraint signaling its use of type-case, and the
latter has the GADT parameter (I'm still not certain how to think about
GADTs with regard to parametricity, though).  And it is necessary to use
one of these two methods (or write at a specific, concrete type),
because without knowing what 'a' is, your function cannot know what
'Info a' is, so undefined will be the only value available for it to
produce.

Info itself is not parametric (using a somewhat generalized notion of
parametricity; the usual one applies to value-level functions, I
believe), but neither is 'data Id a = Id a' if you really get into it.

--

As for how to describe things like Info: type families are like
defining types by recursion.  You see this in dependently typed
languages.  For instance, vectors can be defined by recursion over the
size index:

  Vec : Type -> Nat -> Type
  Vec A z     = Unit
  Vec A (s n) = A * Vec A n

The oddities are that:

1) type families are open and partial

2) type families use recursion over *types*, whereas you can't do that
(directly) in most dependently typed languages, since types themselves
aren't defined by induction, though you could define an inductive type
of codes to recurse over.

This isn't unique to type families, though, because classes are the
same way: open, partial and defined by recursion over the structure of
types; they're just for defining values that way, rather than types
(hence why they're one of the ways to write outputFact above).
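For concreteness, here is a complete module along those lines that
compiles as-is.  The constructor definitions are invented (you elided
your real ones), so treat it as a sketch of the two options rather than
your actual code:

  {-# LANGUAGE TypeFamilies, GADTs #-}

  data Fact   = Fact           -- hypothetical stand-ins for the
  data Middle = Middle         -- definitions elided in the original post
  data Last   = Last
  newtype Label = Label Int

  type family Info a :: *
  type instance Info Middle = Fact
  type instance Info Last   = [(Label, Fact)]

  -- (1) the class route: a constraint records the use of type-case
  class OutputFact a where
    outputFact :: a -> Info a

  instance OutputFact Middle where
    outputFact Middle = Fact

  instance OutputFact Last where
    outputFact Last = []

  -- (2) the GADT route: a run-time witness of which type we are at
  data T a where
    M :: T Middle
    L :: T Last

  outputFact' :: T a -> a -> Info a
  outputFact' M _ = Fact
  outputFact' L _ = []

Either way, the type-case shows up in the type: as the class constraint
in (1), or as the witness T a in (2).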
-- Dan

From nr at cs.tufts.edu  Fri Jun 11 18:15:50 2010
From: nr at cs.tufts.edu (Norman Ramsey)
Date: Fri, 11 Jun 2010 18:15:50 -0400
Subject: Re: [TYPES] seeking correct term for a quasi-dependent type
In-Reply-To: <20100610180020.BDBCF60156F67@labrador.cs.tufts.edu>
Message-ID: <20100611221550.438D860258F00@labrador.cs.tufts.edu>

Regarding

  > outputFact :: forall a . a -> Info a
  > type family Info a :: *
  > ...

Thanks to all for your help.  I am better informed, and at the very
least I know that there is not a standard term for what I am looking
for.  I like the analogy with functional dependencies and the qualified
type

  outputFact :: b ~ Info a => a -> b

Norman

From ggrov at staffmail.ed.ac.uk  Wed Jun 23 08:29:47 2010
From: ggrov at staffmail.ed.ac.uk (Gudmund Grov)
Date: Wed, 23 Jun 2010 13:29:47 +0100
Subject: [TYPES] Call for Participation: SICSA Summer School on Formal
	Reasoning & Representation of Complex Systems
Message-ID: <6D4BC785-B823-4E96-8DF3-951EA9006260@staffmail.ed.ac.uk>

*** Apologies for multiple copies ***

Call for Participation
-----------------------

SSFRR 2010
SICSA Summer School on Formal Reasoning & Representation of Complex
Systems

14-15 August 2010 -- Heriot-Watt University campus -- Edinburgh
Satellite summer school of VSTTE 2010

http://dream.inf.ed.ac.uk/events/ssfrr-2010/
----------------------

ABOUT
---------

The summer school will give a broad overview of software verification
techniques, addressing both bottom-up and top-down approaches with a
strong focus on the formal representation and reasoning themes. The
school consists of eight introductory lectures, each concentrating on a
unique aspect of one or both of the overall themes. The topics of the
lectures include inductive theorem proving; SAT and SMT solving; proof
planning and rippling; rely/guarantee conditions; separation logic;
operating system verification; process algebras and formal analysis of
security.

The school is intended for PhD students and researchers working within
one or both of these themes; however, familiarity with any of the
techniques is not a prerequisite. All lectures are meant to be
introductory.

PRESENTERS
---------

The following will present at the summer school:

* Robert Atkey (University of Strathclyde) & Ewen Maclean (Heriot-Watt
  University)
* Alan Bundy & Lucas Dixon (University of Edinburgh)
* Jane Hillston (University of Edinburgh)
* Cliff Jones (University of Newcastle)
* Gerwin Klein (National ICT Australia)
* J Strother Moore (University of Texas at Austin)
* Natarajan Shankar (SRI)
* Graham Steel (INRIA)

REGISTRATION
---------

Registration is available from:

http://www.macs.hw.ac.uk/vstte10_reg/Registration.php

The registration fee is £110, which also covers materials and lunches.
Accommodation at the Heriot-Watt campus (£42.50 per night incl. VAT and
breakfast) can be included with your booking when you register. This is
highly recommended, since the summer school will coincide with several
of the famous Edinburgh festivals -- when hotel prices in town tend to
be very inflated.

SICSA will cover registration and two nights of campus accommodation for
SICSA students (students from most Scottish Universities -- see
http://www.sicsa.ac.uk/ to check if you are eligible). The number of
SICSA places is limited, and a decision on ranking will be taken only if
this number is exceeded.
PRELIMINARY PROGRAM
---------

The summer school has the following preliminary program (timing and
titles may still change):

Saturday:

* 09:00: Registration
* 09:30: J Moore -- Machines Reasoning about Machines - 39 Years and
  Counting
* 11:00: Coffee break
* 11:30: Gerwin Klein -- Specification and Refinement in Operating
  System Verification
* 13:00: Lunch
* 14:00: Bob Atkey/Ewen Maclean -- Amortised Resource Analysis and
  Functional Correctness with Separation Logic
* 15:30: Coffee break
* 16:00: Alan Bundy/Lucas Dixon -- Proof-planning, inductive reasoning,
  and beyond
* 17:30: End

Sunday:

* 09:30: Natarajan Shankar -- Verification using SAT and SMT solvers
* 11:00: Coffee break
* 11:30: Graham Steel -- Formal Analysis of Key Management APIs
* 13:00: Lunch
* 14:00: Cliff Jones -- Tackling concurrency by reasoning explicitly
  about inference
* 15:30: Coffee break
* 16:00: Jane Hillston -- From Milner to Markov and Back: Stochastic
  process algebras and their equivalence relations
* 17:30: End

VENUE
---------

The summer school is a satellite event of VSTTE 2010 (see
http://www.macs.hw.ac.uk/vstte10/) and will be held the two days before
the main event: Saturday 14th and Sunday 15th August 2010. Like VSTTE
2010, it will be held at the Edinburgh campus of Heriot-Watt University.

ORGANISERS
---------

The summer school is jointly organised by The School of Informatics at
Edinburgh University and The School of Mathematical and Computer
Sciences at Heriot-Watt University by:

* Lucas Dixon (Edinburgh)
* Gudmund Grov (Edinburgh)
* Ewen Maclean (Heriot-Watt)

CONTACT
---------

The organisers can be contacted at the following email address:
ssfrr-2010 at inf.ed.ac.uk.

-- 
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

From sweirich at cis.upenn.edu  Wed Jul 21 05:15:17 2010
From: sweirich at cis.upenn.edu (Stephanie Weirich)
Date: Wed, 21 Jul 2010 10:15:17 +0100
Subject: [TYPES] lazy evaluation
Message-ID: <8F01F8BA-EF48-48C7-AB75-52F0ADDF4702@cis.upenn.edu>

> ---------- Forwarded message ----------
> Date: Tue, 20 Jul 2010 06:51:09 +0100 (BST)
> From: Thomas Forster
> Subject: lazy evaluation
>
> I have just been giving a talk on evaluation strategies to the
> philosophers here (I'm in Canterbury, NZ) and one of the old-timers
> there suggested that lazy evaluation was first clearly described in
> Quine's *Methods of Logic*.  The edition I have is the British (RKP)
> edition of 1952, and there is indeed a discussion of lazy evaluation
> there - but not under that name, of course.  It is on pp. 26ff.
>
> Does anybody know of any earlier description of lazy evaluation than
> this?
>
> tf
>
> URL: www.dpmms.cam.ac.uk/~tf; DPMMS ph: +44-1223-337981;
> Canterbury office fone: +64-3-3642987 x 8152
> Mobile in NZ +64-210580093.
> (UEA ph: +44-1603-593588; mobile in UK +44-7887-701-562;)

From giacomo90 at libero.it  Tue Aug 24 08:46:18 2010
From: giacomo90 at libero.it (giacomo90@libero.it)
Date: Tue, 24 Aug 2010 14:46:18 +0200 (CEST)
Subject: [TYPES] Not able to define some recursive types in "Fullfomsubref"
Message-ID: <234122.2594651282653978552.JavaMail.root@wmail26>

Hello,

I am a CS student at Bologna University and I'm studying Pierce's book
on my own.

The first problem concerns the definition of inductive types with
variants. I want to define a datatype that in OCaml would be:

  # type 'a varexpr = EVal of 'a | EVar of string
                    | EList of ('a varexpr list)

So it would be typed as:
# Varexpr = lambda A. All X. (A->X)->(String->X)->(List X->X)->X;

I have no problems defining the constructor for the EVal or EVar case, but my problem is defining the EList case:

# elis = lambda A. lambda arg:List (Varexpr A).
    (lambda X. lambda l:(A->X). lambda r:(String->X). lambda t:(List X->X).
      (t ...... ) )

but it's not correct if I write (t arg), because I don't pass to arg the arguments "[X] l r t". And even the following form is not correct:

# ell = lambda A. lambda arg:List (Varexpr A).
    lambda X. lambda l:(A->X). lambda r:(String->X). lambda t:(List X->X).
      t (arg [List X] (lambda hd:X. lambda tl:List X. cons [X] (hd [H] l r t) tl ) (nil [X]));

Another problem that I have is the definition of a doubly recursive algorithm, such as deciding the equivalence of two values of the same type. For example, an algorithm in OCaml for defining equivalence recursively is:

# let rec eqnat x y = match x with
    | O -> ( match y with O -> true | _ -> false)
    | S t -> (match y with O -> false | S u -> eqnat t u)

But I cannot find a solution without using the letrec construct, which uses the fix operator; in fact the following form is not correct:

# /* using the constructors o and s(x), as in the OCaml version */
# eqnat = lambda x:Mynat. lambda y:Mynat.
    ( x [Bool] (lambda x1:Bool. (y [Bool] (lambda y1:Bool. ...x... y...? ) false ))
      (y [Bool] (lambda y1:Bool. false) true) );

Even if I could use an algorithm with one-way recursion such as:

# and = lambda a:Boolean. lambda b:Boolean. (a [Boolean] b f);
# isZer = lambda u:Natu. (u [Boolean] (lambda x:Boolean. f) t);
# sub = lambda a:Natu. lambda b:Natu. (b [Natu] (lambda x:Natu. precN x) a);
# eqN = lambda a:Natu. lambda b:Natu. and (isZer (sub a b)) (isZer (sub b a));

Another problem related to the first one is the definition of a simple recursive datatype that I could define in OCaml as follows:

# type 'a rec = In of ('a rec -> 'a)
# let out (In x) = x

And I think that it is of the type:

# Recu = lambda A. All X. ((X->A)->X)->X;

But, I think, for the same reason, I cannot define the constructor "In" for the datatype Recu. All this in order to define the Y operator as I could in OCaml:

# let y f = (fun x a -> f (out x x) a) (In (fun x a -> f (out x x) a))

I would really appreciate any answer to my questions or even a suggestion. Is my aim hopeless? Thanks. Regards.

Giacomo Bergami.

From wagnerdm at seas.upenn.edu Tue Aug 24 13:25:15 2010
From: wagnerdm at seas.upenn.edu (wagnerdm@seas.upenn.edu)
Date: Tue, 24 Aug 2010 13:25:15 -0400
Subject: [TYPES] Not able to define some recursive types in "Fullfomsubref"
In-Reply-To: <234122.2594651282653978552.JavaMail.root@wmail26>
References: <234122.2594651282653978552.JavaMail.root@wmail26>
Message-ID: <20100824132515.13851erhcs4bhuo0@webmail.seas.upenn.edu>

Quoting "giacomo90 at libero.it" :

> the arguments "[X] l r t". And even the following form is not correct:
>
> # ell = lambda A. lambda arg:List (Varexpr A).
>     lambda X. lambda l:(A->X). lambda r:(String->X). lambda t:(List X->X).
>       t (arg [List X] (lambda hd:X. lambda tl:List X. cons [X] (hd [H] l r t) tl ) (nil [X]));

That type annotation on hd looks suspicious. Perhaps you can make headway by changing it to "lambda hd:Varexpr A"? You might also try defining the "map" function that you may know from OCaml for applying a function to each element of a list.

> uses the fix operator; in fact the following form is not correct:
>
> # /* using the constructors o and s(x), as in the OCaml version */
> # eqnat = lambda x:Mynat. lambda y:Mynat.
>     ( x [Bool] (lambda x1:Bool. (y [Bool] (lambda y1:Bool. ...x... y...? ) false ))
>       (y [Bool] (lambda y1:Bool. false) true) );
I guess I don't know exactly which fixpoint operator is available in the language you're using, but one common type for it looks like

  fix : forall a. (a -> a) -> a

So, if the final function you want has type (Mynat -> Mynat -> Bool), then the function you need to take a fixpoint of has type ((Mynat -> Mynat -> Bool) -> (Mynat -> Mynat -> Bool)). Does that help? If not, then my intuition for writing fixpoints is that they should take a function that knows what to do for "small" input values, and write a new function that knows what to do for slightly larger input values -- e.g. for ones with one more "succ", or, in the case of your other question, for ones with one more "in".

Perhaps these suggestions can help you figure out the answers for yourself. =)

~d
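[For readers following the thread in OCaml rather than in the book's checker, here is a minimal sketch of the fixpoint shape described above. The type mynat and the names fix and eqnat are illustrative, not taken from the book's implementation, and fix is eta-expanded so that it terminates under OCaml's strict evaluation:

  (* The function handed to fix has type
     (mynat -> mynat -> bool) -> (mynat -> mynat -> bool):
     it builds equality for arguments with one more "succ"
     from equality on the smaller arguments. *)
  type mynat = O | S of mynat

  (* Eta-expanded fixpoint: the recursive call is guarded by a lambda,
     so it does not diverge under strict evaluation. *)
  let rec fix f x = f (fix f) x

  let eqnat : mynat -> mynat -> bool =
    fix (fun eq x y ->
      match x, y with
      | O, O -> true
      | S t, S u -> eq t u
      | _, _ -> false)

  let () =
    assert (eqnat (S (S O)) (S (S O)));
    assert (not (eqnat (S O) (S (S O))))
]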
From cppljevans at suddenlink.net Tue Aug 31 15:22:55 2010
From: cppljevans at suddenlink.net (Larry Evans)
Date: Tue, 31 Aug 2010 14:22:55 -0500
Subject: [TYPES] [newbie] is {initial, terminal} object the identity for {coproduct, product}?
Message-ID: <4C7D568F.4060303@suddenlink.net>

Define + as the coproduct operator, IOW, X+Y is the coproduct of X and Y for some category C.
Define * as the product operator, IOW X*Y is the product of X and Y for some category C.
Define 0 as the initial object of C.
Define 1 as the terminal object of C.

Is it true that, for all objects X in C:

  X+0 = X
  0+X = X
  X*1 = X
  1*X = X

? Also, what's X*0 and X+1?

TIA.

-Larry

From wagnerdm at seas.upenn.edu Tue Aug 31 22:31:36 2010
From: wagnerdm at seas.upenn.edu (wagnerdm@seas.upenn.edu)
Date: Tue, 31 Aug 2010 22:31:36 -0400
Subject: [TYPES] [newbie] is {initial, terminal} object the identity for {coproduct, product}?
In-Reply-To: <4C7D568F.4060303@suddenlink.net>
References: <4C7D568F.4060303@suddenlink.net>
Message-ID: <20100831223136.24666ks2t7abbmsk@webmail.seas.upenn.edu>

Quoting Larry Evans :

> Define + as the coproduct operator, IOW, X+Y is the coproduct of X and Y for some category C.
> Define * as the product operator, IOW X*Y is the product of X and Y for some category C.
> Define 0 as the initial object of C.
> Define 1 as the terminal object of C.

Being completely careful here, we must observe that if we view + as an operator, then X+Y is merely picking out one of many possible coproducts of X and Y. (Of course, any other coproducts that exist are isomorphic.) We must make a similar caveat for *, 0, and 1 (which are particular initial and terminal objects, though again unique up to isomorphism).

> Is it true that, for all objects X in C:
>
>   X+0 = X
>   0+X = X
>   X*1 = X
>   1*X = X

Then, here, we must take equality as isomorphism, of course. It's pretty straightforward to show that X is *a* coproduct of X and 0 -- just take id : X -> X and the unique arrow i : 0 -> X as the injections. Therefore X is isomorphic to whatever object X+0 happens to be. The remaining equations follow by symmetry and duality.

> Also, what's X*0 and X+1?

I wasn't able to come up with a more edifying description of these objects than simply expanding the definitions. Perhaps somebody else can come up with some further property of X*0/X+1 or show that there isn't anything additional we can assume...?

~d

From andrew.polonsky at gmail.com Fri Sep 10 11:30:05 2010
From: andrew.polonsky at gmail.com (Andrew Polonsky)
Date: Fri, 10 Sep 2010 17:30:05 +0200
Subject: [TYPES] Range Property for H
Message-ID:

Dear all,

I would like to announce a solution to problem #3 at http://tlca.di.unito.it/opltlca/

The result will be presented at the upcoming TYPES conference in Warsaw. An extended abstract is available at http://folk.uib.no/apo081/Range%20Property.pdf

A more thorough presentation, along with additional results concerning the problem, will be part of my PhD thesis, to be submitted later this month. I would be happy to provide copies upon request. Of course, comments on the first draft are appreciated.

I thank Henk Barendregt for the immensely beautiful problem, and all the researchers that have worked on it and provided partial solutions. Their work has inspired me greatly.

All the best,
Andrew

From andreas.abel at ifi.lmu.de Sun Sep 12 20:02:57 2010
From: andreas.abel at ifi.lmu.de (Andreas Abel)
Date: Mon, 13 Sep 2010 02:02:57 +0200
Subject: [TYPES] [newbie] is {initial, terminal} object the identity for {coproduct, product}?
In-Reply-To: <20100831223136.24666ks2t7abbmsk@webmail.seas.upenn.edu>
References: <4C7D568F.4060303@suddenlink.net> <20100831223136.24666ks2t7abbmsk@webmail.seas.upenn.edu>
Message-ID: <3D72F3BC-3AA7-44FE-8B4C-E769A10BA04F@ifi.lmu.de>

In some categories 0 = 1, e.g. in the category of groups, so sometimes X * 0 = X (which bad math students believe to hold also in the natural numbers ;-).

--Andreas

On Sep 1, 2010, at 4:31 AM, wagnerdm at seas.upenn.edu wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Quoting Larry Evans :
>
>> Define + as the coproduct operator, IOW, X+Y is the coproduct of X and Y for some category C.
>> Define * as the product operator, IOW X*Y is the product of X and Y for some category C.
>> Define 0 as the initial object of C.
>> Define 1 as the terminal object of C.
>
> Being completely careful here, we must observe that if we view + as an operator, then X+Y is merely picking out one of many possible coproducts of X and Y. (Of course, any other coproducts that exist are isomorphic.) We must make a similar caveat for *, 0, and 1 (which are particular initial and terminal objects, though again unique up to isomorphism).
>
>> Is it true that, for all objects X in C:
>>
>>   X+0 = X
>>   0+X = X
>>   X*1 = X
>>   1*X = X
>
> Then, here, we must take equality as isomorphism, of course. It's pretty straightforward to show that X is *a* coproduct of X and 0 -- just take id : X -> X and the unique arrow i : 0 -> X as the injections. Therefore X is isomorphic to whatever object X+0 happens to be. The remaining equations follow by symmetry and duality.
>
>> Also, what's X*0 and X+1?
>
> I wasn't able to come up with a more edifying description of these objects than simply expanding the definitions. Perhaps somebody else can come up with some further property of X*0/X+1 or show that there isn't anything additional we can assume...?
>
> ~d

Andreas Abel <>< Du bist der geliebte Mensch.

Theoretical Computer Science, University of Munich
Oettingenstr. 67, D-80538 Munich, GERMANY

andreas.abel at ifi.lmu.de
http://www2.tcs.ifi.lmu.de/~abel/
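[Spelling out the X + 0 case sketched in this thread against the universal property of the coproduct -- a routine verification, written in LaTeX; the notation ! for the unique arrow out of the initial object is an editorial choice:

  \textbf{Claim.} $X + 0 \cong X$, witnessed by the injections
  $\mathrm{id} : X \to X$ and $! : 0 \to X$.

  \emph{Proof sketch.} Given $f : X \to Y$ and $g : 0 \to Y$, a mediating
  arrow $h : X \to Y$ must satisfy $h \circ \mathrm{id} = f$, forcing
  $h = f$; and $f \circ {!} = g$ holds automatically, since both sides are
  arrows $0 \to Y$ and $0$ is initial. So $(X, \mathrm{id}, !)$ has the
  universal property of a coproduct of $X$ and $0$, and any chosen $X + 0$
  is therefore isomorphic to $X$. The equations $0 + X \cong X$,
  $X \times 1 \cong X$, and $1 \times X \cong X$ follow by symmetry and
  duality.
]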
From marco.devillers at gmail.com Fri Sep 17 11:19:50 2010
From: marco.devillers at gmail.com (M.C.A. (Marco) Devillers)
Date: Fri, 17 Sep 2010 17:19:50 +0200
Subject: [TYPES] OT: Peer-review in a world with rational scientists
Message-ID:

A couple of months ago there was a discussion on this list regarding (peer-)reviewing processes. Though, with little experience, I had nothing to contribute to that discussion, I thought the following paper might be well received on this list:

"Peer-review in a world with rational scientists: Toward selection of the average," Stefan Thurner, Rudolf Hanel.

The arxiv link: http://arxiv.org/abs/1008.4324v1
The link straight to the article (pdf): http://arxiv.org/pdf/1008.4324v1

Cheers,
Marco

From skothari at uwyo.edu Fri Sep 17 14:35:26 2010
From: skothari at uwyo.edu (Sunil Kothari)
Date: Fri, 17 Sep 2010 12:35:26 -0600
Subject: [TYPES] Replacement lemma
Message-ID: <569A6878A4717D40A1268BEC3F1713BD34940B999C@ponyexpress-mb1.uwyo.edu>

Hello All,

I sent the query below to the Coq mailing list but it generated no response. My apologies if you receive multiple copies of this email.

I was wondering if there are any machine-checked proofs (in any theorem prover) of the replacement lemma [1,2]. I reproduce the lemma as stated by Wright and Felleisen in [1] below:

If
1. D is a derivation concluding \Gamma |- C[t] : \tau,
2. D1 is a sub-derivation of D concluding \Gamma' |- t : \tau',
3. D1 occurs in D in the position corresponding to the hole ([]) in C, and
4. \Gamma' |- t' : \tau' is derivable.
Then \Gamma |- C[t'] : \tau is derivable.

Also, I wanted to know if there is a corresponding lemma in a constraint setting, where the sequent becomes a 4-place relation. For example, \Gamma, E |- t : \tau, where E denotes a list of equational constraints. If there is such a lemma, has it been formalized in any theorem prover?

Any pointers/references will be highly appreciated.

Thanks,
Sunil

References:
[1] Andrew K. Wright and Matthias Felleisen. A syntactic approach to type soundness. Information and Computation, 115(1):38-94, 1994.
[2] J. Roger Hindley and Jonathan P. Seldin. Introduction to Combinators and λ-Calculus. Cambridge University Press, 1986.
From wirth at logic.at Fri Sep 17 16:52:42 2010
From: wirth at logic.at (Dr. Claus-Peter Wirth)
Date: Fri, 17 Sep 2010 22:52:42 +0200
Subject: [TYPES] OT: Peer-review in a world with rational scientists
In-Reply-To:
References:
Message-ID: <32D080F4-C002-4040-BD3D-8BACF50DF8AE@logic.at>

Dear Marco,

I was not at all happy about the high bandwidth of the discussion on the subject in this list back then, but this paper is really interesting.

Taken seriously, we should stop peer review and use random selection: better quality and less work.

Unless we are so "rational" that we prefer nepotism to science. But I am afraid that might actually be the case.

Thanks for the most interesting posting,
CP

On 17.09.2010, at 17:19, M.C.A. (Marco) Devillers wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> A couple of months ago there was a discussion on this list regarding (peer-)reviewing processes. Though, with little experience, I had nothing to contribute to that discussion, I thought the following paper might be well received on this list:
>
> "Peer-review in a world with rational scientists: Toward selection of the average," Stefan Thurner, Rudolf Hanel.
>
> The arxiv link: http://arxiv.org/abs/1008.4324v1
> The link straight to the article (pdf): http://arxiv.org/pdf/1008.4324v1
>
> Cheers,
> Marco

From selinger at mathstat.dal.ca Sat Sep 18 12:41:25 2010
From: selinger at mathstat.dal.ca (Peter Selinger)
Date: Sat, 18 Sep 2010 13:41:25 -0300 (ADT)
Subject: [TYPES] OT: Peer-review in a world with rational scientists
In-Reply-To: <32D080F4-C002-4040-BD3D-8BACF50DF8AE@logic.at>
Message-ID: <20100918164125.EE5D35C281@chase.mathstat.dal.ca>

What a bunch of hogwash. That paper makes a number of highly flawed assumptions.

The first such assumption is that papers are accepted or rejected by some kind of vote of anonymous referees. In reality, papers are accepted or rejected by named editors (or PC chairs). These editors use the advice of referees, who are not anonymous to the editors; the editors remain responsible for checking the integrity of the reports (i.e., whether the recommendations are supported by good reasons), as well as for making the final decision. Therefore, the strategy "I reject all papers that are better than my own work" is not at all rational. Bad decisions will negatively affect the editor's reputation, as well as the referee's reputation among the circle of editors (and PC members etc).

In addition, some journals actually publish the name of the responsible editor along with each paper. I think this is very good practice and I wish more journals did this. It is a great incentive to discourage false positives, as well as acknowledging the editor's (sometimes considerable) work.

In my experience, most people have higher standards in refereeing than in authoring; it only takes an average referee to point out potential flaws in an above-average paper. I believe this is simply human nature: it is much easier to criticize than to produce. From the viewpoint of increasing overall quality, this tendency is positive for peer review.

-- Peter

Dr. Claus-Peter Wirth wrote:
>
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Dear Marco,
>
> I was not at all happy about the high bandwidth of the discussion on the subject in this list back then, but this paper is really interesting.
>
> Taken seriously, we should stop peer review and use random selection: better quality and less work.
>
> Unless we are so "rational" that we prefer nepotism to science. But I am afraid that might actually be the case.
>
> Thanks for the most interesting posting,
> CP
>
> On 17.09.2010, at 17:19, M.C.A. (Marco) Devillers wrote:
>
> > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> >
> > A couple of months ago there was a discussion on this list regarding (peer-)reviewing processes. Though, with little experience, I had nothing to contribute to that discussion, I thought the following paper might be well received on this list:
> >
> > "Peer-review in a world with rational scientists: Toward selection of the average," Stefan Thurner, Rudolf Hanel.
> >
> > The arxiv link: http://arxiv.org/abs/1008.4324v1
> > The link straight to the article (pdf): http://arxiv.org/pdf/1008.4324v1
> >
> > Cheers,
> > Marco
From is99zcp at hotmail.com Wed Nov 17 02:34:48 2010
From: is99zcp at hotmail.com (Zhu Changpeng)
Date: Wed, 17 Nov 2010 15:34:48 +0800
Subject: [TYPES] how to typing for logic equality operator
Message-ID:

I am reading the 19th chapter of Types and Programming Languages. I encountered a problem while adding a typing rule for the logical equality operator == to FJ. According to the semantics of the Java equality operator, the typing rule seems to be as follows:

  e1 : T1    e2 : T2    T1 <: T2 || T2 <: T1
  --------------------------------------------
              e1 == e2 : Bool

However, term substitution doesn't preserve this typing rule, since the relation T1' <: T2' || T2' <: T1' may not hold after a substitution, where [x->s]e1 : T1' and [x->s]e2 : T2'.

Who can help me solve this problem? Thank you in advance.

From wagnerdm at seas.upenn.edu Thu Nov 18 20:18:46 2010
From: wagnerdm at seas.upenn.edu (wagnerdm at seas.upenn.edu)
Date: Thu, 18 Nov 2010 20:18:46 -0500
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To:
References:
Message-ID: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu>

Well, this rule looks a bit strange:

Quoting Zhu Changpeng :

>   e1 : T1    e2 : T2    T1 <: T2 || T2 <: T1
>   --------------------------------------------
>               e1 == e2 : Bool

I would expect something along these lines instead:

  e1 : T1    e2 : T2    T1 <: T    T2 <: T
  ------------------------------------------
             e1 == e2 : Bool

Does this rule have better substitution properties? If not, can you briefly describe examples of a well-typed term (using either rule) and a substitution that breaks the typed-ness? (I'm not even sure I understand why the rule you proposed doesn't work.)

~d

From dreyer at mpi-sws.org Fri Nov 19 04:58:03 2010
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Fri, 19 Nov 2010 10:58:03 +0100
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu>
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu>
Message-ID:

Interesting. I believe the original poster's rule

>>   e1 : T1    e2 : T2    T1 <: T2 || T2 <: T1
>>   --------------------------------------------
>>               e1 == e2 : Bool

corresponds precisely to the typing rule for reference equality described in the Java language spec:

http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.21.3

There, it explicitly states: "A compile-time error occurs if it is impossible to convert the type of either operand to the type of the other by a casting conversion (§5.5). The run-time values of the two operands would necessarily be unequal." From what I can tell, a casting conversion basically means either an upcast or a downcast.

The rule Daniel suggests:

>   e1 : T1    e2 : T2    T1 <: T    T2 <: T
>   ------------------------------------------
>              e1 == e2 : Bool

would allow pretty much any equality test to typecheck by choosing T to be Object. So that doesn't correspond to Java typechecking.

However, the original poster had a point about substitution. As far as I can tell, the substitution theorem A.14 on page 532 of Pierce's TAPL book ceases to hold in the presence of the first equality rule above. The theorem states roughly that:

  If G, x:B |- t : D and G |- s : A, where A <: B, then G |- [x->s]t : C for some C <: D.

The issue is that [x->s]t has type C for some C <: D, but does not have type D itself, due to the lack of an implicit subsumption rule in FJ. Thus, if we try to prove this theorem for the first equality rule above, we get stuck because:

  [x->s]e1 : T1' <: T1
  [x->s]e2 : T2' <: T2
  T1 <: T2 || T2 <: T1

but we have no reason to believe that T1' <: T2' || T2' <: T1', which is what we need to show. (Incidentally, substitution works fine for Daniel's rule.)

So it seems to me the original poster was asking a good question. One can easily turn the failed proof above into a counterexample to subject reduction.

Derek

On Fri, Nov 19, 2010 at 2:18 AM, wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Well, this rule looks a bit strange:
>
> Quoting Zhu Changpeng :
>
> I would expect something along these lines instead:
>
> Does this rule have better substitution properties? If not, can you briefly describe examples of a well-typed term (using either rule) and a substitution that breaks the typed-ness? (I'm not even sure I understand why the rule you proposed doesn't work.)
>
> ~d
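[Derek's closing remark can be made concrete. The following sketch uses two illustrative class names, A and B, assumed to be unrelated direct subclasses of Object; they are not from the original thread:

  Take the term $t = (x \mathrel{==} \mathtt{new}\ A())$ with $x : \mathit{Object}$.
  Then $x : \mathit{Object}$, $\mathtt{new}\ A() : A$, and $A <: \mathit{Object}$,
  so the first rule gives $t : \mathit{Bool}$. Substituting
  $s = \mathtt{new}\ B()$ for $x$ (legal, since $B <: \mathit{Object}$) yields
  $[x \mapsto s]\,t = (\mathtt{new}\ B() \mathrel{==} \mathtt{new}\ A())$,
  where $\mathtt{new}\ B() : B$ and $\mathtt{new}\ A() : A$, but neither
  $B <: A$ nor $A <: B$ holds, so the substituted term no longer typechecks.
]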
From rendel at informatik.uni-marburg.de Fri Nov 19 09:24:01 2010
From: rendel at informatik.uni-marburg.de (Tillmann Rendel)
Date: Fri, 19 Nov 2010 15:24:01 +0100
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To:
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu>
Message-ID: <4CE68881.8060905@informatik.uni-marburg.de>

Derek Dreyer wrote:
> the typing rule for reference equality described in the Java language spec:
>
> http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.21.3
>
> There, it explicitly states: "A compile-time error occurs if it is impossible to convert the type of either operand to the type of the other by a casting conversion (§5.5). The run-time values of the two operands would necessarily be unequal." From what I can tell, a casting conversion basically means either an upcast or a downcast.

This rule breaks subject reduction not only technically, but also in spirit. The rationale for this rule is to disallow expressions with statically known values. But the idea of reduction semantics is to incrementally transform an expression into a value, and clearly, the value of a value is always statically known. The rationale of this rule therefore clashes with the idea of reduction semantics, which encodes the evaluation of a program as a sequence of programs that are more and more statically known.

This is similar to other parts of the Java language specification which deal with "obviously boring code". For example, javac accepts the following sequence of statements:

  if (true) return 27;
  return 42;

But rejects the following:

  return 27;
  return 42;

I would expect a reduction semantics for Java statements to reduce the former sequence of statements into the latter, so this example seems to break subject reduction, too. And again, javac's behavior is mandated by the Java language spec, Sec. 14.21:

> It is a compile-time error if a statement cannot be executed because it is unreachable. Every Java compiler must carry out the conservative flow analysis specified here to make sure all statements are reachable.

I think these two problems, and all similar ones, are best dealt with by considering such static checks not as type checks, but as separate static analyses to be performed after type checking. They then do not need to be modeled in a formal type system, and neither contribute to nor disturb subject reduction. If type safety can be established for this smaller type system, it cannot later be broken by ruling out some so-far well-typed programs with additional static analyses.

So my advice to the original poster would be: choose a weaker typing rule for ==. As long as you can prove subject reduction for that weaker rule, your proof also works for the stronger rule actually used in Java compilers, if you show that the stronger rule implies the weaker rule.

But from the point of view of language design, I wonder what the point of specifying such details in a language specification might be?

Tillmann
From wadler at inf.ed.ac.uk Fri Nov 19 10:53:12 2010
From: wadler at inf.ed.ac.uk (Philip Wadler)
Date: Fri, 19 Nov 2010 15:53:12 +0000
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To: <4CE68881.8060905@informatik.uni-marburg.de>
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu> <4CE68881.8060905@informatik.uni-marburg.de>
Message-ID:

We dealt with exactly the issue Tillmann Rendel describes in our design of Featherweight Java. See quote below. -- P

Featherweight Java: A Minimal Core Calculus for Java and GJ
ATSUSHI IGARASHI, University of Tokyo
BENJAMIN C. PIERCE, University of Pennsylvania
PHILIP WADLER, Avaya
ACM Transactions on Programming Languages and Systems, Vol. 23, No. 3, May 2001, Pages 396-450.

One technical innovation in FJ is the introduction of "stupid" casts. There are three rules for type casts: in an upcast the subject is a subclass of the target; in a downcast the target is a subclass of the subject; and in a stupid cast the target is unrelated to the subject. The Java compiler rejects as ill typed an expression containing a stupid cast, but we must allow stupid casts in FJ if we are to formulate type soundness as a subject reduction theorem for a small-step semantics. This is because an expression without stupid casts may reduce to one containing a stupid cast. ... We indicate the special nature of stupid casts by including the hypothesis stupid warning in the type rule for stupid casts (T-SCAST); an FJ typing corresponds to a legal Java typing only if it does not contain this rule. (Stupid casts were omitted from Classic Java [Flatt et al. 1998a], causing its published proof of type soundness to be incorrect; this error was discovered independently by ourselves and the Classic Java authors). [p. 405]

On Fri, Nov 19, 2010 at 2:24 PM, Tillmann Rendel wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> Derek Dreyer wrote:
>> the typing rule for reference equality described in the Java language spec:
>>
>> http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.21.3
>>
>> There, it explicitly states: "A compile-time error occurs if it is impossible to convert the type of either operand to the type of the other by a casting conversion (§5.5). The run-time values of the two operands would necessarily be unequal." From what I can tell, a casting conversion basically means either an upcast or a downcast.
>
> This rule breaks subject reduction not only technically, but also in spirit. The rationale for this rule is to disallow expressions with statically known values. But the idea of reduction semantics is to incrementally transform an expression into a value, and clearly, the value of a value is always statically known. The rationale of this rule therefore clashes with the idea of reduction semantics, which encodes the evaluation of a program as a sequence of programs that are more and more statically known.
>
> This is similar to other parts of the Java language specification which deal with "obviously boring code". For example, javac accepts the following sequence of statements:
>
>   if (true) return 27;
>   return 42;
>
> But rejects the following:
>
>   return 27;
>   return 42;
>
> I would expect a reduction semantics for Java statements to reduce the former sequence of statements into the latter, so this example seems to break subject reduction, too. And again, javac's behavior is mandated by the Java language spec, Sec. 14.21:
>>
>> It is a compile-time error if a statement cannot be executed because it is unreachable. Every Java compiler must carry out the conservative flow analysis specified here to make sure all statements are reachable.
>
> I think these two problems, and all similar ones, are best dealt with by considering such static checks not as type checks, but as separate static analyses to be performed after type checking. They then do not need to be modeled in a formal type system, and neither contribute to nor disturb subject reduction. If type safety can be established for this smaller type system, it cannot later be broken by ruling out some so-far well-typed programs with additional static analyses.
>
> So my advice to the original poster would be: choose a weaker typing rule for ==. As long as you can prove subject reduction for that weaker rule, your proof also works for the stronger rule actually used in Java compilers, if you show that the stronger rule implies the weaker rule.
>
> But from the point of view of language design, I wonder what the point of specifying such details in a language specification might be?
>
> Tillmann

--
.\ Philip Wadler, Professor of Theoretical Computer Science
./\ School of Informatics, University of Edinburgh
/  \ http://homepages.inf.ed.ac.uk/wadler/

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
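[Reconstructed from the description in the passage quoted above -- a sketch of the rule's shape, not a verbatim quotation from the FJ paper -- T-SCAST looks roughly like:

  \[
  \frac{\Gamma \vdash e_0 : D \qquad C \not<: D \qquad D \not<: C \qquad \textit{stupid warning}}
       {\Gamma \vdash (C)\,e_0 : C}
  \quad (\textsc{T-SCast})
  \]

The stupid warning hypothesis is what lets a complete FJ typing be checked for Java-legality after the fact: a derivation corresponds to a legal Java typing exactly when it contains no use of this rule.]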
From kos at informatik.uni-marburg.de Fri Nov 19 11:04:31 2010
From: kos at informatik.uni-marburg.de (Klaus Ostermann)
Date: Fri, 19 Nov 2010 17:04:31 +0100
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To: <4CE68881.8060905@informatik.uni-marburg.de>
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu> <4CE68881.8060905@informatik.uni-marburg.de>
Message-ID:

Hello all,

>> There, it explicitly states: "A compile-time error occurs if it is impossible to convert the type of either operand to the type of the other by a casting conversion (§5.5). The run-time values of the two operands would necessarily be unequal." From what I can tell, a casting conversion basically means either an upcast or a downcast.

I believe that the original poster's problem of formalizing Java's equality operator, and also the examples Tillmann gave (and many more examples, such as the resolution of method overloading or Java's "? :" operator), call for a more sophisticated style of formalization, namely one with type elaboration rules which transform the program during type checking into a form suitable for runtime type checking. Type elaboration rules allow one to have a kind of two-stage type system in which one can still cleanly formalize subject reduction.

A good example of such a semantics that fits the intention of the original poster would be Felleisen et al's "ClassicJava" formalization in their POPL'98 paper on "Classes and Mixins".

For instance, for the "==" operator, the type elaboration rules could type-check and annotate the equality operation as follows:

  e1:T1    e2:T2    T1 <: T2
  ----------------------------------------------------
  e1 == e2 => e1 ==[T2] e2 : Bool

  e1:T1    e2:T2    T2 <: T1
  ----------------------------------------------------
  e1 == e2 => e1 ==[T1] e2 : Bool

and then the "runtime" typing rule would be

  e1:T1    e2:T2    T1 <: T    T2 <: T
  ----------------------------------------------------
  e1 ==[T] e2 : Bool

Klaus
From kos at informatik.uni-marburg.de Fri Nov 19 12:04:45 2010
From: kos at informatik.uni-marburg.de (Klaus Ostermann)
Date: Fri, 19 Nov 2010 18:04:45 +0100
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To:
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu> <4CE68881.8060905@informatik.uni-marburg.de>
Message-ID:

2010/11/19 Philip Wadler :
> We indicate the special nature of stupid casts by including the hypothesis stupid warning in the type rule for stupid casts (T-SCAST); an FJ typing corresponds to a legal Java typing only if it does not contain this rule. (Stupid casts were omitted from Classic Java [Flatt et al. 1998a], causing its published proof of type soundness to be incorrect; this error was discovered independently by ourselves and the Classic Java authors). [p. 405]

One should note, though, that the Classic Java type system can be fixed without using something like stupid casts. As far as I recall, in the corrected 1999 tech report version of the Classic Java paper, Flatt et al introduce a dedicated runtime typing rule for casts which looks something like this:

  e : C
  -------------
  (D) e : D

Their system is still correct w.r.t. the Java typing rules because the type elaboration rules are more restrictive than this runtime typing rule.

For more precise runtime typing, one could also annotate the type cast with the original static type of the expression during type elaboration, i.e. something like this:

  e0 : C    D <: C
  -------------------------------------
  (D) e0 => (D) e0[C] : D

and then the following runtime typing rule

  e0 : C'    C' <: C
  ---------------------
  (D) e0[C] : D

I guess that once the discrepancies between static and runtime typing become bigger and bigger, it pays off to formalize type elaboration rules, because it is not clear to me whether this "stupid cast" technique can be generalized to other techniques such as resolution of method overloading.

Klaus
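[To see how the elaborated annotation avoids the substitution problem discussed earlier in the thread, one can replay the same illustrative example (unrelated classes A, B <: Object, assumed only for illustration):

  At type-checking time, with $x : \mathit{Object}$ and
  $\mathtt{new}\ A() : A <: \mathit{Object}$, the second elaboration rule
  fixes the comparison type once:
  $x \mathrel{==} \mathtt{new}\ A() \;\Rightarrow\;
   x \mathrel{==}[\mathit{Object}]\ \mathtt{new}\ A()$.
  After substituting $\mathtt{new}\ B()$ for $x$, the runtime rule still
  applies, since $\mathtt{new}\ B() : B <: \mathit{Object}$ and
  $\mathtt{new}\ A() : A <: \mathit{Object}$, so
  $\mathtt{new}\ B() \mathrel{==}[\mathit{Object}]\ \mathtt{new}\ A() : \mathit{Bool}$,
  and subject reduction is undisturbed.
]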
From matthias at ccs.neu.edu Fri Nov 19 12:57:30 2010
From: matthias at ccs.neu.edu (Matthias Felleisen)
Date: Fri, 19 Nov 2010 12:57:30 -0500
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To:
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu> <4CE68881.8060905@informatik.uni-marburg.de>
Message-ID: <7CBADACF-BF1C-4A5F-BBC1-6C9946C43D17@ccs.neu.edu>

concerning: Flatt, Krishnamurthi, & Felleisen, Classic Java, POPL'98:

Phil is correct. We messed up both the statement of the type soundness theorem (typo-level omission of the divergence clause) and its proof (not treating these 'casts' properly) in POPL'98.

Klaus is correct, too. We fixed the system in a TR and some follow-up paper.

-- Matthias

On Nov 19, 2010, at 12:04 PM, Klaus Ostermann wrote:

> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> 2010/11/19 Philip Wadler :
>> We indicate the special nature of stupid casts by including the hypothesis stupid warning in the type rule for stupid casts (T-SCAST); an FJ typing corresponds to a legal Java typing only if it does not contain this rule. (Stupid casts were omitted from Classic Java [Flatt et al. 1998a], causing its published proof of type soundness to be incorrect; this error was discovered independently by ourselves and the Classic Java authors). [p. 405]
>
> One should note, though, that the Classic Java type system can be fixed without using something like stupid casts. As far as I recall, in the corrected 1999 tech report version of the Classic Java paper, Flatt et al introduce a dedicated runtime typing rule for casts which looks something like this:
>
>   e : C
>   -------------
>   (D) e : D
>
> Their system is still correct w.r.t. the Java typing rules because the type elaboration rules are more restrictive than this runtime typing rule.
>
> For more precise runtime typing, one could also annotate the type cast with the original static type of the expression during type elaboration, i.e. something like this:
>
>   e0 : C    D <: C
>   -------------------------------------
>   (D) e0 => (D) e0[C] : D
>
> and then the following runtime typing rule
>
>   e0 : C'    C' <: C
>   ---------------------
>   (D) e0[C] : D
>
> I guess that once the discrepancies between static and runtime typing become bigger and bigger, it pays off to formalize type elaboration rules, because it is not clear to me whether this "stupid cast" technique can be generalized to other techniques such as resolution of method overloading.
>
> Klaus

From wadler at inf.ed.ac.uk Fri Nov 19 13:19:40 2010
From: wadler at inf.ed.ac.uk (Philip Wadler)
Date: Fri, 19 Nov 2010 18:19:40 +0000
Subject: [TYPES] how to typing for logic equality operator
In-Reply-To: <7CBADACF-BF1C-4A5F-BBC1-6C9946C43D17@ccs.neu.edu>
References: <20101118201846.12483b8erpogbeye@webmail.seas.upenn.edu> <4CE68881.8060905@informatik.uni-marburg.de> <7CBADACF-BF1C-4A5F-BBC1-6C9946C43D17@ccs.neu.edu>
Message-ID:

I should note I included that line from the paper not to cast aspersions on Matthias and co., but to point out that even excellent researchers can fall foul of these subtleties. Yours, -- P

On Fri, Nov 19, 2010 at 5:57 PM, Matthias Felleisen wrote:
>
> concerning: Flatt, Krishnamurthi, & Felleisen, Classic Java, POPL'98:
>
> Phil is correct. We messed up both the statement of the type soundness theorem (typo-level omission of the divergence clause) and its proof (not treating these 'casts' properly) in POPL'98.
>
> Klaus is correct, too. We fixed the system in a TR and some follow-up paper.
>
> -- Matthias
>
> On Nov 19, 2010, at 12:04 PM, Klaus Ostermann wrote:
>
>> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>>
>> 2010/11/19 Philip Wadler :
>>> We indicate the special nature of stupid casts by including the hypothesis stupid warning in the type rule for stupid casts (T-SCAST); an FJ typing corresponds to a legal Java typing only if it does not contain this rule. (Stupid casts were omitted from Classic Java [Flatt et al. 1998a], causing its published proof of type soundness to be incorrect; this error was discovered independently by ourselves and the Classic Java authors). [p. 405]
>>
>> One should note, though, that the Classic Java type system can be fixed without using something like stupid casts. As far as I recall, in the corrected 1999 tech report version of the Classic Java paper, Flatt et al introduce a dedicated runtime typing rule for casts which looks something like this:
>>
>>   e : C
>>   -------------
>>   (D) e : D
>>
>> Their system is still correct w.r.t. the Java typing rules because the type elaboration rules are more restrictive than this runtime typing rule.
>>
>> For more precise runtime typing, one could also annotate the type cast with the original static type of the expression during type elaboration, i.e. something like this:
>>
>>   e0 : C    D <: C
>>   -------------------------------------
>>   (D) e0 => (D) e0[C] : D
>>
>> and then the following runtime typing rule
>>
>>   e0 : C'    C' <: C
>>   ---------------------
>>   (D) e0[C] : D
>>
>> I guess that once the discrepancies between static and runtime typing become bigger and bigger, it pays off to formalize type elaboration rules, because it is not clear to me whether this "stupid cast" technique can be generalized to other techniques such as resolution of method overloading.
>>
>> Klaus
--
.\ Philip Wadler, Professor of Theoretical Computer Science
./\ School of Informatics, University of Edinburgh
/  \ http://homepages.inf.ed.ac.uk/wadler/

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

From moez at cs.rice.edu Sat Nov 20 00:03:05 2010
From: moez at cs.rice.edu (Moez A. Abdel-Gawad)
Date: Sat, 20 Nov 2010 07:03:05 +0200
Subject: [TYPES] how to typing for logic equality operator
Message-ID: <4CE75689.70303@cs.rice.edu>

Incidentally, it seems there is a minor (but embarrassing?) typo in the FJ paper (only in the TOPLAS version).

The flaw, in Theorem 2.4.3 (FJ Type Soundness), on p. 408, is in the last part of the theorem stating that 'C <: D'.

I believe there must be a 'not' (/) over the '<:' symbol. Otherwise (i.e., if the condition as stated is true), the (up-)cast subexpression could simply evaluate/reduce to 'new C([e])', and would not be a failing typecast (as explained close to the end of p. 401).

The same error is repeated in Theorem 3.4.3 (FGJ Type Soundness), I believe, for 'Phi |- N <: P', on p. 415.

Or am I missing something?

-Moez

From dreyer at mpi-sws.org Sat Nov 20 05:10:13 2010
From: dreyer at mpi-sws.org (Derek Dreyer)
Date: Sat, 20 Nov 2010 11:10:13 +0100
Subject: [TYPES] how to typing for logic equality operator
Message-ID:

Yes, Moez, this would appear to be a typo. The typo is corrected in the presentation of FJ in Pierce's TAPL book (Theorem 19.5.4, Progress).

Derek

> Incidentally, it seems there is a minor (but embarrassing?) typo in the FJ paper (only in the TOPLAS version).
>
> The flaw, in Theorem 2.4.3 (FJ Type Soundness), on p. 408, is in the last part of the theorem stating that 'C <: D'.
>
> I believe there must be a 'not' (/) over the '<:' symbol. Otherwise (i.e., if the condition as stated is true), the (up-)cast subexpression could simply evaluate/reduce to 'new C([e])', and would not be a failing typecast (as explained close to the end of p. 401).
>
> The same error is repeated in Theorem 3.4.3 (FGJ Type Soundness), I believe, for 'Phi |- N <: P', on p. 415.
>
> Or am I missing something?
>
> -Moez
>

From matthias at ccs.neu.edu Sat Nov 20 11:36:02 2010
From: matthias at ccs.neu.edu (Matthias Felleisen)
Date: Sat, 20 Nov 2010 11:36:02 -0500
Subject: [TYPES] [a tad off-topic] functional programming for middle schools, typed or not?
Message-ID: <83E54D03-89FE-4223-A78B-CD6932EC5E01@ccs.neu.edu>

Calling all functional programmers. If you saw my talk at ICFP this year, I mentioned how we had created a middle-school level outreach project. The coordinator (Emmanuel Schanzer) has started a Facebook "like" page for CS Ed Week next month and is hoping for 1,000 "likes". Here is the link:

http://www.facebook.com/pages/1000-people-for-kids-programming-videogames-with-algebra/174472965903551

Thanks -- Matthias

From cconway at cs.nyu.edu Tue Nov 23 14:17:32 2010
From: cconway at cs.nyu.edu (Christopher L Conway)
Date: Tue, 23 Nov 2010 14:17:32 -0500
Subject: [TYPES] Scott and De Bakker's "A Theory of Programs"
Message-ID:

I'm trying to hunt down a copy of Scott and De Bakker's "A Theory of Programs" (1969). Does anyone on the list have a copy they could share with me, or know of where I can obtain a copy?

Thanks,
Chris