[POPLmark] Meta-comment

Greg Morrisett greg at eecs.harvard.edu
Wed May 11 13:35:40 EDT 2005


Karl Crary wrote:
> I think the reason that we tend to see this when combining features, and 
> not when working with single features in isolation, is that for simple
> core languages we are capable of running through the proofs and
> isolating the important (i.e., interesting) issues in our heads.  When we
> go to write the proofs down, we usually have dealt with everything but 
> the niggling details.

Yes, we're definitely on the same page here.  My point was that
in doing paper-and-pencil proofs, I've never uncovered a major
unsoundness.  But by their very nature, the paper-and-pencil proofs
are limited to small core languages where, in some sense, once
you write down the typing rules, the soundness is essentially
obvious and the proof is, well, a tedious dead-end exercise.  I'm
not claiming that the proof wasn't important (it's a great
reassurance), just that it isn't a serious bug-finder.
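
(To make that concrete, here is a toy sketch of my own, in OCaml
rather than any particular proof assistant, of the kind of core
language I mean: one small enough that progress and preservation
can be read straight off the two definitions.

    type ty = TBool
    type exp = True | False | If of exp * exp * exp

    (* The typing rules, transcribed directly. *)
    let rec typeof (e : exp) : ty option =
      match e with
      | True | False -> Some TBool
      | If (c, t, f) ->
          (match typeof c, typeof t, typeof f with
           | Some TBool, Some ty1, Some ty2 when ty1 = ty2 ->
               Some ty1
           | _ -> None)

    (* One step of evaluation.  A well-typed non-value always
       steps, and stepping preserves the type; "obvious" in
       exactly the sense above. *)
    let rec step (e : exp) : exp option =
      match e with
      | True | False -> None            (* values don't step *)
      | If (True, t, _) -> Some t
      | If (False, _, f) -> Some f
      | If (c, t, f) ->
          (match step c with
           | Some c' -> Some (If (c', t, f))
           | None -> None)

Once the rules are this small, writing out the induction is pure
bookkeeping.)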

I expect a much bigger payoff as we (the whole community, not
just CMU) move to mechanized proofs.  For one thing, I think
that today we put a lot of trust in referees and authors
because very few of us actually check the proofs.

As an editor, when presented with a paper that has a huge number
of typing rules and a long, tedious subject-reduction proof, I have
to think long and hard about which people I'm going to
torture with a request to referee.  I'd much rather torture
a machine.

For another, I think we'll reclaim a lot of paper space that used
to be filled with proof sketches and typing rules, and instead
concentrate on the really important job of explaining the ideas.
Finally, we'll be able to ask questions more along
the lines of "does this scale to handle all of these other
linguistic features?"

I guess I see an analogy with code here.  In the bad old days,
one published pseudo code when discussing an algorithm.  It was
always annoying, when trying to convert the pseudo code to real
code, to find that the algorithm was riddled with parse errors,
type errors, logic errors, etc., or that the pseudo-programming
language had magic operators like "choose on empty sets".

In contrast, when someone publishes a literate program, you know
that the code parses, type-checks, and probably actually runs, and
by golly it's a useful artifact that's much easier to extend.  You
also know that the author didn't forgo the "uninteresting cases"
and accidentally let something bad slip through.  They actually
had to think about things end-to-end.

Now putting together a literate program is a lot more work,
but you get a lot more benefit.  I'm just hoping that we
can get a similar payoff out of all of the "proving" that
people are doing.

-Greg
