from left field, Re: [POPLmark] Meta-comment
David Naumann
naumann at cs.stevens.edu
Wed May 18 14:53:16 EDT 2005
On the topic of declarative vs procedural interfaces to theorem provers, I
would like to point out the work of Don Syme. His dissertation title
should suffice: Declarative Theorem Proving for Operational Semantics.
My very limited experience is with PVS. Its interface is similar to the
HOL provers' in using procedural scripts. In principle this is
unattractive, as Kevin points out, but in practice the interesting
structure of a proof can sometimes be made quite clear in the statement
and use of key lemmas. For many purposes, it is the procedurally-oriented
provers that currently offer the most automation.
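To give a flavor of the contrast (illustrated here in Lean, not PVS or HOL, and with a deliberately trivial lemma): a procedural proof is a tactic script that manipulates the prover's goal state, while a declarative proof spells out the argument so it can be read without replaying the script.

```lean
-- Procedural style: a tactic script. To see why it works, you must
-- replay it and watch the goal state evolve.
theorem comm_proc (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]

-- Declarative style: the proof term names the fact it uses, so the
-- argument is legible on the page.
theorem comm_decl (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

On an example this small the two are equally clear; the gap opens up when a script runs to a dozen chained tactics, as in the Coq phrase Kevin quotes below.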
About using PVS for metatheory: its semantics relies on ZFC, that
semantics is less simple than HOL's, and the system is highly engineered
(not for metatheory); it is about as far from PRA or FPCC as you can get. On the other
hand, I concur with Benjamin Pierce that our general aim is to "greatly
increase confidence in the correctness of mathematical arguments across a
range of specific topics in PL theory". I used PVS to check soundness of a
secure flow analysis for a fragment of Java [Banerjee&Naumann, JFP 15(2);
the PVS work will appear in TPHOLs]. Is PVS a wacky choice? Perhaps.
Am I as confident as I would be had the proofs been checked in HOL or Coq?
Of course not. Is my confidence increased? Of course.
Will I use PVS again for such purposes? Definitely yes, partly because I
like coding in a highly expressive logic. The object language semantics is
a straightforward denotational one (the second reason for "left field" in
my subject line) and it is encoded in PVS quite directly, using dependent
types and predicate subtypes. Greg Morrisett wrote "I still find modeling
an abstract machine with a heap, which I want to consider as a partial map
from (alpha-varying) labels to values (which may have free occurrences of
those labels), to be very awkward to encode." My noninterference proof is
a logical relations argument for a language with dynamic memory
allocation, so I had to deal with exactly this problem. Renaming of
addresses is an integral part of the logical relation, both to account for
memory allocation and to account for observation at a given security
level. PVS doesn't have a magic strategy "mod by bijective renaming"
and anyway renaming doesn't factor out so simply from the noninterference
property. But it wasn't very painful to work with the elementary
definitions and reasoning just as we did in the journal paper.
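As a minimal sketch of the "elementary definitions" approach (this is my own illustration, not the PVS development; I assume labels are integers and a value is either an integer payload or a reference to another label), a heap is a partial map from labels to values, and two heaps are identified up to a bijective renaming of labels:

```python
# Sketch: heaps as partial maps from labels to values, compared up to
# bijective renaming of labels. Values may themselves mention labels.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Ref:
    label: int

Value = Union[int, Ref]   # an integer payload or a pointer to a label
Heap = dict               # label -> Value

def heaps_equiv(h1: Heap, h2: Heap) -> bool:
    """True iff h2 is h1 up to some bijection on labels.

    Brute-force search over candidate bijections: adequate only for
    tiny examples, but it states the definition directly, the way a
    paper proof would.
    """
    if len(h1) != len(h2):
        return False

    def extend(pairs, remaining1, remaining2):
        if not remaining1:
            # Full candidate bijection built; check it respects the
            # references stored inside values.
            ren = dict(pairs)
            for l1, l2 in pairs:
                v1, v2 = h1[l1], h2[l2]
                if isinstance(v1, Ref) and isinstance(v2, Ref):
                    if ren.get(v1.label) != v2.label:
                        return False
                elif v1 != v2:
                    return False
            return True
        l1 = next(iter(remaining1))
        return any(
            extend(pairs + [(l1, l2)], remaining1 - {l1}, remaining2 - {l2})
            for l2 in remaining2
        )

    return extend([], set(h1), set(h2))
```

In the noninterference setting the renaming cannot simply be quotiented away like this, since the logical relation also restricts observation at a given security level; but the elementary definition above is the shape of what gets encoded.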
The CMU folks might make me mend my ways if they've used Twelf to check
their elegant monadic analysis of information flow [Crary, Kliger,
Pfenning, JFP 15(2)].
--dave
On Wed, 11 May 2005, Kevin Watkins wrote:
...
> Another question, maybe directed at Jerome Vouillon: I know how to go
> through a Twelf proof, and, clause by clause, transform it into a
> series of lemmas, each of which is based on a rule induction of the
> sort that one sees in POPL papers. Each clause becomes one case of
> such an induction. The mapping is easy to explain, and in fact, many
> proofs you have seen in papers written by CMUers recently have arisen
> as just such translations of Twelf proofs. Is there a similarly
> clean, simple, line-by-line interpretation of the Vouillon solution?
> For example, just to pick a line at random, does the phrase
>
> "simpl; intro H3; rewrite H; trivial; generalize H3; case (get_bound
> e' X); simpl; trivial; intros; discriminate"
>
> correspond to a proof step that one would usually see in a
> pencil-and-paper rule induction? Can someone (without finding the
> particular place in the Vouillon solution from which this example was
> drawn) tell me precisely what it means? Or is it only comprehensible
> in terms of the imperative effect it has on Coq's internal proof
> search state at the moment it's invoked? Can someone explain what the
> effect of, say, reversing "trivial; intros" to "intros; trivial" would
> be?
>
> It may be that my example is unfair because I haven't given enough
> context for the phrase to be understandable in isolation. (In Twelf,
> of course, one can't understand a clause without seeing the type
> declarations for the syntax that's mentioned in it, and the type
> declarations and modes of the relations used in it. Presumably in Coq
> one would at least need to see the definitions of the inductive types,
> and such.) In that case, I'd be interested to know how much context
> is necessary to make the phrase sensible.
>
> Kevin