[Provers] Re: [POPLmark] Submissions page and comments updated
Robert Harper
rwh at cs.cmu.edu
Mon May 2 22:30:21 EDT 2005
Please see the Harper-Stone Definition of Standard ML, in particular
the dynamic semantics of the internal language. Whether formally or
informally, there is no room or use for evaluation contexts as a means
of specifying the operational semantics.
To repeat, there is absolutely no obstacle to using this method for the
challenge, but there is no advantage to it of any kind. Our
formulation is adequate for the task, and we have demonstrated that it
is entirely suitable for solving the challenge problem. It is not a
good use of our time, and there is nothing to be learned, in trying
to redo it in some style that is presumed a priori to be better.
A goal of research in logical frameworks is to see to it that the
"informal definition" and its "encoding" (so-called) are one and the
same, with little or no air between them. I have found that this is
achievable in many, many cases, but as I've said earlier this evening,
one must be willing to listen to the framework to see what approach is
better and exactly why.
Can we please move on to a substantive issue?
Bob
On May 2, 2005, at 7:47 PM, Peter Sewell wrote:
>
> Perhaps it would be useful to clarify our original thinking
> w.r.t. evaluation contexts in the challenge problems.
>
> We would like to understand the best tools and idioms to use for
> expressing the definitions and metatheory of programming languages.
> Now, we're really concerned
>
> 1) with expressing the _primary_ definition of a
> language, especially while it is under development, rather than
> with _encoding_ an existing pen-and-paper definition; and
>
> 2) with non-trivial languages, e.g. including exceptions, new name
> generation, computation under certain binders, and concurrency,
> and with an eye towards definitions in the 100-page scale.
>
> These affect the representation trade-offs: if your focus is language
> design, especially for a large language, you need the primary
> definition to be concise, easy to understand, and easy to change. On
> the other hand, if your focus is metatheory proofs about an existing
> language definition, you may care most about making those proofs
> slick.
>
> In posing the challenge we had to simplify in both directions: to make
> it possible to compare the results we fixed a particular pen-and-paper
> definition, and to make the challenge manageable we cut down to a
> small calculus.
>
> For that calculus in isolation, whether one uses explicit reduction
> closure rules or evaluation contexts makes little difference to an
> informal development. The definitions are equally comprehensible; the
> proofs are slightly simpler for the former style. It wouldn't be
> surprising to find that mechanisation is smoother in the former style.
>
> As one moves to the world of (1) and (2), however, we believe that for
> pen-and-paper semantics using evaluation contexts is an important
> pragmatic tool for writing concise and malleable definitions. We
> therefore guess that a good mechanized definition style would also use
> evaluation contexts here, and so we want to see how it would be done
> in the various systems. Being able to contrast the different possible
> evaluation context formalisations, including e.g. Frank's dependent
> functions, would be very interesting.
>
> (Of course, it's possible that we're wrong about this: formalisation
> might shift the balance back towards preferring explicit rules for
> reduction closure, exception propagation, global name extrusion, etc.
> If so, we'd obviously like to know it -- but it needs an argument that
> applies to (2), not just to Fsub.)
>
> The bottom line is that we're not really asking how best to formalise
> Fsub, but how best to express some new (and large) semantics in the
> future.
>
> Peter
>
>
> _______________________________________________
> Poplmark mailing list
> Poplmark at lists.seas.upenn.edu
> http://lists.seas.upenn.edu/mailman/listinfo/poplmark
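[Editorial aside, not part of the original exchange.] The contrast Peter draws can be made concrete for a tiny left-to-right arithmetic language. The sketch below is hypothetical and illustrative only: the explicit-closure style needs one congruence rule per evaluation position, while the context style factors all of those positions into the grammar E ::= [] | E + e | v + E and a single plug/contract rule.

```python
# Terms: an int (a value) or ('add', e1, e2). Left-to-right call-by-value.

def is_value(e):
    return isinstance(e, int)

# Style 1: explicit reduction-closure (congruence) rules.
def step_closure(e):
    """One small step; returns None if e is already a value."""
    if is_value(e):
        return None
    op, e1, e2 = e
    if not is_value(e1):                 # congruence rule: step the left subterm
        return (op, step_closure(e1), e2)
    if not is_value(e2):                 # congruence rule: step the right subterm
        return (op, e1, step_closure(e2))
    return e1 + e2                       # the single basic reduction

# Style 2: evaluation contexts E ::= [] | E + e | v + E.
# A context is represented as a plug function that fills its hole.
def decompose(e):
    """Split a non-value e into (plug, redex) with e == plug(redex)."""
    op, e1, e2 = e
    if is_value(e1) and is_value(e2):
        return (lambda r: r), e                        # E = []
    if not is_value(e1):
        plug, redex = decompose(e1)                    # E = E' + e2
        return (lambda r: (op, plug(r), e2)), redex
    plug, redex = decompose(e2)                        # E = v + E'
    return (lambda r: (op, e1, plug(r))), redex

def step_contexts(e):
    """E[v1 + v2] --> E[v], with the contraction done once, at the redex."""
    if is_value(e):
        return None
    plug, (_, v1, v2) = decompose(e)
    return plug(v1 + v2)

def evaluate(e, step):
    while not is_value(e):
        e = step(e)
    return e

t = ('add', ('add', 1, 2), ('add', 3, 4))
assert evaluate(t, step_closure) == 10
assert evaluate(t, step_contexts) == 10
```

For one binary operator the two styles are nearly the same size, which matches Peter's point about the small calculus; the difference shows up when each new construct (application, exceptions, etc.) would otherwise add several congruence rules to `step_closure` but only new productions of E to `decompose`.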