[POPLmark] Poplmark Digest, Vol 10, Issue 5

Michael Norrish Michael.Norrish at nicta.com.au
Sun Jun 4 20:43:18 EDT 2006


Karl Crary writes:
 
> For contrast, let me pick on Coq.  In Coq, a term can have almost any 
> structure.  This means that adequacy for Coq representations would 
> depend on a very deep normalization result for the Calculus of Inductive 
> Constructions.  (As an aside, we can note here that, to my knowledge, 
> the normalization proof has never been extended to cover some important 
> axioms in the Coq implementation.)  Still, I think it's easy to imagine 
> how one would do it (modulo the extra axioms), and I would be very 
> interested to see someone work this story out.  I think the high-level 
> story would be similar for HOL + Nominal Logic, except the account of 
> compositionality would be more complicated (and more interesting).
 
I don't think anyone would ever attempt the project you are describing
for HOL.  Rather, we would prove what I think might be called an
adequacy result entirely at the object level.  It is trivial to
establish a type of first-order, algebraic terms corresponding to the
named syntax with explicit free variables, and not identified up to
alpha-equivalence.  I think this is the type that we all accept as the
fundamental bedrock, to which we can all revert if we must, however
unpleasant its properties.  This is the type for which notions of
recursion and induction are uncontroversial.
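Concretely, such a type is just an ordinary algebraic datatype.  As a
minimal sketch (my own illustration in Python, not the HOL
development), encode terms as tuples ('var', x), ('app', t1, t2),
('lam', x, body); free variables then fall out by uncontroversial
structural recursion:

```python
def fv(t):
    """Free variables of a tuple-encoded term, by structural recursion.
    Sketch only: terms are ('var', x), ('app', t1, t2), ('lam', x, body)."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return fv(t[1]) | fv(t[2])
    # ('lam', x, body): x is bound in body
    return fv(t[2]) - {t[1]}
```

Note that nothing here identifies ('lam', 'x', ('var', 'x')) with
('lam', 'y', ('var', 'y')); they are simply different values.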

In particular, we can define notions of substitution, and
alpha-equivalence for this type.  I think the nice way to define
substitution is to follow the approach taken by Brotherston and
Vestergaard in their RTA paper.  This defines a total function by
primitive recursion, but this function does not do any renaming.  This
means that substitution at this level only has the "right" properties
if the bound names have been picked appropriately.  It is then easy to
define alpha-equivalence and beta-reduction.  (The latter requires
explicit alpha-conversion steps to make appropriate reductions
possible.)
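A sketch of substitution in that style (my reconstruction, not the
Brotherston-Vestergaard or HOL definitions), again with terms encoded
as tuples ('var', x), ('app', t1, t2), ('lam', x, body).  It is total
and primitively recursive, but it never renames, so it is only "right"
when the bound names avoid the free variables of the substituted term:

```python
def subst(u, v, t):
    """Replace free occurrences of variable v in t by u.  Total by
    primitive recursion, but it does NO renaming, so it captures
    whenever a binder in t occurs free in u."""
    if t[0] == 'var':
        return u if t[1] == v else t
    if t[0] == 'app':
        return ('app', subst(u, v, t[1]), subst(u, v, t[2]))
    # ('lam', x, body)
    if t[1] == v:
        return t  # v is rebound here; nothing free to replace below
    return ('lam', t[1], subst(u, v, t[2]))

# The capture problem: substituting y for v under a binder for y
# wrongly turns a free y into a bound one.
bad = subst(('var', 'y'), 'v', ('lam', 'y', ('var', 'v')))
# bad == ('lam', 'y', ('var', 'y'))
```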

Working at this level is ugly but possible, and moreover, all of our
systems should be able to do this reasoning (I hope).

To my mind, the adequacy project is to show the appropriate
connections between this first-order algebraic type, and our preferred
type, whether it be a quotient, or one with two sorts of variable
names, or HOAS or whatever.

I did just this to validate first the Gordon-Melham terms, and then
terms I constructed by performing a quotient.  You can see the result
(for quotiented terms) at

http://hol.cvs.sourceforge.net/hol/hol98/examples/lambda/raw_syntaxScript.sml?revision=1.5&view=markup

For example, there is the theorem

  ``(capt v t INTER fv u = {}) ==>
            (collapse (subst u v t) = [collapse u/v] (collapse t))``

which shows the commuting of substitution at the two levels with
respect to the map from first-order to quotient (via the function
"collapse").  The notation with the square brackets is substitution at
the level of quotiented terms.  The unadorned "subst" is the relation
(with arguments in the same order left-to-right) at the first-order,
unquotiented level. The precondition requires that there be no
bound names in t that have scope over occurrences of the free variable
v and which also occur among the free variables of u.  In other words,
this is the precondition giving us that substitution will behave
nicely at the first-order level.
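That reading of the precondition can be turned into a recursive
definition directly.  A hypothetical sketch of what a function like
"capt" computes (my reconstruction in Python over tuple-encoded terms
('var', x), ('app', t1, t2), ('lam', x, body), not the HOL
definition):

```python
def fv(t):
    """Free variables of a tuple-encoded term."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def capt(v, t):
    """Bound names in t whose scope contains a free occurrence of v.
    Substituting u for v in t is capture-free exactly when
    capt(v, t) & fv(u) is empty -- the theorem's precondition."""
    if t[0] == 'var':
        return set()
    if t[0] == 'app':
        return capt(v, t[1]) | capt(v, t[2])
    # ('lam', x, body)
    x, body = t[1], t[2]
    if x == v:
        return set()  # v is not free anywhere under this binder
    inner = capt(v, body)
    return inner | {x} if v in fv(body) else inner
```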

Later, there is

  ``EQC alpha t u = (collapse t = collapse u)``

which says that t and u are related by the "equivalence closure" of
the one-step alpha relation iff they collapse to the same term. 
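One concrete way to realise a map with this property is to send named
terms to a canonical representative, e.g. de Bruijn indices, so that
alpha-equivalent terms collapse to literally equal values.  A sketch
under that assumption (the HOL development instead collapses into a
genuine quotient type, but the characterising equation is the same):

```python
def collapse(t, env=()):
    """Map a tuple-encoded named term ('var', x) / ('app', t1, t2) /
    ('lam', x, body) to a locally-nameless de Bruijn form.  Two terms
    collapse to equal values iff they are alpha-equivalent."""
    if t[0] == 'var':
        x = t[1]
        # Bound occurrences become indices; free variables keep names.
        return ('bvar', env.index(x)) if x in env else ('fvar', x)
    if t[0] == 'app':
        return ('app', collapse(t[1], env), collapse(t[2], env))
    # ('lam', x, body): push the bound name onto the environment
    return ('lam', collapse(t[2], (t[1],) + env))
```

Prepending to env means an inner binder shadows an outer one of the
same name, as it should.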

Last, there are theorems relating the two notions of beta-reduction.
In particular, 

  ``compat_closure beta (collapse t) (collapse u) =
       (EQC alpha O beta O EQC alpha) t u``

where "compat_closure beta" is the relation at the quotiented level,
and where the corresponding relation at the first order level is the
relational composition (capital O's) of alpha-conversion, one
beta-step, and alpha-conversion again. 
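To see why the alpha-steps are needed on the first-order side, consider
contracting a redex with non-renaming substitution.  A sketch (my
illustration over tuple-encoded terms, not the HOL relations): the raw
beta-step is only sound after an explicit alpha-conversion has moved
the bound names out of the way.

```python
def subst(u, v, t):
    """Non-renaming (capture-prone) substitution of u for v in t."""
    if t[0] == 'var':
        return u if t[1] == v else t
    if t[0] == 'app':
        return ('app', subst(u, v, t[1]), subst(u, v, t[2]))
    return t if t[1] == v else ('lam', t[1], subst(u, v, t[2]))

def beta_root(t):
    """Contract a root redex ('app', ('lam', x, body), arg).  Sound
    only when bound names in body avoid the free variables of arg."""
    (_, (_, x, body), arg) = t
    return subst(arg, x, body)

# (\x. \y. x) y: contracting directly captures the free y ...
redex = ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('var', 'y'))
# ... so first take an alpha-step renaming the inner binder, then beta.
alpha = ('app', ('lam', 'x', ('lam', 'z', ('var', 'x'))), ('var', 'y'))
good = beta_root(alpha)   # ('lam', 'z', ('var', 'y')), capture avoided
```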

Given that the various systems out there should be able to reason
about first-order, algebraic structures, and also about their own
choice of encoding for lambda-calculus terms, why should they not also
be able to perform this adequacy benchmark?

Michael.


