[POPLmark] Poplmark Digest, Vol 10, Issue 5

Michael Norrish Michael.Norrish at nicta.com.au
Thu Jun 1 03:18:07 EDT 2006


Daniel C. Wang writes:

> So, I'm curious: how much work is it to extend your lambda-calculus
> term language? How reusable is the theory, so that I can build
> arbitrary term languages with binders? I'll admit that nominal
> approaches seem very appealing to me right now.

There is no good automatic HOL4 technology for doing this (I may get
around to implementing it).  In total, I have done four different
languages "by hand" (and used them in non-trivial ways):
  untyped lambda calculus  (standard sorts of results)
  lambda-calculus with labelled redexes (for residual theory)
  lambda-calculus with labelled redexes and weighted variables 
    (for Barendregt's proof of the finiteness of developments)
  Fsub types (for the algorithmic and standard subtyping relations
              of POPLmark Part 1)

The Isabelle nominal datatypes package *does* offer automatic support
for creating these types and reasoning about them in the same sorts of
ways that I have (with nice structural and rule inductions, with nice
recursion combinators etc).

So, to explicitly answer your questions: I don't find it so much work
to define new types in HOL4, but I wouldn't really wish it on anyone
else.  On the other hand, my approach is the "nominal approach", and
this does extend to new types easily given the developing automation
available in Isabelle's nominal datatype package.
 
> Micro-benchmarks should be short and clearly define what part of the
> system the micro-benchmark exercises. My claim is that the
> lambda-height challenge that I've proposed tests how easy it is to
> reuse and integrate existing theories of binding and other theories
> such as the natural numbers.

I agree that this is a nice example.  I'd go further: if a tool can't
do this sort of integration of theories, then it really is pretty
deficient.  Indeed, I want to be able to work with all of HOL's
theories (from measurable sets and probability theory, to fixed-width
words, to model-checking temporal logic) at the same time as I work
with types with binders.
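To make the lambda-height benchmark concrete: it asks for a height
function defined by structural recursion over a type with binders,
whose properties are then proved by ordinary natural-number reasoning.
A toy sketch, using a de Bruijn-flavoured tuple encoding of my own
invention (nothing to do with any prover's actual syntax):

```python
# Terms are ("var", i) | ("app", s, t) | ("lam", t).
# The point of the benchmark is that `height` mixes recursion over a
# type with binders with ordinary arithmetic on the naturals.

def height(t):
    """Height of a term: every constructor contributes one level."""
    tag = t[0]
    if tag == "var":
        return 1
    if tag == "app":
        return 1 + max(height(t[1]), height(t[2]))
    if tag == "lam":
        return 1 + height(t[1])
    raise ValueError(t)
```

In a prover, the interesting part is not the definition but the lemmas,
e.g. that substitution interacts with height in the expected arithmetic
way, which forces the binding theory and the nat theory to cooperate.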
 
> There are perhaps other specific areas worth constructing simple
> challenge problems for, such as maintaining a context of facts and
> induction over open terms. I think it's worth thinking about how to
> decompose the features needed to do "real proofs" rather than just
> trying to jump into large scale proving without a clear idea of
> where you think the challenges will be. Coming up with
> micro-benchmarks forces you to decompose the problem into a set of
> concrete criteria. I think the current poplmark challenge fails in
> that respect.

One possible micro-benchmark might lie in incorporating possibly
infinite reduction sequences.  This might give systems an opportunity
to show off their ability to deal with co-algebraic data types.
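One way to picture this: a possibly-infinite reduction sequence is a
stream, and a lazy generator is a programmer's stand-in for a
coinductive type.  A toy sketch (the step functions below are
placeholders of my own, not real beta-reduction):

```python
from itertools import islice

def reduction_sequence(term, step):
    """Yield term, step(term), step(step(term)), ... possibly forever.

    `step` returns the next "term", or None when reduction halts.
    """
    while term is not None:
        yield term
        term = step(term)

# A terminating example: decrement until zero, then stop.
def dec(n):
    return n - 1 if n > 0 else None

finite = list(reduction_sequence(3, dec))

# A diverging example: the stream never ends, but we can still observe
# finite prefixes, which is what coinductive reasoning is about.
loop = reduction_sequence("omega", lambda t: t)
prefix = list(islice(loop, 4))
```

The micro-benchmark would then ask how naturally a system can state and
prove properties of such streams, e.g. by coinduction on the sequence.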

Michael.
