Re: Formalising variable bindings/Show Me

John Staples (Mon, 20 Jun 1994 15:53:00 +1000 (EST))

In arguing that lambda calculus/HOL `completely' solves the problem of
formalising variable bindings, I believe John Harrison is
overlooking the distinction between syntax and semantics.

Lambda calculus syntax provides only the simplest sort of quantifier
syntax - as John H. says, \x.t[x] - or at best, a small extension of
that to include a type for the bound variable. Such a typed quantifier
syntax is easily definable (given a definition mechanism!) in any form
of set theory which has a Hilbert epsilon or iota quantifier (`some x
such that', or `the unique x such that'). I agree with John H. that some
such Hilbert quantifier is a practical necessity.
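To make the two Hilbert quantifiers concrete, here is a deliberately naive
model over a finite domain. The names and representation are invented purely
for illustration; real provers axiomatise these operators rather than
compute them:

```python
# A toy model of the Hilbert quantifiers over a finite domain (a list).
# `epsilon` and `iota` are the usual informal names, not any prover's API.

def epsilon(domain, pred):
    """'some x such that pred(x)'; underspecified when no witness exists,
    so in that case we simply return a fixed element of the domain."""
    for x in domain:
        if pred(x):
            return x
    return domain[0]

def iota(domain, pred):
    """'the unique x such that pred(x)'; defined only when the witness
    is unique."""
    witnesses = [x for x in domain if pred(x)]
    if len(witnesses) != 1:
        raise ValueError("iota: no unique witness")
    return witnesses[0]
```

For instance, epsilon([1, 2, 3], lambda v: v > 1) picks a witness greater
than 1, while iota on the same predicate fails, since the witness is not
unique.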

Yes, any reasonably expressive mathematical framework can define semantics
for a range of quantifiers, as in John H's example. (First order) set
theory can do that at least as comprehensively as higher order logic, since
the latter is semantically a part of set theory.

But an adequate semantics is not enough for convenient practical use.
People want practical support, as provided by relevant syntax and syntactic tools.

A simple example: to define new quantifiers q by declarations
of the form
q x A = Term
what conditions on A, and on Term, are necessary and sufficient for q to
have an appropriate semantics and for the declaration to be a correct axiom
scheme? I believe a good answer is:

* A is a metavariable.

* Term is an extended term in the sense that occurrences of the
metavariable A (but no other metavariables) are allowed.

* Term has no free occurrences of object variables (unless q has
arguments - but here I'm looking at the simplest case, where
q has no arguments).

* Every occurrence of A in Term is within a binding of x, and is
not within a binding of any other object variable.
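Under a toy encoding of terms as nested tuples (invented here purely for
illustration), the conditions above can be sketched as a checker:

```python
# Checks the side-conditions above for a candidate body Term of a
# declaration  q x A = Term.  Toy tuple encoding of terms:
#   ("var", v)      object variable
#   ("meta", "A")   the metavariable A
#   ("bind", v, t)  a binding of object variable v over t
#   ("app", s, t)   application

def admissible(term, x, binders=()):
    """True iff no object variable occurs free, and every occurrence of
    the metavariable is within bindings of x and of no other variable."""
    tag = term[0]
    if tag == "var":
        return term[1] in binders                  # no free object variables
    if tag == "meta":
        return binders != () and all(b == x for b in binders)
    if tag == "bind":
        return admissible(term[2], x, binders + (term[1],))
    if tag == "app":
        return (admissible(term[1], x, binders)
                and admissible(term[2], x, binders))
    return False
```

For example, the body \x.A is admissible for binder variable x, while
\y.A and \x.\y.A are not.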

I think John H. would say something like the following (I'd be pleased
to be corrected):

define the constant q by introducing the axiom
q = \a.HTerm
where a is an object level variable (not a metavariable)
and where HTerm can be obtained from Term by replacing all
occurrences of A by (a x). Then, sugar q(\x.A[x]) to q x A.
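That translation can be sketched over a toy tuple encoding of terms
(invented for illustration: ("var", v) for object variables, ("meta", "A")
for the metavariable, ("bind", v, t) for a binding, ("app", s, t) for
application):

```python
# Sketch of the HOL-style translation: build HTerm from Term by replacing
# each occurrence of the metavariable A with the application (a x), then
# define q = \a.HTerm.  Toy tuple encoding, for illustration only.

def to_hterm(term, a="a", x="x"):
    tag = term[0]
    if tag == "meta":
        return ("app", ("var", a), ("var", x))     # A  ~>  (a x)
    if tag == "bind":
        return ("bind", term[1], to_hterm(term[2], a, x))
    if tag == "app":
        return ("app", to_hterm(term[1], a, x), to_hterm(term[2], a, x))
    return term                                    # object variables unchanged

def define_q(term):
    """The right-hand side of the defining axiom  q = \\a.HTerm."""
    return ("bind", "a", to_hterm(term))
```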

Here are two limitations of this latter approach.

* It introduces lambda abstractions and applications of higher
order terms, which then must be got rid of by lambda evaluation,
which in turn leads to a more complicated notion of unification.

* The use of A[x], rather than just A, in John H's \x.A[x] illustrates
that his approach is less abstract. That is because for each occurrence
of a metavariable A, one must specify the object variables (x in the
example) which are bound in the surrounding Term. Consequently,
for example, \x.A[x] does not (meta-) unify with the body of
\y. \x. A[x,y]. In the approach I suggest, \x.A does unify with
(indeed, is identical with) the body of \y. \x.A. Thus, more
powerful unification is achieved using a less powerful unification
algorithm, merely by adopting a more direct formalisation of ordinary
mathematical discourse.
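The unification point can be made concrete with a toy first-order unifier
(no occurs check; tuple encoding and code invented for illustration). In the
direct formalisation the pattern \x.A is syntactically identical to the body
of \y.\x.A, whereas the applied-metavariable encoding turns the corresponding
problem into matching \x.(A x) against \x.((B x) y), which fails first-order:

```python
# A toy first-order unifier over tuple-encoded terms:  ("meta", n) is a
# unifiable metavariable, ("var", v) an object variable, ("bind", v, t)
# a binding, ("app", s, t) an application.  Returns a substitution dict,
# or None on failure.

def walk(t, subst):
    while t[0] == "meta" and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(s, t, subst=None):
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if s[0] == "meta":
        subst[s[1]] = t
        return subst
    if t[0] == "meta":
        subst[t[1]] = s
        return subst
    if s[0] == t[0] == "bind" and s[1] == t[1]:
        return unify(s[2], t[2], subst)
    if s[0] == t[0] == "app":
        subst = unify(s[1], t[1], subst)
        return None if subst is None else unify(s[2], t[2], subst)
    return None

# Direct formalisation: \x.A is literally the body of \y.\x.A.
direct = ("bind", "x", ("meta", "A"))

# Applied-metavariable encoding: \x.(A x) versus \x.((B x) y).
pat  = ("bind", "x", ("app", ("meta", "A"), ("var", "x")))
body = ("bind", "x", ("app", ("app", ("meta", "B"), ("var", "x")), ("var", "y")))
```

Here unify(direct, direct) succeeds with the empty substitution, while
unify(pat, body) fails: solving it would need the higher-order binding
A = \u.((B u) y), beyond a first-order algorithm.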

Bottom line:
Fascinating as this discussion is, QED does *not* depend on John H. agreeing
with me! We need only to agree that QED be broad and flexible enough to
provide a framework in which different formalisms can interchange knowledge,
and can evolve.