Re: Hardware verification

Paul Jackson
Wed, 14 Dec 1994 13:44:39 -0500

A few years ago I was studying the spec. that Geoff Barrett wrote in the
Z specification language for the T800 Transputer floating-point microcode
(see below for references).

I came across several errors that may be what Kahan was referring to in
his remarks. One of the errors can probably be counted as a typo and
probably would have been caught by a basic Z syntax-checker. Others were
logical errors; the spec. stated a `theorem' that was false and
part of the spec. was inconsistent. For those interested in
the technical details, I describe below the errors that I noticed.

I talked to Geoff Barrett in 1990 or '91 about this, and I seem to
remember him saying that subsequent to the spec being submitted for
publication, the errors were noticed and corrected.

At any rate, I think the fact that there were errors at all indicates the
importance of having mechanical type-checking / theorem-proving support
to back up formal specifications; all these errors would have been caught
very quickly if the spec. had been mechanically checked. Then again,
maybe all that was needed was some independent careful human appraisal of
the spec.

I'd be curious to hear from anyone who is up on the current status of
proof tools for Z. I know that at ICL in the UK, R.B.Jones has been
working on such tools. Could they be used today to flush out the errors
I mention below?



The Z Spec was published in:

Geoff Barrett, "Formal Methods Applied to a Floating-Point
Number System".
IEEE Transactions on Software Engineering,
Vol 15, No. 5, May 1989, p611-621.

A slightly earlier version of this paper was:

Geoff Barrett, "Formal Methods Applied to a Floating-Point
Number System".
Technical Monograph PRG-58. Jan 1987
Oxford University Computing Laboratory.

The errors I was able to find last night, going back to look at the
paper, include the following (referring to the IEEE version). All these
errors pertain just to the specification of rounding. I've tried to make
the comments intelligible, even if you don't have the paper at hand.

The paper introduces a set of real numbers `ApproxValues', intended to
be exactly the real numbers that can be denoted by a
floating-point bit-vector in some given format. Initially, the
set is introduced abstractly; all that is required is that it
satisfy certain constraints. Only later is the connection made to some
floating-point format.

The paper also introduces two real numbers `MinValue' and `MaxValue' that
delimit the ApproxValues set, and introduces a subset of

ApproxValues U {MinValue,MaxValue}

that are called `PreferredValues'. These are the values one rounds to
in the `round to nearest' mode when one has to choose between two
distinct equidistant candidate values.

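For readers without the paper, the tie-breaking role of PreferredValues can
be sketched in a few lines of Python. The toy value set, the function name,
and the choice of preferred values are mine (mimicking IEEE
round-to-nearest-even), not Barrett's:

```python
def round_to_nearest(x, approx, preferred):
    """Round real x to the nearest member of `approx`; on a tie
    between two neighbours, pick the one lying in `preferred`."""
    best = min(approx, key=lambda a: abs(a - x))
    ties = [a for a in approx if abs(a - x) == abs(best - x)]
    if len(ties) == 2:                      # x is exactly halfway
        chosen = [a for a in ties if a in preferred]
        if chosen:
            return chosen[0]
    return best

# Toy format: preferred values play the role of the "even" candidates
# in IEEE round-to-nearest-even tie-breaking.
approx = [0.0, 0.25, 0.5, 0.75, 1.0]
preferred = {0.0, 0.5, 1.0}
print(round_to_nearest(0.375, approx, preferred))  # tie 0.25 vs 0.5 -> 0.5
```

The point of the consistency constraints discussed below is precisely to
guarantee that such a preferred candidate always exists when a tie occurs.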
1. p614. Left column, top: the paper notes a `theorem' that when rounding
towards +infinity, and MinValue is not in ApproxValues, the rounded value
cannot be MinValue. In light of the IEEE spec. this is a sensible claim
to want to make (though I thought that the Z spec should have been clear
from the start about whether MinValue was considered to be in
ApproxValues or not). However, I could not see how this is a consequence
of the definition of what it is to `round towards +infinity' given in the
`Above' schema on p613. (`Schemas' in Z are the basic units that many
definitions come in; they declare types of variables and specify
constraints on those variables.) Indeed, it seems that the spec is
non-deterministic; when rounding up a value below MinValue, it allows
both MinValue and the most negative element of ApproxValues as legitimate
rounded values.
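One way to see why the claimed theorem is reasonable: if `round towards
+infinity' is defined deterministically as the least representable value not
below the input, the result always lies in ApproxValues, so a MinValue
outside ApproxValues can never be produced. A toy sketch (my formulation,
not the Above schema from the paper):

```python
def round_up(x, approx):
    """Round toward +infinity: the least member of `approx` >= x.
    The result is always drawn from `approx` itself, so a MinValue
    lying outside `approx` is never returned."""
    candidates = [a for a in approx if a >= x]
    if not candidates:
        raise OverflowError("x exceeds every representable value")
    return min(candidates)

approx = [-1.0, -0.5, 0.0, 0.5, 1.0]   # toy ApproxValues
min_value = -2.0                        # a MinValue not in approx
print(round_up(-1.7, approx))           # -1.0, never min_value
```

The non-determinism I describe above amounts to the spec permitting
min_value as a second legitimate answer here, which this formulation rules
out by construction.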

2. p613, centre. The `Round_Signature' schema specifies the consistency
of the PreferredValues set. It says that given any two real numbers v1,
v2 in the set,

ApproxValues U {MinValue,MaxValue}

such that v1 > v2,

there is an element p in PreferredValues such that v1 >= p >= v2.

At the foot of p614, the FP_Round1 schema introduces definitions for
ApproxValues, PreferredValues, MinValue and MaxValue that are based on a
given floating-point format. Unfortunately, the consistency constraint
mentioned above on PreferredValues isn't satisfied by these definitions.
In particular, if v1 is MaxValue and v2 is the greatest element of
ApproxValues (neither of which are in PreferredValues), there is no
suitable p.
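This kind of inconsistency is exactly what a mechanical checker would flush
out: a few lines of Python suffice to search for a counterexample to the
density condition. The concrete numbers below are illustrative, not the
paper's:

```python
def find_violation(approx, preferred, min_value, max_value):
    """Search ApproxValues U {MinValue, MaxValue} for a pair v1 > v2
    with no p in `preferred` satisfying v1 >= p >= v2."""
    universe = sorted(set(approx) | {min_value, max_value})
    for v1 in universe:
        for v2 in universe:
            if v1 > v2 and not any(v1 >= p >= v2 for p in preferred):
                return (v1, v2)
    return None

approx = [-1.0, -0.5, 0.0, 0.5, 1.0]
preferred = {-1.0, -0.5, 0.0, 0.5}   # excludes max(approx) and MaxValue
print(find_violation(approx, preferred, -2.0, 2.0))  # (2.0, 1.0)
```

With MaxValue (2.0) and the greatest element of ApproxValues (1.0) both
excluded from the preferred set, the search returns exactly that pair — the
shape of the counterexample described above.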

3. p615, top. In the Exception_Spec signature, a (') (prime) suffix is
omitted from a variable `value'. This would have been caught by a
simple Z syntax checker, because `value' without a prime suffix is not
declared anywhere in the context of this signature.

To: QED Group <>
Subject: Jack Jansen: Broadcast of OpenMath workshop
Date: Thu, 09 Feb 1995 00:02:37 +0900
From: Yozo Toda <>

I just found out today about the "OpenMath" activity.
It seems they aim at making a standard for the language and structure
of computer algebra systems.
Does anyone know about this?

-- yozo.

------- Forwarded Message

Subject: Broadcast of OpenMath workshop
Organisation: Multi-media group, CWI, Kruislaan 413, Amsterdam
Date: Tue, 07 Feb 1995 15:18:33 +0100
From: Jack Jansen <>

As announced last month, next thursday and friday (February 9 and 10)
we'll be broadcasting the OpenMath workshop. OpenMath is a protocol
for doing mathematics over the internet, see for

We'll be sending audio (idvi) and as-slow-as-possible video of the
slides, at the following times:
thursday: 11.00-17.00 MET (GMT+1)
friday: 09.00-17.00

For MBone-related questions/remarks/etc contact me at or +31 20 5924098. Don't ask me about the subject
matter, though, but contact instead.
- --
Jack Jansen | If I can't dance I don't want to be part of | your revolution -- Emma Goldman
uunet!!jack G=Jack;S=Jansen;O=cwi;PRMD=surf;ADMD=400net;C=nl

------- End of Forwarded Message

From: (David McAllester)
Date: Sun, 30 Apr 95 14:44:08 EDT
In-Reply-To: Randy Pollack's message of Thu, 27 Apr 1995 20:22:37 +0200
Subject: Paper on reflection

[The constructed proof of the four color theorem] is clearly big, but not
clearly infeasible.

I interpret John Harrison as saying, among other things, that one can
always write complex decision procedures in a style in which they
output proofs. It seems to me that the real issue is one of constant
factors. A constant factor of 100 can be the difference between
feasible and infeasible. It seems very likely to me that in the four
color theorem an HOL tactic would run a factor of 100 or more slower
than a C program (probably more like a 1000). Of course to use a C
program in a foundational verifier one would have to verify C (as John
notes). Since the four color theorem is by now fairly old, it may be
feasible to solve it with a slow program. But other "black-box"
theorems may be more challenging, e.g., recent results on the
non-existence of certain quasi-groups. It seems to me that
cutting-edge black-box theorems will always be challenging for a
foundational prover unless it can verify and run C code or some
imperative equivalent.
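The idea of a decision procedure written "in a style in which it outputs
proofs" can be illustrated with a toy certificate-producing checker. The
example (compositeness witnessed by a factor) is mine, chosen only to show
the solver/checker asymmetry that drives the constant-factor worry:

```python
def decide_composite(n):
    """Decide whether n is composite, and also emit a certificate
    (a nontrivial factor) that an independent checker can verify
    without repeating the search."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return True, d          # certificate: a witness factor
    return False, None              # no cheap certificate for 'prime' here

def check_certificate(n, cert):
    """Independent, much simpler checker: one division, no search."""
    return cert is not None and 1 < cert < n and n % cert == 0

answer, cert = decide_composite(91)
print(answer, cert, check_certificate(91, cert))  # True 7 True
```

The solver does the expensive search; the checker only has to replay the
certificate. The constant-factor question is how much overhead producing
and checking such certificates adds inside a foundational prover.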

I am not saying that these black-box theorems are important to the QED
effort. Rather, they seem like the primary place where computational
extensions, such as reflection, might pay off.


From: John Harrison <>
Date: Mon, 01 May 1995 10:48:18 +0100
Subject: Re: Paper on reflection

David McAllester writes:

| It seems to me that cutting-edge black-box theorems will always be
| challenging for a foundational prover unless it can verify and run C code
| or some imperative equivalent.

I think this is true. On the other hand, technological improvements might mean
that we only lag a few years behind the state of the art. Besides, it doesn't,
as David notes, seem very relevant to QED, which is largely concerned with
"textbook" mathematics (and as such is already way behind the state of the
art). In verification applications the problem is more serious, since here
"brute-force" proof techniques are pretty common.


From: Lawrence C Paulson <>
Date: Mon, 01 May 1995 12:40:14 +0100
Subject: Isabelle users workshop

18-19 September 1995 [NOTE REVISED DATES]
Cambridge, UK

This workshop is to allow Isabelle users to exchange techniques and results.
Both finished work and work in progress can be reported. There will be a
programme of short talks and possibly demonstrations. A printed proceedings
will be produced provided enough papers are submitted. There will be no
formal refereeing; within reason, all talks and papers will be accepted.
There will also be time for informal discussions.

Administration will be minimal in order to keep the costs down. However, we
expect to be able to arrange College accommodation for participants at a cost
of less than £50 (UK) per day, inclusive of all meals. As an alternative, we
can supply a list of hotels. There will be a small registration fee.

The workshop immediately precedes HOA '95 and LOPSTR '95.

For information on how to get to the Computer Laboratory, please see the URL

If you are interested in attending, please let me know. Send your title and
abstract if you would like to give a talk or demo. We hope to compile the
proceedings electronically from papers submitted in Postscript format; please
suppress page numbering.

The deadline for abstracts is 1 June 1995, and the deadline for proceedings
papers is 1 July.

Lawrence C Paulson, University Lecturer
Computer Laboratory, University of Cambridge,
Pembroke Street, Cambridge CB2 3QG, England
Tel: +44(0)21223 334623 Fax: +44(0)1223 334678

Date: Mon, 01 May 1995 10:31 -0700 (PDT)
Subject: Re: Paper on reflection

>Subject: Paper on reflection
>Date: 4/30/95 11:44 AM
>... Of course to use a C
>program in a foundational verifier one would have to verify C (as John
>notes). ... David

Not to mention having to verify the operation of the hardware on which the C
code is running (as per the recent Pentium flap).

Rob Harris

Date: Tue, 02 May 1995 07:41 -0700 (PDT)
Subject: Re[2]: QED Workshop.
To:, shepherd_david/

> Indeed, the Uribe & Stickel paper referenced below compares BDDs (which are
> taking the world by storm in hardware verification) and the rather older
> Davis-Putnam algorithm, and concludes that sometimes one is better,
> sometimes the other, depending on the kind of problem considered.
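For reference, the Davis-Putnam procedure mentioned in the quote survives
today mostly in its splitting (DPLL) form; a minimal sketch, illustrative
only and not the implementation Uribe & Stickel compared:

```python
# Clauses are sets of integer literals; a negative integer is a
# negated variable. dpll() decides propositional satisfiability.

def dpll(clauses):
    """Return True iff the clause set is satisfiable."""
    if not clauses:
        return True                     # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return False                    # empty clause: conflict
    for c in clauses:                   # unit propagation
        if len(c) == 1:
            return dpll(assign(clauses, next(iter(c))))
    lit = next(iter(clauses[0]))        # split on an arbitrary literal
    return dpll(assign(clauses, lit)) or dpll(assign(clauses, -lit))

def assign(clauses, lit):
    """Simplify under lit=True: drop satisfied clauses, shrink the rest."""
    return [c - {-lit} for c in clauses if lit not in c]

# (p or q) and (not p or q) and (not q or r): satisfiable with q=r=True.
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))  # True
print(dpll([{1}, {-1}]))                 # False
```

BDDs, by contrast, build a canonical representation of the whole function
rather than searching for one satisfying assignment, which is why the two
techniques win on different kinds of problem.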

One solution may be to create probabilistic proofs for problems which are beyond
the capabilities of an otherwise 'best' implementable proof engine. A criterion
for weighing secondary evidence would be necessary - i.e. the use of a
(to-be-proven) conjecture which leads to a result proven by other means would
raise the probability that a proof of the conjecture exists. An unproven
conjecture may then be 'proven' at a significance level of 0.98, for example.
This idea may be adaptable to extremely large problems, but it would require
estimating a probability of truth where an absolute proof can, in principle at
least, be rigorously obtained. This may not be acceptable. I seem to recall
reading something along the lines of such proofs, but I find no reference at
hand. It may have been something in _Mathematics_Magazine_.

Though at last year's CAV meeting in Stanford (CAV is heavily automated-proof
and BDD etc. orientated), several people were saying that BDDs were running
out of steam with the size of problems people wanted to do (we're talking
several days of CPU time on some of the biggest machines that Sun have, etc.)
and that they were wanting to look into theorem provers, to see if that
approach could be layered on top of BDDs to handle the sorts of
abstractions etc. that are required to get BDD systems down to manageable