Re[2]: QED Workshop.

ma_friesel@ccmail.pnl.gov
Tue, 02 May 1995 07:41 -0700 (PDT)

> Indeed, the Uribe & Stickel paper referenced below compares BDDs (which are
> taking the world by storm in hardware verification) and the rather older
> Davis-Putnam algorithm, and concludes that sometimes one is better, sometimes
> the other, depending on the kind of problem considered.

One solution may be to create probabilistic proofs for problems that are beyond
the capabilities of an otherwise 'best' implementable proof engine. A criterion
for weighing secondary evidence would be necessary - i.e. if a (yet to be
proven) conjecture leads to a result that is proven by other means, that would
raise the probability that a proof of the conjecture exists. An unproven
conjecture might then be 'proven' at a significance level of 0.98, for example.
This idea may be adaptable to extremely large problems, but it would require
settling for an estimated probability of truth even where an absolute proof
can, in principle at least, be rigorously obtained. That may not be acceptable.
I seem to recall reading something along these lines, but I have no reference
at hand. It may have been something in _Mathematics_Magazine_.
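
To make that a bit more concrete, here is a minimal sketch (Python, purely
illustrative - the function and the likelihood numbers are my own invention,
not taken from the paper or any existing system) of treating each consequence
of a conjecture that gets proven by other means as a Bayesian update on the
probability that a proof of the conjecture exists:

    # Hedged sketch: Bayesian updating of confidence in an unproven
    # conjecture.  All likelihood figures are invented for illustration.
    def update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return P(conjecture provable | evidence) via Bayes' rule."""
        numerator = p_evidence_if_true * prior
        denominator = numerator + p_evidence_if_false * (1.0 - prior)
        return numerator / denominator

    belief = 0.5  # start agnostic about the conjecture
    # Each consequence of the conjecture proven by other means counts as
    # a piece of secondary evidence; likelihoods below are illustrative.
    for p_if_true, p_if_false in [(0.9, 0.5), (0.95, 0.4), (0.9, 0.3)]:
        belief = update(belief, p_if_true, p_if_false)

    print("confidence that the conjecture is provable: %.2f" % belief)

Whether the resulting number deserves to be called a proof at some
significance level is, of course, exactly the question.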

Though at last year's CAV meeting at Stanford (CAV is heavily oriented toward
automated proof, BDDs, etc.) several people were saying that BDDs were running
out of steam on the size of problems people wanted to do (we're talking
several days of CPU time on some of the biggest machines that Sun sells, etc.)
and that they wanted to look into theorem provers, to see if that approach
could be layered on top of BDDs to handle the sorts of abstractions required
to get BDD systems down to manageable sizes.