I very much disagree. Formal systems serve the math community just
about as well as they possibly could. But they do not serve the
engineering, education, law, business, and medicine communities very
well. I believe they can and they should, and QED is a way to make
it happen.
I recently heard of some research (sorry... I don't have a citation
handy) showing that the more a community as a whole understands
something, the easier it is to teach that something to individuals
in the community. Hence if more folks knew about formal systems, it
would be easier for students (and engineers, lawyers...) to pick it up.
By way of disclaimer, I should explain that my background in formal
systems is limited to a few undergrad courses in logic, automata
theory, analysis of programs, symbolic computation, and topology,
plus independent study of interesting projects like Larch on the net
and books like Hofstadter's G.E.B.
Some folks have mentioned Mathematica as an example of the way things
should work. I would suggest that Mathematica is yet another application
of basically broken technology. If I were using a symbolic mathematics
tool, I'd use the oldest one I could get my hands on: that's the one
with the most bugs worked out of it.
I saw Stephen Wolfram speak soon after the release of Mathematica. At
the end of the presentation, I asked him what methods he had used to
verify the correctness of Mathematica. He described the same sort of
Software Quality Assurance techniques found elsewhere in the software
industry: regression testing, visual inspection, monkey testing, etc.
If I came out with a proof of Fermat's Last Theorem that consisted of
135,000 cases where it works, how many of you would accept that as
proof? And yet that's how large, complex systems are usually built
these days.
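To make the analogy concrete, here's a toy sketch (entirely my own, in Python; it has nothing to do with Mathematica's actual QA process): exhaustively checking Fermat's equation for exponent 3 over a small range passes every one of 125,000 cases, yet says nothing about the infinitely many cases left unchecked.

```python
# Check Fermat's Last Theorem for exponent 3 over a small finite range.
# Passing every case here is evidence, not proof: the claim is about
# ALL positive integers, and we only looked at 50**3 of them.
cases = 0
counterexamples = []
for a in range(1, 51):
    for b in range(1, 51):
        for c in range(1, 51):
            cases += 1
            if a**3 + b**3 == c**3:
                counterexamples.append((a, b, c))

print(cases, len(counterexamples))  # 125000 cases checked, 0 counterexamples
```

A regression suite is exactly this kind of finite case check, which is why it can never stand in for a proof of correctness.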
As folks get more comfortable with computers and technology, they will
be used more and more aggressively in all aspects of our lives. We
drive around in cars that nobody understands without the aid of
automated diagnostic equipment. We use sophisticated Tomography and
Magnetic Resonance Imaging to diagnose medical conditions.
Some folks will say that ultimately these systems are subject to
nonlinear effects, so that formal systems would provide a false sense
of security at best.
But there are many aspects of these systems and many others that could
be readily checked by formal methods, saving countless hours and
dollars.
Some folks have wondered where the funding for this project will come
from. Money for mathematical logic is scarce, apparently. But look
at all the money thrown at Total Quality Assurance programs and such!
Anyway... I have considerable experience with the World-Wide Web
and internet publishing and dissemination of information in general.
I'd like to see QED evolve as a shared knowledge base over the Web
in the next few years. It's a very good match for the distributed
database and user interface (i.e. visualization and linguistics)
issues involved in this project -- not that the database and
user interface work is done -- but the WWW is an ideal vehicle
for experimentation and deployment of the technology. Plus, it's
a popular wave to ride on!
So you can perhaps get the Web enthusiasts to build you a distributed
database and a user interface. From reading over the list, that leaves
you folks to decide on a logic system.
I like the "if it won't fit on the front side of a page, it's too
complicated" maxim. PRA and FS0 seem to meet this criterion, though
I'd like to see an even more explicit definition given. In the
definition quoted by Dr. Boyer, they begin with the specific symbols 0
and '. Then come f0, f1, and so on for +, *, and the rest of the
p.r. functions. (I need to review what f2, f3, etc. would be.) But
then they start with "terms t...", where t is a meta-symbol, without
making the object/meta distinction sufficiently clear, in my book.
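To pin the distinction down for myself, here's how I'd sketch it in Python (my own illustration, not PRA's official presentation): the symbols 0, ', and the fi are object-language data, while the "t" in "terms t..." is a meta-variable, i.e. a variable of the surrounding language ranging over such data.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Zero:
    """The object-language constant symbol 0."""

@dataclass
class Succ:
    """The object-language successor symbol '."""
    arg: object

@dataclass
class App:
    """Application of a p.r. function symbol f0, f1, ... (e.g. + and *)."""
    index: int          # which fi
    args: List[object]

# Meta-level: saying "terms t, u, ..." just means t and u are Python
# variables ranging over Zero/Succ/App values.
t = Succ(Succ(Zero()))  # the object-language term 0'', i.e. the numeral 2
u = App(0, [t, t])      # hypothetically reading f0 as +: the term 2 + 2
```

The meta-symbol t never appears inside a term; it only names one from outside.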
On the other hand, this project is as much about communication as it
is about machine verification. Toward that end, the "little theories"
approach looks like a better way to build "reusable code," to use a
term from the computer science vernacular.
One critical issue seems to be support for diagonal techniques
such as Goedel numbering. I gather that PRA, FS0, and perhaps
Nqthm and HOL are able to acquire new inference rules through such
diagonal techniques. But I'd have to work through an example by hand
to really believe it.
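Here's the sort of hand example I have in mind, as a toy Python sketch (a numbering scheme of my own invention, not the one any of these systems actually uses): once every symbol string has a unique number, the system can talk about its own syntax, which is the prerequisite for the diagonal constructions.

```python
def primes():
    """Yield 2, 3, 5, ... by trial division (plenty fast for short strings)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

# Toy symbol codes; a real system would cover its whole alphabet.
CODES = {"0": 1, "'": 2, "=": 3, "(": 4, ")": 5, "+": 6}

def godel_number(s):
    """Encode the symbol string s as the product of p_i ** code(s[i])."""
    g = 1
    for p, ch in zip(primes(), s):
        g *= p ** CODES[ch]
    return g

# By unique factorization the encoding is injective, so the original
# string can be recovered from its number:
#   godel_number("0'") == 2**1 * 3**2 == 18, while "'0" encodes as 12.
```

Since arithmetic on these numbers is itself primitive recursive, statements about formulas become statements about numbers inside the same system.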
That brings me to another point: I'm not sure if you're interested
in reaching a tremendously wide audience at this point, but for
an untrained person like myself, it helps tremendously to see concrete
examples that demonstrate the issues.
For example, there was a claim that constructivists don't accept the
Mean Value theorem (in some context that I can't recall precisely).
It would be most instructive to a person like myself who is not
intimately familiar with the terms "constructivist" and
"intuitionistic" to see this example laid out in gory detail.
Unfortunately, a reference to a Journal article probably wouldn't help
me too much. I suppose I could make a trek to the library (but I
probably wouldn't... call me lazy), but I think that even if I did,
the one relevant article would require that I read several others,
and probably some books to familiarize myself with the terminology.
Perhaps that's just the nature of the beast. But I suspect that the
essential part of the argument could be separated from the surrounding
jargon, if anyone with a firm grasp of the topic had the time,
patience, and motivation.
Well... after browsing the archive of this list and "surfing" the
resources available on the net, I have drafted something of a survey
of what I found. It is little more than an annotated bibliography
of QED and related topics on the net. But for what it's worth, see:
http://www.hal.com/~connolly/survey/QED.html
Dan
Daniel W. Connolly "We believe in the interconnectedness of all things"
Software Engineer, Hal Software Systems, OLIAS project (512) 834-9962 x5010
<connolly@hal.com> http://www.hal.com/%7Econnolly/index.html