Let me point you to the following recent blog post from Prof Keith Devlin, entitled “What is a proof, really?”
After a lifetime in professional mathematics, during which I have read a lot of proofs, created some of my own, assisted others in creating theirs, and reviewed a fair number for research journals, the one thing I am sure of is that the definition of proof you will find in a book on mathematical logic or see on the board in a college level introductory pure mathematics class doesn’t come close to the reality.
For sure, I have never in my life seen a proof that truly fits the standard definition. Nor has anyone else.
The usual maneuver by which mathematicians leverage that formal notion to capture the arguments they, and all their colleagues, regard as proofs is to say a proof is a finite sequence of assertions that could be filled in to become one of those formal structures.
It’s not a bad approach if the goal is to give someone a general idea of what a proof is. The trouble is, no one has ever carried out that filling-in process. It’s purely hypothetical. How then can anyone know that the purported proof in front of them really is a proof?
Click the link to read the rest of the article. Also read the comments below the article to see what conversation has already been generated.
I won’t be shy in saying that I disagree with Keith Devlin. Maybe I misunderstand the subtle nuance of his argument. Maybe I haven’t done enough advanced mathematics. Please help me understand.
Devlin says that proofs created by the mathematical community (on the blackboard, and in journals) are informal and non-rigorous. I think we all agree with him on this point.
But the main point of his article seems to be that these proofs are non-rigorous and can never be made rigorous. That is, he’s suggesting that there could be holes in the logic of even the most vetted & time-tested proofs. He says that these proofs need to be filled in at a granular level, from first principles. Devlin writes, “no one has ever carried out that filling-in process.”
The trouble is, there is a whole mathematical community devoted to this filling-in process. Many high-level results have been rigorously proven going all the way back to first principles. That’s the entire goal of the metamath project. If you haven’t ever stumbled on this site, it will blow your mind. Click on the previous link, but don’t get too lost. Come back and read the rest of my post!
I’ve reread his blog post multiple times, along with the articles he linked to. And I just can’t figure out what he could possibly mean by this. It sounds like Devlin thoroughly understands what the metamath project is all about, and he’s very familiar with proof-checking and mathematical logic. So he definitely isn’t writing his post out of ignorance; he’s a smart guy! Again, I ask, can anyone help me understand?
I know that a statement is only proven true relative to the axioms of the formal system. If you change your axioms, different results arise (like changing Euclid’s Fifth Postulate or removing the Axiom of Choice). And I’ve read enough about Gödel to understand the limits of formal systems. As mathematicians, we choose to make our formal systems consistent at the expense of completeness.
Is Devlin referring to one of these things?
I don’t usually make posts that are so confrontational. My apologies! I didn’t really want to post this to my blog. I would have much rather had this conversation in the comments section of Devlin’s blog. I posted two comments but neither one was approved. I gather that many other comments were censored as well.
Here’s the comment I left on his blog, which still hasn’t shown up. (I also left one small comment saying something similar.)
You said you got a number of comments like Steven’s. Can you approve those comments for public viewing? (one of those comments was mine!)
I think Steven’s comment has less to do with computer *generated* proofs than with computer *checked* proofs, like those produced by the http://us.metamath.org/ community.
There’s a big difference between the proof of the Four Color Theorem, which doesn’t really pass our “elegance” test, and a proof like the one found here: http://us.metamath.org/mpegif/efipi.html
A proof like the one I just linked to is done by humans, but is so rigorous that it can be *checked* by a computer. For me, it satisfies both my hunger for truth AND my hunger to understand *why* the statement is true.
I don’t understand how the metamath project doesn’t meet your criteria for the filling in process. I’ll quote you again, “The trouble is, no one has ever carried out that filling-in process. It’s purely hypothetical. How then can anyone know that the purported proof in front of them really is a proof?”
What is the metamath project, if not the “filling in” process?
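For readers who haven’t seen a fully machine-checked proof, here is a minimal sketch in Lean 4. (A caveat: Metamath uses its own proof format, not Lean; Lean is just a comparable proof assistant I’m using for illustration of what “filled in to first principles” looks like in practice.)

```lean
-- A fully formal, machine-checked proof sketch in Lean 4.
-- Every inference is verified by the proof checker's kernel;
-- nothing is left "to be filled in later".

-- For concrete numerals, the claim reduces to a computation:
theorem two_add_two : 2 + 2 = 4 := rfl

-- Even "obvious" steps must cite a previously proved result explicitly:
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The point is that each step either computes or names an already-verified lemma, which is exactly the “filling-in” process Devlin says no one has carried out.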
If anyone wants to continue this conversation here at my blog, uncensored, please feel free to contribute below :-). Maybe Keith Devlin will even stop by!
I’m getting the impression that Devlin is concerning himself chiefly with the “facts on the ground” about proofs – the difference between the official definition and the use of the word in practice. When he says no one has ever filled in all the details, I interpret that as hyperbole.
The overall message is that (a majority of?) proofs are provided with details elided and are never intended to be fully specified. This makes sense from the perspective of expert mathematicians – “trivial” details are a waste of ink and space. Presumably, for a student who comes in thinking proofs must be fully complete, there’s a huge culture shock when they begin studying the kind of proofs published in journals, and when they’re expected to write proofs that same way. There are a lot of conventions and some new skills to learn.
I also thought it was interesting that in the comments, there was some discussion of proofs as dual-purposed: not only establishing accuracy, but also providing new mathematical insights. I can see where many computer verifications (and some formal verifications) might be considered, by a subset of readers, uninteresting or even irrelevant. After all, they only increase confidence in a result; they don’t show you what it means.
That’s my best guess at a summary, anyway.
Yes, I think we can agree with Devlin that most proofs don’t rigorously fill in the gaps. But I don’t think he’s being hyperbolic; he seems to be quite serious when he claims no one has ever done the filling-in process. Devlin writes, “It’s purely hypothetical. How then can anyone know that the purported proof in front of them really is a proof?” And “For sure, I have never in my life seen a proof that truly fits the standard definition. Nor has anyone else.”
Yet I linked to an entire website with such proofs. Major results, not just trivial ones, have been made rigorous in ZFC.
I like your comments about the culture shock experienced by people new to the mathematical community, and how we look to proofs for elegance and insight, not just utility and truth. I completely agree.
Merry Christmas, by the way! 🙂
I think what Keith is saying (and it is subtle) is that math is ultimately based on language, and language or “meaning” is inexact. Even if you only employ logical notation, you are using symbols whose precise meaning must be agreed upon by everyone; it’s a sort of giant recursive or tautological system, ultimately based on assumptions that can’t be fully tested. In some ways it does tie back to Gödelian limits on formal systems: we can’t get “outside” the system well enough to fully test the system (that we are a part of and create). Just as changing a single axiom in Euclidean geometry leads to other geometries, all math proofs rest on axiomatic assumptions that just might vary in a different universe somewhere. Universality (and thus “proof”) can’t be established in a deeply meaningful way; and even if you had a completely rigorous proof, you couldn’t know it!
Language isn’t exact, but mathematical language *is* exact; formal systems exist to make language exact. You say the precise meaning must be agreed on by everyone. But don’t we all agree? So what’s the problem?
If the basic rules of logic can be broken (like the rules of replacement) then it’s really not worth having this discussion! We have to assume the rules of logic are true a priori, and those let us evaluate the truth of our more flexible formal systems inside mathematics. You can change axioms and reason to new conclusions. I’m a little out of my area of expertise, though, so please feel free to correct me!
If this really is Devlin’s argument, then it is subtle indeed. I doubt it’s his point, though, because he makes no reference to Gödel’s incompleteness theorems.
I actually like Devlin’s main points about the role of mainstream mathematical proofs. I guess I’m just getting hung up on some of the things he said in those opening paragraphs. I disagree, and I just can’t get past that!
In any proof there is absolutely no need to “fill in ALL the details,” since intermediate assertions are references to already proved (and agreed-upon) results. The only reason school proofs need to be “complete” is that the students are beginners, and at that level the complete proofs are not going to be unreadably long. Devlin’s arguments are often somewhat strange. He has written that “math problems do not have a right answer,” but his argument is really that real-world problems do not have a right answer. Any mathematical model of the real world has a “correct” answer, mathematically speaking, though it may of course be of no practical use.
I am now following your blog, thanks to a link on an email from the Devlin blog.
Great comments. Thanks for stopping by, Howard!
“Meaning” always has a subjective element, so even bare-bones logical notation can’t be rigorously assured to hold identical meanings for everyone using it. Maybe intuition-based foundational terms like “point,” “line,” “angle,” “equals,” “number,” “set,” etc. “mean” the identical thing to all of us, but maybe at a deep level they DON’T. Intuition can be wrong or varied, and what we call “proofs” are essentially either tautologies stemming from circular definitions, or else remain open, however slightly, to disproof.
I’ll try an analogy on you — it’s NOT great 😦 but perhaps will get a point across:
An old conundrum in perceptual psychology is whether two people seeing what they call the color “red” actually see the same thing, or whether what you call “red” is what I would call “blue” if I could see it with your sight. Essentially, there’s no way to know. Even though we spend our lives thinking we agree on what “red” is, we don’t know if that agreement is real.
This whole question reminds me a bit of the divide between Platonists and non-Platonists: many mathematicians are now philosophical non-Platonists, but most admit that when they DO mathematics they “operate” as if they are Platonists. I think (assume?) Keith would acknowledge that he works in the day-to-day math-world as if a Platonist, even while he’d argue philosophically that mathematics is NON-Platonist.
I’m a Platonist through and through. But I acknowledge that many mathematicians are non-Platonists now, as you say. I understand where they’re coming from, I just disagree :-).
Funny to “do your daily work” as an idealist when in fact you’re not. Seems like the opposite of what non-mathematicians do. Non-mathematicians “do their daily work” in the grime of reality while still holding to some ideal. But the non-Platonist mathematician operates under ideal conditions on a daily basis while holding that reality really doesn’t represent that ideal. It’s a little ironic!
Thanks for the helpful contributions to the conversation, Shecky R.
true by construction is also conveniently powerful
third year into an undergrad degree in math I switched to dual biochem and theology
maybe wolfram [alpha] will license photomath for everyone else
2 + 2 = 5 … for very large values of 2
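That joke actually has a numerical basis worth spelling out (my own illustration, not from the thread): if each “2” is really a value like 2.4 that merely *displays* as 2 after rounding, the sum can round to 5.

```python
# "2 + 2 = 5 ... for very large values of 2":
# each operand rounds to 2 on its own, but their sum rounds to 5.
a = b = 2.4
print(round(a), round(b))   # each rounds to 2
print(round(a + b))         # 2.4 + 2.4 = 4.8, which rounds to 5
```

So the “error” is just premature rounding: round the inputs and you get 2 + 2; round the exact sum and you get 5.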