Wikipedia:Reference desk/Archives/Mathematics/2013 November 2

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 2

How confident are we that basic axiomatic systems are consistent?

How confident are we that the basic axiomatic systems mathematicians work with today are consistent? How hard is it to imagine that, 4000 years from now, people will say that the poor people of the third millennium spent their time exploring primitive axiomatic systems, each of which was totally inconsistent and could, though these people did not know it, be used to prove both assertions and their opposites?

Is that a reality within the realm of something we can imagine? I mean, I can imagine people saying that 4000 years from now. But what about more rationally? Is it within the realm of the imaginable that we are working within deeply inconsistent axiomatic systems, through some benightedness of which we are totally unaware? 212.96.61.236 (talk) 01:46, 2 November 2013 (UTC)[reply]

Well, this is a question on which reasonable people can disagree. Edward Nelson thinks that even Peano arithmetic is inconsistent, and he's a serious, respected mathematician. In my view you might as well ask whether you're a brain in a vat. However, this in itself is a major slippage from the old view that the consistency of mathematics was an apodeictic certainty, even more certain than that you're not a brain in a vat. --Trovatore (talk) 02:02, 2 November 2013 (UTC)[reply]
I doubt Nelson has a proof of that. Bubba73 You talkin' to me? 02:06, 2 November 2013 (UTC)[reply]
Not yet :-) --Trovatore (talk) 02:55, 2 November 2013 (UTC)[reply]
I'm not holding my breath. Bubba73 You talkin' to me? 03:10, 2 November 2013 (UTC)[reply]
Aw, isn't that cute! Brain-in-a-vat thinks it has breath to hold! --Trovatore (talk) 03:28, 2 November 2013 (UTC) [reply]
Consistency of even the axiomatic approach and logic itself is essentially a matter of faith: confidence in it is built empirically, and does not correspond to any identifiable part of physics. Logic is a construct that we have evolved/learned to understand and manage the world, and only has applicability to concepts and categories of the models that we have constructed (not the "real" world itself). Just as Newtonian physics was found to be inaccurate in an unexpected but fundamental way with the discovery of special relativity, logic may have limited applicability in its own domain. Add to this that axiomatic systems are varied, that consistency is difficult to prove even for the simplest systems of axioms, and that we need more than the simplest axiomatic systems to deal with most useful mathematics (which has brought with it logical incompleteness), and I'd suggest that it would be foolish to claim a high level of confidence at the 4000-year level. — Quondum 16:38, 2 November 2013 (UTC)[reply]
I get this, but it is interesting as an 'act of faith' because it is very falsifiable. Someone could conclusively prove the inconsistency of an axiomatic system, and that would kind of be the 'last word' on that system, no? At the moment it seems that such a proof would have to be inordinately nuanced or extensive (since the systems we work with seem consistent enough), but once it was made or discovered, it would no longer allow any mathematician who could not point out a flaw in it to practice rigorous mathematics as normal. In this sense it is quite different from matters of faith. I found the rest of your comment highly interesting. 212.96.61.236 (talk) 19:40, 2 November 2013 (UTC)[reply]
Right - if it is inconsistent, that can be proven. The statement that it is consistent is falsifiable. Bubba73 You talkin' to me? 19:42, 2 November 2013 (UTC)[reply]

See also here:

"So I deny even the existence of the Peano axiom that every integer has a successor. Eventually we would get an overflow error in the big computer in the sky, and the sum and product of any two integers is well-defned only if the result is less than p, or if one wishes, one can compute them modulo p. Since p is so large, this is not a practical problem, since the overflow in our earthly computers comes so much sooner than the overflow errors in the big computer in the sky.

However, one can still have `general' theorems, provided that they are interpreted correctly. The phrase `for all positive integers' is meaningless. One should replace it by: `for finite or symbolic integers'. For example, the statement `(n + 1)^2 = n^2 + 2n + 1 holds for all integers' should be replaced by: `(n + 1)^2 = n^2 + 2n + 1 holds for finite or symbolic integers n'. Similarly, Euclid's statement `There are infinitely many primes' is meaningless. What is true is: if p1 < p2 < ... < pr < p are the first r finite primes, and if p1p2...pr + 1 < p, then there exists a prime number q such that pr + 1 <= q <= p1p2...pr + 1. Also true is: if pr is the `symbolic rth prime', then there is a symbolic prime q in the discrete symbolic interval [pr + 1, p1p2...pr + 1]. By hindsight, it is not surprising that there exist undecidable propositions, as meta-proved by Kurt Gödel. Why should they be decidable, being meaningless to begin with! The tiny fraction of first-order statements that are decidable are exactly those for which either the statement itself, or its negation, happen to be true for symbolic integers. A priori, every statement that starts "for every integer n" is completely meaningless." Count Iblis (talk) 20:31, 2 November 2013 (UTC)[reply]
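As a quick sanity check on the quoted finitary reading of Euclid's theorem, here is a small Python sketch (the helper names are purely illustrative) that finds, for the first r primes, a prime q with pr + 1 <= q <= p1p2...pr + 1:

```python
def is_prime(n):
    """Trial division; fine for the small numbers used here."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def finitary_euclid(r):
    """Return a prime q with p_r + 1 <= q <= p_1*p_2*...*p_r + 1."""
    primes, n = [], 2
    while len(primes) < r:          # collect the first r primes
        if is_prime(n):
            primes.append(n)
        n += 1
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1           # p1*p2*...*pr + 1
    # its smallest divisor greater than 1 is prime, and cannot be any of p1..pr
    q = next(d for d in range(2, product_plus_one + 1) if product_plus_one % d == 0)
    assert primes[-1] + 1 <= q <= product_plus_one
    return q

print(finitary_euclid(5))  # 2*3*5*7*11 + 1 = 2311, which happens to be prime itself
```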

The problem (or, at least, one problem) with this ultrafinitistic philosophy is that it fails to give an explanation for the observed consistency (that is, failure to find an inconsistency) in mathematical theories.
Now, you could say that that doesn't need an explanation — there may be an inconsistency but we just haven't found it yet. In other words, you reduce it to a "brute fact". But any physical observations could equally be brute facts, so that's not terribly satisfying.
On the other hand, mathematical realism (sometimes called Platonism) explains the observations perfectly well. --Trovatore (talk) 20:37, 2 November 2013 (UTC)[reply]
Right, that's why I said that an inconsistency proof of basic axiomatic systems would have to be extraordinarily nuanced or complicated: it couldn't be simple at all, since on a simple scale the systems we work with seem very consistent.
On the other hand I have a question for you guys: one respondent above (Bubba73) said that if it is inconsistent, this can be proven. Is this necessarily the case? Maybe an axiomatic system could be inconsistent without either it, or the system with which it is being examined, being strong enough to prove this inconsistency? After all, we no longer expect to be able to 'prove' everything that is true (incompleteness theorem). So I think the person above (Bubba73) might be wrong in saying that if it is inconsistent, this can be proven. 212.96.61.236 (talk) 00:12, 3 November 2013 (UTC)[reply]
If an axiomatic system is inconsistent, then by definition, there is a proof in that system of both a statement and its negation. In principle, those proofs can be found, and therefore, yes, the inconsistency can be proved. In practice, it is possible that the proofs may not be humanly discoverable (they might even be too long to be expressed in the observable universe). --Trovatore (talk) 00:51, 3 November 2013 (UTC)[reply]
If it is inconsistent then you can derive a contradiction from the axioms (in a finite number of steps). That is what it means. Bubba73 You talkin' to me? 01:29, 3 November 2013 (UTC)[reply]
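To make the "finite number of steps" point concrete, here is a rough Python sketch of why inconsistency is semi-decidable. The helpers axioms and derive_step are hypothetical stand-ins for a real proof system; the point is only that the loop halts exactly when some statement and its negation have both been derived.

```python
from itertools import count

def search_for_inconsistency(axioms, derive_step):
    """Enumerate ever-larger sets of derivable formulas.

    `axioms` is a set of formulas; `derive_step(known)` is assumed to return
    every formula obtainable from `known` by one application of the inference
    rules.  Negation is represented as the tuple ("not", phi).  If the system
    is inconsistent, the loop eventually finds phi and ("not", phi) together
    and halts; if it is consistent, the search simply never terminates.
    """
    known = set(axioms)
    for depth in count(1):
        known |= derive_step(known)
        for phi in known:
            if ("not", phi) in known:
                return depth, phi   # a contradiction was derived at this depth
```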
One thing that seems to be missing from this discussion is Russell's paradox. Before it was discovered, set theory seemed to be a fairly natural extension of logic and therefore on sound footing. Basing the rest of mathematics on set theory was seen as a way of making it more rigorous. People probably reasoned, as above, that a contradiction was extremely unlikely to be found even if one existed. There were high hopes that a proof of consistency would be found. So when a contradiction was found, in particular one that could be expressed in a single paragraph, it was very unexpected. Russell found (in programming lingo) a rather kludgy work-around to remove the paradox, improved upon since but still a bit of a kludge. Which is why a consistency proof seemed more important, and why Gödel's theorem was more of a problem, than it seems now. Right now the status is "so far so good", operating on the assumption that if there were a contradiction then it would have been found by now. But by that reasoning there is a higher probability that set theory will be proven inconsistent than that the Riemann hypothesis will be proven. If there is a contradiction, what will probably happen is what happened with Russell's paradox: someone will weaken the axioms enough so that the contradiction disappears while keeping enough so that set theory still works as a basis for the rest of mathematics. So accountants and engineers can sleep peacefully in the knowledge that they won't wake up to find 2+2=5 or that square wheels work better than round ones. --RDBury (talk) 09:29, 3 November 2013 (UTC)[reply]
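For what it's worth, the contradiction really does fit in a paragraph. Here is a minimal sketch in Lean, where the relation mem and the object R are just placeholders for a naive membership relation and the "set of all sets that do not contain themselves"; from the single comprehension assumption one derives False.

```lean
-- Sketch: if some object R satisfies "x ∈ R ↔ x ∉ x" for every x,
-- then asking whether R ∈ R yields a contradiction.
theorem russell {α : Type} (mem : α → α → Prop) (R : α)
    (hR : ∀ x, mem x R ↔ ¬ mem x x) : False :=
  have h : mem R R ↔ ¬ mem R R := hR R
  have hn : ¬ mem R R := fun hm => (h.mp hm) hm
  hn (h.mpr hn)
```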
So it is true that the RP is a historical instance of falsification in mathematics (which is good, it proves it's falsifiable!). But there are some problems with your narrative, just the same.
Mainly, it is not clear that Cantor ever tried to unify set theory and logic, and if he did, he realized his mistake long before Russell. Remember how Cantor got started: He was not coming from abstract logic at all; he started with real analysis, specifically, the points of discontinuity of Fourier sums. His first "sets" were specifically sets of real numbers. If you look at Contributions to the Founding of the Theory of Transfinite Numbers today (BTW you can get it at Amazon for twelve bucks — just go do it; you'll remember it better than anything else you're likely to have spent it on), there is nothing at all that is jarring compared to a modern approach based on the iterative hierarchy.
So summing up, RP relies on a conflation of the extensional and intensional notions of set, and it's not clear Cantor ever did that. (The historiography on the question is mixed.)
As for Russell's "kludgy" fix, that would be the system of Principia Mathematica, which is a historical dead-end. No one uses it today. Probably the biggest barrier to entry is the awful notation; personally, I have never seriously tried to get over that barrier, so I don't know how much of a kludge it actually is.
But it really doesn't matter, because modern set theory does not descend from Russell, but from Ernst Zermelo, who got Cantor right. I do not see Zermelo as having worked around a flaw in Cantor as exposed by Russell — what he did was go back carefully to the iterative idea, arguably already implicit in Cantor, and formalize that.
So summing up a second time, yes, there is an a posteriori component to all this, but the idea that modern set theory is a shaky workaround to an original flawed design is not accurate. --Trovatore (talk) 18:43, 3 November 2013 (UTC)[reply]
Having looked up the dates, I should say that the Contributions translation I'm talking about is from 1915, fourteen years after the Russell paradox and eight after Zermelo's first work on the subject, and I don't know how much Philip Jourdain, who translated it, might have tweaked the translation in response. I should try to work through the original Beiträge (1895,1897) sometime and see. --Trovatore (talk) 19:29, 3 November 2013 (UTC)[reply]
It's Frege, not Cantor, that you want to look to for a unification of set theory and logic (and it was in response to a preprint of Frege's manuscript that Russell first conceived of the paradox). Also, Russell's system was that of type theory (in practice ramified type theory, to get around other related paradoxes), which has been influential in its own way and lives on in New Foundations, which still kicks around on the outskirts. -- Leland McInnes (talk) 02:55, 4 November 2013 (UTC)[reply]
Yes, exactly: That was Frege's mistake. Set theory is not the same as logic. --Trovatore (talk) 03:49, 4 November 2013 (UTC)[reply]
We have a (short) history of the development of axiomatic systems, and have found that we need to tweak and redefine things to sidestep inconsistencies at subtle levels as we discover them. The tweaks may occur at the level of semantics (the exact meaning assigned to formal constructs), at the level of axioms, or in the not-too-failsafe application of the deductive reasoning process. If I were a software engineer looking at this as a software development project, I'd say that the rate of discovery of serious bugs suggests we should project that there are several nasty ones still lurking, waiting for discovery in the coming centuries.
It may help to recognize that today we have a multitude of axiomatic systems, and without being more specific about these, the OP's question is somewhat lacking in context. Are we really that confident that the axiom of choice will not be found to be inconsistent with the other axioms? Are we really sure that we have adequately formally axiomatized set theory? I'd suggest that even the axiomatic approach to mathematics might end up being tweaked or replaced eventually. I think we would do well to consider the lessons learned by physicists in the wake of the confidence of around 1900, before the rug was yanked from under their feet. — Quondum 18:12, 4 November 2013 (UTC)[reply]
Taking your points one by one:
  1. Are we sure that AC will not be found inconsistent with the other axioms? Well, it depends on what "the other axioms" are. But for the other axioms of Zermelo–Fraenkel set theory (ZF), the answer is yes, provided those axioms are not inconsistent by themselves. See constructible universe. You can even throw in some pretty strong large-cardinal axioms, provided their inner model theory has been worked out (though the arguments get much more difficult, and I suppose you could say we could have made a mistake, but then we always could have made a mistake).
  2. Are we sure we have adequately formalized set theory? Not sure what that means, really. Formalization is not the key to confidence. No one really uses formal theories — formal theories are an object of study, not what we actually work in.
  3. The axiomatic approach might end up being tweaked? We don't really use it now, not really. We have the axiom systems as a fixed point of reference, but it's not how we work in practice.
  4. "Consider the lessons learned by physicists." Again, not sure just what this means. They got embarrassed by thinking they had it all worked out, and we could get embarrassed too? But getting embarrassed is not the end of the world. Fortune favors the bold. What is to be gained by stating things half-heartedly?
So the bottom line is, for the axiom systems normally thought of as foundational (Peano arithmetic, ZFC, large cardinals) we have very very considerable confidence that they are consistent. Of course, we could be wrong. But we could always be wrong. --Trovatore (talk) 19:16, 4 November 2013 (UTC)[reply]
(I want to clarify what I said in point 1 about "you can even throw in some pretty strong large-cardinal axioms". What I meant was, for some fairly strong large-cardinal axiom X, we can show that if ZF+X is consistent, then so is ZF+AC+X, by techniques analogous to the ones that show that if ZF is consistent, then so is ZF+AC. I certainly did not mean we can show that if ZF is consistent, then so is ZF+X. We can't do that — that's kind of the whole point.) --Trovatore (talk) 20:59, 4 November 2013 (UTC)[reply]
Basically you're saying: we're considerably more confident than we were before, because this time we've been able to dot all the "i"s and cross all the "t"s to our satisfaction, but we accept that we may still have overlooked something. And that consistency has not been proven, but the confidence is based on, for want of a better word, experience and intuition. And the process by which we work is not formal (yes I know: formal systems are horribly limited) but relies on wetware and a lot of difficult manipulations of immensely complex and subtle concepts that only the brightest can understand in depth, and a lot of trust in fellow mathematicians for building a consistent mathematical system of which each person can check only a part, held together by redundancy for self-checking. Results like Gödel's incompleteness theorems come about at a high level of complexity, giving us new insights into axiomatic systems, truth and provability. Add to this that there are presumably different possible useful consistent choices for axioms, not necessarily equivalent. I'm not trying to belittle the confidence in say ZFC (which I do not claim to have studied), just trying to pinpoint the source of what confidence we have, so that the original question can be contextualised. — Quondum 16:39, 5 November 2013 (UTC)[reply]
The source of confidence is (at least partly) a posteriori. It's of the same kind as our confidence in physical laws. Yes, it could be wrong; so what? Anything we think could be wrong. --Trovatore (talk) 19:08, 5 November 2013 (UTC)[reply]
Agreed. But look at the question: what a posteriori perspectives in this could we envision for 4000 years hence? I can quite see us having made some major changes to our axiomatic systems in that time period. — Quondum 05:59, 6 November 2013 (UTC)[reply]
Well, hopefully we will have discovered more about sets, and will therefore have stronger axiomatic treatments, yes. Maybe, for example, we'll know whether the continuum hypothesis is true or false, which is a question that can't be decided by the current axioms. I don't think any of the existing axioms will be left out, because I believe they are true statements about sets.
I did want to go back and comment on your line about how we think "this time" we have things right. This suggests to me that you have accepted the same narrative as RDBury, that there was a previous, generally accepted, axiomatic treatment of sets, considered foundational, which was then refuted by Russell.
I want to suggest that that narrative is just very very wrong and non-factual. In my view, there's a straight line from the non-axiomatic Cantorian notion of set (at least, his later one), through Zermelo, von Neumann, and Goedel, to today's set theory, and that Cantor's pre-axiomatic conception was never refuted at all, but still holds good (though hopefully we have a clearer view of it than he did). What was refuted was Frege's interpretation of that theory, and the Frege–Russell "logicist" notion of set. So that's a historical detour, worth trying, but proved wrong. Not an indication of "bugs that we've found, and who knows if there are more". --Trovatore (talk) 08:59, 6 November 2013 (UTC)[reply]
Well, I have a very fuzzy and anecdotal concept of the timeline, which is probably horribly distorted. But one should also be careful of retrospective pruning of the path that mathematics took: if modern-day consensus is used to filter any dead-end paths from the tree of mathematical progression, of course it will always seem as though we never took a wrong turn. — Quondum 07:07, 8 November 2013 (UTC)[reply]

A follow-up question: How confident are we that various axiom candidates are consistent and also correct? By "axiom candidates" I mean mostly large cardinal axioms. I leave it to you to interpret "correct" here; I'm sure opinions differ about what "correct" means. Interesting thread. Keep it boiling! YohanN7 (talk) 21:38, 4 November 2013 (UTC)[reply]

Well, you might argue that it's almost the definition of a large-cardinal axiom, that it "ought to be" true if it's consistent. Just to take a really really simple example, a natural model of ZFC+"there is no inaccessible cardinal" is Vκ, where κ is the smallest inaccessible cardinal. Or, L is a model of ZFC+"there is no measurable cardinal", not because any actual measurable cardinal κ is not an element of L, but because the ultrafilters that witness that κ is measurable are too complicated to live inside L.
So in some sense we naturally get models where the large-cardinal axiom does not hold by, in a natural way, leaving stuff out.
That means that those models are deficient in a very clear way — the intended model of ZFC, the von Neumann universe, is not supposed to leave anything out. At each step, you take the full powerset of what came before, not leaving out any subsets, and you iterate through all the ordinals, not just some of them.
So if there "can" be an inaccessible (or measurable) cardinal, then there "should" be. Now, you could argue that there might be ways that there "can't" be such a cardinal that are not captured by first-order inconsistency, so I don't want to present this argument as more definitive than it is, but hopefully you sort of see the direction it's going. --Trovatore (talk) 01:52, 5 November 2013 (UTC)[reply]
Let me know if I understand you correctly. My interpretation (and a couple of questions again) of what you write follows:
Suppose large cardinal axiom K is either true or false. Then we have (by definition) models where K is true and false respectively. (These aren't very interesting in this context, and you don't mention them).
But suppose large cardinal axiom K is true. Then we (in addition to the model obtained above) have a model in which K is not true, namely Vκ (or else it wouldn't be a large cardinal axiom). By iterating this model, using the full power set axiom, we get a model in which K is true.
Is this correct so far?
If so, it still seems to require a "leap of faith" to conclude that a model of K will appear in V, if we don't assume it a priori (as I did in the second step above). I suspect that the leap of faith in question is encapsulated in the quotation marks you use for the word "should".
Another speculation: Suppose that a nontrivial proposition is consistent, but not provable. Now, our system of axioms may not be complete, but this doesn't mean that it is totally incompetent when it comes to ruling out nonsense. Thus if a nontrivial statement passes the no-nonsense test, isn't there a pretty big likelihood for it to be true? There is a leap of faith here too. YohanN7 (talk) 21:14, 5 November 2013 (UTC)[reply]
Hmm..., that speculation of mine in the last paragraph doesn't stand up to inspection, at least not unless "nontrivial statement" is qualified a bit.

An exponential question

Dear Wikipedians:

For the following exponential function, given by the points

(x, y) = (-6, 6.25), (-5, 5.5), (-4, 4), (-3, 1), (-2, -5)

I was able to deduce that the original equation is y = 7 - 48*2^x. However, I arrived at this answer via trial and error. I am wondering if there is a more systematic way of deducing this answer.

Edit: The +7 part I was able to derive from the fact that the function's asymptote is y=7. The coefficient and exponent parts I had to do via trial and error.

Thanks,

L33th4x0r (talk) 21:37, 2 November 2013 (UTC)[reply]

There is never just one solution to such problems. There are infinitely many algebraic expressions that fit (interpolate) n points on a graph. 51kwad (talk) 15:26, 7 November 2013 (UTC)[reply]
You could use nonlinear regression; if a general equation of that form is fitted, those five points will determine the coefficients. Bubba73 You talkin' to me? 23:03, 2 November 2013 (UTC)[reply]
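A minimal sketch of that suggestion, assuming the model form y = a + b*c^x and using SciPy's curve_fit on the five points from the question:

```python
import numpy as np
from scipy.optimize import curve_fit

# the five points from the question
x = np.array([-6, -5, -4, -3, -2], dtype=float)
y = np.array([6.25, 5.5, 4.0, 1.0, -5.0])

def model(x, a, b, c):
    # general exponential with a horizontal asymptote at y = a
    return a + b * c ** x

popt, _ = curve_fit(model, x, y, p0=[7.0, -1.0, 2.0])
print(popt)  # should come out very close to a = 7, b = -48, c = 2
```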
Step-by-step simplification. You subtracted the y values from the guessed asymptote to get (x,7-y)=(-6,3/4), (-5,3/2), (-4,3), (-3,6), (-2,12). Note that the 7-y values double each time x is increased by one. So (x,(7-y)*2^(-x))=(-6,48), (-5,48), (-4,48), (-3,48), (-2,48). Note that the dependent variable has the constant value (7-y)*2^(-x)=48. Solve the equation to get the final result y=7-48*2^x. This is slightly more systematic than mere trial and error. Bo Jacoby (talk) 14:04, 3 November 2013 (UTC).[reply]
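The same steps are easy to check numerically; a small sketch, using the points above:

```python
xs = [-6, -5, -4, -3, -2]
ys = [6.25, 5.5, 4.0, 1.0, -5.0]

diffs = [7 - y for y in ys]                         # 0.75, 1.5, 3, 6, 12
ratios = [b / a for a, b in zip(diffs, diffs[1:])]  # each is 2.0: doubling per unit step
consts = [d / 2 ** x for x, d in zip(xs, diffs)]    # each is 48.0: (7-y)*2^(-x) is constant
print(ratios, consts)                               # hence y = 7 - 48*2^x
```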
Right. Your correct equation y = 7 - 48*2^x can be written more simply as 7 - y = 48*2^x. In parametric form you would postulate y = a + b*c^x. If you have the hypothesis that a=7, then you have y - 7 = b*c^x, so 7 - y = (-b)*c^x, hence ln(7 - y) = ln(-b) + x*ln(c), where ln is the natural logarithm. So now you can run the linear regression Y=B+Cx where Y = ln(7 - y), and see if you get a perfect R2 of 1.0. Then since B = ln(-b) and C = ln(c), you can compute b = -e^B and c = e^C. [corrected version] Duoduoduo (talk) 15:42, 3 November 2013 (UTC)[reply]
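And a short sketch of this log-linear route, assuming NumPy's polyfit for the regression Y = B + Cx:

```python
import numpy as np

x = np.array([-6, -5, -4, -3, -2], dtype=float)
y = np.array([6.25, 5.5, 4.0, 1.0, -5.0])

Y = np.log(7 - y)             # Y = B + C*x with B = ln(-b), C = ln(c)
C, B = np.polyfit(x, Y, 1)    # polyfit returns [slope, intercept]
b, c = -np.exp(B), np.exp(C)  # recover the original parameters
print(b, c)                   # -48.0 and 2.0 (up to floating-point error)
```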

Thank you all for your help! L33th4x0r (talk) 00:06, 4 November 2013 (UTC)[reply]

  Resolved