There are two reasonable ways to measure the length of a proof: by steps or by symbols. For step-length, we can brute-force search for proofs and be certain we've found the shortest one as long as our axiom system is finite; for symbol-length, we only need that the language is finite and our axiom system is computably axiomatized (e.g. PA and ZFC).
That said, in either case we have a serious issue as far as complexity is concerned: as long as our axiom system is "sufficiently rich," for every computable function F there is some theorem φ which has no proof of length ≤F(length(φ)). (Basically, we're looking at the Busy Beaver function in disguise when we talk about bounding proof searches ahead of time.) — StackExchange
You get points if you can make clear just what the Busy Beaver function is. — Shawn
"Proof" needs a rather careful definition. One such, purely mechanistic, is a sequence of sequences of symbols such that they are related under modus ponens and the theorem in question is the last. Brute force would then be, starting with sequences of length one, and lexicographically, testing each one in order. The first encountered would then be the shortest. "Testing" itself should not be a difficult step.

For first order theories, is it correct that there is a brute force algorithm that tells us the shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)? — Shawn
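That mechanistic picture can be sketched directly. Below is a toy Hilbert-style system invented purely for illustration (the axioms, the formula pool, and the `>` implication notation are all assumptions, not any standard system): candidate proofs are tried in order of total symbol count, each line is checked to be an axiom or to follow from two earlier lines by modus ponens, and the first valid candidate found is therefore a shortest proof by symbol count.

```python
from itertools import product

# Toy Hilbert-style system: formulas are strings, ">" is implication.
AXIOMS = {"p", "p>q", "q>r"}          # hypothetical axioms for illustration

def mp(a, b):
    """If b has the form a>X, return X (modus ponens), else None."""
    if b.startswith(a + ">"):
        return b[len(a) + 1:]
    return None

def valid_proof(lines, target):
    """Each line must be an axiom or follow by modus ponens from two
    earlier lines; the last line must be the target theorem."""
    for i, f in enumerate(lines):
        if f in AXIOMS:
            continue
        if not any(mp(a, b) == f
                   for a in lines[:i] for b in lines[:i]):
            return False
    return bool(lines) and lines[-1] == target

def shortest_proof(target, pool, max_lines=6):
    """Brute force: try candidate proofs in order of total symbol length.
    The first valid candidate is a shortest proof (by symbol count)."""
    candidates = []
    for n in range(1, max_lines + 1):
        for lines in product(pool, repeat=n):
            candidates.append(list(lines))
    candidates.sort(key=lambda ls: sum(len(f) for f in ls))
    for lines in candidates:
        if valid_proof(lines, target):
            return lines
    return None

print(shortest_proof("r", pool=["p", "q", "r", "p>q", "q>r"]))
# → ['p', 'p>q', 'q', 'q>r', 'r']  (9 symbols, the minimum)
```

Note the caveats already flagged in the thread: this only terminates with a definite answer because the target really is a theorem and the search is capped; without a bound on candidate length, the search for a non-theorem never halts.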
shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)? — Shawn
But I don't know enough about the busy beaver problem and its comparison with incompleteness to say anything about it. — TonesInDeepFreeze
I see a paradox. If I'm anywhere near the ballpark, shouldn't finding a shorter proof take longer than finding a longer proof? — TheMadFool
One can also phrase what we’re asking in terms of the infamous Busy Beaver function. Recall that BB(n), or the nth Busy Beaver number, is defined to be the maximum number of steps that any n-state Turing machine takes when run on an initially blank tape, assuming that the machine eventually halts. The Busy Beaver function was the centerpiece of my 1998 essay Who Can Name the Bigger Number?, which might still attract more readers than anything else I’ve written since. As I stressed there, if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory. For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.
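For very small n, the search Aaronson describes can actually be run. The sketch below is a minimal 2-symbol Turing-machine simulator with an invented encoding (dict-based tape, states numbered from 0, a sentinel halt state): it enumerates every n-state machine and takes the maximum step count among those that halt within a cap. The cap is exactly why this only certifies a *lower bound* on BB(n) in general — a machine still running at the cap might halt later, and deciding that is the halting problem — though for n = 2 the known value BB(2) = 6 is already reached.

```python
from itertools import product

HALT = -1  # sentinel halt state

def run(machine, limit):
    """Simulate a 2-symbol Turing machine on an initially blank tape.
    Returns the number of steps if it halts within `limit`, else None."""
    tape, head, state, steps = {}, 0, 0, 0
    while state != HALT:
        if steps >= limit:
            return None
        write, move, nxt = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        state = nxt
        steps += 1
    return steps

def bb_lower_bound(n_states, limit=50):
    """Max steps over all halting n-state machines found within `limit`.
    This is only a lower bound on BB(n): machines that outlast the cap
    are simply skipped."""
    keys = [(s, b) for s in range(n_states) for b in (0, 1)]
    actions = [(w, m, nxt) for w in (0, 1) for m in (-1, 1)
               for nxt in list(range(n_states)) + [HALT]]
    best = 0
    for choice in product(actions, repeat=len(keys)):
        steps = run(dict(zip(keys, choice)), limit)
        if steps is not None:
            best = max(best, steps)
    return best

print(bb_lower_bound(2))   # 6 -- and indeed BB(2) = 6
```

For n = 5 this brute force already involves tens of billions of machines and required decades of work to pin down; the uncomputability kicks in because no computable cap suffices for all n.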
But the BB function has a second amazing property: namely, it’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle. To see why, consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory. Clearly such a machine could be built, with some finite number of states k. But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent! For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.
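The machine M in that argument has a very simple shape: enumerate derivations and halt the moment a contradiction appears. Here is a toy stand-in (the axioms, the `~` negation prefix, and the single modus-ponens rule over `>` are all invented for illustration — a real M would enumerate ZF proofs instead, but its structure would be the same):

```python
from itertools import count

def mp_closure_step(derived):
    """One round of modus ponens: from A and A>B, add B."""
    new = set(derived)
    for a in derived:
        for b in derived:
            if b.startswith(a + ">"):
                new.add(b[len(a) + 1:])
    return new

def M(axioms, max_rounds=None):
    """Halts iff the axioms are inconsistent (some F and ~F are both
    derivable), returning the round at which the contradiction appears.
    With max_rounds=None it genuinely runs forever on a consistent
    theory -- which is the whole point of the argument above."""
    derived = set(axioms)
    for n in count(1):
        derived = mp_closure_step(derived)
        if any("~" + f in derived for f in derived):
            return n  # halted: contradiction found
        if max_rounds is not None and n >= max_rounds:
            return None  # gave up -- this tells us nothing

print(M({"p", "p>q", "q>~p"}))           # 2: derives q, then ~p, contradicting p
print(M({"p", "p>q"}, max_rounds=10))    # None: no contradiction found
```

The argument then needs only the state count k of M: a proof that BB(k) equals some specific value would let ZF prove M runs forever, i.e. prove its own consistency, which Gödel's second incompleteness theorem forbids (assuming ZF is consistent).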
What I find problematic with defining proof length in terms of numbers of symbols in one is that it seems to miss the point. Proofs are, if you really look at it, logical entities and symbols are not, at least not in the numerical sense. — TheMadFool
In my humble opinion, proof "lengths" must be measured, if possible, in terms of how many logical steps are taken from the start (premises) to the end (conclusion). To illustrate,
1. p v q [premise]
2. ~p [premise]
Ergo,
3. q [conclusion] — TheMadFool
The proof "length" in the above argument, a disjunctive syllogism, is 3 since 3 logical steps were taken. If we use symbol count then the same proof has a proof length of 6 symbols. I'm not sure but it might be that there's a correlation between symbol sets for a particular logical system and the number of logical steps necessary for a proof in that system. — TheMadFool
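The two measures being contrasted are easy to make concrete. Computed for the disjunctive syllogism above (whitespace and the line labels excluded, matching the count of 6 symbols in the post):

```python
# The disjunctive syllogism, one formula per line, labels stripped.
proof = ["pvq", "~p", "q"]

step_length = len(proof)                         # number of lines: 3
symbol_length = sum(len(line) for line in proof)  # total symbols: 6

print(step_length, symbol_length)   # 3 6
```

The two measures can disagree about which of two proofs is "shorter": a proof with few lines of very long formulas beats a many-line proof on step count but loses on symbol count.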
Also, natural deduction seems to employ a classical method which consists of 3 propositions, 2 premises and 1 conclusion. What this means is, if we take into account logical steps instead of symbol count, proof lengths should be multiples of 3. This immediately gives us an easy technique for finding out if a given proof is the shortest available or not. If a proof has logical steps that aren't multiples of 3, something's wrong. [Warning! I might've overlooked other factors that might affect the number of logical steps in a proof]. — TheMadFool
Symbol length is just one way to measure the length of a proof. It's the one the OP is interested in but it's not the only one. — fishfry
There's no preferred way, there are just different ways to measure things. — fishfry
What I find problematic with defining proof length in terms of numbers of symbols in one is that it seems to miss the point — TheMadFool
proof "lengths" must be measured, if possible, in terms of how many logical steps are taken from the start (premises) to the end (conclusion). — TheMadFool
natural deduction seems to employ a classical method which consists of 3 propositions, 2 premises and 1 conclusion — TheMadFool
:sweat: So, I wasn't talking nonsense. — TheMadFool
That's news to me. The way it seems to me, there's no point in talking about a book - the book being a message of some kind - in terms of how many words are in it. — TheMadFool
Similarly, proofs - logical entities - shouldn't be viewed as symbols. — TheMadFool
logically, these two arguments are identical, their premises are identical, their conclusions are too. — TheMadFool
Go back to read the posts. — TonesInDeepFreeze
If we go by symbol count, argument B is much, much shorter than argument A, i.e., symbol count judges arguments A and B as different. However, logically, these two arguments are identical; their premises are identical, their conclusions are too. — TheMadFool
The first step in counting symbols is to fix the alphabet and inference rules. What you've done is use two different symbolic systems. — fishfry
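A quick illustration of that point: the same disjunctive syllogism written with English connectives and with standard logical symbols gets different symbol counts, so the counts only become comparable once one alphabet is fixed. (The two encodings below are just for illustration.)

```python
# One proof, two symbol systems.
verbose = ["p or q", "not p", "q"]   # English connectives
compact = ["p∨q", "¬p", "q"]         # standard logical symbols

def symbols(proof):
    """Total symbol count, ignoring spaces."""
    return sum(len(line.replace(" ", "")) for line in proof)

print(symbols(verbose), symbols(compact))   # 9 6
```

Once the alphabet and inference rules are fixed, though, symbol count is a perfectly well-defined measure within that system, which is all the original question needs.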
it becomes more a linguistic issue than a logical one. — TheMadFool
We would also have to look into the smallest unit of thought which I reckon is a single concept, each assigned an exclusive symbol of its own. — TheMadFool
For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.
consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory. Clearly such a machine could be built, with some finite number of states k. But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent! For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.
Pretty wild stuff. — fishfry
What I'm asking is whether there's a way to distinguish the partial computable functions from the total computable functions so that the Busy Beaver issue doesn't arise? — Shawn