• Shawn
    13.2k
    I have posted elsewhere the following thought:

    For first order theories, is it correct that there is a brute force algorithm that tells us the shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)?

    With the answer being:

    There are two reasonable ways to measure the length of a proof: by steps or by symbols. For step-length, we can brute-force-search for proofs and be certain we've found the shortest one as long as our axiom system is finite - for symbol-length, we only need that the language is finite and our axiom system is computably axiomatized (e.g. PA and ZFC).

    That said, in either case we have a serious issue as far as complexity is concerned: as long as our axiom system is "sufficiently rich," for every computable function F there is some theorem φ which has no proof of length ≤F(length(φ)). (Basically, we're looking at the Busy Beaver function in disguise when we talk about bounding proof searches ahead of time.)
    StackExchange

    With the above in mind, let us assume that no brute force algorithm exists that can circumvent the Busy Beaver function. Does this necessarily mean that the alphabetical complexity* of algorithms themselves is also unascertainable?
  • tim wood
    9.3k
    the Busy Beaver function.Shawn
    You get points if you can make clear just what the busy beaver function is.

    For first order theories, is it correct that there is a brute force algorithm that tells us the shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)?Shawn
    "Proof" needs a rather careful definition. One such, purely mechanistic, is a sequence of sequences of symbols such that they are related under modus ponens and the theorem in question is the last. Brute force would then be, starting with sequences of length one, and lexicographically, testing each one in order. The first encountered then being the shortest. "Testing" itself should not be a difficult step.
  • TheMadFool
    13.8k
    shortest proof length for any given theorem ('length' means the sum of the lengths of the formulas that appear as lines in the proof)?Shawn

    I see a paradox. If I'm anywhere near the ballpark, finding a shorter proof (should) take(s) longer than finding a longer proof? A dilemma in the making? Should we save space (short proofs) or should we save time (long proofs)? :chin:
  • TonesInDeepFreeze
    3.7k


    We're addressing one particular metric: the sum of the lengths of the formulas that appear in the proof. This is not meant in itself to imply anything about the time it takes human beings to devise or compose proofs. This is not paradoxical.
  • Shawn
    13.2k
    @TonesInDeepFreeze

    I got an answer to this question regarding partial computability and total computability.

    See:

    https://math.stackexchange.com/questions/4164255

    I think the issue here is a confusion re: partial vs. total computable functions. (Below I fix some "appropriate ambient axiom system," say ZFC - so e.g. "theorem" means "ZFC-theorem" and so on.)

    Via brute-force-search, the partial function p defined as follows is computable:
    p(#φ) = the length of the shortest ZFC-proof of φ, if ZFC ⊢ φ,
    p(#φ) = ↑ otherwise,

    where "↑" denotes "is undefined" and #φ is the Godel number of φ. This is utterly unproblematic. Note, however, that p is not always defined. This makes sense: if I try to brute-force-search for a proof of a non-theorem φ, I'll just spin my wheels forever and not get anywhere. Granted, for some specific non-theorems there may be an easy argument that they are not provable from ZFC, but (i) I can't expect that in general (this is a consequence of Godel's theorem) and (ii) regardless of point (i), such cleverness isn't something the brute-force-searching-function p does.

    So all that is pretty simple. When we try to relate p to total computable functions, however, things get a bit weird. While computable, p grows extremely fast: for every total computable function t there is some n such that p(n)↓>t(n). (Here "↓>" means "is defined and greater than.") This may feel absurd at first, but in fact is a key point illustrating just how much more "free" partial computable functions are than total computable functions.
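
    To see why no total computable t can bound p, here is a sketch of one standard route, via the undecidability of first-order provability (slightly different from the busy-beaver route mentioned below); all helper names are hypothetical. If such a t existed, the bounded search below would always halt and would decide ZFC-theoremhood, which is impossible.

    ```python
    def godel_number(phi):
        """Hypothetical Godel numbering of formulas."""
        raise NotImplementedError

    def proofs_up_to(bound):
        """Hypothetical: the finitely many proofs of symbol length <= bound."""
        raise NotImplementedError

    def impossible_decider(phi, t):
        """If a total computable t satisfied p(n) <= t(n) wherever p(n) is
        defined, this finite search would DECIDE ZFC-theoremhood, which is
        impossible (Church/Turing). Hence no such t exists: p outgrows
        every total computable function somewhere."""
        bound = t(godel_number(phi))
        for proof in proofs_up_to(bound):  # finite search -- always halts
            if proof.conclusion == phi:
                return True
        return False  # no proof within the bound => no proof at all
    ```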

    ---
    The busy beaver function, or any of its many variants (e.g. we could forget it entirely and just think about Godel's incompleteness theorem), comes into the picture when we prove the claim in the previous paragraph. But it's utterly irrelevant to the existence of the original partial computable function p.
    ---
  • Shawn
    13.2k
    I posted something to the effect of:

    When you mention point (i), that you can't expect this in general as a consequence of Godel's theorem, what do you mean? Furthermore, when you say that "free" partial computable functions exist that are not subject to the Busy Beaver problem, what does that render them as? Finally, are you artificially drawing this distinction between partial computable and total computable functions for the complexity class of NP-complete problems, or are they in separate classes?

    To the above.
  • TonesInDeepFreeze
    3.7k


    Modulo any typos or formatting glitches from one forum to another, of course, I agree that the function is partial.

    And I take the poster's word about the growth rate.

    But I don't know enough about the busy beaver problem and its comparison with incompleteness to say anything about it.
  • Shawn
    13.2k
    But I don't know enough about the busy beaver problem and its comparison with incompleteness to say anything about it.TonesInDeepFreeze

    See:

    https://math.stackexchange.com/questions/4164255/alphabetical-complexity-of-algorithms/4164291?noredirect=1#comment8623761_4164291
  • fishfry
    3.4k
    I see a paradox. If I'm anywhere near the ballpark, finding a shorter proof (should) take(s) longer than finding a longer proof?TheMadFool

    That reminds me of the famous quote attributed to Blaise Pascal, inventor of the Pascal programming language (*): "I would have written a shorter letter, but I did not have the time."

    https://www.npr.org/sections/13.7/2014/02/03/270680304/this-could-have-been-shorter

    (*) Just kidding.


    You definitely got your money's worth from Noah's answer. He also commented, "This is something you've done in your other posts: you're throwing around computability/complexity-theoretic terminology in too vague a way to really be addressed. You should see if you can precisely pose one of the various questions you're asking; I think trying to do this will help clarify the issues."

    You (and other thread participants) might enjoy this amazing paper by Scott Aaronson and one of his students, The 8000th Busy Beaver number eludes ZF set theory

    One can also phrase what we’re asking in terms of the infamous Busy Beaver function. Recall that BB(n), or the nth Busy Beaver number, is defined to be the maximum number of steps that any n-state Turing machine takes when run on an initially blank tape, assuming that the machine eventually halts. The Busy Beaver function was the centerpiece of my 1998 essay Who Can Name the Bigger Number?, which might still attract more readers than anything else I’ve written since. As I stressed there, if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory. For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.

    But the BB function has a second amazing property: namely, it’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle. To see why, consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory. Clearly such a machine could be built, with some finite number of states k. But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent! For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.

    Pretty wild stuff.
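
    Not from the paper, but to make BB(n) tangible: here's a small Python enumerator over all 2-state, 2-symbol machines with a step cap. In general a cap only yields a lower bound (you can't know which slow machines eventually halt), but for n = 2 it recovers the known value, 6 steps.

    ```python
    from itertools import product

    def run(machine, cap=1000):
        """Run a TM on a blank two-way tape; return steps taken if it
        halts within `cap` steps, else None."""
        tape, pos, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < cap:
            write, move, state = machine[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += 1 if move == 'R' else -1
            steps += 1
        return steps if state == 'H' else None

    def bb_lower_bound(cap=1000):
        """Max halting step count over all 2-state, 2-symbol machines.
        With a step cap this is only a lower bound on BB in general;
        for n=2 it matches the known value."""
        keys = [(s, b) for s in 'AB' for b in (0, 1)]
        actions = list(product((0, 1), 'LR', 'ABH'))  # write, move, next
        best = 0
        for table in product(actions, repeat=4):      # 12^4 = 20736 machines
            steps = run(dict(zip(keys, table)), cap)
            if steps is not None:
                best = max(best, steps)
        return best

    print(bb_lower_bound())  # prints: 6
    ```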
  • TheMadFool
    13.8k
    Thanks for the encouraging words, and I like the joke about how Blaise Pascal would've written a shorter letter had he had the time :smile:

    What I find problematic with defining proof length in terms of the number of symbols in it is that it seems to miss the point. Proofs are, if you really look at it, logical entities, and symbols are not, at least not in the numerical sense.

    In my humble opinion, proof "lengths" must be measured, if possible, in terms of how many logical steps are taken from the start (premises) to the end (conclusion). To illustrate,

    1. p v q [premise]
    2. ~p [premise]
    Ergo,
    3. q [conclusion]

    The proof "length" in the above argument, a disjunctive syllogism, is 3 since 3 logical steps were taken. If we use symbol count then the same proof has a proof length of 6 symbols. I'm not sure but it might be that there's a correlation between symbol sets for a particular logical system and the number of logical steps necessary for a proof in that system.

    Also, natural deduction seems to employ a classical method which consists of 3 propositions, 2 premises and 1 conclusion. What this means is, if we take into account logical steps instead of symbol count, proof lengths should be multiples of 3. This immediately gives us an easy technique for finding out if a given proof is the shortest available or not. If a proof has logical steps that aren't multiples of 3, something's wrong. [Warning! I might've overlooked other factors that might affect the number of logical steps in a proof].
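
    To make the two competing measures concrete, both are one-liners over the same proof object; a toy Python comparison on the disjunctive syllogism above:

    ```python
    # The disjunctive syllogism above, one string per line of the proof.
    proof = ["pvq", "~p", "q"]

    step_length = len(proof)                          # 3 logical steps
    symbol_length = sum(len(line) for line in proof)  # 3 + 2 + 1 = 6 symbols

    print(step_length, symbol_length)  # prints: 3 6
    ```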
  • fishfry
    3.4k
    What I find problematic with defining proof length in terms of the number of symbols in it is that it seems to miss the point. Proofs are, if you really look at it, logical entities, and symbols are not, at least not in the numerical sense.TheMadFool

    Symbol length is just one way to measure the length of a proof. It's the one the OP is interested in but it's not the only one.

    On the other hand, from a formal perspective, a proof is just a syntactic entity. It's just a long string of symbols. So it does make sense to count the total number of symbols and call that the length of a proof. It's not the only way, but it's a perfectly sensible way.


    In my humble opinion, proof "lengths" must be measured, if possible, in terms of how many logical steps are taken from the start (premises) to the end (conclusion). To illustrate,

    1. p v q [premise]
    2. ~p [premise]
    Ergo,
    3. q [conclusion]
    TheMadFool

    Of course that is "a" way to measure proof length. But when you say proof lengths MUST be measured that way, it's a bit dogmatic. Like saying distances MUST be measured in meters, whereas I, a confirmed Yank, prefer feet and yards. There's no preferred way, there are just different ways to measure things.

    The proof "length" in the above argument, a disjunctive syllogism, is 3 since 3 logical steps were taken. If we use symbol count then the same proof has a proof length of 6 symbols. I'm not sure but it might be that there's a correlation between symbol sets for a particular logical system and the number of logical steps necessary for a proof in that system.TheMadFool

    I don't think anyone's disagreeing with this, it's just that the OP is interested in total symbol length.

    Also, natural deduction seems to employ a classical method which consists of 3 propositions, 2 premises and 1 conclusion. What this means is, if we take into account logical steps instead of symbol count, proof lengths should be multiples of 3. This immediately gives us an easy technique for finding out if a given proof is the shortest available or not. If a proof has logical steps that aren't multiples of 3, something's wrong. [Warning! I might've overlooked other factors that might affect the number of logical steps in a proof].TheMadFool

    Again, if I'm understanding this thread, the OP is interested in total symbol length as the measure of the length of a proof. But there could be other measures.
  • TheMadFool
    13.8k
    Symbol length is just one way to measure the length of a proof. It's the one the OP is interested in but it's not the only one.fishfry

    :sweat: So, I wasn't talking nonsense.

    There's no preferred way, there are just different ways to measure things.fishfry

    That's news to me. The way it seems to me, there's no point in talking about a book - the book being a message of some kind - in terms of how many words are in it. Similarly, proofs - logical entities - shouldn't be viewed as symbols.
  • TonesInDeepFreeze
    3.7k
    What I find problematic with defining proof length in terms of the number of symbols in it is that it seems to miss the pointTheMadFool

    No, because you don't understand the very particular point of this particular enquiry. This is not about saying how we should regard the notion of proof in some philosophical sense. Rather, it is a mathematical question about a "brute force" algorithm for generating formal proofs, which are sequences of sequences of symbols, and thereby enumerating the shortest proofs.

    Whatever your views about what proofs should be, the context of mathematical logic (thus the context of the questions posed at the other forum) is that proofs are sequences of sequences of symbols (or other finitary characterizations such as certain kinds of trees, tableaux, et al.).

    proof "lengths" must be measured, if possible, in terms of how many logical steps are taken from the start (premises) to the end (conclusion).TheMadFool

    That is one way to do it. We seem to get different answers to the "brute force" question depending on whether we ask about "the sum of the lengths of the formulas" or "the number of lines".

    We are interested in how "hard" (the scare quotes are needed) it is for a Turing machine to enumerate all the proofs. Toward that consideration, note that merely counting the number of lines might not be as instructive:

    Consider a proof with only three lines, where the formula on each of those lines is over a billion symbols long. That is a short proof in terms of lines, but not a short proof in terms of the lengths of the expressions.

    natural deduction seems to employ a classical method which consists of 3 propositions, 2 premises and 1 conclusionTheMadFool

    Where did you read such a thing?
  • fishfry
    3.4k
    :sweat: So, I wasn't talking nonsense.TheMadFool

    I can't be certain of that. Symbol count is a sensible way to measure the length of a proof. I wasn't entirely sure what you were getting at, so I said it was another way. I suppose it is. I didn't follow the divisible by 3 part.

    That's news to me. The way it seems to me, there's no point in talking about a book - the book being a message of some kind - in terms of how many words are in it.TheMadFool

    Some writers get paid by the word. Didn't your English teacher ever tell you to write a 200 word essay or whatever? Before the Internet, newspaper writers had to adhere to strict word counts to fill up but not exceed the available space. Word count is one of the most common metrics in writing.

    Similarly, proofs - logical entities - shouldn't be viewed as symbols.TheMadFool

    But that's exactly what they are in formal terms. Formally you have an alphabet, and strings of symbols written using the alphabet. If you have a collection of strings that can derive another string, using carefully specified derivation rules, that's a proof. The total length of the characters in the strings is the length of the proof. Those are standard definitions. In fact in doing proofs about proofs, one often uses induction on the symbol length of a proof.
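
    To illustrate, a proof checker plus the symbol-length measure is only a few lines of Python. This is a toy sketch under my own assumptions: premises are treated as axioms, modus ponens is the only rule, and implication is encoded as the literal string '(A>B)'.

    ```python
    def check_proof(lines, axioms):
        """Verify a Hilbert-style proof: every line must be an axiom or
        follow from two earlier lines by modus ponens."""
        def mp(a, imp, b):
            # imp must literally be '(' + a + '>' + b + ')'
            return imp == '(' + a + '>' + b + ')'

        for i, line in enumerate(lines):
            ok = line in axioms or any(
                mp(lines[j], lines[k], line)
                for j in range(i) for k in range(i)
            )
            if not ok:
                return False
        return True

    def proof_length(lines):
        """Symbol length of a proof: total symbols across all lines --
        the measure one does induction on."""
        return sum(len(line) for line in lines)

    # Toy example: from p and (p>q), derive q.
    axioms = {'p', '(p>q)'}
    proof = ['p', '(p>q)', 'q']
    print(check_proof(proof, axioms), proof_length(proof))  # True 7
    ```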
  • TonesInDeepFreeze
    3.7k
    The way it seems to me, there's no point in talking about a book - the book being a message of some kind - in terms of how many words are in it. Similarly, proofs - logical entities - shouldn't be viewed as symbols.TheMadFool

    We are not saying that the import of a proof is the number of symbols in it.
  • TheMadFool
    13.8k
    Ok. I present below two versions of the same argument, one in plain English and the other in logical notation.

    In English (Argument A)
    1. If x is greater than two then x is greater than one
    2. x is greater than two
    Ergo,
    3. x is greater than one [1, 2 modus ponens]

    In logical notation (argument B)
    1. (x > 2) -> (x > 1)
    2. x > 2
    Ergo,
    3. x > 1 [1, 2 modus ponens]

    If we go by symbol count, argument B is much, much shorter than argument A, i.e. symbol count judges arguments A and B as different. However, logically, these two arguments are identical, their premises are identical, their conclusions are too.

    :chin:
  • TonesInDeepFreeze
    3.7k
    logically, these two arguments are identical, their premises are identical, their conclusions are too.TheMadFool

    A machine can check the second proof, but a machine cannot check the first proof (unless the machine had those English phrases programmed as fixed syntactical pieces, in which case the first argument would also be formal).
  • TheMadFool
    13.8k
    Go back to read the posts.TonesInDeepFreeze

    I hear you, I hear you, old chap (I really hope "old chap" is appropriate) :lol:
  • TonesInDeepFreeze
    3.7k


    As far as I can tell, you don't understand the nature and motivation for formal languages. You would benefit from an introductory chapter in a book that explains it, and an introductory textbook on symbolic logic.
  • fishfry
    3.4k
    If we go by symbol count, argument B is much, much shorter than argument A, i.e. symbol count judges arguments A and B as different. However, logically, these two arguments are identical, their premises are identical, their conclusions are too.TheMadFool

    The first step in counting symbols is to fix the alphabet and inference rules. What you've done is use two different symbolic systems.
  • TheMadFool
    13.8k
    The first step in counting symbols is to fix the alphabet and inference rules. What you've done is use two different symbolic systems.fishfry

    Indeed, you're correct. We must reduce the symbol set's size to a minimum. Ambiguity must be eliminated at all costs, I suppose, and that's gonna affect the symbol count, increasing it. We would also have to look into the smallest unit of thought which I reckon is a single concept, each assigned an exclusive symbol of its own. What else? I'm out of my element, I'm afraid.

    What strikes me as odd is that if we go by the number of symbols in judging proof length, then it becomes more a linguistic issue than a logical one.
  • fishfry
    3.4k
    it becomes more a linguistic issue than a logical one.TheMadFool

    It's a syntactic issue. Proofs are syntax. Symbol manipulation. A computer can verify a proof. There's no meaning in syntax. A proof just means you have a collection of symbol strings that can be manipulated according to predefined rules to end up with some other symbol string. Formally, that's what a proof is.
  • TonesInDeepFreeze
    3.7k
    it becomes more a linguistic issue than a logical one.TheMadFool

    It's a mathematical one.
  • TonesInDeepFreeze
    3.7k
    We would also have to look into the smallest unit of thought which I reckon is a single concept, each assigned an exclusive symbol of its own.TheMadFool

    Roughly speaking, those are the primitives.
  • Shawn
    13.2k
    For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.

    consider again a Turing machine M that halts if and only if there’s a contradiction in ZF set theory. Clearly such a machine could be built, with some finite number of states k. But then ZF set theory can’t possibly determine the value of BB(k) (or BB(k+1), BB(k+2), etc.), unless ZF is inconsistent! For to do so, ZF would need to prove that M ran forever, and therefore prove its own consistency, and therefore be inconsistent by Gödel’s Theorem.

    Pretty wild stuff.fishfry

    Is this due to the bounded values between partial computable functions and total computable functions is itself indeterminate to determine thus complexity, and therefore, following from this complexity for the precise boundary is unascertainable?

    Also, asking @TonesInDeepFreeze?
  • fishfry
    3.4k
    Is this due to the bounded values between partial computable functions and total computable functions is itself indeterminate to determine thus complexity, and therefore, following from this complexity for the precise boundary is unascertainable?Shawn

    You broke my parser.
  • Shawn
    13.2k
    You broke my parser.fishfry

    What I'm asking is whether there's a way to distinguish partial computable functions from total computable functions such that the Busy Beaver issue doesn't arise?
  • fishfry
    3.4k
    What I'm asking is whether there's a way to distinguish partial computable functions from total computable functions such that the Busy Beaver issue doesn't arise?Shawn

    What specifically is the Busy Beaver issue? I'm not following the details. I'm sure I don't know the answer anyway. Noah's response on SE seemed comprehensive.

    Can you say in your own words what the Busy Beaver function is, and what's interesting about it?