• 9
I have two questions about logic/reasoning.

1) Is it ever reasonable to concede the truth of each of the premises of a deductive argument and yet deny the conclusion? Let me explain. Suppose we have an argument in modus ponens form:

1. If x, then y.
2. x
3. Therefore, y.

Now, suppose that while it is more plausible that premises 1 and 2 are true than their contraries, it is not that much more plausible. That is, suppose, for the sake of my question, that premises 1 and 2 each have a 65% chance of being true, and therefore their contraries only have a 35% chance. Consequently, it would seem reasonable to accept both premises as true--that is, assuming you don't believe that nothing should be admitted unless it is absolutely certain and indubitable--and since the argument is intended to be deductive, it would seem that the conclusion would also have to be accepted since it follows from the premises. However, is this necessarily the case? The reason I ask is that for the conclusion to be true, both premises have to be true. However, while each premise individually considered is more likely to be true than its contrary, the chance (mathematically speaking, in this example) that both are true at the same time is 0.65 * 0.65, or 0.4225 (42.25%). This, obviously, is lower than 50%. Since this is the case, can it be rational to think the premises of a deductive argument are true and yet waver on the conclusion being true?
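A couple of lines to check the arithmetic (treating the two premises as independent, which my example assumes):

```python
# Just the arithmetic from the question: each premise has a 0.65 chance of
# being true, and (assuming the premises are independent) the chance that
# both hold at once is the product of the two.
p_premise_1 = 0.65
p_premise_2 = 0.65

p_both = p_premise_1 * p_premise_2
print(round(p_both, 4))  # 0.4225 -- below 50%, though each premise is above
```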

2) Denying the antecedent is an invalid form of reasoning. However, don't we use it a lot in everyday life? For example, suppose a friend were to say to us: "If you attend my party, you will receive a prize." Immediately, we would think, "If I don't attend the party, I won't receive a prize. But since I want a prize, I must attend the party." And, I suppose that if one decided not to attend and then wondered why he did not receive the prize, the friend would say: "You did not receive a prize since you didn't come to the party."

Isn't this a kind of denying the antecedent? Or, if not that, at least a tendency in everyday reasoning to assume that the opposite of a conditional is true? For example, if a parent says to their kid, "If you listen, you won't be punished," the kid probably assumes that the contrary is also true, that is, "If you don't listen, you will be punished." Is it legitimate, however, if someone says, "If x, then y," to then assume, "If not x, then not y?"
• 7.6k
I like to look at modus ponens and other (argument) forms based on it in terms of possibility.

Modus ponens

1. If I punch X then X feels pain (assuming X is your average Joe with no weird medical conditions)
2. I punch X
Ergo
3. X feels pain

Affirming the consequent (fallacy)

1. If I punch X then X feels pain
2. X feels pain
Thus,
3. I punch(ed) X

Possibility-based Affirming the consequent (not a fallacy)

1. If I punch X then X feels pain
2. X feels pain
Hence,
3. It is possible that I punch(ed) X

The same possibility-based logic applies to denying the antecedent fallacy.

As for accepting the truth of the premises in a valid syllogism and rejecting the conclusion, that isn't possible unless you're Kurt Patrick Wise (vide infra).

As I shared with my professors years ago when I was in college, if all the evidence in the universe turns against creationism, I would be the first to admit it, but I would still be a creationist because that is what the Word of God seems to indicate. — Kurt Patrick Wise

Mr. Wise is onto something in my humble opinion. We're at liberty to believe whatever we want, but falsehoods tend to be injurious to health, like cigarettes, but some say, truth hurts. The dilemma we face, lies or truths, translates into pain or pain. C'est la vie! Choose the lesser of two evils, betwixt Scylla and Charybdis. I talk too much.
• 999
Is it legitimate, however, if someone says, "If x, then y," to then assume, "If not x, then not y?"

Quite often, yes, it is. "If" in conversational language does not always behave like a material implication in logic. True material implications are often irrelevant or misleading in conversation. "If the Government falls today, then I'm having a party tonight." This is probably true materially, because I've got a party planned, whether or not the Government falls. But I'm implying that my party will be a celebration and that if the Government doesn't fall then I will not have a party.

Other funny ifs include: "If you want a drink, there's beer in the fridge." We cannot apply modus tollens to that statement. It doesn't follow that if there's no beer in the fridge then you don't want a drink.

I'll leave it there, if you don't mind. (So - does that mean if I won't leave it there then you do mind??)

*****

Except your first question is interesting as well. I think you are talking about supposition, willingness to entertain a proposition, etc. So:

I'm willing to entertain the possibility that if p, then q
I'm willing to entertain the possibility that p
But:
I'm not willing to entertain the possibility that q.

I think in such a case I'm not contradicting myself - because being willing to entertain a possibility is not to make an assertion and without an assertion there can be no contradiction. I'm behaving inconsistently in what I'm willing to entertain or not. I think the model here is G E Moore's "It's raining, but I don't believe it's raining", which is not a contradiction (rain and my beliefs are independent) but definitely sounds like one.
• 2.3k
Is it ever reasonable to concede the truth of each of the premises of a deductive argument and yet deny the conclusion

If F(65%) then G(65%)
F(65%)
Therefore, G(65%)

So the unweighted possibility space would look like:

F | G | F->G
T | T | T
T | F | F
F | T | T
F | F | T

For any particular instance you'd have to input a randomized number, or perhaps this is a model of a real process (like flipping an unfair coin) and you'd just have to flip the coin to determine the truth value of each proposition.

That is, it seems that you'd just have to determine the individual truth-value of each proposition, then you could determine whether some connective holds or not in the old-fashioned truth-table way.

I wouldn't want to refuse the possibility a priori, but I do think that denying the conclusion of a deductive argument where its premises are true would depend upon the content of the example -- and probability doesn't seem to undermine validity, because it's at a different "level" of determination than what truth-tables are at: probability determines if such-and-such a proposition is true in a given instance, and then based upon the value of that event you can apply the normal calculus.
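A rough sketch of that unfair-coin picture (assuming F and G are independent, each true 65% of the time, which is my reading of the example):

```python
import random

# Flip an "unfair coin" for each atomic proposition, then evaluate the
# connective in the old-fashioned truth-table way, as described above.
random.seed(0)

n = 100_000
both_premises = 0
for _ in range(n):
    f = random.random() < 0.65
    g = random.random() < 0.65
    f_implies_g = (not f) or g  # material conditional: false only when F and not G
    if f and f_implies_g:
        both_premises += 1
        assert g  # validity survives: whenever both premises hold, so does G

print(both_premises / n)  # close to 0.65 * 0.65 = 0.4225 under independence
```

The point of the inner `assert` is that probability lives at a different level than validity: in every single instance where both premises come up true, the conclusion comes up true as well.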
• 1.3k

suppose, for the sake of my question, that premise 1 and 2 each have a 65% chance of being true

1. If x, then y.
2. x
3. Therefore, y.

OK about (1). But (2) is true. How can it also be probable?
Of course, in your scheme, both of them are incomplete. So (1) cannot stand as a hypothesis, nor (2) as a statement. And therefore (3) cannot stand as a conclusion.

You should at least give an example for this. (The example that you give in the second part of your description is for some other scheme.) So I'll have to do that myself:

1. If x is present then y is also present.
2. x is present.
3. Therefore, y is present too.

The only probable thing is the existence of x. (It doesn't matter how probable it is. That is, math is irrelevant here.) (2) ascertains this existence. It's not probable anymore; it's certain. Therefore (3) is certain, too.

You can vary this scheme in a million ways. It will always be true.
Well, except if you can give an example that denies that. :smile:

***

Now, in the example that you give in the second part of your description, which is for a different scheme, indeed, stating "If I don't attend the party, I won't receive a prize" is a wrong deduction; it doesn't ensue from "If you attend my party, you will receive a prize".
• 109
Your mistake is to confuse "deductive" and "inductive". Modus Ponens is a deductive argument; if A and B are indisputably true, then C follows necessarily. But if you introduce probabilities ("65% likely to be true"), it is no longer Modus Ponens; it is an argument in inductive reasoning, which is something entirely different and much more complicated!
• 2.6k
Since this is the case, can it be rational to think premises of a deductive argument are true and yet waver on the conclusion being true?
I don't know if "this" is the case, but rational thinkers can provisionally, or temporarily, accept stated premises, without committing to a conclusion drawn from them. Unfortunately, most people are poor judges of statistical probability. That's the whole point of Bayesian Inference or "subjective probability".

Since the information (facts?) upon which we reason is always limited, and possibly incomplete or erroneous, it makes sense to recalculate your initial estimate of truth (probability) as more information becomes available. Modus Ponens is an idealized model of reasoning, and may be how computers think, in principle (garbage in, garbage out). But human reasoning is usually influenced by prior beliefs, momentary feelings, and areas of ignorance or misinformation. So it's reasonable to doubt your own reasoning in uncertain situations. Our brains evolved to make quick intuitive judgments from incomplete information, not to statistically analyze precise premises for numerical probability. :nerd:

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
https://en.wikipedia.org/wiki/Bayesian_inference

Judgment Under Uncertainty :
People appear to be poor at Bayesian reasoning (just as Tversky & Kahneman claim) when they are given problems that express the relevant information as percents and that ask them to judge the probability of a single event (e.g., “What is the chance that a person who tests positive for the disease actually has it?”). But a different picture emerges when information is presented in a more ecologically valid format.
https://www.cep.ucsb.edu/topics/stats.htm
• 3.9k
while each premise individually considered is more likely to be true than their contraries, the chance (mathematically speaking, in this example) that both are true at the same time is 0.65 * 0.65, or 0.4225 (42.25%)

This isn’t right though, because the p → q and p are not independent. p → q is true in cases where p is false, or where both p and q are true. Since we’re doing p & p → q, we’re supposed to be interested only in the part where p and q are both true. The probability of that would be pr(p) * pr(q), but of course we don’t have a probability for q — that’s what we want to find. More importantly, we only have a probability for p → q.

Suppose the only cases for P ⊃ Q are ~P & ~Q or P & Q. Then we would be getting 35% of our 65 from ~P and the remaining 30% from a Q entirely contained within P. That leaves about 54% of P unoccupied by Q. That sounds like it’s too high. Shouldn’t it be only 35%?

Nope. P ⊃ Q intuitively says that P is contained within Q, but in this case, it’s only partially contained. 35% of the time we get P without Q, and that’s 35% of the total space, not of P.

So now we have a low figure for pr(Q) of 0.30, worst case scenario where P ⊃ Q is more often true because P is false and Q is false.

The biggest number for Q would be if it entirely contained ~P. (Taking all the cases where P ⊃ Q would have been true anyway because ~P.) That gets us, as before, 35% of the total space, and the same overlap as before where P & Q is another 30% of the total space. The high number then is pr(Q) = 0.65.

Where we end up is that pr(Q) = 0.3 + x, where x is some value between 0 and 0.35. I think that’s as much as we can do.

It might be noteworthy that we know P and Q are not disjoint, Q entirely contained in ~P, because that would leave only 35% for cases where ~P or Q, and that’s too small.

And again, Q cannot entirely contain P, and that includes being the same as P, because then P ⊃ Q would climb to 100% (all of ~P and all of P with no overlap).

It makes some intuitive sense that the max value for Q can’t exceed the max values for the premises, but could be even lower.

That’s my take on it. Happy to be corrected.
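The bookkeeping above can be sketched in a few lines (variable names are mine, just for illustration; the numbers are from the example):

```python
p_P = 0.65            # pr(P), from the example
p_P_implies_Q = 0.65  # pr(P > Q), the material conditional, from the example

# pr(P > Q) = pr(~P) + pr(P & Q), so the overlap pr(P & Q) is forced:
p_not_P = 1 - p_P                    # 0.35
p_P_and_Q = p_P_implies_Q - p_not_P  # 0.30

# pr(Q) = pr(P & Q) + pr(~P & Q), and pr(~P & Q) can lie anywhere in [0, pr(~P)]:
q_low = p_P_and_Q             # worst case: Q doesn't overlap ~P at all
q_high = p_P_and_Q + p_not_P  # best case: Q swallows all of ~P

print(round(q_low, 2), round(q_high, 2))  # 0.3 0.65
```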

Oh, and this is a different thing:

If F(65%) then G(65%)
F(65%)
Therefore, G(65%)

We had a probability for the whole conditional, plus a second premise giving a probability for its antecedent, but no probability for the consequent. If we already knew that pr(G) = 0.65, why would we bother trying to calculate it?
• 2.3k
We had a probability for the whole conditional, plus a second premise giving a probability for its antecedent, but no probability for the consequent. If we already knew that pr(G) = 0.65, why would we bother trying to calculate it?

As I read the OP, the scenario specified that premise 1 and premise 2 each have a 65% chance of being true, rather than the implication having a probability -- so my thought was to simply separate out probability from the truth-table to make the deduction work. For the argument to work we'd probably want to use "and" rather than material implication though, right?

P
Q
Therefore,
P ^ Q

Works out better for the OP's point, that the conclusion is less likely than each initial premise, due to two events being part of it rather than one, and the conjunct only holding true when both are true.

That is, I took the scenario to be demonstrating how validity becomes invalid when probability is part of a given proposition.
• 3.9k
suppose a friend were to say to us: "If you attend my party, you will receive a prize." Immediately, we would think, "If I don't attend the party, I won't receive a prize. But since I want a prize, I must attend the party."

Forgot to do this part!

This is called “perfecting the conditional.” It’s a known thing, that in everyday conversation, conditionals are often taken to be biconditionals. This is a solid example. The standard one is “If you cut my grass, I’ll give you $10.” Maybe I’ll give you $10 even if you don’t cut my grass, but it’s taken to mean “If and only if you cut my grass, I’ll give you $10.”

Works out better for the OP's point, that the conclusion is less likely than each initial premise

I think that’s probably true. I just worked the example given.

Logic can be mapped onto probability somewhat naturally (Ramsey thought they were one thing), but there are some pitfalls. (Lewis has a famous result in “Probability of Conditionals and Conditional Probabilities,” for instance, showing that you can’t interpret pr(P ⊃ Q) as pr(Q | P).) The inferences are very similar, but I think there are objections — at least, interpretively — to taking truth as probability 1 and falsehood as probability 0. (Also: picking a real number, winning the lottery, the usual probability 0 stuff.) Formally, though, it does make some sense to think of logic as a special case of a more general calculus of probabilities.

Not sure what vocabulary we should use for this sort of thing, but “validity” feels really out of place. Once you’re doing probabilities, that’s what you’re doing.
• 2.3k
Logic can be mapped onto probability somewhat naturally...Formally, though, it does make some sense to think of logic as a special case of a more general calculus of probabilities.

That's interesting. New for me, at least ! :D --

Not sure what vocabulary we should use for this sort of thing, but “validity” feels really out of place. Once you’re doing probabilities, that’s what you’re doing.

This makes sense to me. The only conclusion that doesn't really make sense to me is that probability undermines validity -- how to work that out, I'm not sure, but that's really the only belief I'm holding onto here.

I was attempting to "make sense" of probability in some way with my knowledge of baby-logic -- I thought maybe if you separated out the steps, basically, you'd have something that works. But you're saying it can be mapped, but in so doing you're not really testing validity anymore. It's wholly different.
• 3.9k

We can do some stuff with validity in a way. It’s far more common, I think, to claim straight up that P ⊃ Q, but give a probability for P. Then you can reason from P being entirely contained within Q, and you get that pr(Q) is at least as great as pr(P) (because it might include some of ~P, the freebies). That makes ⊃ “probability conserving” in a sense, that you get out at least as much as you put in, you don’t lose anything, just as in logic we want inferences to be truth preserving. (For probabilities, the biconditional is just =, because each side is greater than or equal to the other.)
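A toy model of that “probability conserving” point, with propositions as sets of possible worlds (the construction and the numbers 0.4 and 0.2 are made up for illustration):

```python
import random

# Propositions as sets of "worlds"; pr(A) is the fraction of worlds where A holds.
random.seed(1)
worlds = range(1000)
P = {w for w in worlds if random.random() < 0.4}
Q = P | {w for w in worlds if random.random() < 0.2}  # Q built to contain P

def pr(A):
    return len(A) / len(worlds)

assert P <= Q          # P > Q holds in every world: P is contained within Q
assert pr(Q) >= pr(P)  # so pr(Q) is at least pr(P) -- nothing is lost
```

The second assertion is the “you get out at least as much as you put in” idea: Q covers all of P plus whatever freebies it picks up from ~P.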
• 3.9k

Should have said, the interesting stuff is with conditional probabilities, but it can be harder to wrap your head around at first.
• 3.9k

Dang. Should also have said that the rest of the mapping is that logic’s “or” is + (but you have to not double count where they overlap) and “and” is *, all of which is perfectly natural because logic is a kind of algebra. In logic, we deal with functions that map propositions to a discrete set {0, 1}, but with probability it’s a mapping to the entire interval [0, 1]. You can say that logic is a special case of probability, but it might be better to say that probability is a generalization of logic.
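The {0, 1} end of that mapping can be checked directly (a minimal sketch):

```python
# Restricting probabilities to the values 0 and 1 recovers the truth tables,
# which is the sense in which logic is a special case of probability.
for a in (0, 1):
    for b in (0, 1):
        assert a * b == (a and b)         # "and" is multiplication
        assert a + b - a * b == (a or b)  # "or" is addition minus the overlap
        assert 1 - a == (not a)           # negation is the complement
```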

We are already, sadly, approaching the limits of my knowledge here, but there are folks around that know this stuff much better than I do.