For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points. — alan1000
Presented with a philosophical question, its responses too often resemble a summary of discussion points. — alan1000
IMHO, these machines are still only very very fast GIGO, data-mining, calculators. — 180 Proof
What evidence would you have to see about some future generation of AI that would lead you to say it's more than "very fast GIGO, data-mining, calculators"? — flannel jesus
I'll be convinced of that when, unprompted and on its own, an AI is asking and exploring the implications of non-philosophical as well as philosophical questions, understands when and when not to question, and learns how to create novel, more probative questions. This is my point about what current AIs (e.g. LLMs) cannot do. As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models"). — 180 Proof
So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view? — flannel jesus
I find ChatGPT is less of an independent thinker, and more of a mirror or echo chamber. — Bret Bernhoft
A Challenge in Proving RAA within an Axiomatic System
You're absolutely right. Proving RAA (Reductio ad Absurdum) within the axiomatic framework of propositional logic is not possible.
Why?
RAA is a meta-logical principle: It's a rule about how we reason within the logic, rather than a theorem of the logic itself.
Axiomatic systems are object-level: They deal with propositions and their relationships, not the rules of inference governing those propositions.
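(Editorial note, not part of the quoted reply: to make the object/meta distinction concrete, compare ⊢ α→α, a formula the system itself proves, with RAA, which is a claim about the provability relation: if Γ, α ⊢ β and Γ, α ⊢ ¬β, then Γ ⊢ ¬α. The first is a line inside a derivation; the second is a statement about derivations.)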
Analogy
Think of an axiomatic system as a game. The rules of the game are fixed. You can't prove one of the rules by playing the game itself. You need to step outside the game to analyze its rules.
What Can We Do?
While we can't prove RAA within the system, we can:
Justify its use: Explain why it's a valid inference rule based on intuition or other accepted logical principles.
Show its consistency: Demonstrate that using RAA doesn't lead to contradictions.
Explore its relationship to other logical principles: Analyze how RAA interacts with other inference rules.
Would you like to explore one of these options?
Perhaps we could discuss how RAA is typically justified in logic textbooks or explore its role in proof systems like natural deduction.
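(Editorial sketch of the first option above, "justify its use": semantically, RAA is truth-preserving. If every valuation that makes Γ and α true also makes a contradiction true, then no such valuation exists; hence every valuation making Γ true makes α false, i.e. makes ¬α true. So Γ semantically entails ¬α whenever Γ together with α entails a contradiction.)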
Reductio ad absurdum (RAA), also known as proof by contradiction, is a fundamental technique in propositional logic. To prove RAA in an axiomatic system, we typically use a set of axioms and inference rules such as those found in Hilbert-style systems. Let's outline a proof of RAA within such a system.
Axioms and Inference Rules:
Axioms:
A1: α→(β→α)
A2: (α→(β→γ))→((α→β)→(α→γ))
A3: (¬α→¬β)→(β→α)
Modus Ponens (MP):
From α and α→β, infer β.
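(Before tackling RAA, a warm-up may help show how A1, A2, and MP actually combine; the following standard derivation of α→α is an editorial addition, not part of the quoted answer.)
1. (α→((α→α)→α))→((α→(α→α))→(α→α))   [A2, with α→α for β and α for γ]
2. α→((α→α)→α)   [A1, with α→α for β]
3. (α→(α→α))→(α→α)   [MP on 2 and 1]
4. α→(α→α)   [A1, with α for β]
5. α→α   [MP on 4 and 3]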
Goal:
To prove RAA, we need to show that if assuming α leads to its own negation (i.e., α→¬α holds), then ¬α must be true.
Proof:
Assume α→¬α. (Assumption)
By Axiom A1 (with ¬α in place of α and α in place of β): ¬α→(α→¬α).
To derive a contradiction, introduce the assumption α.
By Modus Ponens on α and the assumed α→¬α: ¬α.
Since ¬α is derived under the assumption that α leads to ¬α, we can conclude ¬α.
Thus, we've shown that α→¬α leads to ¬α, which concludes the proof by contradiction (RAA).
This establishes that if assuming α leads to a contradiction, then ¬α must be true, thereby proving RAA within our axiomatic propositional logic system.
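(Editorial check: whatever one makes of the derivation above, its target formula is at least a tautology, as a two-row truth table confirms.
α = T: ¬α = F, so α→¬α = F, and (α→¬α)→¬α = T.
α = F: ¬α = T, so α→¬α = T, and since ¬α = T, (α→¬α)→¬α = T.
Both rows come out true, so (α→¬α)→¬α holds under every valuation.)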
A short compilation of ChatGPT screwing up pathetically:
https://chat.openai.com/share/a3c86a67-ff27-4ec9-8ffa-ebc8fb95e01c
https://chat.openai.com/share/96378835-0a94-43ce-a25b-f05e5646ec40
https://chat.openai.com/share/b5241b53-e4d8-4cab-9a81-87fa73d740ad
https://chat.openai.com/share/f924090e-a7eb-4b67-9e62-389db1f6c87b
https://chat.openai.com/share/025521ed-ac2b-4156-bd15-e74053f66cba — Lionino
...you gaslit Gemini, given it said "You are right...". — Lionino
Yawn yawn — Lionino
A Challenge in Proving RAA
RAA stands for Reductio ad Absurdum, which is a proof method rather than a theorem itself. It's a rule of inference, a logical principle that allows us to deduce conclusions from premises.
As such, it cannot be proven within the same logical system in which it is a rule.
To prove RAA, we would need a meta-logical system, a system that talks about the logical system itself. This is a level of abstraction beyond what is typically considered in standard logic courses.
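(Editorial aside: in natural-deduction systems, the rule being described is stated as a rule schema rather than as a formula. From a derivation of a contradiction ⊥ under a temporary assumption α, one discharges α and concludes ¬α:
[α]
 ⋮
 ⊥
────
 ¬α
This is exactly the sense in which RAA belongs to the proof apparatus rather than to the stock of theorems.)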
Analogy
Imagine trying to prove the rule of addition within arithmetic. It's inherently circular; addition is a fundamental operation used to establish other arithmetic truths, but it cannot itself be proven within the arithmetic system.
Alternative Approaches
While we can't prove RAA within the system, we can:
Justify its use: Explain why it's a valid inference rule based on intuition or other accepted logical principles.
Show its consistency: Demonstrate that using RAA doesn't lead to contradictions.
Explore its relationship to other logical principles: Analyze how RAA interacts with other inference rules.
Would you like to explore one of these approaches? Or perhaps you have a specific context in mind where you're questioning the validity of RAA?