• Bret Bernhoft
    222
    I recently had a conversation with ChatGPT about whether an AI system can philosophize on its own. Or whether it is more appropriately perceived to be an aid/tool for the human philosopher.

    In short, ChatGPT agrees that AI systems cannot create unique philosophies, and that users of AIs should understand this. But that Artificial Intelligences are extremely useful for running thought experiments and calculating large sums of data. What are your thoughts?
  • alan1000
    200
    I would say that, at the current level of our technological development, ChatGPT has got it just about right... but that in ten or twenty years more, perhaps not.

    You don't need AI to calculate large sums of data. We've been doing that since the Romans invented the abacus. (Little-known fact: expert abacus operators in Hong Kong could outperform the original IBM PC-XT!)

    AI comes more to the foreground in running thought experiments, but at this stage it does not seem to outperform human effort. Thought experiments, by definition, involve conceptual scenarios which go beyond any prior experience, and AI is (for the moment) not very good at this, except in certain very specialised fields like antibiotic research, where its development has been very intensive. But, for the moment, cannot tell us anything useful about how dark matter prevents the collapse of stares near a black hole,

    For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points.
  • alan1000
    200
    "But, for the moment, cannot tell us anything useful about how dark matter prevents the collapse of stares near a black hole,"

    Apologies for the bad grammar. Must proofread more carefully.
  • Bret Bernhoft
    222
    For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points.
    alan1000

    Thank you for your response. And I agree that modern artificial intelligence has a difficult time with truly novel arguments for a strongly stated philosophical case or position.

    The responses that I've observed coming from AI concerning philosophy are (in my opinion) mid-grade debate team material. Which isn't a bad thing, necessarily. But, as you point out, these kinds of tools must do better before they're taken more seriously.

    Time will tell.
  • Heracloitus
    499
    Presented with a philosophical question, its responses too often resemble a summary of discussion points.
    alan1000

    Exactly the result to be expected from a large language model


    AI was not even created to do philosophy. It's completely irrelevant to consider whether AI can entertain thought experiments or exhibit satisfactory rational argumentation.
  • 180 Proof
    15.3k
    Current AIs (e.g. LLMs) cannot philosophize (i.e. raise and reflect on foundational questions in order to reason dialectically towards – in order to understand when and how to create – more probative inquiries) because these 'cognitive systems' are neither embodied (i.e. synthetic phenomenology) nor programmed-trained to emulate metacognition (i.e. self-in-possible-worlds-modeling). IMHO, these machines are still only very very fast GIGO, data-mining, calculators.
  • flannel jesus
    1.8k
    IMHO, these machines are still only very very fast GIGO, data-mining, calculators.
    180 Proof

    Only?

    Have you read this? https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
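
    For anyone who wants a concrete picture of what that "linear emergent world representation" evidence involves, below is a minimal sketch of the probing setup the linked post describes. Everything in it is a stand-in: the activations and labels here are synthetic, while the real experiment fits the same kind of linear probe to cached hidden states of a chess-playing GPT and asks whether the actual board state can be read off them.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Sketch of a linear probe: if a plain linear classifier can recover a fact
# about the "world" (here, a stand-in for "which piece sits on a square")
# from a model's hidden activations, that is taken as evidence the network
# carries an internal model of that world rather than only surface statistics.
rng = np.random.default_rng(0)
n_samples, hidden_dim, n_classes = 5000, 512, 13            # e.g. 12 piece types + empty

activations = rng.normal(size=(n_samples, hidden_dim))      # stand-in for cached hidden states
encoding = rng.normal(size=(hidden_dim, n_classes))         # pretend the fact is linearly encoded
labels = (activations @ encoding).argmax(axis=1)            # synthetic "board state" labels

X_train, X_test, y_train, y_test = train_test_split(activations, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}") # high accuracy = linearly decodable

    On real chess transcripts the interesting finding is roughly that such probes succeed far above chance, which is what the post means by an emergent world representation.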
  • flannel jesus
    1.8k
    And so the fact that there's very strong evidence of internal models doesn't shift your confidence in that at all?
  • 180 Proof
    15.3k
    About "current AIs (e.g. LLMs)"? No.
  • flannel jesus
    1.8k
    Do you agree that there is strong evidence for internal models? Are you saying "yes there are internal models, but that still makes them only very very fast GIGO, data-mining, calculators."? Or are you saying "no internal models"?
  • flannel jesus
    1.8k
    I see, fair enough.

    So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view? What evidence would you have to see about some future generation of AI that would lead you to say it's more than "very fast GIGO, data-mining, calculators"?

    For me, internal models are already past that, as I'm sure you can tell. They're more than statistics, more than calculators, but less than a lot of aspects of human intelligence in a lot of ways, and probably not conscious.
  • 180 Proof
    15.3k
    So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view?
    flannel jesus
    As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models").

    What evidence would you have to see about some future generation of AI that would lead you to say it's more than "very fast GIGO, data-mining, calculators"?
    flannel jesus
    I'll be convinced of that when, unprompted and on its own, an AI is asking and exploring the implications of non-philosophical as well as philosophical questions, understands when and when not to question, and learns how to create novel, more probative questions. This is my point about what current AIs (e.g. LLMs) cannot do.
  • flannel jesus
    1.8k
    :up: Thanks for clarifying.
  • Lionino
    2.7k
    You can use character.ai and talk to "Socrates", "Plato", "Kant", "Descartes". You can also make the AI break character and start talking to you as a normal person/AI. It/they does/do give some good input; it is worth a try.
  • Banno
    24.8k
    These supposed AIs are just stringing words together based on probability. What those words do is not part of the calculation. They sometimes string words together that are statistically significant but which do not match what is happening in the world.

    They make shit up.

    Their confabulation makes them useless as an authority. One has to check that what they claim is actually the case.
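
    To put the "stringing words together based on probability" point in concrete terms, here is a toy sketch of the sampling step. The candidate words and their probabilities are invented for illustration; in a real model they come from a trained network, but the loop itself never consults the world, which is where the confabulation risk lives.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical next-word distribution after some prompt; nothing below
# checks whether the chosen word is true, only whether it is probable.
candidates = ["Canberra", "Sydney", "Melbourne", "a", "the"]
probs = [0.55, 0.30, 0.10, 0.03, 0.02]

for _ in range(5):
    print(rng.choice(candidates, p=probs))   # sometimes plausible, sometimes simply wrong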

    But since so many posters hereabouts seem to do much the same thing, what the hell.
  • apokrisis
    7.3k
    As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models").
    180 Proof

    Yep. Does it flinch as you take back a foot before kicking its cabinet?

    IMHO, these machines are still only very very fast GIGO, data-mining, calculators.
    180 Proof

    :up:

    For anyone interested in why large language models are being forced on us for free, here is the business case back story. AI sells more hardware and operating system licences.

    This has been how the computer industry has been thinking since IBM leased mainframe systems in the 1970s. Get the customer on an escalator of data bloat. Promise huge bottom-line productivity improvements and rake in the cash as the customer keeps coming back for more CPU and memory as they chase the productivity mirage.

  • Bret Bernhoft
    222


    These are fair points. And true points.

    I find ChatGPT is less of an independent thinker, and more of a mirror or echo chamber. Relying on ChatGPT for grand new ideas is difficult. But working with ChatGPT like a potter molds clay is quite helpful.
  • Lionino
    2.7k
    Another episode of folks who can't even code Hello World claiming AI doesn't actually exist because it doesn't conform to their sophomoric restrictive view of what 'intelligence' means even though Gemini is able to debate better than them. Yawn yawn.
  • apokrisis
    7.3k
    I find ChatGPT is less of an independent thinker, and more of a mirror or echo chamber.
    Bret Bernhoft

    And it has a problem in that the more data it averages over, the more it will bland out and become useless. If it starts consuming its own generated media, that compounds the problem.

    So it seems sharpest at a reasonable sized sampling and grows dumber after that.
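
    A toy way to see the own-output worry concretely (a cartoon with a one-parameter model, not a claim about any particular product): fit a simple distribution to some data, sample fresh "data" from the fit, refit, and repeat. The spread of the fitted model tends to drift toward zero over the generations; it blands out.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=200)     # the original "real" data

for generation in range(1, 3001):
    mu, sigma = samples.mean(), samples.std()          # fit this generation's model
    samples = rng.normal(mu, sigma, size=200)          # next generation trains on model output
    if generation % 500 == 0:
        print(f"generation {generation}: fitted spread = {sigma:.3f}")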

    Of course, every problem has its solutions. But I was around for the AI hype of the early 1980s. Lisp and parallel computers were going to be running your corporation, said ICL. IBM snickered into its sleeve as it kept on grinding ICL out of its own home market.

    This is a good video on that data issue....

  • Banno
    24.8k


    The Gemini that produced this?

    In an axiomatic propositional logic, prove RAA
    A Challenge in Proving RAA within an Axiomatic System

    You're absolutely right. Proving RAA (Reductio ad Absurdum) within the axiomatic framework of propositional logic is not possible.

    Why?

    RAA is a meta-logical principle: It's a rule about how we reason within the logic, rather than a theorem of the logic itself.
    Axiomatic systems are object-level: They deal with propositions and their relationships, not the rules of inference governing those propositions.
    Analogy

    Think of an axiomatic system as a game. The rules of the game are fixed. You can't prove one of the rules by playing the game itself. You need to step outside the game to analyze its rules.

    What Can We Do?

    While we can't prove RAA within the system, we can:

    Justify its use: Explain why it's a valid inference rule based on intuition or other accepted logical principles.
    Show its consistency: Demonstrate that using RAA doesn't lead to contradictions.
    Explore its relationship to other logical principles: Analyze how RAA interacts with other inference rules.
    Would you like to explore one of these options?

    Perhaps we could discuss how RAA is typically justified in logic textbooks or explore its role in proof systems like natural deduction.

    ChatGPT just went ahead and did it:

    Reductio ad absurdum (RAA), also known as proof by contradiction, is a fundamental technique in propositional logic. To prove RAA in an axiomatic system, we typically use a set of axioms and inference rules such as those found in Hilbert-style systems. Let's outline a proof of RAA within such a system.
    Axioms and Inference Rules:
    Axioms:
    A1: α→(β→α)
    A2: (α→(β→γ))→((α→β)→(α→γ))
    A3: (¬α→¬β)→(β→α)
    Modus Ponens (MP):
    From α and α→β, infer β.
    Goal:
    To prove RAA, we need to show that if α→¬α leads to a contradiction, then ¬α must be true.
    Proof:
    Assume α→¬α. (Assumption)
    From α→¬α, derive ¬α→¬α using A1.
    By Axiom A1: ¬α→(α→¬α)
    Since α→¬α is assumed, we have: ¬α→(¬α)

    To derive a contradiction, we need to introduce α.

    By Modus Ponens on α and α→¬α:
    ¬α
    Since ¬α is derived under the assumption that α leads to ¬α, we can conclude ¬α.

    Thus, we've shown that α→¬α leads to ¬α, which concludes the proof by contradiction (RAA).
    This establishes that if assuming α leads to a contradiction, then ¬α must be true, thereby proving RAA within our axiomatic propositional logic system.

    But not all that well.
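
    For comparison, a sketch of how RAA is usually obtained as a derived rule in a Hilbert-style system with the axioms quoted above (A1–A3 plus modus ponens), assuming the deduction theorem has already been established (it needs only A1, A2 and MP):

    1. Suppose Γ, α ⊢ β and Γ, α ⊢ ¬β (assuming α yields a contradiction).
    2. By the deduction theorem, Γ ⊢ α → β and Γ ⊢ α → ¬β.
    3. The schema (α → β) → ((α → ¬β) → ¬α) is a theorem of the system; it is a classical tautology, so completeness guarantees a derivation, though a direct Hilbert-style derivation is considerably longer.
    4. Two applications of modus ponens then give Γ ⊢ ¬α, which is RAA stated as a rule about derivability.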
  • Lionino
    2.7k
    The average person does not know what → or axiom mean, so I think it is doing quite well. I have the feeling however that you gaslit Gemini, given it said "You are right...".

    For the record, I have a collection of screw-ups myself:

  • Banno
    24.8k
    ...you gaslit Gemini, given it said "You are right...".
    Lionino

    If I can gaslight it, it's not all that good.

    But actually what it is referring to was its first go at the question, in which it did not even attempt to provide a proof of RAA but instead simply explained what RAA was.

    Yawn yawn
    Lionino

    Indeed.

    I'm not familiar with Gemini. Not a fan of Google.

    A Challenge in Proving RAA
    RAA stands for Reductio ad Absurdum, which is a proof method rather than a theorem itself. It's a rule of inference, a logical principle that allows us to deduce conclusions from premises.
    As such, it cannot be proven within the same logical system in which it is a rule.
    To prove RAA, we would need a meta-logical system, a system that talks about the logical system itself. This is a level of abstraction beyond what is typically considered in standard logic courses.
    Analogy
    Imagine trying to prove the rule of addition within arithmetic. It's inherently circular; addition is a fundamental operation used to establish other arithmetic truths, but it cannot itself be proven within the arithmetic system.
    Alternative Approaches
    While we can't prove RAA within the system, we can:
    Justify its use: Explain why it's a valid inference rule based on intuition or other accepted logical principles.
    Show its consistency: Demonstrate that using RAA doesn't lead to contradictions.
    Explore its relationship to other logical principles: Analyze how RAA interacts with other inference rules.
    Would you like to explore one of these approaches? Or perhaps you have a specific context in mind where you're questioning the validity of RAA?

    Notice how it confabulates an inability to prove RAA within any system.
  • Lionino
    2.7k
    Claude
    To prove the Rule of Assumption for Absurdity (RAA) in axiomatic propositional logic, we need to demonstrate that if assuming a proposition P leads to a contradiction, then we can conclude ¬P. Let's approach this step-by-step:

    First, let's state the RAA rule formally:
    If Γ, P ⊢ Q and Γ, P ⊢ ¬Q, then Γ ⊢ ¬P
    Where Γ is a set of assumptions, P is the proposition we're assuming, and Q is any proposition.
    We'll start with the assumptions:
    Γ, P ⊢ Q (Assumption 1)
    Γ, P ⊢ ¬Q (Assumption 2)
    Our goal is to prove Γ ⊢ ¬P
    We'll use the following axioms and rules:

    Modus Ponens (MP): If A and A → B, then B
    Deduction Theorem (DT): If Γ, A ⊢ B, then Γ ⊢ A → B
    Axiom of Contradiction: ¬(Q ∧ ¬Q)


    Proof:
    a) Γ, P ⊢ Q ∧ ¬Q (from Assumptions 1 and 2)
    b) Γ ⊢ P → (Q ∧ ¬Q) (by DT on step a)
    c) Γ ⊢ ¬(Q ∧ ¬Q) (Axiom of Contradiction)
    d) Γ ⊢ ¬P (by MP on b and c, and the definition of implication)
    Explanation of step d:

    From b, we have P → (Q ∧ ¬Q)
    From c, we have ¬(Q ∧ ¬Q)
    The definition of implication states that (A → B) ≡ (¬A ∨ B)
    So, P → (Q ∧ ¬Q) is equivalent to ¬P ∨ (Q ∧ ¬Q)
    Given ¬(Q ∧ ¬Q), the only way for ¬P ∨ (Q ∧ ¬Q) to be true is if ¬P is true



    Thus, we have proven that if assuming P leads to both Q and ¬Q (a contradiction), then we can conclude ¬P, which is the Rule of Assumption for Absurdity (RAA).


    Perplexity

    To prove Reductio ad Absurdum (RAA) in an axiomatic propositional logic framework, we start by understanding the principle itself. RAA states that if assuming a proposition α leads to a contradiction, then α must be false, which can be formally expressed as: if Γ ∪ {α} is inconsistent, then Γ ⊢ ¬α.
    Proof Outline
    Assumption: Assume α is true.
    Derive a Contradiction: Show that this assumption leads to a contradiction, i.e., derive both β and ¬β for some proposition β.
    Conclude: Since assuming α leads to a contradiction, we conclude ¬α.
    Detailed Proof Steps
    Assume α: Begin by assuming the proposition α is true.
    Show Inconsistency: From the premises in Γ and the assumption α, derive some proposition β and its negation ¬β. This can be done through the application of the axioms and rules of inference in your system (e.g., Modus Ponens, Axiom Schemas).
    Reach a Contradiction: The derivation of both β and ¬β indicates inconsistency, which means that the assumption α cannot hold.
    Conclude ¬α: By the principle of RAA, since the assumption α leads to a contradiction, it follows that ¬α must be true.
    Example Using Axioms
    In a system like Łukasiewicz's, you might use the following axioms:
    Axiom Schema 1: α → (β → α)
    Axiom Schema 2: (α → (β → γ)) → ((α → β) → (α → γ))
    Axiom Schema 3: (¬α → ¬β) → (β → α)
    Using these axioms, you can construct a proof that leads to a contradiction when assuming α is true, thereby allowing you to conclude ¬α through RAA.
    Conclusion
    This proof structure effectively demonstrates the principle of Reductio ad Absurdum in propositional logic. The key is to derive a contradiction from the assumption, which then allows for the conclusion of the negation of that assumption. This method is foundational in formal logic and is widely applicable across various logical systems.
  • fishfry
    3.4k
    They make shit up.

    Their confabulation makes them useless as an authority. One has to check that what they claim is actually the case.

    But since so many posters hereabouts seem to do much the same thing, what the hell.
    Banno

    :100: