• Bret Bernhoft
    I recently had a conversation with ChatGPT about whether an AI system can philosophize on its own, or whether it is more appropriately perceived as an aid/tool for the human philosopher.

    In short, ChatGPT agrees that AI systems cannot create unique philosophies, and that users of AIs should understand this, but that artificial intelligences are extremely useful for running thought experiments and calculating large sums of data. What are your thoughts?
  • alan1000
    I would say that, at the current level of our technological development, ChatGPT has got it just about right... but that in ten or twenty years more, perhaps not.

    You don't need AI to calculate large sums of data. We've been doing that since the Romans invented the abacus. (Little-known fact: expert abacus operators in Hong Kong could outperform the original IBM PC-XT!)

    AI comes more to the foreground in running thought experiments, but at this stage it does not seem to outperform human effort. Thought experiments, by definition, involve conceptual scenarios which go beyond any prior experience, and AI is (for the moment) not very good at this, except in certain very specialised fields like antibiotic research, where its development has been very intensive. But, for the moment, cannot tell us anything useful about how dark matter prevents the collapse of stares near a black hole,

    For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points.
  • alan1000
    "But, for the moment, cannot tell us anything useful about how dark matter prevents the collapse of stares near a black hole,"

    Apologies for the bad grammar. Must proofread more carefully.
  • Bret Bernhoft
    "For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points." – alan1000

    Thank you for your response. And I agree that modern artificial intelligence has a difficult time with truly novel arguments for a strongly stated philosophical case or position.

    The responses that I've observed coming from AI concerning philosophy are (in my opinion) mid-grade debate team material. Which isn't a bad thing, necessarily. But, as you point out, these kinds of tools must do better before they're taken more seriously.

    Time will tell.
  • Heracloitus
    "Presented with a philosophical question, its responses too often resemble a summary of discussion points." – alan1000

    Exactly the result to be expected from a large language model.


    AI was not even created to do philosophy. It's completely irrelevant to consider whether AI can entertain thought experiments or exhibit satisfactory rational argumentation.
  • 180 Proof
    Current AIs (e.g. LLMs) cannot philosophize (i.e. raise and reflect on foundational questions in order to reason dialectically towards – in order to understand when and how to create – more probative inquiries) because these 'cognitive systems' are neither embodied (i.e. synthetic phenomenology) nor programmed-trained to emulate metacognition (i.e. self-in-possible-worlds-modeling). IMHO, these machines are still only very very fast GIGO, data-mining, calculators.
  • flannel jesus
    "IMHO, these machines are still only very very fast GIGO, data-mining, calculators." – 180 Proof

    Only?

    Have you read this? https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
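
    For anyone who hasn't followed the link: the post reports that a GPT trained only on chess-move text develops an internal representation of the board, which simple linear probes can read off its activations. A rough sketch of that probing technique, using synthetic stand-in activations rather than a real model's internals (all names and numbers here are illustrative):

    ```python
    # Sketch of a "linear probe": fit a linear map from a model's hidden
    # activations to a board-state label; high probe accuracy is the kind
    # of evidence for an internal world model the linked post describes.
    # The activations below are synthetic stand-ins, not real GPT internals.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend hidden states: n positions, d dimensions, with a binary
    # board feature (e.g. "is this square occupied?") linearly encoded.
    n, d = 1000, 64
    direction = rng.normal(size=d)        # the encoding the probe must find
    labels = rng.integers(0, 2, size=n)   # 0/1 board-state feature
    acts = rng.normal(size=(n, d)) + np.outer(labels - 0.5, direction)

    # The probe itself: logistic regression trained by gradient descent.
    w, b = np.zeros(d), 0.0
    lr = 0.1
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))  # sigmoid of logits
        grad = p - labels                          # dLoss/dlogits (cross-entropy)
        w -= lr * (acts.T @ grad) / n
        b -= lr * grad.mean()

    acc = ((acts @ w + b > 0) == labels.astype(bool)).mean()
    print(f"probe accuracy: {acc:.2%}")   # high when the feature is linear
    ```

    (Probe accuracy well above chance on held-out positions is the evidence the post cites; this sketch skips the train/test split for brevity.)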
  • flannel jesus
    And so the fact that there's very strong evidence of internal models doesn't shift your confidence in that at all?
  • 180 Proof
    About "current AIs (e.g. LLMs)"? No.
  • flannel jesus
    Do you agree that there is strong evidence for internal models? Are you saying "yes, there are internal models, but that still makes them only very very fast GIGO, data-mining, calculators"? Or are you saying "no internal models"?
  • flannel jesus
    I see, fair enough.

    So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view? What evidence would you have to see about some future generation of AI that would lead you to say it's more than "very fast GIGO, data-mining, calculators"?

    For me, internal models are already past that, as I'm sure you can tell. They're more than statistics, more than calculators, but less than a lot of aspects of human intelligence in a lot of ways, and probably not conscious.
  • 180 Proof
    "So if it's not internal models that make them more than 'very fast GIGO, data-mining, calculators', then what would, in your view?" – flannel jesus
    As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models").

    "What evidence would you have to see about some future generation of AI that would lead you to say it's more than 'very fast GIGO, data-mining, calculators'?" – flannel jesus
    I'll be convinced of that when, unprompted and on its own, an AI is asking and exploring the implications of non-philosophical as well as philosophical questions, understands when and when not to question, and learns how to create novel, more probative questions. This is my point about what current AIs (e.g. LLMs) cannot do.
  • flannel jesus
    👍 Thanks for clarifying.