• Don Wade
    211
    Describe how you might program a computer, or robot, to philosophize, or mimic a philosopher - such that it would be able to fool a group of experts on a standardized test. I envision a test similar to the Turing Test for intelligence, but more in line with what this Philosophy Forum would consider a good test. I'm looking for input on both: how to program the computer, or how to test the computer.
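One way to make the proposed test concrete is to score a judging panel the way a Turing-style imitation game is scored. Below is a minimal sketch, assuming a setup where each transcript is flagged as human- or machine-written and a judge guesses its origin; the transcripts and the naive judge here are invented placeholders, not a real evaluation protocol.

```python
# A minimal sketch of scoring the proposed panel test.
# Each transcript is paired with a flag saying whether a machine wrote it;
# a judge guesses, and the "fool rate" is the fraction of machine
# transcripts the judge mistook for human work.

def fool_rate(transcripts, judge):
    """transcripts: list of (text, written_by_machine) pairs.
    judge: function text -> True if the judge thinks a machine wrote it.
    Returns the fraction of machine transcripts judged to be human."""
    machine_texts = [text for text, by_machine in transcripts if by_machine]
    fooled = sum(not judge(text) for text in machine_texts)
    return fooled / len(machine_texts)

# Invented example: a naive judge that flags anything mentioning "compute".
transcripts = [
    ("Cogito ergo sum.", False),
    ("Let us compute the truth table of being.", True),
    ("Ambiguity is the soul of the dialectic.", True),
]
naive_judge = lambda text: "compute" in text
print(fool_rate(transcripts, naive_judge))  # 0.5: one of two machine texts slips past
```

A machine "passes" when its fool rate is indistinguishable from chance; the hard part, as the thread goes on to discuss, is producing the transcripts, not scoring them.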
  • 180 Proof
    15.4k
    AlphaGoZeno ...
  • jgill
    3.9k
    Maybe this is somehow connected to your question. Maybe not.
  • counterpunch
    1.6k
    Person: I saw some puppies in a shop window - so I bought one! What did I buy?
    Computer: a shop window!
  • Don Wade
    211
    Thanks for the post. Nope, not what I was looking for, but still a good article. I believe computers can mimic philosophers well enough to pass a test made specifically to test them - but it will not be easy to program the computer, or test it, unless we can first understand more about how to define philosophy.
  • Don Wade
    211
    I don't see a philosophical computer as a hoax. Not sure I know what you mean otherwise.
  • counterpunch
    1.6k


    A play on words?Don Wade

    No, it's a dialogue typical of a Turing test - and an indication of what you're up against in trying to program a computer to do philosophy. A great deal of what is meant is unspoken, or ambiguously expressed, and a computer doesn't have the real-world embodied experience to discern that buying a puppy is something someone might do, whereas buying a shop window is not.
  • fishfry
    3.4k
    ↪fishfry I don't see a philosophical computer as a hoax. Not sure I know what you mean otherwise.Don Wade

    Buzzwordy bullshit (*) taken for profundity, as any such philosophical chatbot must necessarily be.

    (*) I use the word in the sense of Harry Frankfurt, and not as a barnyard epithet. "Speech intended to persuade without regard for truth." Since machines don't do semantics, that's all that could be output by any such computer program as you propose.

    https://en.wikipedia.org/wiki/On_Bullshit
  • Caldwell
    1.3k
    Describe how you might program a computer, or robot, to philosophize, or mimic a philosopher - such that it would be able to fool a group of experts on a standardized test.Don Wade

    Seriously, dude, this is contradiction at its best!

    Philosophize and standardized test? Which one? You can't have both!
  • Don Wade
    211
    A point of view may at first sound like a contradiction with another point of view. That in itself is part of philosophy until alignments can be made to show agreement. That's where discussion can help.
  • Don Wade
    211
    Good points! Ambiguity is part of philosophy. That's what separates it from science. A philosophical computer would realize that and make use of it in debate.
  • counterpunch
    1.6k


    Good points! Ambiguity is part of philosophy. That's what separates it from science. A philosophical computer would realize that and make use of it in debate.Don Wade

    Yeah! Maybe your philosophical computer can also make remarks dripping with obvious sarcasm! That's always good in a debate!
  • jgill
    3.9k
    Ambiguity is part of philosophy. That's what separates it from scienceDon Wade

    Praise the Lord for that distinction. :wink:
  • Caldwell
    1.3k
    A point of view may at first sound like a contradiction with another point of view. That in itself is part of philosophy until alignments can be made to show agreement. That's where discussion can help.Don Wade
    And therein lies the misplaced belief about philosophy. Philosophy thrives in pointing out distinction, in defining a domain, in laying foundation, even in definition. Anyone who proclaims alignments and agreements in just about anything is probably lazy.
  • Don Wade
    211
    A lazy philosopher can still be a philosopher!
  • TheMadMan
    221
    Since philosophy is of the brain/mind, an AI may pretend to be a philosopher. On how to do it, go back to the basics. Philosophy is a collection of words with different meanings and definitions.
    So in a philosophical discussion the computer must find which meaning and definition the human philosopher meant when a word is used. Take a philosophical statement: the computer must know all the meanings and definitions of each word and then (somehow) detect which meaning of each word the human philosopher had in mind in that statement. So it's a lot of work just to give each individual word all its possible meanings, and then hard work (perhaps impossible?) for the AI to find which meaning the human philosopher is choosing at that instant.
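The sense-selection step described above is essentially word-sense disambiguation, and a crude version of it is easy to sketch: store every meaning of a word with a gloss, and pick the sense whose gloss overlaps most with the surrounding statement (a simplified form of the Lesk algorithm). The tiny sense inventory below is invented for illustration; a real system would draw on a full lexical database.

```python
# Simplified Lesk-style sense selection: choose the meaning whose gloss
# shares the most words with the statement's context. The sense inventory
# here is a toy example, not a real dictionary.

SENSES = {
    "truth": {
        "correspondence": "agreement of a statement with fact and reality",
        "honesty": "the habit of being sincere and not lying",
    },
}

def pick_sense(word, statement):
    """Return the sense whose gloss shares the most words with the statement."""
    context = set(statement.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(pick_sense("truth", "Does this statement agree with reality and fact"))  # correspondence
```

Even this toy shows why the "hard work" above may be the crux: gloss overlap is a shallow proxy for what the human philosopher actually had in mind.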
  • Don Wade
    211
    Good points, but it seems you are looking at a "scientific method" of examining the discussion. To use your example: in philosophy, a heavy object will fall faster than a light object; in science (from experiments) we understand that both objects fall at the same rate. A philosophical computer needs only to understand the probability of what you might believe when given information. One might even argue we don't know anything - we only believe we do.
  • TheMadMan
    221
    If you want to rely on the probability of what one might believe, I don't think it would work in a serious discussion, but it may work in a vague, superficial one. Though I wouldn't care much for a "mumbo jumbo" discussion.

    Another way this might work is if the AI asks for a definition of each term (kind of annoying, but OK).
    Let's say a statement from the human has three important terms and the AI asks for their definitions. There then needs to be a relation between those three terms in the AI's system for it to give a somewhat acceptable response - but that's still better than just choosing a random meaning for each term.

    I don't know much about computing, but this seems to me like a lot of work - still interesting, though.
  • Arne
    821
    If you must first define philosophy before you can simulate philosophy, then you will define philosophy in such a way that you can simulate philosophy. And then your computer will be unable to entertain any philosophical notions not contained within its definition. It is the classic garbage in/garbage out dilemma.

    In a Kierkegaardian sense, the inherent dynamics of being (philosophy) are such that it will overflow any box (definition) in which you try to contain it.

    There can be no philosophy if the definition of philosophy is not itself an issue for philosophy.

    And does not the entire project rest upon the unstated presumption that a particular type of entity (human?) is uniquely situated to decide what is and what is not philosophy?

    Interesting topic.
  • Don Wade
    211
    There can be no philosophy if the definition of philosophy is not itself an issue for philosophy.Arne
    I believe we could say the same about "truth". How well can anyone define truth in philosophy - yet we still search for it.
  • Arne
    821
    How well can anyone define truth in philosophy - yet we still search for it.Don Wade

    Searching for truth is not the same as defining truth.

    And defining truth is not the same as defining philosophy.

    My point remains the same. I suspect your project would be more worthwhile if you let go of the mistaken belief that its success depends upon a definition of philosophy.

    It is your project, you solicited opinions, I provided mine, and I wish you nothing but success.
  • Don Wade
    211
    I believe a good philosophical computer would first need to be a good psychologist. It is important to know the probability of someone accepting your statement, because it may seem true to them. Example: if I make the statement "Columbus discovered America", many people may believe that to be true and would accept it. But it's not true at all - close, but not true. In philosophy it is more important to have someone believe your statement than to have scientific proof.
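The idea above - rank candidate statements by how likely an audience is to accept them, rather than by their truth - can be sketched in a few lines. The acceptance estimates below are invented for illustration; a real system would have to learn them from audience data.

```python
# Toy "persuasion-first" selection: the machine picks whichever candidate
# statement the audience is estimated to be most likely to accept,
# regardless of whether it is true. The probabilities are made up.

ACCEPTANCE = {
    "Columbus discovered America": 0.7,   # widely believed, not accurate
    "Leif Erikson reached America before Columbus": 0.3,
}

def most_persuasive(statements):
    """Pick the statement the audience is most likely to accept."""
    return max(statements, key=lambda s: ACCEPTANCE.get(s, 0.0))

print(most_persuasive(list(ACCEPTANCE)))  # Columbus discovered America
```

This is exactly the trade the post describes: the selected statement is the believed one, not the true one.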
  • Don Wade
    211
    Searching for truth is not the same as defining truth.Arne

    How can one search for something they can't define?
  • TheMadMan
    221
    A computer cannot be a philosopher or a psychologist; it can only appear to be one through coding. Statements like "Columbus discovered America" are easy for an AI to deal with, since they are matters of fact (whether correct or incorrect); the difficulty is dealing with opinions and viewpoints, which is what philosophy centers on.
  • Don Wade
    211
    We agree. Now, back to my original post: can we build a computer that may "seem" to be philosophical? That is, could it fool a board of judges? Please note the difference between seems to be and is - as in the Turing Test.
  • Arne
    821
    Searching for truth is not the same as defining truth. How can one search for something they can't define?Don Wade

    and what if you don't see it because it doesn't fit your definition?

    there is a wide range between having an idea of what you are in search of and having a clear definition of what you are in search of.

    searching and defining is an interactive process.

    unless you have already been there, you cannot be certain how it will look until you get there.

    And why would you want to?
  • Don Wade
    211
    We agree. However, one can be in search of a "vague" concept/idea - and not necessarily have a clear definition.
  • TheMadMan
    221
    Oh yes, one can easily build one that "seems", but whether it can fool judges I'm not so sure. I guess it depends on the person the AI is communicating with.
