• Daemon
    591
    A fly in a fly-bottle has no understanding of bottles. But it does at least exist as an entity, which is a prerequisite for understanding. A machine (a computer) is not an entity in the appropriate sense.

    The first entities on Earth were single-celled organisms. The cell wall is the boundary between the organism and the rest of the world. No such boundary exists between a computer and the rest of the world.

    Can a 'thinking machine', according to this definition(?), 'understand'? I suspect, if so, it can only understand to the degree it can recursively map itself — 180 Proof

    It isn't appropriate to talk (in the present context) about the computer "itself".
  • InPitzotl
    880
    But it does at least exist as an entity, which is a prerequisite for understanding. — Daemon
    Why is it a prerequisite?
  • Daemon
    591
    Because understanding can't take place without an entity which understands.
  • 180 Proof
    13.9k
    A fly in a fly-bottle has no understanding of bottles — Daemon
    Someone ought to tell that to Wittgenstein.
  • Daemon
    591
    I think he knew already.
  • 180 Proof
    13.9k
    I know, I'm just pointing out that it's you, not I, who misunderstands a metaphor for understanding to the point of taking it literally.
  • Daemon
    591
    Your use of the metaphor wasn't very helpful. We don't reach understanding in the way the fly gets out of the bottle.
  • 180 Proof
    13.9k
    When you're literal-minded, metaphors can't be helpful to you.
  • Daemon
    591
    Say something interesting ffs.
  • Ennui Elucidator
    494
    But isn't the point that understanding is a demonstration of proficiency? To the extent that a fly can escape from a bottle by something other than chance, is that evidence of understanding?

    Not that I understand Wittgenstein or much of anything else.

    When we turn to understanding, by contrast, some have claimed that a new suite of cognitive abilities comes onto the scene, abilities that we did not find in ordinary cases of propositional knowledge. In particular, some philosophers claim that the kind of mental action verbs that naturally come to the fore when we think about understanding—“grasping” and “seeing”, for example—evoke mental abilities “beyond belief”, i.e., beyond simple assent or taking-to-be-true (for an overview, see Baumberger, Beisbart, & Brun 2017). — SEP on Psychology of Understanding
  • Ennui Elucidator
    494
    And just because I think only one person briefly mentioned it, let's be a bit express about the Chinese Room and how it relates to minds and understanding.

    4.4 The Other Minds Reply

    Related to the preceding is The Other Minds Reply: “How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers.”

    Searle’s (1980) reply to this is very short:

    The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In ‘cognitive sciences’ one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.

    Critics hold that if the evidence we have that humans understand is the same as the evidence we might have that a visiting extra-terrestrial alien understands, which is the same as the evidence that a robot understands, the presuppositions we may make in the case of our own species are not relevant, for presuppositions are sometimes false. For similar reasons, Turing, in proposing the Turing Test, is specifically worried about our presuppositions and chauvinism. If the reasons for the presuppositions regarding humans are pragmatic, in that they enable us to predict the behavior of humans and to interact effectively with them, perhaps the presupposition could apply equally to computers (similar considerations are pressed by Dennett, in his discussions of what he calls the Intentional Stance).
  • 180 Proof
    13.9k
    FFS. I try not to waste words on the literal-minded. :yawn:
  • Daemon
    591


    Just for clarity, the part from "Critics hold" onwards is the SEP and not Searle.

    The evidence we have that humans understand is not the same as the evidence that a robot understands. The problem of other minds isn't a real problem; it's more in the nature of a conundrum, like Zeno's paradoxes. Zeno's Arrow paradox, for example, is supposed to show that a flying arrow doesn't move. But it does.

    The nature of consciousness is such that I can't experience your understanding of language. But I can experience my own understanding, and you can experience yours. It would be ridiculous for me to believe that I am the only one who operates this way, and it would be ridiculous for you to believe that you are the only one.

    With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation tool often generates English sentences that make it look like it understands Dutch, but I know it doesn't, because I programmed it.
  • Ennui Elucidator
    494


    It is hard to not completely change topics and respond to your point. Suffice it to say that minds are really hard stuff. This may be due to the fact that people are generally unwilling to treat the idea inclusively (things are in till proven out vs. things are out till proven in). Accepting for a moment that understanding is a function of an agent demonstrating a particular capability, I think it is easy enough to say that understanding can not be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people). Being a bit aggressive, I might suggest that you can't rule out panpsychism and so despite your being responsible for the behavior and assemblage of a computer, it may very well be minded (or multiply minded) sufficiently to understand what it is doing. We have no present way to demarcate minded from non-minded besides interpreting behavior. If something behaves like it understands (however strictly or loosely you want to define demonstrating a skill/ability/competency), bringing up whether it has a mind sufficient for agency doesn't do much work - it merely states the obvious: we don't know what has a mind.

    I suppose if being explicable renders a thing mindless, an increasing number of things that were previously marginally minded (after we admitted that maybe more than just white men could have minds) would go back to not having minds. I just don't know how our minds will survive the challenge 10,000 years from now (when technology is presumably vastly superior to what we managed to create in the last hundred or so years). Before you know it, we will be arguing about p-zombies. For my part, I might approach the thing with humility and err on the side of caution (animated things are minded) rather than dominion (people are special and can therefore subjugate the material world aside from other people).
  • Daemon
    591
    I think it is easy enough to say that understanding can not be discrete, i.e. that a system that can only do one thing (or a variety of things) well lacks agency for this purpose. However, at some point, a thing can do enough things well that it feels a bit like bad faith to say that it isn't an agent because you understand how it was constructed and how it behaves (indeed, if determinism obtains, the same could be said of people). — Ennui Elucidator

    In the case of a computer, it isn't just that we know how it was constructed and how it behaves, the point is that we know it is not using understanding.

    Not only that: a computer is not an agent, we are the agents making use of it. It doesn't qualify for agency, any more than an abacus does.
  • InPitzotl
    880
    With a robot, we know that what looks like understanding isn't really understanding, because we programmed the bloody robot. My translation tool... — Daemon
    I'm a bit confused here. Is your translation tool a robot?
  • Daemon
    591
    There's no significant difference.
  • InPitzotl
    880
    There's no significant difference. — Daemon
    There absolutely is a significant difference. How are you going to teach anything, artificial or biological, what a banana is if all you give it are squiggly scratches on paper? It doesn't matter how many times your CAT tool translates "banana", it will never encounter a banana. The robot at least could encounter a banana.

    Equating these two simply because they're programmed is ignoring this giant and very significant difference.
  • Daemon
    591
    A robot does not "encounter" things any more than a PC does. When we encounter something, we experience it, we see it, feel it, hear it. A robot does not see, feel or hear.
  • Daemon
    591
    I shouldn't be having to say this stuff. It feels like you are all suffering from a sort of mass delusion.
  • InPitzotl
    880
    I'm detecting a few problems here.
    A robot does not "encounter" things any more than a PC does. ... — Daemon
    The question isn't about experiencing; it's about understanding. If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything. I am not asking them to be consciously aware of a car, to have percepts of bananas, to feel the edges of their wallet when they fish for it, etc. I am asking for certain implied things... it's a request, it's deniable, they should purchase the bananas, and they should actually deliver them to me. That they experience things is nice and all, but all I'm asking for is some bananas.
    When we encounter something, we experience it, we see it, feel it, hear it. — Daemon
    I disagree with the premise, "'When humans do X, it involves Y' implies X involves Y". What you're asking me to believe is, in my mind, the equivalent of claiming that asking "Can you go to the store and pick me up some bananas?" is asking someone to experience something; or, phrased slightly more precisely, that my expectation that they understand this equates to my expectation that they (consciously?) experience things. And I don't think that's true. I think I'm just asking for some bananas.

    The other problem is that you missed the point altogether to excuse a false analogy. A human doesn't learn language by translating words to words, or by hearing dictionary definitions of words. It's kind of impossible for a human to come to the point of being able to understand "Can you go to the store and pick me up some bananas?" by doing what your CAT tool does. It's a prerequisite for said humans to interact with the world to understand what I'm asking by that question.

    IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do.
  • Daemon
    591
    The question isn't about experiencing; it's about understanding. — InPitzotl

    As I emphasised in the OP, experience is the crucial element the computer lacks; that's the reason it can't understand. The same applies to robots.

    If I ask a person, "Can you go to the store and pick me up some bananas?", I am not by asking the question asking the person to experience anything. — InPitzotl

    But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things.
    IOW, your point was that robots aren't doing something that humans do, but that's kind of backwards from the point being made that you're replying to. It's not required here that robots are doing what humans do to call this significant; it suffices to say that humans can't understand without doing something that robots do that your CAT tool doesn't do. — InPitzotl

    Neither my CAT tool nor a robot does what I do, which is to understand through experience.

  • InPitzotl
    880
    As I emphasised in the OP, experience is the crucial element the computer lacks; that's the reason it can't understand. — Daemon
    Nonsense. There are people who have this "crucial element", and yet, have no clue what that question means. If experience is "the crucial" element, what is it those people lack?

    I don't necessarily know if a given person would understand that question, but there's a test. If the person responds to that question by going to the store and bringing me some bananas, that's evidence the person has understood the question.
    The same applies to robots. — Daemon
    But in order to understand your question, the person must have experienced stores, picking things up, bananas and a multitude of other things. — Daemon
    Your CAT tool would be incapable of bringing me bananas if we just affixed wheels and a camera to it. By contrast, a robot might pull it off. The robot would have to do more than just translate words and look up definitions like your CAT tool does to pull it off... getting the bananas is a little bit more involved than translating questions into Dutch.
    Neither my CAT tool nor a robot does what I do, which is to understand through experience. — Daemon
    Neither your CAT tool nor a person who doesn't understand the question can do what a robot that brings me bananas and a person who brings me bananas do, which is to bring me bananas.

    I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words make something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for.
  • Ennui Elucidator
    494
    I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words make something bring me bananas. — InPitzotl

    I'm not so sure that Daemon accepts that the understanding is in the doing. A person and a robot acting identically on the line (see box, lift box, put in crate A, reset and wait for next box to be seen) do not both, on his view, understand because the robot is explicable (since he, or someone else, built it from scratch and programmed it down to the last detail). He is after minds as the locus of understanding, but he seems unwilling to accept that what has a mind is not based on explicability. It is a bit like a god of the gaps argument that grows ever smaller as our ability to explain grows ever larger. We will have minds (and understanding) only so long as someone can't account for us.
  • Daemon
    591
    Thank you, that is interesting, but it is definitely not what I am saying.

    My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

    I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits into the world which you have experienced.

    Because we can explain the robot, we know that its actions are not due to understanding based on experience.

    We will continue to have minds and understanding even after we understand our minds.
  • Daemon
    591
    I'm not arguing that robots experience things here. I'm arguing that it's a superfluous requirement. But even if you do add this superfluous requirement, it's certainly not the critical element. To explain what brings me the bananas when I ask for them, you have to explain how those words make something bring me bananas. You can glue experience to the problem if you want to, but experience doesn't bring me the bananas that I asked for. — InPitzotl

    We're not trying to explain how you get bananas; we're trying to explain understanding.
  • InPitzotl
    880
    We're not trying to explain how you get bananas; we're trying to explain understanding. — Daemon
    "Can you go to the store and pick me up some bananas?"InPitzotl
    My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

    I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits into the world which you have experienced. — Daemon
    A correct understanding of the question consists in relating it to a request for bananas. How this fits into the world is how one goes about going to the store, purchasing bananas, coming to me and delivering bananas. You've added experiencing in there. You seem too busy trying to compare a CAT tool's not understanding with an English speaker's understanding to relate understanding to the real test of it: the difference between a non-English speaker just looking at me funny and an English speaker bringing me bananas.

    So what you've tried to get me to do is accept that a robot, just like a CAT tool, doesn't understand, even if the robot brings me bananas; and the reason the robot does not understand the question is that the robot does not experience, just like the CAT tool. My counter is that the robot, just like the English speaker, is bringing me bananas, which is exactly what I meant by the question; the CAT tool is just acting like the non-English speaker, who does not understand the question (despite experiencing; surely the non-English speaker has experienced bananas, and even experiences the question... what's missing then?). "Bringing me bananas" is both a metaphor for what the English speaker correctly relates the question to, which the non-English speaker doesn't, and how the English speaker demonstrates understanding the question.
  • Daemon
    591
    You can redefine "understanding" in such a way that it is something a robot or a computer can do, but the "understanding" I am talking about is still there. The kind a robot can't do.

    "If the baby fails to thrive on raw milk it should be boiled".
  • Josh Alfred
    226
    A) Artificial intelligence can utilize any sensory device and use it to compute. If you understand this, you can also compare it to human sensory experience. There is little difference. Can you understand that? There is no doubt in my mind that B) even if computers cannot understand everything understandable by humans now, in the future they will be able to. This (B) is clearly demonstrated by the advancements in computing devices that have taken decades to achieve, whereupon their intelligence has gone through such testing as the Turing Test and others.
  • InPitzotl
    880
    You can redefine "understanding" in such a way that it is something a robot or a computer can do, but the "understanding" I am talking about is still there. — Daemon
    The concept of understanding you talked about on this thread doesn't even apply to humans. If "the reason" the robot doesn't understand is that the robot doesn't experience, then the non-English speaker that looked at me funny understood the question. Certainly that's broken.

    I think you've got this backwards. You're the one trying to redefine understanding such that "the robot or a computer" cannot do it. Somewhere along the way, your concept of understanding broke to the point that it cannot even assign lack of understanding to the person who looked at me funny.