• Josh Alfred
    226
    There is some kind of break and convergence between A) being able to translate languages and B) understanding languages. I am not sure what the differences and similarities are, as I have never set the two side by side for comparison. Computers are capable of both. I think @TheMadFool is right on defining understanding: it requires referents, and those referents require some kind of experience of their objects. That is linguistic empiricism in a pure form, but what it doesn't account for is 1) how we know things through rational deduction, where one lacks experience yet knows the premises and conclusions to be valid or invalid deductively, and 2) probably a whole different milieu of other cognitive quandaries.
  • Daemon
    591
    A) Artificial intelligence can utilize any sensory device and use it to compute. If you understand this you can also compare it to human sensory experience. There is little difference. Can you understand that?Josh Alfred

    I can understand what you're saying, but it is quite wrong. When you experience through your senses you see, feel and hear. A computer does not see, feel and hear. I shouldn't need to be telling you this.
  • InPitzotl
    880
    When you experience through your senses you see, feel and hear.Daemon
    And yet, Josh (guessing) does not understand Sanskrit, and you do not understand understanding. A person who does not understand something does not understand it. I shouldn't need to be telling you this.

    You've convinced yourself that experience is the explanation for understanding. The problem is, experience does not explain understanding. A large number of animals also experience; but somehow, only humans have mastered human language. Experience cannot possibly be the explanation for understanding if it isn't even an explanation of understanding.
  • Daemon
    591


    I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

    To understand what "pain" means, for example, you need to have experienced pain.
  • InPitzotl
    880
    I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

    To understand what "pain" means, for example, you need to have experienced pain.
    Daemon
    Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.

    There's a gigantic difference between claiming that X is necessary for understanding, and claiming that X is necessary to understand X.

    ETA: Your claim is that experience is necessary for understanding. I interpret this claim as equivalent to saying that there can be no understanding without experience. The expected justification for this claim would be to show how understanding at all necessarily involves experience (because if it doesn't, the claim is wrong). This is quite different than pointing out areas of understanding that require experience (such as your pain example).

    Explain to me, for example, how you connect the requirement of experience to the example question requesting some bananas.
  • Daemon
    591
    I am not saying that experience is the explanation for understanding, I am saying that it is necessary for understanding.

    To understand what "pain" means, for example, you need to have experienced pain. — Daemon

    Your example isn't even an example of what you are claiming, unless you seriously expect me to believe that you believe persons with congenital analgesia cannot understand going to the store and getting bananas.
    InPitzotl

    I don't really see what you're getting at here. I'm not saying you need to experience pain to understand shopping. You need to experience pain to understand pain.

    To understand shopping, you would need to have experienced shopping.
  • InPitzotl
    880
    To understand shopping, you would need to have experienced shopping.Daemon
    Pain is a feeling. Shopping is an act.

    If I see a person walking through the store, looking at various items, picking up some of them and putting them into the cart, the person is shopping. If I see a robot walking through the store, looking at various items, picking up some of them and putting them into the cart, the robot is shopping. It's hard to say what a robot feeling pain would be by comparison, but since that is all shopping is, that robot is shopping.

    Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?)
  • Daemon
    591


    A robot is not an individual, an entity, an agent, a person. To say that a robot is shopping is a category error.

    Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.

    You could say that the robot is simulating shopping.

    Do you think the robot understands what it is doing?
  • Daemon
    591
    Also, are you implying nobody knows what my question means unless they have bought me bananas? (Prior to which, they have not experienced buying me bananas?)InPitzotl

    I wrote this above: My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experience.

    I don't mean that you need to have experienced a particular thing before you can understand it, but you do need to know how it fits into the world which you have experienced.


    So a person can understand instructions to shop for your bananas if they have had sufficiently similar experiences.

    If the baby fails to thrive on raw milk, boil it.
  • InPitzotl
    880
    A robot is not an individual, an entity, an agent, a person.Daemon
    Just a quick reminder... we're not talking about robots in general. We're talking about a robot that can manage to go to the store and get me some bananas.

    I don't believe such a robot can possibly pull this off (with any sort of efficacy) without being an individual, an entity, or an agent.

    But, sure... it need not be a person.

    I suspect that your concept of individuality/agency drags in baggage I don't myself drag in.
    Of course in everyday conversation we talk as though computers and robots were entities, but here we need to be more careful.Daemon
    Okay, so let's be careful.
    To say that a robot is shopping is a category error.Daemon
    You could say that the robot is simulating shopping.Daemon
    Imagine this theory. Shopping can only be done by girls. I say, that's utter nonsense. Shopping does not require being a girl; I'm a guy, and I can certainly pull it off. But the objection is raised that it's a category error to claim that a guy can shop; you could say that I am simulating shopping.

    I don't quite buy that said argument counts as being careful. I'm certainly, in this particular hypothetical scenario, not committing a category error simply by claiming that I, a guy, can shop; it's not me that's claiming only girls shop. In fact, the suggestion that an event that actually occurs should be considered a simulation seems to raise red flags to me.

    This sounds like exactly the opposite of being careful.

    ETA: You've managed to formulate a theory that makes unjustified distinctions. There's now real shopping, where real bananas get put into real carts, money from real accounts changes hands, and the real bananas are brought to me; and then there's simulated shopping, where all of that stuff also happens, but we're missing vital ingredient X. There's by this logic real walking, where one manages to perform a particular choreography of controlled falling in such a way as to invoke motion without falling over, and simulated walking, where all of this stuff happens, but you're not doing it with the right stuff. There's real surgery, where a surgeon might slice me open with a knife, remove a tumor, and sew me up while managing not to kill me; and simulated surgery, where all of this stuff happens... the tumor's still removed, I'm still alive... but the thing slicing me open didn't quite have feels in the right way.

    It seems to me there's no relevant difference here between the real thing and what you're calling a simulation... which is also the real thing, but is missing the ingredient you demand the real thing requires to call it real. All of this stuff still gets done... so to me, this is the ultimate test demonstrating that the thing you demand must be there to do it isn't in fact necessary at all. Are you sure you want this to be your standard that vital ingredient X is necessary? Because it sounds to me like this is the very definition of ingredient X not being vital.

    A genuine argument for ingredient X's vitality should not look like a No True Scotsman fallacy. If experience isn't doing any work for you to explain something crucial about understanding, it's, as I said, superfluous... and your inclusion of it just to include it is simply baggage. If you have a good reason to suspect experience is necessary, that is what you should present; not just a narrative that lets you say that, but an explanation for how it critically fits in.
    Do you think the robot understands what it is doing?Daemon
    In a nutshell, yes. But again, to be clear, this does not stem from a principle that doing things is understanding. Rather, it's because this is precisely the type of task that requires understanding to do with any efficacy.
  • Daemon
    591
    There is some kind of break and convergence between A) Being able to translate languages B) Understanding languages. I am not sure what those differences and similarities are, as I have never posited the two for comparison. Computers are capable of both.Josh Alfred

    Researchers have compared the results of machine translation to a jar of cookies, only 5% of which are poisoned.

    Computers can do an amazingly good job of translating, but they don't do what we do when we translate. We use our understanding, and you can see from the faults in machine translation that understanding is what a computer lacks.

    If a computer could do what I can do, people would use Google Translate and I wouldn't have any work. Google Translate is free and I am quite expensive.

    What the computer lacks is involvement with the world.

    I put this sentence into Google Translate: "If the baby fails to thrive on raw milk, boil it."

    Google translated this into Dutch as "Als de baby niet gedijt op rauwe melk, kook hem dan."

    That means "If the baby fails to thrive on raw milk, boil him."

    Google Translate is extremely ingenious, but it lacks understanding, because it is not involved with the world as we are, through experience. QED.
  • TheMadFool
    13.8k
    My suggestion is that understanding something means relating it correctly to the world, which you know and can know only through experienceDaemon

    Mary's Room?

    The question is, does Mary learn anything new?

    I recall mentioning this before, but what is red? Isn't it just our eyes' way of perceiving light at 750 nm in the visible spectrum?

    Look at it in terms of language. This :point: 0 is sifr in Persian, zero in English and sunya in Hindi but do we claim that the Persian knows something more from the word "sifr" or that an Englishman got an extra amount of information from the word "zero" and so on?

    Likewise, does Mary get ahold of new information when she sees the color red? It's just 750 nm in eye language.

    I dunno. :chin:
  • Daemon
    591


    Mary's deficit in the room is only that she hasn't seen red. Apart from that she is a normal experiencing human being.

    A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.
  • Varde
    326
    Understanding is knowing a subject mechanically, such as with C++. If - there is - a - any number here - do - X.

    A tree is/can be green and brown...

    If you know each part of the former sentence, you therefore understand what is meant. To get a machine to understand, then, it must be programmed concisely.
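
    Rendered as a minimal Python sketch (the condition and the action here are placeholders of my own, not anything specified above), that sort of mechanical rule might look like:

    # "If - there is - [any number here] - do - X", as a mechanical rule.
    def apply_rule(tokens):
        if any(t.isdigit() for t in tokens):  # "there is a number here"
            return "X"                        # the programmed response
        return None                           # the rule does not fire

    print(apply_rule("a tree is 1 of many".split()))        # -> X
    print(apply_rule("a tree is green and brown".split()))  # -> None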
  • TheMadFool
    13.8k
    Mary's deficit in the room is only that she hasn't seen red. Apart from that she is a normal experiencing human being.

    A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.
    Daemon

    As I tried to explain with the Mary's Room thought experiment, redness is just 750 nm (the wavelength of red) in eye dialect. Just as you can't claim to have learned anything new when the statement "the burden of proof" is translated into Latin as "onus probandi", you can't say that seeing red gives you any new information.
  • Daemon
    591
    As I tried to explain with the Mary's Room thought experiment, redness is just 750 nm (the wavelength of red) in eye dialect. Just as you can't claim to have learned anything new when the statement "the burden of proof" is translated into Latin as "onus probandi", you can't say that seeing red gives you any new information.TheMadFool

    What you've set out here is just one side of the disagreement about Mary's Room, but I am suggesting that not just red but everything you have learned comes from experience. Do you have a counter to that?
  • TheMadFool
    13.8k
    What you've set out here is just one side of the disagreement about Mary's Room, but I am suggesting that not just red but everything you have learned comes from experience. Do you have a counter to that?Daemon

    Yes, I think so. I'll give you an argument Socrates made.

    1. Nothing in our experience is truly, precisely, equal. Everything we encounter around us is at best only approximately equal.

    Yet,

    2. We have the concept of perfect equality.

    Ergo,

    3. Not everything we know is drawn from experience.
  • InPitzotl
    880
    I recall mentioning this before but what is red? Isn't it just our eyes way of perceiving 750 nm of the visible spectrum of light?TheMadFool
    Eyes do not perceive, so the answer to the question is no (I'm sure you didn't literally mean that eyes perceive, but you have to be specific enough here for me to know what you did mean).

    Color vision in most humans is trichromatic; to such humans, 750nm light would affect the visual system in a particular way that contrasts quite a bit with 550nm light. The tristimulus values for each would be X=0.735, Y=0.265, Z=0 and X=0.302, Y=0.692, Z=0.008 respectively. A protanope would be dichromatic; the protanope's visual system might have distimulus values of X=1.000, Y=0.000 for 750nm light and X=0.992, Y=0.008 for 550nm light.

    Assuming Jack is typical, Jane has an inverted spectrum, and Joe is a protanope, Jack and Jane agree 750nm light is red and 550nm light is green; and Joe doesn't quite get what the fuss is about.
    A computer doesn't experience anything. All the information you and I have ever acquired has come from experience.Daemon
    Imagine a test. There are various swatches within 0.1 units of X=0.735, Y=0.265, Z=0, and these are mixed in with various swatches within 0.1 units of X=0.302, Y=0.692, Z=0.008. Jack, Jane, Joe, and a robot affixed with a colorimeter are tasked to sort the swatches of the former kind together and the swatches of the latter kind together into separate piles. Jack, Jane, and the robot would be able to pass this test. Joe will have some difficulty.

    Jack and Jane do this task well using their experiences of seeing the swatches. Joe will have great difficulty with this task despite experiencing the swatches. The robot can be programmed to succeed at this test with success rates rivaling Jack and Jane, despite having no experiences.

    I'll grant that all of the information Jack, Jane, and Joe have ever acquired has come from experience. I'll grant that the robot here does not experience. But granting this, with regard to this test, Joe's the odd one out, not the robot.

    Maybe Jack, Jane, and Joe only being able to sort swatches using their experiences does not demonstrate that experience is the critical thing necessary to sort swatches correctly.
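
    For concreteness, here is a minimal Python sketch of how the robot might be programmed to pass the swatch test, assuming the colorimeter reports XYZ tristimulus values, and using the reference values and the 0.1-unit tolerance from above (an illustration, not a claim about any particular robot):

    import math

    # Reference tristimulus values from the test above.
    RED_REF = (0.735, 0.265, 0.0)      # swatches around 750nm light
    GREEN_REF = (0.302, 0.692, 0.008)  # swatches around 550nm light
    TOLERANCE = 0.1

    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def sort_swatch(xyz):
        # Put the swatch in the pile whose reference value it lies near.
        if distance(xyz, RED_REF) <= TOLERANCE:
            return "pile 1"
        if distance(xyz, GREEN_REF) <= TOLERANCE:
            return "pile 2"
        return "neither"

    print(sort_swatch((0.74, 0.26, 0.0)))   # -> pile 1
    print(sort_swatch((0.30, 0.70, 0.01)))  # -> pile 2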
  • TheMadFool
    13.8k
    Eyes do not perceive, so the answer to the question is no (I'm sure you didn't literally mean that eyes perceive, but you have to be specific enough here for me to know what you did mean).

    Color vision in most humans is trichromatic; to such humans, 750nm light would affect the visual system in a particular way that contrasts quite a bit with 550nm light. The tristimulus values for each would be X=0.735, Y=0.265, Z=0 and X=0.302, Y=0.692, Z=0.008 respectively. A protanope would be dichromatic; the protanope's visual system might have distimulus values of X=1.000, Y=0.000 for 750nm light and X=0.992, Y=0.008 for 550nm light.

    Assuming Jack is typical, Jane has an inverted spectrum, and Joe is a protanope, Jack and Jane agree 750nm light is red and 550nm light is green; and Joe doesn't quite get what the fuss is about.
    InPitzotl

    Languages may be mutually unintelligible, but nothing new is added in translation from one to another. Joe's knowledge that red is 750 nm, even when he's blind to red, is equivalent to Jack and Jane seeing/perceiving red. Red is, after all, light of 750 nm in eye dialect.

    Here's a little thought experiment:

    If I say out loud to you "seven" and then follow that up by writing "7" and showing it to you, is there any difference insofar as the content of my spoken and written message is concerned?

    No!

    Both "seven" (aural) and "7" (visual) contain the same information - seven-ness.

    Likewise, seeing the actual color red is equivalent to knowing the number 750 (nm) - they're both the same thing and nothing new is learned by looking at a red object.
  • InPitzotl
    880
    Joe's knowledge that red is 750 nm,TheMadFool
    There's language translation, and there's wrong. What color are a polar bear, Santa's beard, and snow?
    If I say out loud to you "seven" and then follow that up by writing "7" and showing it to you, is there any difference insofar as the content of my spoken and written message is concerned?TheMadFool
    Your thought experiment is misguided. 7 is a number. Seven is another name for the number 7. But 7 aka seven is not a dwarf. There might be seven dwarves, but seven isn't a dwarf.
    Likewise, seeing the actual color red is equivalent to knowing the number 750 (nm) - they're both the same thing and nothing new is learned by looking at a red object.TheMadFool
    Seeing the actual color red is not equivalent to knowing the number 750nm. Colors are not wavelengths of light; wavelengths of light have color (if you isolate light to photons of that wavelength and have enough of them to trigger color vision), but a wavelength of light and a color aren't the same thing. A polar bear is white, not red (except after a nice meal), despite its fur reflecting photons whose wavelength is 750nm. There's no such thing as a white photon. White is a color. Colors are not wavelengths of light.

    Joe also sees a color, in a color space we don't tend to name (because we're cruel?), when he sees 750nm light. But the color he sees is pretty much the same color as 550nm light. We call the former red, and the latter green.
  • TheMadFool
    13.8k
    Before we go any further, what do you think of the idea that perception is a language? It seems to be one; after all, the brain is interpreting the neural signals pouring into it through the senses.
  • InPitzotl
    880
    Before we go any further, what do you think of the idea that perception is a language?TheMadFool
    It might work as a metaphor, but I wouldn't go further than that.
  • TheMadFool
    13.8k
    It might work as a metaphor, but I wouldn't go further than that.InPitzotl

    Why?
  • InPitzotl
    880
    Why?TheMadFool
    It's not really the same thing, in short. Language does more than what perception does, and perception does more than what language does. They deserve different concepts. I don't think I want to elaborate here; I haven't bothered with the other thread yet (and once I do, I might just lurk, as I typically do way more often than comment).
  • TheMadFool
    13.8k
    It's not really the same thing, in short. Language does more than what perception does, and perception does more than what language does. They deserve different concepts. I don't think I want to elaborate here; I haven't bothered with the other thread yet (and once I do, I might just lurk, as I typically do way more often than comment).InPitzotl

    I hadn't thought it through either. It just seemed to make sense to me, intuitively that is. I guess it's nothing. G'day.
  • Daemon
    591
    I thought some examples of Gricean Implicature might amusingly illustrate what computers can't understand (and why):

    A: I broke a finger yesterday.
    Implicature: The finger was A's finger.

    A: Smith doesn’t seem to have a girlfriend these days.
    B: He has been paying a lot of visits to New York lately.
    Implicature: He may have a girlfriend in New York.

    A: I am out of petrol.
    B: There is a garage around the corner.
    Implicature: You could get petrol there.

    A: Are you coming out for a beer tonight?
    B: My in-laws are coming over for dinner.
    Implicature: B can't go out for a beer tonight.

    You can complete the remaining examples (a computer can't).

    A: Where is the roast beef?
    B: The dog looks happy.

    A: Has Sam arrived yet?
    B: I see a blue car has just pulled up.

    A: Did the Ethiopians win any gold medals?
    B: They won some silver medals.

    A: Are you having some cake?
    B: I'm on a diet.
  • InPitzotl
    880
    I thought some examples of Gricean Implicature might amusingly illustrate what computers can't understand (and why)Daemon
    I think you're running down the garden path.

    I'm a human. I experience things. I also understand things. I can do things like play perfect tic tac toe, go to the store and buy bananas, and solve implicature puzzles.

    I'm also a programmer. I have the ability to "tell a computer what to do". I can easily write a program to play perfect tic tac toe. Not only can I do this, but I can specifically write said program by self reflecting on how I myself would play perfect tic tac toe; that is, I can appeal to my own intuitive understanding of tic tac toe, using self reflection, and emit this in the form of a formal language that results in a computer playing perfect tic tac toe.
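
    Something like this minimal minimax sketch in Python gives the flavor (an illustration of the standard approach, not the exact program):

    # Perfect tic tac toe via minimax. A board is a list of 9 cells: 'X', 'O', or None.
    LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def winner(board):
        for i, j, k in LINES:
            if board[i] and board[i] == board[j] == board[k]:
                return board[i]
        return None

    def minimax(board, player):
        # Returns (score, best_move) from `player`'s perspective:
        # +1 for a forced win, 0 for a draw, -1 for a forced loss.
        if winner(board):
            return -1, None  # the previous mover just won, so `player` has lost
        moves = [i for i in range(9) if board[i] is None]
        if not moves:
            return 0, None   # draw
        opponent = 'O' if player == 'X' else 'X'
        best_score, best_move = -2, None
        for m in moves:
            board[m] = player
            score, _ = minimax(board, opponent)
            board[m] = None
            if -score > best_score:  # the opponent's loss is our gain
                best_score, best_move = -score, m
        return best_score, best_move

    board = ['X', 'O', 'X',
             None, 'O', None,
             None, None, None]
    print(minimax(board, 'X'))  # -> (0, 7): X must block O's (1, 4, 7) column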

    But by contrast, to write a program that drives a bot to go to the store and buy bananas, or to solve implicature puzzles, is incredibly difficult. Mind you, these are easy tasks for me to do, but that tic tac toe trick I pulled to write the perfect tic tac toe player just isn't going to cut it here.

    I don't think you're grasping the implication of this here. It sounds as if you're positing that you, a human, can easily do something... like go to the store and buy bananas, or solve implicatures... and a computer, which isn't a human, cannot. And that this implies that computers are missing something that humans have. That is the garden path I think you're running down... you have a bad impression. It's us humans that are building these computers that have, or don't have as the case may be, these capabilities. So when I show you my perfect tic tac toe playing program, that is evidence that humans understand tic tac toe. When I show you my CAT tool that can't even solve an implicature problem, this is evidence that humans have not solved the problem of implicature.

    And maybe they will; maybe in 15 years you'll be surprised. Your CAT tool will suddenly solve these implicatures like there's no tomorrow. But that just indicates that programmers solved implicatures... the CAT tool still wouldn't know what a banana is. How could it?

    The whole experience thing is a non-sequitur. I have just as much "experiencing" when I write tic tac toe as I do when I fail to make a CAT tool that solves implicatures. I don't think if I knew how to put experiences into the CAT tool that this would do anything to it that would help it solve implicatures. I certainly don't make that tic tac toe perfect player by coding in experiences. It's really easy to say humans have experiences, humans can do x, and computers cannot do x, therefore x requires experiences. But I don't grasp how this can actually be a justified theory. I don't get what "work" putting experiences in is being theorized to do to pull off what it's being claimed as being critical for.
  • Varde
    326
    To understand means to, get, mentally/spiritually.

    To know means to, have, mentally/spiritually.

    Though you may understand something, you may lose it when further complexities concerning its concept arise.

    You understand shape, but at the mention of adv. shape you seem to lose what you got.

    When you understand fully a concept, you can know about it - you can secure what you get from it.

    Knowing is halving; it's as simple as looking at this word - example - and being able to half all aspects of it (its symbol, its meaning, its reality, etc.). Halving in mind is not directly about the fraction (though it is), but more the process.

    I look up at the sky, I am able to say I know what it is, because I can quickly decompose it - half.
  • Daemon
    591
    You seem to be contradicting yourself. The other day you had a robot understanding things, now you say a computer doesn't know what a banana is.

    I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.
  • TheMadFool
    13.8k
    Me (to a stranger): Sir, can you give me the directions to the nearest hotel?

    Stranger (to me): Yeah, sure. Take this road and turn left at the second junction. There's a hotel there, a good one.

    ---

    Me (to Siri): Siri, can you give me the directions to the nearest hotel?

    Siri: The shortest route to the hotel nearest you is: take x street, turn left at y street. It should take you about 3 minutes in light traffic.


    Both Siri and the kind stranger (seem to have) understood my question. A mini Turing Test.