• Michael Ossipoff
    1k
    Any device that can do what a person or other animal can do has "Consciousness". That's how it does those things, you know. — Michael Ossipoff

    If a Computer could experience for example the Color Red then I would agree. But a Computer does not Experience anything.
    SteveKlinko

    ...and you know that....how? If you aren't a computer, then how can you speak for what a computer does or doesn't experience?

    What does experience mean? I define "experience" as a purposefully-responsive device's interpretation of its surroundings and events in the context of that device's designed purposes.

    By that definition, yes a computer has experience.

    As I said, we tend to use "Consciousness" and "Experience" chauvinistically, applying those words only to humans or other animals. That's why I try to cater to that chauvinism by sometimes defining those words in terms of the speaker's perception of kinship with some particular other purposefully-responsive device.

    A Computer can be programmed to scan pixels in an image to find the Red parts. A Computer will look for pixels with values that are within a certain range of numbers. A Computer never has a Red experience but it can find the Red parts of an image.

    When you find the red part of an image, why should I believe that you have a red experience in a meaningful sense in which a computer doesn't?

    The computer finds the red part of the image. You find the red part of the image. Period (full-stop).

    You wouldn't report the red part of the image if you hadn't experienced it. The same can rightly be said of the computer.

    So just because it can find the Red parts of an image, like a Human can, it does not mean it has a Conscious Red experience while doing this.

    You call it a Conscious Experience when it's yours, or of another person, or maybe another animal. ...you or a purposeful-device sufficiently similar to you, with which you perceive some kinship.

    A Computer works in a different way than a Conscious being does.

    ...because you define a Conscious Being as something very similar to yourself.

    Science doesn't understand enough about Consciousness yet to design Machines that have Consciousness.

    ...if you're defining "Consciousness" as "ability to pass as human".

    Current technology can't yet produce a robot that acts indistinguishably from a human and does any job that a human can do.

    Imitating or replacing humans is proving more difficult than expected. Life has evolved over billions of years of natural selection. It wasn't reasonable to just expect to throw together something to imitate or replace us in a few decades.

    If such a machine is ever built, some would say that it has Consciousness and Experience (as do we), and some would say that it doesn't (and that it's a philosophical zombie merely claiming to have feelings and experiences).

    Of course the former would be right.

    Michael Ossipoff
  • SteveKlinko
    101
    Any device that can do what a person or other animal can do has "Consciousness". That's how it does those things, you know. — Michael Ossipoff

    If a Computer could experience for example the Color Red then I would agree. But a Computer does not Experience anything. — SteveKlinko
    ...and you know that....how? If you aren't a computer, then how can you speak for what a computer does or doesn't experience?

    What does experience mean? I define "experience" as a purposefully-responsive device's interpretation of its surroundings and events in the context of that device's designed purposes.

    By that definition, yes a computer has experience.

    As I said, we tend to use "Consciousness" and "Experience" chauvinistically, applying those words only to humans or other animals. That's why I try to cater to that chauvinism by sometimes defining those words in terms of the speaker's perception of kinship with some particular other purposefully-responsive device.
    Michael Ossipoff
    Seriously ... you think a Computer experiences the color Red like we do? You know that the only thing happening in a computer at any instant of time is: Add, Subtract, Multiply, And, Or, Xor, Divide, Shift Left, Shift Right, Compare two numbers, Move numbers around in memory, plus a few more. If you have 4 cores then this is happening in 4 different places in the computer chip. Which one of these operations experiences the color Red?

    A Computer can be programmed to scan pixels in an image to find the Red parts. A Computer will look for pixels with values that are within a certain range of numbers. A Computer never has a Red experience but it can find the Red parts of an image. — SteveKlinko

    When you find the red part of an image, why should I believe that you have a red experience in a meaningful sense in which a computer doesn't?

    The computer finds the red part of the image. You find the red part of the image. Period (full-stop).

    You wouldn't report the red part of the image if you hadn't experienced it. The same can rightly be said of the computer.
    Michael Ossipoff
    Question is: Do you have a Red experience in any meaningful sense? Think about your Red experience. Think about the Redness of the Red. That Redness is a Property of a Conscious phenomenon. Think about how a Computer works. Add, Subtract, Multiply, etc. There are categorical differences between how the Human Brain functions and how a Computer functions. A Human Brain has trillions of Neurons firing simultaneously at any instant of time. A 4 core processor chip only has 4 places where things can happen at any given instant of time. Effectively a 4 core computer chip has only 4 Neurons.

    So just because it can find the Red parts of an image, like a Human can, it does not mean it has a Conscious Red experience while doing this. — SteveKlinko

    You call it a Conscious Experience when it's yours, or of another person, or maybe another animal. ...you or a purposeful-device sufficiently similar to you, with which you perceive some kinship.



    A Computer works in a different way than a Conscious being does. — SteveKlinko

    ...because you define a Conscious Being as something very similar to yourself.
    Michael Ossipoff
    I showed you how a Machine detects Color. It compares numbers in memory locations. It makes no sense to think that it also has a Red experience. It doesn't need a Red experience to detect colors. Machines and Brains do things using different methods.
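    The mechanism being described (comparing pixel values against numeric bounds) can be sketched in a few lines of Python. This is only a hypothetical illustration; the channel thresholds and the toy image are assumptions, not anything specified in the thread:

```python
# Hypothetical sketch of range-based "red detection": a pixel counts as red
# when its red channel is high and its green and blue channels are low.
def is_red(pixel, red_min=150, other_max=100):
    r, g, b = pixel
    return r >= red_min and g <= other_max and b <= other_max

def find_red_parts(image):
    """Return the (row, col) coordinates of red pixels in a 2-D pixel grid."""
    return [(y, x)
            for y, row in enumerate(image)
            for x, pixel in enumerate(row)
            if is_red(pixel)]

# A 2x2 toy image of (R, G, B) triples:
image = [[(200, 30, 40), (10, 10, 10)],
         [(90, 90, 90), (255, 0, 0)]]
print(find_red_parts(image))  # [(0, 0), (1, 1)]
```

    Everything here reduces to the comparisons and memory moves listed earlier in the thread; there is no step in which anything beyond number comparison occurs.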


    Science doesn't understand enough about Consciousness yet to design Machines that have Consciousness. — SteveKlinko

    ...if you're defining "Consciousness" as "ability to pass as human".

    Current technology can't yet produce a robot that acts indistinguishably from a human and does any job that a human can do.

    Imitating or replacing humans is proving more difficult than expected. Life has evolved over billions of years of natural selection. It wasn't reasonable to just expect to throw together something to imitate or replace us in a few decades.

    If such a machine is ever built, some would say that it has Consciousness and Experience (as do we), and some would say that it doesn't (and that it's a philosophical zombie merely claiming to have feelings and experiences).

    Of course the former would be right.
    Michael Ossipoff
    I'm not defining Consciousness as the ability to pass as Human. Most Birds can probably have a Conscious Red experience.

    I think that since our Brains are made out of Physical Matter that other kinds and Configurations of Matter could produce Consciousness. Science just needs to understand Consciousness more. I think that someday there will be full Conscious Androids and not mere Robots.
  • Wayfarer
    5.8k
    I suppose most of us are familiar with the concept of philosophical zombies, or p-zombies for short: beings that appear and act like humans and are completely indistinguishable from humans but do not have consciousness.BlueBanana

    So, how to test for a P-Zombie: ask some questions, like, 'how do you feel right now?', 'what is the most embarrassing thing that ever happened to you?', or 'what's your favourite movie, and what did you like about it?' Engage it in conversation. I can't see how it could maintain the pretence of being, well, 'a being', for very long, as all it can do is regurgitate, or combine, various responses and information that has been uploaded into it (how, by the way? Is it a computer? If so, could it pass the Turing Test?)

    These challenges from 17th century philosophers still seem germane:

    if there were such machines with the organs and shape of a monkey or of some other non-rational animal, we would have no way of discovering that they are not the same as these animals. But if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs—for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action.

    René Descartes, Discourse on Method (1637)

    It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.

    Gottfried Leibniz, Monadology
  • BlueBanana
    780
    I wonder how Descartes would react to Siri :D Anyway, I don't think the Turing test is a good method for detecting a thinking process. A robot, or a zombie, could be programmed to answer questions about their feelings as if they had any.

    I can't see how it could maintain the pretence of being, well, 'a being', for very long, as all it can do is regurgitate, or combine, various responses and information that has been uploaded into it (how, by the way? Is it a computer? If so, could it pass the Turing Test?)Wayfarer

    Isn't that what humans do as well? We are fed information through our senses in infancy and childhood that we over the course of years learn how to react to.
  • BlueBanana
    780
    But Humans don't work like Robots.SteveKlinko

    Is the converse true? I think a robot works, although in a simplified way, like a human, making it possible for it to replicate the actions of conscious beings.

    The Conscious Visual experience contains an enormous amount of information that is all packed up into a single thing. The Neural Activity is not enough.SteveKlinko

    I think the opposite is the case. A conscious experience, whatever its benefits are, cannot be efficient. While containing all of the visual data provided by eyes, it also contains the experience of that data, which is such a rich experience we ourselves can't even begin to comprehend how it is created. The brain also unconsciously organizes and edits that data to a huge extent, filling gaps, causing us to perceive illusions, basically expanding our visual experience beyond what information is provided by the senses. For example,

    When I reach out to pick up my coffee cup I see my Hand in the Conscious Visual experience. If my hand is off track I adjust my hand movement until I can touch the handle and pick up the coffee cup.SteveKlinko

    a robot would only need to find a specific kind of group of pixels with a color matching the color of the cup. The conscious mind, for some reason, in a way wastes energy forming an idea of "cupness", equating that cup with other cups and connecting it to its intended usage as well as all the memories (unconscious or conscious) an individual has relating to cups. All that information could be broken down to individual points and be accessed by a robot, but instead the human mind makes something so complex and incomprehensible.

    The existence of that idea also allows me to, while seeing a simple cup, appreciate my conscious perception of that cup. I still can't see the evolutionary value of that appreciation, though.
  • Wayfarer
    5.8k
    I wonder how Descartes would react to Siri?BlueBanana

    I think the Descartes of the quote I provided could have anticipated something like Siri.

    Isn't that what humans do as well?BlueBanana

    You're taking a lot for granted, and in such matters, that is not wise.
  • SteveKlinko
    101
    But Humans don't work like Robots. — SteveKlinko
    Is the converse true? I think a robot works, although in a simplified way, like a human, making it possible for it to replicate the actions of conscious beings.

    The Conscious Visual experience contains an enormous amount of information that is all packed up into a single thing. The Neural Activity is not enough. — SteveKlinko
    I think the opposite is the case. A conscious experience, whatever its benefits are, cannot be efficient. While containing all of the visual data provided by eyes, it also contains the experience of that data, which is such a rich experience we ourselves can't even begin to comprehend how it is created. The brain also unconsciously organizes and edits that data to a huge extent, filling gaps, causing us to perceive illusions, basically expanding our visual experience beyond what information is provided by the senses. For example,

    When I reach out to pick up my coffee cup I see my Hand in the Conscious Visual experience. If my hand is off track I adjust my hand movement until I can touch the handle and pick up the coffee cup. — SteveKlinko
    a robot would only need to find a specific kind of group of pixels with a color matching the color of the cup. The conscious mind, for some reason, in a way wastes energy forming an idea of "cupness", equating that cup with other cups and connecting it to its intended usage as well as all the memories (unconscious or conscious) an individual has relating to cups. All that information could be broken down to individual points and be accessed by a robot, but instead the human mind makes something so complex and incomprehensible.

    The existence of that idea also allows me to, while seeing a simple cup, appreciate my conscious perception of that cup. I still can't see the evolutionary value of that appreciation, though.
    BlueBanana
    You're missing the reality that the Robot would most definitely need the concept of cupness to operate in the general world of things. Knowing the color of the handle of one particular cup might help with that cup. In the real world the Robot would need to understand cupness in order to find a cup in the first place. Then when it finds a cup it can determine what color it is.
  • Harry Hindu
    1.1k
    A robot, or a zombie, could be programmed to answer questions about their feelings as if they had any.BlueBanana
    Like I said, p-zombies cannot be programmed. They are dead inside. Humans are more like robots, whereas p-zombies are more like a mechanical contraption without any capacity for programming. Humans are programmable. P-zombies are not.

    You're taking a lot for granted, and in such matters, that is not wise.Wayfarer
    Every time you are asked what it is that is missing when we compare humans to computers, you weasel out of answering the question.

    You're missing the reality that the Robot would most definitely need the concept of cupness to operate in the general world of things. Knowing the color of the handle of one particular cup might help with that cup. In the real world the Robot would need to understand cupness in order to find a cup in the first place. Then when it finds a cup it can determine what color it is.SteveKlinko
    So, we design a robot with templates - a template for cups, for humans, for dogs, for cars, etc. - just like humans have. We humans have templates stored in our memory for recognizing objects. We end up getting confused, just like a robot would, when an object shares numerous qualities with different templates. The solution is to make a new template, like "spork". What would "sporkness" be? Using the word, "cupness" just goes to show what is wrong with philosophical discussions of the mind.
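    The template idea can be made concrete with a toy sketch (the labels and feature sets below are invented for illustration, not anything from the discussion): each template is a set of expected features, recognition scores an object against every template, and a spork-like object ties between two templates, which is exactly the confusion described.

```python
# Hypothetical sketch of template-based recognition: each template is a set
# of expected features, and recognition scores an object against every one.
TEMPLATES = {
    "cup":   {"handle", "hollow", "holds-liquid"},
    "spoon": {"handle", "scoop"},
    "fork":  {"handle", "tines"},
}

def recognize(features):
    """Score a feature set against each template; ties mark ambiguous objects."""
    scores = {label: len(features & tmpl) for label, tmpl in TEMPLATES.items()}
    best = max(scores, key=scores.get)
    return best, scores

# A spork shares features with both the spoon and fork templates:
label, scores = recognize({"handle", "scoop", "tines"})
print(scores)  # {'cup': 1, 'spoon': 2, 'fork': 2}
```

    The tie between "spoon" and "fork" is the moment at which a system (human or robot) would mint a new template rather than force a fit.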
  • BlueBanana
    780
    You're missing the reality that the Robot would most definitely need the concept of cupness to operate in the general world of things. Knowing the color of the handle of one particular cup might help with that cup. In the real world the Robot would need to understand cupness in order to find a cup in the first place. Then when it finds a cup it can determine what color it is.SteveKlinko

    It would need the information the concept of cupness holds - or, some of that information. It doesn't need to compose all this knowledge into this abstract construction that also holds much unnecessary data. Cupness is much more than what a cup is and what counts as a cup, or even all the knowledge concerning cups there is. Cupness is like a generalized form of a thought (or dare I even say feeling) of a cup, something only a conscious and sentient being can grasp. That simultaneously compressed but also overly complex thought is something I don't believe would be necessary.
  • BlueBanana
    780
    Like I said, p-zombies cannot be programmed. They are dead inside. Humans are more like robots, where p-zombies are more like a mechanical contraption without any capacity for programming. Humans are programmable. P-zombies are not.Harry Hindu

    If they have memory, they can learn, and therefore they can be in a sense programmed through conditioning. So, do they have memory? Is memory a property of only mind, only brain or both? I'm not an expert but afaik the existence of memories in the brain has been scientifically confirmed.

    Either way, the reflexes of p-zombies should work normally so at least classical conditioning should work on them.
  • Harry Hindu
    1.1k
    I don't see how you can have a brain, but no mind, or at least the potential for mind. Having memory means you have a mind. Many people on this thread are being inconsistent and attributing minds to humans but not to computers. Why? How do we know that humans have minds but computers don't? What is a mind if not memory that stores and processes sensory data?
  • BlueBanana
    780
    Having memory means you have a mind.Harry Hindu

    Let's interpret a footstep as a memory of something stepping there. What entity has this memory? Ground? The Earth? Some even larger system? Can any of these be said to have a mind?

    What is a mind if not memory that stores and processes sensory data?Harry Hindu

    Sentience to begin with. Conscious experiences.

    Many people on this thread are being inconsistent and attributing minds to humans but not to computers. Why? How do we know that humans have minds but computers don't?Harry Hindu

    They don't express any attributes that would imply them having a mind, and neither does their physical structure.
  • SteveKlinko
    101
    You're missing the reality that the Robot would most definitely need the concept of cupness to operate in the general world of things. Knowing the color of the handle of one particular cup might help with that cup. In the real world the Robot would need to understand cupness in order to find a cup in the first place. Then when it finds a cup it can determine what color it is. — SteveKlinko
    So, we design a robot with templates - a template for cups, for humans, for dogs, for cars, etc. - just like humans have. We humans have templates stored in our memory for recognizing objects. We end up getting confused, just like a robot would, when an object shares numerous qualities with different templates. The solution is to make a new template, like "spork". What would "sporkness" be? Using the word, "cupness" just goes to show what is wrong with philosophical discussions of the mind. — Harry Hindu
    I agree about templates but don't understand your objection to saying cupness or even sporkness.
  • SteveKlinko
    101
    ↪BlueBanana I don't see how you can have a brain, but no mind, or at least the potential for mind. Having memory means you have a mind. Many people on this thread are being inconsistent and attributing minds to humans but not to computers. Why? How do we know that humans have minds but computers don't? What is a mind if not memory that stores and processes sensory data?Harry Hindu
    The Computer Mind would be equivalent to the Physical Human Mind (the Brain). But Humans have a further processing stage which is the Conscious Mind. When Humans see the Color Red there are Neurons firing for Red. But with Humans there is also that undeniable Conscious Red experience that happens. You can't really believe that a Computer has an actual Red experience. That would imply some Computer Self having that experience. Science knows very little about Consciousness so who knows maybe even a grain of sand has Consciousness. But you have to draw a line somewhere in order to study the problem.
  • Harry Hindu
    1.1k
    Let's interpret a footstep as a memory of something stepping there. What entity has this memory? Ground? The Earth? Some even larger system? Can any of these be said to have a mind?BlueBanana
    I don't know what you are asking here. If we were to interpret a footprint as a memory of something stepping there, then yes, it would be part of the mind of the Earth, I guess. But we don't call footprints "memories". We call them "evidence". We can also call your recall of a crime evidence as well, so in a way, yes, memories are effects of certain causes, just like footprints. What makes footprints and memories distinct is that footprints are not used to obtain a certain goal by the same system that they are part of. Footprints are on the ground and part of the surface of the Earth. The Earth has no goals for which to use the memory/knowledge of that footprint, or any footprint for that matter. Organisms and computers (currently only working for human goals) are the only things that use information to obtain goals.


    Sentience to begin with. Conscious experiences.BlueBanana
    ...and what are conscious experiences?


    They don't express any attributes that would imply them having a mind, and neither does their physical structure.BlueBanana
    What attributes would imply that one has a mind? Wouldn't one attribute be that it uses stored information to act for its own interests?

    How do we know that "physical" structure has anything to do with it? What is "physical" anyway?
  • Harry Hindu
    1.1k
    I agree about templates but don't understand your objection to saying cupness or even sporkness.SteveKlinko
    Then are you sure that you agree with me about templates? My point was that we have better, more accurate terms to use ("cup template") instead of these philosophically loaded terms, like "cupness".


    The Computer Mind would be equivalent to the Physical Human Mind (the Brain). But Humans have a further processing stage which is the Conscious Mind. When Humans see the Color Red there are Neurons firing for Red. But with Humans there is also that undeniable Conscious Red experience that happens. You can't really believe that a Computer has an actual Red experience. That would imply some Computer Self having that experience. Science knows very little about Consciousness so who knows maybe even a grain of sand has Consciousness. But you have to draw a line somewhere in order to study the problem.SteveKlinko
    No. You have to study it first to see where you should draw the line, or else that line would be subjective - arbitrary.

    There is no red out there. Red only exists as a representation of a certain wavelength of EM energy. Any system could represent that wavelength as something else - the written word "red", the sound of the word "red" being spoken, another color, or even something else entirely. No matter what symbol some system uses to represent that wavelength of EM energy, others that also have a different representation could eventually translate the other's symbol for that thing. That is what we do with translating languages.
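    That point about interchangeable representations can be sketched concretely (the symbol choices and wavelength values below are hypothetical): two systems bind different symbols to the same wavelength, and translation works by going through the shared referent, just as with natural languages.

```python
# Hypothetical sketch: two systems represent the same stimulus (a wavelength
# in nm) with different symbols; translation goes through the shared referent.
english = {700: "red", 530: "green", 470: "blue"}
hex_rgb = {700: "#FF0000", 530: "#00FF00", 470: "#0000FF"}

def translate(symbol, source, target):
    """Map a symbol from one system to another via the wavelength they share."""
    wavelength = next(wl for wl, s in source.items() if s == symbol)
    return target[wavelength]

print(translate("red", english, hex_rgb))  # #FF0000
```

    Neither symbol system is privileged; each is just a different binding to the same wavelength, which is the substance of the claim above.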

    What we see is not what the world is. Our minds model the world as physical objects with boundaries and borders, but the world isn't like that. It is all process, which can include "mental" processes, and non-mental (what many might call "physical" processes). When you look at someone you see them as a physical being, but they are just an amalgam of certain processes, some of them being "mental" in nature. What I mean by "mental" is goal-oriented sensory information processing. Brains are what we see, but they are just models of other's mental processes.

    YOU are a process. What I mean is, your mind is a process - a mental process. It is what it is like to be your mental process of simulating the world in fine detail so that you can fine-tune your behavior for the extremely wide range of situations you will find yourself in during your lifetime. Your conscious experience is just a predictive model of the world and is not as the world is in real-time. It is continually updated with new sensory information milliseconds after the events in the world.

    So to say that computers cannot have minds seems to be out of the question, if we designed them to learn using the information they receive about the world and their own bodies through sensory devices and to represent the world (using the information from its senses) in a way that enables it to fine-tune its behavior to achieve its own personal goals. In other words, the computers you have on your desktop probably do not have minds in the same sense that we do. There may be something it is like with it being a process like everything else, but it is without any self-awareness or independent thought.
  • SteveKlinko
    101
    I agree about templates but don't understand your objection to saying cupness or even sporkness. — SteveKlinko
    Then are you sure that you agree with me about templates? My point was that we have better, more accurate terms to use ("cup template") instead of these philosophically loaded terms, like "cupness". — Harry Hindu
    So you are just arguing about semantics? For me Cup Template and Cupness have the same meaning.


    The Computer Mind would be equivalent to the Physical Human Mind (the Brain). But Humans have a further processing stage which is the Conscious Mind. When Humans see the Color Red there are Neurons firing for Red. But with Humans there is also that undeniable Conscious Red experience that happens. You can't really believe that a Computer has an actual Red experience. That would imply some Computer Self having that experience. Science knows very little about Consciousness so who knows maybe even a grain of sand has Consciousness. But you have to draw a line somewhere in order to study the problem. — SteveKlinko
    No. You have to study it first to see where you should draw the line, or else that line would be subjective - arbitrary. — Harry Hindu
    I think Science will get nowhere if it insists that a grain of sand has Consciousness.


    There is no red out there. Red only exists as a representation of a certain wavelength of EM energy. Any system could represent that wavelength as something else - the written word "red", the sound of the word "red" being spoken, another color, or even something else entirely. No matter what symbol some system uses to represent that wavelength of EM energy, others that also have a different representation could eventually translate the other's symbol for that thing. That is what we do with translating languages.

    What we see is not what the world is. Our minds model the world as physical objects with boundaries and borders, but the world isn't like that. It is all process, which can include "mental" processes, and non-mental (what many might call "physical" processes). When you look at someone you see them as a physical being, but they are just an amalgam of certain processes, some of them being "mental" in nature. What I mean by "mental" is goal-oriented sensory information processing. Brains are what we see, but they are just models of other's mental processes.

    YOU are a process. What I mean is, your mind is a process - a mental process. It is what it is like to be your mental process of simulating the world in fine detail so that you can fine-tune your behavior for the extremely wide range of situations you will find yourself in during your lifetime. Your conscious experience is just a predictive model of the world and is not as the world is in real-time. It is continually updated with new sensory information milliseconds after the events in the world.
    Harry Hindu
    I agree except that the Conscious experience of something like the color Red is more than "Just a Predictive Model of the World". It is a Conscious experience. Science does not know how to explain any Conscious experience yet.

    So to say that computers cannot have minds seems to be out of the question, if we designed them to learn using the information they receive about the world and their own bodies through sensory devices and to represent the world (using the information from its senses) in a way that enables it to fine-tune its behavior to achieve its own personal goals. In other words, the computers you have on your desktop probably do not have minds in the same sense that we do. There may be something it is like with it being a process like everything else, but it is without any self-awareness or independent thought. — Harry Hindu
    I say Computers that we have today don't have Minds but I didn't say that they can never have Minds. We first have to understand how the Human Mind works and only then will we be able to design Conscious Machines. But with Consciousness everything is still on the table. Maybe we should study the Consciousness of a grain of sand first, but I doubt the Wisdom or Logic of doing that.