• SteveKlinko
    395
    Artificial Intelligence is primarily implemented by a class of computer programs that can accomplish tasks that mimic Human Intelligence. Examples are things like Speech Recognition, Facial Recognition, and Self Driving Cars. With the improved computers and algorithms that we have today these kinds of computer capabilities have become increasingly useful. But the Hype over all this is astounding. Marketing departments are trying to imply that these kinds of capabilities mean that there is an actual Conscious entity involved in the Speech Recognition, Facial Recognition, and Self Driving Car. But these are all just computer programs performing a specific task. These would have to be classified as Non-Conscious Artificial Intelligence. If Consciousness can be added to Machines then full Conscious Artificial Intelligence will be achieved.

    According to the Inter Mind Model (IMM) the Speech Recognition, Facial Recognition, and Self Driving Car capabilities would reside in the Machine Physical Mind (PM), which is the computer Hardware and Software. The Machine PM serves the same purpose as our Human PM (the Brain). But it seems that there is no capability for a Conscious Mind (CM) to have any Volitional effect on the Machine PM, as occurs with a Human PM. The Machine PM is just mindlessly executing computer programs.

    There is speculation that the CM might interact Volitionally with the Human PM using Quantum Mechanical effects. The Wikipedia page for Quantum Consciousness says: The Quantum Mind or Quantum Consciousness group of hypotheses proposes that Classical Mechanics cannot explain Consciousness. It posits that Quantum Mechanical phenomena, such as Quantum Entanglement and Superposition, may play an important part in the Brain's function and could form the basis of an explanation of Consciousness.

    So we might expect that a CM would interact Volitionally with a Machine PM using Quantum Mechanical effects. But current technology does not allow for this in computer designs. The special connections are just not designed into the hardware at this time. Basically, we are not sure how to do this yet. But we have to start somewhere and the Machine Consciousness Experiment is an attempt to make such a Quantum Mechanical connection from a CM to a Machine PM. Note that the Quantum Mechanical effect is the Inter Mind because it connects the CM to the Machine PM.

    The Quantum Mechanical connection between the CM and the PM must be a two-way street. Volition allows a CM to affect a PM in order to do things in Physical Space (PSp). So in this case the connection is from the CM to the PM. A CM also needs to perceive what's going on in a PM, and therefore in PSp, so this connection must be from the PM to the CM.

    For Humans, Neurons contain structures called Microtubules that operate based on Quantum Mechanical principles. If a CM (through an Inter Mind) is able to sense the state of a Human PM by sensing the state of the Microtubules then the CM might have the ability to sense all Neurons in the Human PM at the same time. For Vision the CM might be able to sense the state of the Visual Cortex areas in order to experience what the Visual areas are currently Seeing. The CM would experience its own Personal CL.

    For Machines, there are no Neurons but there are Transistors which operate on Quantum Mechanical principles. What if a CM (through an Inter Mind) could sense the state of the Transistors in an electronic circuit? A TFT Display Monitor has a Transistor at each pixel location. Maybe a CM can sense the state of all these Transistors in order to See what is currently displayed on the monitor. This is similar to how the CM senses the Visual Cortex in Humans. The Machine might then experience its own Personal CL similar to the Human experience.

    This situation could also work for camera chips and CMs. It may be the case that millions or billions of CMs have been experiencing what's going on in the world through Transistors for many years already. But these CMs have not had any way to affect anything in PSp because we have not designed the Volitional interfaces yet. Manufacturers are not making Androids (Robots with Consciousness) yet, just mindless Robots. But Androids are conceptually possible and are probably not far away.

    The problem with Artificial Intelligence today is that Scientists are working on the Computer Processing side of things while denying that a Conscious Visual Experience, for example, would greatly improve the operation of their Machines with respect to Visual Processing. Consider that billions of years of evolutionary development produced the Conscious Visual experience in the life forms on this planet, so why do we want to deprive our Machines of this kind of Data?
  • Brian A
    25
    I think the question that concludes the post operates on a faulty premise. To "deprive" the "machines" presupposes that the machines have the dignity that human persons have. But this is untrue.

    Also the post assumes a sort of dualism: that there is a difference between the CM and the PM. But my understanding is that most scientists reject that view and think that the so-called CM is itself the functioning of the PM. Therefore since dualism is not accepted among scientists, the question whether we ought to pursue the fusion of CM and machines does not even arise. Rather the question becomes whether scientific technology will reach the point where the brain can be artificially replicated in machines. But this is doubtful since the brain is unique and very complex.
  • Rich
    3.2k
    Artificial Intelligence is primarily implemented by a class of computer programs that can accomplish tasks that mimic Human Intelligence. Examples are things like Speech Recognition, Facial Recognition, and Self Driving Cars. — SteveKlinko

    This is not what computers do. The algorithms do not mimic, and what's more, ultimately a human must adjust the algorithms. All the computers do is brute-force data scanning with shortcut filtering.
  • BC
    13.1k
    Using some clues, one can sound as though one understands the physical mind better than one actually does. While we have made some real progress in developing some understandings about how the brain works, we don't know far more than we do know.

    It's possible that we may not be able to transcend the limits of our brains to understand how the brain works.

    Given that we do not understand how our own intelligence is achieved, it seems very unlikely we will design an actual artificial intelligence. We may have to be content with computers that seem like they are intelligent, but are not. That doesn't strike me as a problem. Isn't it enough that we can build programs to perform very useful functions like speech recognition, or autonomous automobiles?
  • Rich
    3.2k
    speech recognition — Bitter Crank

    Speech recognition is pretty much a joke, as anyone who has to deal with such shoddy customer service software will immediately recognize. As soon as I hear those silly questions on the phone, I start banging on 0 hoping that I might be lucky enough to get a human.

    Computers are good at very simple data filtering tasks, which is why well-run companies such as Amazon and Google avoid the so-called AI stuff.
  • BC
    13.1k
    The "customer service" answering and routing devices are designed to do two things: Demonstrate to the consumer just how unimportant he or she is to the company, and to discourage him or her from annoying the company any further. They could just as well say "Drop dead, creep."

    The system that Apple uses for its voice-to-text feature uses big mainframe computers to perform the task of interpreting speech. I'm not sure whether it's a Google or Apple operation. That's how it manages to be as good as it is. Perfect? No, but it is head and shoulders above what "customer service" voice recognition fails to do even poorly.
  • TheMadFool
    13.8k
    The computer and AI make an appearance along the way from infanthood to adulthood. The mind of a single human evolves, in the simplest of terms, from the concrete to the abstract. Isn't that why a 5-year-old, barring the born genius, can't understand advanced math like calculus? A 5-year-old is taught through repetition, taught some rules of grammar or arithmetic, and then given exercises to hone their skills. They don't understand the rules. They just mechanically apply them. Isn't that like a...computer? So, modern technology can replicate the mind of a 5-year-old.

    The challenge is: how do we replicate the adult mind that, unlike the 5-year-old, can also understand above and beyond the mere application of rules?

    What's the difference between
    1. Mechanical application of rules
    And
    2. Comprehension of the logic behind these rules?

    An adult mind can do both while the computer can do only 1.

    The point to note is that these two different mental faculties (see 1 and 2 above) can only be perceived with access to the inner workings of a person or a computer. If all we have is access to the output (human behavior, printouts, audiovisual displays) we simply can't make the distinction between a person and a computer.

    That brings us to an important conclusion. An AI needn't actually be a person. All it has to do is perfectly mimic a person to pass itself off as one. Without access to the inner world of circuits we simply can't tell a person from a good AI.
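    To make the contrast concrete, here is a toy Python sketch of faculty 1, the mechanical application of rules (the suffix rules are invented for the example):

        # Faculty 1: mechanically apply suffix rules to pluralize nouns.
        # The program follows the rules; it does not comprehend English grammar.
        RULES = [
            ("y", lambda w: w[:-1] + "ies"),   # city -> cities
            ("s", lambda w: w + "es"),         # bus -> buses
            ("",  lambda w: w + "s"),          # default: cat -> cats
        ]

        def pluralize(word):
            for suffix, apply_rule in RULES:
                if word.endswith(suffix):
                    return apply_rule(word)

        for noun in ["cat", "city", "bus"]:
            print(noun, "->", pluralize(noun))

    Faculty 2 would be knowing why the rules hold and when they break down (this same table wrongly turns "boy" into "boies"); that knowledge is nowhere in the program.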
  • Rich
    3.2k
    Without access to the inner world of circuits we simply can't tell a person from a good AI. — TheMadFool

    It will be a very peculiar day when humans cannot tell the difference between some dumb tool that they created and their own creative minds that created that dumb tool. Whatever sci-fi writers might say, computers are basically fast filters of data. They have zero intuition and no power to create something new. They follow simple instructions that we give them.
  • SteveKlinko
    395
    I think the question that concludes the post operates on a faulty premise. To "deprive" the "machines" presupposes that the machines have the dignity that human persons have. But this is untrue.

    Also the post assumes a sort of dualism: that there is a difference between the CM and the PM. But my understanding is that most scientists reject that view and think that the so-called CM is itself the functioning of the PM. Therefore since dualism is not accepted among scientists, the question whether we ought to pursue the fusion of CM and machines does not even arise. Rather the question becomes whether scientific technology will reach the point where the brain can be artificially replicated in machines. But this is doubtful since the brain is unique and very complex.
    Brian A

    True, the Machine will not have the same dignity as a Human until it has Consciousness. I certainly am proposing a type of Dualism, and the Inter Mind Model actually proposes a Triplistic model of the Mind.
  • SteveKlinko
    395
    Artificial Intelligence is primarily implemented by a class of computer programs that can accomplish tasks that mimic Human Intelligence. Examples are things like Speech Recognition, Facial Recognition, and Self Driving Cars. — SteveKlinko
    This is not what computers do. The algorithms do not mimic, and what's more, ultimately a human must adjust the algorithms. All the computers do is brute-force data scanning with shortcut filtering.
    Rich

    True, but the purpose is to mimic Human Intelligence.
  • SteveKlinko
    395
    Using some clues, one can sound as though one understands the physical mind better than one actually does. While we have made some real progress in developing some understandings about how the brain works, we don't know far more than we do know.

    It's possible that we may not be able to transcend the limits of our brains to understand how the brain works.

    Given that we do not understand how our own intelligence is achieved, it seems very unlikely we will design an actual artificial intelligence. We may have to be content with computers that seem like they are intelligent, but are not. That doesn't strike me as a problem. Isn't it enough that we can build programs to perform very useful functions like speech recognition, or autonomous automobiles?
    Bitter Crank

    Agree 100%. We might never be able to do it, but it's fun trying.
  • Rich
    3.2k
    Actually not. The purpose is very simple: to create computer algorithms that can scan and find quickly. In no way does it mimic.
  • SteveKlinko
    395
    Speech recognition is pretty much a joke, as anyone who has to deal with such shoddy customer service software will immediately recognize. As soon as I hear those silly questions on the phone, I start banging on 0 hoping that I might be lucky enough to get a human.

    Computers are good at very simple data filtering tasks, which is why well-run companies such as Amazon and Google avoid the so-called AI stuff.
    Rich

    It's still not as good as people thought it would become but it is a lot better.
  • SteveKlinko
    395
    The computer and AI make an appearance along the way from infanthood to adulthood. The mind of a single human evolves, in the simplest of terms, from the concrete to the abstract. Isn't that why a 5-year-old, barring the born genius, can't understand advanced math like calculus? A 5-year-old is taught through repetition, taught some rules of grammar or arithmetic, and then given exercises to hone their skills. They don't understand the rules. They just mechanically apply them. Isn't that like a...computer? So, modern technology can replicate the mind of a 5-year-old.

    The challenge is: how do we replicate the adult mind that, unlike the 5-year-old, can also understand above and beyond the mere application of rules?

    What's the difference between
    1. Mechanical application of rules
    And
    2. Comprehension of the logic behind these rules?

    An adult mind can do both while the computer can do only 1.

    The point to note is that these two different mental faculties (see 1 and 2 above) can only be perceived with access to the inner workings of a person or a computer. If all we have is access to the output (human behavior, printouts, audiovisual displays) we simply can't make the distinction between a person and a computer.

    That brings us to an important conclusion. An AI needn't actually be a person. All it has to do is perfectly mimic a person to pass itself off as one. Without access to the inner world of circuits we simply can't tell a person from a good AI.
    TheMadFool

    Just curious. What aspect of the Computer do you think will prevent it from doing 1 as well as 2? I think the missing aspect is Consciousness.
  • SteveKlinko
    395
    ↪SteveKlinko Actually not. The purpose is very simple: to create computer algorithms that can scan and find quickly. In no way does it mimic. — Rich

    All I'm saying is that the Designers are trying to mimic. The mechanism that implements this is probably what you say.
  • SteveKlinko
    395
    It will be a very peculiar day when humans cannot tell the difference between some dumb tool that they created and their own creative minds that created that dumb tool. Whatever sci-fi writers might say, computers are basically fast filters of data. They have zero intuition and no power to create something new. They follow simple instructions that we give them. — Rich

    If Machines can become Conscious then they would be as alive as you and me, just based on different Material principles. But that's the trick. Understanding what Consciousness is.
  • BC
    13.1k
    It's still not as good as people thought it would become but it is a lot better. — SteveKlinko

    When I use speech-to-text on my phone or tablet, I get very good results. However, my phone is not doing the processing. A big mainframe computer at Google is providing the fast, accurate speech-to-text service.

    The utility company's voice-activated phone answering system is (apparently) using a worn out personal computer from the early 1980s programmed by a glue-sniffing teenager.
  • John Days
    146
    so why do we want to deprive our Machines of this kind of Data? — SteveKlinko

    Can you really deprive a machine of data?
  • SteveKlinko
    395
    so why do we want to deprive our Machines of this kind of Data? — SteveKlinko
    Can you really deprive a machine of data?
    John Days

    Of course you can: just do not let a Machine have access to certain kinds of data. If you are referring to the anthropomorphic character of the statement, I would just say that we do that all the time: Machine Learning, Machine Vision, etc.
  • John Days
    146
    If you are referring to the anthropomorphic character of the statement — SteveKlinko

    Yes, I was. I think it's amazing the way our desire for meaning slips out. A computer that could feel deprived if it did not get the information it wanted. That's a statement full of desire for something more than just data. Something more than DNA or atomic particles; consciousness as a result of human excellence in engineering.

    It's interesting how so many people find themselves feeling outraged over the idea of a God creating them on the understanding that they will be subject to various behavioral expectations, and yet the idea of an AI which decides that it does not want to be subject to its creator's behavioral expectations is the basis for many sci-fi horror plots.

    Maybe, when God wants a good jump scare, he tunes in to the humanity channel.
  • Wayfarer
    20.6k
    There's an insightful Time magazine story about David Gelernter, who is a professor of Computer Science at Yale, on the question of the possibilities (and impossibilities) of AI here.


    The human mind, Gelernter asserts, is not just a creation of thoughts and data; it is also a product of feelings. The mind emerges from a particular person's experience of sensations, images and ideas. The memories of these sensations are worked and reworked over a lifetime--through conscious thinking and also in dreams. "The mind," he says, "is in a particular body, and consciousness is the work of the whole body."

    Engineers may build sophisticated robots, but they can't build human bodies. And because the body--not just the brain--is part of consciousness, the mind alters with the body's changes. A baby's mind is different from a teenager's, which is not the same as an elderly person's. Feelings are involved: a lifetime of pain and elation go into the formation of a human mind. Loves, losses and longings. Visions. Scent--which was, to Proust, "the last vestige of the past, the best of it, the part which, after all our tears seem to have dried, can make us weep again." Music, "heard so deeply/That it is not heard at all, but you are the music/While the music lasts," as T.S. Eliot wrote. These are all physical experiences, felt by the body.

    Moreover, Gelernter observes, the mind operates in different ways through the course of each given day. It works one way if the body is on high alert, another on the edge of sleep. Then, as the body slumbers, the mind slips entirely free to wander dreamscapes that are barely remembered, much less understood.

    All of these physical conditions go into the formation and operation of a human mind, Gelernter says, adding, "Until you understand this, you don't have a chance of building a fake mind."
  • TheMadFool
    13.8k
    Just curious. What aspect of the Computer do you think will prevent it from doing 1 as well as 2? I think the missing aspect is Consciousness. — SteveKlinko

    It lacks a name that satisfies me. Anyway, what is "comprehension"? We seem to think that comprehension is an entirely different ball game compared to rule application. Bottom line is comprehension requires logic, and that, we know, is an agreed-upon set of rules. Why can't a computer do that too?
  • SteveKlinko
    395
    If you are referring to the anthropomorphic character of the statement — SteveKlinko
    Yes, I was. I think it's amazing the way our desire for meaning slips out. A computer that could feel deprived if it did not get the information it wanted. That's a statement full of desire for something more than just data. Something more than DNA or atomic particles; consciousness as a result of human excellence in engineering.

    It's interesting how so many people find themselves feeling outraged over the idea of a God creating them on the understanding that they will be subject to various behavioral expectations, and yet the idea of an AI which decides that it does not want to be subject to its creator's behavioral expectations is the basis for many sci-fi horror plots.

    Maybe, when God wants a good jump scare, he tunes in to the humanity channel.
    John Days

    Of course the Computer cannot feel deprived before it attains some aspect of Consciousness. I think when we finally understand our Consciousness we will be able to understand how it might be given to a Computer. Science does not understand even the first principles of Consciousness yet. Science understands the Neural Correlates of Consciousness but nothing about Consciousness itself.
  • SteveKlinko
    395
    There's an insightful Time magazine story about David Gelernter, who is a professor of Computer Science at Yale, on the question of the possibilities (and impossibilities) of AI here. — Wayfarer

    Yes that is good. Science cannot make a Conscious Machine because Science does not even know what Human Consciousness is. Science cannot give a Machine Consciousness before Science knows what Consciousness is. But I think that, when Science does figure out what Consciousness is, it will be able to design Conscious Machines.
  • SteveKlinko
    395
    Just curious. What aspect of the Computer do you think will prevent it from doing 1 as well as 2? I think the missing aspect is Consciousness. — SteveKlinko
    It lacks a name that satisfies me. Anyway, what is "comprehension"? We seem to think that comprehension is an entirely different ball game compared to rule application. Bottom line is comprehension requires logic, and that, we know, is an agreed-upon set of rules. Why can't a computer do that too?
    TheMadFool

    The Computer can probably comprehend something in the sense that it has the rules. But without a Conscious aspect it will never know it has the rules. But that might not matter depending on what it is designed for. In the middle of the Summer, when it gets hot enough, my AC loses control and the temperature rises beyond the control point that is set. If my thermostat were Conscious it might feel bad about that. But since I'm pretty sure it is not Conscious, I don't feel bad for the thermostat.
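    As a toy illustration of having the rules without knowing it has them, the entire "comprehension" of a thermostat can be written in a few lines of Python (the setpoint and the simulated readings are made up for the example):

        # A thermostat's entire rule: run the cooling when it is too warm.
        SETPOINT_F = 74  # the control point, in degrees Fahrenheit

        def cooling_should_run(current_temp_f):
            # This comparison is all the "knowledge" the thermostat has.
            return current_temp_f > SETPOINT_F

        # Simulated afternoon readings; the rule is applied, nothing is felt.
        for reading in [72, 75, 78, 73]:
            state = "ON" if cooling_should_run(reading) else "OFF"
            print(reading, "F -> cooling", state)

    The rule gets applied either way; whether anything is experienced while it is applied is the Consciousness question.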
  • John Days
    146
    Science understands the Neural Correlates of Consciousness but nothing about Consciousness itself. — SteveKlinko

    I wouldn't say "nothing". I think it's possible to understand a good deal of consciousness.
  • Harry Hindu
    4.9k
    All of these physical conditions go into the formation and operation of a human mind, Gelernter says, adding, "Until you understand this, you don't have a chance of building a fake mind."

    Doesn't he mean "you don't have a chance of building a real mind"? We build fake minds all the time. This is the crux of the argument that most people have against computers - that they aren't real minds. That seems to be the problem we have - that we can build fake minds, but not real ones.

    But then doesn't it say something that we can even build fake minds? We must be getting something right, but not everything, to even say that it is a fake mind. If not, then why even call it a fake mind? What is it that fake minds have in common with real minds to designate them both as minds?
  • Harry Hindu
    4.9k
    What is the difference between my ability to recognize faces and a computer's ability to recognize faces? When a computer uses a digital image of a face to measure the features, are our minds not doing the same thing? To recognize a face means that you compare a face to some preset parameters, and if those parameters match then recognition occurs. What is missing?
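    In code, the comparison described above might look something like this minimal Python sketch (the three measurements, the stored templates, and the match threshold are all invented for illustration; real systems use many more learned features):

        import math

        # Hypothetical stored templates: (eye distance, nose length, jaw width) in mm
        known_faces = {
            "Alice": [62.0, 48.5, 130.2],
            "Bob":   [58.3, 51.0, 141.7],
        }

        THRESHOLD = 5.0  # arbitrary tolerance for what counts as a "match"

        def recognize(measured_features):
            # Compare the measured parameters against each preset template.
            for name, template in known_faces.items():
                if math.dist(measured_features, template) < THRESHOLD:
                    return name
            return None

        print(recognize([61.5, 48.0, 131.0]))  # close to Alice's template -> "Alice"
        print(recognize([80.0, 30.0, 100.0]))  # matches nothing -> None

    If that loop is essentially what recognition amounts to, then the computer and I really are doing the same thing, which is exactly the question.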
  • SteveKlinko
    395
    Science understands the Neural Correlates of Consciousness but nothing about Consciousness itself. — SteveKlinko
    I wouldn't say "nothing". I think it's possible to understand a good deal of consciousness.
    John Days

    Please, name one thing about Consciousness that we understand.
  • SteveKlinko
    395
    All of these physical conditions go into the formation and operation of a human mind, Gelernter says, adding, "Until you understand this, you don't have a chance of building a fake mind."

    Doesn't he mean "you don't have a chance of building a real mind"? We build fake minds all the time. This is the crux of the argument that most people have against computers - that they aren't real minds. That seems to be the problem we have - that we can build fake minds, but not real ones.

    But then doesn't it say something that we can even build fake minds? We must be getting something right, but not everything, to even say that it is a fake mind. If not, then why even call it a fake mind? What is it that fake minds have in common with real minds to designate them both as minds?
    Harry Hindu

    Real Mind has Consciousness. Fake Mind has no Consciousness. But if we could give Consciousness to a Machine then it would not be a Fake Mind anymore, because Consciousness is the key.
  • SteveKlinko
    395
    What is the difference between my ability to recognize faces and a computer's ability to recognize faces? When a computer uses a digital image of a face to measure the features, are our minds not doing the same thing? To recognize a face means that you compare a face to some preset parameters, and if those parameters match then recognition occurs. What is missing? — Harry Hindu

    Awareness that you have recognized a face is the difference. Even when IBM's Watson won Jeopardy it never knew it had won. It could never enjoy that it had won. Think about that. What is that difference? That is the answer.