• Gregory
    4.7k
    AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like we do. Artificial intelligence does or may someday have the reasoning we have, but does this mean they are conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there.
  • Jackson
    1.8k
    I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? — Gregory

    That is what I think. Thinking can take place without consciousness. That's why I think wondering about sentient AI is beside the point.
  • Agent Smith
    9.5k
    Consciousness sans reason: Mirrors!

    Reason sans consciousness: Computers!

    We have to, sensu lato, put computers before mirrors and let the magic happen!
  • 180 Proof
    15.4k
    I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? — Gregory
    What do you mean by "consciousness" in this query?
  • 180 Proof
    15.4k
    Feeling of mind — Gregory
    What do you mean by "mind"?
  • Agent Smith
    9.5k
    What do you mean by "consciousness" in this query? — 180 Proof

    What do you mean by "mind"? — 180 Proof

    Going Socrates on (poor) Gregory. Looks as though Gregory's casting a wide net - he's not trying to catch a particular kinda fish, any fish'll do!
  • Gregory
    4.7k


    Mind is awareness, which is a feeling. How can we know if AI has it, since its "biology" is so different from ours? People say they might demonstrate reason in AI, but I'm wondering if this includes awareness.
  • 180 Proof
    15.4k
    Mind is awareness which is a feeling. — Gregory
    But you stated that "consciousness" is "feeling of mind" "which is a feeling." This circularity makes no sense and renders the OP gibberish. Unless, of course, I'm missing something ... :chin:
  • praxis
    6.5k
    “I believe consciousness is simply what it feels like to have a neocortex.”

    ― Jeff Hawkins
  • praxis
    6.5k
    I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? — Gregory

    Don’t we already have this with computers? The device that I’m using to send this message can easily beat me in chess, for instance, and it’s not conscious.
  • punos
    561
    Animals have consciousness but not reasoning like we do. — Gregory

    I think animals have consciousness along with the ability to reason; the difference between humans and other intelligent forms has to do with varying degrees of complexification. Our consciousness is the medium of our awareness and informs our reason (man or animal, or even plant). The more complex a consciousness is, the more scope it has for consideration; the less complex, the less it is able to consider complex variables. The evolution of the nervous system reached its maximum complexity on this planet with the advent of modern humans. This evolutionary process is still going on, and the torch of complexity is at this moment beginning to pass to AI. A fully integrated AI planetary network functioning as one consciousness seems to be the evolutionary trajectory that we are on.
  • Josh Alfred
    226
    Most of the time I do not know whether I am thinking reasonably or not; that is, I am not aware of my reasonings while they occur. In the midst of that kind of thinking I still retain consciousness. Likewise, artificial intelligence will be conscious whether or not it is reasoning out its syntactical (thought-out) responses.

    Here's one of my blogs that you can interact with and hopefully enjoy: https://taoofthepsyche.blogspot.com/2018/07/the-robot-life-unfinished-treatise.html
  • Babbeus
    60


    Intelligence is just getting information as an input and returning some output that is "useful" for some "task". A computer engine designed to play chess is a form of intelligence. The "usefulness" and the "task" are not rigorous concepts, and an AI could actually be stupid or pointless. You could think that any system that gives some "response" to any "input" has the same "nature" and essence as an AI.
  • Pantagruel
    3.4k
    AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like we do. Artificial intelligence does or may someday have the reasoning we have, but does this mean they are conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there. — Gregory

    Right now, so-called AI can perform specific tasks based on extensive programming. At the height of its complexity, these tasks can be generalized into what may be called "abilities": carrying on a conversation, for example. So the question is: if we think of AI as being conscious, is this a specific ability which we confer on it? That only begs the question of what consciousness is. In that case, if we think of AI as attaining consciousness, it must be in the context of our conferring more and more task-specific capabilities such that, in a cumulative fashion, new generalized abilities emerge, at the apex of which emerges consciousness, the ultimate general ability. And if it is an emergent property, then we would no more have created that consciousness than we created the matter out of which the computer was formed.

    As to reason without consciousness: in the abilities-centric characterization just offered, I think reason and consciousness must be synonymous. Viz., a computer that displays the general ability of "carrying on a conversation" (in the context of the Turing test, say) is not really reasoning, just executing a whole lot of algorithms very quickly. You could not call that reasoning unless it were at the same time conscious.
  • Seeker
    214
    AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like we do. Artificial intelligence does or may someday have the reasoning we have, but does this mean they are conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there. — Gregory

    For now, an AI is still nothing more than a database full of predefined (answer) sentences. The answer to present from such a database (AI) is initialised by any occurrence of matching (key)words coming from any possible question(s). The 'smart' (reasoning) behind any of it is still (totally) dependent on the skill level and the creativity of the (human) programmer(s).
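    The kind of system described here can be sketched in a few lines: an ELIZA-style toy where a dictionary of canned answers is triggered by the first matching keyword (the keywords and responses below are invented for illustration, not taken from any real chatbot):

```python
# Toy keyword-matching chatbot: a dictionary of predefined answers,
# triggered by the first keyword found in the user's input.
RESPONSES = {
    "chess": "I can play chess, but I don't know that I'm playing.",
    "conscious": "I can only return sentences my programmer wrote.",
    "hello": "Hello! Ask me about chess or consciousness.",
}
DEFAULT = "I have no predefined answer for that."

def reply(question: str) -> str:
    text = question.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:        # crude substring match stands in for "reasoning"
            return answer
    return DEFAULT                 # fall through when no keyword matches

print(reply("Are you conscious?"))
print(reply("What is love?"))
```

    All of the apparent "smarts" live in the table the programmer wrote, which is exactly the point being made above.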
  • Gregory
    4.7k
    Lots of good posts! Reason for me is the ability to grasp an idea with the necessary means of consciousness. We strive for the idea among a complex of thoughts and feel we have the truth when our minds rest. If AI can do complex thoughts, it's possible it does so by another means than by way of consciousness.
  • apokrisis
    7.3k
    “I believe consciousness is simply what it feels like to have a neocortex.” — praxis

    Neuroreductionism.

    The better answer is that consciousness is simply what it is like to be a self living in its world.

    So the neocortex, and the rest of the brain, are all a necessary part of the hardware equation. But being "a mind" is how the neocortex, rest of the brain, and even the entire body - and with humans, the whole damn sociocultural edifice – get to pay for their biological existence.

    Consciousness is the modelling relation an organism has with its environment. An engorged neocortex is what you can afford if it adds that much of a benefit in terms of a nutrition and survival dividend.

    Brains burn through energy like working muscle, even when idling. So this is something we have to consider when it comes to AI. An artificial mind would also be one that is paying close attention to its own organismic existence. It would have to be smart in the sense of earning its entropic keep.

    Of course, here in the real world, humans build machines to amplify their own power to exist. They are an investment meant to serve our entropic existence. We want AI in the form of extensions to our reach, not as some rival class of organisms, living in the same world, equipped with the minds - or modelling relation - which might allow them that level of mental independence.

    If we build actual AI, then we are just proving ourselves stupid.

    Animals have consciousness but not reasoning like we do. — Gregory

    Animals have reason. They have genetic and neural level models of the world they live in that work because they are "reasonable" in the pragmatic sense.

    So what humans have got is the extra semiotic modelling capacity that comes with having developed speech and maths - codes based on words and numbers, layered on top of the codes based on genes and neurons.

    Words allow humans to organise in a properly organismic fashion - as one shared mind - at the scale of the social organism.

    Then maths/logic became the even more abstracted and universalised symbol system that led to a civilised and technological version of this social order - one that amplified its entropic reach through machinery like steam engines and Turing computation.

    So "consciousness" is an unhelpful term here. It presumes that the mind is some kind of special Cartesian substance which has properties like "an introspective glow of awareness".

    Neuroscientists avoid using it. Computer scientists are not so bashful, but even they started to limit themselves to artificial "intelligence" once they were asked to put up or shut up.

    Neuroscience has now got quite used to understanding consciousness and reasoning in terms of embodied semiosis – the enactive turn and Bayesian brain. So it ain't about having a neocortex. It is about there being some level of reality modelling that an organism can pragmatically afford.

    Humans stumbled into language and technology - fire, spears, shelters, baskets - as a new sociocultural way of life. They could then afford a much bigger brain because this new level of semiosis filled their bellies with a much more calorie dense diet.

    Reason for me is the ability to grasp an idea with the necessary means of consciousness. ... If AI can do complex thoughts it's possible it does so by another means than by way of consciousness. — Gregory

    You are describing how one level of semiosis gets stacked on another.

    So the brain does the neurosemiosis. It gives you an animal level of intelligence, insight, habit learning, recognition memory, etc.

    Then language and logic are further levels of world modelling where we humans learn to stand outside our animal or biological level of ideation to now take an objective – or rather, social and technical - view of the deal.

    We learn the habit of thinking about what we are doing first from the point of view of a society, which looks at our rather animistic desires and reactions and passes some kind of more rational collective judgement.

    And then we up it even more by living in a society that has learnt to stand back even from the embodied social point of view to consider the problems of existence from the point of view of a world ruled by the abstractions of numbers and logic. We become part of a civilisation that wants society to run itself in a technocratic and enlightened fashion.

    Again, where does AI fit into this natural arc of mental development? In what way does it pave the path to some even higher level of semiotic intelligence?

    Even for a computer scientist, this is the kind of question that needs to be answered.

    IBM might self-advertise by cranking out gadgets that can win at chess, or even Go and bridge. But chucking lumps of circuitry – even biologically-inspired circuitry like neural nets – at the public is a big fake.

    Replicating what brains do is just rehashing neurosemiosis. Where is AI's sociosemiosis, or technosemiosis? What social world would make sense of these neural machines?

    Anyone can talk about making conscious machines as some kind of sci-fi engineering project. But actual AI ain't even a thing until we see the social engineering - the blueprint of the world in which this hardware even makes sense, pragmatically speaking.
  • sime
    1.1k
    In practice, "artificial intelligence" is merely state-of-the-art software engineering in service of human beings, done in accordance with the ideals of human rationality; it is the design and implementation of systems whose validation criteria are socially determined in accordance with cultural requirements, e.g. a recommender system must suggest a 'good' movie, a chatbot must argue 'persuasively', a chess engine must respond with a 'brilliant' move, a Mars rover must avoid 'dying'...

    These sorts of applications aren't differences in 'kind' from early programming applications; they only differ in terms of their degree of environmental feedback and their corresponding hardware requirements. In both cases, software is invented to satisfy human needs and often to reinforce human prejudices.

    As for general intelligence, no such thing can exist in either man or machine; to pass a 'general' Turing Test is to pass a highly specialised "human traits" examination that comes at the cost of being unable to perform any single task efficiently, whilst also ruling out the ability to execute other potentially useful behaviours that humans don't recognise as being rational. (Also, no two humans have the same concept of rationality, because they live non-identical lives.)

    The concept of "consciousness" cannot be divorced from the concept of rationality, because empathy is invoked when judging the rationality of another agent's actions. We put ourselves in the agent's shoes, then fool ourselves into thinking that we were experiencing their consciousness rather than ours.
  • Alkis Piskas
    2.1k
    what the most common-sense way of seeing these machines is — Gregory
    What machines?
    I believe that you should explore and understand well what "Artificial Intelligence" is before launching a discussion on this subject. But of course, it's too late for that. Nevertheless, it's still a good idea to do so even now.
  • SpaceDweller
    520
    Artificial intelligence does or may someday have the reasoning we have, but does this mean they are conscious? — Gregory
    Conscious, I think, means self-aware, and if so, machines will never be self-aware like us.
    Machines may have reasoning far better and faster than ours (a chess engine, for example), but not self-awareness.
  • Josh Alfred
    226
    The self-awareness test has been utilized as a marker for self-awareness in infant human beings and other species of life. When a machine can recognize itself in a mirror it will have self-awareness.

    Consciousness is stimulation of the senses. Just sensory input. Some of our machines have such, as in the case of visual recognition software.

    Sentience is a little more complicated than that. Self-awareness, intelligence, consciousness, and other factors are included.

    Contextual awareness, such that a thing (including the self) exists within some kind of phenomenal boundaries is possible in machines too.

    Deductive reasoning has rules. If it has rules it can be simulated/programmed into machines.
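    The claim that rule-governed deduction can be programmed is easy to demonstrate: a handful of if-then rules plus a forward-chaining loop mechanises modus ponens (a toy sketch; the facts and rule names are invented for illustration):

```python
# Toy forward-chaining deduction: repeatedly apply modus ponens --
# if all premises of a rule are known facts, add its conclusion --
# until no new facts can be derived.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "socrates_is_human"}, "socrates_will_die"),
]
facts = {"socrates_is_human"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)      # conclusion becomes a new fact
            changed = True             # keep looping until a fixed point

print(sorted(facts))
```

    The machine "deduces" mortality without any awareness of doing so, which is the distinction the surrounding posts are circling around.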
  • Alkis Piskas
    2.1k
    When a machine can recognize itself in a mirror it will have self-awareness. — Josh Alfred
    There are a lot of devices that can recognize all sorts of things. They are programmed to do that. So, if you program a device to recognize itself in a mirror, and then say (issue a sound) "Here I am!", it could do all that. But a machine could never do that by itself, i.e. without having been programmed and instructed appropriately. Machines do not and can never have awareness.

    Awareness is a characteristic of life. Humans, animals and plants have awareness, of a different kind. But self-awareness is an ability and characteristic that only humans have.
  • Alkis Piskas
    2.1k
    Conscious I think means self-awareness, and if so machines will never be self-aware like us. — SpaceDweller
    I agree.

    Machines may have reasoning far better and faster than us. — SpaceDweller
    I don't agree. :smile:
    Reasoning involves thinking, and machines do not think. Machines execute instructions. Sometimes, in sophisticated programs and advanced AI cases, it might seem that the machines think, but behind this apparent thinking lies programming, i.e. instructions. Machines can even be programmed to create programs themselves, but this is still based on human programming.
    Thinking, and with it reasoning, is an ability possessed exclusively by humans.

    On the other hand, machines can surpass us, and in fact to a huge degree, in the fields of calculation, memory capacity and retrieval and timing.
  • Agent Smith
    9.5k
    What we need, as far as (artificial) intelligence is concerned, is what is described as the technological singularity - an exponential growth of intelligence - each subsequent intelligence greater than the one preceding it by a factor that would depend on what's possible given physical/chemical/biological constraints. This kinda growth actually occurs at a small scale with individuals - a person is today more intelligent than she was yesterday (books & experience as invaluable teachers).
  • Agent Smith
    9.5k
    Vishvakarma/Tvashtar (The Architect/Lord of the machines). :pray:

    Om Vishwakarmane Namah!

    :snicker:
  • Seeker
    214
    What we need, as far as (artificial) intelligence is concerned, is what is described as the technological singularity - an exponential growth of intelligence - each subsequent intelligence greater than the one preceding it by a factor that would depend on what's possible given physical/chemical/biological constraints. This kinda growth actually occurs at a small scale with individuals - a person is today more intelligent than she was yesterday (books & experience as invaluable teachers). — Agent Smith

    Isn't that implying the expectation (or need) for AI(s) to be exactly like us in order to satisfy the criteria for intelligence? If so, wouldn't that also introduce the need for emotional awareness, enabling regulation of the outcome per any given cycle of growth?
  • Agent Smith
    9.5k
    There's a lot we have to work on!
  • Seeker
    214
    Well, here's something to chew on for now.

    Mankind has always looked for ways to reduce manual labor and repetitive tasks. To that end, and in the absence of technology, civilization exploited various methods, often by taking advantage of their fellow humans. Robots, as a potential solution, have long fascinated mankind, capturing our imagination for centuries. Even in Greek mythology, the god Hephaestus had « mechanical » servants. But not until recently, has artificial intelligence finally progressed to a level that will become more and more life-changing for the future of humanity. — Free Documentary

    https://www.youtube.com/watch?v=mh45OBLeCu8
  • Corvus
    3.4k
    In the case of AI machines, would not the state of being "powered ON" count as their being conscious, in human terms?
    My PC has ON, OFF, and also SLEEP options. It only works when it is "ON".
  • Raul
    215
    You need to understand what consciousness is and what it is not. I'm not going to explain it here. Read it from the experts.
    What I can tell you regarding your question is that consciousness is not an ON-OFF thing. There are grades of consciousness as well as states of consciousness.
    Can an AI be conscious? Yes, of course. An AI can have a very low or high grade of consciousness depending on the amount of integrated information and its modularity (Tononi's IIT).
    But that consciousness is far from ours as humans, because the AI is conscious of "its world"; an AI in a Google car, say, is conscious within its "traffic world" and not beyond that.
    That said, keep in mind that consciousness is not the same as self-consciousness, and take into account as well that feelings and emotions are other components to consider.

    Net,
    An AI will one day be conscious and self-conscious in the sense we understand human consciousness, but it will have to:
    1 - be embodied
    2 - be more than one AI, and interact with similar AIs in order to develop a social self. We humans could be similar and can interact, but it will need similar "replicants" to fully empathize :-)
    3 - be directed by a "survival" drive
    4 - have an architecture that generates a rich gradient of feelings and emotions. Never like ours, which require "flesh", but synthetic ones linked to its source of energy, protecting its body, feeling temperature, pressure, etc., similar to what we do.
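    Tononi's IIT, mentioned above, quantifies consciousness as integrated information, Φ: roughly, how much the system as a whole does that its parts, taken separately, cannot. The real Φ computation is combinatorial and far beyond a forum post, but a toy proxy conveys the flavour: compare a system's transition behaviour against a "cut" version where each part is left to itself (everything below is an invented illustration, not the actual IIT measure):

```python
from itertools import product

# Toy 2-node deterministic network: each node copies the OTHER node's
# previous state, so the system's behaviour depends on the connection.
def step(state):
    a, b = state
    return (b, a)

# "Cut" version: with the connections severed, each node can only
# keep its own previous state.
def cut_step(state):
    a, b = state
    return (a, b)

states = list(product((0, 1), repeat=2))
whole = {s: step(s) for s in states}
parts = {s: cut_step(s) for s in states}

# Crude stand-in for phi: count transitions the partitioned model
# gets wrong. Nonzero means the whole is not reducible to its parts.
phi_proxy = sum(whole[s] != parts[s] for s in states)
print(phi_proxy)
```

    On this toy network the proxy is nonzero, while a network of two disconnected nodes would score zero: a cartoon of Raul's point that consciousness, on IIT, comes in grades rather than being ON-OFF.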