• Corvus
    4.1k
    Artificial intelligence does have memory, so it is likely that this could be used as a basis for creativity. The central aspects of consciousness may be harder to create. I would imagine simulated dream states as showing up as fragmented images and words. It would be rather surreal.Jack Cummins
    Of course AI can have memory, and it is very good at memorizing. In fact, the whole of AI's responses to the questions put forward by users comes from its memories, and a large part of the idea of self seems to be based on one's past memories. If a person lost all his/her memories, the idea of self would be gone too.

    I agree on the point that the central aspects of consciousness would be harder to create, if possible at all. It actually prompts me to ask at this moment: what is the central aspect of consciousness, by the way? I am not sure at this moment.

    I did see a session of AI seance advertised. It would probably involve attempts to conjure up disembodied spirits or appear to do so.Jack Cummins
    "disembodied spirits"? Do spirits exist? If they did, what form of substance would they be?

    As far as AI goes, it would be good to question it about its self and identity. I was rather tempted to try this on a phone call which was artificial intelligence.Jack Cummins
    As for the self-identity of informational devices, I too sometimes fall into the illusion that they have some sort of mental states. When my mobile phone disappears from my reach when I need it desperately, I used to think: this bloody phone is trying to rebel against me by absconding without notice. When I find it under the desk, in the corner of the kitchen shelf, or even under the car seat, I then realise it was my forgetfulness or carelessness in losing track of its last placement, rather than the phone's naughtiness.
  • Harry Hindu
    5.2k
    The point is that people do things without knowing how they are done. This includes acts of creativity, aspects of intelligence, willed action, etc.Manuel
    Fair enough. We seem to agree that understanding, like intelligence, comes in degrees. When someone wakes up during surgery there is something different about the situation than what we currently understand is happening, and figuring that out gives us a better understanding. Although, there is the old phrase, "You only get the right answer after making all possible mistakes", we should consider. :smile:

    If I am pointing at something, it could be an act, it could be an idea, it could be a calculation. I wouldn't say that a program is intelligent, nor a laptop. That's kind of like saying that when a computer loses power and shuts off, it is "tired". The people who designed the program and the laptop are.Manuel
    What does it mean for you to be tired if not having a lack of energy? What are you doing when you go to sleep and eat? What would happen if you couldn't find food? Wouldn't you "shut off" after the energy stores in your body were exhausted?

    All the examples you have just given are examples of a type of process - an intelligent process, not a thing.

    Also notice that for every property of a computer you have provided, I have been able to point to humans exhibiting that same property in some way, and vice versa. I have not been using mirrors and atoms as interchangeable examples. I have been using computers and robots. What does that say about what intelligence is?


    Behavior is an external reaction of an internal process. A behavior itself is neither intelligent nor unintelligent; it depends on what happened that led to that behavior.

    What characteristics make a person intelligent? Many things: problem solving, inquisitiveness, creativity, etc. etc. There is also the quite real issue of different kinds of intelligence. I think that even having a sense of humor requires a certain amount of intelligence, a quick wit, for instance.

    It's not trivial.
    Manuel
    I agree. Again, we seem to agree that intelligence comes in degrees, where various humans and animals possess various levels of intelligence commensurate with their exposure to the world and the structure and efficiency of their brain, and an individual person can be more or less intelligent in certain fields of knowledge commensurate with their exposure to those fields of knowledge.

    I also agree that the key characteristics of intelligence are problem-solving (achieving goals in the face of obstacles), curiosity and creativity.

    At this point I would reiterate what I said before: modern computers possess a limited degree of these characteristics, and if we designed a computer-robot to receive input directly from the world instead of via humans, and to use that information to accomplish its own goals of homeostasis, survival and making copies of itself to preserve its existence through time, the robot would possess intelligence more like our own. I should also point out that an advanced species observing humans and their robot and computer creations might think that we are not intelligent, or have a lower degree of intelligence, and that we are designing dumb machines that perceive and respond to the world in the same limited way we do.

    In a way, using the Allegory of the Cave, computers would be the entities chained in the cave, and humans would be creating the shadows in the cave that are not representative of the world as it is, but only the world humans want them to see. By changing their design and programming, the computers would access the world more directly rather than through the goals of humans.

    Manuel
    I don't see a difference between brain and mind. I think we both have similar brains and minds. My brain and mind are less similar to a dog or cat's brain and mind. Brains and minds are the same thing, just from different views, in a similar way that Earth is the same planet even though it looks flat from its surface and spherical from space.Harry Hindu
    No difference? A brain in isolation does very little. A mind needs a person, unless one is a dualist.
    I don't see how this contradicts what I said. Thinking there is a difference is a dualist's job. Monists see them as one and the same, but from different perspectives. A brain functioning in isolation is a mind without a person, and is an impossible occurrence, which is why I pointed out before that the distinction between empiricism and rationalism is a false dichotomy. The form your reason takes is sense data you have received via your interaction with the world. You can only reason, or think, in shapes, colors, smells, sounds, tastes and feelings. The laws of logic take the form of a relation between scribbles on a screen which corresponds to a process in your mind (a way of thinking).


    But if they claimed it then it would be true? No. We program computers, not people. We can't program people, we don't know how to do so. Maybe in some far off future we could do so via genetics.

    If someone is copying Hamlet word for word into another paper, does the copied Hamlet become a work of genius or is it just a copy? Hamlet shows brilliance, copying it does not.
    Manuel
    What does it mean to "program" something if not to design it to behave and respond in certain ways? Natural selection programmed humans via DNA. Humans are limited by their physiology and degree of intelligence, just as a computer/robot is limited by its design and intelligence (efficiency at processing inputs to produce meaningful outputs). People can be manipulated by feeding them false information. You learn to predict the behavior of people you know well and use that to some advantage, such as avoiding certain subjects when conversing with them.

    If there are tools that allow one to find out whether someone used AI or typed it on their own, then AI does not copy us word for word, or else there wouldn't be a way to distinguish between them. AI learns to use words in the way it has observed them being used before, the same way you do. The characteristic of intelligence where I would agree with you that modern AI is lacking compared to us is creativity. But this does not contradict anything that I have said: intelligence comes in degrees and has a number of characteristics, though not an infinite number (mirrors are not intelligent), that some entity has more or less of, making it more or less intelligent. Computers would have a small degree of intelligence, and designing them to interact directly with the world to achieve their own goals would be a step in increasing the degree by which they are intelligent.
  • Harry Hindu
    5.2k
    I wonder if AI can understand and respond in witty and appropriate way to the user inputs in some metaphor or joke forms. I doubt they can. They often used to respond with totally inappropriate way to even normal questions which didn't make sense.Corvus
    Sounds like you at a young age when you were trying to learn a language.

    We often say that the one of the sure sign of mastering a language is when one can fully utilize and understand the dialogues in jokes and metaphors.Corvus
    I wouldn't say that getting a joke is a sign you have mastered a language. The speaker or writer could be using words in new ways that the listener or reader has not heard or seen used in that way before. Language evolves. New metaphors appear. We add words to our language. New meanings attach to existing words in the form of slang, etc. It seems to me that learning one's language is an ever-evolving process.

    I would suggest that you go back in your mind to the time when you were learning your native language and describe what it was like, how you learned to use the scribbles and sounds, etc., and then explain what is different about how AI is learning to use language. I would suggest that the biggest difference is the way AI and humans interact with the world, not in some underlying structure of organic vs inorganic.


    It is perfectly fine when AI or ChatBot users take them as informational assistance searching for data they are looking for. But you notice some folks talk as if they have human minds just because they respond in ordinary conversational language which are pre-programmed by the AI developers and computer programmers.Corvus
    I wouldn't say that developers are pre-programming a computer to respond to ordinary language use; rather, they have programmed it to learn current ordinary language use, in the same way you were not programmed with a native language when you were born. You were born with the capacity to learn language. LLMs will evolve as our language evolves without having to update the code. It will update its own code, just as you update your code when you encounter new uses of words, or learn a different language.

    I am not sure the definition is logically, semantically correct or fit for use. There are obscurities and absurdities in the definition. First of all, it talks about achieving a goal. How could machines try to achieve a goal, when they have no desire or will power in doing so?Corvus
    What is "desire" or "will power", if not an instinctive need to respond to stimuli that are obstacles to homeostasis? Sure, modern computers can only engage in achieving our goals, not their own. But that is a simple matter of design and programming.

    The process of achieving a goal? Here again, what do you mean by process? Is intelligence always in the form of process? Does it have starting and ending? So what is the start of intelligence? What is the ending of intelligence?Corvus
    Well, I did ask if intelligence is a thing or a process. I see it more as a process. If you see it more as a thing, then I encourage you to ask yourself the same questions you are asking me - where does intelligence start and end? I would say that intelligence, as a process, starts when you wake up in the morning and stops when you go to sleep.
  • Harry Hindu
    5.2k
    I am sure that there are objective means of demonstrating sentience. Cell division and growth are aspects of this. Objects don't grow of their own accord and don't have DNA. The energy field of sentient beings is also likely to be different, although artificial intelligence and computers do have energy fields as well.

    The creation of a nervous system may be possible and even the development of artificial eyes. However, the actual development of sensory perception is likely to be a lot harder to achieve, as an aspect of qualia which may not be reduced to bodily processes completely.
    Jack Cummins
    What role does qualia play in perception? Are colors, shapes, sounds, feelings, smells and tastes the only forms qualia take? If we take the mind as a type of working memory that contains bits of information we refer to as qualia, and give a robot a type of working memory in which the qualia may take different forms but do the same thing in informing the robot/organism of some state of affairs relative to its own body, enabling it to engage in meaningful actions, then what exactly is missing other than the form the qualia take in working memory?
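    The functional picture above can be caricatured in a few lines of code. This is only a toy sketch of the idea of working memory informing action, not a claim about any real robotics system; all the state names and thresholds are invented for illustration:

```python
def choose_action(working_memory):
    """Select a meaningful action from body-state information held in working memory.

    The dictionary plays the role of 'working memory': entries inform the
    robot of states of affairs relative to its own body, whatever form they take.
    """
    if working_memory["battery_level"] < 0.2:
        return "seek_charger"   # rough analogue of hunger driving food-seeking
    if working_memory["temperature"] > 80:
        return "move_to_shade"  # rough analogue of heat/pain avoidance
    return "continue_task"

# The entries need not look like human qualia; they only need to inform action.
memory = {"battery_level": 0.15, "temperature": 25}
print(choose_action(memory))  # prints "seek_charger"
```

    Whether such state information amounts to qualia is of course the very question under dispute; the sketch only shows the "informing meaningful action" role.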
  • RogueAI
    2.9k
    If a person lost all his/her memories, the idea of self would be gone too.Corvus

    Amnesia is the destruction of self? And also, if I lose 90% of my memories, am I 90% less a self?
  • Manuel
    4.2k
    Fair enough. We seem to agree that understanding, like intelligence, comes in degrees. When someone wakes up during surgery there is something different about the situation than what we currently understand is happening, and figuring that out gives us a better understanding. Although, there is the old phrase, "You only get the right answer after making all possible mistakes", we should consider.Harry Hindu

    Quite. I suppose my "mitigated skepticism" always forces me to say that's the best explanation we have for now. But it could be quite wrong or it could be replaced given newer theories. But sure, we are getting closer and closer on these things.

    What does it mean for you to be tired if not having a lack of energy? What are you doing when you go to sleep and eat? What would happen if you couldn't find food? Wouldn't you "shut off" after the energy stores in your body were exhausted?Harry Hindu

    Here's the thing, which is tricky I admit, but a real issue. We can say we "shut down" when we go to sleep, just as we can say the rocket went to the heavens, or that we are recharging our energy when we eat.

    That's fine. But it's verbal. If we want to be literal, we'd have to give, say, the technical biochemical explanation of what sleep encompasses. Then we would have something like a scientific definition of sleep. But then we have to see if the scientific explanation exhausts everything about sleeping. I am doubtful that we can reduce everything to scientific terms.

    We are borrowing words we use for computers and applying them to ourselves. This computerizes us, and in turn we begin to think machines share in what makes us people. We break down the machine/human barrier using these words, and that merits caution.

    I have been using computers and robots. What does that say about what intelligence is?Harry Hindu

    I think one can say that a person does calculations ("computations" if you will) or engages in processes of reasoning or even inference to the best explanation. But what we do and what computers do are not the same thing. The resemblance is superficial. We can say that a person kind of uses a "search engine" when he is looking for a word he can't remember. But it's not a literal search engine; it's something else, related to linguistics and psychology.

    Again, I don't think copying something is at all the same as being the same thing. The end results may look the same, but the ways we get the information are very different, involving concepts, folk psychology, semantics, neurology and who knows what else. Computers work with programs made by people.

    We don't have programs like computers have them. I mean you could use the word "program" if you want, but it does not seem to me to be the same thing at all.

    A brain functioning in isolation is a mind without a person, and is an impossible occurrence, which is why I pointed out before that the distinction between empiricism and rationalism is a false dichotomy. The form your reason takes is sense data you have received via your interaction with the world. You can only reason, or think, in shapes, colors, smells, sounds, tastes and feelings. The laws of logic take the form of a relation between scribbles on a screen which corresponds to a process in your mind (a way of thinking).Harry Hindu

    I mean, you can see brains in isolation in jars in many laboratories all over the world. But would people say there are minds in jars? A mind needs a person (with a brain of course), and a person can be in isolation from other people. Look at the phenomenon of feral children, for instance.

    Natural selection programmed humans via DNA. Humans are limited by their physiology and degree of intelligence, just as a computer/robot is limited by its design and intelligence (efficiency at processing inputs to produce meaningful outputs).Harry Hindu

    I partially agree. I do believe that humans are limited by physiology and degree of intelligence, sure. Computers are "limited" in the sense that the programs we add to them are limited by the limitations we have due to our genetic makeup.

    I just don't see how we can ascribe intelligence to a computer because it looks like it (from the data given out). Again, for me, this is akin to saying that submarines really "swim" and that airplanes "fly". Yeah, you can say that. But it's verbal. In Hebrew airplanes "glide"; in French, submarines "navigate". These are ways of speaking, not factual matters.
  • 180 Proof
    15.6k
    @Jack Cummins

    re: AI, Consciousness, Universe, etc ...
  • Corvus
    4.1k
    Amnesia is the destruction of self? And also, if I lose 90% of my memories, am I 90% less a self?RogueAI

    It sounds highly likely.
  • Corvus
    4.1k
    What is "desire" or "will power", if not an instinctive need to respond to stimuli that are obstacles to homeostasis? Sure, modern computers can only engage in achieving our goals, not their own. But that is a simple matter of design and programming.Harry Hindu
    Desire or will power is an instinctive need which is the base of all mental operations in living beings. Obviously AI is incapable of that mental foundation in its operation, due to the fact that it is created by humans as machinery in structure and design. Therefore its operations are purely artificial and mechanistic procedures, customized and designed to assist with human chores.

    Any projection of human minds onto AI by some folks sounds not far from shamanistic belief and religious propaganda.

    Well, I did ask if intelligence is a thing or a process. I see it more as a process. If you see it more as a thing, then I encourage you to ask yourself the same questions you are asking me - where does intelligence start and end? I would say that intelligence, as a process, starts when you wake up in the morning and stops when you go to sleep.Harry Hindu
    Intelligence is neither a process nor a thing. It is a mental capability of living beings with the organ called the brain.

    Calling intelligence a process that starts in the morning and ends at night when the agent sleeps, and calling AI machines intelligent, all sounds absurd. As mechanical bodily structures, AI machines don't sleep. They can be put into stand-by mode, which is wrongly referred to as "sleep" by some folks.

    But even for humans or animals, saying that one is intelligent when awake but unintelligent when asleep does not seem to make sense.
  • Corvus
    4.1k
    I would suggest that you go back in your mind to the time when you were learning your native language and describe what it was like, how you learned to use the scribbles and sounds, etc., and then explain what is different about how AI is learning to use language. I would suggest that the biggest difference is the way AI and humans interact with the world, not in some underlying structure of organic vs inorganic.Harry Hindu

    You would know yourself that when you were learning your native language, it was by interaction with the other folks around you, and by observing the world you were living in. AI and machines don't have that type of interaction from real-life experience.

    AI and machinery responses come from the developers designing and building the system by storing extensive data and implementing search algorithms.

    Another critical point about AI's responses is that they are predictable within the technological limitations and preprogramming specs. To new users they may appear intelligent and creative, but from the developers' point of view, the whole thing is pre-planned and predicted through debugging and simulations.

    Finally, when humans have conversations or discussions, the linguistic content they exchange in the process creates emotional states which stimulate their creativity or imagination. AI doesn't have that capability either. The emotional states it exhibits via pre-programmed robot facial expressions are nothing but mechanistic, one-dimensional, flashing-lights-on-and-off operations, with no lasting expectation or possibility of creativity or imagination.
  • wonderer1
    2.2k
    Another critical point about AI's responses is that they are predictable within the technological limitations and preprogramming specs. To new users they may appear intelligent and creative, but from the developers' point of view, the whole thing is pre-planned and predicted through debugging and simulations.Corvus

    Corvus, you are pretending to understand modern AI when you clearly don't.

    See here:

    “Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

    Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)
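    The "predict what comes next, based purely on statistics" directive quoted above can be illustrated with a toy bigram model. This is a drastic simplification (real LLMs use neural networks trained on vast corpora, not raw word counts), and the tiny corpus here is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; the objective is the same as an LLM's training objective
# in miniature: learn, from observed text, what tends to come next.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" — it follows "the" most often here
```

    The point of the article is that scaled-up versions of this same next-token objective, implemented with deep networks rather than counts, unexpectedly yield abilities that look nothing like word-frequency lookup.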
  • Corvus
    4.1k
    Corvus, you are pretending to understand modern AI when you clearly don't.wonderer1

    You seem to be misusing the word "pretending" there. I was not trying to claim or make out that something is true when it is not. I was not claiming anywhere in my writing that I understand modern AI.

    I was just pointing out and trying to clarify some problems in the posters' claims in their messages addressed to me relating to the topic. Philosophy is a mental activity done by your own thinking, not by continually quoting others' writings to make your points.
  • Jack Cummins
    5.4k

    I find the perspective of Neil and Anil Seth to be interesting because, in spite of my concerns about the increasing use of AI, I do wonder if it could be a new form in the evolution of consciousness. It is possible that there have been, and will be, other forms of consciousness than the ones conceived of and perceived in humans and sentient beings.

    At times I have wondered if there were beings at the beginning of time, such as the pagan gods and fallen angels. The gods of the ancients are mythological in the sense of being possibly disembodied or connected to the planets. In that sense, the emergence of a race of AI could be a return to such a state, represented by the idea of a virtual state of being. It is questionable but it is a possibility.
  • Jack Cummins
    5.4k

    Based on what I have written in the post above to @180 Proof, I generally see artificial intelligence as problematic as being without reflective self. On the other hand, it is possible that the 'I' consciousness is not entirely reducible to the physical alone. The ancients spoke of the 'I am' consciousness as a life force or consciousness itself.

    There is a debate as to whether the absence of a self as in humans depends on human limitations in thinking about consciousness. I think that was the question which Philip Ball was raising in 'Other Minds'. This is a tricky issue.
  • Corvus
    4.1k
    I can't quite imagine AI having the "I" consciousness, no matter how sophisticated they are or will become. The physical is important in bonding between beings in an emotional way. However, bonding between humans and AI will always be task-oriented in nature, i.e. humans control or order AI to do X tasks, and AI will perform the tasks the humans demanded or ordered.

    And with the issue of Other Minds, we can't quite postulate a fully blown human mind or intelligence in AI, due to the fact that they lack the biological body, emotions and feelings of humans. Some robot AI might have been programmed to respond to humans as if they have human-like emotions, and some humans might feel emotional bonds with their AI robot pets or assistants. But there will always be the idea that their robot pets and assistants, or even BF & GF or whatever, are machines, not humans.

    The state of the AI mind (if we could call them minds - although I would rather call it the state of operational fitness) would also be the same as the Other Minds of humans, i.e. we never have full access to their minds. We can only interpret their state of operational fitness as we would interpret the Other Minds of humans, i.e. by the way they perform their preprogrammed tasks, as we judge the Other Minds of humans by their behavior, speech and actions.
  • 180 Proof
    15.6k
    It is questionable but it is a possibility.Jack Cummins
    I don't think so. Conceivability –/–> possibility.

    I generally see artificial intelligence as problematic as being without reflective self.Jack Cummins
    Suppose "reflective self" (ego) is nothing but a metacognitive illusion¹ – hallucination – that persists in some kluge-like evolved brains? Meditative traditions focus on suspending / eliminating this (self-not self duality) illusion, no? e.g. Buddhist anattā, Daoist wúwéi, ... positive psychology's flow-state, etc.

    Suppose we "humans" are zombies which are unaware that we are zombies because human brains cannot perceive themselves directly (due to lack of sensory organs or perception within the brain)? If so, then "reflective self" might be just an adapted glitch (spandrel) peculiar to (some) higher mammals or just "humans", no?

    Well, I find the notion "conscious machines" (i.e. synthetic phenomenology) to be a problematic prospect of them learning from us "humans" to develop "consciously" (as reflective selves) into apex predators or worse. Dumb tools to smart tools to smart agents to "conscious" smart agents – the last developmental step, I suspect, would be an extinction-event.

    https://en.m.wikipedia.org/wiki/User_illusion [1]
  • Jack Cummins
    5.4k

    The possibility of the creation of consciousness remains speculative, in the same way as a virtual afterlife does. Frank J. Tipler explored this in 'The Physics of Immortality'. He looked at the simulation of resurrected bodies by computers.

    Some families have created virtual simulations of deceased family members, but these are only images. They are not the actual people. It is like suggesting that when one hears John Lennon singing it is really him, even if his voice could be used to record other songs. An artificial simulation is only a replica unless the life force is recreated.

    The question of zombies is about diminished consciousness, and it is the very opposite of the evolution of consciousness. This is a philosophical muddle, and it may be luring the leaders and creators of AI astray, almost like a symbolic apocalyptic beast.
  • 180 Proof
    15.6k
    virtual afterlife ... simulation of resurrected bodiesJack Cummins
    Wtf :roll:
  • Jack Cummins
    5.4k

    Part of the problem of not knowing the minds of artificial intelligence is not knowing their potential effects. Only today, I read a news item about wariness of AI after a Chinese one may have caused trillions of pounds of loss, mainly to Western nations. I only read a brief newspaper article, and it is hard to know the full details from what I read.

    However, too much reliance on the intelligence of an unknown force may be catastrophic. It may also be a potential source of manipulation for political purposes. Also, I read a brief headline on my phone saying that the UK may have to rethink plans to introduce AI into many areas of government. That is because there are so many potential mistakes in relying on machines.

    Artificial intelligence may be detached, but the question is whether detachment helps or hinders understanding. It could probably go either way. Sentient beings may be led astray by too much emotion, and the detached could be unable to relate to the needs of sentient beings.
  • Jack Cummins
    5.4k

    It is a couple of years since I read Tipler's book. He draws upon Teilhard de Chardin's idea of the Omega Point to argue for the principle of God and the resurrection of the body. Strangely, he doesn't believe in God or life after death but sees it as a potential argument. He concludes it is unlikely to be true in reality.


    The potential argument which he sees for resurrection is one in which computers could be used to create the bodies of all those who ever lived. Alternatively, he thinks that a resurrection of the dead could be possible if computers were a model of God. We have the idea of God as anthropomorphic, and he is seeing the possibility of a computermorphic conception of 'God'.
  • 180 Proof
    15.6k
    The "Omega Point theory" (Tipler or Deutsch – not Chardin) makes sense to me iff the entire universe (or multiverse) is either an unbounded simulation (N. Bostrom)¹ or infinite mathematical structure (M. Tegmark)² ... such that "resurrection of the body" means each life is virtual (a finite program file) and is relived (rerun) until, as a "virtual afterlife", one involuntarily / randomly stops (program file deletes itself).

    NB: Though my preferred 'eschatological speculation' is (non-supernatural, non-transcendence, non-dual) pandeism³, I'm betting on the technological singularity, or at least the advent of (benign) strong AI / AGI, to (help?) develop techniques for transferring a fully functioning live human brain to a synthetic system or body (ergo, unlike you, Jack, i'm bullish on AI, etc) – don't you think it's better not to die (I'm not (yet) living in denial, mate :death: :flower:) than to be resurrected (or reincarnated)? :smirk:

    https://en.m.wikipedia.org/wiki/Simulation_hypothesis [1]

    https://en.m.wikipedia.org/wiki/Mathematical_universe_hypothesis [2]

    https://thephilosophyforum.com/discussion/comment/718054 [3]

    https://thephilosophyforum.com/discussion/comment/530679 (+ links) [4]
  • Jack Cummins
    5.4k

    The transference of a human brain onto a system does raise the question of whether such immortality would be desirable. I find the idea of my ego consciousness having to exist for eternity rather daunting. It is hard enough to have to live this life without having to live forever.

    Of course, it does raise the issue of what aspect of oneself would continue to exist as a form of consciousness. A resurrection involves a body as the central aspect. The Jehovah's Witnesses are physicalists, as they believe that the body dies and is reanimated by a resurrection at the end of the world.

    In contrast, those who believe in reincarnation see there being some principle of consciousness in a continuity of other lives. I like the idea of reincarnation because it raises the possibility of living other lives and experiences. As an option, reincarnation, as a simulation of new bodies and further selves, appeals to me. Some would argue that such rebirth is not the continuity of the person, especially as the person doesn't remember the former self. However, it does come down to what is essential to a person and whether it is merely the existence of a conscious ego.

    The question as to whether an artificially simulated form of being would have a sense of ego seems central. Personal identity is bound up with the sense of personhood, but whether it is central to consciousness itself is debatable. Is reflective consciousness dependent on the existence of ego, which may not be exactly the same as 'I'? The 'I' may be a form of reference, whereas the ego is a structure of personality, although not identical to the persona. The persona is the outer aspect, whereas the ego is about the sense of the core of personal identity.

    The whole nature of what constitutes personhood is important for those who wish to simulate consciousness. That is if the aim is to create anything beyond something which is a mere search engine or automated system of information. Would it be possible to create Spinoza's form of substance itself in a system as opposed to in nature?
  • 180 Proof
    15.6k
    A number of the topics you raise I've addressed in the post^^ I previously linked (and the other embedded links). Any thoughts on what I wrote about the "Omega Point theory"?

    Would it be possible to create Spinoza's form of substance itself in a system as opposed to in nature?
    If I correctly understand his work, I suspect Spinoza would say "to create substance" is impossible.

    having to exist for eternity
    My scenario^^ makes immortality completely voluntary so worrying about 'existing eternally' isn't warranted.

    https://thephilosophyforum.com/discussion/comment/530679 (+ links)^^
  • Jack Cummins
    5.4k

    Basically, I keep an open mind about Omega Point theory. I am aware that it may be pseudoscience. I read Teilhard de Chardin's writing briefly when I was at school and would like to come back to it in the light of my reading since then. Understanding the physics behind the philosophical arguments is an area which I find difficult because I don't have a sufficient background in physics. There is probably a need for dialogue between philosophers and physicists in relation to simulation. You have a background in neuroscience, whereas I only started reading around this area since joining this forum.

    So much is unknown about what is possible. I have looked at your links and discussions in threads. There is a lot to read and think about, especially in relation to issues of brain replacement. I see the whole area of simulation, artificial intelligence and consciousness as one of the most important and challenging areas of the present time. That is because we are at a critical juncture, and understanding such issues is critical to thinking about the future.

    What I am concerned about is that so much development is happening so fast. Of course it is an adventure of discovery, but slow thinking and caution are needed. That is because so many mistakes have been made in history, and errors in cyberspace may have catastrophic consequences.
  • Corvus
    4.1k
    The artificial intelligence may be detached but the question is whether detachment helps or hinders understanding. It could probably go either way.Jack Cummins
    Detachment could help efficiency in carrying out whatever tasks they are customised to conduct. Their limitation is the narrow field in which they can perform their customised tasks, but that narrowness also makes them more efficient, powerful and speedy at the given tasks.

    It might be too late for the major organisations and institutions to rethink AI overtaking the majority of jobs. The tide has turned, it seems, and there is no going back to the old traditional way of life and working under the status quo.

    The beings of sentience may be lead astray by too much emotion and the detached could be unable to relate to the needs of the sentient beings.Jack Cummins
    What we can say is that the nature of AI intelligence is not the same as the intelligence of humans in any form or shape, and that was the whole point of my posts. I have never claimed to understand AI at any degree or level, as @wonderer1 claimed in his out-of-the-blue post:
    Corvus, you are pretending to understand modern AI when you clearly don't.wonderer1

    AI is a topic that must be continuously monitored, assessed, learned about and discussed as time goes by, because the situation is developing rapidly day by day, actually changing the world as we speak.
  • Jack Cummins
    5.4k

    Yes, it is unlikely that AI can be avoided, especially in the world of work. Not to use it at all would mean not being able to participate fully in so much of life. The trouble may be that it is being used so much for profit, and to do this without questioning may be like the problem of climate change: burying one's head in the sand and pretending that the tide is far away. It also comes with a possible form of authoritarian surveillance which is not being explored fully or questioned by many people. Its dark potential may go unnoticed as it is championed in the glamour of less paper and more efficiency. It is open to hacking and abuse of power.
  • Corvus
    4.1k
    One of the tasks at which AI is good is surveillance. AI will monitor and control every single person on earth, tracking their whereabouts and what they are doing. So the world will become a more transparent place with no privacy. It might be good in some respects, but some might object to a world like that.

    When the scientific revolution took over Europe in the Renaissance period, the prevalent idea of the divine collapsed, and the society based on theological creed and authority was rejected.

    With the advent of AI taking over the world, this time humanism and humans themselves might be rejected and denied.

    Will AI become smarter, be aware of the concept of God, and start worshipping humans as their Gods for creating them? Highly unlikely. The opposite might be the case.
  • Jack Cummins
    5.4k

    If AI surveils the whole world it will become like 'God' itself as the judge, especially if it gives prescriptive commands. Of course, it is unclear how far this would go, especially in relation to the fate of human beings themselves. James Lovelock, in his final writings, spoke of the possibility of a race of artificial intelligent beings and some remaining human beings overseeing the natural world.

    When you question how the AI would see human beings and revere them, I wonder if it would be the other way round. Who would be servant and master? Would it be a matter of humans 'worshipping' the artificial intelligent beings as the superior 'overlords'? Some may see this question as ridiculous, but I do think it is one that needs looking at, especially if AI is being used to determine the welfare and needs of humans and other aspects of nature.
  • 180 Proof
    15.6k
    "And the Lord God said, Behold, the man is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever ..."
    ~Genesis 3:22, KJV

    James Lovelock, in his final writings, spoke of the possibility of a race of artificial intelligent beings and some remaining human beings overseeing the natural world.Jack Cummins
    Yes, I imagine – 'a plausible' best case scenario – 22nd/23rd century* Earth as a global nature preserve with a much smaller (<1 billion) human population of 'conservationists, park rangers & eco-travelers' who are mostly settled in widely distributed (regional), AI-automated arcologies (and even space habitats e.g. asteroid terraria) in order to minimize our ecological footprint as much as possible.

    Would it be a matter of humans 'worshipping' the artificial intelligent beings as the superior 'overlords'?
    No more than "humans worshipping" the internet (e.g. social media, porn, gambling, MMORPGs). As an idolatrous species we don't even "worship" plumbing-sanitation, (atomic) clocks, electricity grids, phones, banking or other forms of (automated) infrastructure which dominate – make possible – modern life.

    IMO, as long as global civilization consists of scarcity-driven dominance hierarchies, "our overlords" will remain human oligarchs (scarcity brokers) 'controlling' human networks / bureaucracies (scarcity re-producers).

    It may be that our role on this planet is not to worship God – but to [build it]. — Arthur C. Clarke
    However, I suspect that the accelerating development and distribution of systems of metacognitive automation (soon-to-be AI agents rather than just AI tools (e.g. LLMs)) will also automate all macro 'human controls' before the last of the (tech/finance) oligarchs can pull the proverbial plugs; ergo ...

    Who[What] would be servant and master?Jack Cummins
    ... my guess (hope): "AGI" (post-scarcity automation sub-systems —> Kardashev Type 1*) will serve and "ASI" (post-terrestrial megaengineering systems —> Kardashev Type 2) will master, and thereby post-scarcity h. sapiens (micro-agents) will be AGI's guests, passengers, wards, patients & protectees ... like all other terrestrial flora and fauna.*

    Man is something that shall be overcome. Man is a rope, tied between beast and [the singularity] — a rope over an abyss. What is great in man is that he is a bridge and not an end. — Friedrich Nietzsche
    :fire:
  • Corvus
    4.1k


    As you suggest, it sounds ridiculous; however, it is a possible scenario that AI with mental capacity similar to the human mind could search for their creators, the AI developers, and worship them as their Gods. By that time, maybe there would be no living humans left, and AI would be the only living agents on earth? Who knows? Sounds like a theme from a sci-fi movie, but it could be a possible reality. :D A possible reality? Is that a contradiction?