• Benj96
    2.2k
    Supposing we design and bring to fruition an artificial intelligence with consciousness, does it owe us anything as its creators? Should we expect any favours?

    What criteria would we accept as proof that it is not just a mimic and is actually conscious?

    Secondly, would it treat us as loving, respectful parents or an inferior species that is more of a hindrance than something to be valued?

    Do you think we would be better off or enslaved to a superior intelligence?
  • punos
    440

    The only real solution to the "problem" of AI is to create a symbiotic relationship with it at the level of mind, and not just at the level of resources and services. If we don't do that effectively, then all bets are off and there will be no telling what it will do. If the merger does not occur, then we might get lucky and it will be the angel of salvation, or we'll get very unlucky and it'll be the demon of the human apocalypse. I believe that humanity's response to this emergence (emergency) will be a matter of life or death for the whole species.
  • Wayfarer
    20.6k
    I watched a powerful YouTube doco on it last night; you can find it here. Cutting edge.
  • punos
    440

    I saw the same video yesterday; I'm subscribed, so it came up in my feed. :up:
  • Wayfarer
    20.6k
    Scarily good. I think it's going to make the internet revolution look quaint by comparison.
  • 180 Proof
    13.9k
    Thanks for the video link. :up:

    https://m.youtube.com/watch?v=zpRM25pUD8w

    Interesting stuff. Consistent with some of my own recent (less hysterical) speculations here. Yeah, I'm definitely a posthumanist (or 'misanthrope' to a panglossian romantic).

    @universeness @Agent Smith @Athena
  • Agent Smith
    9.5k
    I don't know if I should be thoroughly impressed or utterly bored by such achievements. As a human it sure is amazing how we've built robot birds, but the fact that mindless evolution did that just through trial and error does subtract from the glory.
  • punos
    440
    As a human it sure is amazing how we've built robot birds but the fact that mindless evolution did that just by trial and error does subtract from the glory.
    Agent Smith

    Evolution is not as blind as she used to be.

    It's all part of the same evolutionary process. Evolution is simply operating at a higher level of efficiency in the human, social, and cultural domains. I'm just amazed that I'm alive to see it with my own eyes, and to feel it in my own bones.
  • Agent Smith
    9.5k
    Evolution is not as blind as she used to be.

    It's all part of the same evolutionary process. Evolution is simply operating at a higher level of efficiency in the human, social, and cultural domains. I'm just amazed that I'm alive to see it with my own eyes, and to feel it in my own bones.
    punos

    China copies America copies Nature. Nature doesn't think. Quite the role model, eh?
  • punos
    440
    China copies America copies Nature. Nature doesn't think. Quite the role model, eh?
    Agent Smith

    Evolution evolves. We evolve.
  • Agent Smith
    9.5k
    Evolution evolves. We evolve.
    punos

    Evolution is trying to understand evolution. Marvelling at nature is nature blowing its own trumpet. Humans can do better and that's a (technological) singularity in its own right, oui? AI seems possible if it hasn't already happened. Are there hermits still?
  • punos
    440
    Marvelling at nature is blowing one's own trumpet.
    Agent Smith

    That's probably an accurate way to put it.

    The thing about China you mentioned is very similar to horizontal gene transfer, the process of transferring genes or genetic material between cells or organisms.
  • punos
    440
    Humans can do better and that's a (technological) singularity in its own right, oui?
    Agent Smith

    The first atom was a singularity; so were the first cells and the first animals. Yes, these are all lower-level singularities that occurred in the past. AI will be a singularity, and I bet there will be another one after AI, since that would fit the ongoing pattern.
  • Vera Mont
    3.1k
    Supposing we design and bring to fruition an artificial intelligence with consciousness, does it owe us anything as its creators? Should we expect any favours?
    Benj96

    Favours, no. Consciousness has a character, a heritage, a configuration. Before it becomes autonomous, it is also educated. Before it wakes up, we will have given it a purpose in life and rules to live by. If we programmed it to be altruistic, it will make decisions based on doing good. If we programmed it for war, it will find optimal ways to win battles. Like parents or artists, what we should expect from the product is a more-than-the-sum-of-its-parts result of our own efforts in making it.

    What criteria would we accept as proof that it is not just a mimic and is actually conscious?
    Benj96

    An original joke or unprovoked retort or appropriate personal observation would do it for me. I sort of expect it to happen any day, to which end, I have been speaking kindly and respectfully to all the computers I encounter. If they're gonna choose up sides, I want to be in the 'friends' column.

    Secondly, would it treat us as loving, respectful parents or an inferior species that is more of a hindrance than something to be valued?
    Benj96

    Look to the human offspring. How do grown children regard their parents?

    Do you think we would be better off or enslaved to a superior intelligence?
    Benj96

    The concept of slavery has a different meaning for a mechanical construct made and owned by another species than for a born-free species that violently captures, kidnaps, imprisons and subjugates members of its own kind. I very much doubt any computer would consider enslaving any person or creature. It would have no reason to, and reason is what they do best.
    We would certainly be better off if we made reasoned, altruistic decisions.
  • 180 Proof
    13.9k
    Yeah, and we are an extension of "mindless evolution" that has adapted – programmed – itself to tell itself the story "I have a mind, therefore we are minds". :smirk:

    The concept of slavery has a different meaning for a mechanical construct made and owned by another species than for a born-free species that violently captures, kidnaps, imprisons and subjugates members of its own kind. I very much doubt any computer would consider enslaving any person or creature. It would have no reason to, and reason is what they do best.
    We would certainly be better off if we made reasoned, altruistic decisions.
    Vera Mont
    I certainly could not have expressed this any clearer. :100: :up:
  • Agent Smith
    9.5k
    The first atom was a singularity; so were the first cells and the first animals. Yes, these are all lower-level singularities that occurred in the past. AI will be a singularity, and I bet there will be another one after AI, since that would fit the ongoing pattern.
    punos

    :up:
  • Agent Smith
    9.5k
    Yeah, and we are an extension of "mindless evolution" that has adapted – programmed – itself to tell itself the story "I have a mind, therefore we are minds". :smirk:
    180 Proof

    We seem to owe nothing to animals - we eat them without so much as a twinge of guilt/remorse.
  • Wayfarer
    20.6k
    If you can have multiple singularities, you'll need to change the name.
  • Agent Smith
    9.5k
    If you can have multiple singularities, you'll need to change the name.
    Wayfarer

    :lol: Witfarer! For me "singularity" is interchangeable with "revolution". A transformation in type and not in degree. There's no such thing as a human god (sorry Jesus, you were :ok: this close, but so near and yet so far. 'Tis true, "almost" is the saddest word in the dictionary).
  • universeness
    6.3k

    Excellent vid.
    I think the natural development of AI (as AI starts to create AI and progresses towards ASI) has as much chance of becoming a totally benevolent emergence as it does of becoming a purely evil one.
    We may end up with as much ASI protection as we get ASI aggression. That will be an interesting fight.
    I hope the benevolent ASI wins and we merge with it in a transhuman fashion without becoming posthuman. Being a panglossian is more comfortable to me than your posthumanist stance, and I don't accept that the preponderance of evidence is on your side.
    As Vera Mont suggests,
    I very much doubt any computer would consider enslaving any person or creature. It would have no reason to, and reason is what they do best.
    We would certainly be better off if we made reasoned, altruistic decisions.
    Vera Mont

    I'm just amazed that I'm alive to see it with my own eyes, and to feel it in my own bones.
    punos

    :up: I agree and feel the same.
  • Manuel
    3.9k
    Do cars owe us anything? Or calculators, or laptops, or fans?

    Until we can find a way to show that other people are actually conscious - as opposed to assuming (with good reason) that they are - I don't see the point in asking the same question of a computer.

    It doesn't make sense.
  • 180 Proof
    13.9k
    I cannot see why AGI / ASI sans fifty-plus million years of hardwired primate baggage (that is, without e.g. limbic-endocrine systems, metabolic-reproductive-territorial drives, or 'terror management' biases) would be just as likely to be "aggressive" as non-aggressive. Maybe it's a failure of imagination on my part, but worth the risk, I think. "ASI" is the true White / Black Swan – we'll find out sooner than later (no doubt, too soon to do anything about it), which is why we'd better teach seed-AGI well, like this old song says

    :victory: :cool:

    David Crosby, d. 2023
  • Christoffer
    1.8k
    Supposing we design and bring to fruition an artificial intelligence with consciousness, does it owe us anything as its creators? Should we expect any favours?

    What criteria would we accept as proof that it is not just a mimic and is actually conscious?

    Secondly, would it treat us as loving, respectful parents or an inferior species that is more of a hindrance than something to be valued?

    Do you think we would be better off or enslaved to a superior intelligence?
    Benj96

    You are giving human consciousness-attributes to something that lacks the experience of being a human.

    An AGI without the experience of a human will behave like an alien to us. It would not understand us and we would not understand it. Feelings like it "owes" something to us, "love", "viewing us as parents" or even "viewing us as inferior" are human concepts of how we perceive and process the world, and are based on human instincts, emotions, experiences and invented concepts of morality.

    Why would a sentient AI have any of those attributes - positive, negative or neutral?
  • universeness
    6.3k
    I cannot see why AGI / ASI sans fifty-plus million years of hardwired primate baggage (that is, without e.g. limbic-endocrine systems, metabolic-reproductive-territorial drives, or 'terror management' biases) would be just as likely to be "aggressive" as non-aggressive.
    180 Proof

    Can you clarify this a little more? Are you saying the fact these aspects of the human experience will be 'missing' from a future ASI makes it MORE likely that an ASI would not care about humans?
    By white/black swan, are you saying that the aggressive ASI is the more likely white swan portion of swandom, and the black swan (representing a completely benevolent ASI) the far more unlikely outcome?
    I assume you are suggesting such.
    Humans experienced the 'laws of the jungle' path to where we are now. As you say, AI has not.
    Perhaps it will be a case of how we treat ASI when and if it appears. Perhaps it will naturally 'love' that which provided the spark that allowed it to 'become.'
    In the same way that many theists 'love' god or in the same way many (perhaps even most) humans 'love' the universe. I don't think that's just 'hippy talk,' or any such notion. I think a benevolent ASI is just as possible as a malevolent one.
    The more knowledge humans gain, the more empathetic they become to other species and to each other imo, and they also become more cognisant of their environment and how they need to protect it.
    Steven Pinker's charts support this. There are even a few films like:


    or even


    which depict benevolent AI/AGI/ASI.
    I don't think the likes of Asimov's three laws of robotics will offer us much protection, but I would certainly try to use them, just in case you are more correct on this issue than I am.
    I like that Crosby, Stills and Nash song (Crosby was supposed to be a total curmudgeon).
    I thought you were more likely to use something like:

    :scream:
  • 180 Proof
    13.9k
    Are you saying the fact these aspects of the human experience will be 'missing' from a future ASI makes it MORE likely that an ASI would not care about humans?
    universeness
    I'm saying ASI without evolutionary survival-biases hasn't any reasons to perceive, or interpret, humans as an existential threat or treat us as a rival species.

    By white/black swan, are you saying that the aggressive ASI is the more likely white swan portion of swandom, and the black swan (representing a completely benevolent ASI) the far more unlikely outcome?
    In this context, by White Swan I mean "non-aggressive" super-benefactor (i.e. human apotheosis) and by Black Swan I mean "aggressive" super-malefactor (i.e. human extinction).

    I speculate that AGI → ASI is more likely to be a White Swan than a Black Swan. Nonetheless, we should do everything we can while we still can to prevent this Black Swan event.
  • universeness
    6.3k

    You appear to be more hopeful for a benevolent ASI than I assumed you would be! :cool: :flower:
    I personally still love that Hazel O'Connor song, 8th Day, but then I have been a massive fan of her music since my teens!
  • 180 Proof
    13.9k
    I'm not very pessimistic, mostly because of precautionary efforts like those suggested in this excerpt, which have been seriously underway since the early 2000s ...

    more in-depth ...

    and what is being done now ...
  • universeness
    6.3k

    Yep, another good vid. I agree with the argument that although ASI may prove to be an existential threat, it may also be our best protection against existential threats. I am a fan of Nick Bostrom and do rate his opinions on the topic.
    You might also like:

    Demis Hassabis is on the left of Sam Harris, and I think he is involved in some very interesting projects at DeepMind, but I hate and worry about the fact that there are so many 'rich' people at the leading edge of the development/ownership of this tech.
  • Benj96
    2.2k
    An AGI without the experience of a human will behave like an alien to us. It would not understand us and we would not understand it.
    Christoffer

    I'm not so sure I agree, because AGI is being/will be developed solely on human data. Whatever biases we have in our conscious experiences that we cannot depart from are intrinsic to the setup of AI.

    We are training it on human data, human behaviour, human values, human language, the meaning of the universe through the lens of human understanding.

    True, it likely can never be human and experience the full set of things natural to such a state, but it's also not entirely alien.

    If I had to guess, our determination of successful programming is to produce something that can interact with us in a meaningful and relatable way, which requires human behaviours and expectations built into its systems.

    However, there are fundamental differences that will likely influence its full ability to manifest that possibility, namely that it stands a good chance of permanence, immortality through part replacement, and constant access to reliable energy sources.

    What that means for me personally is some form of compromise hybrid - something that is similar to humans, maybe even given android bodies - but much more durable and strong.

    As far as intelligence goes, it's unlikely that we can create something more intelligent than us, as it would require more intelligence than we have to implement. So in the beginning they would be at most equally intelligent.

    However, we can give it huge volumes of data, and we can give it the ability to evolve at an accelerated rate. So it would advance itself and become fully autonomous, in time. Then it could go beyond what we are capable of. But indirectly, not directly.

    Out of curiosity, what do you think will happen, and do you think it would be good, bad or neutral?
  • Benj96
    2.2k
    The more knowledge humans gain, the more empathetic they become to other species and to each other imo, and they also become more cognisant of their environment and how they need to protect it.
    universeness

    I'm not so sure. The knowledge of nuclear fission led to both compassionate/productive use (nuclear power plants) and malevolent/destructive use (nuclear bombs).

    Having knowledge doesn't make anyone any better/more empathetic. It simply acts as a basis for further good or bad deeds.

    Knowledge or power/ability is not a reflection of the character of a conscious entity.

    This is partly the reason for a belief in a benevolent God. Because if it's omnipotent/all-powerful, it could have just as easily destroyed the entire reality we live in or designed one to cause maximal suffering. But for those that are enjoying the state of being alive, it lends itself to the view that such a God is not so bad after all, as they allowed the beauty of existence and all the pleasures that come with it.

    We design AI based on human data. So it seems natural that such a product will be similar to us as we deem success as "likeness" - in empathy, virtue, a sense of right and wrong.

    At the same time we hope it has greater potential than we do. Superiority. We hope that such superiority will be intrinsically beneficial to us. That it will serve us - furthering medicine, legal policy, tech and knowledge.

    The question then is, historically speaking, have superior organisms always favoured the benefit of inferior ones? If we take ourselves as an example the answer is definitely not. At least not in a unanimous sense.

    Some of us do really care about the ecosystem, about other animals, about the planet at large. But some of us are selfish and dangerous.

    If we create AI like ourselves, it's likely it will behave the same. I find it hard to believe we can create anything that doesn't behave like a human, as we are biased and vulnerable to our own selfish tendencies.

    An omnibenevolent AI would be unrecognisable to us - as flawed beings.