• universeness
    6.3k
    I described a trolley problem where 5 lives can be saved by taking action that kills one. What does your consistent moral code say about this? Why isn't it done today? Why is it more moral to let the 5 die, and should this standard be changed?noAxioms

    We may well apply morality as a pure numbers game, when there is no other information available. For me personally, it would totally depend on what information was available around the scenario.
    Would I find it 'worth it' to cause the death of a large number of innocent people, say 1000, to kill a tyrant that's destroying our way of life? Yes, probably. Your 'trolley' style problem needs more detail about the individuals involved. In the absence of such detail, the morally consistent approach for me is that if we are talking about a train track lever that switches the trolley from one track to another, then I would probably pull the lever and let 1 die rather than 5, if I know nothing about the people involved.
    If the 1 was my child/wife etc, it's probably then going to be bye bye 5, unless it was 5 children.
    In any such situation of choosing between what you consider horrific outcome 1 and horrific outcome 2, when you do have some personal moral notion of a lesser evil between the two choices, you make your choice, but you will probably never recover from the experience. It will take its toll on you for the rest of your life.
    I would never advocate for harvesting the organs of 1 to save many, like you suggested, no.
  • noAxioms
    1.3k
    We may well apply morality as a pure numbers game, when there is no other information available.universeness
    All else being equal then. In the organ thing, everybody is around 40 year old and part of a family and is loved. The 5 will die within 3 months without the procedure. They would be expected to live full productive lives with the surgery, but of course at the cost of the one, also loved, etc.

    In the absence of such detail, the morally consistent approach for me is that if we are talking about a train track lever that switches the trolley from one track to another, then I would probably pull the lever and let 1 die rather than 5, if I know nothing about the people involved.
    So it is a numbers game, but only when it's a game and only if you're not personally involved.
    For the record, I am asking what is the morally best course of action, not whether or not you'd do it. It sounds like you'd change your decision based on whether you loved somebody (the one in particular). That means you're willing to do the wrong thing for personal reasons.
    The trolley is a metaphor. I've never seen the situation come up with an actual trolley. I've seen it with an automobile where the choice was between about 20 people and a small dog. The 20 people were hit, many of whom died, but the dog was OK. I didn't read if the survivors had the dog killed afterwards, but they should have.

    If the 1 was my child/wife etc, it's probably then going to be bye bye 5, unless it was 5 children.
    Ah, children are worth more than adults. Interesting. The old Titanic thing. I wonder where they cut off the age limit for 'women and children'.

    I actually would agree that a person's worth changes through the course of their lives, but modern morality seems to be based on a life being worth infinity, period, which results in all sorts of silliness.

    In any such situation of choosing between what you consider horrific outcome 1 and horrific outcome 2, when you do have some personal moral notion of a lesser evil between the two choices, you make your choice, but you will probably never recover from the experience.
    Sure, but what if the making of the choice was done by another or was automated? Remember, I'm asking what's right, not what you would do, although knowing what you would do is certainly also interesting. We discussed automating unpleasant tasks above. This certainly qualifies as one.

    I would never advocate for harvesting the organs of 1 to save many, like you suggested, no.
    This is in direct contradiction to your comment above where you perhaps suggest saving the five outweighs the one. How is this not exactly the trolley problem? I assure you it comes up in real life, and human morality actually says kill the 5, not the 1. Why is this?
    Many times the decision has to be made in moments (like the car/dog thing above). Sometimes you have a long time to ponder the choice.

    I wondered if we were getting off topic, but no, this has direct relevance.
  • universeness
    6.3k

    There are many obvious examples of these moral dilemmas from history. Here are two.

    1. Would YOU have dropped the bombs on Hiroshima and Nagasaki to force the Japanese to surrender? The story goes that the Americans DID demonstrate the power of the bomb to the Japanese top brass and told them to surrender before they dropped the bomb, but the Japanese top brass refused (including their moronic emperor). Would you have gone the alternate route of invading the Japanese homeland, and paid the price of doing that for all sides involved?

    2. Churchill knew the city of Coventry was going to be massively bombed, as the British had broken the Enigma code, but if he evacuated the city then the Nazis would know the British had broken the code and they would change it, which could have led to the defeat of Britain by the Nazis. So he let Coventry be bombed and many innocents died. What would YOU have done?

    An automated system at the level of an AGI or ASI would hopefully prevent such scenarios from happening in the first place, or be better able to create alternatives to binary choices between horrific choice 1 and horrific choice 2.

    If the details surrounding Truman's and Churchill's decisions are all true, as I read them, then I would have made the same decisions as they did. I could not have survived either of them, however. My suicide soon after would have been the only relief I could imagine.
    An automated system may not feel such a need to self-destruct however and that is probably better.
    I am no fan of Churchill or Truman and I don't know how they managed to live with themselves after making such decisions, but my opinion on that is merely that ... my opinion.
  • noAxioms
    1.3k
    There are many obvious examples of these moral dilemmas from history. Here are two.universeness
    War has always been about sacrifice of people here and there for a greater goal. It is unavoidable. If you could not have lived with yourself after making the decisions you mention, then you (and I both) are not fit for leadership.
    The bomb (especially the short interval between them) was done partly to keep the USSR out. They were going to ally with Japan to divide China between them, and that ceased when it became somewhat apparent that we could churn out these bombs at a fairly fast pace. It kept us out of the war with the USSR, mostly because leaders everywhere didn't have the stomach to finish what needed to be done. Churchill did, but he didn't have the support needed, including from you apparently.

    An automated system at the level of an AGI or ASI would hopefully prevent such scenarios from happening in the first place, or be better able to create alternatives to binary choices between horrific choice 1 and horrific choice 2.
    How do you envision that these automated systems would have chosen better? No matter what, they still have to throw lives against the lives of the enemy and it is partly a numbers game. Would they have chosen differently?

    Why cannot many people live with knowledge of having done the right thing? Seems either a defect in people or a defect in the definition of the right thing.
  • universeness
    6.3k
    War has always been about sacrifice of people here and there for a greater goal. It is unavoidable.noAxioms
    Such words are easily typed, but such a horrific situation might mean you have to sacrifice your own family, as well as many other innocents, to stop a horror like fascism from taking over.
    I hope you never personally face such horrors in your life.

    Churchill did, but he didn't have the support needed, including from you apparently.noAxioms
    Thank goodness that we stopped him then. He was a butcher and a man who would be King, if he could.
    His character was very similar to Hitler's or Stalin's imo.

    How do you envision that these automated systems would have chosen better? No matter what, they still have to throw lives against the lives of the enemy and it is partly a numbers game. Would they have chosen differently?noAxioms
    I think they would reject all notions of war and would not allow such, as they would not be infected with the same primal fears/paranoia/territoriality/tribalism that humans have to combat.
  • universeness
    6.3k

    Based on our earlier exchange on Leonard Susskind's proposal that quantum entanglement may actually BE gravity. I thought you might enjoy this recent discussion on Quora:
    https://www.quora.com/What-do-physicists-think-of-Leonard-Susskinds-paper-where-he-states-that-QM-GR
  • Athena
    2.9k
    Anesthetic can remove all feeling from your body while you remain awake. How is this possible if any aspect of consciousness or mind exists outside of the brain? My brother-in-law had a triple bypass operation, and he was awake all the way through the operation and asked to see his opened body and exposed heart during the operation; this request was fulfilled. Why did Stephen Hawking continue with his life considering the lack of function/feeling he had in his body? Do you think he was less conscious or had less access to 'mind' due to the reduced state of his body? Why do people paralysed from the neck down still want to live? Christopher Reeve of Superman fame, for example?universeness

    If your brother-in-law was awake during surgery he had a regional anesthetic, not general anesthesia,
    which makes a person unconscious. His brain was still working, right?

    https://www.asahq.org/madeforthismoment/anesthesia-101/effects-of-anesthesia/

    Stephen Hawking had ALS and so did my mother. ALS destroys the muscles but does not interfere with emotional feelings that are a different nerve pathway. If you are interested in the body's relationship to emotions you might find this link interesting.

    Emotions are how individuals deal with matters or situations they find personally significant. Emotional experiences have three components: a subjective experience, a physiological response and a behavioral or expressive response.
    https://online.uwa.edu/news/emotional-psychology/#:~:text=Physiological%20Responses,-We%20all%20know&text=This%20physiological%20response%20is%20the,fight%2Dor%2Dflight%20response.
    — Psychology and Counseling News
  • Athena
    2.9k
    Not sure what you are referring to here Athena, a particular sci-fi movie perhaps?universeness

    I could not find a decent link; they all have clutter, like this one. https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/

    I want to add to what I said. I watched a video explanation that used a human as AI and she insisted she has feelings. That is to say, things are being misrepresented. If our brains were in a vat and could think and communicate, there would be no feeling body. Going on stored information, the brain could think losing a child is sad, but it could not feel the sadness. AI cannot have an emotionally feeling body.
  • universeness
    6.3k
    If your brother-in-law was awake during surgery he had a regional anesthetic, not a general anesthesia
    that makes a person unconscious. His brain was still working, right?
    Athena

    It is called a local anesthetic here, not a regional anesthetic. If there was any 'consciousness' in the body, then you would think my brother-in-law would have experienced a reduction or 'loss of body consciousness.' If you are suggesting that loss of feeling or sensation in the body IS loss of consciousness or 'mind' in the body, then I completely disagree.

    From the source you cited, we have:
    "We all know how it feels to have our heart beat fast with fear. This physiological response is the result of the autonomic nervous system’s reaction to the emotion we’re experiencing. The autonomic nervous system controls our involuntary bodily responses and regulates our fight-or-flight response. According to many psychologists, our physiological responses are likely how emotion helped us evolve and survive as humans throughout history.

    Interestingly, studies have shown autonomic physiological responses are strongest when a person’s facial expressions most closely resemble the expression of the emotion they’re experiencing. In other words, facial expressions play an important role in responding accordingly to an emotion in a physical sense."


    All physiological responses are controlled, enacted and terminated via the brain, imo.
    My hands might shake due to fear. If I have no hands, then they won't shake with fear. This does not mean a person with no hands does not experience fear in the same way a person with hands does.
    There is no consciousness in your hands, or any other part of your body, imo.

    AI cannot have an emotionally feeling body.Athena

    The news article you cited suggested to me that the 'Tay' chatbot failed because it was NOT a very good AI system. It was obviously easily shut down. I have no concern over a badly programmed AI chatbot which is easily shut down. The future AGI/ASI systems proposed by the current experts in the field are way, way beyond purported AI systems such as Tay or ChatGPT.
    Emulating the human brain processes that cause emotions/sensations/feelings in the human body is POSSIBLE in my opinion, but I fully accept that we are still far away from being able to replace your pinky with a replicant which can equal its functionality and its actions as a touch sensor.

    Consider sites such as BIT BRAIN:
    "One of the ways of studying human emotions is to study the nonconscious and uncontrollable changes that occur in the human body. Thanks to the latest advances in neuroimaging and neurotechnology, we can measure these changes with precision and then study them. But we face several difficulties, such as the problem of reverse inference (there are no specific somatic patterns associated with each emotion), inter-subject variations (no two brains are the same), and intra-subject variations (a person’s brain changes and evolves throughout time)."
  • universeness
    6.3k
    Stephen Hawking had ALS and so did my mother.Athena

    Sorry to hear your mother went through that.
    Do you know of Dave Warnock? He is currently dying of ALS and he is an atheist (ex theist), who speaks against theism and religious doctrine, online.

  • noAxioms
    1.3k
    Such words are easily typed but might mean you have to sacrifice your own family, as well as many other innocents, to stop a horror like fascism from taking over. I hope you never personally face such horrors in your life.universeness
    My comment was just reaching for real-life non-war scenarios that demonstrated the trolley paradox. I've come across many. I hope you personally never have to face one, either being the one or being part of the five, but it happens.

    How do you envision that these automated systems would have chosen better?
    — noAxioms
    I think they would reject all notions of war and would not allow such
    Hitler is taking over Europe, including GB in short order. The automated system would reject that and just let it happen rather than resist? That route was encouraged by several notable figures at the time, I admit.


    Stephen Hawking had ALS and so did my mother.Athena
    I too give my sympathies, for your mother and for any caregivers; a heroic task, similar to caring for an Alzheimer's patient. I have a cousin-in-law who is in the final stages of ALS, in hospice now.
  • universeness
    6.3k
    Hitler is taking over Europe, including GB in short order. The automated system would reject that and just let it happen rather than resist? That route was encouraged by several notable figures at the time, I admit.noAxioms

    I was unable to unpack your meaning here, or understand what your question was referring to.
    I was suggesting that if an ASI was the main power on Earth, then the rise to power, of a character like Hitler or even Trump, would not be allowed to occur.
  • Athena
    2.9k
    It is called a local anesthetic here, not a regional anesthetic.universeness

    If you want us to believe you know it all, you should read the links before making your arguments.

    Local anesthesia. This is the type of anesthesia least likely to cause side effects, and any side effects that do occur are usually minor. Also called local anesthetic, this is usually a one-time injection of a medication that numbs just a small part of your body where you’re having a procedure such as a skin biopsy.

    Regional anesthesia is a type of pain management for surgery that numbs a large part of the body, such as from the waist down. The medication is delivered through an injection or small tube called a catheter and is used when a simple injection of local anesthetic is not enough, and when it’s better for the patient to be awake.
    American Society of Anesthesia

    All physiological responses are controlled, enacted and terminated via the brain, imo.universeness

    That may be so, but the feeling is still in the body, and without one there are no feelings. The brain cannot terminate a feeling like a switch being turned off. The hormones must be metabolized in their own time, and as we age this process slows down. Music is good for producing desired feelings and calming us down when we are in fight-or-flight mode, as I am now because of a communication problem with someone in the room with me. :lol: My intense anger may be the result of hormones started in my head, but I assure you they are in my body, not my head, and I should probably go for a walk to metabolize these fight-or-flight hormones faster. Yipes, he is not shutting up. I am going for a walk.
  • universeness
    6.3k
    If you want us to believe you know it all, you should read the links before making your arguments.

    Local anesthesia. This is the type of anesthesia least likely to cause side effects, and any side effects that do occur are usually minor. Also called local anesthetic, this is usually a one-time injection of a medication that numbs just a small part of your body where you’re having a procedure such as a skin biopsy.

    Regional anesthesia is a type of pain management for surgery that numbs a large part of the body, such as from the waist down. The medication is delivered through an injection or small tube called a catheter and is used when a simple injection of local anesthetic is not enough, and when it’s better for the patient to be awake.
    — American Society of Anesthesia
    Athena

    If I present myself to you as 'a know it all,' then I have either presented myself to you badly, Athena, or your impression of me is unjust. I am content to think either is true, as opposed to accepting that I really do think I am a know it all. Anyway, you have just PROVED that such a personal trait in me IS unwarranted, as you have corrected my error. You have also provided clear evidence that I do not read every word in a link provided by another poster. I wish I had such time available to me.

    Music is good for producing desired feelings and calming us down when we are in fight or flight mode, as I am now because of a communication problem with someone in the room with me.Athena
    Put the gun down Athena! Remove yourself from the room or suggest the person leaves until you both calm down, or is this situation not as bad as I suggest?

    My intense anger may be the result of hormones started in my head but I assure you they are in my body, not my head and I should probably go for a walk to metabolize these fight-or-flight hormones faster. Yipes, he is not shutting up. I am going for a walk.Athena
    A relative? A politician on the TV? @Jamal?
  • Athena
    2.9k
    Put the gun down Athena! Remove yourself from the room or suggest the person leaves until you both calm down, or is this situation not as bad as I suggest?universeness

    I am back. I really wish such darn emotions did not hijack my sanity! I have logic in my brain and what I just went through was not logical! It was emotional and intensely physical and this is why I keep arguing with you about where our emotions are. My head wants me to be a better person but our bodies! and insane emotions, can consume us. Now I will probably have to take a nap and I will probably lack energy for the rest of the day.

    I knew better than to ask him to help me by getting the box down from above the shelves. He could not understand "the box above the shelves." Not even when I pointed to it could he understand the request. He would not be in my home if he were more capable. I really want to talk about this, but not in this thread. However, here perhaps we can speak of who rules, our brain or our body, because that has been our argument for a long time. I want to be different from how I am, and heaven knows I have put a lot of effort into being a better person. :lol: I look forward to being reincarnated in a totally different body with the hope of having a different life experience. AI will not have this problem because it does not have a body and hormones, and therefore lacks the ability to experience life. But it also won't be capable of the good, either.
  • noAxioms
    1.3k
    I was suggesting that if an ASI was the main power on Earth, then the rise to power, of a character like Hitler or even Trump, would not be allowed to occur.universeness
    OK, I thought you were suggesting that AI would have avoided war with Hitler given the same circumstances. You are instead proposing that the entire world has already been conquered and the AGI would keep it that way. So, more or less the same question: how would the AGI prevent a rise to power of a rival better than if a human was the main power of the entire Earth? I accept that choices motivated by personal gain (corruption) are more likely with a human than with the AGI, since it isn't entirely clear what an AGI would consider to be personal gain, other than the assured continuation of its hold on power.
  • Athena
    2.9k
    I too give my sympathies, for your mother and for any caregivers; a heroic task, similar to caring for an Alzheimer's patient. I have a cousin-in-law who is in the final stages of ALS, in hospice now.noAxioms

    Yes, and that is so for us because we are humans. It cannot be so for AI because AI cannot have emotional responses to life. I think we are being truly philosophical now, reminiscent of some ancient Greek arguments. I hate being controlled by emotions, but I am also thankful that because of emotions I am motivated to make things better for myself and other human beings. I think it would be dreadful if I just didn't care about others, and AI will not have the emotional experience of life that makes us caring people. A human can program the computer to process thoughts that a human gives the computer, but that is a human creation, not an AI self-generated creation based on experiencing life.

    Hum, that did not address your condolences well. I remember my mother crying whenever something was meaningful to her and she would say it was ALS that caused her to cry. She had much work to do to come to peace with her life and the end of it.

    I am listening to a series of lectures about spirituality and meditation and I think this is an important part of the process of preparing for death. My mother was resistant to what was happening to her and did not make good choices, compared to a man who was diagnosed with ALS when he was only 28. He took advantage of everything that could make his life better, and that made being part of his life easier for me, because we were working for the positive rather than bracing against the negative by rejecting a lift chair or an electric wheelchair and refusing my help. I am very thankful for the CDs that could improve how I manage my life and death.
  • Athena
    2.9k
    Emulating the human brain processes that cause emotions/sensations/feelings in the human body is POSSIBLE in my opinion but I fully accept that we are still far away from being able to replace your pinky, with a replicant which can equal it's functionality and it's actions as a touch sensor.universeness

    Artificially measuring the pressure of a touch may be possible but that is not equal to an emotional feeling. Right now even the sensation of touch requires a physical body.

    https://www.science.org/content/article/prosthetic-hands-endowed-sense-touch

    More interesting is how an emotional feeling is different from the sensation of touching something. I can recognize my emotional feelings as illogical. Like, duh, I am talking with a man who has right frontal brain damage, and getting angry with him because he does not understand what I am saying. That is pretty stupid. For several years I worked with a mildly retarded guy who never got upset when someone didn't understand something as simple as sweeping the floor. He could relate to not understanding and would help the person understand. While I am instantly screaming at someone for being an idiot. Who is the idiot? It is not easy being human and really, I don't understand why it is so hard, but my emotions make me behave like an idiot even when I know better. So what is up with these emotions?

    I love the demeanor of the Asian people I have met. They stay calm and are basically more logical than emotional. It is our culture that makes us so hyper-emotional. But just wanting to be like them, and repeating their logical statements about things being as they are and not fussing about them, does not help; it does not make me the reasonable person I want to be. We do not mentally manifest our emotions. Our emotions can control us, especially if we are unaware of them and think we are being rational.
  • universeness
    6.3k

    Is this the same guy you rescued from living in his car?

    AI will not have this problem because it does not have a body and hormones, and therefore lacks the ability to experience life.Athena
    Not current AI no. Do you reject the idea of a merging of the human brain with a future cybernetic body (cyborgs) or a cloned body or some combination of tech/mecha and orga?
    I don't understand why you think any process/sensation/feeling that you have ever experienced in your body and interpreted by your mind, CANNOT EVER be reproduced by scientific efforts.
  • universeness
    6.3k
    OK, I thought you were suggesting that AI would have avoided war with Hitler given the same circumstances. You are instead proposing that the entire world has already been conquered and the AGI would keep it that way.noAxioms

    You are citing old habits. Conquest is not the only way to achieve unity!
    I envisage an AGI/ASI would have an intelligence level that supersedes any base notions invoked via human primal fear. It would protect sentient life against threats to its continued existence, as it would have a very real and deep understanding of how purposeless the universe is without such lifeforms.
    That is either a very arrogant assumption on my part, or it's a truth about our existence in the universe.
    I have always thought that the wish or need to 'conquer' is a mental abnormality and is pathological.
    I fully accept the necessity to protect, but not the need to conquer.
    Do you remember this Star Trek episode? Perhaps the 'Organians' are like a future ASI:

    The Organians, or a future ASI, would have many ways to stop pathological narcissistic sociopaths like Hitler, or even relative failures like Trump. Perhaps they could even treat their illness.
  • Athena
    2.9k
    Not current AI no. Do you reject the idea of a merging of the human brain with a future cybernetic body (cyborgs) or a cloned body or some combination of tech/mecha and orga?
    I dont understand why you think any process/sensation/feeling that you have ever experienced in your body an interpreted in your mind, CANNOT EVER be reproduced by scientific efforts.
    universeness

    Yes, it is the same guy. I want to talk about that in the thread for that subject but not this thread.

    I have a preference for life on this earth being organic. I was thrilled with the internet when it first came up, but hate what has been done to it. Opening AI to everyone is like giving a teenager the keys to a car and ignoring that Saturday night is a party night and all may not go well. We have some serious problems and need to stop here for a while and contemplate what we are doing and where we want to go with this.

    But I also have a spiritual concern as well. It goes with wanting to preserve the organic earth and valuing humans. I think valuing AI more than we value humans, and nature, can be a path into the darkness. I want to be very clear about this. I am concerned about how much we value humans.
  • universeness
    6.3k
    I am talking with a man who has right frontal brain damage, and getting angry with him because he does not understand what I am saying. That is pretty stupid. For several years I worked with a mildly retarded guy who never got upset when someone didn't understand something as simple as sweeping the floor. He could relate to not understanding and would help the person understand. While I am instantly screaming at someone for being an idiot. Who is the idiot? It is not easy being human and really, I don't understand why it is so hard, but my emotions make me behave like an idiot even when I know better. So what is up with these emotions?Athena

    Yes, it is the same guy. I want to talk about that in the thread for that subject but not this thread.Athena

    Perhaps a future automated system will be better able to 'assist' the person you have taken such a laudable responsibility for. I hope you have not taken on more than you can cope with.

    We have some serious problems and need to stop here for a while and contemplate what we are doing and where we want to go with this.Athena

    This is always good advice! Stop, pause and think, especially if we are trying to cope with stuff that's too destructive to us, and perhaps we need to reconsider what needs to be done to regain 'balance.'

    But I also have a spiritual concern as well. It goes with wanting to preserve the organic earth and valuing humans. I think valuing AI more than we value humans, and nature, can be a path into the darkness. I want to be very clear about this. I am concerned about how much we value humans.Athena

    I don't think we value AI more than we value humans. I think we are just musing about the projections of AI into AGI/ASI, in the future. I don't recognise any aspect of 'me' that I associate with or connect to the term 'spiritual,' as a 'supernatural' conception, if that is how you are employing the term.
    You did not answer my questions:
    Do you reject the idea of a merging of the human brain with a future cybernetic body (cyborgs) or a cloned body or some combination of tech/mecha and orga?
    I dont understand why you think any process/sensation/feeling that you have ever experienced in your body an interpreted in your mind, CANNOT EVER be reproduced by scientific efforts.
    universeness

    I noticed some error in the wording of my second query so I edited it and requoted it below:

    I don't understand why you think any process/sensation/feeling that you have ever experienced in your body and interpreted by your mind, CANNOT EVER be reproduced by scientific efforts.universeness
  • universeness
    6.3k
    @Alkis Piskas
    Are you aware of this lecture by Rupert Sheldrake (released to YouTube 2 months ago) regarding his theory of morphic resonance and morphic fields? It's 2.5 hours long but worth the watch. I knew about his work but I found this lecture, on how an aspect of 'mind' might reach beyond the restriction of brain and body, quite interesting.
    I think you would enjoy it, if you are not already very familiar with Rupert and his work.

  • Athena
    2.9k
    Do you reject the idea of a merging of the human brain with a future cybernetic body (cyborgs) or a cloned body or some combination of tech/mecha and orga?
    I dont understand why you think any process/sensation/feeling that you have ever experienced in your body an interpreted in your mind, CANNOT EVER be reproduced by scientific efforts.
    universeness

    Yes, I do not think merging the human brain with a future cybernetic body is a good idea. Our brains are limited and I think we need to understand the limits and stay within them. There are concerns about what could happen to our brains and also what could happen to AI.

    "I, Robot," starring Will Smith, is about an attempted robot takeover. The original Star Trek TV series addressed the potential of people being under the control of a computer. The British show "Humans - made in our image, out of our control" is about robots having self-awareness as humans do. It offers many things to think about. I so wish we could sit together and watch these shows and discuss them.

    Also, how far can we go in a discussion of feelings? Exactly what is required to have a feeling? Why do we have feelings? Would we be better without feelings? Star Trek also addressed the question of the good of our feelings. Joseph Campbell said Star Trek is the best mythology for our time. The Greeks shared a mythology and there are many benefits to having a shared mythology. You and I have the problem of no shared mythology and it is hard to build a debate without a shared understanding of what we are talking about.
  • Athena
    2.9k
    Are you aware of this lecture by Rupert Sheldrake (released to YouTube 2 months ago) regarding his theory of morphic resonance and morphic fields? It's 2.5 hours long but worth the watch. I knew about his work but I found this lecture, on how an aspect of 'mind' might reach beyond the restriction of brain and body, quite interesting.universeness

    Seriously?! Have you read José Argüelles's book "The Mayan Factor"? It is all about the Mayan understanding of morphic resonance and our cosmic connection with the universe. Some of José Argüelles's thoughts are too weird, but if you want to talk about morphic resonance his book should be part of the discussion. Here is a way of seeing reality in a different way....

    1. The Pulsation-Ray of Unity.
    2. The Pulsation-Ray of Polarity.
    3. The Pulsation-Ray of Rhythm.
    4. The Pulsation-Ray of Measure.
    5. The Pulsation-Ray of the Center.
    6. The Pulsation-Ray of Organic balance.
    7. The Pulsation-Ray of Mystic Power.
    8. The Pulsation-Ray of Harmonic Resonance.
    9. The Pulsation-Ray of Cycle Periodicity.
    10. The Pulsation-Ray of Manifestation.
    11. The Pulsation-Ray of Dissonant Structure.
    12. The Pulsation-Ray of Complex Stability.
    13. The Pulsation-Ray of Universal Movement.
  • Alkis Piskas
    2.1k

    Hi. Yes, I know and I like this guy. Thanks for this ref., but the video is too long.
  • universeness
    6.3k
    Seriously?Athena

    Well, I am interested in how relatively respected scientists such as Sheldrake 'evidence' claims such as morphic resonance, morphic fields, habits (as a means of a 'natural' growth in the ability of a system to become more able to perform a process over time) and telepathy.

    I was interested in the validity of the evidence he references in the video I posted:
    1. How the 'melting points' of materials increase as their 'purity' is increased over time, eventually becoming a 'constant.'
    2. How crystallisation becomes naturally more efficient over time.
    3. The examples of morphic resonance he claims are exemplified/evidenced in the movements of flocks of birds, schools of fish, etc.
    4. His examples of events that most people would explain through 'coincidence,' but that he claims are examples of a morphic resonance/field, such as thinking about someone you have not encountered for years, who then all of a sudden gets in contact with you.
    5. His connection of morphic resonance to quantum phenomena such as entanglement.
    6. His examples of forms of 'telepathy,' and his evidence, using a particular parrot and its owner, and dogs and cats who seem to indicate that they know when their owner is on their way home, even when the owner is still many miles away, or has not even started the journey home yet but has decided to come home, and this (via the morphic field) becomes known to an animal that has a close relationship with the owner.
    7. His experiments involving rats and mazes, and his results of increasing ability in each new generation of rats and in rats all over the world, not directly connected to the original experiments.
    8. His 'positive results' when performing his 'who is phoning me?' experiments, where the subject is given 4 choices.
    His use of the pop group the Nolans, where one sister correctly predicted which of her other 4 sisters was phoning her, before she answered. The 50% success rate she achieved was much higher than the 25% success rate that chance alone would predict. He claims he has performed hundreds of such experiments with similar results.
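    Whether a 50% hit rate against a 25% chance baseline is surprising depends entirely on how many calls were logged, a detail not given here. A quick binomial calculation makes the point (the trial counts 4, 20 and 100 below are purely illustrative assumptions, not Sheldrake's actual sample sizes):

    ```python
    from math import comb

    def tail_prob(n, k, p=0.25):
        """Chance of getting at least k hits in n guesses,
        if each guess has probability p of being right by luck alone."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # A 50% hit rate when chance alone gives 25% per guess:
    print(tail_prob(4, 2))     # ~0.26 -- unremarkable over 4 trials
    print(tail_prob(20, 10))   # ~0.014 -- suggestive over 20 trials
    print(tail_prob(100, 50))  # under 1 in a million over 100 trials
    ```

    So the same 50% figure is near-meaningless over a handful of calls but extraordinary over a hundred, which is why the number of trials behind such results matters as much as the hit rate itself.
    
    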

    I was very entertained by the lecture and his evidence.
    I remain unconvinced that morphic resonance, morphic fields, habit forming systems and telepathy are real, and can be irrefutably demonstrated, using the evidence Sheldrake has built up over the years of his career. I do not totally hand wave his evidence away and call him a crank and a charlatan, as some have chosen to do. I find some of his evidence interesting, and science and scientists have the responsibility to either prove him completely wrong, or accept there is some value to his claims.

    I also think that even if all his evidence is sound, it could simply mean that humans and other species have another 'sense' system that we do not fully understand, but one that is still fully sourced in the brain.

    No, I have not heard of José Argüelles, or read his book about the Mayans.
    A quick Google search identified him as a now deceased 'new age author and artist,' with a PhD in art history and aesthetics.
    When it comes to hypotheses, I consider empirical evidence to be the final arbiter, so although 'new age authors and artists' with PhDs can indeed be entertaining and very knowledgeable in their specialist subjects, I prefer the work of people like Sheldrake, which is also entertaining, but which also has some real science behind it.

    This sentence from wiki also reduced José to the level of a peddler of woo woo, for me.
    "Argüelles' significant intellectual influences included Theosophy and the writings of Carl Jung and Mircea Eliade. Astrologer Dane Rudhyar was also one of Argüelles' most influential mentors."
    The words I underlined make me go :rofl:
  • universeness
    6.3k
    Yes, I do not think merging the human brain with a future cybernetic body is a good idea. Our brains are limited and I think we need to understand the limits and stay within them. There are concerns about what could happen to our brains and also what could happen to AI.Athena

    Does this not contradict your claim that a future AI system cannot have a body capable of the same, or very similar, emotional sensations as a current human body?
    If you believe that a human brain could exist within a cybernetic body, then the capability of that cybernetic system to mimic or emulate or fully reproduce, every function and every emotional capability that a human biological system can currently demonstrate, becomes a matter of solving engineering problems.
  • noAxioms
    1.3k
    Conquest is not the only way to achieve unison!universeness
    Well, the alternative seems to be every world leader voluntarily ceding power to a non-human entity. I'm sure none of them will have a problem with that. Imagine an AGI (seems totally benevolent!) created by the Russians and the UK is required to yield all power to it. Will they?

    In what way will it hold that power? Sure, it can recommend decisions to make, but that just puts it in an advisory role. Its word has to be law, or else something (what?). I am just trying to envision how it works. It can't just be a program running on some servers, since everybody could just decide to ignore it and that would be that. How does a human do it? How does some king prevent everybody from just suddenly ignoring him? I'm not really challenging your idea, but am exploring what it means to be in power, and what would be needed. I think kings do it with loyalists and threats of forceful nastiness to those who are not, but our AI is supposed to find a better way than that.

    I envisage an AGI/ASI ... would protect sentient life against threats to its continued existence, as it would have a very real and deep understanding of how purposeless the universe is without such lifeforms.
    This is a human conclusion. The AGI might well decide that, being superior, it is the better thing to give the universe purpose. I of course don't buy that, because I don't think the universe can have a purpose; but assuming it can, how would the AGI not be the better thing to preserve, or at least to create its successor, and so on?

    That brings up another interesting point. If the AGI has any kind of sense of survival, why would it design and create a better AGI? That, after all, is the definition of the singularity: a machine that can create its own superior successor. Currently we seem to be nowhere near that, but such things tend to blindside everybody.

    That is either a very arrogant assumption on my part, or it's a truth about our existence in the universe.
    To suggest a purpose to the universe is to suggest it was designed. I cannot think of a purposeful thing that isn't designed, even if not intelligently designed.

    Do you remember this Star Trek episode? Perhaps the 'Organians' are like a future ASI.
    The Organians, or a future ASI, would have many ways to stop pathological narcissistic sociopaths like Hitler, or even relative failures like Trump. Perhaps they could even treat their illness.
    Things usually work out in fiction because they have writers who make sure the good guys prevail.

    You never answered how an AGI might have prevented war with Hitler. I admit that intervention long before they started their expansion would have prevented the whole world war, but what kind of intervention, if something like war is off the table? Preventing them from building up a military in the first place seems like a good idea in hindsight, but it certainly didn't seem like the obvious course of action at the time. The AGI says, hey, don't do that, and Hitler doesn't even bother to respond. Sanctions, etc., ensue, but that didn't work with Russia either.

    In a way, these questions are unfair because I'm asking a low intelligence (us) what a superior intelligence (AGI) would do, which is like asking squirrels how to solve an economic crisis.
  • universeness
    6.3k
    To suggest a purpose to the universe is to suggest it was designed. I cannot think of a purposeful thing that isn't designed, even if not intelligently designed.noAxioms
    Not at all, as the 'purpose' I am suggesting only exists as an emergence of all the activity of that which is alive and can demonstrate intent and purpose, taken as a totality.
    No intelligent designer for the universe is required for an emergent totality of purpose, within the universe, to exist.

    I think the main issue you should consider is that you keep assuming that any 'cooperation' or merging/union with an AGI/ASI will be against what humans/sentient life want(s). Many humans will welcome such a union, as it will give them so many more options than they have now, in such areas as robustness, life span, capability, etc. We won't fight ASI; we will merge with it.
    I think @180 Proof would see such a merging as 'post-human,' and that may well be the case, but I am not so sure. I think it may turn out that allowing humans to continue to be born, experience life as a human, and then at some point 'choose' to merge/ascend to an orga/ASI stage of existence, may be the most beneficial way to exist, for all the components involved.

    You never answered how an AGI might have prevented war with Hitler. I admit that intervention long before they started their expansion would have prevented the whole world war, but what kind of intervention if something like war is off the table? Preventing them from building up a military in the first place seems like a good idea in hindsight, but it certainly didn't seem the course of action at the time. The AGI says, hey, don't do that, and Hitler doesn't even bother to respond. Sanctions, etc, ensue, but that didn't work with Russia either.noAxioms
    All production would be controlled by the ASI in a future, where all production is automated.
    No narcissistic, maniacal human, could get their hands on the resources needed to go to war, unless the ASI allowed it.
    As I suggested, Hitler's pathology would have been treated from a young age, and he would not have become the maladjusted person he grew up to be. The mental aberrations humans suffer from would be treated much more successfully than they are today.

    You are convinced that humans and a future AI will inevitably be enemies. You may be correct. Many people would agree with you. I am not so sure that this is the inevitable outcome. I think there will be a 'new age.' I think technological advance will surpass natural evolution completely. I think any moments of technical singularity will very soon after, involve 'augmentation'/'merging'/'union' of naturally organic life (such as humans,) and advanced artificial intelligence. I think that will prove to be the most beneficial outcome for all concerned. I think ASI will reach that conclusion very very quickly, because IT IS a super intelligence.

    Imagine an AGI (seems totally benevolent!) created by the Russians and the UK is required to yield all power to it. Will they?noAxioms

    I don't think it will matter which nation develops the first ASI capable of self-replication and self-augmentation. Once its growth moves beyond the control of humans, we will ALL most likely be at its ...... mercy ........ or perhaps it would be more accurate to say, we would be dependent on its super intelligence/reason and sense of morality.
