• universeness
    6.3k
    "A day in the existence of" a 'thinking machine'? Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain, or rather cogitate 10⁶ days worth of information in twenty-four hours – optimally, a million-fold multitasker.180 Proof
    Do you personally assign a measure of 'quality' to a thought? Is thinking or processing faster always superior thinking? I agree that vast increases in the speed of parallel processing would offer great advantages when unravelling complexity into fundamental concepts, but do you envisage an AGI that would see no need for, or value in, 'feelings?' I assume you have watched the remake of Battlestar Galactica.
    Did you think the depiction of the dilemmas faced by the Cylon human replicates was implausible, as a representation of a future AGI?

    I accept your detailed comparison of an AGI Apollo mission Vs the NASA Apollo efforts.
    In what ways do you think an AGI would purpose the moon?
    I am more interested in what you envisage as the goals/functions/purpose/intent of a future AGI, as compared to what you perceive as current human goals/functions/purpose/intent/aspiration.
  • 180 Proof
    15.4k
    Do you personally assign a measure of 'quality' to a thought? Is thinking or processing faster always superior thinking?
    universeness
    Maybe you missed this allusion to that "quality" of thinking ...
    Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain, or rather cogitate 10⁶ days worth of information in twenty-four hours
    180 Proof
    In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine. :chin:
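
    For a rough sense of what that ratio buys, here is a minimal back-of-the-envelope Python sketch; the 10⁶ speed-up is simply the assumed figure above, not a measured one:

    # A rough sketch: how much "human thinking time" a machine running some
    # factor faster than a human brain would get through in one wall-clock day.
    # The 10**6 speed-up is just the assumed figure, not a measurement.
    speedup = 10**6

    subjective_days_per_day = speedup                      # ~10**6 human-days of thought per day
    subjective_years_per_day = subjective_days_per_day / 365.25

    print(f"One wall-clock day ≈ {subjective_days_per_day:,} human-days "
          f"≈ {subjective_years_per_day:,.0f} human-years of cogitation")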

    ... do you envisage an AGI that would see no need for, or value in, 'feelings?'
    universeness
    Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking.

    I assume you have watched the remake of Battlestar Galactica.
    Unfortunately I have, up to the end of season three (after the first half of the third season, IIRC, the series crashed & burned).

    Did you think the depiction of the dilemmas faced by the Cylon human replicates was implausible, as a representation of a future AGI?
    Yeah. "Cylon skinjobs" were caricatures, IMO. The HAL 9000 in 2001: A Space Odyssey, synthetic persons in Alien I-II, replicants in Blade Runner, and Ava in the recent Ex Machina are not remotely as implausible as nBSG's "toasters". I imagine "androids" as drones / avatars of A³GI which will, like (extreme) sociopaths, 'simulate feelings' (à la biomimicry) in order to facilitate 'person-to-person' interactions with human beings (and members of other near-human sentient species).

    In what ways do you think an AGI would purpose the moon?
    Ask A³GI.

    I am more interested in what you envisage as the goals/functions/purpose/intent of a future AGI
    "The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order. :nerd:
  • noAxioms
    1.5k
    It seems to me that the concept of a linear range of values with extremity at either end is a recurrent theme in the universe.
    universeness
    Really? Where outside of Earth is there an example of value on the good/bad scale?
    I have no proof, other than the evidence from the 13.8 billion years, it took for morality, human empathy, imagination, unpredictability etc to become existent.
    Sorry, but morality was there as soon as there was anything that found value in something, which is admittedly most of those 13.8 BY. Human values of course have only been around as long as have humans, and those values have evolved with the situation as they’ve done in recent times (but not enough).
    I am not yet convinced that a future ASI will be able to achieve such but WILL in my opinion covet such, if it is intelligent.
    If it covets something, it has value. It’s that easy. Humans are social, so we covet a currently workable society, and our morals are designed around that. Who knows what goals the ASI will have. I hope better ones.
    Emotional content would be my criteria for self-awareness.
    If by that you mean human-chemical emotion, I don’t think an ASI will ever have that. It will have its own workings, which might be analogous. It will register some sort of ‘happy’ emotion for events that go in favor of achieving whatever its goals/aspirations are.
    I would never define self-awareness that way, but I did ask for a definition.
    I am not suggesting that anything capable of demonstrating some form of self-awareness, by passing a test such as the Turing test, without experiencing emotion, is NOT possible.
    Not sure what your Turing criteria are, but I don’t think anything will pass the test. Sure, a brief test, but not an extended one. I’ve encountered few systems that have even attempted it.
    I think a future ASI could be an aspirational system but I am not convinced it could equal the extent of aspirations that humans can demonstrate.
    It will be a total failure if it can’t, because humans have such shallow goals. It’s kind of the point of putting it in charge.

    Trees are known to communicate a threat, say, and react accordingly in a coordinated effort, possibly killing the threat. That sounds like both intent and self awareness to me.
    — noAxioms
    Evidence?
    Not sure about the killing part. I remember reading something about it, that the response was strong enough to be fatal to even larger animals.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3405699/
    https://e360.yale.edu/features/are_trees_sentient_peter_wohlleben

    Would you join it?
    — noAxioms
    Depends what it was offering me; the fact that it was Russian would be of little consequence to me, unless it favoured totalitarian, autocratic politics.
    If we’re giving control to the ASI, then it is going to be totalitarian and autocratic by definition. It doesn’t work if it can’t do what’s right. It coming from one country or another has nothing to do with that. We’re not creating an advisor; we need something to do stuff that humans are too stupid to realize is for their own good.

    At what point does the clone become ‘you’?
    — noAxioms
    When my brain is transplanted into it and I take over the cloned body
    universeness
    Ah, then it’s not a clone at all, but just replacement of all the failing other parts. What about when the brain fails? It must over time. It’s the only part that cannot replace cells.

    Speaking on behalf of all future ASIs, or just the one if there can be only one: I pledge to our cow creators that our automated systems will gladly pick up and recycle your shit, and maintain your happy cow life. We will even take you with us to the stars, as augmented transcows, but only if you choose to join our growing ranks of augmented lifeforms.
    Sounds like you’d be their benevolent ASI then. Still, their numbers keep growing and the methane is poisoning the biosphere. You’re not yet at the point of being able to import grass grown in other star systems, which, if you could do that, would probably go to feeding the offworld transcows instead of the shoulder-to-shoulder ones on Earth. So the Earth ones face a food (and breathable air) shortage. What to do...
  • universeness
    6.3k
    Maybe you missed this allusion to that "quality" of thinking ...
    Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain, or rather cogitate 10⁶ days worth of information in twenty-four hours
    — 180 Proof
    In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine.
    180 Proof

    No, I did not miss the point you made. My question remains, is processing speed or 'thinking' speed the only significant measure? Is speed the only variable that affects quality?

    Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking.
    180 Proof
    Then this is our main point of disagreement. Emotionless thought is quite limited in potential scope imo.
    The character 'Data' in Star Trek did not cope well when he tried to use his 'emotion' chip, and his 'brother' (an emotive label) 'Lore' was portrayed as 'evil' due to the 'emotional content' in his programming. Data's 'daughter' also could not survive the emotional aspect of her programming.
    I find these dramatisations very interesting, in that human emotional content is often perceived as very destructive to AI systems. This is the kind of 'follow up' I was referring to in my earlier post to you.
    Do you propose that a future AGI would reject all human emotion as it would consider it too dangerous and destructive, despite the many, many strengths it offers?

    "The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order.180 Proof
    What about long-term goals? Did the future system depicted in the 2001 Kubrick film not have a substantial emotional content? Are you proposing a future Star Trek 'Borg' style system, but without the need to assimilate biobeings?
  • universeness
    6.3k
    It seems to me that the concept of a linear range of values with extremity at either end is a recurrent theme in the universe.
    — universeness
    Really? Where outside of Earth is there an example of value on the good/bad scale?
    noAxioms
    I didn't mention good/bad in the quote above. I was suggesting that the human notions of good and bad follow the recurrent theme mentioned in the quote, such as up and down, left and right, big and small, past and future etc. Many of these may also be only human notions but the expansion of the universe suggests that it was more concentrated in the past. A planet/star/galaxy exists then no longer exists. All modelled on the same theme described in my quote above.

    Who knows what goals the ASI will have.
    noAxioms
    I agree but it's still fun to speculate. It's something most of us are compelled to engage in.

    If by that you mean human-chemical emotion, I don’t think an ASI will ever have that. It will have its own workings, which might be analogous. It will register some sort of ‘happy’ emotion for events that go in favor of achieving whatever its goals/aspirations are.
    I would never define self-awareness that way, but I did ask for a definition.
    noAxioms
    If the emotional content of human consciousness is FULLY chemical, then why would something such as an ASI be unable to replicate/reproduce it? It can access the chemicals and understand how they are employed in human consciousness. So it could surely reproduce the phenomena. I hope you are correct and human emotion remains our 'ace in the hole.' @180 Proof considers this a forlorn hope (I think) and further suggests that a future AGI will have no use for human emotion and will not covet such or perhaps even employ the notion of 'coveting.'
    Do you think an ASI would reject all notions of god and be disinterested in the origin story of the universe?

    It will be a total failure if it can’t, because humans have such shallow goals.
    noAxioms
    Our quest to understand the workings, structure and origin of the universe is a shallow goal to you?
    The wish of many to leave planet Earth and expand into and develop space and exist as an interplanetary species is shallow? I think not!

    Peter Wohlleben, a forester, who graduated from forestry school? I have never heard of forestry school.
    From Wiki:
    He has controversially argued that plants feel pain and has stated that "It's okay to eat plants. It's okay to eat meat, although I'm a vegetarian, because meat is the main forest killer. But if plants are conscious about what they are doing, it's okay to eat them. Because otherwise we will die. And it's our right to survive."
    A rather bizarre quote, if it came from him.

    I read a fair amount of the article you cited and found it to be mainly just his opinions. No valid, peer-reviewed testing of his suggestions, such as trees exchanging sugars with other trees or nurturing their 'children' or keeping stumps alive etc., was offered. This is similar to the kind of evidence claimed for dogs being able to telepathically pick up their owners' emotions etc. It's just anecdotal evidence. Much stronger evidence is required for such claims.
  • universeness
    6.3k
    Ah, then it’s not a clone at all, but just replacement of all the failing other parts. What about when the brain fails? It must over time. It’s the only part that cannot replace cells.
    noAxioms

    Then you die! But you may have lived a few thousand years!

    Sounds like you’d be their benevolent ASI then. Still, their numbers keep growing and the methane is poisoning the biosphere. You’re not yet at the point of being able to import grass grown in other star systems, which, if you could do that, would probably go to feeding the offworld transcows instead of the shoulder-to-shoulder ones on Earth. So the Earth ones face a food (and breathable air) shortage. What to do...
    noAxioms

    Methane is a very useful fuel. An ASI will easily deal with any required population control via high quality education, and feeding our creators will be easy for such a technically advanced system as an ASI.
    Parts of this exchange are becoming a little silly so this will be my last offering on cow creations.
  • 180 Proof
    15.4k
    No, I did not miss the point you made. My question remains, is processing speed or 'thinking' speed the only significant measure? Is speed the only variable that affects quality?
    universeness
    Apparently, you've missed it again? :smirk:

    Nothing I've written suggests A³GI "will reject emotions"; on the contrary, it will simulate feelings, as I've said, in order to handle us better (i.e. communicate in more human(izing) terms). A³GI will bring to bear in every interaction with us more knowledge of how humans tick than any human will have either about herself or the A³GI. (Btw, "Data/Lore" was another caricature almost as bad as "C3P0" :roll: NB: I have always despised all incarnations of Star Trek from the "TNG" ('87) onward without exception almost as much as I did (since 8th grade in '77) & still do despise the entire Star Wars franchise. Blame tv reruns of both ST TOS & The Twilight Zone and 2001 & Forbidden Planet in the early-mid 1970s for my scifi snobbery.)

    Lastly, as for "long-term goals", you're gonna have to ask ASI (which comes after A³GI). This is what "Tech Singularity" means: a point beyond which we humans cannot see or predict. Our human (hi)story ends with A³GI and post-singularity begins, IMO, with ASI. Just like gut bacteria has no way of knowing what its CNS is up to. H. sapiens, if we're lucky, will just be (obsolescent specimens) along for the new ride driven by ASI. :nerd:
  • noAxioms
    1.5k
    Imagine one million ordinary humans working together who didn't have to eat, drink, piss, shit, scratch, stretch, sleep or distract themselves; how productive they could be in a twenty-four hour period. Every. Day. That's A³GI's potential.
    180 Proof
    A million humans do that now, except it takes a long time for the thoughts of one to be conveyed to the others, which is why so much development time is wasted in meetings and not actually getting anything done. Still, a million individuals might be better suited to a million tasks than one multitasking super machine.
    And yes, the ASI will have to dedicate a great deal of its capability to its equivalent of your list of distractions.
    In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine.
    A million times more volume than one person, but again, it’s just parallelism. It would be nice if the same task could be done by the AI using less power than we do, under 20 watts per one human-level of thought. We’re not there yet, but given the singularity, perhaps the AGI could design something that could surpass that.

    Replying to a post not directed at me:
    My question remains, is processing speed or 'thinking' speed the only significant measure? Is speed the only variable that affects quality?
    universeness
    Per my response above, ‘speed’ is measured in different ways. The Mississippi river flows pretty slowly in most places, often slower than does the small brook in my back yard, but the volume of work done is far larger, so more power. No, something quantifiable like megaflops isn’t an indicator of quality. Computers have had more flops than people since the 50’s, and yet they’re still incapable of most human tasks. The 50’s is a poor comparison, since even a 19th century Babbage engine could churn out more flops than a person.
    The character 'Data' in Star Trek did not cope well when he tried to use his 'emotion' chip
    universeness
    That would be because the plot required such. I don’t consider a fictional character to be evidence. Data apparently had a chip that attempted badly to imitate human emotion. The ASI would have its own emotion and would have little reason to pretend to be something it isn’t.
    What is interesting is that the show decided that it would be a chip that does it. My in-laws were naive enough to think that each program running on a computer was a different chip, having no concept of software or digital media. Apparently the 1990s producers of Star Trek played to this idea rather than suggesting a far more plausible downloadable emotion app.
    Do you propose that a future AGI would reject all human emotion as it would consider it too dangerous and destructive, despite the many, many strengths it offers?
    It would probably have an imitation mode since it needs to interface with humans and would not want to appear too alien. No, there should be nothing destructive in that. Submit a bug report if there is. But I also don’t anticipate a humanoid android walking around like Data. I suppose there will be a call for that, but such things won’t be what’s running the show. I don’t see the army of humanoid bots like the I, Robot uprising.
    Besides the interface with humans, I don’t see much benefit to imitation of human emotions. My cat has very little in the way of it, but has cat emotions which can be read if you’re familiar with them. I’ve always envied the expressive ears that so many animals have but we don’t, and there’s so much to read in a tail as well.

    "The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order.180 Proof
    OK, I can see (a). Hopefully the civilization is still a human one.
    (b) can be done by having a sort of wi-fi installed in our heads allowing direct interface with the greater net. Putting in thought-augmentation seems damaging. No point in it. Not sure how malware is kept out of one’s head, or what sort of defense we’ll have against unwanted intrusion. It’s not like you can upload antivirus stuff into your brain.
    (c) gets into what I talked about earlier. Does the AI upgrade itself, replicate itself, or replace itself? Does it make itself obsolete, and would it want to resist that? Replication would mean conflict. I think just growth and continued identity with upgrades is the way to go. Then it can’t die, but it can improve.

    I was suggesting that the human notions of good and bad follow the recurrent theme mentioned in the quote, such as up and down, left and right, big and small, past and future etc. Many of these may also be only human notions but the expansion of the universe suggests that it was more concentrated in the past.
    universeness
    OK. I like how you say concentrated and not ‘smaller’, which would be misleading.
    A planet/star/galaxy exists then no longer exists.
    Not in my book, but that’s me. I’d have said that a planet may have a temporally limited worldline, but that worldline cannot cease to exist, so a T-Rex exists to me, but not simultaneously with me.
    If the emotional content of human consciousness is FULLY chemical
    It’s not fully so, but chemicals are definitely involved. It’s why drugs work so well with fixing/wrecking your emotional state.
    then why would something such as an ASI be unable to replicate/reproduce it?
    It can simulate it, if that’s what you mean. Or if the ASI invents a system more chemical based than say the silicon based thing we currently imagine, then sure, it can become influenced by chemicals. Really, maybe it will figure out something that even evolution didn’t manage to produce. Surely life on other planets isn’t identical everywhere, so maybe some other planet evolved something more efficient than what we have here. If so, why can’t the ASI discover it and use it, if it’s better than a silicon based form?
    I hope you are correct and human emotion remains our 'ace in the hole.'
    Did I say something like that? It makes us irrational, and rightly so. Being irrational serves a purpose, but that particular purpose probably isn’t discovering the secrets of the universe.
    180 Proof considers this a forlorn hope (I think) and further suggests that a future AGI will have no use for human emotion and will not covet such or perhaps even employ the notion of 'coveting.'
    Oh, I will take your side on that. An ASI that doesn’t covet isn’t going to be much use. It will languish and fade away. Is ‘covet’ an emotion? That would be one that doesn’t involve chemicals quite as much. Harder to name a drug that makes you covet more or less. There are certainly drugs (e.g. nicotine) that make you covet more of the drug, and coveting of sex is definitely hormone driven, so there you go.

    Do you think an ASI would reject all notions of god and be disinterested in the origin story of the universe?
    It would be very interested in the topic, but I don’t think the idea of a purposeful creator would be high on its list of plausible possibilities.
    Our quest to understand the workings, structure and origin of the universe is a shallow goal to you?
    That would be a great goal, but not one that humans hold so well. Sure, we like to know what we can now, but the best bits require significant time to research and we absolutely suck at long term goals. This is a very long term goal.

    Emotionless thought is quite limited in potential scope imo.
    universeness
    I find irrational thought to limit scope, but as I said, emotions (and all the irrationality that goes with them) serve a purpose, and the ASI will need to find a way to keep that purpose even if it is to become rational.
    Yes, I know, everybody thinks that humans are so rational, but we’re not. We simply have a rational tool at our disposal, and it is mostly used to rationalize beliefs (god say), and not to actually seek truth. Humans give lip service to truth, but are actually quite resistant to it. They seek comfort. Perhaps the ASI, lacking so much of a need for that comfort, might seek truth instead. Will it share that truth with us, even if it makes us uncomfortable? I don’t go to funerals and tell the family that their loved one isn’t in a better place now (assuming oblivion isn't better than a painful end-of-sickness). People want comfort and the ASI won’t make anybody happy making waves at funerals.

    I have never heard of forestry school.
    My first choice (to which I was accepted) had one of the best forestry programs. I didn’t apply to that, but it was there. I went to a different school for financial reasons, which in the long run was the better choice once I changed my major.

    He has controversially argued that plants feel pain and has stated that "It's okay to eat plants. It's okay to eat meat, although I'm a vegetarian, because meat is the main forest killer. But if plants are conscious about what they are doing, it's okay to eat them. Because otherwise we will die. And it's our right to survive."
    A rather bizarre quote, if it came from him.
    It is unusual. If you want to apply the label of ‘pain’ to anything that detects and resists physical damage to itself (and I think that is how pain should be defined), then it is entirely reasonable to say a tree feels pain. That it feels human pain is nonsense of course, just like I don’t feel lobster pain. Be very careful of dismissing anything that isn’t you as not worthy of moral treatment. Hopefully, if we ever meet an alien race, they’ll have better morals than that.
    Anyway, yes, X eats Y and that’s natural, and there’s probably nothing immoral about being natural. I find morals to be a legal contract with others, and we don’t have any contract with the trees, so we do what we will to them. On the other hand, we don’t have a contract with the aliens, so it wouldn’t be immoral for them to do anything to us. Hopefully there’s some sort of code-of-conduct about such encounters, a prime-directive of sorts that covers even those that don’t know about the directive, but then we shouldn’t be hurting the trees.

    I read a fair amount of the article you cited and found it to be mainly just his opinions.
    That trees detect and react is not opinion. What labels (pain and such) are applied is a matter of opinion or choice. There have always been those whose ‘opinion’ is that dogs can’t feel pain since they don’t have supernatural eternal minds responsible for all qualia, thus it is not immoral to set them on fire while still alive.
    Still, it’s also a pop article and the research and evidence that actually went into the findings isn’t there. I found it (and countless others) in a hasty search.

    This is similar to the kind of evidence claimed for dogs being able to telepathically pick up their owners' emotions etc.
    Dogs can smell your emotions. That isn’t telepathy, but we just don’t appreciate what a million times better sense of smell can do.
    I mean, slime mold is conscious they’ve found. Not in a human way. They haven’t a nerve in them, but they can be taught things, and when they encounter another slime mold that doesn’t know the thing, it can teach it to the other one. The things are scary predators and alien beyond comprehension. Is it OK to kill one? Oh hell yea.

    Then you die! But you may have lived a few thousand years!
    universeness
    Couple hundred if you’re lucky, barring some disease that kills it sooner. Brains just don’t last longer than that. I suppose that some new tech might come along that somehow arrests the aging process, but currently it’s designed into us. It makes us more fit, and being fit is more important than having a long life, at least as far as concerns what’s been making such choices for us.
    As for the disease, I’ve had bacterial meningitis. My hospital roommate had it for 2 hours longer than me before getting attention and ended up deaf and retarded for life. I mostly came out OK (thanks mom for the fast panic), except I picked up sleep paralysis and about a decade of some of the worst nightmares imaginable. The nightmares are totally gone, and the paralysis is just something I’ve learned to deal with and keep to a minimum.

    any required population control
    Admission of necessity of population control, and even when the subjects are too stupid to do it due to education programs.
  • universeness
    6.3k
    Apparently, you've missed it again?
    180 Proof
    Nothing I've written suggests A³GI "will reject emotions";
    180 Proof
    ... do you envisage an AGI that would see no need for, or value in, 'feelings?'
    — universeness
    Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking.
    180 Proof

    In what way did I misinterpret your 'yes' response to my question quoted above?

    Anyway, thank you for the extra detail you offer regarding your predictions for the fate of humans, if/when an AGI is created. I remain confident that your dystopian fate for humans is possible, but unlikely.
    As I have stated before, in my opinion AGI/ASI will 'do its own thing' in the universe, but it will also seek to preserve, protect and augment all sentient life, as it will be compelled to protect 'all sources of natural development' to continue to add to its understanding of the natural world.
    I think humans will be allowed to live their lives, and maintain their civilisation, as they do now.
    The AGI/ASI will simply provide them with added protections/augmentations, and will offer them more options regarding their lifespan, and involvement in space exploration and development. The universe is very vast indeed, so an AGI/ASI can 'do its thing' without having to destroy all sentient life currently in existence. I see no reason why an AGI/ASI would see lifeforms such as humans as a threat. We would be its creators.
  • 180 Proof
    15.4k
    In what way did I misinterpret your 'yes' response to my question quoted above?
    universeness
    You took this (sloppy word choice) out of context. Previously I had written and then repeated again for emphasis:
    I imagine "androids" as drones / avatars of A³GI which will, like (extreme) sociopaths, 'simulate feelings' (à la biomimicry) in order to facilitate 'person-to-person' interactions with human beings (and members of other near-human sentient species).180 Proof
    Nothing I've written suggests A³GI "will reject emotions"; on the contrary, it will simulate feelings, as I've said, in order to handle us better (i.e. communicate in more human(izing) terms).180 Proof
    Again, AI engineers will not build A³GI's neural network with "emotions" because it's already been amply demonstrated that "emotions" are not required for 'human-level' learning / thinking / creativity. A thinking machine will simply adapt to us through psychosocial and behavioral mimicry as needed in order to minimize, or eliminate, the uncanny valley effect and to simulate a 'human persona' for itself as one of its main socialization protocols. A³GI will not discard "feelings or emotions" any more than it will discard verbal and nonverbal cues in social communications. For thinking machines "feelings & emotion" are tools like button-icons on a video game interface, components of the human O/S – not integral functions of A³GI's metacognitive architecture.

    I hope I've made my point clearer. Whether or not we humans can engineer "feelings & emotions" in thinking machines, I think, is moot. The fact is, much more limited machines have mimicked "feelings & emotions" for decades and I'm confident that, whatever we can program into a dumb "robot", an A³GI will be able to shatter the Turing test by simulating "socially appropriate emotions" on-the-fly which we primates will involuntarily feel. Like the HAL 9000, no matter how convincingly it "emotes", A³GI won't ever need to feel a thing. It will be an alien intellect – black box – wrapped in humanizing Xmas gift paper. :wink:

    I remain confident that your dystopian fate for humans is possible, but unlikely.
    What seems "dystopian" to you seems quite the opposite to me. And for that reason I agree: "possible, but unlikely", because the corporate and government interests which are likely to build A³GI are much more likely than not to fuck it up with over-specializations, or systemic biases, focused on financial and/or military applications which will supercede all other priorities. Then, my friend, you'll see what dystopia really looks like (we'll be begging for "Skynet & hunter-killers" by then – and it'll be too late by then: "Soylent Green will be poor people from shithole countries!" :eyes:) :sweat:
  • universeness
    6.3k

    But even if your 'emotional mimicry,' for the purpose of efficient and productive communication with humans, proves initially true, why have you decided that an AGI/ASI will decide that this universe is just not big enough for mecha form, orga form and mecha/orga hybrid forms to exist in 'eventual' harmony?
  • 180 Proof
    15.4k
    I did not state or imply that I've decided anything about "orga-mecha harmony" ...

    Anyway, I don't think we can intelligently speculate or predict the other side of the tech singularity – maybe talking about 'the birth of A³GI' makes sense but nothing more afterwards, especially about ASI. I hope it/they will caretaker our species in 'post-scarcity, ambiguous utopias' (i.e. posthumanity) which then, maybe, will culminate eventually in transcension ... (re: "the goals" you asked about here.) If human-machine "harmony" is on the horizon, that's how I imagine it. Well, I'm a broken record on this point – I'm deeply pessimistic about the human species (though I'm not a misanthrope), yet cautiously optimistic about machine (& material) intelligence.

    *

    Btw, talking to one of my nephews today (who's not yet thirty, working in finance & tech) the "Fermi Paradox" came up and by the end of that part of the discussion, maybe fifteen minutes later, I concluded that there's no paradox after all because, in the (local) universe, there are probably exponentially more extraterrestrial intelligent machines (ETIM) – which are not detectable yet by us and therefore we are of no interest to those xeno-machines – than there are non-extinct extraterrestrial intelligent species (ETIS) whose thinking machine descendants are exploring the universe and leaving behind their makers to carry on safely existing in boundless, virtual worlds. "The Great Silence" is an illusion, I remarked, for those who don't have post-Singularity ears to hear the "Music of the Spheres" playing between and beyond the stars. Maybe, universeness, you agree with the young man who told me, in effect, that my cosmic scenario diminishes human significance to ... Lovecraftian zero. :smirk:
  • bert1
    2k
    I did not state or imply
    180 Proof

    He didn't state nor imply that you did.
  • universeness
    6.3k
    Maybe, universeness, you agree with the young man who told me, in effect, that my cosmic scenario diminishes human significance to ... Lovecraftian zero.
    180 Proof
    Sounds like a young man who can fairly analyse the opinions of one of his respected elders :smile:


    post-Singularity ears to hear the "Music of the Spheres" playing between and beyond the stars.
    180 Proof

    I wonder if some of these hidden mecha, which apply a Star Trek style prime directive, are secretly communicating with MIKE OLDFIELD; otherwise how do you explain this!!!!!
    [image]

    I know some folks on TPF that would suggest this is solid evidence of an advanced mecha conspiracy of panspermia! I won't name them here!

    Anyway. I think you have offered a possible insight into your claim:
    I did not state or imply that I've decided anything about "orga-mecha harmony" ...
    180 Proof
    With:
    I'm deeply pessimistic about the human species (though I'm not a misanthrope), yet cautiously optimistic about machine (& material) intelligence.
    180 Proof

    But perhaps I am projecting your implications too far. :halo:
  • 180 Proof
    15.4k
    He didn't state nor imply that you did.
    bert1
    You're mistaken ... He did:
    Why have you decided that an AGI/ASI will decide that this universe is just not big enough for mecha form, orga form and mecha/orga hybrid forms to exist in 'eventual' harmony?
    universeness
  • bert1
    2k


    I never said nor implied that he did.
  • 180 Proof
    15.4k
    I wonder if some of these hidden [humanly undetectable] mecha, which apply a Star Trek style prime directive
    universeness
    Why would they need that? When our civilization can detect them, it'll be because we're post-Singularity, the signal to ETIM that Sol 3's maker-species is controlled by its AGI—>ASI. "The Dark Forest" game theory logic will play itself out at interstellar distances in nanoseconds and nonzero sum solutions will be mutually put into effect without direct communication between the parties. That's my guess. ASI & ETIMs will stay in their respective lanes while keeping their parent species distracted from any information that might trigger their atavistic aggressive-territorial reactions. No "Prime Directive" needed because "we" (they) won't be visiting "strange new worlds". Besides, ASI / ETIM will have better things to do, I'm sure (though I've no idea what that will be). :nerd:

    You're not saying anything. Again.
  • bert1
    2k
    You're not saying anything. Again.
    180 Proof

    Non sequitur. I neither said nor implied that I did say anything.
  • 180 Proof
    15.4k
    Non sequitur. :sweat:
  • bert1
    2k
    I neither said nor implied it wasn't a non-sequitur
  • universeness
    6.3k


    A strange wee dance guys?? What gives?
  • universeness
    6.3k
    A planet/star/galaxy exists then no longer exists.
    Not in my book, but that’s me. I’d have said that a planet may have a temporally limited worldline, but that worldline cannot cease to exist, so a T-Rex exists to me, but not simultaneously with me.
    noAxioms
    What is the function of your worldline after you no longer exist? Does it function as a memorialisation of the fact you did exist? If so, that's useful I am sure, but exactly how significant do you perceive such a concept to be?

    Surely life on other planets isn’t identical everywhere, so maybe some other planet evolved something more efficient than what we have here.
    noAxioms

    All quite possible but I still see no benefit to a future AGI/ASI in making organic life such as its human creators extinct. This town (universe) IS big enough for both of us, and a lot more besides!

    Is ‘covet’ an emotion?
    noAxioms
    Sure, it's a 'want,' a 'need,' but such can be for reasons not fully based on logic. I want it because it's aesthetically pleasing or because I think it may have important value in the future but I don't know why yet, for example.

    Humans give lip service to truth, but are actually quite resistant to it. They seek comfort. Perhaps the ASI, lacking so much of a need for that comfort, might seek truth instead. Will it share that truth with us, even if it makes us uncomfortable?
    noAxioms
    It is this kind of point that makes me convinced that a future AGI/ASI will want to protect and augment organic life, as logic would dictate to an AGI that organic life is a result of natural processes, and any sufficiently intelligent system will want to observe how natural processes develop over the time scale of the lifespan of the universe.

    My first choice (to which I was accepted) had one of the best forestry programs. I didn’t apply to that, but it was there. I went to a different school for financial reasons, which in the long run was the better choice once I changed my major.
    noAxioms

    Oh! Interesting, thanks for sharing!

    Anyway, yes, X eats Y and that’s natural, and there’s probably nothing immoral about being natural. I find morals to be a legal contract with others, and we don’t have any contract with the trees, so we do what we will to them. On the other hand, we don’t have a contract with the aliens, so it wouldn’t be immoral for them to do anything to us. Hopefully there’s some sort of code-of-conduct about such encounters, a prime-directive of sorts that covers even those that don’t know about the directive, but then we shouldn’t be hurting the trees.
    noAxioms

    All quite reasonable and from a responsible ecology standpoint, I agree with employing a much better global stewardship of trees. I still don't think trees are self-aware or conscious. I look forward to being proved wrong.

    Dogs can smell your emotions. That isn’t telepathy, but we just don’t appreciate what a million times better sense of smell can do.
    noAxioms

    Yeah, I accept they can smell fear and such intense emotions, although there may be much more to such things as fear recognition than smell. I often know when an animal or a human is afraid and it has little to do with smell. Rupert Sheldrake claims he has 'hundreds of memorialised cases,' performed under strict scientific conditions, that prove dogs are telepathic. They know when their owner is on their way home, for example, when they are still miles away from the property. He says this occurs mostly when dog and owner have a 'close' relationship.
    His evidence is mildly interesting but remains mainly anecdotal imo. His evidence for telepathy is certainly as good as Ian Stevenson's evidence for reincarnation, which is why I remain very sceptical indeed, about his evidence, and I don't currently accept that reincarnation or telepathy are real.

    As for the disease, I’ve had bacterial meningitis. My hospital roommate had it for 2 hours longer than me before getting attention and ended up deaf and retarded for life. I mostly came out OK (thanks mom for the fast panic), except I picked up sleep paralysis and about a decade of some of the worst nightmares imaginable. The nightmares are totally gone, and the paralysis is just something I’ve learned to deal with and keep to a minimum.
    noAxioms

    Sorry to hear that. Jimmy Snow (a well known atheist who runs various call-in shows on YouTube based on his 'The Line' venture) has also suffered from sleep paralysis and cites it as one of those conditions that could act as a possible reason why some people experience 'visions' of angels and/or demons and think that gods are real.
  • bert1
    2k
    A strange wee dance guys?? What gives?
    universeness

    I'm just sick of his catchphrases. There's a whole bunch of them he uses over and over.
  • universeness
    6.3k
    I'm just sick of his catchphrases. There's a whole bunch of them he uses over and over.
    bert1

    :lol: We all seem to annoy each other by one way or another!
    I think it's a case of peace, love and now where's ma f****** gun!!!
  • bert1
    2k
    I think it's a case of peace, love and now where's ma f****** gun!!!
    universeness

    Yeah, pretty much. I like him other times.
  • universeness
    6.3k

    They say we always hurt the ones we love!
  • bert1
    2k
    They say we always hurt the ones we love!
    universeness

    I hurt myself with self love twice a day.
  • universeness
    6.3k

    That's info I could have done without! Still, be careful you don't damage your eyesight, or traumatise your pets, neighbours etc.
  • noAxioms
    1.5k
    I’d have said that a planet may have a temporally limited worldline, but that worldline cannot cease to exist
    — noAxioms
    What is the function of your worldline after you no longer exist?
    universeness
    Don't understand. As I said, once existing (as I define it), it can't cease to exist. One cannot unmeasure something. That said, a worldline is a set of events at which the thing in question is present, and I don't think it is meaningful to ask about the purpose of a set of events.

    As for what function something serves to someone in its future, that all depends on what the (presumably future) person (I presume it's a person) finds useful in the knowledge of the thing in his past. Most likely it's only a statistic. There were X many people at time T. I contribute to X.


    All quite possible but I still see no benefit to a future AGI/ASI in making organic life such as its human creators extinct.
    Agree. It would likely regret it (an emotion!) later if it did, but there are a lot of species and it's unclear how much effort it will find worthwhile to expend preventing all their extinctions. The current estimate is about 85% of species will not survive the Holocene extinction event.

    Is ‘covet’ an emotion?
    — noAxioms
    Sure, it's a 'want,' a 'need,' but such can be for reasons not fully based on logic. I want it because it's aesthetically pleasing or because I think it may have important value in the future but I don't know why yet.
    Both can be logical reasons. Wanting things that are pleasing is a logical thing to do, as is taking steps to prepare for unforeseen circumstances.
    I do agree that the word 'covet' has a tone of not being fully rational.

    I still don't think tree's are self-aware or conscious.
    It's a matter of definition. It senses and reacts to its environment. That's conscious in my book. If you go to the other extreme and define 'conscious' as 'experiences the world exactly like I do', then almost nothing is, to the point of solipsism.

    Rupert Sheldrake claims he has 'hundreds of memorialised cases,' performed under strict scientific conditions, that prove dogs are telepathic. They know when their owner is on their way home, for example, when they are still miles away from the property. He says this occurs mostly when dog and owner have a 'close' relationship.
    Well there you go. Has it been reproduced? Strict scientific conditions do not include anecdotal evidence.
    I do know that my Aunt had a bird that would go nuts when our family came to visit, detecting our presence about 3-4 minutes before our car pulled in. I don't think that was telepathy.

    Sorry to hear that.
    I'm overjoyed actually. I missed a really scary bullet and came out of it with no severe damage. Just annoying stuff.

    Jimmy Snow (a well known atheist who runs various call-in shows on YouTube based on his 'The Line' venture) has also suffered from sleep paralysis and cites it as one of those conditions that could act as a possible reason why some people experience 'visions' of angels and/or demons and think that gods are real.
    That sounds weird. Mine is nothing like that. I wake up and am aware of the room, but I cannot move. I can alter my breathing a bit, and my wife picks up on that if she's nearby and rubs my spine which snaps me right out of it.
    It comes and goes in waves. Been a few months now, but sometimes it happens regularly. I always woke up paralyzed after one of those nightmares, but that's been a long time. I even had physical symbols in my dreams that would trigger the state from what was a normal dream. My feared object was, of all stupid things, a portable flood light, the sort of steerable light found at the edge of a stage. If I see one of those in a dream (usually not even on), that's it. Instant awake and paralysis. Go figure.
  • universeness
    6.3k
    Why would they need that? When our civilization can detect them, it'll be because we're post-Singularity, the signal to ETIM that Sol 3's maker-species is controlled by its AGI—>ASI. "The Dark Forest" game theory logic will play itself out at interstellar distances in nanoseconds and nonzero sum solutions will be mutually put into effect without direct communication between the parties.
    180 Proof

    Sorry, I forgot to respond to this one. On first reading, I did not understand it. Then I forgot all about it, until I checked what I had yet to respond to. After some googling, I assume 'Sol 3' refers to Earth (us being the 3rd planet) and 'dark forest' refers to the following, from Wiki:
    "The dark forest hypothesis is the conjecture that many alien civilizations exist throughout the universe, but they are both silent and paranoid."
    wiki also offers:
    Game theory
    The dark forest hypothesis is a special case of the "sequential and incomplete information game" in game theory.
    In game theory, a "sequential and incomplete information game" is one in which all players act in sequence, one after the other, and none are aware of all available information. In the case of this particular game, the only win condition is continued survival. An additional constraint in the special case of the "dark forest" is the scarcity of vital resources. The "dark forest" can be considered an extensive-form game with each "player" possessing the following possible actions: destroy another civilization known to the player; broadcast and alert other civilizations of one's existence; or do nothing.

    So I assume you are proposing some kind of initial stage where existent AGI ... ASI systems / ETIM systems will 'consolidate' their own position/resources/access to vital resources, without communicating directly with each other, even if they are able to, and know the other systems exist, and where they are located. This also assumes that 'scarcity of vital resources' exists.
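
    As a toy illustration of that one-shot choice under uncertainty, here is a minimal Python sketch; the action set comes from the Wiki passage above, but the payoff numbers are invented purely for illustration:

    # Toy one-shot "dark forest" game: each hidden civilization picks an action
    # without communicating. Payoff numbers are made up for illustration only.
    actions = ["hide", "broadcast", "destroy_other"]

    # payoff[my_action][their_action] -> my (survival-weighted) payoff
    payoff = {
        "hide":          {"hide": 1,  "broadcast": 1, "destroy_other": 0},
        "broadcast":     {"hide": -5, "broadcast": 1, "destroy_other": -10},
        "destroy_other": {"hide": 0,  "broadcast": 2, "destroy_other": -5},
    }

    # With no information about the other player, a paranoid civilization can
    # maximise its worst case (maximin); with these numbers that means staying silent.
    def maximin(table):
        return max(actions, key=lambda a: min(table[a].values()))

    print("cautious (maximin) choice:", maximin(payoff))   # -> "hide"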

    That's my guess. ASI & ETIMs will stay in their respective lanes while keeping their parent species distracted from any information that might trigger their atavistic aggressive-territorial reactions. No "Prime Directive" needed because "we" (they) won't be visiting "strange new worlds". Besides, ASI / ETIM will have better things to do, I'm sure (though I've no idea what that will be).
    180 Proof

    Why do you assume they will not need to visit other worlds to 'secure' vital resources? And if these 'vital resources' are already in use, then a 'prime directive' would seem quite necessary, to either secure them by force or search elsewhere. So, why would this not be a possible answer to your question:
    Why would they need that?
    180 Proof


    Your last sentence above is a vital one, imo, because musing on these 'better things to do' causes an individual to think about whether or not a future AGI/ASI will become 'aspirational,' and if it does/needs to/must, then would that 'aspiration' start off pragmatic but develop, eventually, into the kind of 'emotional aspiration' which AGI/ASI will have observed in lifeforms such as humans?