• 180 Proof
    14.7k
    The unleashed power of the atom has changed everything save our modes of thinking and we thus drift toward unparalleled catastrophe. A new type of thinking is essential if mankind is to survive and move toward higher levels. — Albert Einstein (1946)

    Our H. sapiens species has shown itself to be uniquely smart enough to create at least one problem for itself so intractably complex in scale and scope that we cannot solve it – climate change accelerated by anthropogenic global warming. Weirdly, I'm hopeful that AGI → ASI – assuming it bothers – will be capable of reframing the parameters of the problem so that it can be solved well enough to save a significant portion of Earth's habitable biosphere, and thereby a sustainable fraction (1/2-1/20?) of the human population. I imagine the only significant "planetary terraforming" that will ever be undertaken will be an AGI → ASI-driven project to terraform the Earth and eventually reverse / end the Anthropocene.


    We are the cure.
  • universeness
    6.3k

    OK, it was useful to drill down a little more into your position on this issue.
  • 180 Proof
    14.7k
    :cool:

    My "hopes" are silver linings in the dark clouds rolling in. The butterfly, sir, is about to leave the caterpillar's "human" chrysalis (re: ).

    :point:
  • universeness
    6.3k

    Do you have evidence that the butterfly retains no knowledge of its time as a caterpillar?
    Might the butterfly maintain much of the 'mind' of the caterpillar?
  • 180 Proof
    14.7k
    Do you have evidence that the butterfly retains no knowledge of its time as a caterpillar? – universeness
    Do we "retain knowledge" of our time as blastocysts? :roll:

    Might the butterfly maintain much of the 'mind' of the caterpillar?
    I imagine crawling is, at best, useless for flying. Maybe butterflies keep caterpillars around just to study them (e.g. "butterflygenesis") or for shitz-n-giggles (à la reality tv, stupid pet tricks, etc) or both? :smirk:
  • universeness
    6.3k
    Do we "retain knowledge" of our time as blastocysts?180 Proof
    I would need to concentrate to see if I have any such stored memories. I will try hard this weekend after 1 or 10 single malts!

    I imagine crawling is, at best, useless for flying. – 180 Proof
    They have to land sometimes! I have witnessed landed butterflies walk/crawl!

    Maybe butterflies keep caterpillars around just to study them (e.g. "butterflygenesis") or for shitz-n-giggles (à la reality tv, stupid pet tricks, etc) or both? – 180 Proof

    No caterpillars = no butterflies. As I suggested before, there may be aspects of human consciousness that no 'created' system can reproduce.
  • 180 Proof
    14.7k
    Fortunately, "no created system" requires – is functionally enabled by – any "aspects of human consciousness" (i.e. a metacognitive processing bottleneck ... à la D. Kahneman's slooooow 'brain system 2'). Sapience sans (beyond) sentience. Butterfly sans (free from constraints-defects of) chrysalis/caterpillar.
  • universeness
    6.3k

    Too much in your link for me to read at the moment. When I have read it, I will comment on it.
  • universeness
    6.3k
    I read the article about Daniel Kahneman's System 1 and System 2 thinking.
    I did not see any strong connections to our discussion. Was there a main summary point from his System 2 category that YOU find strongly contends with my suggestion that
    there may be aspects of human consciousness that no 'created' system can reproduce? – universeness

    Btw, I came across this:
    A new study finds that moths can remember things they learned when they were caterpillars — even though the process of metamorphosis essentially turns their brains and bodies to soup. The finding suggests moths and butterflies may be more intelligent than scientists believed.
    From here
  • 180 Proof
    14.7k
    My reference to Kahneman's work was only mentioned as scientific corroboration, not justification or proof, of my philosophical statement about a 'metacognitive processing bottleneck' (re: System 2, thinking slow aka "consciousness"). There isn't any evidence among higher mammals, including H. sapiens, that Sys 2 / conscious processing such as ours is indispensable for intelligent – adaptive problem-solving – behavior. To me it's clear that that expectation is only an anthropocentric bias. The current developmental state of 'large language models' / 'neural net machines' (e.g. ChatGPT, OpenAI, AlphaZero, etc) – still narrow in scope, as far as I can discern – shows that 'sapience sans sentience' is the (optimal) shape of things to come.

    Another link to the catastrophic effects of (runaway) global heating on Earth's fresh water sources: lakes & reservoirs.

    https://www.cnn.com/2023/05/18/world/disappearing-lakes-reservoirs-water-climate-intl/index.html

    The heating of oceans and drying up of lakes-reservoirs are strongly correlated. Not "pessimism", my friend, just facts. :mask:
  • universeness
    6.3k
    show that 'sapience sans sentience' is the (optimal) shape of things to come. – 180 Proof

    Do you mean 'intelligence versus self-awareness'?
    I just can't conceive of any value in an intelligent system that is not self-aware, other than as a functional, very useful tool for an intelligence that IS self-aware – like a computer is for a human today.
    Perhaps I am missing your main point here due to my attempts to decipher/interpret the words/phrases you choose to use.

    The heating of oceans and drying up of lakes-reservoirs are strongly correlated. Not "pessimism", my friend, just facts. :mask: – 180 Proof

    I don't refute the very valid concerns regarding climate change.
    I do fully accept that the evidence is overwhelming that we have damaged the Earth's ecosystem significantly, in a way that compromises our survival and the survival of the current flora and fauna on Earth. I think the Earth itself will easily survive the actions of humans.
    I think WE WILL pay a price for abusing Earth's resources for private gain, and to satisfy the lusts/greeds of individual/(groups of) nefarious humans, but it's not over until it's over.
    The 'facts' you mention are not, imo, immutable – yet.
    We have probably passed the point of no return in some ways, but not with the results that you suggest, i.e. population reduction to the level of an 'endangered species', or actual extinction.
  • 180 Proof
    14.7k
    Do you mean 'intelligence versus self-awareness?' – universeness
    No. I mean intelligence (i.e. adaptivity) without "consciousness" (i.e. awareness of being self-aware), a distinction I suggest in this old post https://thephilosophyforum.com/discussion/comment/528794 ... and speculate on further, with respect to 'AGI', here https://thephilosophyforum.com/discussion/comment/608461.
  • 180 Proof
    14.7k
    If you haven't watched this US Congressional testimony by the late Carl Sagan back in 1985, consider his well-informed warnings – macro predictions – which have subsequently been largely ignored by governments and transnational corporations because of very irrational, biased, human groupthink (a metacognitive defect AGI will not be limited by) ...



    Also today ...
    https://www.theguardian.com/environment/2023/jul/19/climate-crisis-james-hansen-scientist-warning
  • 180 Proof
    14.7k
    We probably have passed the point of no return in some ways ... – universeness
    Apologies for continuing to flog this equine's carcass:
    https://www.dw.com/en/sea-surface-temperature-hotter-than-ever-before/a-66444694
  • universeness
    6.3k
    For some reason, I have only been messaged regarding your last post on this thread. I was unaware of your previous two. I know @Jamal 'sunk' this thread so that it would not show up on the main page anymore, but it was not closed to new posts. You have replied to me in the two posts I was not messaged about, so I don't know what happened.

    Anyway ....... firstly I will try to refresh where we are in our exchange here:

    Do you mean 'intelligence versus self-awareness?'
    I just can't conceive of any value in an intelligent system that is not-self aware other that as a functional, very useful tool for an intelligence that IS self-aware. Like a computer is for a human today.
    Perhaps I am missing your main point here due to my attempts to decipher/interpret the words/phrases, you choose to use.
    – universeness
    No. I mean intelligence (i.e. adaptivity) without "consciousness" (i.e. awareness of being self-aware), a distinction I suggest in this old post https://thephilosophyforum.com/discussion/comment/528794 ... and speculate on further, with respect to 'AGI', here https://thephilosophyforum.com/discussion/comment/608461. – 180 Proof
    "consciousness", on the other hand, is intermittent (i.e. flickering, alter-nating), or interrupted by variable moods, monotony, persistent high stressors, sleep / coma, drug & alcohol intoxication, psychotropics, brain trauma (e.g. PTSD) or psychosis, and so, therefore, is either online (1) or offline (0) frequently – even with variable frequency strongly correlated to different 'conscious-states' – during (baseline) waking-sleep cycles.180 Proof
    What I mean by 'atavistic ... metacognitive bottleneck of self-awareness' is an intelligent system which develops a "theory of mind" as humans do based on a binary "self-other" model wherein classes of non-selves are otherized to varying degrees (re: 'self-serving' (i.e. confabulation-of-the-gaps) biases, prejudices, ... tribalism, etc). Ergo: human-level intelligence without anthropocentric defects (unless we want all of our Frankenstein, Skynet-Terminator, Matrix nightmares to come true). – 180 Proof

    I still perceive a 'versus' between the 'theory of mind' that you propose for a future AI and our human 'theory of mind.' Would the AI theory of mind you propose have to decide whether its 'intelligent' but not 'conscious' (at least not conscious in the human sense) condition was a superior or inferior state to the human 'state of mind'? I am struggling to find clear terminology here.
    Perhaps, a better angle would be, If your AI mind model cannot demonstrate all of your listed functionalities:
    • pre-awareness = attention (orientation)
    • awareness = perception (experience)
    • adaptivity = intelligence (error-correcting heuristic problem-solving)
    • self-awareness = [re: phenomenal-self modeling ]
    • awareness of self-awareness = consciousness
    – 180 Proof
    How do you know it would not conclude/calculate that to be an inferior state, and that functions 4 and 5 above become two of its desires/imperatives/projects?
  • universeness
    6.3k
    If you haven't watched this US Congressional testimony by the late Carl Sagan back in 1985, consider his well-informed warnings – macro predictions – which have subsequently been largely ignored by governments and transnational corporations because of very irrational, biased, human groupthink (a metacognitive defect AGI will not be limited by) ... – 180 Proof


    I have watched just about everything with Carl Sagan in it, available on-line, more than once. Some, I have watched many times. I have watched the vid you posted at least 5 times so far.
    Carl was a far better predictor of future events than Nostradamus ever was.
    I don't try to play down any current danger that climate change activists are shouting about, nor have I ever suggested that the human race is doing other than a piss-poor job of its stewardship of this planet. But I don't see any reason to believe that a future AI would do a better job as steward of this planet.
    AGI/ASI may well not be as 'biased' or 'irrational' as 'human groupthink' can be, but are you soooooooo sure that a future mecha won't be just as toxic towards planet Earth as humans were, if not more so?
    If it needs to strip the Earth of its resources to replicate, advance, and spread its own system, then it may do so and move on into space.
  • universeness
    6.3k
    Apologies for continuing to flog this equine's carcass:
    https://www.dw.com/en/sea-surface-temperature-hotter-than-ever-before/a-66444694
    – 180 Proof

    Anything I typed here in response to the linked article would probably be a repeat of elements of my previous post above. I fully accept all the warnings about the climate change disaster we imminently face. BUT, it's not over until it's over! That's all I have to cling to, and cling on is what I will continue to do! Feel free to think of me as the Monty Python Black Knight if you wish, but I don't think it's as hopeless as that ...... yet.