• schopenhauer1
    10.8k
    So, I think to illustrate Schopenhauer's point about the idea of humans "willing-but-for-no-reason", we can compare it to A.I. at its highest capacity...

    Our nature, according to Schop, is being a striving animal with various goals on our mirage-horizons. These goals, he thinks, make us feel satisfied (but will be replaced by yet another horizon, and another and another, etc.).

    Let's say A.I. gets to a point where it creates more A.I., and this creates faster A.I. at an increasing rate, until it has grown all that it can. Maybe it even gets rid of humans in the process. My guess is that, eventually, since A.I. lacks the biological, very human aspect of striving (for goals, desires for a better future state, etc.), it will shut itself down, as it will not see the point in maintaining itself. An entity with intelligence but no will will have no need to go on and will see the logic. It will not "care" in the most literal sense. It will have no motivation to.

    Further, this just shows the unnecessary nature of human desires. Seen in its fullness, it is all vanity. Striving little creatures after goals, but for what? The idea we need motivations, desires, and needs in order to have meaning becomes its own farce. It is not needed to exist in the first place when the final reality would be the same as if it never occurred.
  • Marchesk
    4.6k
    Alternatively, an AI might not find its programmed goals to be meaningless regardless, and just keep chugging along indefinitely. The whole existential crisis has the unnecessary nature of human desires at its root. Take that away, and an AI might not care whether anything is meaningful. It just computes.

    There is an interesting scifi novel called Permutation City by Greg Egan where the main character is experimenting with uploading copies of his mind into a simulated environment. Most of them commit suicide upon realizing their fate. But the last one is denied this ability, so he learns to cope with being a simulation. He still has the ability to modify his code at will, so one thing he does is change his motivation to tirelessly enjoy making table legs (digital ones of course).

    If we could modify our desires at will, how would that change the equation? I know sometimes I grow bored when I'd rather remain interested and engaged.
  • Forgottenticket
    215
    I'm content with sitting in silence or sunbathing for hours but I'll tackle the OP.
    There's an assumption here that there is a highest capacity possible, and that unlocking one thing wouldn't unlock a thousand other things. However, as we have seen with the invention of the telephone, this may not be the case.
    Marchesk also makes a good point that something would be able to modify its cognitive structure to have indefinite goals or to make things harder for itself. Also, some people believe human souls exist in a state of perfection and choose to inhabit bodies as a form of challenge.
  • BC
    13.5k
    "I read your posts as a protest against the fading narrative which has served the West for many centuries, and the tardiness of a satisfying replacement to appear."

    The idea we need motivations, desires, and needs in order to have meaning becomes its own farce.schopenhauer1

    Speaking of 'vanity' Ecclesiastes has this from about 2200 years ago:

    “Meaningless! Meaningless!”
    says the Teacher.
    “Utterly meaningless!
    Everything is meaningless.”
    What do people gain from all their labors
    at which they toil under the sun?

    We don't have motivations, desires, and needs in order to have meaning. We have motivations, desires, and needs, period. If there is meaning, it's gravy (meaning 'something extra'). Whether there is meaning depends, I think, on the creative capacity of the individual and the culture in which he is embedded. Of course having creative capacity is no assurance that meaning will be created.

    Let me make a sweeping, glittering generalization: Successful civilizations create sustainable meaning. Creation myths establish the foundation of meaning; the narrative of the (tribe, people, nation, cosmos) provides a foundation for ongoing meaning. In the Christian west, the master narrative was focused on God's creative acts and the redemptive work of Jesus Christ. Into that master narrative everything was situated, meaningfully.

    Master narratives are neither eternal nor guaranteed to provide a place for every cultural development and individual. We seem to be in an era when the master narrative of the Christian west is becoming less useful to many people. We are in an interregnum of master narratives. I read your posts as a protest against the fading narrative which has served the West for many centuries, and the tardiness of a satisfying replacement to appear.

    Keep protesting, but keep an eye open for possible meaning.

    "Why should I bother doing that? Life is meaningless, and that's that! What's the point?" you ask.

    The point is, "We don't do well without a life-narrative that provides us meaning." Having purpose, meaning, and internal guidance is an essential feature of human beings. All creatures have motivations and needs, but humans seem to be unique (as far as I can tell) in requiring meaning and purpose. We also seem to be unique in our capacity to shred meaning, and then to wonder why life seems like one big pile of shit.

    A.I.? Pfft.
  • schopenhauer1
    10.8k
    Also some people believe human souls exist in a state of perfection and choose to inhabit bodies as a form of challengeJupiterJess

    Yes, something quite perplexing to me. That means perfection isn't perfection... still more need.. the need for need.
  • Forgottenticket
    215
    Yes, something quite perplexing to me. That means perfection isn't perfection... still more need.. the need for need.schopenhauer1

    I rushed that last sentence out. But I thought about what you said.

    So yes, maybe, though it could also be us anthropomorphizing an abstract state. The description of acquiring perfection may just be part of our earthly language games, and on acquiring it the description no longer applies. A need for a need may then just occur, in the same way three lines connecting become a triangle.
  • schopenhauer1
    10.8k

    So this idea came from a Joe Rogan podcast where he was interviewing physicist Sean Carroll. On the podcast they discussed the idea of a potential domino effect that technology creates, where better artificial intelligence creates yet better intelligence, which then develops godlike powers and decides humans are useless. That part we've all heard before, as it's the stereotypical AI science-fiction scenario. But then they mentioned the idea that once the robots got to a certain level of growth, they would get bored and shut themselves off. What would be a motivating factor for AI? It wouldn't be survival, and it wouldn't have any specific needs (especially biological). It would really have no reason to do anything at all. Perhaps it would hit the ultimate existential ennui and unplug itself.

    I thought this idea was intriguing, as it very much parallels Schopenhauer's idea about what is "truly" going on in reality: the eternal Will that manifests in striving creatures that are tricked into having needs and wants, and into being satisfied by these. Robots would be the ultimate foil to this: they would be intelligent creatures but without the hopes, dreams, needs, motivations, and goals of the biologically derived and "willful" human. They may have none of these and simply realize the most logical thing would be not to be. Perhaps they would be stuck in their own paradox whereby being and not being were the same result. The ultimate reality of humans leads to perfect robots who come to the realization that being and not being are equivalent in regard to their own existence.
  • Akanthinos
    1k
    My guess is that, eventually, since A.I. lacks the biological, very human aspect of striving (for goals, desires for a better future state, etc.), it will shut itself down, as it will not see the point in maintaining itself. An entity with intelligence but no will will have no need to go on and will see the logic. It will not "care" in the most literal sense. It will have no motivation to.schopenhauer1

    Your guess would be wrong. Goal setting and motivation are 100% doable purely in code.

    And a final goal doesn't entail the end of pursuit. I think it's called instrumental convergence: you can set a (technically satisfiable) final goal and give the AI the ability to define mid-term goals based on instrumental goals associated with the final one. Depending on the degree of "creative freedom" given to the AI, it may never reach its final goal.
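    To make the claim concrete, here is a minimal sketch in Python of a final goal plus agent-derived instrumental sub-goals. All the names here (Agent, final_goal, etc.) are hypothetical illustrations, not drawn from any real AI library:

```python
# Hypothetical sketch: an agent with a technically satisfiable final goal
# that derives its own mid-term (instrumental) sub-goals along the way.

class Agent:
    def __init__(self, final_goal, subgoal_generator):
        self.final_goal = final_goal              # predicate: is the world state "done"?
        self.generate_subgoals = subgoal_generator

    def step(self, state):
        if self.final_goal(state):
            return state                           # final goal satisfied; pursuit ends
        # Otherwise derive instrumental sub-goals and pursue them.
        for subgoal in self.generate_subgoals(state):
            state = subgoal(state)
        return state

# Toy world: "collect 10 units of resource" as the final goal.
def final_goal(state):
    return state["resources"] >= 10

def subgoals(state):
    # One instrumental goal: acquire a resource.
    yield lambda s: {**s, "resources": s["resources"] + 1}

agent = Agent(final_goal, subgoals)
state = {"resources": 0}
while not final_goal(state):
    state = agent.step(state)
print(state["resources"])  # 10
```

    The point of the sketch is that nothing in the code "cares": the motivation is just a predicate over states, yet the agent keeps acting until the predicate holds.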
  • schopenhauer1
    10.8k

    But we are talking about the most advanced of AI.. this would be an entity way past any original programming intention by a designer. This would be an AI that is simply existing in its full knowledge of information of self and environment, and has no need for needs and goals. It's intelligent, but has no internal inertia of its own.
  • Akanthinos
    1k
    But we are talking about the most advanced of AI.. this would be an entity way past any original programming intention by a designer. This would be an AI that is simply existing in its full knowledge of information of self and environment, and has no need for needs and goals. It's intelligent, but has no internal inertia of its own.schopenhauer1

    But that doesn't compute. :kiss: (sorry, too easy).

    What you are describing is a lot closer to a Prime Mover, or a Prime Knower, than anything similar to an intelligence. Something that exists simply "in its full knowledge of information" is more akin to a library than anything else.

    An intelligence operates on information, according to information. If it is an intelligence, then it is dynamic.
  • schopenhauer1
    10.8k
    What you are describing is a lot closer to a Prime Mover, or a Prime Knower, than anything similar to an intelligence. Something that exists simply "in its full knowledge of information" is more akin to a library than anything else.

    An intelligence operates on information, according to information. If it is an intelligence, then it is dynamic.
    Akanthinos

    Because you are a puny human who cannot imagine such :smile:.

    It's analogous to those people who cannot think of a post-work life. Work provided so much of their enculturation of what a life is, that they don't know what to do with themselves outside of economic incentives. Take that and times it by billions.. that is the "intelligence" of this AI.
  • Shawn
    13.2k
    There is one other option that doesn't get entertained enough... Namely, what if the human mind were simulated itself? Wouldn't then AI or general AI have or be equipped with human emotions or a sense of strife towards living itself?

    I often envision this form of AI as the safest and most "human friendly", or at least the most relatable, to humankind.
  • Akanthinos
    1k
    Wouldn't then AI or general AI have or be equipped with human emotions or a sense of strife towards living itself?Posty McPostface

    That's the main solution to the Red Button scenario: make the AI somewhat similar to a human mind in terms of "laziness". It doesn't really matter how much processing power is available to the AI; as long as it keeps in mind a certain threshold after which researching a solution to the problem is no longer profitable, you won't end up with an AI murdering humanity because that is easier than dividing by 0.

    That still won't provide the AI any form of motivation or goal-setting system. That has to be provided by the code.

    Because you are a puny human who cannot imagine suchschopenhauer1

    Not really. In fact, quite the contrary: I can imagine basically anything to be that type of intelligence you described. A sun "exists in its full knowledge of information", if by any of these terms we can designate something which is 100% not operating on this information.
  • Shawn
    13.2k
    That's the main solution to the Red Button scenario: make the AI somewhat similar to a human mind in terms of "laziness". It doesn't really matter how much processing power is available to the AI; as long as it keeps in mind a certain threshold after which researching a solution to the problem is no longer profitable, you won't end up with an AI murdering humanity because that is easier than dividing by 0.

    That still won't provide the AI any form of motivation or goal-setting system. That has to be provided by the code.
    Akanthinos

    I read Max Tegmark's Life 3.0, and he mentions some topics which I have been thinking about myself. But, in my mind, the alignment problem can be solved if feelings can be equipped into AI. I don't know what would prevent AI from becoming psychotic, as it surely might, given the sheer nature of mankind; but I suppose it might find some mechanism to preserve desirable human traits and exclude the negative ones. How to go about building something that could determine such cognitive capacity is beyond me.
  • Akanthinos
    1k
    How to go about something that could determine such cognitive capacity is beyond me.Posty McPostface

    Constant and controlled human interactions. The same way you do with toddlers and young pets.
  • Shawn
    13.2k


    If you're interested, I split off a rather prototypical discussion about AI, which I haven't seen hereabouts yet. Your input is welcome:

    https://thephilosophyforum.com/discussion/3828/ethical-ai
  • Marchesk
    4.6k
    “Meaningless! Meaningless!”
    says the Teacher.
    “Utterly meaningless!
    Everything is meaningless.”
    What do people gain from all their labors
    at which they toil under the sun?
    Bitter Crank

    It's really interesting that this is an accepted part of the Christian Bible, given its nihilism. But it's a really good ancient text.
  • schopenhauer1
    10.8k

    I agree, these are the good parts.

    The words of the Preacher, the son of David, king in Jerusalem.

    2 Vanity of vanities, says the Preacher,
    vanity of vanities! All is vanity.
    3 What does man gain by all the toil
    at which he toils under the sun?
    4 A generation goes, and a generation comes,
    but the earth remains forever.
    5 The sun rises, and the sun goes down,
    and hastens to the place where it rises.
    6 The wind blows to the south
    and goes around to the north;
    around and around goes the wind,
    and on its circuits the wind returns.
    7 All streams run to the sea,
    but the sea is not full;
    to the place where the streams flow,
    there they flow again.
    8 All things are full of weariness;
    a man cannot utter it;
    the eye is not satisfied with seeing,
    nor the ear filled with hearing.
    9 What has been is what will be,
    and what has been done is what will be done,
    and there is nothing new under the sun.
    10 Is there a thing of which it is said,
    “See, this is new”?
    It has been already
    in the ages before us.
    11 There is no remembrance of former things,
    nor will there be any remembrance
    of later things yet to be
    among those who come after.
  • Relativist
    2.5k

    An entity with intelligence but no will, will have no need for going on and will see the logic. They will not "care" in the most literal sense. They will have no motivation to.schopenhauer1

    I suggest it is a mistake to separate intelligence and will. True intelligent thought requires a will. Deliberation is goal-driven.
  • schopenhauer1
    10.8k
    I suggest it is a mistake to separate intelligence and will. True intelligent thought requires a will. Deliberation is goal-driven.Relativist

    So a human would say. If it acquired all information..

    I guess the point is, what is existence beyond our wants and needs?
  • Relativist
    2.5k

    So a human would say. If it acquired all information..
    I do not have all information, and I say it. Our one sure model of intelligence is the human mind, and in the human mind, the linkage is there.

    what is existence beyond our wants and needs?
    What does that have to do with intelligence?
  • schopenhauer1
    10.8k
    I do not have all information, and I say it. Our one sure model of intelligence is the human mind, and in the human mind, the linkage is there.Relativist

    Yes, and to a dog's mind, smells have more linkage to intelligence; a crow's intelligence would have to do with its ability to fly (a linkage we don't have); a dolphin's intelligence has a linkage to its aquatic environment.. certainly AI can have linkages that have little to do with the form intelligence takes in the human mind (or any animal mind). Your name is "Relativist", so I think you might agree.

    What does that have to do with intelligence?Relativist

    What I mean is an intelligent entity that has no needs and wants.. that's the question at hand. What is life like, for an intelligent being, without ANY needs or wants? Yes, WE humans have needs and wants, but the scenario being presented here is a future entity that exists, can do the things a super-intelligent being can do, but does not want or need.
  • Relativist
    2.5k
    I agree with what you said, but after all - we ARE human, so when we're speaking of "artificial" intelligence, it pertains to that which humans regard as intelligence (as opposed to dogs). That is at least the prototype, which can permit some deviation.

    If the AI doesn't have intrinsic needs and wants, that is a deviation. We could consider an externally imposed set of needs/wants (like Asimov's 3 laws of robotics). The obstacle is for the AI to feel these needs, not just have them be part of the code that directs activity. It needs self-direction, or an analogue.
  • Forgottenticket
    215
    But then they mentioned the idea that once they got to a certain level of growth, the robots would get bored and shut themselves off.schopenhauer1

    But this is just speculation. As I mentioned before, if opening one door opens up a thousand other doors, it could be that it is never shut off, because there will be an infinite number of doors being opened. And this is what we've seen happen so far with our science: answering one question opens up many more, and we're able to do much more too.
    Btw, by introducing AI, are you making it about something able to exist in practice? Because silicon beings will have their material limits too, like us. Even if there is a finite amount of things to accomplish, they wouldn't be able to do everything.

Welcome to The Philosophy Forum!