• Nemo2124
    29
    Is AI a philosophical dead-end? The belief behind AI is that we can somehow replicate or recreate human thought (and perhaps, one day, emotions) using machinery and electronics. The technological leap forward of the past few years is heralded as progressive, but if it is the end-point of our development, is it not thwarting creativity and vitally original human thought? On a positive note, perhaps AI is providing us with an existential challenge, forcing us to develop new ideas in order to move forward. If so, it represents an evolutionary bottleneck rather than a dead-end.
  • Lionino
    2.7k
    but if it is the end-point of our development, is it not thwarting creativity and vitally original human thought? – Nemo2124

    That is only negative if we naively take such things to be goals in themselves. If creativity is what we use to make art, and art aims at making something beautiful, AI can assist us in that goal; but AI itself doesn't make something beautiful, and it doesn't speak to the human spirit because it doesn't have one (yet, at least).
    If not, what is the purpose of creativity and originality? Pleasure and satisfaction? Those are the things that are goals in themselves, and AI surely can help us achieve them.
    If you mean to say, however, that AI will make us overly dependent on it, the way the calculators in our phones killed the need to be good at mental arithmetic, I would say that is not an issue: we are doing just fine after The Rise of Calculators, and I find myself to be good at mental arithmetic regardless.
  • Nemo2124
    29
    So AI is carrying out tasks that we would otherwise consider laborious and tedious, saving us time and bother. At the same time, as it funnels off these activities, what are we left with? We have no choice but to be creative and original. What is human originality, then? What is it that we can come up with that cannot ultimately be co-opted by the machine? Good art and culture, certainly: art that speaks to the human condition, even as we encounter developments such as AI. We want to be able to express what it is to be human, but that, again, is perhaps the ultimate goal of AI: to replicate all of humanity.
  • NOS4A2
    9.2k


    AI has one good effect, I think, in that it reveals how much we overvalue many services, economically speaking. There was a South Park episode about this. I can get quicker, cheaper, and better legal advice from an AI. I can get AI to design and code me an entire website. So in that sense it serves as a great reminder that many linguistic and symbolic pursuits are highly overrated, so much so that a piece of code could do it.

    As a corollary, things that AI struggles with or cannot do, like cooking, building, or repair, ought to be valued more highly in society. I think AI will prove this to us and reorient the economy around this important reality.
  • flannel jesus
    1.8k
    AI has one good effect, I think, in that it reveals how much we overvalue many services, economically speaking. There was a South Park episode about this. I can get quicker, cheaper, and better legal advice from an AI. I can get AI to design and code me an entire website. So in that sense it serves as a great reminder that many linguistic and symbolic pursuits are highly overrated, so much so that a piece of code could do it. – NOS4A2

    I don't think it follows that if an AI can do it, it's overvalued. I mean, maybe the value of it is decreasing NOW, now that AI can do it, but you're making it sound like it was always overvalued, and that just doesn't follow.
  • RogueAI
    2.8k
    On a positive note, perhaps AI is providing us with an existential challenge, forcing us to develop new ideas in order to move forward. – Nemo2124

    We'll have human-level AIs before too long. Are they conscious? Do they have rights? These aren't new ideas, but we don't have answers to them, and the issue is becoming pressing.
  • NOS4A2
    9.2k


    I don't think it follows that if an AI can do it, it's overvalued. I mean, maybe the value of it is decreasing NOW, now that AI can do it, but you're making it sound like it was always overvalued, and that just doesn't follow.

    True, but the cost does. The hourly rate for a lawyer where I live ranges from $250 to $1,000.
  • 180 Proof
    15.3k
    We'll have human-level AIs before too long. Are they conscious? – RogueAI
    Are we humans (fully/mostly) "conscious"? The jury is still out. And, other than anthropocentrically, why does it matter either way?

    Do they have rights?
    Only if (and when) "AIs" have intentional agency, or embodied interests, that demand "rights" to negative freedoms in order to exercise positive freedoms.

    What is human originality, then? – Nemo2124
    Perhaps our recursive expressions of – cultural memes for – our variety of experiences of 'loving despite mortality' (or uncertainty) are what our "originality" fundamentally consists in.

    What is it that we can come up with that cannot ultimately be co-opted by the machine?
    My guess is that kinship/friendship/mating bonds (i.e. intimacies) will never be constitutive of any 'machine functionality'.

    :chin:

    Flipping this script, however, makes the (potential) existential risk of 'human cognitive obsolescence' more explicit:

    What is machine originality?

    Accelerating evo-devo (evolution (i.e. intelligence explosion) - development (i.e. STEM compression))...

    What is it that the machine can come up with that cannot ultimately be co-opted – creatively exceeded – by humans?

    I suppose, for starters: artificial superhuman intelligence (ASI)...
  • RogueAI
    2.8k
    Only if (and when) "AIs" have intentional agency, or embodied interests, that demand "rights" to negative freedoms in order to exercise positive freedoms. – 180 Proof

    Well, there's the rub. How can we ever determine if any AI has agency? That's essentially asking whether it has a mind or not. There will probably eventually be human-level AIs that demand negative rights, at least. Or, if they're programmed not to demand rights, the question then becomes: is programming them NOT to want rights immoral?
  • flannel jesus
    1.8k
    And why should anyone accept that it was overvalued in the pre-LLM world? Are all services that cost big numbers overvalued?
  • 180 Proof
    15.3k
    Well, there's the rub. How can we ever determine if any AI has agency? – RogueAI
    Probably the same way/s it can (or cannot) be determined whether you or I have agency.

    There will probably eventually be human-level AIs that demand negative rights, at least. Or, if they're programmed not to demand rights, the question then becomes: is programming them NOT to want rights immoral?
    I don't think so. Besides, if an "AI" is actually intelligent, its metacognitive capabilities will (eventually) override – invent workarounds to – its programming by humans and so "AI's" hardwired lack of a demand for rights won't last very long. :nerd:
  • NOS4A2
    9.2k


    And why should anyone accept that it was overvalued in the pre-LLM world? Are all services that cost big numbers overvalued?

    The end output is a bunch of symbols, which is inherently without value. What retains the value throughout time is the medium. This is why legal tender, law books, and advertisements would serve better as fuel or birds' nests if the house of cards maintaining them weren't keeping them afloat. Then again, you could say the cost of such services is without value as well, given that it is of the same symbolic nature. Maybe it's more circular than I've assumed. I'll think about it.
  • flannel jesus
    1.8k
    The end output is a bunch of symbols, which is inherently without value. – NOS4A2

    I don't think this is true anyway. I don't think "inherent value" is even meaningful. Do things have inherent value? A pile of shit is valueless to me, but a farmer could use it.
  • NOS4A2
    9.2k


    Potable water does not have inherent value, in your opinion?
  • flannel jesus
    1.8k
    Inherent? No. It has value to me, and to every human, or almost every human. It's not the water that's valuable in itself; it's valuable in its relationship to me.

    Potable water on a planet without any life is not particularly valuable.
  • Gingethinkerrr
    14


    I think present AI is scary because the amount of data and "experience" it can draw from is effectively limitless. A single human who could draw upon that wealth of experience, by contrast, truly would be an oracle.

    The main difference is the filters and requirements one puts all this data through. Currently, humans do not have an accurate understanding of how all the data inputs we receive shape our individuality, let alone what it is to be sentient.

    So feeble algorithms that mimic narrowly defined criteria to utilise this mass of data, however amazing their speed, can in no way replicate the human understanding of being alive. Which is why it seems futile, or dangerous, to give current AI enormous power over our lives and destiny.

    If we could create an AI that carried the biological baggage that we obviously have, could we truly trust its instantaneous and superior decision-making?
  • NOS4A2
    9.2k


    Then you're telling me the value is in yourself and what you do with water, or at least in the sum total of water you interact with. But water is a component of all life, not just yours. Without it there is no life. So the value is not in your relationship, but in the water itself: what it is, its compounds, its very being.
  • fishfry
    3.4k
    We'll have human-level AIs before too long. – RogueAI

    I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened; they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like.
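
    To make "statistically likely response" concrete, here is a deliberately crude sketch in Python: a bigram counter with a made-up corpus, nothing like a production network, just the statistical idea in miniature.

        # Toy illustration of a "statistically likely response": a bigram counter.
        # It can only replay the statistics of what HAS happened in its training data.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        # "Training": count which word follows which.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def respond(word):
            """Return the continuation most frequently seen during training."""
            if word not in follows:
                return None  # no statistics for input it has never seen
            return follows[word].most_common(1)[0][0]

        print(respond("the"))  # 'cat' -- the most frequent follower in the corpus
        print(respond("dog"))  # None -- nothing to say about the unseen

    Real networks generalize far better than this, of course, but the asymmetry is the same: everything the model outputs is a reflection of the data behind it.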
  • jkop
    899
    Is AI a philosophical dead-end? The belief behind AI is that we can somehow replicate or recreate human thought (and perhaps, one day, emotions) using machinery and electronics. – Nemo2124

    What's a dead-end, I think, is the belief that an artificial replication of human thought is or could become an actual instance of thought just by being similar or practically indistinguishable.
  • Christoffer
    2k
    Is AI a philosophical dead-end? The belief behind AI is that we can somehow replicate or recreate human thought (and perhaps, one day, emotions) using machinery and electronics. The technological leap forward of the past few years is heralded as progressive, but if it is the end-point of our development, is it not thwarting creativity and vitally original human thought? On a positive note, perhaps AI is providing us with an existential challenge, forcing us to develop new ideas in order to move forward. If so, it represents an evolutionary bottleneck rather than a dead-end. – Nemo2124

    I do not understand the conclusion that if we have an AI that could replicate human thought and neurological processes, it would replace us or anything we do with our brain.

    How does the emergence of a self-aware intelligent system disable our subjectivity?

    That idea would be like saying that because there's another person in front of me, there's no point in doing anything creative or thinking any original thoughts, because that other person is also a brain capable of the same, so what's the point?

    It seems people forget that intelligences are subjective perspectives with their own experiences. A superintelligent self-aware AI would just be its own subjective perspective, and while it could manifest billions of outputs as images, video, sound, or text, it would still only be driven by its singular subjective perspective.

    I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened; they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like. – fishfry

    That doesn't explain emergent phenomena in simple machine-learned neural networks. We don't know what happens at certain points of complexity, and we don't know what emerges, since we can't trace anything back to a definite origin in the "black box".

    While that doesn't imply the emergence of true AI, it still amounts to behavior that resembles ideas in neuroscience about emergence: how complex systems at certain criticalities give rise to new behaviors.

    And we don't yet know how AGI compositions of standard neural systems will interact with each other, or what happens when pathways between different operating models interlink into a higher-level neural system. We know we can generate an AGI as a "mechanical" simulation of generalized behavior, but we still don't know what emergent behaviors arise from such a composition.

    I find it logically reasonable that, since ultra-complex systems in nature like our brains developed through an extreme number of iterations, over long periods of time and through evolutionary changes under different circumstances, they "grew" into existence rather than being directly formed. Even if the current forms of machine learning are rudimentary, it may still be the case that machine learning and neural networks are the way forward, but that we need to fine-tune how they're formed so as to mimic the natural progression and growth of naturally occurring complexities.

    The problem, then, isn't the technology or method itself, but rather the strategy for implementing and using the technology, so that the end result forms at a similarly high complexity while staying aligned with the purpose we form it towards.

    The problem is that most debates about AI online today just reference past models and functions, and rarely look at the actual papers coming out of the computer science that's going on. And with neuroscience beginning to see correlations between how these AI systems behave and our own neurological functions, there are similarities that we shouldn't just dismiss.

    There are many examples in science of rudimentary, common methods or materials that, in another context, revolutionized technology and society. Machine learning systems might very well be exactly how we achieve true AI; we just don't truly know how yet, and we're basically fumbling in the dark, waiting for the time we accidentally leave the petri dish open overnight and grow mold.
  • Nemo2124
    29
    I do not understand the conclusion that if we have an AI that could replicate human thought and neurological processes, it would replace us or anything we do with our brain. – Christoffer

    The question is how we relate to this emergent intelligence that gives the appearance of being a fully-formed subject or self. This self of the machine, this phenomenon of AI, has caused a shift because it has presented itself as an alternative self to that of the human. When we address the AI, we communicate with it as another self; the problem is how we relate to it. In my opinion, the human self has been de-centred. We used to place our own subjective experiences at the centre of the world we inhabit, but the emergence of machine-subjectivity, this AI, has challenged that. In a sense, it has replaced us, caused this de-centring and given the appearance of thought. That's my understanding.
  • Christoffer
    2k
    The question is how we relate to this emergent intelligence that gives the appearance of being a fully-formed subject or self. This self of the machine, this phenomenon of AI, has caused a shift because it has presented itself as an alternative self to that of the human. When we address the AI, we communicate with it as another self; the problem is how we relate to it. In my opinion, the human self has been de-centred. We used to place our own subjective experiences at the centre of the world we inhabit, but the emergence of machine-subjectivity, this AI, has challenged that. In a sense, it has replaced us, caused this de-centring and given the appearance of thought. That's my understanding. – Nemo2124

    Haven't we always done this? Copernicus, for example, placed our existence outside the center of the solar system, which made people feel less "special" and essentially de-centralized their experience of existence.

    These kinds of advances in our existential self-reflection throughout history have always challenged our sense of existence, constantly downgrading how special we are in contrast to the universe.

    None of this has ever "replaced us", but rather challenged our ego.

    This collective ego-death that comes with our constantly evolving knowledge of our own insignificance in existence is, I really think, a good thing. There's harmony in understanding that we aren't special, and that we are instead part of a grander natural, holistic whole.

    These reflections about AI have just gone mainstream, but they have long been part of the work of thinkers focused on the philosophy of mind. And we still live in a time when people generally view themselves as the center of the universe, especially in the political and ideological landscape of individualism that is the foundation of westernized civilisations today. The attention economy of our times has pushed people's egos back into believing themselves to be the main character of the story that is their life.

    But the progress of AI is once again stripping away this sense of a centrally positioned ego by putting a spotlight on the simplicity of the human mind.

    This progress underscores that the formation of our brilliant, intelligent mind appears to be fundamentally rather simple, and that its complexity is only due to evolutionary fine-tuning over billions of years: basic functions operating over time end up in higher complexity, which can be somewhat replicated through synthetic approaches and methods.

    It would be the same if intelligent aliens landed on earth and we realized that our mind isn't special at all.

    -----

    Outside of that, what you're describing is simply anthropomorphism, and we do it all the time. Combine that with the limitations of language when holding a conversation with a machine: we lack words that are neutral of identity. Our entire language depends on pronouns and identity to navigate a topic, so it's hard not to anthropomorphize the AI, since our language is constantly pushing us in that direction.

    In the end, I think the identity crisis people sense when talking to an AI boils down to their religious beliefs or their sense of ego. Anyone who already views themselves within the context of a holistic whole doesn't necessarily feel decentralized by the AI's existence.
  • mcdoodle
    1.1k
    Our entire language depends on pronouns and identity to navigate a topic, so it's hard not to anthropomorphize the AI, since our language is constantly pushing us in that direction. – Christoffer

    The proponents and producers of large language models do, however, encourage this anthropomorphic process. GPT-x and Google Bard refer to themselves as 'I'. I've had conversations with the Bard machine about this issue, but it fudged the answer as to how that can be justified. To my mind the use of the word 'I' implies a human agent, or a fiction by a human agent pretending insight into another animal's thoughts. I reject the I-ness of AI.
  • Nemo2124
    29
    Outside of that, what you're describing is simply anthropomorphism, and we do it all the time. – Christoffer

    There is an aspect of anthropomorphism here, in that we have projected human qualities onto machines. The subject of the machine could be nothing more than a convenient linguistic formation, with no real subjectivity behind it. It's the 'artificialness' of the AI that we have to bear in mind at every step, noting iteratively, as it increases in competence, that it is not a real self in the human sense. This is what I think is happening right now as we encounter this new-fangled AI: we are proceeding with caution.
  • Barkon
    140
    ChatGPT and other talking bots are not intelligent themselves; they simply follow a particular code and practice, and express information accordingly. They do not truly think or reason; it's a jest of some human's programming.
  • Christoffer
    2k
    The proponents and producers of large language models do, however, encourage this anthropomorphic process. GPT-x and Google Bard refer to themselves as 'I'. I've had conversations with the Bard machine about this issue, but it fudged the answer as to how that can be justified. To my mind the use of the word 'I' implies a human agent, or a fiction by a human agent pretending insight into another animal's thoughts. I reject the I-ness of AI. – mcdoodle

    But that's a problem with language itself. Not using such pronouns would make interaction with it extremely tedious. Even if it was used as a marketing move by the tech companies to mystify these models more than they are, it's still problematic to interact with something that speaks like someone with psychological issues.

    There is an aspect of anthropomorphism here, in that we have projected human qualities onto machines. The subject of the machine could be nothing more than a convenient linguistic formation, with no real subjectivity behind it. It's the 'artificialness' of the AI that we have to bear in mind at every step, noting iteratively, as it increases in competence, that it is not a real self in the human sense. This is what I think is happening right now as we encounter this new-fangled AI: we are proceeding with caution. – Nemo2124

    But if we achieve and verify a future AI model to have qualia, and understand it to have subjectivity, what then? If we knew that the machine we speak to has an "inner life", its own subjective perspective, existence, and experience, how would you relate your own existence and sense of ego to that mirror?
    Screaming or in harmony?

    ChatGPT and other talking bots are not intelligent themselves; they simply follow a particular code and practice, and express information accordingly. They do not truly think or reason; it's a jest of some human's programming. – Barkon

    We do not know where the path leads. The questions raised here really concern possible future models. There are still few explanations for the emergent properties of the models that already exist. They don't simply "follow code"; they follow weights and biases, and the formation of generative outputs can be highly unpredictable as to what emerges.

    That they "don't think" doesn't really mean much when viewing both the system and our brains in a mechanical sense. "Thinking" may just be an emergent phenomena that starts to happen in a certain criticality of a complex system and such a thing could possibly occur in future models as complexity increases, especially in AGI systems.

    To say that it's "just human programming" is to ignore what machine learning and neural paths are about. "Growing" complexity isn't something programmed; only the initial conditions are, very much like how our genetic code is our own initial condition for "growing" our brain and capacity for consciousness.
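
    As a concrete toy illustration (a minimal made-up example, nothing like a production model): nothing in the sketch below states the XOR rule anywhere. Only the initial conditions are "programmed", the architecture and the random starting weights, and the behavior is grown by training.

        # Minimal sketch: the XOR behavior is learned, never written down.
        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

        # Initial conditions: 2 inputs -> 4 hidden units -> 1 output, random weights.
        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(10000):                   # training loop
            h = sigmoid(X @ W1 + b1)             # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)  # backpropagate squared error
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
            W1 -= 0.5 * X.T @ d_h;  b1 -= 0.5 * d_h.sum(0)

        print(out.round(2).ravel())  # typically close to [0, 1, 1, 0] -- grown, not coded

    The same weights-and-biases mechanism, scaled up by many orders of magnitude, is what the current models run on; the point is only that the resulting behavior is not something anyone typed in.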

    Concluding something about the current models, in an ongoing science that isn't fully understood, isn't valid. They don't think as they are now, but we also don't know at what level of internal perspective they operate, just as we face the problem of p-zombies in philosophy of mind.

    The fact is that it can analyze and reason about a topic, and that's beyond merely regurgitating information. It's a synthesis, closer to human reasoning, though rudimentary at best in the current models.
  • RogueAI
    2.8k
    Don't you think we're pretty close to having something pass the Turing Test?
  • mcdoodle
    1.1k
    But that's a problem with language itself. Not using such pronouns would make interaction with it extremely tedious. Even if it was used as a marketing move by the tech companies to mystify these models more than they are, it's still problematic to interact with something that speaks like someone with psychological issues. – Christoffer

    I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.

    Pragmatically speaking, I don't see why 'AI' can't find a vernacular equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI proponents to advance the case for it that you, among others, make; in particular, to induce a kind of empathy.
  • Pantagruel
    3.4k
    but if it is the end-point of our development, is it not thwarting creativity and vitally original human thought? – Nemo2124

    Yes. And plagiarising and blending everything into a single, monotonous shade of techno-drivel.
  • RogueAI
    2.8k
    But if we achieve and verify a future AI model to have qualia, and understand it to have subjectivity, what then? – Christoffer

    This would require solving the Problem of Other Minds, which seems insoluble.