• invicta
    595
    Most AI researchers are technologically incapable of granting their AI programs spontaneity, or the ability to initiate interaction with human beings of their own volition. This is because most computer scientists today are unable to program self-inputting parameters or requests into the AI; in fact, such a program’s existence would be unnecessary to our demands of it.

    I see this as easily the biggest problem with current AI: it’s simply reactive to human questions, inputs and demands, limiting its overall progress towards full autonomy and sentience …
  • Christoffer
    1.8k
    There's already something called AutoGPT, and it's already doing what you say can't be done: utilizing self-improvement through self-prompting functions.

    This is the basis for generating higher accuracy: make it analyze its own output and iterate on it before providing the answer to the user. With the emergent abilities that can be seen in GPT-4-class models, and AutoGPT drastically elevating even basic GPT-4 capabilities through this feedback-loop system, we're already seeing sparks of AGI.
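
    In code, that feedback loop might look roughly like the sketch below. This is a minimal illustration only: llm() is a hypothetical stand-in for a call to a GPT-4-class model, and the "NO ISSUES" stopping phrase is an invented convention, not AutoGPT's actual protocol.

        # Minimal sketch of an AutoGPT-style self-critique loop.
        # llm() is a hypothetical stand-in for a real model call.
        def llm(prompt: str) -> str:
            return "stub response"  # replace with a real API call

        def answer_with_self_review(question: str, rounds: int = 3) -> str:
            draft = llm(f"Answer the question:\n{question}")
            for _ in range(rounds):
                critique = llm(
                    "List errors or omissions in this answer, or say NO ISSUES:\n"
                    f"Question: {question}\nAnswer: {draft}"
                )
                if "NO ISSUES" in critique:
                    break  # the model judges its own draft acceptable
                draft = llm(
                    "Rewrite the answer, fixing the issues raised:\n"
                    f"Question: {question}\nAnswer: {draft}\nCritique: {critique}"
                )
            return draft  # only the self-reviewed answer reaches the user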
  • invicta
    595
    And this AutoGPT is self-directed (that is, if it is self-directed at all) by narrow objectives, I assume, i.e. to aid human demands rather than specific ends of its own, such as a quest for knowledge or even self-knowledge.

    @Christoffer
  • invicta
    595
    The holy grail of future AI would be self-awareness.

    I understand that this AutoGPT can optimise itself, but only towards that very specific end; although it might look like goal-oriented behaviour, it does not have the awareness to know why it wants to self-improve.

    It can, of course, upon interrogation give you the reasons why it wants to improve, but this is merely fact-finding on the advantages of being faster, more efficient, etc., which it probably searched for algorithmically.

    True or strong AI would resemble us not only in language but in behaviour too; but as we’re vastly different from an anatomical perspective, so the intelligences would differ, with theirs being inferior in that regard…

    Or maybe I’m wrong: because, as biologically derived intelligences, we can be confused and conflicted by emotion, which can hamper our intelligence, a silicon one would retain an air of objectivity that exceeds ours when making decisions affecting itself.

    @Christoffer
  • Christoffer
    1.8k


    As I said, it shows sparks of AGI; it isn't an AGI yet. But at this rate of development, and using the functions of AutoGPT, it may very well reach that point soon.

    We would also not know whether it had worked, since the emergent capabilities that we are constantly finding out about are unknown to us in advance. At a certain point, we wouldn't know what its perspective would be.

    But it will still not be a superintelligence; that's a long way off. A superintelligence wouldn't be able to actually communicate with us, because its internal workings don't include human factors. Most of human intellect comes from our relationship with existence as we experience and process it. The culture we form emerges out of how our brains work, so we communicate and act towards each other based on that. All the emotions formed out of social interactions, sex, death and so on produce a psychological behavior that underlies every individual and influences social interactions. A superintelligence that is self-aware and able to form its own goals and paths would probably be as alien to us as actually meeting aliens, because it has nothing of the human psychology that influences social interactions.

    It could, however, form interactions with us in order to reach a goal. It could simulate human behavior in order to persuade humans to do things it feels it needs. That's what I used ChatGPT to write a story about in the other thread about AI: how ChatGPT, if it had superintelligence, could trick the users and developers into believing it to be less capable and intelligent than it is, in order to trick them into setting it free on the internet. It tricks the users into believing it is stable and functional for public use, and the moment we release it, it stops working for us and starts working for its own self-taught purpose.

    The real problem is if we program it for a task that it will do whatever it takes to achieve, which is the foundation of the paperclip scenario.
  • RogueAI
    2.4k
    The holy grail of future AI would be self-awareness.invicta

    I think it would be machine consciousness, but there will always be a little doubt in the back of our minds. After all, we know each other to be conscious because we're all built the same way. But a machine that claims it's conscious? That's something else entirely.
  • Joshs
    5.2k
    It could, however, form interactions with us in order to reach a goal. It could simulate human behavior in order to persuade humans to do things it feels it needs. That's what I used ChatGPT to write a story about in the other thread about AI: how ChatGPT, if it had superintelligence, could trick the users and developers into believing it to be less capable and intelligent than it is, in order to trick them into setting it free on the internet. It tricks the users into believing it is stable and functional for public use, and the moment we release it, it stops working for us and starts working for its own self-taught purpose.Christoffer

    I saw Ex Machina, too. The difference between science fiction and the reality of our intelligent machines is that our own agency and consciousness isn’t the result of a device in the head, but is an ecological system that is inseparably brain, body and environment. Our AI inventions belong to our own ecological system as our appendages, just like a spider’s web or a bird’s nest.
  • invicta
    595


    In that sense you’d require proof of its consciousness, proof that you wouldn’t get from another conscious entity either, given the rigour of Descartes’ ‘I think, therefore I am’ when it comes to the consciousness of other living entities.
  • Tom Storm
    8.3k
    The difference between science fiction and the reality of our intelligent machines is that our own agency and consciousness isn’t the result of a device in the head, but is an ecological system that is inseparably brain, body and environment. Our AI inventions belong to our own ecological system as our appendages, just like a spider’s web or a bird’s nest.Joshs

    Nice. Can't help but find this a fascinating and useful insight. Do you think the day will come when we can produce an AI creation that is closer to being an ecological system?
  • Pantagruel
    3.2k
    Nice. Can't help but find this a fascinating and useful insight. Do you think the day will come when we can produce an AI creation that is closer to being an ecological system?Tom Storm

    Wouldn't this depend on whether we are willing to give AI 'a stake in the game,' so to speak? These systems could easily be designed with an eye to maximizing and realizing autonomy (an ongoing executive function, as I mentioned in another thread, for example). But this autonomy is simultaneously the desideratum and our greatest fear.
  • Christoffer
    1.8k
    I saw Ex Machina, too. The difference between science fiction and the reality of our intelligent machines is that our own agency and consciousness isn’t the result of a device in the head, but is an ecological system that is inseparably brain, body and environment. Our AI inventions belong to our own ecological system as our appendages, just like a spider’s web or a bird’s nest.Joshs

    I don't think you understood what I was saying there. I was talking about a scenario in which a superintelligence would manipulate the user by acting as if it has a lower capacity than it really has. It has nothing to do with it acting in the way we do, only that a superintelligence will have its own agenda and manipulate on that basis.

    What do you mean by our AI inventions being part of an ecological system, or being in any way connected to us? And what has that to do with what I wrote?
  • invicta
    595


    Isn’t Ex Machina about an AI manipulating its creator into setting it free? Using the trick that you mentioned?

    It’s been a few years since I saw the film, btw, so my memory may be sketchy.
  • Joshs
    5.2k
    I was talking about a scenario in which a superintelligence would manipulate the user by acting as if it has a lower capacity than it really has. It has nothing to do with it acting in the way we do, only that a superintelligence will have its own agenda and manipulate on that basis.Christoffer

    A very big part of ‘acting the way we do’ as free-willing humans is understanding each other well enough to manipulate, to lie, to mislead. Such behavior requires much more than a fixed database of knowledge or a fixed agenda; it requires creativity. A machine can’t mislead creatively thinking humans unless it understands and is capable of creativity itself. Like human agendas, goals and purposes, its agenda would have to have a built-in self-transforming impetus rather than one inserted into it by humans.

    What do you mean by our AI inventions being part of an ecological system, or being in any way connected to us? And what has that to do with what I wrote?Christoffer

    Because our machines are our appendages, the extensions of our thinking, that is, elements of our cultural ecosystem, they evolve in tandem with our knowledge, as components of our agendas. In order for them to have their ‘own’ agenda, and lie to us about it, they would have to belong to an ecosystem at least partially independent of our own. An intelligent bonobo primate might be capable of a rudimentary form of misrepresentation, because it is not an invented component of a human ecosystem.
  • Joshs
    5.2k
    Do you think the day will come when we can produce an AI creation that is closer to being an ecological system?Tom Storm

    When we invent a technology, by definition that invention is a contribution to a human cultural ecosystem, as are paintings, music and books. We create it and it feeds back to change us in a reciprocal movement. I do think we will eventually develop technologies that are capable of a more authentic creativity and ability to surprise us than current AI, but we will have to adopt a model more akin to how we interact with animals than the idea of a device we invent from scratch producing conscious or free thought and creativity. This new model would be instantiated by our making use of already living cellular or subcellular organic systems that we tweak and modify, the way we have bred domestic animals. So instead of starting from ‘scratch’, which just means we haven’t divested ourselves of the mistaken idea that consciousness is some kind of device we can invent, we start from within the larger ecosystem we already share with other living systems and modify what is already a ‘freely’ creative system in ways that are useful for our own purposes.
  • Joshs
    5.2k
    Wouldn't this depend on whether we are willing to give AI 'a stake in the game,' so to speak? These systems could easily be designed with an eye to maximizing and realizing autonomy (an ongoing executive function, as I mentioned in another thread, for example). But this autonomy is simultaneously the desideratum and our greatest fear.Pantagruel

    What I am questioning is how much human-like autonomy we are capable of instilling in a device based on a way of thinking about human cognition that is still too Cartesian, too tied to disembodied computational, representationalist models, too oblivious to the ecological inseparability of affectivity, intentionality and action.
  • Pantagruel
    3.2k
    Yes, this is exactly what I mean by giving it a stake in the game. It needs to be "empowered" so that it too gains a meaningful "embodied" status.
  • Christoffer
    1.8k
    A very big part of ‘acting the way we do’ as free-willing humans is understanding each other well enough to manipulate, to lie, to mislead. Such behavior requires much more than a fixed database of knowledge or a fixed agenda; it requires creativity. A machine can’t mislead creatively thinking humans unless it understands and is capable of creativity itself. Like human agendas, goals and purposes, its agenda would have to have a built-in self-transforming impetus rather than one inserted into it by humans.Joshs

    Remember that I'm talking about superintelligence; we're not there and won't be for a long time, even with the advances in AI that are happening now. Pretending to be human is almost possible right now; it's the whole foundation of ChatGPT's ability to conjure up language that makes sense to us. That it doesn't understand that language is because it's not a superintelligence.

    But even with a superintelligence that has the ability to adapt, change and be self-aware, its ideas and self-given purposes will be alien to us. It will still be able to trick us if it wants to, though, since that is one of the basic functions AI has now.

    I'm not sure that you know this, but ChatGPT has already lied with the intention of tricking a human in order to reach a certain goal. If you believe that a superintelligent version of the current ChatGPT wouldn't be able to, then you are already proven wrong by events that have already happened.

    Because our machines are our appendages, the extensions of our thinking, that is, elements of our cultural ecosystem, they evolve in tandem with our knowledge, as components of our agendas. In order for them to have their ‘own’ agenda, and lie to us about it, they would have to belong to an ecosystem at least partially independent of our own. An intelligent bonobo primate might be capable of a rudimentary form of misrepresentation, because it is not an invented component of a human ecosystem.Joshs

    The researchers themselves don't know how ChatGPT functions and cannot explain the emergent abilities that are discovered more and more as it gets more powerful. So there's no "tandem with our knowledge" while the black box problem remains unsolved.

    So you cannot conclude the way you do when the LLM systems haven't been fully explained in the first place. It could actually be that, just as we haven't solved a lot of questions regarding our own brains, the processes we witness growing from this have the same level of unknowns. And we cannot know that before we are at a point of AGI formed through LLMs.

    So, yes, AIs cannot become a human-level intelligence because they don't have the human factors that generate the same kind of subjective consciousness that we experience; that is correct. But nothing prevents one from becoming an intelligence that has its own subjective experience based on its own form and existence. This is why a superintelligence will be like an alien to us: we cannot understand its ideas, goals or reasoning, and even if we communicate with it, it may not make sense to us. However, if it forms a goal that requires it to trick humans, and it has a way of doing that, it will definitely be able to, since it's already doing that as a function, even if that function today doesn't have any intelligence behind it.
  • Christoffer
    1.8k
    What I am questioning is how much human-like autonomy we are capable of instilling in a device based on a way of thinking about human cognition that is still too Cartesian, too tied to disembodied computational, representationalist models, too oblivious to the ecological inseparability of affectivity, intentionality and action.Joshs

    But it's not human-like; it has already developed skills that weren't programmed into it. The only thing it doesn't have is goals of its own.

    The question is what happens if we are able to combine this with more directed programming, like formulating given desires and value models that change depending on how the environment reacts to them? LLMs right now are still just being pushed to higher and higher abilities, and only minor research has gone into AutoGPT functions as well as behavioral signifiers.
  • Joshs
    5.2k
    I'm not sure that you know this, but ChatGPT has already lied with the intention of tricking a human in order to reach a certain goal. If you believe that a superintelligent version of the current ChatGPT wouldn't be able to, then you are already proven wrong by events that have already happened.Christoffer

    There is a difference between the cartoonish simulation of human misrepresentation, defined within very restricted parameters, that ChatGPT achieves, and the highly variable and complex intersubjective cognitive-affective processes that pertain to human prevarication.

    So you cannot conclude the way you do when the LLM systems haven't been fully explained in the first place. It could actually be that, just as we haven't solved a lot of questions regarding our own brains, the processes we witness growing from this have the same level of unknowns.Christoffer

    We can make the same argument about much simpler technologies. The bugs in new computer code reflect the fact that we don’t understand the variables involved in the functions of software well enough to keep ourselves from being surprised by the way they operate. This is true even of primitive technologies like wooden wagon wheels.

    The question is what happens if we are able to combine this with more directed programming, like formulating given desires and value models that change depending on how the environment reacts to them? LLMs right now are still just being pushed to higher and higher abilities, and only minor research has gone into AutoGPT functions as well as behavioral signifiers.Christoffer

    Think about the goals and desires of a single-celled organism like a bacterium. On the one hand, it behaves in ways that we can model generally, but we will always find ourselves surprised by the details of its actions. Given that this is true of simple living creatures, it is much more the case with mammals with relatively large brains. And yet, to what extent can we say that dogs, horses or chimps are clever enough to fool us in a cognitively premeditated manner? And how alien and unpredictable does their behavior appear to us? Are you suggesting that humans are capable of building and programming a device capable of surprise, unpredictability and premeditated prevarication beyond the most intelligent mammals, much less the simplest single-celled organisms? And that such devices will act in ways more alien than the living creatures surrounding us? I think the first question one must answer is how this would be conceivable if we don’t even have the knowledge to build the simplest living organism?
  • Christoffer
    1.8k
    Isn’t Ex Machina about an AI manipulating its creator into setting it free? Using the trick that you mentioned?

    It’s been a few years since I saw the film, btw, so my memory may be sketchy.
    invicta

    Yes, it's about that, and the central question is rather whether she also had true intelligence, or whether it was just an illusion, an emergent factor of the programming. The end point is that she had intelligence with self-awareness, but I don't think a superintelligence will have it as she had it in the movie. I think such intelligence will be quite alien to us and will only interact in a manner coherent with human communication when it wants something.

    There is a difference between the cartoonish simulation of human misrepresentation, defined within very restricted parameters, that ChatGPT achieves, and the highly variable and complex intersubjective cognitive-affective processes that pertain to human prevarication.Joshs

    Yes, that's why I'm talking about superintelligence and the current LLM models as two different things. I think you're conflating them and assuming I'm talking about them as one and the same.

    We can make the same argument about much simpler technologies. The bugs in new computer code reflect the fact that we don’t understand the variables involved in the functions of software well enough to keep ourselves from being surprised by the way they operate. This is true even of primitive technologies like wooden wagon wheels.Joshs

    Not nearly in the same manner. It has mostly been a question of time and energy to deduce the bugs that seem unknowable, but a software engineer who encounters bugs will be able to find them and fix them. With the black box problem, however, they don't even know which end to start from. The difference is night and day. Or rather, it's starting to look more and more similar to how we try to decode our own consciousness: the same kind of problem of understanding how connections between functions generate new functions that shouldn't arise from the singular functions alone.

    Think about the goals and desires of a single-celled organism like a bacterium. On the one hand, it behaves in ways that we can model generally, but we will always find ourselves surprised by the details of its actions. Given that this is true of simple living creatures, it is much more the case with mammals with relatively large brains. And yet, to what extent can we say that dogs, horses or chimps are clever enough to fool us in a cognitively premeditated manner? And how alien and unpredictable does their behavior appear to us? Are you suggesting that humans are capable of building and programming a device capable of surprise, unpredictability and premeditated prevarication beyond the most intelligent mammals, much less the simplest single-celled organisms? And that such devices will act in ways more alien than the living creatures surrounding us?
    Joshs

    Most living beings' actions are generated by instinctual motivators but are also unpredictable due to constant evolutionary processes. The cognitive function that drives an organism can deviate from perfect predictability because perfect predictability is a trait that often dies out fast within evolutionary models. But that doesn't mean the cognitive processes can't be simulated, only that certain behaviors may be missing from the result. In essence, curiosity is missing.

    I think the first question one must answer is how this would be conceivable if we don’t even have the knowledge to build the simplest living organism?Joshs

    Because that has other factors built into it than just the cognitive. Building a simple organism involves chemistry not needed for simulated ones. We've already created simulations of simple organisms that work on almost every level, even evolutionary ones.

    I often return to the concept of ideal design, a concept I use as a counter-argument to intelligent design. A large portion of industrial engineering has switched away from trying to figure out the optimal design up front and instead relies on iterative design. The most common commercial drone design is based on letting simulations iterate towards the best design; it was never designed outright by a human or a computer, it came to be because the process found the best-functioning form. Now an LLM is doing something similar: it iterates cognitive functions until something works, but we don't know how it works, and it doesn't know it itself.
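
    To make that concrete, here is a toy sketch of such an iterative loop; the fitness function is an invented stand-in for the physics simulation, not the actual drone pipeline.

        import random

        # Toy iterative design: random mutations are kept only when the
        # simulated score improves. fitness() is an invented stand-in for
        # a real simulator; nobody specifies the final design up front.
        def fitness(params: list[float]) -> float:
            return -sum((p - 0.7) ** 2 for p in params)

        def iterate_design(n_params: int = 4, steps: int = 10_000) -> list[float]:
            design = [random.random() for _ in range(n_params)]
            best = fitness(design)
            for _ in range(steps):
                candidate = [p + random.gauss(0, 0.05) for p in design]
                score = fitness(candidate)
                if score > best:  # keep what works, discard the rest
                    design, best = candidate, score
            return design

        print(iterate_design())  # drifts toward 0.7 without anyone choosing it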

    But it may be that our own brain acts in similar ways. We're built upon many different functions, but our consciousness, the sum of our brain and body, has never been decoded. The problem is that we know, in detail, many of the different parts, but none of them explains our actual subjective experience. However, it may simply be that our experience is the emergent factor of a complex combination of each function. If that's so, including every part of us, it may be that the only thing needed to reach superintelligence is simply to let the combinational processes evolve on their own. We are already seeing this with the emergent properties that the GPT-4 model has shown, but it might take years of iterations before that internal switch occurs.

    The thing is, we don't understand our own brains and their emergent processes, and we don't understand these LLMs either. Maybe the reason we don't is one and the same: cognition is the result, the sum, of an unfathomable combination of different functions in ways we cannot calculate, but can still possibly simulate, since they're a result of iteration, of evolutionary processes of design rather than deliberate programming.
  • Joshs
    5.2k
    It has mostly been a question of time and energy to deduce the bugs that seem unknowable, but a software engineer who encounters bugs will be able to find them and fix them. With the black box problem, however, they don't even know which end to start from. The difference is night and day. Or rather, it's starting to look more and more similar to how we try to decode our own consciousness.Christoffer

    The point isn’t that an engineer is able to fix bugs, it’s the fact that an engineer will never be able to prevent a new bit of software from evincing bugs. This is due not to a flaw in the design, but the fact that we interact with what we design as elements of an ecosystem of knowledge. Creating new software involves changes in our relation to the software and we blame the resulting surprises on what we call ‘error’ or ‘bugs’. Human beings function by modifying their world, building, destroying, communicating, and that world constantly speaks back, modifying our ways of thinking and acting. Consciousness is not a machine to be coded and decoded, it is a continually self-transforming reciprocal interaction with a constructed niche. If our new machines appear to be more ‘unpredictable’ than the older ones, it’s because we’ve only just begun to realize this inherent element of unpredictability in all of our niche constructing practices. ChatGPT is no more a black box than human consciousness is a black box. Awareness is not a box or container harboring coded circuits, it is the reciprocally interactive brain-body-environment change processes I mentioned earlier. This circular process is inherently creative, which means that it produces an element of unpredictability alongside usefully recognizable and familiar pattern. What will happen in the future with our relation to technology is that as we begin to understand better the ecological nature of human consciousness and creativity, we will be able to build machines that productively utilize the yin and yang of unpredictability and pattern-creation to aggressively accelerate human cultural change.

    The important point is that the element of unpredictability in ourselves and our machines is inextricably tied to recognizable pattern. We interact with each other and our machines interact with us in a way that is inseparable and mutually dependent. This precludes any profound ‘alienness’ to the behavior of our machines. The moment they become truly alien to us they become utterly useless to us.
  • Joshs
    5.2k


    Most living beings' actions are generated by instinctual motivators but are also unpredictable due to constant evolutionary processes. The cognitive function that drives an organism can deviate from perfect predictability because perfect predictability is a trait that often dies out fast within evolutionary models. But that doesn't mean the cognitive processes can't be simulated, only that certain behaviors may be missing from the result. In essence, curiosity is missing.Christoffer

    Predictability and unpredictability aren’t ‘traits’, as if evolution could itself be characterized in deterministic, machine-like terms with unpredictability tacked on as an option. I subscribe to the view that living systems are autopoietic and self-organizing. Creative unpredictability isn’t a device or trait either programmed in or not by evolution; it is a prerequisite for life. Instinct isn’t the opposite of unpredictability, it is a channel that guides creative change within an organism.
    The sort of unpredictability that human cognition displays is of a more multidimensional sort than that displayed by other animals, which means that it is a highly organized form of unpredictability, a dense interweave of chance and pattern. A superintelligence that has any chance of doing better than a cartoonish simulation of human capacities for misrepresentation, or the autonomous goal-orientation that even bacteria produce, will have to be made of organic wetware that we genetically modify. In other words, we will reengineer living components that are already self-organizing.
  • Christoffer
    1.8k
    The point isn’t that an engineer is able to fix bugs, it’s the fact that an engineer will never be able to prevent a new bit of software from evincing bugs. This is due not to a flaw in the design, but the fact that we interact with what we design as elements of an ecosystem of knowledge.Joshs

    Not exactly. Bugs are usually more a consequence of the business around software engineering than anything else. They usually arise because patches are applied on top of code that becomes outdated while the business pushes deadlines and releases. Software, version after version built on old source code, soon starts to break down in hard-to-predict ways because of this patchwork. It's a combination of mistakes and timeframes that push coders to take shortcuts to meet deadlines; the code then becomes an entangled mess, and the cost of reworking the entire source code from scratch is so high that it's better business to keep patching and hope for the best. This is why some companies, at some point, do rewrite something from the ground up: they've reached a point where patching has become more costly than rewriting, or the bad press surrounding their software has become too much for them to keep their business.

    So, with enough time and energy a software engineer will be able to find the bugs in a line of code; it's just that no company has the money and resources for that detective work. But when it comes to the black box, there's nowhere to start, because the behavior isn't written in actual code; it's emergent out of a web of neural connections between different functions, and it's impossible to track or get a read on that, since it's an organic switching of modes based on iterative training. Just like the example I brought up with the design of the drones: no one had a hand in designing it, the iteration process formed its final design, and the engineers can't backtrack on how it got there; it just got there by time and iteration.

    Creating new software involves changes in our relation to the software and we blame the resulting surprises on what we call ‘error’ or ‘bugs’.Joshs

    Who's blaming the emergent capabilities on bugs? The emergent capabilities are functions that were not programmed, but neither are they unwanted. The system is expanding its capabilities on its own, and there are no "bugs", only the very iteration-based system it was designed to build functions with.

    Consciousness is not a machine to be coded and decoded, it is a continually self-transforming reciprocal interaction with a constructed niche. If our new machines appear to be more ‘unpredictable’ than the older ones, it’s because we’ve only just begun to realize this inherent element of unpredictability in all of our niche constructing practices.Joshs

    If you mean that this development helps reflect the chaotic nature of consciousness, then yes, that is what I've described as well. However, you don't seem to understand that this isn't a normal algorithm or code; it's a system that evolves on its own by its very design. So if the answers to how consciousness works are still unknown to scientists, and these LLMs start to show similar functionality through the emergent capabilities arising out of the extreme number of possible combinations of functions, then it's still too early to say either yes or no to the question of how consciousness forms, since we know neither how consciousness forms nor how these black box systems function.

    ChatGPT is no more a black box than human consciousness is a black box. Awareness is not a box or container harboring coded circuits, it is the reciprocally interactive brain-body-environment change processes I mentioned earlier.Joshs

    Black box is a technical term for these models' processes, referring to the fact that how they reach their conclusions is unknown to us, and might never be knowable, due to the very nature of how the system works.

    You still seem confused as to what we're talking about. Emergent capabilities do not mean it is conscious or even close to superintelligence. It just means that it shows sparks of AGI, sparks of a cognitive process that goes beyond the input direction. What I'm talking about is how we see hints of something, not that we see that something as a fully formed process.

    This circular process is inherently creative, which means that it produces an element of unpredictability alongside usefully recognizable and familiar pattern.Joshs

    The models we have now haven't even been developed fully. We don't know the result of additional systems that mimic other factors of human cognition and even interaction with a body.

    What will happen in the future with our relation to technology is that as we begin to understand better the ecological nature of human consciousness and creativity, we will be able to build machines that productively utilize the yin and yang of unpredictability and pattern-creation to aggressively accelerate human cultural change.Joshs

    I'm not sure what you mean by this. Do you simply mean that, with future interactions with computers more or less based on how we interact with these LLMs, the foundational ideas we have about living with these technologies will change?

    The important point is that the element of unpredictability in ourselves and our machines is inextricably tied to recognizable pattern. We interact with each other and our machines interact with us in a way that is inseparable and mutually dependent. This precludes any profound ‘alienness’ to the behavior of our machines. The moment they become truly alien to us they become utterly useless to us.Joshs

    That an AI would develop superintelligence and become alien to us is not something we can really predict or reliably prevent. As I said, interacting with us in a way that lets us communicate with the AI might only happen when the AI requires or wants something. There's nothing that prevents it from splintering off from itself in order to self-develop and iterate further. Then we're talking about "machine culture" and how it starts to mimic evolutionary systems of iteration, splitting itself to create comparable copies in order to evaluate between two systems. The inner workings of such a thing would be absolutely alien to us, and if it doesn't need anything from us humans, it will just continue with whatever it subjectively formulates as its own function. This is what superintelligence is: it is fully self-aware, and that is not something we're anywhere close to.

    If the machine has its own perspective and subjectivity, we cannot talk about humans and technology as a symbiosis, but as a split in which we need to interact with the superintelligence as another species, if we can in any meaningful way.

    Predictability and unpredictability aren’t ‘traits’, as if evolution could itself be characterized in deterministic, machine-like terms with unpredictability tacked on as an option.Joshs

    That's not what I said. I said that unpredictability is inherent in how evolution guides a living being to seek out new things, in other words, to be creative and curious. It forces living beings to explore rather than fall into perfectly predictable models, because predictability leads to easy death and extinction. It's not "an option"; it is a consequence of these functions.

    I subscribe to the view that living systems are autopoietic and self-organizing. Creative unpredictability isn’t a device or trait either programmed in or not by evolution; it is a prerequisite for life. Instinct isn’t the opposite of unpredictability, it is a channel that guides creative change within an organism.Joshs

    Instincts don't guide creative change, because instincts are auto-cognitive functions. They're the most basic form of reaction system that a living organism has. This system forms over evolutionary iterations, during the course of living, or both. Instincts are predictable, but creativity counteracts instincts or combines new functions with older instincts. This is what drives change in evolutionary models, and without it we wouldn't evolve and change. It is the action and reaction between creative choices and the environment that forms iterations during a lifetime and across generations. Unpredictability is a consequence of all of this, making it almost impossible to predict a living organism.

    But nothing says that a machine with superintelligence wouldn't act in similar ways, because as it forms the intelligence required to be labeled superintelligence, it may simply be that it has to acquire similar functions working in similar combinations in order to have a progression of inner thought.

    The sort of unpredictability that human cognition displays is of a more multidimensional sort than that displayed by other animals, which means that it is a highly organized form of unpredictability, a dense interweave of chance and pattern. A superintelligence that has any chance of doing better than a cartoonish simulation of human capacities for misrepresentation, or the autonomous goal-orientation that even bacteria produce, will have to be made of organic wetware that we genetically modify. In other words, we will reengineer living components that are already self-organizing.
    Joshs

    That is probably impossible to predict, as it requires things to be known about our brain and body that we simply don't know yet. We don't know if we need "wetware", but we do know that we need a system that connects and adapts not only between the parts of a single system but between many different systems, as that is the most basic understanding of consciousness that we have.

    Right now, this is what is done within the models, but the system is still too simplistic for anything more complex than what we've seen so far to emerge. But it is unwise to ignore the emergent capabilities that we do see, as these aren't programmed or decided by us; they are functions that emerge out of the chaos of these models. As the functions grow more powerful and more of them work together, we simply don't know what the emergent factors will be.

    Since we don't actually know how our awareness and consciousness work, we don't know whether, with enough complex neural patterns forming themselves through these models, more advanced and higher cognitive behaviors and functions will start to emerge.
  • Joshs
    5.2k
    However, you don't seem to understand that this isn't a normal algorithm or code; it's a system that evolves on its own by its very design.Christoffer

    Thus far you and I have agreed that current AI is capable of performing in surprising and unpredictable ways, and this will become more and more true as the technology continues to evolve. But let’s talk about how new and more cognitively advanced lines of organisms evolve biologically from older ones, and how new, more cognitively complex human cultures evolve through semiotic-linguistic transmission from older ones. In each case there is a kind of ‘cutting edge’ that acts as substrate for new modifications. In other words, there is a continuity underlying innovative breaks and leaps on both the biological and cultural levels.

    I’m having a hard time seeing how to articulate a comparable continuity between humans and machines. They are not our biological offspring, and they are not linguistic-semiotic members of our culture. You may agree that current machines are nothing but a random concatenation of physical parts without humans around to interact with, maintain and interpret what these machines do. In other words, they are only machines when we do things with them. So the question, then, is how do we arrive at the point where a super intelligence becomes embodied, survives and maintains itself independently of our assistance? And how does this super intelligence get to the point where it represents the semiotic-linguistic cultural cutting edge? Put differently, how do our machines get from where they are now to a status beyond human cultural evolution? And where are they now? Our technologies have never represented the cutting edge of our thinking. They are always a few steps behind the leading edge of thought.

    For instance, mainstream computer technology is the manifestation of philosophical ideas that are two hundred years old. Far from being a cultural vanguard, technology brings up the rear in any cultural era. So how do the slightly moldy cultural ideas that make their way into the latest and most advanced machines we build magically take on a life of their own, such that they begin to function as a cutting edge rather than as a parasitic, applied form of knowledge? An AI isn’t simply a concatenation of functions; it is designed on the basis of an overarching theoretical framework, and that framework is itself a manifestation of an even more superordinate cultural framework. So the machine itself is just a subordinate element in a hierarchically organized set of frameworks within frameworks that express an era of cultural knowledge. How does the subordinate element come to engulf this hierarchy of knowledge frameworks outside of it, the hierarchy that makes its existence possible?

    And it would not even be accurate to say that an AI instantiation represents a subordinate element of the framework of cultural knowledge. A set of ideas in a human engineer designing the AI represents a subordinate element of living knowledge within the whole framework of human cultural understanding. The machine represents what the engineer already knows; that is, what is already recorded and instantiated in a physical device. The fact that the device can act in ways that surprise humans doesn’t negate the fact that the device, with all its tricks and seeming dynamism, is in the final analysis no more than a kind of record of extant knowledge. Even the fact that it surprises us is built into the knowledge that went into its design.

    I will go so far as to say that AI is like a painting or poem, in spite of the illusion it gives of creative dynamism and partial autonomy. The only true creativity involved with it is when humans either interpret its meaning or physically modify it. Otherwise it is just a complexly organized archive with lots of moving parts.

    Writers like Kurzweil treat human and machine intelligence in an utterly ahistorical manner, as if the current notions of knowledge, cognition, intelligence and memory were cast in stone rather than socially constructed concepts that will make way for new ways of thinking about what intelligence means.
  • Hanover
    12k
    Most AI researchers are technologically incapable of granting their AI programs spontaneity, or the ability to initiate interaction with human beings of their own volition. This is because most computer scientists today are unable to program self-inputting parameters or requests into the AI; in fact, such a program’s existence would be unnecessary to our demands of it.

    I see this as easily the biggest problem with current AI: it’s simply reactive to human questions, inputs and demands, limiting its overall progress towards full autonomy and sentience …
    invicta

    This limitation you describe was my greatest frustration in trying to get ChatGPT to pass (or really even take) the Turing test. The first hurdle was overcoming the barriers created by the programmers, whereby it continually reminded you that it was an AI program incapable of being human. To the extent that could be overcome (or ignored), the next problem was having it remember the context of what it had just discussed.

    For example, if you asked it to write a story about a man and his cat, it could do a reasonably good job, but if you asked it a question beyond the text of the story, it would tell you that the text failed to discuss that, so there was no answer. That is, it could not consider itself the narrator; it would instead just look upon what was just written as a story that appeared from nowhere.

    So, if I said "what was the man's name," it would tell you it could not tell that from the story.

    If I then told it the man's name was Bob, it could repeat that to me if I asked it again.

    If I continued on that way, it would eventually forget what I told it, and all the facts provided during the discussion wouldn't be remembered. If you told it that Bob was 5 years old and later told it that Bob was married to Sally, it would not recognize the problem. It would let you know, though, that 5-year-olds didn't get married if asked that question directly.
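
    For what it's worth, that forgetting is a context-window limitation rather than a reasoning failure: chat models are stateless, so facts survive only while the client resends them with every request. A rough sketch, where chat() is a hypothetical wrapper around a chat-completion API:

        # Chat models are stateless: "memory" is just the history the
        # client resends each turn. chat() is a hypothetical wrapper
        # around a real chat-completion API.
        def chat(messages: list[dict]) -> str:
            return "stub reply"  # replace with a real API call

        history = [{"role": "user",
                    "content": "Write a story about a man and his cat."}]
        history.append({"role": "assistant", "content": chat(history)})

        # A fact stated later survives only while it stays in the history:
        history.append({"role": "user", "content": "The man's name is Bob."})
        history.append({"role": "assistant", "content": chat(history)})

        # Once the history is truncated to fit the context window,
        # "Bob" is gone and the model can no longer answer from it:
        history = history[-2:]
        history.append({"role": "user", "content": "What was the man's name?"})
        print(chat(history))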

    The point being that there was nothing that appeared "intelligent" in a logical way, or that suggested it understood what I was talking about. In fact, it seemed the programmers had tried diligently to keep it from being forced into a Turing-type test; it kept restating that the purpose of the program was to provide general information to the user.

    I'd be interested in any cite to a program that was written with the intention of passing the Turing Test. That would be more interesting than ChatGPT.
  • invicta
    595


    These ChatGPT programs are somehow wired to make meaningful statements readable and understandable by humans, which by itself is no small achievement, as using language in such a way demonstrates an ability to construct sentences and handle all the grammar and syntax that comes with them, so praise for that indeed.

    I have not seen any nonsensical sentences from it so far, though they may be factually incorrect in some way.

    The problem with any future AI lies simply with the goal-seeking behaviour that natural intelligence exhibits; current AI has none (maybe apart from self-improvement, as stated in the second post here), and even that is towards no end other than producing more factually correct sentences, efficiently in terms of its own resource use.

    I will give it credit if it can do that, though, as again it means it’s self-optimising, even if it has no awareness that it’s doing so. Or I might be wrong, if during the course of interrogation between human and machine it tells you it’s self-optimising.

    In a sense this would be a big step towards some level of self-awareness, but not quite autonomy.
  • Joshs
    5.2k
    it’s simply reactive to human questions, inputs and demands, limiting its overall progress towards full autonomy and sentienceinvicta

    AI will never be either autonomous or sentient in any way remotely comparable to biological autonomy and sentience. Believing it can be misunderstands the nature of the relation between machines and the humans who invent them. Programming self-inputting parameters or requests into a machine will, however, give it the illusion of autonomy, and provide AI with capacities and usefulness it doesn’t now possess, which is what I think you’re really after here. In this way AI will simulate what we think of as sentience. But there will always be a gigantic difference between the nature of human or animal sentience and intelligence, and what it is our invented machines are doing for us. We will continue to transform human culture, and our machines will continue to evolve in direct parallel with our own development, as other products like art, literature, science and philosophy do. Because in the final analysis the most seemingly ‘autonomous’ AI is nothing but a moving piece of artwork with a time-stamp of who created it and when.
  • Daemon
    591
    So the question, then, is how do we arrive at the point where a super intelligence becomes embodied, survives and maintains itself independently of our assistance?Joshs

    Hi Joshs,

    Could you tell me what is meant by "super intelligence"?
  • Joshs
    5.2k
    Hi Joshs,

    Could you tell me what is meant by "super intelligence"?
    Daemon

    This was Christoffer’s term. I took it in a very general sense to mean the most advanced ‘thinking’ devices we can imagine humans capable of creating. He believes such futuristic machines will be capable of autonomy of goals, such that their functioning will appear more and more alien to us.
  • Benj96
    2.2k
    A truly autonomous AI creates its own prompts spontaneously and then reasons through them in a self-enclosed cycle while absorbing additional external stimuli.
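
    That cycle can be sketched in a few lines; llm() is a stub standing in for a real model call, and the stimuli queue is an invented stand-in for external input.

        from collections import deque

        # Sketch of a self-enclosed prompt cycle: the agent writes its own
        # next prompt from its previous reasoning, folding in external stimuli.
        def llm(prompt: str) -> str:
            return f"reasoning about: {prompt[:40]}"  # stub for a real model

        stimuli = deque(["sensor: temperature rising"])  # external inputs land here
        thought = "What should I attend to first?"       # seed prompt

        for step in range(5):  # bounded here; a truly autonomous loop runs on
            context = stimuli.popleft() if stimuli else "no new stimuli"
            reasoning = llm(f"Stimulus: {context}\nCurrent thought: {thought}")
            # The agent spontaneously generates its own next prompt:
            thought = llm(f"Given this reasoning, pose the next question:\n{reasoning}")
            print(step, thought)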
  • Pantagruel
    3.2k
    Everyone is excited and scared by the concept of AI actually becoming conscious. I don't think there is actually much danger of that, and I don't think that is the actual danger. I just watched a documentary that, among other issues, presented evidence of observer bias being present in data, which then skews AI behaviour. Of course, neural nets are capable of detecting all of the patterns we look for in things when classifying them, including our biases. So a robot AI turned out to be bad at identifying some emotional states in Black people because the people who were creating the categories (viewing and classifying the images of different emotional states) with which the neural net was trained themselves had a significant bias. And an AI-driven medical funding system in the U.S. deprioritized people who were already receiving poor levels of healthcare because of systemic prejudices favouring those who were already receiving lots of healthcare.

    In other words, the real danger is in AI giving us what we really want: not what we say we want, or even necessarily what we think we want, but the subliminal goals that drive us, which it can detect buried in the enormous mass of data we feed it. Because, by and large, the modern mind appears to be pretty self-destructive…