Of course AI can have memory, and it is very good at memorizing. In fact, the responses AI gives to users' questions come from its memory, and a large part of the idea of self seems to be based on one's past memories. If a person lost all of his or her memories, the idea of self would be gone too.
Artificial intelligence does have memory, so it is likely that this could be used as a basis for creativity. The central aspects of consciousness may be harder to create. I would imagine simulated dream states as showing up as fragmented images and words. It would be rather surreal. — Jack Cummins
"disembodied spirits"? Do spirits exist? Of they did, what form of substance would they be?I did see a session of AI seance advertised. It would probably involve attempts to conjure up disembodied spirits or appear to do so. — Jack Cummins
As for the self-identity of informative devices, I too sometimes fall into the illusion that they have some sort of mental states. When my mobile phone disappears from my reach just when I need it desperately, I tend to think this bloody phone is trying to rebel against me by absconding without notice. When I find it under the desk, in the corner of the kitchen shelf, or even under the car seat, I then realise it was my forgetfulness or carelessness in losing track of where I last placed it, rather than the mobile phone's naughtiness.
As far as AI goes, it would be good to question it about its self and identity. I was rather tempted to try this on a phone call which was artificial intelligence. — Jack Cummins
Fair enough. We seem to agree that understanding, like intelligence, comes in degrees. When someone wakes up during surgery, something about the situation differs from what we currently understand to be happening, and figuring that out gives us a better understanding. There is, though, the old phrase we should consider: "You only get the right answer after making all possible mistakes." :smile:
The point is that people do things without knowing how they are done. This includes acts of creativity, aspects of intelligence, willed action, etc. — Manuel
If I am pointing at something, it could be an act, it could be an idea, it could be a calculation. I wouldn't say that a program is intelligent, nor a laptop. That's kind of like saying that when a computer loses power and shuts off, it is "tired". The people who designed the program and the laptop are. — Manuel
What does it mean for you to be tired if not having a lack of energy? What are you doing when you go to sleep and eat? What would happen if you couldn't find food? Wouldn't you "shut off" after the energy stores in your body were exhausted?
If I am pointing at something, it could be an act, it could be an idea, it could be a calculation. I wouldn't say that a program is intelligent, nor a laptop. That's kind of like saying that when a computer loses power and shuts off, it is "tired". The people who designed the program and the laptop are. — Manuel
I agree. Again, we seem to agree that intelligence comes in degrees, where various humans and animals possess various levels of intelligence commensurate with their exposure to the world and the structure and efficiency of their brains, and an individual person can be more or less intelligent in certain fields of knowledge commensurate with their exposure to those fields.
Behavior is an external reaction of an internal process. A behavior itself is neither intelligent nor unintelligent; it depends on what happened that led to that behavior.
What characteristics make a person intelligent? Many things: problem solving, inquisitiveness, creativity, etc. etc. There is also the quite real issue of different kinds of intelligence. I think that even having a sense of humor requires a certain amount of intelligence, a quick wit, for instance.
It's not trivial. — Manuel
No difference? A brain in isolation does very little. A mind needs a person, unless one is a dualist.
I don't see a difference between brain and mind. I think we both have similar brains and minds. My brain and mind are less similar to a dog's or cat's brain and mind. Brains and minds are the same thing, just seen from different views, in a similar way that Earth is the same planet even though it looks flat from its surface and spherical from space. — Harry Hindu
What does it mean to "program" something if not to design it to behave and respond in certain ways? Natural selection programmed humans via DNA. Humans are limited by their physiology and degree of intelligence, just as a computer/robot is limited by its design and intelligence (efficiency at processing inputs to produce meaningful outputs). People can be manipulated by feeding them false information. You learn to predict the behavior of people you know well and use that to some advantage, such as avoiding certain subjects when conversing with them.
But if they claimed it then it would be true? No. We program computers, not people. We can't program people, we don't know how to do so. Maybe in some far off future we could do so via genetics.
If someone is copying Hamlet word for word into another paper, does the copied Hamlet become a work of genius or is it just a copy? Hamlet shows brilliance, copying it does not. — Manuel
Sounds like you at a young age, when you were trying to learn a language.
I wonder if AI can understand and respond in a witty and appropriate way to user inputs in the form of metaphors or jokes. I doubt they can. They often used to respond in a totally inappropriate way even to normal questions, which didn't make sense. — Corvus
I wouldn't say that getting a joke is a sign you have mastered a language. The speaker or writer could be using words in new ways that the listener or reader has not heard or seen used that way before. Language evolves. New metaphors appear. We add words to our language. New meanings attach to existing words in the form of slang, etc. It seems to me that learning one's language is an ever-evolving process.
We often say that one of the sure signs of mastering a language is when one can fully utilize and understand the dialogue in jokes and metaphors. — Corvus
I wouldn't say that developers are pre-programming a computer to respond to ordinary language use; rather, they have programmed it to learn current ordinary language use, in the same way you were not programmed with a native language when you were born. You were born with the capacity to learn language. An LLM will evolve as our language evolves without the developers having to update the code. It will update its own model, just as you update yours when you encounter new uses of words or learn a different language.
It is perfectly fine when AI or ChatBot users take them as informational assistants for finding the data they are looking for. But you notice some folks talk as if they have human minds just because they respond in ordinary conversational language, which is pre-programmed by the AI developers and computer programmers. — Corvus
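To make that contrast concrete, here is a toy sketch in Python (purely illustrative, and nothing like how a real LLM is actually trained; the corpus strings and the word "wicked" are made up for the example): the learning code stays fixed, and only the statistics it extracts change when the language it is fed changes.

from collections import Counter, defaultdict

def learn_usage(corpus: str) -> dict:
    # Count which words tend to follow which; the "knowledge" lives in the data,
    # not in hand-written reply rules.
    words = corpus.lower().split()
    usage = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        usage[current][following] += 1
    return usage

# The same unchanged code, fed older and newer usage of the word "wicked".
old_usage = learn_usage("the wicked witch cast a wicked spell")
new_usage = learn_usage("that song is wicked good and the show was wicked fun")

print(old_usage["wicked"])  # Counter({'witch': 1, 'spell': 1})
print(new_usage["wicked"])  # Counter({'good': 1, 'fun': 1})

The same function, untouched, ends up "knowing" the newer slang sense of the word simply because the text it learned from changed.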
What is "desire" or "will power", if not an instinctive need to respond to stimuli that are obstacles to homeostasis? Sure, modern computers can only engage in achieving our goals, not their own. But that is a simple matter of design and programming.I am not sure the definition is logically, semantically correct or fit for use. There are obscurities and absurdities in the definition. First of all, it talks about achieving a goal. How could machines try to achieve a goal, when they have no desire or will power in doing so? — Corvus
Well, I did ask if intelligence is a thing or a process. I see it more as a process. If you see it more as a thing, then I encourage you to ask yourself the same questions you are asking me: where does intelligence start and end? I would say that intelligence, as a process, starts when you wake up in the morning and stops when you go to sleep.
The process of achieving a goal? Here again, what do you mean by process? Is intelligence always in the form of a process? Does it have a start and an end? So what is the start of intelligence? What is the end of intelligence? — Corvus
What role do qualia play in perception? Are colors, shapes, sounds, feelings, smells and tastes the only forms qualia take? Suppose we take the mind as a type of working memory that contains bits of information we refer to as qualia, and we give a robot a type of working memory in which the qualia may take different forms but do the same thing: inform the robot/organism of some state of affairs relative to its own body so that it can engage in meaningful actions. What exactly is missing, other than the form the qualia take in working memory?
I am sure that there are objective means of demonstrating sentience. Cell division and growth are aspects of this. Objects don't grow of their own accord and don't have DNA. The energy field of sentient beings is also likely to be different, although artificial intelligence and computers do have energy fields as well.
The creation of a nervous system may be possible and even the development of artificial eyes. However, the actual development of sensory perception is likely to be a lot harder to achieve, as an aspect of qualia that may not be completely reducible to bodily processes. — Jack Cummins
Fair enough. We seem to agree that understanding, like intelligence, comes in degrees. When someone wakes up during surgery, something about the situation differs from what we currently understand to be happening, and figuring that out gives us a better understanding. There is, though, the old phrase we should consider: "You only get the right answer after making all possible mistakes." — Harry Hindu
What does it mean for you to be tired if not having a lack of energy? What are you doing when you go to sleep and eat? What would happen if you couldn't find food? Wouldn't you "shut off" after the energy stores in your body were exhausted? — Harry Hindu
I have been using computers and robots. What does that say about what intelligence is? — Harry Hindu
A brain functioning in isolation is a mind without a person, and is an impossible occurrence, which is why I pointed out before that the distinction between empiricism and rationalism is a false dichotomy. The form your reason takes is sense data you have received via your interaction with the world. You can only reason, or think, in shapes, colors, smells, sounds, tastes and feelings. The laws of logic take the form of a relation between scribbles on a screen which corresponds to a process in your mind (a way of thinking). — Harry Hindu
Natural selection programmed humans via DNA. Humans are limited by their physiology and degree of intelligence, just as a computer/robot is limited by its design and intelligence (efficiency at processing inputs to produce meaningful outputs). — Harry Hindu
Desire or will power is an instinctive need which is the basis of all mental operations in living beings. Obviously AI lacks that mental foundation in its operation, because it is created by humans as machinery in structure and design. Therefore its operations are purely artificial and mechanistic procedures, customized and designed to assist with human chores.
What is "desire" or "will power", if not an instinctive need to respond to stimuli that are obstacles to homeostasis? Sure, modern computers can only engage in achieving our goals, not their own. But that is a simple matter of design and programming. — Harry Hindu
Intelligence is neither a process nor a thing. It is a mental capability of living beings with the organ called the brain.
Well, I did ask if intelligence is a thing or a process. I see it more as a process. If you see it more as a thing, then I encourage you to ask yourself the same questions you are asking me: where does intelligence start and end? I would say that intelligence, as a process, starts when you wake up in the morning and stops when you go to sleep. — Harry Hindu
I would suggest that you go back in your mind to the time when you were learning your native language and describe what it was like, how you learned to use the scribbles and sounds, etc., and then explain what is different about how AI is learning to use language. I would suggest that the biggest difference is the way AI and humans interact with the world, not in some underlying structure of organic vs inorganic. — Harry Hindu
Another critical point about AI's responses is that they are predictable within the technological limitations and preprogramming specs. To new users, they may appear to be intelligent and creative, but from the developers' point of view, the whole thing is pre-planned and predicted through debugging and simulation. — Corvus
“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)
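For what it's worth, that "one directive" can be sketched schematically in Python (a schematic only, not the code of any actual model; next_token_probabilities is a made-up stand-in for the statistical model that training would produce):

import random

def next_token_probabilities(context: str) -> dict:
    # Stand-in for the learned statistics: given the text so far, return a
    # probability for each candidate next token. A real LLM computes this with
    # billions of trained parameters; here it is just a uniform stub.
    vocabulary = ["the", "cat", "sat", "on", "mat", "."]
    return {token: 1.0 / len(vocabulary) for token in vocabulary}

def generate(prompt: str, max_tokens: int = 10) -> str:
    text = prompt
    for _ in range(max_tokens):
        probs = next_token_probabilities(text)           # predict what comes next...
        tokens, weights = zip(*probs.items())
        next_token = random.choices(tokens, weights)[0]  # ...sample a continuation...
        text += " " + next_token                         # ...append it, and repeat.
        if next_token == ".":
            break
    return text

print(generate("the cat"))

Everything such a model outputs comes from repeating that predict-sample-append step; the surprise reported above is that sheer scale in the learned statistics yields abilities this simple loop doesn't obviously suggest.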
Corvus, you are pretending to understand modern AI when you clearly don't. — wonderer1
I don't think so. Conceivability –/–> possibility.
It is questionable but it is a possibility. — Jack Cummins
Suppose "reflective self" (ego) is nothing but a metacognitive illusion¹ – hallucination – that persists in some kluge-like evolved brains? Meditative traditions focus on suspending / eliminating this (self-not self duality) illusion, no? e.g. Buddhist anattā, Daoist wúwéi, ... positive psychology's flow-state, etc.I generally see artificial intelligence as problematic as being without reflective self. — Jack Cummins
If I correctly understand his work, I suspect Spinoza would say "to create substance" is impossible.
Would it be possible to create Spinoza's form of substance itself in a system as opposed to in nature?
My scenario^^ makes immortality completely voluntary, so worrying about 'existing eternally' isn't warranted.
having to exist for eternity
Detachment could improve their efficiency in carrying out whatever tasks they are customised to conduct. Their limitation is the narrow field in which they can perform their customised tasks, but that narrowness also allows them to be more efficient, powerful and speedy at the given tasks.
The artificial intelligence may be detached but the question is whether detachment helps or hinders understanding. It could probably go either way. — Jack Cummins
What we can say is that the nature of AI intelligence is not the same as human intelligence in any form or shape, and that was the whole point of my posts. I have never claimed to understand AI to any degree or level, as @wonderer1 alleged in his out-of-the-blue post.
The beings of sentience may be led astray by too much emotion, and the detached could be unable to relate to the needs of the sentient beings. — Jack Cummins
Corvus, you are pretending to understand modern AI when you clearly don't. — wonderer1
Yes, I imagine – as 'a plausible' best-case scenario – 22nd/23rd-century* Earth as a global nature preserve with a much smaller (<1 billion) human population of 'conservationists, park rangers & eco-travelers' who are mostly settled in widely distributed (regional), AI-automated arcologies (and even space habitats, e.g. asteroid terraria) in order to minimize our ecological footprint as much as possible.
James Lovelock, in his final writings, spoke of the possibility of a race of artificially intelligent beings and some remaining human beings overseeing the natural world. — Jack Cummins
No more than "humans worshipping" the internet (e.g. social media, porn, gambling, MMORPGs). As an idolatrous species we don't even "worship" plumbing-sanitation, (atomic) clocks, electricity grids, phones, banking or other forms of (automated) infrastructure which dominate – make possible – modern life.Would it be a matter of humans 'worshipping' the artificial intelligent beings as the superior 'overlords'?
However, I suspect that the accelerating development and distribution of systems of metacognitive automation (soon-to-be AI agents rather than just AI tools (e.g. LLMs)) will also automate all macro 'human controls' before the last of the (tech/finance) oligarchs can pull the proverbial plugs; ergo ...
It may be that our role on this planet is not to worship God – but to [build it]. — Arthur C. Clarke
... my guess (hope): "AGI" (post-scarcity automation sub-systems —> Kardashev Type 1*) will serve and "ASI" (post-terrestrial megaengineering systems —> Kardashev Type 2) will master, and thereby post-scarcity h. sapiens (micro-agents) will be AGI's guests, passengers, wards, patients & protectees ... like all other terrestrial flora and fauna.*
Who[What] would be servant and master? — Jack Cummins
:fire:
Man is something that shall be overcome. Man is a rope, tied between beast and [the singularity] — a rope over an abyss. What is great in man is that he is a bridge and not an end. — Friedrich Nietzsche