In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject. — Nemo2124
...when they reach human level intelligence, and we put them in cute robots, we're going to think they're more than machines. That's just how humans are wired. — RogueAI
Don't you think we're pretty close to having something pass the Turing Test? — RogueAI
This would require solving the Problem of Other Minds, which seems insolvable. — RogueAI
I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.
Pragmatically speaking, I don't see why 'AI' can't find a vernacular-equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI-proponents, to advance the case for it that you among others make, in particular, to induce a kind of empathy. — mcdoodle
In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. Now, this may be only linguistic, but other than through language, how else can we recognise the activity of the subject? This also raises the question: what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude that there is an opposition or dialectic here between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject. — Nemo2124
I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened; they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like. — fishfry
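The "statistically likely response" picture described above can be illustrated with a toy sketch. This is a hypothetical bigram counter written for this discussion, not how any real LLM works, but it makes the "looks backward at training data" claim concrete: the model can only ever emit words it has already seen follow other words.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows which in a small corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # "Inference": return the statistically most frequent successor
    # seen during training; nothing outside the corpus can appear.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat", since "cat" follows "the" most often
```

Real networks replace the count table with learned weights and can generalize between similar inputs, which is where the two sides of this debate diverge; the sketch only captures the skeptical framing.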
That doesn't explain emergent phenomena in simple machine-learned neural networks. — Christoffer
We don't know what happens at certain levels of complexity; we don't know what emerges, since we can't trace it back to any certain origin in the "black box". — Christoffer
While that doesn't mean any emergence of true AI, — Christoffer
it still amounts to behavior similar to ideas in neuroscience about emergence: how complex systems at certain criticalities exhibit new behaviors. — Christoffer
And we don't yet know how AGI-like compositions of standard neural systems would interact with each other, or what would happen when pathways link different operating models together as a higher-level neural system. — Christoffer
We know we can generate an AGI as a "mechanical" simulation of generalized behavior, but we still don't know what emergent behaviors arise from such a composition. — Christoffer
I find it logically reasonable that since ultra-complex systems in nature, like our brains, developed through an extreme number of iterations over long periods of time, and through evolutionary changes based on different circumstances, they "grew" into existence rather than being directly formed. — Christoffer
Even if the current forms of machine learning systems are rudimentary, it may still be the case that machine learning and neural networks are the way forward, but that we need to fine-tune how they're formed in ways that mimic the more natural progression and growth of naturally occurring complexity. — Christoffer
The problem may not be the technology or method itself, but rather the strategy for implementing and using the technology, so that the end result forms with similarly high complexity while staying aligned with the purpose we form it towards. — Christoffer
The problem is that most debates about AI online today just reference past models and functions, but rarely look at the actual papers coming out of current computer science. And with neuroscience beginning to see correlations between how these AI systems behave and the neurological functions of our own brains, there are similarities we shouldn't just dismiss. — Christoffer
There are many examples in science of a rudimentary, common method or thing that, in another context, revolutionized technology and society. Machine learning systems might very well be exactly how we achieve true AI, but we don't yet truly know how; we're basically fumbling in the dark, waiting for the moment we accidentally leave the petri dish open overnight and grow mold. — Christoffer
Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. — fishfry
There's no intelligence, let alone self-awareness being demonstrated. — fishfry
There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening. — fishfry
This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware. — fishfry
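The pencil-and-paper point above can be made concrete: a trained net's forward pass is nothing but ordinary multiplications and additions. A minimal sketch of a single neuron, with made-up weights (not any real model's parameters):

```python
# One neuron with fixed, made-up weights: the whole "inference" step
# is arithmetic you could carry out by hand on paper.
weights = [0.5, -1.0, 2.0]
bias = 0.1

def neuron(inputs):
    total = bias
    for w, x in zip(weights, inputs):
        total += w * x          # multiply-accumulate, nothing more
    return max(0.0, total)      # ReLU activation

print(neuron([1.0, 2.0, 0.5]))  # 0.1 + 0.5 - 2.0 + 1.0 = -0.4, ReLU gives 0.0
```

A full network is just many of these composed in layers, which supports the claim that it is a conventional computation; whether anything interesting "emerges" from composing billions of them is exactly what the thread disputes.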
Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software. — fishfry
And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do. — fishfry
So I didn't need to explain this, you already agree. — fishfry
Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.
What "new behaviors" to you refer to? A chatbot is a chatbot. — fishfry
I believe they start spouting racist gibberish to each other. I do assume you follow the AI news. — fishfry
Well, if we don't know, what are you claiming?
You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard. — fishfry
I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will. — fishfry
In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know. — fishfry
They know everything that's happened, but nothing about what's happening. — fishfry
They can't reason their way through a situation they haven't been trained on. — fishfry
since someone chooses what data to train them on — fishfry
Neural nets will never produce AGI. — fishfry
You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output. — fishfry
I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick. — fishfry
Neural nets are the wrong petri dish.
I appreciate your thoughtful comments, but I can't say you moved my position. — fishfry
You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc... — Nemo2124
Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.
There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening. — fishfry
Where do we draw the line about subjectivity? — Christoffer
they develop in the end a degree of subjectivity that can be given recognition through language. — Nemo2124
In order for a machine to have subjectivity, its consciousness would at least need to develop over time in the same manner as a brain did through evolution. — Christoffer
Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it. — flannel jesus
Therefore we can deduce either that all matter has some form of subjectivity and qualia, or that it emerges at some point of complexity in the evolution of life. — Christoffer
No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. The logic you've provided doesn't prove that. — flannel jesus
You've just invented a hard rule that all conscious beings had to evolve consciousness; that rule didn't come from science. It's not a scientifically discovered fact, is it? — flannel jesus
"we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me. — flannel jesus
if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"? — flannel jesus
The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution. — flannel jesus
Yes, but my argument was that the only possible path of logic that we have is through looking at the formation of our own consciousness and evolution, because that is a fact. — Christoffer
Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation. — flannel jesus
"Actually, even though evolultion is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back." — flannel jesus
And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."
And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution. — flannel jesus
That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer? — flannel jesus
No, the reason something can walk is that evolutionary processes formed both the physical parts and the "operation" of those physical parts. — Christoffer
so robots can't walk? — flannel jesus