"inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are. — flannel jesus
With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this. — Christoffer
Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anything — flannel jesus
I'm talking about an AI that passes all the time, even against people who know how to trip up AIs. We don't have anything like that yet. — RogueAI
This is simply wrong. — Christoffer
These are examples of what I'm talking about:
https://hai.stanford.edu/news/examining-emergent-abilities-large-language-models
https://ar5iv.labs.arxiv.org/html/2206.07682
https://www.jasonwei.net/blog/emergence
https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/ — Christoffer
Emergence does not equal AGI or self-awareness, but these abilities mimic what many neuroscience papers focus on regarding how our brain manifests abilities out of increasing complexity. And we don't yet know how combined models will function. — Christoffer
No one is claiming this. But equally, the problem is, how do you demonstrate it? Effectively the Chinese room problem. — Christoffer
There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.
— fishfry
The current predictive skills are extremely limited and far from human abilities, but they're still showing up, providing a foundation for further research. — Christoffer
But no one has said that the current LLMs in and of themselves will be able to reach AGI. Not sure why you strawman such conclusions into this? — Christoffer
Why does conventional hardware matter when it's the pathways in the network that are responsible for the computation? — Christoffer
The difference here is basically that standard operation is binary in pursuit of accuracy, but these models operate on predictions, closer to how physical systems do, which means you increase the computational power with a slight loss of accuracy. That they operate on classical software underneath does not change the fact that they operate differently as a whole system. Otherwise, why would these models vastly outperform standard computation for protein folding predictions? — Christoffer
Seeing as the current research in neuroscience points to emergence in complexities being partly responsible for much of how the brain operates, why wouldn't a complex computer system that simulates similar operation form emergent phenomena? — Christoffer
There's a huge difference between saying that "it forms intelligence and consciousness" and saying that "it generates emergent behaviors". There's no claim that any of these LLMs are conscious, that's not what this is about. And AGI does not mean conscious or intelligent either, only exponentially complex in behavior, which can form further emergent phenomena that we haven't seen yet. I'm not sure why you confuse that with actual qualia? The only claim is that we don't know where increased complexity and multimodal versions will further lead emergent behaviors. — Christoffer
This is just a false binary fallacy and also not correct. The programmable behavior is partly weights and biases within the training, but those are extremely basic and most specifics occur in operational filters before the output. If you prompt it for something, then there can be pages of instructions that it goes through in order to behave in a certain way. — Christoffer
In ChatGPT, you can even put in custom instructions that function as a pre-instruction that's always handled before the actual prompt, on top of what's already in hidden general functions. — Christoffer
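As an illustration of the mechanism described here, the following is a minimal sketch, assuming the OpenAI Python SDK, of a custom instruction being prepended as a system message so it is handled before the user's actual prompt; the model name and instruction text are placeholders, not anything from this thread.

```python
# Minimal sketch (assumes the OpenAI Python SDK >= 1.0 and OPENAI_API_KEY in the environment).
# A "custom instruction" is simply prepended as a system message, so it is processed
# before the user's actual prompt on every request.
from openai import OpenAI

client = OpenAI()

custom_instruction = "Always answer tersely and cite your sources."  # hypothetical pre-instruction
user_prompt = "Summarize the Chinese room argument."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": custom_instruction},  # handled before the prompt
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```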
That doesn't mean the black box is open. There's still a "black box" around the trained model: it's impossible to peer into how it works as a neural system. — Christoffer
This further just illustrates the misunderstandings about the technology. Making conjectures about the entire system and the technology based on these companies' bad handling of alignment does not reduce the complexity of the system itself or prove that it's "not a black box". It only proves that the practical application has problems, especially in the commercial realm. — Christoffer
Maybe read the entire argument first and sense the nuances. You're handling all of this as a binary agree or don't discussion, which I find a bit surface level. — Christoffer
Check the publications I linked to above. — Christoffer
Do you understand what I mean by emergence? What it means in research of complex systems and chaos studies, especially related to neuroscience. — Christoffer
Believe they start spouting racist gibberish to each other. I do assume you follow the AI news.
— fishfry
That's not what I'm talking about. I'm talking about multimodality. — Christoffer
Most "news" about AI is garbage on both sides. We either have the cryptobro-type dudes thinking we'll have a machine god a month from now, or the luddites on the other side who don't know anything about the technology but sure likes to cherry-pick the negatives and conclude the tech to be trash based on mostly just their negative feelings. — Christoffer
I'm not interested in such surface level discussion about the technology. — Christoffer
If you want to read more about emergence in terms of the mind, you can find my other posts around the forum about that. — Christoffer
Emergent behavior has its roots in neuroscience and the work on consciousness and the mind. — Christoffer
And since machine learning to form neural patterns is inspired by neuroscience and the way neurons work, there's a rational deduction to be found in how emergent behaviors, even rudimentary ones that we see in these current AI models, are part of the formation of actual intelligence. — Christoffer
This, when combined with evidence that the brain may be critical, suggests that ‘consciousness’ may simply arise out of the tendency of the brain to self-organize towards criticality. — Christoffer
The problem with your reasoning is that you use the lack of a final proven theory of the mind as proof against the most contemporary field of study in research about the mind and consciousness. — Christoffer
It's still making more progress than any previous theories of the mind and connects to a universality about physical processes. Processes that are partly simulated within these machine learning systems. And further, the problem is that your reasoning is just binary; it's either intelligent with qualia, or it's just a stupid machine. That's not how these things work. — Christoffer
I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.
— fishfry
I'm not saying AGI is imminent, but I wouldn't take the other side of the bet either. You have to be dead sure about a theory of the mind or theories of emergence to be able to claim either way, and since you don't seem to subscribe to any theory of emergence, then what's the theory that you use as a premise for concluding it "not possible"? — Christoffer
In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know.
— fishfry
How is that different from a human mind? — Christoffer
The only technical difference between a human brain and these systems in this context is that the AI systems are trained and locked into an unchanging neural map. The brain, however, is constantly shifting and training while operating. — Christoffer
If a system is created that can, in real time, train on a constant flow of audiovisual and data inputs, which in turn constantly reshapes its neural map, what would be the technical difference? The research on this is going on right now. — Christoffer
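To make the contrast concrete, here is a toy PyTorch sketch, not from the thread, of the difference being described: instead of freezing the weights after training, the model keeps updating them on every observation from a stream. Real continual-learning systems are far more involved; random tensors stand in for the audiovisual feed.

```python
# Toy sketch of "training while operating": weights are updated on every incoming
# observation instead of being frozen after an offline training run.
# Assumes PyTorch; random tensors stand in for a live sensor/data stream.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def incoming_stream(steps):
    """Stand-in for a continuous audiovisual/data feed with feedback signals."""
    for _ in range(steps):
        yield torch.randn(1, 16), torch.randn(1, 4)

for observation, target in incoming_stream(steps=1000):
    prediction = model(observation)     # act on the current input
    loss = loss_fn(prediction, target)  # compare against feedback
    optimizer.zero_grad()
    loss.backward()                     # reshape the "neural map" in real time
    optimizer.step()
```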
They can't reason their way through a situation they haven't been trained on.
— fishfry
The same goes for humans. — Christoffer
since someone chooses what data to train them on
— fishfry
They're not picking and choosing data; they try to maximize the amount of data, as more data means far better accuracy, just like any other probability system in math and physics. — Christoffer
Neural nets will never produce AGI.
— fishfry
Based on what? Do you know something about multimodal systems that others don't? Do you have some publication that proves this impossibility? — Christoffer
Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences? — Christoffer
As far as I can see, the most glaring difference is the real-time restructuring of the neural paths and the multimodal behavior of our separate brain functions working together. No current AI system, at this time, operates based on those expanded parameters, which means that any positive or negative conclusion on that requires further progress and development of these models. — Christoffer
Bloggers usually don't know shit and they do not operate through any journalistic praxis, while the promoters and skeptics are just driving up the attention market through the shallow Twitter brawls that pop up around a trending topic. — Christoffer
Are you seriously saying that this is the research basis for your conclusions and claims on a philosophy forum? :shade: — Christoffer
Maybe stop listening to bloggers and people on the attention market? — Christoffer
I'd rather you bring me some actual scientific foundation for the next premises to your conclusions. — Christoffer
I don't think this is a take that's likely correct. This super interesting writeup on an LLM learning to model and understand and play chess convinces me of the exact opposite of what you've said here:
https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation — flannel jesus
I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.
It's true that a big pile of flipping bits somehow implements a web browser or a chess program or a word processor or an LLM. But calling that emergence, as if that explains anything at all, is a cheat. — fishfry
I stand astonished. That's really amazing. — fishfry
Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings. — flannel jesus
more akin to a form of explicit reasoning that relies on an ability to attend to internal representations? — Pierre-Normand
Did you read the article I posted that we're talking about? — flannel jesus
I appreciate you taking the time to read it, and take it seriously. — flannel jesus
Ever since ChatGPT gained huge popularity a year or two ago with 3.5, there have been people saying LLMs are "just this" or "just that", and I think most of those takes miss the mark a little bit. "It's just statistics" or "it's just compression". — flannel jesus
And one important extra data point from that write-up is the bits about unique games. Games become unique, on average, about 10 moves in, and even when a game is entirely unique and wasn't in ChatGPT's training set, it STILL calculates legal and reasonable moves. I think that speaks volumes. — flannel jesus
LLMs have some capacities that "emerged" in the sense that they were acquired as a result of their training when it was not foreseen that they would acquire them. Retrospectively, it makes sense that the autoregressive transformer architecture would enable language models to acquire some of those high-level abilities, since having them promotes the primary goal of the training, which was to improve their ability to predict the next token in texts from the training data. (Some of those emergent cognitive abilities are merely latent until they are reinforced through training the base model into a chat or instruct variant.) — Pierre-Normand
One main point about describing properties or capabilities being emergent at a higher level of description is that they don't simply reduce to the functions that were implemented at the lower level of description. — Pierre-Normand
This is true regardless of there being an explanation available or not for their manifest emergence, — Pierre-Normand
and it applies both to the mental abilities that human beings have in relation to their brains and to the cognitive abilities that conversational AI agents have in relation to their underlying LLMs. — Pierre-Normand
The main point is that the fact that conversational AI agents (or human beings) can do things that aren't easily explained as a function of what their underlying LLMs (or brains) do at a "fundamental" level of material realization isn't a ground for denying that they are "really" doing those things. — Pierre-Normand
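For readers unfamiliar with the training objective referred to above, here is a minimal sketch, not from the thread: the only quantity optimized is the cross-entropy of the next token at each position, so any higher-level ability has to emerge because it helps lower that one loss. Random tensors stand in for real model logits and token ids.

```python
# Minimal sketch of the autoregressive next-token objective: training optimizes
# nothing but cross-entropy on the next token at every position.
# Assumes PyTorch; random tensors stand in for model outputs and training text.
import torch
import torch.nn.functional as F

vocab_size, seq_len, batch = 50_000, 128, 2
logits = torch.randn(batch, seq_len, vocab_size, requires_grad=True)  # one distribution per position
tokens = torch.randint(0, vocab_size, (batch, seq_len))               # training text as token ids

# Shift by one: the prediction at position t is scored against token t+1.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
loss.backward()  # gradients flow from this single objective alone
```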
This one developed a higher level of understanding than it was programmed for, if you look at it that way. — fishfry
No internal model of any aspect of the actual game. — fishfry
the "Chinese room" isn't a test to pass — flannel jesus
"We have no idea what's happening, but emergence is a cool word that obscures this fact." — fishfry
I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about. — fishfry
But calling that emergence, as if that explains anything at all, is a cheat. — fishfry
"Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear. — fishfry
Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight. — fishfry
I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.
If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.
A new idea is needed. — fishfry
Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI. — fishfry
I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires. — fishfry
They're a very clever way to do data mining. — fishfry
(1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s. — fishfry
By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits. — fishfry
Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years? — fishfry
I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection... — fishfry
Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means. — fishfry
You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates. — fishfry
You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."
So there's emergence, and then there's exponential, which means that it "can form further emergent phenomena that we haven't seen yet."
You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post. — fishfry
Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making? — fishfry
You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management. — fishfry
You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s. — fishfry
"Impossible to peer into." I call that bullpucky. Intimidation by obsurantism. — fishfry
Every line of code was designed and written by programmers who entirely understood what they were doing. — fishfry
And every highly complex program exhibits behaviors that surprise its coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long. — fishfry
You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say "No, there's a secret INNER black box."
I am not buying it. Not because I don't know that large, complex software systems don't often exhibit surprising behavior. But because I don't impute mystical incomprehensibility to computer programs. — fishfry
Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?
That will save us both a lot of time.
I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.
I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all. — fishfry
I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t. — fishfry
Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."
Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time. — fishfry
And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links. — fishfry
Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me. — fishfry
Is there anything I've written that leads you to think that I want to read more about emergence? — fishfry
Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting. — fishfry
My point exactly. In this context, emergence means "We don't effing know." That's all it means. — fishfry
I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords. — fishfry
You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as iff you think that means anything. — fishfry
I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.
I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it. — fishfry
I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.
It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case. — fishfry
I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post. — fishfry
But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge to you get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there. — fishfry
Ah. The first good question you've posed to me. Note how jargon-free it was. — fishfry
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data. — fishfry
But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening. — fishfry
Nobody knows what the secret sauce of human minds is. — fishfry
Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea. — fishfry
How can you say that? Reasoning our way through novel situations and environments is exactly what humans do. — fishfry
That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs. — fishfry
How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time? — fishfry
Humans are not "probability systems in math or physics." — fishfry
Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking. — fishfry
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data. — fishfry
Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences? — Christoffer
Yes, but apparently you can't see that. — fishfry
I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity. — fishfry
Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do you? — fishfry
You're right, I lack exponential emergent multimodality. — fishfry
I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is this, credential day? What is your deal? — fishfry
You've convinced me to stop listening to you. — fishfry
No internal model of any aspect of the actual game.
— fishfry
I feel like you might have missed some important paragraphs in the article. Did you notice the heat map pictures? Did you read all the paragraphs around that? A huge part of the article is very much exploring the evidence that gpt really does model the game. — flannel jesus
Those heat map things are amazing. — fishfry
If you don't give it a mental model of the game space, it builds one of its own. — fishfry
↪fishfry okay so I guess I'm confused why, after all that, you still said
No internal model of any aspect of the actual game — flannel jesus
For full clarity, and I'm probably being unnecessarily pedantic here, it's not necessarily fair to say that's all they did. That's all their goal was, that's all they were asked to do - BUT what all of this should tell you, in my opinion, is that when a neural net is asked to achieve a task, there's no telling HOW it's actually going to achieve that task. — flannel jesus
In order to achieve the task of auto completing the chess text strings, it seemingly did something extra - it built an internal model of a board game which it (apparently) reverse engineered from the strings. (I actually think that's more interesting than its relatively high chess rating, the fact that it can reverse engineer the rules of chess seeing nothing but chess notation). — flannel jesus
So we have to distinguish, I think, between the goals it was given, and how it accomplished those goals. — flannel jesus
Apologies if I'm just repeating the obvious. — flannel jesus
You're just proving yourself to be an uneducated person who clearly finds pride in having radical uneducated opinions. — Christoffer
Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know. — fishfry
And thanks so much for pointing me at that example. It's a revelation. — fishfry
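Since linear probes come up here, a toy sketch, not from the thread, of the idea used in the chess write-up: a deliberately simple linear classifier is trained on a frozen model's hidden activations to test whether some property (for instance, one board square's contents) can be read off them. Activations and labels below are random stand-ins.

```python
# Toy linear probe: train one linear layer on a frozen network's hidden activations
# to test whether they linearly encode some property (e.g. a board square's contents).
# Assumes PyTorch; random tensors stand in for real activations and labels.
import torch
import torch.nn as nn

hidden_dim, num_classes, n_samples = 512, 3, 4096     # e.g. square is empty / white / black
activations = torch.randn(n_samples, hidden_dim)      # frozen model's internal states (stand-in)
labels = torch.randint(0, num_classes, (n_samples,))  # ground-truth property per sample (stand-in)

probe = nn.Linear(hidden_dim, num_classes)            # the probe itself stays simple on purpose
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(probe(activations), labels)
    loss.backward()
    optimizer.step()

# Accuracy well above chance would suggest the activations linearly encode the property,
# which is the kind of evidence the write-up uses for an internal board model.
accuracy = (probe(activations).argmax(dim=1) == labels).float().mean()
print(f"probe accuracy: {accuracy:.2f}")
```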
When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.
You're unpleasant, so I won't be interacting with you further. — fishfry
Regarding the problem of the Chinese room, I think it might be safe to accede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it influences this argument for machine subjectivity, just that its nature might be different from ours (lack of emotions, for instance). — Nemo2124
Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before they could just be treated as inert objects. Even if we work with AI as if it's a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider them to be totally created by and subordinated to humans - it is in fact, machine subjectivity that is at the epicentre of selfhood, a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.
Here I think we are bordering on the field of metaphysics, but what certain philosophies indicate about consciousness arising from inert matter, surely this is the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neurons firing in tandem or in synchrony. I think, therefore, I am. If machines seem to be co-opting aspects of thinking, e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs from ours (hence, the Chinese room). — Nemo2124
Even our own subjectivity still raises some philosophical and metaphysical questions. We simply start from being subjects. Hence it's no wonder we have problems putting "subjectivity" into our contraptions called computers. But we still cannot know if they have subjectivity. — Christoffer
When Alan Turing talked about the Turing test, there was no attempt to answer the deep philosophical question; he just went with the thinking that a good enough fake is good enough for us. And basically, as AI is still a machine, this is enough for us. And this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be closer to answering the philosophical questions. — ssu