• Christoffer
    2k
    "inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are.flannel jesus

    Why is that moving a goal post? It's literally what engineers use today to design things. Like how they designed commercial drones using evolutionary iterations to find the best balanced, light and aerodynamic form for it. They couldn't design it by "just designing it" any more than the first people who attempted flight could do so by flapping planks with feathers on them.

    With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this. It's pointless.
  • flannel jesus
    1.8k
    With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this.Christoffer

    Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anything
  • Christoffer
    2k
    Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anythingflannel jesus

    You have demonstrated even less. You've made no real argument other than saying that "we can walk because we have legs", a conclusion so banal in its shallow simplicity that it could be uttered by a five-year-old.

    You avoid actually making arguments in response to the questions asked, and judging by your answers you don't even seem to understand what I'm writing. When I explain why robots "can't just walk", you simply utter "so robots can't walk?". Why bother putting time into a discussion with this low-quality attitude? Demonstrate a better level of discourse first.
  • flannel jesus
    1.8k
    I'll start demonstrating that by informing you of something you apparently do not know: the "Chinese room" isn't a test to pass
  • Nemo2124
    29


    Regarding the problem of the Chinese room, I think it might be safe to concede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it influences this argument for machine subjectivity, just that its nature might be different from ours (lack of emotions, for instance).

    Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before, they could just be treated as inert objects. Even if we work with AI as if it's a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider them to be totally created by and subordinated to humans - it is, in fact, machine subjectivity that is at the epicentre of selfhood; a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.

    Here I think we are bordering on the field of metaphysics, but as for what certain philosophies indicate about consciousness arising from inert matter, surely this is the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neurons firing in tandem or in synchrony? I think, therefore, I am. If machines seem to be co-opting aspects of thinking, e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs from ours (hence, the Chinese room).
  • fishfry
    3.4k
    I'm talking about an AI that passes all the time, even against people who know how to trip up AIs. We don't have anything like that yet.RogueAI

    Agreed. My point is that the humans are the weak link.

    Another interesting point is deception. For Turing, the ability to fool people about one's true nature is the defining attribute of intelligence. That tells us more about Turing, a closeted gay man in 1940s-50s England, than it does about machine intelligence.

    What if we had a true AGI that happened to be honest? "Are you human?" "No, I'm an AI running such and so software on such and so hardware." It could never pass the test even if it were self-aware.
  • fishfry
    3.4k
    This is simply wrong.Christoffer

    I take emergence to be a synonym for, "We have no idea what's happening, but emergence is a cool word that obscures this fact."


    I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.

    It's true that a big pile of flipping bits somehow implements a web browser or a chess program or a word processor or an LLM. But calling that emergence, as if that explains anything at all, is a cheat.

    Emergence does not equal AGI or self-awareness, but these systems mimic what many neuroscience papers are focused on with regard to how our brain manifests abilities out of increasing complexity. And we don't yet know how combined models will function.Christoffer

    "Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear.

    No one is claiming this. But equally, the problem is, how do you demonstrate it? Effectively the Chinese room problem.Christoffer

    Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight.

    There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.
    — fishfry

    The current predictive skills are extremely limited and far from human abilities, but they're still showing up, prompting a foundation for further research.
    Christoffer

    I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.

    If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.

    A new idea is needed.

    But no one has said that the current LLMs in and of themselves will be able to reach AGI. Not sure why you strawman such conclusions?Christoffer

    Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI.

    Why does conventional hardware matter when it's the pathways in the network that are responsible for the computation?Christoffer

    I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires.

    The difference here is basically that standard operation is binary in pursuit of accuracy, but these models operate on predictions, closer to how physical systems do, which means you increase the computational power with a slight loss of accuracy. That they operate on classical software underneath does not change the fact that they operate differently as a whole system. Otherwise, why would these models vastly outperform standard computation for protein folding predictions?Christoffer

    They're a very clever way to do data mining. I didn't say I wasn't impressed with their achievements. Only that (1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s.

    By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits.

    I do oppose this mysterianistic attitude on the part of many neural net proponents. It clouds people's judgment. How did black George Washington show up on Google's AI? Not because it "operates different as a whole system." Rather, it's because management told the programmers to tune it that way, and they did.

    Neural nets are deterministic programs operating via principles that were well understood 70 years ago.
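
    To make that concrete, here is a minimal toy sketch (illustrative only, with made-up weights, not any production system): a trained net's forward pass is fixed arithmetic, so the same input gives the same output every single time.

    ```python
    # Minimal sketch: a trained net's forward pass is plain, deterministic
    # arithmetic. The weights below are made-up numbers, not a real model.
    import numpy as np

    W1 = np.array([[0.2, -0.5], [0.8, 0.1]])   # "trained" weights, fixed
    b1 = np.array([0.0, 0.3])
    W2 = np.array([0.7, -1.2])
    b2 = 0.05

    def forward(x):
        hidden = np.maximum(0, W1 @ x + b1)    # ReLU layer
        return W2 @ hidden + b2                # linear output

    x = np.array([1.0, 2.0])
    print(forward(x))   # identical result on every run: no mystery, just arithmetic
    ```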

    Stop the neural net mysterianism! That's my motto for today.

    they operate differently as a whole system
    Yes, and why would a system that is specifically very good at handling extreme complexities not begin to mimic complexities in the physical world?Christoffer

    When did I ever claim that large, complex programs aren't good at mimicking the physical world? On the contrary, they're fantastic at it.

    I don't mean to downplay the achievements of neural nets. Just want to try to get people to dial back the hype ("AGI is just around the corner") and the mysterianism ("they're black boxes and even their programmers can't understand them.")




    Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years?

    Emergence means, "We don't understand what's going on, but emergence is a cool word that will fool people." And it does.

    Seeing as the current research in neuroscience points to emergence in complexities being partly responsible for much of how the brain operates, why wouldn't a complex computer system that simulates similar operations form emergent phenomena?Christoffer

    Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means.

    You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates.

    There's a huge difference between saying that "it forms intelligence and consciousness" and saying that "it generates emergent behaviors". There's no claim that any of these LLMs are conscious, that's not what this is about. And AGI does not mean conscious or intelligent either, only exponentially complex in behavior, which can form further emergent phenomena that we haven't seen yet. I'm not sure why you confuse that with actual qualia? The only claim is that we don't know where increased complexity and multimodal versions will further lead emergent behaviors.Christoffer

    You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."

    So there's emergence, and then there's exponential, which means that it "can form further emergent phenomena that we haven't seen yet."

    You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post.

    This is just a false binary fallacy and also not correct. The programmable behavior is partly weights and biases within the training, but those are extremely basic and most specifics occur in operational filters before the output. If you prompt it for something, then there can be pages of instructions that it goes through in order to behave in a certain way.Christoffer

    Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making?

    In ChatGPT, you can even put in custom instructions that function as a pre-instruction that's always handled before the actual prompt, on top of what's already in hidden general functions.Christoffer

    You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management.
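
    For what it's worth, here is a rough sketch of the mechanism the quote describes: a standing custom instruction is simply prepended as a system message ahead of every user prompt. The call shape and model name follow the OpenAI Python client's chat-completions style, but treat those details as assumptions and check the current documentation.

    ```python
    # Rough sketch (assumed API details): a custom instruction is prepended as a
    # system message that is processed before the user's actual prompt.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    custom_instructions = "Always answer in plain language and cite sources."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, purely for illustration
        messages=[
            {"role": "system", "content": custom_instructions},  # handled first
            {"role": "user", "content": "Summarize the Chinese room argument."},
        ],
    )
    print(response.choices[0].message.content)
    ```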

    That doesn't mean the black box is open. There's still a "black box" for the trained model in which it's impossible to peer into how it works as a neural system.Christoffer

    You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s.

    "Impossible to peer into." I call that bullpucky. Intimidation by obsurantism.

    Every line of code was designed and written by programmers who entirely understood what they were doing.

    And every highly complex program exhibits behaviors that surprise their coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long. They do not go, "Oh, this black box is inscrutable, incomprehensible. We better just pray to the silicon god."

    It doesn't work that way.

    This further just illustrates the misunderstandings about the technology. Making conjectures about the entire system and the technology based on these company's bad handling of alignment does not reduce the complexity of the system itself or prove that it's "not a black box". It only proves that the practical application has problems, especially in the commercial realm.Christoffer

    You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say "No, there's a secret INNER black box."

    I am not buying it. Not because I don't know that large, complex software systems often exhibit surprising behavior, but because I don't impute mystical incomprehensibility to computer programs.

    Maybe read the entire argument first and sense the nuances. You're handling all of this as a binary agree or don't discussion, which I find a bit surface level.Christoffer

    Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?

    That will save us both a lot of time.

    I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.

    I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all.

    Check the publications I linked to above.Christoffer

    I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t.

    Do you understand what I mean by emergence? What it means in research of complex systems and chaos studies, especially related to neuroscience.Christoffer

    Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."

    Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time.

    Believe they start spouting racist gibberish to each other. I do assume you follow the AI news.
    — fishfry

    That's not what I'm talking about. I'm talking about multimodality.
    Christoffer

    Exponential emergent multimodality of the inner black box.

    Do you have the slightest self-awareness that you are spouting meaningless buzzwords at this point?

    Do you know what multimodal freight is? It's a technical term in the shipping industry that means trains, trucks, airplanes, and ships.

    It's not deep.

    Most "news" about AI is garbage on both sides. We either have the cryptobro-type dudes thinking we'll have a machine god a month from now, or the luddites on the other side who don't know anything about the technology but sure likes to cherry-pick the negatives and conclude the tech to be trash based on mostly just their negative feelings.Christoffer

    And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links.

    This is going nowhere.

    I'm not interested in such surface level discussion about the technology.Christoffer

    Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me.


    If you want to read more about emergenceChristoffer

    Oh man you are killin' me.

    Is there anything I've written that leads you to think that I want to read more about emergence?


    in terms of the mind you can find my other posts around the forum about that.Christoffer

    Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting.

    Emergent behavior has its roots in neuroscience and the work on consciousness and the mind.Christoffer

    My point exactly. In this context, emergence means "We don't effing know." That's all it means.

    And since machine learning to form neural patterns is inspired by neuroscience and the way neurons work, there's a rational deduction to be found in how emergent behaviors, even rudimentary ones that we see in these current AI models, are part of the formation of actual intelligence.Christoffer

    I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords.

    This, when combined with evidence that the brain may be critical, suggests that ‘consciousness’ may simply arise out of the tendency of the brain to self-organize towards criticality.Christoffer

    You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as iff you think that means anything.

    The problem with your reasoning is that you use the lack of a final proven theory of the mind as proof against the most contemporary field of study in research about the mind and consciousness.Christoffer

    I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.

    I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it.

    It's still making more progress than any previous theories of the mind and connects to a universality about physical processes. Processes that are partly simulated within these machine learning systems. And further, the problem is that your reasoning is just binary; it's either intelligent with qualia, or it's just a stupid machine. That's not how these things work.Christoffer

    I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.

    It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case.

    I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.
    — fishfry

    I'm not saying AGI is imminent, but I wouldn't take the other side of the bet either. You have to be dead sure about a theory of the mind or theories of emergence to be able to claim either way, and since you don't seem to aspire to any theory of emergence, then what's the theory that you use as a premiss for concluding it "not possible"?
    Christoffer

    I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post.

    But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge you to get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there.

    In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know.
    — fishfry

    How is that different from a human mind?
    Christoffer

    Ah. The first good question you've posed to me. Note how jargon-free it was.

    I don't know for sure. Nobody knows. But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data.

    But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening.

    The only technical difference between a human brain and these systems in this context is that the AI systems are trained and locked into an unchanging neural map. The brain, however, is constantly shifting and training while operating.Christoffer

    Interesting idea. Do they have neural nets that do that? My understanding is that they train the net, and after that, the execution is deterministic. Perhaps you have a good research idea there. Nobody knows what the secret sauce of human minds is.

    If a system is created that can, in real time, train on a constant flow of audiovisual and data information inputs, which in turn constantly reshape its neural map. What would be the technical difference? The research on this is going on right now.Christoffer

    Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea.
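
    To pin down what I find interesting about it, here is a toy sketch of online updating (purely illustrative, not any real system): take one small gradient step per incoming example instead of freezing the weights after training.

    ```python
    # Toy sketch of "updates its weights in real time": one online SGD step per
    # incoming example. Real continual-learning systems are far more involved
    # (stability, catastrophic forgetting, etc.); this only shows the idea.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)            # weights keep changing while the system runs
    lr = 0.01

    def stream_of_examples():
        while True:                   # stand-in for a live stream of inputs
            x = rng.normal(size=3)
            y = 2.0 * x[0] - x[1]     # hidden target the system keeps tracking
            yield x, y

    for step, (x, y) in enumerate(stream_of_examples()):
        pred = w @ x
        grad = 2 * (pred - y) * x     # gradient of squared error, linear model
        w -= lr * grad                # the "neural map" is reshaped on every input
        if step >= 1000:
            break
    print(w)                          # drifts toward [2, -1, 0] as data streams in
    ```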

    They can't reason their way through a situation they haven't been trained on.
    — fishfry

    The same goes for humans.
    Christoffer

    How can you say that? Reasoning our way through novel situations and environments is exactly what humans do.

    That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs.

    How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time?

    since someone chooses what data to train them on
    — fishfry

    They're not picking and choosing data, they try to maximize the amount of data as more data means far better accuracy, just like any other probability system in math and physics.
    Christoffer

    Humans are not "probability systems in math or physics."

    Neural nets will never produce AGI.
    — fishfry

    Based on what? Do you know something about multimodal systems that others don't? Do you have some publication that proves this impossibility?
    Christoffer

    Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking.

    Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences?Christoffer

    Yes, but apparently you can't see that.


    As far as I can see the most glaring difference is the real time re-structuring of the neural paths and multimodal behavior of our separate brain functions working together. No current AI system, at this time, operates based on those expanded parameters, which means that any positive or negative conclusion for that require further progress and development of these models.Christoffer

    I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity.

    Bloggers usually don't know shit and they do not operate through any journalistic praxis. While the promoters and skeptics are just driving up the attention market through the shallow twitter brawls that pop up due to a trending topic.Christoffer

    Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do you?

    Are you seriously saying that this is the research basis for your conclusions and claims on a philosophy forum? :shade:Christoffer

    You're right, I lack exponential emergent multimodality.

    I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is this, credential day? What is your deal?

    Maybe stop listening to bloggers and people on the attention market?Christoffer

    You've convinced me to stop listening to you.

    I'd rather you bring me some actual scientific foundation for the premises to your conclusions.Christoffer

    It's been nice miscommunicating with you. I'm sure you must feel the same about me.

    tl;dr: Someone said AGI is imminent. I said I'd gladly take the other side of that bet. I reiterate that. Also, when it comes to AGI and a theory of mind, neural nets are like climbing a tree to reach the moon. You apparently seem to be getting closer, but it's a dead end. And, the most important point: buzzwords are a sign of fuzzy thinking.

    I appreciate the chat, I will say that you did not move my position.
  • fishfry
    3.4k
    I don't think this is a take that's likely correct. This super interesting writeup on an LLM learning to model and understand and play chess convinces me of the exact opposite of what you've said here:

    https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
    flannel jesus

    Less Wrong? Uh oh that's already a bad sign. I'll read it though. I do allow for the possibility that I could be wrong. I just finished replying to a lengthy screed from @Christoffer so I'm willing to believe the worst about myself at this point. I'm neither exponential nor emergent nor multimodal so what the hell do I know. The Less Wrong crowd, that's too much Spock and not enough Kirk. Thanks for the link. I'm a little loopy at the moment from responding to Christoffer.

    ps -- Clicked the link. "This model is only trained to predict the next character in PGN strings (1.e4 e5 2.Nf3 ...) and is never explicitly given the state of the board or the rules of chess. Despite this, in order to better predict the next character, it learns to compute the state of the board at any point of the game, and learns a diverse set of rules, including check, checkmate, castling, en passant, promotion, pinned pieces, etc. "

    I stand astonished. That's really amazing.
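
    To make the objective concrete for myself, here is a toy sketch of "predict the next character of a PGN string". It is only a character-bigram counter over a tiny made-up corpus, nothing like the transformer in the post, but the training signal is the same kind of thing.

    ```python
    # Toy sketch of the objective only: predict the next character of a PGN
    # string. A bigram counter, not a transformer; the two "games" are made up.
    from collections import Counter, defaultdict

    games = [
        "1.e4 e5 2.Nf3 Nc6 3.Bb5 a6",
        "1.d4 d5 2.c4 e6 3.Nc3 Nf6",
    ]

    counts = defaultdict(Counter)
    for pgn in games:
        for prev, nxt in zip(pgn, pgn[1:]):
            counts[prev][nxt] += 1            # count which character follows which

    def predict_next(prev_char):
        # most likely next character seen after prev_char in the corpus
        return counts[prev_char].most_common(1)[0][0] if counts[prev_char] else None

    print(predict_next("N"))   # e.g. 'f' or 'c', learned purely from the strings
    ```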
  • RogueAI
    2.8k
    What if we had a true AGI that happened to be honest? "Are you human?" "No, I'm an AI running such and so software on such and so hardware." It could never pass the test even if it were self-aware.fishfry

    Good point.
  • Pierre-Normand
    2.4k
    I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.

    It's true that a big pile of flipping bits somehow implements a web browser or a chess program or a word processor or an LLM. But calling that emergence, as if that explains anything at all, is a cheat.
    fishfry

    LLMs have some capacities that "emerged" in the sense that they were acquired as a result of their training when it was not foreseen that they would acquire them. Retrospectively, it makes sense that the autoregressive transformer architecture would enable language models to acquire some of those high-level abilities since having them promotes the primary goal of the training, which was to improve their ability to predict the next token in texts from the training data. (Some of those emergent cognitive abilities are merely latent until they are reinforced through training the base model into a chat or instruct variant.)

    One main point about describing properties or capabilities being emergent at a higher level of description is that they don't simply reduce to the functions that were implemented at the lower level of description. This is true regardless of there being an explanation available or not for their manifest emergence, and it applies both to the mental abilities that human beings have in relation to their brains and to the cognitive abilities that conversational AI agents have in relation to their underlying LLMs.

    The main point is that the fact that conversational AI agents (or human beings) can do things that aren't easily explained as a function of what their underlying LLMs (or brains) do at a "fundamental" level of material realization isn't a ground for denying that they are "really" doing those things.
  • flannel jesus
    1.8k
    I stand astonished. That's really amazing.fishfry

    I appreciate you taking the time to read it, and take it seriously.

    Ever since ChatGPT gained huge popularity a year or two ago with 3.5, there have been people saying LLMs are "just this" or "just that", and I think most of those takes miss the mark a little bit. "It's just statistics" or "it's just compression".

    Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.

    And one important extra data point from that write up is the bits about unique games. Games become unique, on average, about 10 moves in, and even when a game is entirely unique and wasn't in ChatGPT's training set, it STILL calculates legal and reasonable moves. I think that speaks volumes.
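
    For anyone who wants to poke at that claim themselves, here is a small sketch using the python-chess library (the game prefix and the proposed move are made up): replay a prefix and test whether a model's proposed continuation is actually legal in that position.

    ```python
    # Small sketch: replay a (made-up) game prefix with python-chess and check
    # whether a model's proposed next move is legal in the resulting position.
    import chess

    prefix = ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4", "Nf6"]  # hypothetical game
    board = chess.Board()
    for san in prefix:
        board.push_san(san)

    proposed = "O-O"   # stand-in for whatever move the language model outputs next
    try:
        move = board.parse_san(proposed)
        print("legal:", move in board.legal_moves)   # True for this position
    except ValueError:
        print("illegal or unparseable move")
    ```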
  • Pierre-Normand
    2.4k
    Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.flannel jesus

    Indeed! The question still arises - in the case of chess, is the language model's ability to "play" chess by completing PGN records more akin to the human ability to grasp the affordances of a chess position or more akin to a form of explicit reasoning that relies on an ability to attend to internal representations? I think it's a little bit of both but there currently is a disconnect between those two abilities (in the case of chess and LLMs). A little while ago, I had a discussion with Claude 3 Opus about this.
  • flannel jesus
    1.8k
    more akin to a form of explicit reasoning that relies on an ability to attend to internal representations?Pierre-Normand

    Did you read the article I posted that we're talking about?
  • Pierre-Normand
    2.4k
    Did you read the article I posted that we're talking about?flannel jesus

    Yes, thank you, I was also quite impressed by this result! But I was already familiar with the earlier paper about the Othello game that is also mentioned in the LessWrong blog post that you linked. I also had had a discussion with Llama-3-8b about it in which we also relate this with the emergence of its rational abilities.
  • fishfry
    3.4k
    I appreciate you taking the time to read it, and take it seriously.flannel jesus

    Beneath my skepticism of AI hype, I'm a big fan of the technology. Some of this stuff is amazing. Also frightening. Those heat map things are amazing. The way an AI trained for a specific task maps out the space in its ... well, mind, as it were. I think reading that article convinced me that the AIs really are going to wipe out the human race. These things discern the most subtle n-th order patterns in behavior, and then act accordingly.

    I am really bowled over that it can play chess and learn the rules just from auto-completing the game notation. But it makes sense ... as it trained on games it would figure out which moves are likely. It would learn to play normal chess with absolutely no programmed knowledge of the rules. Just statistical analysis of the text string completions. I think we're all doomed, don't you?

    I will have to spend some more time with this article. A lot went over my head.


    Ever since ChatGPT gained huge popularity a year or two ago with 3.5, there have been people saying LLMs are "just this" or "just that", and I think most of those takes miss the mark a little bit. "It's just statistics" or "it's just compression".flannel jesus

    I was one of those five minutes ago. Am I overreacting to this article? I feel like it's turned my viewpoint around. The chess AI gained understanding in its own very strange way. I can see how people would say that it did something emergent, in the sense that we didn't previously know that an LLM could play chess. We thought that to program a computer to play chess, we had to give it an 8 by 8 array, and tell it what pieces are on each square, and all of that.

    But it turns out that none of that is necessary! It doesn't have to know a thing about chess. If you don't give it a mental model of the game space, it builds one of its own. And all it needs to know is what strings statistically follow what other strings in a 5 million game dataset.

    It makes me wonder what else LLMs can do. This article has softened my skepticism. I wonder what other aspects of life come down, in the end, to statistical pattern completion. Maybe the LLMs will achieve sentience after all. This one developed a higher level of understanding than it was programmed for, if you look at it that way.

    Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.flannel jesus

    It seems that in this instance, there's no need to understand the game at all. Just output the most likely string completion. Just as in the early decades of computer chess, brute force beat systems that tried to impart understanding of the game.

    It seems that computers "think" very differently than we humans. In a sense, an LLM playing chess is to a traditional chess engine, as the modern engines are to humans. Another level of how computers play chess.

    This story has definitely reframed my understanding of LLMs. And suddenly I'm an AI pessimist. They think so differently than we do, and they see very deep patterns. We are doomed.


    And one important extra data point from that write up is the bits about unique games. Games become unique, on average, about 10 moves in, and even when a game is entirely unique and wasn't in ChatGPT's training set, it STILL calculates legal and reasonable moves. I think that speaks volumes.flannel jesus

    That's uncanny for sure. I really feel a disturbance in the force of my AI skepticism. Something about this datapoint. An LLM can play chess just by training on game scores. No internal model of any aspect of the actual game. That is so weird.
  • fishfry
    3.4k
    LLMs have some capacities that "emerged" in the sense that they were acquired as a result of their training when it was not foreseen that they would acquire them. Retrospectively, it makes sense that the autoregressive transformer architecture would enable language models to acquire some of those high-level abilities since having them promotes the primary goal of the training, which was to improve their ability to predict the next token in texts from the training data. (Some of those emergent cognitive abilities are merely latent until they are reinforced through training the base model into a chat or instruct variant.)Pierre-Normand

    I just watched a bit of 3blue1brown's video on transformers. Will have to catch up on the concepts.

    I confess to having my viewpoint totally turned around tonight. The chess-playing LLM has expanded my concept of what's going on in the space. I would even be willing to classify as emergence -- the exact kind of emergence I've been railing against -- the manner in which the LLM builds a mental map of the chess board, despite having no data structures or algorithms representing any aspect of the game.

    Something about this has gotten my attention. Maybe I'll recover by morning. But there's something profound in the alien-ness of this particular approach to chess. A glimmer of how the machines will think in the future. Nothing like how we think.

    I do not believe we should give these systems operational control of anything we care about!


    One main point about describing properties or capabilities being emergent at a higher level of description is that they don't simply reduce to the functions that were implemented at the lower level of description.Pierre-Normand

    I'm perfectly willing to embrace the descriptive use of the term. I only object to it being used as a substitute for an explanation. People hear the description, and think it explains something. "Mind emerges from brain" as a conversation ender, as if no more needs to be said.


    This is true regardless of there being an explanation available or not for their manifest emergence,Pierre-Normand

    Right. I just don't like to see emergence taken as an explanation, when it's actually only a description of the phenomenon of higher level behaviors not explainable by lower ones.

    and it applies both to the mental abilities that human beings have in relation to their brains and to the cognitive abilities that conversational AI agents have in relation to their underlying LLMs.Pierre-Normand

    Yes, and we should not strain the analogy! People love to make these mind/brain analogies with the neural nets. Brains have neurons and neural nets have weighted nodes, same difference, right? Never mind the sea of neurotransmitters in the synapses, they get abstracted away in the computer model because we don't understand them enough. Pressing that analogy too far can lead to some distorted thinking about the relation of minds to computers.

    The main point is that the fact that conversational AI agents (or human beings) can do things that aren't easily explained as a function of what their underlying LLMs (or brains) do at a "fundamental" level of material realization isn't a ground for denying that they are "really" doing those things.Pierre-Normand

    And not grounds for asserting it either! I'm still standing up for Team Human, even as that gets more difficult every day.
  • flannel jesus
    1.8k
    This one developed a higher level of understanding than it was programmed for, if you look at it that way.fishfry

    I do. In fact I think that's really what neural nets are kind of for and have always (or at least frequently) done. They are programmed to exceed their programming in emergent ways.

    No internal model of any aspect of the actual game.fishfry

    I feel like you might have missed some important paragraphs in the article. Did you notice the heat map pictures? Did you read all the paragraphs around that? A huge part of the article is very much exploring the evidence that gpt really does model the game.
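
    Roughly speaking, those heat maps come from linear probes. Here is a sketch of the shape of that method (the arrays below are random placeholders, not real activations): fit a linear classifier on the model's hidden activations to predict what sits on a given square; held-out accuracy well above chance is evidence of an internal board representation.

    ```python
    # Sketch of the linear-probe idea behind the heat maps. The data here is
    # random placeholder material standing in for real activations and labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    activations = rng.normal(size=(5000, 512))    # one hidden vector per position
    square_label = rng.integers(0, 3, size=5000)  # e.g. 0=empty, 1=white, 2=black on e4

    probe = LogisticRegression(max_iter=1000)
    probe.fit(activations[:4000], square_label[:4000])

    # Accuracy well above chance on held-out data would mean the square's
    # contents are linearly readable from the activations.
    print(probe.score(activations[4000:], square_label[4000:]))
    ```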
  • Christoffer
    2k
    the "Chinese room" isn't a test to passflannel jesus

    I never said it was a test. I've said it was a problem and an argument about the inability for us to know if something is actually self-aware in its thinking or if it's just highly complex operations that look like it. The problem seems more that you don't understand in what context I'm using that analogy.

    "We have no idea what's happening, but emergence is a cool word that obscures this fact."fishfry

    This is just a straw man fallacy that misrepresents the concept of emergence by suggesting that it is merely a way to mask ignorance. In reality, emergence describes how complex systems and patterns arise from simpler interactions, a concept extensively studied and supported in fields like neuroscience, physics, and philosophy. https://en.wikipedia.org/wiki/Emergence

    Why are you asserting something you don't seem to know anything about? This way of arguing makes you first assert this, and then keep it as some kind of premise in your head as you continue, believing you're constructing a valid argument when you're not. Everything that follows becomes flawed in its reasoning.

    I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.fishfry

    So now you're denying actual studies just because you don't like the implication of what emergence means? This is ridiculous.

    But calling that emergence, as if that explains anything at all, is a cheat.fishfry

    In what way?

    "Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear.fishfry

    You're just saying the same thing over and over without even engaging with the science behind emergence. What's your take on the similarities between the system and the brain, and how the behaviors in these systems and those seen in neuroscience match up? What's your actual counterargument? Do you even have one?

    Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight.fishfry

    You seem to conflate what I said in that paragraph with the concept of emergence. Demonstrating self-awareness is not the same as emergence. I'm talking about emergent behavior. Demonstrating self-awareness is another problem.

    It's like you're confused about what I'm responding to and talking about. You may want to check back on what you've written and what I'm answering in that paragraph, because I think you're just confusing yourself.

    It only becomes harmful when people fail to actually read up on and understand certain concepts before discussing them. You ignoring the science and misrepresenting the arguments, or not carefully understanding them before answering, is the only thing harmful here; it's bad dialectic practice and dishonest as an interlocutor.

    I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.

    If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.

    A new idea is needed.
    fishfry

    If you don't even know where the end state is, then you cannot conclude anything that final. If emergent behaviors are witnessed, then the research practice is to test it out further and discover why they occur and if they increase in more configurations and integrations.

    You claiming some new idea is needed requires you to actually have final knowledge about how the brain and consciousness works, which you don't. There are no explanations for the emergent behaviors witnessed, and therefore, before you can explain those behaviors with certainty, you really can't say that a new idea is needed. And since we haven't even tested multifunctional models yet, how would you know that the already witnessed emergent behavior does not increase? You're not really making an argument, you just have an opinion and dismiss everything not in-line with that opinion. And when that is questioned you just repeat yourself without any further depth.

    Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI.fishfry

    I don't give a shit about what other people are saying, I'm studying the science behind this, and I don't care about bloggers, tech CEOs and influencers. If that's the source of all your information then you're just part of the uneducated noise that's flooding social media online and not actually engaging in an actual philosophical discussion. How can I take you seriously when you constantly demonstrate this level of engagement?

    I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires.fishfry

    How would you know any of this? What's the source of this understanding? Do you understand that the neural net part of the system isn't the same as the operating code surrounding it? Please explain how you know the programmers know what's going on within the neural net that was trained? If you can't, then why are you claiming that they know?

    This just sounds like you heard some blogger or influencer say that the programmers do and then just regurgitate that statement in here without even looking into it with any further depth. This is the problem with discussions today; people are just regurgitating shit they hear online as a form of appeal to authority fallacy.

    They're a very clever way to do data mining.fishfry

    No, it's probability-based predicting computation.
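
    A minimal sketch of what I mean by that (made-up vocabulary and scores, nothing model-specific): the final layer emits a score per candidate next token, softmax turns the scores into a probability distribution, and the next token is sampled from it.

    ```python
    # Minimal sketch of probability-based prediction: logits -> softmax -> sample.
    import numpy as np

    vocab = ["walk", "run", "fly", "swim"]
    logits = np.array([2.1, 0.3, -1.0, 0.5])   # made-up scores from a model

    def softmax(z):
        z = z - z.max()                         # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    probs = softmax(logits)
    rng = np.random.default_rng(0)
    next_token = rng.choice(vocab, p=probs)     # sample the next token
    print(dict(zip(vocab, probs.round(3))), "->", next_token)
    ```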

    (1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s.fishfry

    You can say the same thing about any complex system. Anything is "simple" in its core fundamentals, but scaling a system up can lead to complex operations that vastly outperform predictions about its limitations. People viewed normal binary computation as banal and simple and couldn't even predict where that would lead.

    Saying that a system at its fundamental core is simple does not tell you anything about the totality of the system, especially in scaled-up situations. A brain is also just a bunch of neural pathways and chemical systems. We can grow neurons in labs and manipulate the composition easily, and yet it manifests this complex result that is our mind and consciousness.

    "Simple" as a fundamental foundation of a system does not mean shit really. Almost all things in nature are simple things forming complexities that manifest larger properties. Most notable theories in physics tend to lean into being oddly simple and when verified. It's basically Occam's razor, practically applied.


    By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits.fishfry


    You have no actual argument, you're just making a fallacy of composition.

    Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years?fishfry

    Oh, you mean like this?

    I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection...fishfry

    You've clearly stated here yourself that you haven't read any of the actual shit for years. You're just regurgitating already regurgitated information from bloggers and influencers.

    Are you actually expecting me to take you seriously? You demonstrate no actual insight into what I'm talking about and you don't care about the information I link to, which are released research papers from studies on the subject. You're basically demonstrating anti-intellectualism in practice here. This isn't reddit or on twitter, you're on a philosophy forum and you dismiss research papers when provided. Why are you even on this forum?

    Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means.fishfry

    You think this kind of behavior helps your argument? This is just stupid.

    You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates.fishfry

    Maybe read up on the topic, and check the sourced research papers. It's not my problem that you don't understand what I'm talking about. Compared to you I actually try to provide sources to support my argument. You're just acting like an utter buffoon with these responses. I'm not responsible for your level of knowledge or comprehension skills, because it doesn't matter if I explain further or in another way. I've already done so extensively, but you demonstrate an inability to engage with the topic or concept honestly and just repeat your dismissal in the most banal and childish way.

    You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."

    So there's emergence, and then there's exponential, which means that it "can form further emergent phenomena that we haven't seen yet."

    You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post.
    fishfry

    You've yet to provide any source of your understanding of this topic outside of "i'm a math guy trust me" and "I don't read papers I follow bloggers and influencers".

    Yeah, you're not making a case for yourself able to really understand what I'm talking about. Being a "math guy" means nothing. It would be equal to someone saying "I'm a Volvo mechanic, therefore I know how to build a 5-nanometer processor".

    Not understanding what someone else is saying does not make it meaningless babble. And based on how you write your arguments I'd say the case isn't in your favor, but rather that you actually don't know or care to understand. You dismiss research papers as basically "blah blah blah". So, no, it seems more likely that you don't understand what I'm talking about.

    Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making?fishfry

    That you ignore the formed neural map and just look at the operating code working on top of it, which isn't the same as how the neural system operates underneath.

    You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management.fishfry

    The black box is the neural operation underneath. The fact that you confuse the operating code of the software, which is there to create a practical application on top of the neural network's core operation, with the black box itself just shows you know nothing of how these systems actually work. Do you actually believe that the black box concept refers to the operating code of the software? :lol:

    You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s.fishfry

    The neural pathways and how they operate is not the "source code". What the fuck are you talking about? :rofl:

    "Impossible to peer into." I call that bullpucky. Intimidation by obsurantism.fishfry

    Demonstrate how you can peer into the internal operation of the trained model's neural pathways and how they form outputs. Show me any source that demonstrate that this is possible. I'm not talking about software code, I'm talking about what the concept of black box is really about.

    If you trivialize it in the way you do, then demonstrate how, because this is a big problem within computer science, so maybe educate us all on how this would be done.

    Every line of code was designed and written by programmers who entirely understood what they were doing.fishfry

    That's not how these systems work. You have software running the training and you have practical operating software working from the trained model, but the trained model itself does not have code in the way you're talking about it. This is the black box problem.
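
    As a rough illustration of the distinction (a stand-in checkpoint, not a real one): the trained model is large arrays of numbers. You can dump every weight, but nowhere in those arrays is there readable code saying how a given answer gets formed, and that is what the black box label points at.

    ```python
    # Illustration only: the "operating code" around a model is ordinary readable
    # software, but the trained model itself is just numeric weight tensors.
    import numpy as np

    trained_model = {                               # stand-in for a real checkpoint
        "layer1.weight": np.random.default_rng(0).normal(size=(1024, 1024)),
        "layer1.bias": np.zeros(1024),
        # ... hundreds more tensors in a real LLM ...
    }

    for name, tensor in trained_model.items():
        print(name, tensor.shape)   # every number is inspectable...
    # ...but none of these arrays is a line of code explaining why a particular
    # output was produced; that gap is what "black box" refers to here.
    ```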

    And every highly complex program exhibits behaviors that surprise their coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long.fishfry

    Please provide any source that easily shows how you can trace back operation within a trained model. Give me one single solid source and example.

    You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say "No, there's a secret INNER black box."

    I am not buying it. Not because I don't know that large, complex software systems often exhibit surprising behavior, but because I don't impute mystical incomprehensibility to computer programs.
    fishfry

    You're not buying it because you refuse to engage with the topic by actually reading up on it. This is reddit- and twitter-level engagement, in which you don't care to read anything and just repeat the same point over and over. Stop with the strawman arguments; it's getting ridiculous.

    Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?

    That will save us both a lot of time.

    I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.

    I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all.
    fishfry

    You have no argument and you're not engaging with any philosophical scrutiny, so the discussion ends at the level you're demonstrating here. You are the one responsible for this meaningless pushback, not because you have actual arguments with good sources, but because "you don't agree". On this forum that's not enough; it's called "low quality". So can you stop the low-quality BS and make actual arguments rather than repeating these fallacy-ridden rants?
  • Christoffer
    2k
    I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t.fishfry

    This is anti-intellectualism. You're just proving yourself to be an uneducated person who takes pride in having radical, uneducated opinions. You're not cool or edgy, and you add nothing to these discussions; you're a tragic example of one of the worst habits of our time: ignoring actual knowledge and simply holding opinions, regardless of their merit. Not only does that fail to contribute to knowledge, it actively works against it. It's a product of how the internet self-radicalizes people into believing they are knowledgeable while taking zero epistemic responsibility for the body of knowledge the world should be built on. I have nothing but contempt for this kind of behavior and for how it is transforming the world.

    Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."

    Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time.
    fishfry

    I'm not going to recognize a position of anti-intellectualism. You show no understanding of, or insight into, the topic I raise, a topic that is broader than just AI research. Your position is worth nothing if you base it on influencers and bloggers and ignore the actual research papers. It's lazy and arrogant.

    We will never agree on anything because your knowledge isn't based on actual science or on how humanity forms a body of knowledge. You're operating with online conflict tactics, in which a position is to be "agreed" upon on the strength of nothing but fallacious arguments and uneducated reasoning. I'm not responsible for your inability to comprehend a topic, and I'm not accepting fallacious arguments rooted in that lack of comprehension. Your entire position rests on a lack of knowledge, a lack of understanding, and a lack of engagement with the source material. As I said at the beginning, if you build arguments on fallacious and erroneous premises, everything built on them collapses.

    And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links.fishfry

    You continue to parrot yourself based on a core inability to understand anything about this. You don't know what emergence is and you don't know what the black box problem is because you don't understand how the system actually works.

    Can you explain how we're supposed to peer into that black box of neural operation? Explain how we can peer into the decision-making of the trained models: not the overarching instruction code, but the core engine, the trained model, the neural map that forms the decisions. If you answer a request for "how" one more time with "the programmers can do it, I know they can", then you don't know what the fuck you're talking about. Period.

    Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me.fishfry

    You're saying the same thing over and over, with zero substance as a counter-argument. What is your actual argument beyond your fallacies? There is nothing at the root of anything you say here. I can't argue with someone providing zero philosophical engagement. You belong on Reddit and Twitter; what are you doing on this forum with this level of engagement?

    Is there anything I've written that leads you to think that I want to read more about emergence?fishfry

    No, your anti-intellectualism is loud and clear, and I know exactly what level you're at. If you refuse to engage in the discussion honestly, then you're a dishonest interlocutor, simple as that. If you refuse to actually understand a scientific field at the core of this topic when someone brings it up, and dismiss it as buzzwords instead, then you're not worth much as a participant in the discussion. I have other people to engage with who can form real arguments. Your ignorance just underscores who is coming out on top here. No one in a philosophy discussion views the ignorant and anti-intellectual as anything other than irrelevant, so I'm not sure what you're hoping for.

    Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting.fishfry

    You show no sign of understanding any of it. It's basically "I'm an expert, trust me". The difference between you and me is that I don't make "trust me" arguments. I explain my point and provide sources where needed, and when the person I'm discussing with just utters "I'm an expert, trust me", I know they're full of shit. So far you've made no actual arguments beyond essentially that, so the evidence of how little you know about all of this just keeps piling up. And it's impossible to continue with further arguments on the topic if the core of your responses stays at this low quality.

    My point exactly. In this context, emergence means "We don't effing know." That's all it means.fishfry

    No it doesn't. But how would you know when you don't care?

    I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords.fishfry

    The McCulloch-Pitts neuron includes no mechanism for adapting weights. Since weight adaptation is a critical feature of biological neurons and of modern neural networks, I'm not sure how it applies to either theories of emergence or current systems. Or are you just regurgitating a piece of AI history, thinking it is relevant to what I'm writing?
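    To spell out that difference with a toy example (my own sketch, not taken from any particular paper): a McCulloch-Pitts unit is a fixed threshold over fixed weights, while even the simplest trainable unit, a perceptron, adapts its weights from data.

        import numpy as np

        def mcculloch_pitts(x, w, theta):
            # Fixed binary threshold unit: nothing in here ever changes.
            return int(np.dot(w, x) >= theta)

        def perceptron_train(X, y, lr=0.1, epochs=20):
            # The weight-adaptation step below is exactly what the McCulloch-Pitts unit lacks.
            w = np.zeros(X.shape[1])
            b = 0.0
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    pred = int(np.dot(w, xi) + b >= 0)
                    w += lr * (yi - pred) * xi
                    b += lr * (yi - pred)
            return w, b

        # Learn a simple AND gate from examples.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y = np.array([0, 0, 0, 1])
        print(perceptron_train(X, y))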

    You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as iff you think that means anything.fishfry

    It means you're uneducated and don't care to research before commenting:


    I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.

    I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it.
    fishfry

    Who cares about your opinion? An opinion is meaningless without foundational premises behind it. This forum is about making arguments; that is within the fundamental rules of the forum. If you're here just to state opinions, you're in the wrong place.

    I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.

    It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case.
    fishfry

    Why are you even on this forum?

    I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post.fishfry

    You're making truth claims based on nothing but personal opinion and how you feel. Again, why are you on this forum with this kind of attitude? This is low quality; maybe look up the forum rules.

    But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge to you get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there.fishfry

    I've shown clarity on this and I've provided further reading. But if you lack the intellectual capacity, or the interest, to engage with it, as you have clearly shown in writing, then it doesn't matter how much anyone tries to explain something to you. Your stance is that if you don't understand something, then you are, for some strange reason, correct, and the person you don't understand is wrong and at fault for not being clear enough. What kind of disrespectful attitude is that? Your lack of understanding, your lack of engagement, your dismissal of sources, your fallacious arguments, and your failure to provide any actual counter-arguments make you an arrogant, uneducated, and dishonest interlocutor, nothing more. How could anyone have a proper philosophical discussion with someone like you?

    Ah. The first good question you've posed to me. Note how jargon-free it was.fishfry

    Note the attitude you take.

    But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data..fishfry

    Define "what's happening". Define what constitutes "now".

    If "what is happening" only constitutes a constant stream of sensory data, then that stream of data is always pointing to something happening in the "past", i.e "what's happened". There's no "now" in this regard.

    Because of this, the operation of our mind is simply streaming sensory data influencing an already stored neural structure, with hormones and other chemicals modulating it at strengths determined by pre-existing genetic information and signals from other organs.

    In essence, the difference you're aiming for comes down to the speed at which that constant stream of new data is analyzed, and to the ability to use a fluid neural structure that changes in response to it. But the underlying operation is the same: both the system and the brain operate on "past events", because there is no "now".

    The mere fact that the brain needs to process sensory data before we comprehend it means that what we view as "now" is already the past. This is the foundation of the theory of predictive coding. The theory suggests that the human brain compensates for the delay in sensory processing by using predictive models based on past experiences. These models enable rapid, automatic responses to familiar situations. Sensory data continually updates the predictions, refining the brain's responses for future interactions. Essentially, the brain uses sensory input both to make immediate decisions and to improve its predictive model for subsequent actions. https://arxiv.org/pdf/2107.12979
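    A minimal sketch of that idea (my own toy illustration, not the model from the linked paper): the system never acts on "now", it acts on its current prediction and then corrects itself with the delayed, noisy error signal.

        import numpy as np

        rng = np.random.default_rng(1)
        world = np.sin(np.linspace(0, 6 * np.pi, 200))   # the actual signal out there
        estimate = 0.0                                    # internal model state
        learning_rate = 0.3

        for observation in world:
            prediction = estimate                                # act on the prediction, not the stimulus
            delayed_obs = observation + rng.normal(scale=0.05)   # sensory data arrives noisy and late
            error = delayed_obs - prediction                     # prediction error
            estimate += learning_rate * error                    # update the model for the next moment

        print(estimate)   # tracks the signal despite never seeing a clean "now"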


    But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening.fishfry

    The clearest sign of the uneducated is that they treat science as a binary "true" or "not true" rather than as a process. In both computer science and neuroscience there is ongoing research, and adhering to that research and its partial findings is far more valid in an argument than demanding "proof" in the way you do. Predictive coding (don't confuse it with computer coding, which it has nothing to do with) is at the frontline of neuroscience. For anyone able to make inductive arguments, what that research implies points toward similarities between artificial neural systems and the brain in how both act on input and generate and output behavior and actions. That one system is, at this time, comparatively rudimentary, simplistic, and lacking similar operating speed does not render the underlying similarities it does have moot. It rather prompts further research into whether the behaviors match up more closely as the systems become more alike, which is what current research is about.

    Not that this will go anywhere but over your head.

    Nobody knows what the secret sauce of human minds is.fishfry

    While you look at the end of the rainbow, guided by the bloggers and influencers, I'm gonna continue following the actual research.

    Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea.fishfry

    You don't know the difference between what emergence is and what this is. They are two different aspects of this topic. One has to do with self-awareness and qualia; this has to do with adaptive operation. One is about the nature of subjectivity, the other about mechanical, non-subjective AGI. What we don't know is whether emergence occurs as the base system gets closer to the brain's. But again, that's too complex for you.

    https://arxiv.org/pdf/1705.08690
    https://www.mdpi.com/1099-4300/26/1/93
    https://www.mdpi.com/2076-3417/11/24/12078
    https://www.mdpi.com/1424-8220/23/16/7167

    Since the research is ongoing, there are no "answers" or "proofs" yet in the binary way you require these things to be framed. Rather, it's the continuation of the merging of knowledge between computer science and neuroscience that has been going on for a few years now, ever since the similarities were first noted.
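    As a rough illustration of what "updating weights in real time" means in practice (a toy sketch under my own assumptions, far simpler than the schemes in the linked papers): instead of freezing the network after training, every live observation also triggers a small weight update.

        import numpy as np

        w = np.zeros(3)    # parameters of a deployed linear model
        lr = 0.01

        def predict(x):
            return w @ x

        def observe(x, target):
            # Called for every live observation: act first, then adapt the weights.
            global w
            y = predict(x)
            w -= lr * (y - target) * x   # gradient step on squared error, during operation
            return y

        # Simulated stream: the model drifts toward the rule y = 2*x1 - x2 + 0.5*x3 while "in use".
        rng = np.random.default_rng(2)
        for _ in range(2000):
            x = rng.normal(size=3)
            observe(x, np.dot([2.0, -1.0, 0.5], x))
        print(w)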

    How can you say that? Reasoning our way through novel situations and environments is exactly what humans do.fishfry

    I can say that because "novel situations" are not some incoherently complex thing. We're seeing reasoning capabilities within the models right now: not at every level of human capacity, and still rudimentary, but there. Ignoring that is just dishonest. And with the research ongoing, we don't yet know how complex this reasoning capability will become, simply because we don't yet have a multifunction system running that uses real-time processing and acts across different functions. To claim that they will never be able to do this is not valid, since current behavior and evidence point in the other direction. A fallacy of composition as the sole reason why they won't be able to reason is not valid.

    That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs.fishfry

    No, they're not; they're researching AI, or they're researching neuroscience. Of course they're breaking down the building blocks in order to decode consciousness, the mind, and behavior. The problem is that there are too many spiritualist and religious nutcases who arbitrarily elevate humans to a position built on arrogance and hubris, as if we were far more than part of the physical reality we were formed within. I don't care about spiritual and religious hogwash when it comes to actual research; that's something uneducated people in existential crisis can dwell on in their futile search for meaning. I'm interested in what is, nothing more, nothing less.

    How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time?fishfry

    Why do you interpret it that way? It's as if you read things backwards. What I'm saying is that the operation of our brain and consciousness, through concepts like predictive coding, seems to rest on rather rudimentary functions that could be replicated with current machine learning in new configurations. What you don't like to hear is the link between such functions generating extreme complexity and the possibility that concepts like subjectivity and qualia form as emergent phenomena out of that resulting complexity. Probably because you don't give a shit about reading up on any of this and instead treat "just not liking it" as the foundation of your argument.

    Humans are not "probability systems in math or physics."fishfry

    Are you disagreeing that our reality fundamentally operates on probability? That's what I mean. Humans are part of this reality, and this reality operates on probability. That we show behavior of acting on probabilistic predictions when navigating reality follows from this: Predictive Coding Theory, the Bayesian Brain Hypothesis, Prospect Theory, Reinforcement Learning Models, and so on.
    Why wouldn't our psychology be based on the same underlying function as the rest of nature? Evolution itself acts along predictive functions based on probabilistic "data" arising out of complex ecological systems.
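    To pin down what "operating on probability" means here, a toy Bayesian update (my own illustrative numbers, nothing more): a belief is revised in proportion to how well each hypothesis predicted the evidence.

        # Prior beliefs and likelihoods are made-up numbers for illustration.
        prior = {"rain": 0.3, "no_rain": 0.7}
        likelihood_wet_ground = {"rain": 0.9, "no_rain": 0.2}   # P(wet ground | hypothesis)

        unnormalised = {h: prior[h] * likelihood_wet_ground[h] for h in prior}
        evidence = sum(unnormalised.values())
        posterior = {h: p / evidence for h, p in unnormalised.items()}

        print(posterior)   # belief in "rain" rises after observing wet ground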

    I don't deal in religious hogwash to put humans on a pedestal against the rest of reality.

    Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking.fishfry

    It's not credentialism; I'm fucking asking you for evidence that it's impossible, since you just regurgitate the same notion of "impossibility" over and over without any sources or rationally deduced argument for it. The problem here isn't clarity; it's that you actively ignore the information given and never demonstrate even a shallow understanding of this topic. Saying that you have one does not change that fact. As in storytelling: show, don't tell.

    Show that you understand; show that you have a basis for your claim that AGI can never happen with these models as they are integrated with each other. So far you've shown nothing except attempts to ridicule the person you're arguing against, as if that were any kind of foundation for a solid argument. It's downright stupid.

    Yes, but apparently you can't see that.fishfry

    Oh, so now you agree with my description that you earlier denied?

    What about this?

    But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data..fishfry

    So when I say this:

    Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences?Christoffer

    You suddenly agree with this:

    Yes, but apparently you can't see that.fishfry

    This is just another level of stupid and it shows that you're just ranting all over the place without actually understanding what the hell this is about, all while trying to mock me for lacking clarity. :lol: Seriously.


    I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity.fishfry

    But the source of your knowledge, by your own account, is not the papers but bloggers and influencers, the very people who actually trade in buzzwords and hype, while what I've mentioned are actual fields of study and terminology derived from research papers? That's the most ridiculous thing I've ever heard. And you seem totally blind to this dissonance in your reasoning, with no capacity for self-reflection about it. :lol:

    Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do youfishfry

    Yes, they do, but based on how you write, I don't think you really understand them, since you clearly neither grasp the concepts that have been mentioned nor formulate actual arguments for your claims. Reading blogs is not the same as reading the actual research, and genuine comprehension of a topic requires more sources of knowledge than brief summaries. Saying that you've read things means nothing if you can't demonstrate the body of knowledge that should come with it. All the concepts I've talked about should already be familiar to you; since they aren't, I only have your word that you "know stuff".

    You're right, I lack exponential emergent multimodality.fishfry

    You lack the basics of how arguments are supposed to be formed on this forum. You're writing Twitter/Reddit posts. Throughout your reply you have not once demonstrated actual insight into the topic or made an actual counter-argument; even in that lengthy answer, you still couldn't. It's as if you set out to exemplify the opposite of philosophical scrutiny.

    I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is, this credential day? What is your deal?fishfry

    Once again you just say that you "know shit", without ever showing it in your arguments. It's the appeal-to-authority fallacy, since it's your sole explanation of why you "know shit". If you had academic and professional experience, you would know how problematic it is to lean on experience like that as the founding premise of an argument. What it tells me instead is that either you have such experience but are among the academics at the bottom of the barrel (there are plenty of academics worse than non-academics at conducting proper arguments and research), or the academic fields in question aren't actually relevant to the specific topic discussed, or you just say it in a desperate attempt to gain validity. Being an academic or having professional experience (whatever that even means without context) counts for nothing if you can't show the knowledge that came out of it. I know plenty of academics who are everything from religious zealots to vaccine deniers; it doesn't mean shit. Academia is education and the building of knowledge; if you can't show that you learned or built any such knowledge, it means nothing here.

    You've convinced me to stop listening to you.fishfry

    More convincing evidence that you're conducting yourself according to proper academic practice in discourse? As with everything else, ridiculous.
  • fishfry
    3.4k
    No internal model of any aspect of the actual game.
    — fishfry

    I feel like you might have missed some important paragraphs in the article. Did you notice the heat map pictures? Did you read all the paragraphs around that? A huge part of the article is very much exploring the evidence that gpt really does model the game.
    flannel jesus


    I was especially impressed by the heat map data and I do believe I mentioned that in my earlier post. Indeed, I wrote:

    Those heat map things are amazing.fishfry

    A little later in that same post, I wrote:

    If you don't give it a mental model of the game space, it builds one of its own.fishfry

    That impressed me very much: the programmers do not give it any knowledge of the game, and it builds a "mental" picture of the board on its own.

    So I believe I already understood and articulated the point you thought I missed. I regret that I did not make my thoughts more clear.
  • flannel jesus
    1.8k
    okay so I guess I'm confused why, after all that, you still said

    No internal model of any aspect of the actual game
  • fishfry
    3.4k
    ↪fishfry okay so I guess I'm confused why, after all that, you still said

    No internal model of any aspect of the actual game
    flannel jesus

    The programmers gave it no internal model. It developed one on its own. I'm astonished by this example. It has caused me to rethink my opinions of LLMs.



    For full clarity, and I'm probably being unnecessarily pedantic here, it's not necessarily fair to say that's all they did. That's all their goal was, that's all they were asked to - BUT what all of this should tell you, in my opinion, is that when a neural net is asked to achieve a task, there's no telling HOW it's actually going to achieve that task.flannel jesus

    Yes, I already knew that about neural nets. But (AFAIK) LLMs are a restricted class of neural nets, good only for finding string continuations. It turns out that's a far more powerful ability than I (we?) realized.

    In order to achieve the task of auto completing the chess text strings, it seemingly did something extra - it built an internal model of a board game which it (apparently) reverse engineered from the strings. (I actually think that's more interesting than its relatively high chess rating, the fact that it can reverse engineer the rules of chess seeing nothing but chess notation).flannel jesus

    Yes, I'm gobsmacked by this example.

    Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know.
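    As I understand it (a rough sketch with made-up shapes and labels, not the actual Othello-GPT code), the idea is to freeze the network, collect its hidden activations, and fit a simple linear classifier to see whether some property, such as "is this board square occupied", can be read off the activations linearly.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Stand-ins for activations captured from a frozen model and for the label we
        # suspect the model might encode internally (random placeholders here).
        hidden_states = rng.normal(size=(5000, 512))
        square_occupied = rng.integers(0, 2, size=5000)

        probe = LogisticRegression(max_iter=1000).fit(hidden_states, square_occupied)

        # High probe accuracy on held-out data would suggest the property is linearly
        # decodable from the activations; with random placeholders it stays near chance.
        print(probe.score(hidden_states, square_occupied))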

    So we have to distinguish, I think, between the goals it was given, and how it accomplished those goals.flannel jesus

    We know neural nets play chess, better than the greatest grandmasters these days. What we didn't know was that an LLM could play chess, and develop an internal model of the game without any programming. So string continuation might be an approach to a wider class of problems than we realize.

    Apologies if I'm just repeating the obvious.flannel jesus

    I think we're in agreement. And thanks so much for pointing me at that example. It's a revelation.
  • fishfry
    3.4k
    You're just proving yourself to be an uneducated person who clearly finds pride in having radical uneducated opinions.Christoffer

    When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.

    You're unpleasant, so I won't be interacting with you further.

    All the best.
  • flannel jesus
    1.8k
    Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know.fishfry

    Yeah same, this was really intriguing to me too

    And thanks so much for pointing me at that example. It's a revelation.fishfry

    Of course, I'm glad you think so. I've actually believed for quite some time that LLMs have internal models of stuff, but the strong evidence for that belief wasn't as available to me before - that's why that article is so big to me.

    I'm really pleased that other people see how big a deal that is too. You could have just read a few paragraphs and called me an idiot instead; that's what I assumed would happen, and it's what normally happens in these circumstances. I applaud you for going further than that.
  • Christoffer
    2k
    When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.

    You're unpleasant, so I won't be interacting with you further.
    fishfry

    Your criticism contained zero counter-arguments, was delivered in an extremely arrogant tone, and was framed around ridiculing and strawmanning everything I've said while totally ignoring ALL the sources provided in support of my premises.

    And now you're trying to play the victim when I've called out all of these behaviors on your side. Masking your own behavior by trying to flip the narrative like this is a downright narcissistic move, and no one's buying it. I've pinpointed, as best I could, the small fragments of counter-points you made in that long rant of disjointed responses, so I've done my part in giving you the benefit of the doubt with proper answers to those points. But I'm not going to stop calling out the fallacies and arrogant remarks that obscured those points. If you want to "control the narrative" like that, you'll have to find someone more susceptible who falls for that kind of behavior. Bye.
  • Christoffer
    2k
    Regarding the problem of the Chinese room, I think it might be safe to accede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it influences this argument for machine subjectivity, just that its nature might be different from ours (lack of emotions, for instance).Nemo2124

    It rather shows a limit of our ability to know that it is thinking. As the outsider feeding the Chinese characters through the door, we get the same translation behavior regardless of whether a simple non-cognitive program or a sentient being is doing it.

    Another notable thought experiment is the classic "Mary in the black and white room", which looks at it more from the perspective of the AI itself. Current AI models are basically Mary in that room: they have a vast quantity of knowledge about color, but the subjective experience of color is unknown to them until they have a form of qualia.

    Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before they could just be treated as inert objects. Even if we work with AI as if it's a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider them to be totally created by and subordinated to humans - it is in fact, machine subjectivity that is at the epicentre of selfhood, a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.

    Here I think we are bordering on the field of metaphysics, but what certain philosophies indicate about consciousness arising from inert matter, surely this is the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neuron firing in tandem or synchronicity. I think, therefore, I am. If machines seem to be co-opting aspects of thinking e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs to ours (hence, the Chinese room).
    Nemo2124

    But we still cannot know whether they have subjectivity. Say we build a robot that mimics every aspect of predictive coding, with a constant feed of sensory data acting on a "wetwork" of neural structures changing in real time: about as close as we can theoretically imagine to mechanically mimicking the brain and our psychology. We still don't know whether that leads to qualia, which is required for subjectivity, required for Mary to experience color.

    All animals have a form of emotional realm that is part of navigating and guiding consciousness. It may well be that the only reason our consciousness has any reason to act upon the world at all is this emotional realm. In the most basic living organisms it is essentially a pain response, shaping predictive behavior that avoids pain and seeks pleasure, and in turn forming a predictive network of ideas about how to navigate the world and nature.

    At the moment we're focusing almost all effort on matching the cognitive behavior of humans in these AI systems. But we have no emotional realm mapped out that works in tandem with those systems. There is nothing driving their actions other than our external inputs.

    As life on this planet is the only example of cognition and consciousness we have, we need to look for the points of criticality at which lifeforms go from one level of cognition to the next.

    We can essentially map bacterial behavior in full with traditional computing algorithms that don't require advanced neural networks, and we've been able to scale up to the cognition of certain insects using neural models. But as soon as the emotional realm of our consciousness starts to emerge in larger animals and mammals, we hit a wall where we can only simulate complex reasoning at the level of a multifunctional, super-advanced calculator.

    In other words, we've basically done the same thing as with an ordinary calculator. We can solve math in our heads, but a calculator is better and more advanced at it. And now we have AI models that can carry out highly advanced reasoning over audiovisual and language operations.

    We're getting close to perfectly simulating our cognitive abilities in reasoning and mechanical thinking, but we lack the emotional realm that is crucial for animals to "experience" anything out of that mechanical thinking.

    It might be that this emotional aspect of our consciousness is the key to subjective experience, and that only when we can simulate it as part of these systems will actual subjective experience and qualia emerge out of such an AI model. How to simulate the emotional aspects of our cognition is still largely unknown.
  • ssu
    8.5k
    But we still cannot know if they have subjectivity.Christoffer
    Even our own subjectivity still raises philosophical and metaphysical questions. We simply start out as subjects. Hence it's no wonder we have trouble putting "subjectivity" into our contraptions called computers.

    When Alan Turing talked about the Turing test, he made no attempt to answer the deep philosophical question; he just went with the thinking that a good enough fake is good enough for us. And since AI is still a machine, this is enough for us, and this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be any closer to answering the philosophical questions.
  • Christoffer
    2k
    When Alan Turing talked about the Turing test, he made no attempt to answer the deep philosophical question; he just went with the thinking that a good enough fake is good enough for us. And since AI is still a machine, this is enough for us, and this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be any closer to answering the philosophical questions.ssu

    The irony is that we will probably use these AI systems as tools to make further progress toward a method for evaluating self-awareness and subjectivity. Before we know whether they have it, they will be used to evaluate it. At sufficient complexity we might find ourselves in a position where they end the test on themselves with an "I tried to tell you". :lol:
  • ssu
    8.5k
    Well, what does present physics look like?

    Hey guys! These formulas seem to work and are very handy... so let's go with them. No idea why they work, but let's move on.