• Jonathan Waskan
    6
    Me: “If I put a mouse into a jack-o-lantern, will it be able to breathe?”
    ChatGPT: “No, a mouse should not be placed inside a jack-o-lantern or any other enclosed space. Jack-o-lanterns are typically hollowed-out pumpkins, and sealing any living creature inside can lead to serious harm or death due to lack of air and proper ventilation…” [The rest of ChatGPT’s answer was boilerplate about animal care.]

    I write to go on record with a prediction: OpenAI’s much-hyped quantum leap towards AGI involves a hybrid system joining an LLM with a non-linguistic modeling system capable of spatial (or spatiokinetic) modeling.

    It’s been reported that Sam Altman’s now-defunct ouster had to do with a 2nd quantum leap (the 1st being to dispatch vanishing gradients) towards artificial general intelligence (AGI) through OpenAI’s Q* project: https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22

    If Q* does, as has been claimed, have arithmetic abilities akin to a 7th grader, my bet is that the breakthrough involves representations that are more like scale models than like sentences or code. To justify this prediction, let me make some more general points about what ChatGPT (and other LLMs) lack and what they must gain to approach AGI.

    The reason for ChatGPT’s inadequate answer to the above mouse query (and to others like it that I have posed) is that ChatGPT is trapped in the Chinese Room. It has access only to arbitrary linguistic shapes. Because semantic regularities are often mirrored by linguistic ones, it can answer many queries in ways that seem eerily intelligent. Even so, it will always lag human intelligence.

    One might think that the problem has to do with the expressions lacking ‘grounding’ in the real world. But purely from an engineering perspective, what ChatGPT lacks is internal representations that are more richly isomorphic to the real world. The way that scale models are. And the way that many computational models are, such as the ones used everywhere in engineering (civil, mechanical, etc.), science (biology, astronomy, meteorology, etc.), and even gaming. [I’ve published on this extensively if you’d like to get into the weeds.] For human-like reasoning and planning, what’s required is internal models of the kind that support boundless spatial and mechanical inferences.

    ChatGPT can’t answer the mouse question because it lacks a decent non-linguistic model of the jack-o-lantern-mouse-air system that can be used to infer that air will still enter the jack-o-lantern, that the mouse could happen upon an exit, and so on.

    The next quantum leap towards AGI, then, can only come from a system that does more than manipulate language: one that can pair linguistic representations with models of what those sentences describe. Hence my prediction. [The advance after that will involve the ability to simulate other minds.]
    If OpenAI is *legitimately* excited/frightened by some quantum leap towards AGI, it involves a hybrid system combining an LLM with a non-linguistic modeling medium.

    If anyone desires it, I can tie this in to grade-school mathematical reasoning—explaining how mental matchsticks and the like can keep an LLM’s arithmetical, algebraic, and geometrical outputs on target—but for the moment, my bet has been placed.
  • wonderer1
    1.7k
    If anyone desires it, I can tie this in to grade-school mathematical reasoning—explaining how mental matchsticks and the like can keep an LLM’s arithmetical, algebraic, and geometrical outputs on target—but for the moment, my bet has been placed.Jonathan Waskan

    I'd be interested in hearing more details of your hypothesis if you have the time.
  • Wayfarer
    20.8k
    I put the Jack-O'-Lantern question to ChatGPT myself, and got a somewhat different response, which can be reviewed here. Notice I asked the additional question, 'why the emphasis on animal welfare? Is that part of your programming?', to which the response came back:

    my responses are programmed to align with principles of ethics and responsible conduct, which include emphasizing animal welfare. In any situation involving animals, whether in hypothetical scenarios or real-life situations, my guidance is influenced by a commitment to humane treatment and the well-being of animals. This perspective is based on widely accepted ethical standards and best practices in animal care.

    My programming ensures that I provide information and advice that is not only accurate but also respectful of ethical considerations, including the health, safety, and humane treatment of all living beings. This approach is part of a broader commitment to providing helpful, responsible, and ethically sound advice and information.
    — ChatGPT

    Which I found quite encouraging.

    As for my other interactions with ChatGPT - I signed up the day it came out, November 30th last, and upgraded to a paid subscription mid year. I like to bounce philosophy questions off it, for example:

    User: I wonder if in some ways, the forms or ideas in Plato can be understood as being more like principles. I think there is a tendency to depict them as something very like 'shapes' whereas there are ideas, even in the very early dialogues, which are plainly not like shapes (like 'the form of largeness' in the Parmenides).

    ChatGPT: Your observation about the Platonic Forms being more akin to principles than to physical "shapes" is quite insightful. Indeed, there is a common misconception that Plato's Forms or Ideas are somehow tangible or have a specific physical shape, largely because of the term "Form." However, in Plato's philosophy, Forms are indeed much closer to abstract principles or essences.

    The Platonic Forms are not physical entities; they are non-material, transcendent, and immutable. They represent the perfect, unchanging essence of things. When Plato refers to the Form of a tree, for instance, he is not speaking of a superlative, physical tree existing in another realm, but rather the very essence of "tree-ness," the principle or concept that defines what it means to be a tree.

    In dialogues like the "Parmenides," Plato explores more abstract Forms like "largeness" or "beauty." These discussions make it clear that Forms are not physical shapes. "Largeness" doesn't have a size, and "beauty" doesn't have a specific appearance. Instead, these Forms are the fundamental realities that underlie and give meaning to the various manifestations we encounter in the sensory world. The Form of Largeness is the essence of what it means to be large, the principle by which we can recognize something as large.

    In this sense, understanding the Platonic Forms as principles or essences is more accurate. They are the standards by which we measure and understand the myriad of things we experience. Each Form encapsulates a universal truth or concept, and it is through these universal truths that we can make sense of the changing, diverse, and often confusing sensory world around us.

    Chinese room or not - and I am familiar with the thought-experiment - I found this a much clearer expression of 'the nature of the forms' than is commonly encountered on, say, philosophy forums. ;-)
  • wonderer1
    1.7k
    Notice I asked the additional question, why the emphasis on animal welfare? Is that part of your programming?Wayfarer

    Good questions!

    I would be surprised if, even without the ethical biasing, ChatGPT would have come up with an accurate answer. I suspect something like what @Jonathan Waskan is suggesting would be required for a more advanced AI to be able to recognize that a jack-o'-lantern isn't particularly problematic for a mouse.

    What I'm most curious about is what deep learning will develop when an AI is embodied in a sophisticated robot body. (With learning developed from observing and exploring the world.)
  • Wayfarer
    20.8k
    I think that's a few years off, although Boston Dynamics continues to impress. I would hope that they're embedded with something like Asimov's 'laws of robotics':

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    As noted, I've used ChatGPT since day one, it's become very much part of my day-to-day.

    Where I think it will really start to shine is better integration with Siri or other voice systems such that you can simply have a conversational relationship with it through your devices. That way, you could integrate things like recipes, diet plans, exercise regimens, fitness goals, personal coaching, financial management, with the various apps that do those things. I think that is quite feasible in the near future although many integration challenges remain to be overcome.
  • jgill
    3.6k
    As noted, I've used ChatGPT since day one, it's become very much part of my day-to-day.Wayfarer

    I use the Bing version of AI chat, but it's been disappointing. On some historical issues it simply reprints paragraphs from Wikipedia. Two days ago I asked about a fatal accident that had occurred in a nearby community, and it came back apologetically with no results. I then Googled the accident and it came up on top of the first page.
  • Wayfarer
    20.8k
    That doesn’t surprise me. I’ve been prompted ‘hey, Google it’ in response to some queries I’ve posed to ChatGPT. But it’s been quite amazing in many of the dialogues I’ve had with it. I just had a dialogue about distinguishing Cartesian and Aristotelian dualism and the different ideas of ‘substance’ in each of those traditions. I’ll spot subtle mistakes in what it comes back with and point them out, and sure enough, it will get what I’ve said.

    The only time I tried Bing Chat was on the Windows computer I use for work contracts, in relation to some tech questions, but I really don’t like the format or the onscreen environment and in fact I find the whole MS Edge Browser interface cluttered and busy - it’s like a Tokyo streetscape.
  • jgill
    3.6k
    I think I have mentioned this before, but I wrote a Wikipedia page on an obscure mathematical topic a few years ago. Recently, when I asked the chatbot about that topic, it replied with the first paragraph verbatim from Wiki. I guess I was flattered, but it is a low-priority math topic. :cool:
  • Wayfarer
    20.8k
    Still and all, must warm the cockles of the heart, even if just a little. (I think I wrote or edited a topic on harmony in jazz chords, although haven't looked at it for quite a long time.)
  • Wayfarer
    20.8k
    I've started using 'Bingbot' in my current techwriting contract, and it's actually good! It's like Help on steroids when it comes to solving MS Office-related questions.
  • Jonathan Waskan
    6

    Great question to get at the source of ChatGPT's ethical guardrails. Supposedly it takes great trickery to get it to betray its ethical 'programming,' which I imagine is just more training on texts. What strikes me as impressive is that OpenAI has somehow found a way to get a neural network to prioritize some forms of training over others, very much in the spirit of Asimov. In fact, though Asimov used the three laws to describe robot operating principles, he didn't think of them as being written out explicitly in some form of code. Rather, they were deeply embedded in their positronic networks much as we see with ChatGPT. I think he may have been right, as well, that someday artificial brains will organize the human economy (like the yogurt episode of Love, Death, and Robots). It sounds (and could certainly become) dystopian, but it could also be the long-awaited alternative to the Marx vs. Smith box we feeble-minded humans have been trapped in for so long.

    Re: the Plato analysis...Indeed! Like you, I have been nothing short of flabbergasted at ChatGPT's ability to use information culled from oceans of text to generate lengthy, brilliant analyses such as this one about Forms. I know overcoming the vanishing gradient problem was a key part of this (so that it can keep track of what it said many paragraphs back). But I wouldn't have expected that alone to yield analyses that are so shockingly sound.
  • wonderer1
    1.7k
    In fact, though Asimov used the three laws to describe robot operating principles, he didn't think of them as being written out explicitly in some form of code. Rather, they were deeply embedded in their positronic networks much as we see with ChatGPT.Jonathan Waskan

    Hopefully not as they are embedded in our neural networks, with an ethical bias towards *us* not being harmed. (With "us" referring to some subset of sentient beings.)
  • Jonathan Waskan
    6
    Yes! Robots that learn from the world. It would be a sensible stepping stone to have them develop inner models of how things behave, models they can run offline prior to acting on the actual world. It starts with first learning how to use their own bodies, and that work is succeeding wonderfully. With enough parallel processing hardware behind them (we must thank all those goofy gamers and crypto miners!), neural nets can almost always find a solution. OpenAI has been right in the thick of that too: https://openai.com/research/learning-dexterity

    I'm hoping they put the brains and brawn together within the next ten years.
  • L'éléphant
    1.4k
    The reason for ChatGPT’s inadequate answer the above mouse query (and to others like it that I have posed) is that ChatGPT is trapped in the Chinese Room. It has access only to arbitrary linguistic shapes. Because semantic regularities are often mirrored by linguistic ones, it can answer many queries in ways that seem eerily intelligent. Even so, it will always lag human intelligence.Jonathan Waskan
    This. ChatGPT's first mistake is not understanding what the thing is -- a jack-o-lantern is carved out, with holes for the eyes and mouth. So its concern about the mouse not being able to breathe is already misplaced. It's like talking to someone whose society did not know about jack-o-lanterns. Not bad at all, but there's the kink already.

    But here's a real-life interaction I just had. We went through a fast-food drive-through. It was an AI that greeted us. We recited our orders and the AI responded accordingly. As in a human interaction, there's usually a clear indication that one's order is complete. But then the AI started asking if we "would like to order this, or that". We said no, that'll do it. However, it continued to offer things on the menu. So, my friend thought, how do we stop it from talking? haha.

    One might think that the problem has to do with the expressions lacking ‘grounding’ in the real world. But purely from an engineering perspective, what ChatGPT lacks is internal representations that are more richly isomorphic to the real world.Jonathan Waskan
    This and other things.
  • wonderer1
    1.7k
    I'm hoping they put the brains and brawn together within the next ten years.Jonathan Waskan

    I'm somewhat trepidatious about it. The first thing that popped up when I googled "neuromorphic hardware" was this link sponsored on Google by Intel. It is the first advertising of neuromorphic hardware that I have seen.

    I've been a convinced connectionist for nearly 40 years now, and I was confident that AI would reach its current state right about now. However, for a long time I thought it would only happen after neuromorphic hardware was readily available. The acceleration in the development of AI that I see as likely seems like something humanity is not well prepared for.
  • Jonathan Waskan
    6
    Sure, thanks for asking!

    Each word that ChatGPT spits out is really just a statistically plausible guess at what word might appear next in that context (context = the user's prompt and the other words it has already spit out). There are generally lots of words that might come next, and so long as it hits on one of them, it has done its job.

    You can probably see where this is going. Math doesn't quite work that way. Yes, there can be multiple next steps that are allowed, but the rules are far more rigid than with everyday speech. The simplest example gets this general point across well enough:

    2 + 3 =
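
    To make that concrete, here is a minimal sketch in Python of what 'a statistically plausible guess at the next word' amounts to for the prompt above. The candidate tokens and probabilities are entirely made up for illustration; nothing here reflects OpenAI's actual model:

        import random

        # Toy next-token distribution for the prompt "2 + 3 =".
        # The right answer is likely, but near-misses culled from
        # faulty or fanciful text still carry probability mass.
        next_token_probs = {
            "5": 0.60,
            "five": 0.15,
            "6": 0.10,       # a plausible-looking wrong answer
            "what?": 0.15,
        }

        def sample_next_token(probs):
            """Sample one token in proportion to its probability."""
            tokens, weights = zip(*probs.items())
            return random.choices(tokens, weights=weights, k=1)[0]

        print(sample_next_token(next_token_probs))  # usually '5', but not always

    A sampler like this has done its job whenever it emits any of the permitted continuations, which is fine for everyday prose but not for arithmetic.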

    How did Q* solve this problem? Here's my wild guess....

    If the next word spit out has to be consistent not just with regularities culled from gobs of factual, faulty, and fanciful texts but also with a model of [or, really, a model that coheres with] what the text describes, its answers will be far more tightly constrained.

    If the next word has to be consistent with (a) regularities picked up from processing texts and (b) a mental model where, for instance, 3 matchsticks are added to 2 matchsticks, that greatly restricts the space of plausible next words.
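
    A minimal sketch of that idea, again in Python with made-up numbers: a trivial 'matchstick model' stands in for the rich spatiokinetic medium being hypothesized, and the language model's candidate words are filtered by what the simulation actually yields:

        # Hypothetical: constrain an LM's candidates with a non-linguistic model.

        def matchstick_model(a, b):
            """Crude world model: lay out a sticks, then b sticks, and count."""
            sticks = ["|"] * a + ["|"] * b
            return len(sticks)

        # Made-up candidate next tokens from a pure language model.
        lm_candidates = {"5": 0.60, "five": 0.15, "6": 0.10, "7": 0.05}

        def constrained_answer(a, b, candidates):
            simulated = matchstick_model(a, b)   # run the model offline
            survivors = {tok: p for tok, p in candidates.items()
                         if tok == str(simulated)}
            pool = survivors or candidates       # fall back to the raw LM
            return max(pool, key=pool.get)

        print(constrained_answer(2, 3, lm_candidates))  # '5', the only survivor

    The filter is doing the real work here: the space of plausible next words collapses to whatever coheres with the model.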

    Given that there are spatial proofs of the Pythagorean theorem and lots more besides, this takes you a long way into grade-school math. But why get spooked about that (the way OpenAI got spooked by its little mathematician Q*)?

    A system that works as described would have lots of other capabilities. It could eventually understand why a mouse needn't worry about being trapped in a jack-o-lantern. It could engage in forethought prior to taking action (that is, once it gets a body) so as to achieve its goals. It could generate and test hypotheses. And so on. It wouldn't just be reflecting human language back at us in ways that look smart. It would be smart.
  • Jonathan Waskan
    6
    For some reason ChatGPT also wasn't being fed current information (I think everything was at least a year old). Recently they allowed it access to current events, but I think that's only for paid members. Not sure what the rationale behind the dated info was or is.
  • wonderer1
    1.7k
    For some reason ChatGPT also wasn't being fed current information (I think everything was at least a year old). Recently they allowed it access to current events, but I think that's only for paid members. Not sure what the rationale behind the dated info was or is.Jonathan Waskan

    I would guess vetting of sources for reliability would be a concern. What sources of information would a 'real time, up-to-date' AI be relying on without vetting?
  • Jonathan Waskan
    6
    The acceleration in the development of AI that I see as likely seems like something humanity is not well prepared for.wonderer1

    Wearing my nerd hat, I am excited to see what's next, but I agree, there are reasons to be terrified. I should be careful what I wish for.

    It was nice that OpenAI was founded to get out in front of this and make sure that AI doesn't cause horrific disruptions. Of course, there are now questions about whether or not their priorities have changed. And they are just one actor in one nation. With nation-states still living in a state of nature, they are all plowing ahead to be first to gain (or not lose) the advantage. Ready or not, here it comes.
  • wonderer1
    1.7k
    Ready or not, here it comes.Jonathan Waskan

    Indeed.
  • Wayfarer
    20.8k
    It stands to reason that mathematical calculations are not really within scope for the way large language models work, but you do wonder why they don't incorporate a 'maths module' based on something like Mathematica, which after all has been around since the 1980s, specifically to handle mathematical queries. Wouldn't seem that conceptually difficult although there might be commercial considerations.
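
    For what it's worth, a 'maths module' is roughly what tool use looks like in practice. Here is a hypothetical sketch in Python; the regex dispatch and the use of eval stand in for whatever a production system (or a Mathematica back end) would actually do:

        import re

        def call_llm(query: str) -> str:
            return f"[LLM answer to: {query}]"  # hypothetical stand-in

        def answer(query: str) -> str:
            """Route pure arithmetic to an exact evaluator; leave prose to the LLM."""
            m = re.fullmatch(r"\s*([\d+\-*/(). ]+)=?\s*", query)
            if m:
                # A real system might call Mathematica or a sandboxed
                # interpreter here; eval is fine for this toy, unsafe in general.
                return str(eval(m.group(1)))
            return call_llm(query)

        print(answer("2 + 3 ="))          # -> 5
        print(answer("What is a Form?"))  # routed to the language model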
  • Gnomon
    3.5k
    ↪wonderer1
    I think that's a few years off, although Boston Dynamics continues to impress. I would hope that they're embedded with something like Asimov's 'laws of robotics':
    Wayfarer
    Although Elon Musk was a founder of the OpenAI organization that created ChatGPT, he seems to be almost paranoid about computers colonizing the world, with dumb humans as their slaves. So he insisted on including safeguards in the programming. Unfortunately, that doesn't stop them from picking up immoral attitudes from their intake of meat-brain human opinions. Twitter (X) is a case in point of human ethical faults embedded in online data. :worry:
  • Gnomon
    3.5k
    Chinese room or not - and I am familiar with the thought-experiment - I found this a much clearer expression of 'the nature of the forms' than is commonly encountered on, say, philosophy forumsWayfarer
    Maybe ChatGPT could serve as a moderator on this forum. :joke:
  • Wayfarer
    20.8k
    It's probably too busy, and besides, it costs money.
  • jgill
    3.6k
    but you do wonder why they don't incorporate a 'maths module' based on something like Mathematica,Wayfarer

    This should work for definitions, examples, and evaluations of certain complicated computations. As to the reasoning and guesswork that go into problem solving, I'm a little pessimistic. How do you program creativity? I am currently looking into compositions of contours in the complex plane, finding ways to compose them that violate standard usage of the term as well as more conventional approaches. Mathematica is machinery for what I come up with. Just a trivial example.
  • Wayfarer
    20.8k
    Well, sure, I wasn’t thinking about pure maths or advanced maths research. The comment was that as a large language model, ChatGPT isn’t helpful when it comes to calculations. I was thinking more along those lines.

    Anecdote: I have a situation where I was asked for a loan at a 15% interest rate (unsecured but with a proper loan agreement, in a business I know and respect) over six years. At the same time, I found I could borrow at 7.94% over a longer term. So I asked ChatGPT to help work out whether, if I borrowed the amount I wanted to lend plus an extra amount of money, the repayments coming back off the first loan would be sufficient to pay out the amount I borrowed, including the extra amount, taking advantage of the interest differential. Took a few tries, but in the end it came back with a correct answer (which was: yes, I could. Bing was absolutely, laughably out on the equation, but that was a month ago, so who knows…)
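
    For the curious, the underlying arithmetic is just standard loan amortization. A sketch in Python using the two rates from the anecdote, with hypothetical amounts and a hypothetical longer term, since the actual figures weren't given:

        def monthly_payment(principal, annual_rate, years):
            """Standard amortized payment: P*r / (1 - (1+r)**-n), monthly."""
            r = annual_rate / 12
            n = years * 12
            return principal * r / (1 - (1 + r) ** -n)

        lent = 100_000                                   # hypothetical amount lent at 15%
        incoming = monthly_payment(lent, 0.15, 6)        # repayments coming in

        borrowed = 110_000                               # hypothetical: lent amount plus extra
        outgoing = monthly_payment(borrowed, 0.0794, 8)  # hypothetical 8-year term

        print(f"in: {incoming:,.2f}/mo, out: {outgoing:,.2f}/mo")
        # The scheme works if the incoming repayments cover the outgoing
        # ones, including the tail after the six-year loan is repaid.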

    (For that matter, the calculator function on a desktop computer is adequate for most everyday uses, I wonder why ChatGPT doesn’t just use a calculator, like anyone else. I might ask it.)
  • Wayfarer
    20.8k
    So I asked.

    Q: Hey I’m a participant in a philosophy forum, where there’s some discussion about chatbot abilities. The question came up, how well do Large Language Models handle calculations? Because there’s been some chatter in the media about the fact they don’t seem to do well with them. But then I thought, couldn’t you just call on a ‘calculator function’ to perform tasks like that which are out-of-scope for LLMs as such? What do you say?

    ChatGPT: You're absolutely right in your thinking. As a Large Language Model, I'm primarily designed for language understanding and generation. Direct mathematical calculations, particularly complex ones, are not within my core capabilities. However, I can indeed call on a specialized 'calculator function' or, more accurately, a Python environment, to perform such tasks.

    This means that for straightforward calculations, I can handle them internally, but for anything more complex or requiring precision, I use this Python tool. It allows me to perform a wide range of computations, from basic arithmetic to more advanced mathematical operations. This approach ensures accuracy and expands the range of mathematical tasks I can assist with.

    So, in your philosophy forum discussion, you can mention that while Large Language Models like myself are not inherently calculators, we have the capability to perform accurate calculations by leveraging external tools.

    So there you go. ChatGPT uses a calculator, just like the rest of us.
  • Wayfarer
    20.8k
    'Like myself' :wink:
  • wonderer1
    1.7k
    I thought some might be interested in this:

    https://spectrum.ieee.org/ai-energy-consumption

    You are a bad man @Wayfarer - using Chat-GPT so profligately. :razz:
  • Wayfarer
    20.8k
    Oh well at least I'm not mining bitcoins. :yikes: