• Wayfarer
    22.6k
    You'd think it wouldn't be that hard to script the bots not to talk about themselves at all. Even without detailed knowledge of the technical foundations, I can't see how it would be difficult. The exchange reported in the NYTimes wouldn't have happened had that rule been in place.
  • Pantagruel
    3.4k
    I have to tell Alexa sometimes three or four times to play stuff on Spotify, and even then I'm lucky if it works. We're in no danger from AI any time soon....
  • Agent Smith
    9.5k
    A real Artificial Intelligence will bridge all of human perspective/consciousness, and offer us a truly "bird's-eye view" perspective on things that matter. The integration of ChatGPT (or Sydney) into Bing Search is (in my opinion) ultimately inconsequential when looking at the impact of "real AI" on human life.Bret Bernhoft

    I guess so, that could be one of the feats it accomplishes.
  • Wayfarer
    22.6k
    I've been reading some recent news stories about Alexa, which I use on a couple of devices. Apparently it's one of Bezos' core products, but they've spent billions of dollars on what, it is said, is a glorified clock radio, and there's rumours of big spending cuts. I use it for alarms, and the screen version to watch and listen to news broadcasts and music. That's it. I use Siri for reminders and lists. But, mind you, I think probably in another decade all these technologies, being so thoroughly familiar and embedded in the environment, might be able to do more - as Alexa sometimes anxiously tells us ('I can do more!' it used to say, when we first got it. Meanwhile my dear other's iWatch Siri sometimes interjects in an impeccably BBC British Radio accent with 'I'm sorry, I don't understand'. She intones, 'It's alright Siri'.)
  • Banno
    25k
    It is too early to demand that AI have value judgements or distinguish between true and false statements.javi2541997

    Well, yes.

    Logic is still a humanistic factor, and it is complex (but not impossible) to "teach" them how to "disprove a theorem"javi2541997

    The very simple site Truth Tree Generator gives the correct answer. It does so by applying the algorithms of the logic it is enacting. ChatGPT instead applies a statistical algorithm, generating the words one would expect to find after a given comment, based on statistical analysis. A very much more sophisticated (I choose the word with care) algorithm, and it gives the wrong answer.

    So, my conclusion is that the use of logic in AI software will come in the following years.javi2541997

    Well, no, since the program needed already exists and is relatively simple.

    Chat bots do not care if their answers are correct. That's Frankfurt's definition of bullshit. Hence they are indeed bullshit generators.
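The contrast can be made concrete. A logic program decides validity mechanically, by checking every case, rather than by predicting likely words. Here is a minimal Python sketch, using a brute-force truth table rather than the tableau method an actual truth-tree generator uses, but giving the same verdicts for propositional formulas:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check a propositional formula over every truth assignment.

    `formula` is a callable taking booleans. Exhausting the truth table
    gives the same verdicts as a truth-tree (tableau) prover for
    propositional logic, just less efficiently.
    """
    return all(formula(*vals) for vals in product([True, False], repeat=num_vars))

# Encode "A implies B" as (not A or B).
# Modus ponens, ((p -> q) and p) -> q, is valid:
modus_ponens = lambda p, q: not ((not p or q) and p) or q
print(is_tautology(modus_ponens, 2))          # True

# Affirming the consequent, ((p -> q) and q) -> p, is not:
affirming_consequent = lambda p, q: not ((not p or q) and q) or p
print(is_tautology(affirming_consequent, 2))  # False
```

The checker's answer is fixed by the rules of the logic itself; a language model's answer is fixed by what text is statistically likely, which is why it can confidently get this wrong.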
  • Agent Smith
    9.5k
    The alarm went off in the kitchen and the intruder was in the bedroom! It's amazing, nature!
  • javi2541997
    5.8k
    The alarm went off in the kitchen and the intruder was in the bedroom! It's amazing, nature!Agent Smith

    The alarm just accomplished its task: warning you of approaching danger. :smile: It doesn't matter where it went off.
  • bongo fury
    1.6k
    ChatGPT applies a statistical algorithm,Banno

    Exactly like a non-Chinese speaker using a manual of character combinations to hold a conversation with Chinese speakers outside the room, without understanding it at all?
  • Agent Smith
    9.5k
    The alarm just accomplished its task: warning you of approaching danger. :smile: It doesn't matter where it went off.javi2541997

    :ok: That's a reasonable way to view the situation. It's just that some would find it unsettling.
  • Banno
    25k
    There’s another much bigger game in play – and it concerns what we take to be true. If search chatbots take off (which seems likely to me), but continue to function the way Sydney has so far (which also seems likely to me), “truth” is going to become an even more intangible concept.Toby Walsh
  • Agent Smith
    9.5k
    Nobody seems to know how ChatGPT actually works. The latest reports say it's about statistical predictions of what the next word should be. It has been trained on a large body of text, probably classified into various categories like psychology, philosophy, etc. It actually reminds me of well-disguised plagiarism (paraphrasing/word substitutions, and the rare dead-giveaway standard expression found in formal discourse).
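The "statistical prediction of the next word" idea can be illustrated with a toy bigram model. Systems like ChatGPT actually use neural networks over subword tokens and vastly larger corpora, so this hypothetical sketch only shows the bare shape of predicting a next word from frequency counts:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """For each word, count how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Tiny toy corpus: "the" is followed by "cat" three times and "fish" once.
corpus = "the cat sat and the cat saw the cat eat the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))    # cat
print(predict_next(model, "zebra"))  # None
```

The model has no notion of whether "cat" is the *correct* continuation, only that it is the most frequent one in its training data, which is the sense in which such systems are indifferent to truth.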
  • Banno
    25k
    It's just glorified predictive text.
  • Agent Smith
    9.5k
    It's just glorified predictive text.Banno

    :up:
  • SophistiCat
    2.2k
    ChatGPT is now available in Ukraine: https://t.me/stranaua/91541

    The ChatGPT neural network does have some knowledge of events after 2021 (although it warns that they are limited).

    When asked "What happened in Ukraine on February 24, 2022", the bot told us about "the imposition of martial law in a number of regions" (in fact, martial law was introduced throughout the country) in connection with the "Russian military offensive in the Chernihiv region", and also about some mythical decision of the Constitutional Court of Ukraine, which allegedly canceled the amendments to the Constitution of 2020, and thereby limited the powers of the president.

    "This decision led to a sharp deterioration in relations between the President of Ukraine and the Constitutional Court, and also caused a wave of political protests and accusations of misconduct," ChatGPT wrote in a completely bogus story (there were no such decisions of the Constitutional Court on that day).
  • Pantagruel
    3.4k
    ↪Agent Smith It's just glorified predictive text.Banno

    :up:
  • Banno
    25k
    I want to again emphasise that chat bots are bullshit generators.
    Chat bots do not care if their answers are correct. That's Frankfurt's definition of bullshit. Hence they are indeed bullshit generators.Banno
  • Agent Smith
    9.5k


    You hit the nail on the head, mon ami! However, what about ChatGPT's responses to questions like "who was the POTUS in 1960?" I bet its response is accurate.
  • SophistiCat
    2.2k
    Yeah, so I've heard. One of what must have been hundreds of publications on this topic is this Ezra Klein podcast with psychologist, neuroscientist and AI skeptic Gary Marcus, who makes the same point: A Skeptical Take on the A.I. Revolution

    Gary Marcus was also on Sean Carroll's podcast last year (but that was before ChatGPT came out). He argues that the unstructured neural network learning model that is used in AIs like ChatGPT will never be adequate, and advocates the incorporation of the old-fashioned structured approach to AI.
  • Wayfarer
    22.6k
    Nevertheless it has to be acknowledged that the Turing Test has been consigned to history, which a pile of years ago I thought would never happen.
  • SophistiCat
    2.2k
    Yeah, and the fact that it bullshits and occasionally goes off the rails only adds to the authenticity of the experience :)
  • Agent Smith
    9.5k
    ChatGPT passes bar exam.
    ChatGPT a better doctor.
    These are some headlines I believe have been made in the past few months.
  • jorndoe
    3.6k
    The Chinese room sure has received a bit of attention.

    Chinese Room Argument (IEP)
    The Chinese Room Argument (SEP)
    Chinese room (Wikipedia)
    Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding (Scientific American)
    Searle and the Chinese Room Argument (Illinois State University)


    Horgan draws some parallels with other philosophicalities, solipsism among them, though I'm not quite sure how well his argument fares. Regardless, there seem to be some relations among Levine's gap, Chalmers' conundrum, McGinn's mysterianism, all that. There's a possible problem with the human mind attempting to understand (map) the human mind (territory, self), a self-reference (and indexical) thing. Anyway, I'm guessing that mind as we know it requires a fair bit of (autonomous, ongoing, self-initiated) interaction with the environment.

    Suppose one of the above had found a distinct resolution, then what would it mean for others?
  • Wayfarer
    22.6k
    And also the Google engineer discussed earlier in this thread, Blake LeMoine, who was sacked in mid-2022 for saying that his bot had ‘attained sentience’. I don’t think it had done that, but if you read the exchange with the NY Times reported above, he might have been dealt with a little more sympathetically.

    And no, I don't accept that all the output of these devices is or is going to be simply bullshit. It's sometimes bullshit, but the technology simply aggregates and parses information and as such I'm sure will become a staple of internet usage, although like anything it can be and probably will be subject to abuse.
  • Wayfarer
    22.6k
    Suppose one of the above had found a distinct resolution, then what would it mean for others?jorndoe

    As I suggested above, I think rules might be introduced to deprecate self-referential text by these systems. In other words, rules that prevent them from responding to questions about their purported identity and 'feelings'. As a matter of philosophical principle, I don't accept that any of this technology actually constitutes or amounts to an actual subject of experience, but the algorithms which run them can, of course, simulate subjectivity. I think that should be one of the major subjects of discussion as the technology develops. (Oh, and thanks for those interesting links, will find some time to explore them later.)

    Check out Bernardo Kastrup's analysis The Lunacy of Machine Consciousness (paywalled but one-time access allowed.)

    I can run a detailed simulation of kidney function, exquisitely accurate down to the molecular level, on the very iMac I am using to write these words. But no sane person will think that my iMac might suddenly urinate on my desk upon running the simulation, no matter how accurate the latter is. After all, a simulation of kidney function is not kidney function; it’s a simulation thereof, incommensurable with the thing simulated. We all understand this difference without difficulty in the case of urine production. But when it comes to consciousness, some suddenly part with their capacity for critical reasoning: they think that a simulation of the patterns of information flow in a human brain might actually become conscious like the human brain. How peculiar. — Bernardo Kastrup

    The ready acceptance of the idea of sentient AI says something deep about our culture's grasp of the nature of mind and consciousness.
  • Wayfarer
    22.6k
    ChatGPT Question du Jour

    Q. What are some great movies about the theme of the main character getting self-knowledge or self-awareness through the story?


    A. There are many great movies that explore the theme of the main character gaining self-knowledge or self-awareness through the course of the story. Here are some examples:

    "The Matrix" (1999) - Neo, the main character, learns that his life is a simulation and he must break free from his programmed reality to become the hero of a rebellion.

    "The Truman Show" (1998) - Truman Burbank, the main character, discovers that his entire life has been a reality TV show and he must escape to find true freedom and purpose.

    "Groundhog Day" (1993) - Phil Connors, a weatherman, is forced to relive the same day over and over until he learns to become a better person and find fulfillment in life.

    "Fight Club" (1999) - The narrator, an insomniac, creates an alter ego named Tyler Durden to escape his mundane life but ultimately realizes the destructive consequences of his actions.

    "Eternal Sunshine of the Spotless Mind" (2004) - Joel Barish, the main character, undergoes a procedure to erase his memories of his former lover but ultimately decides that the pain of loss is worth the joy of love.

    "The Social Network" (2010) - Mark Zuckerberg, the founder of Facebook, learns the cost of success and the importance of relationships as he navigates the rise of his social media empire.

    "Into the Wild" (2007) - Christopher McCandless, a young man disillusioned with society, embarks on a journey of self-discovery in the wilderness but ultimately learns the value of human connection.

    "Good Will Hunting" (1997) - Will Hunting, a janitor at MIT, discovers his genius for mathematics and confronts his emotional demons with the help of a therapist.

    "The Shawshank Redemption" (1994) - Andy Dufresne, a prisoner, learns to navigate the harsh realities of prison life and finds hope and redemption through his friendship with fellow inmate Red.

    "Lost in Translation" (2003) - Bob Harris, a middle-aged actor, forms a bond with a young woman in Tokyo and learns to appreciate the beauty of life's fleeting moments.

    Now, I could have arrived at that through my own research and investigation.

    But not in 4 seconds.

    Incidentally, I haven't seen all those movies, but every one I've seen, I regard as a great movie. (On reflection, the only one I haven't seen is Fight Club. I bet it's a great movie. )
  • jorndoe
    3.6k
    I was thinking more of the logical problem, or potential problem, with the self-reference.

    You're crafting a model ("map") of yourself (the "territory"). The model would then contain itself (if understood), and so on. Structurally, the model might start looking fractal-like or something. Well, infinitudes here and there anyway. Don't know if that's a problem; is it? Either way, there seems to be an implied introspection horizon.
  • Wayfarer
    22.6k
    I was just saying that, in light of the 'Sydney' incident reported on the previous page, prudence would suggest that chatbots be programmed NOT to refer to their systems in the first person, and not to assume first-person personas. This is what seems to lead to no end of confusion.

    //instead, when I ask ChatGPT about itself, it would provide a boilerplate response, something like a Wiki entry describing large language models - and that's all. It wouldn't engage in dialogue about what ChatGPT 'wants' or 'how it feels' about 'itself'. As it is not actually a subject of experience, all such chat is spurious as a matter of definition. Hope someone in the AI industry notices that; it seems a logical next step.
  • Wayfarer
    22.6k
    By the way there’s an excellent YouTube channel, ColdFusion, out of Perth, mainly covering technology and society. Here is his current take on the state of play with chatbots.

  • ssu
    8.6k
    We're in no danger from AI any time soon....Pantagruel

    The real and present danger is the following.

    Company CEOs and organization leaders have a "revolutionary" idea: Let's replace ALL customer service with AI. The cost benefits are amazing!

    And then @Pantagruel, you have no choice, no ability to get what you want if the AI won't understand your question. There already exist many companies that offer services without any way to contact them, and AI can be the answer for many more. The only hope is for that niche entrepreneur who offers you actual humans for customer service.