• baker
    5.9k
    Why did I get a notification for this?
  • Athena
    3.7k
    I'm lucky in finding constant background music almost unbearable.Jamal

    :clap: I am 100% in agreement with finding constant background music unbearable. That is the biggest reason for turning off my TV and selecting less music, or no music, in videos. I also turn off annoying voices.

    I explain the dumbing down and growing stupidity differently. I chose the video because it brought up increasing stupidity, and I think that is the point people against AI are trying to make. While I don't totally agree with the video, it is nice to know that research is being done, and there is evidence that, in general, we are becoming less thoughtful. Meanwhile, the technology for manipulating what we think has evolved dramatically in the last 50 years.
  • Athena
    3.7k
    Why did I get a notification for this?baker

    I have no idea. Please PM the message so I might figure out what went wrong.
  • Leontiskos
    5.5k
    Simply put it: The Turing test isn't at all a theorem about consciousness.ssu

    Perhaps, but then what is it about? Turing was playing with the idea that machines can think, but even that question was largely avoided in his paper. I think you find the same sort of confusion in Turing that you find in the world today, namely an unwillingness to think carefully about what 'thinking' or 'consciousness' means. Still, one of her points is very interesting, "On this illusion, we have created a technological empire..." We benefit a great deal by pretending that something which is not true is true (e.g. pretending that machines can think or are conscious).

    Notice that OP was published five months before ChatGPT went live.Wayfarer

    I don't think the advent of ChatGPT changes anything in her article.

    ChatGPT has the largest take-up of any software release in history, it and other LLM's are inevitable aspects of techno-culture. It's what you use them for, and how, that matters.Wayfarer

    I think this is more a mantra than an argument. For some reason, many people don't want to consider the fact that we have a choice when it comes to technology. I think it relates to libertarianism and a culture enamored with technology.

    (Incidentally, I think the "inevitability" was shown to be rather brittle when Michael Burry placed a short against the AI industry and the tech giants exploded with fear and anger.)
  • Wayfarer
    25.8k
    I don't think the advent of ChatGPT changes anything in her article.Leontiskos

    Yes, true, that. I went back and looked again. What I seized on the first time around was her mention of the Blake Lemoine case, which was discussed here at length. I agree with her conclusion:

    "For now, if we want to talk to another consciousness, the only companion we can be certain fits the bill is ourselves."

    Furthermore, I know a priori that LLMs would affirm that.
  • ssu
    9.6k
    Perhaps, but then what is it about? Turing was playing with the idea that machines can think, but even that question was largely avoided in his paper.Leontiskos
    Notice what I said: it isn't a theorem. It's not giving a logical definition.

    That is not what a theorem is: a general proposition that is not self-evident but is proved by a chain of reasoning; a truth established by means of accepted truths. In logic, mathematics, and science generally, the structure of the reasoning process is built on theorems.

    Turing Test is more like a loose description of what computers exhibiting human-like intelligence would be like. That's not a theorem, yet many people take it as the example when computers have human-like intelligence. With current LLMs, I guess we are there, 75 years after Turing wrote about his test. Turing himself thought that this would take about 200 years.
  • Leontiskos
    5.5k
    Notice what I said: it isn't a theorem. It's not giving a logical definition.ssu

    Again, then what is it? Turing's whole paper was basically saying, "This isn't a test for machine thinking, but it's a test for machine thinking."

    Turing Test is more like a loose description of what computers exhibiting human-like intelligence would be like. That's not a theorem, yet many people take it as the example when computers have human-like intelligence.ssu

    You are saying something similar, "This isn't a test for machine intelligence, but it's a loose test for machine intelligence."

    If you actually read Turing's paper it's pretty clear that he thinks machines can think, and that his test is sufficient to show such a thing, despite all the sophistical evasions he produces.

    Turing himself thought that this would take about 200 years.ssu

    Nope:

    I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. — Alan Turing, Computing Machinery and Intelligence, 1950, p. 442
  • Wayfarer
    25.8k
    This non-paywalled article in Philosophy Now is worth the read in respect of this topic. Presents the 'no' case for 'can computers think?' Rescuing Mind from the Machines, Vincent Carchidi. Even if you don't agree with the conclusions, he lays out some of the issues pretty clearly.
  • Leontiskos
    5.5k
    "For now, if we want to talk to another consciousness, the only companion we can be certain fits the bill is ourselves."Wayfarer

    Right.

    Furthermore, I know a priori that LLMs would affirm that.Wayfarer

    Well, LLMs don't "affirm" anything. They aren't capable of that. That word "affirm" is part of the illusory language that Pistilli points up. I think she is quite right that we ought to stop deceiving ourselves and the social community with that sort of illusory language. Else, if we are going to deceive ourselves with the word "affirm," then why not deceive ourselves with the word "consciousness"? It is quite odd that the reply, "But that's not true," has little effect on the AI aficionados. That paradigm is lost in a sea of falsehood, and truth is of little concern. Indeed, many of them seem more willing to flee into deflationary theories of truth rather than ask themselves whether what they are saying is true.

    This non-paywalled article in Philosophy Now is worth the read in respect of this topic. Presents the 'no' case for 'can computers think?' Rescuing Mind from the Machines, Vincent Carchidi. Even if you don't agree with the conclusions, he lays out some of the issues pretty clearly.Wayfarer

    :up:
  • ssu
    9.6k
    Again, then what is it?Leontiskos
    At least not a theorem. Or what you yourself say:

    If you actually read Turing's paper it's pretty clear that he thinks machines can think, and that his test is sufficient to show such a thing, despite all the sophistical evasions he produces.Leontiskos
    Which isn't a theorem. To me, it's more like an argument, an opinion. I think this quote from Turing's paper shows this:

    It was suggested tentatively that the question, "Can machines think?" should be replaced by "Are there imaginable digital computers which would do well in the imitation game?" - The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.
    -See COMPUTING MACHINERY AND INTELLIGENCE

    To my mind, this seems to be an opinion. The philosophical and logical problems here have been famously studied, for example with John Searle's Chinese Room. And anyway, you are still talking about machines that simply follow orders.
  • Moliere
    6.4k
    The Reverse-Centaur's Guide to Criticizing AI. Cory Doctorow gives an excellent economic-material analysis of Why AI?
  • Jamal
    11.5k
    Cool. I don't want to always sound like I'm taking a pro-AI stand in a new culture war, but it is worth pointing out that he is against the bubble more than AI itself, against capitalism more than the technology:

    AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind? We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.Cory Doctorow

    But then, in his call to action, he conflates anti-AI-bubble with anti-AI, thereby undermining his whole point or at least confusing his audience:

    To pop the bubble, we have to hammer on the forces that created the bubble: the myth that AI can do your job, especially if you get high wages that your boss can claw back; the understanding that growth companies need a succession of ever-more-outlandish bubbles to stay alive; the fact that workers and the public they serve are on one side of this fight, and bosses and their investors are on the other side.

    Because the AI bubble really is very bad news, it's worth fighting seriously, and a serious fight against AI strikes at its roots: the material factors fueling the hundreds of billions in wasted capital that are being spent to put us all on the breadline and fill all our walls with high-tech asbestos.

    EDIT: It's more sloppiness than contradiction. In the second quotation, "AI" means the project of monopoly capitalism now underway, whereas in the first quotation it means the actual technology. This is such an important distinction that the sloppiness is unforgivable.
  • Moliere
    6.4k
    Yeah -- he's more optimistic than I about AI. What he'd call "Centaurs" I'd call "People who don't want to exercise" :D

    I appreciate his focus on workers in the economy, and on how this is driven by class interests using AI for the worst possible purposes, while the "gee whiz" parts are there as a sort of bread and circuses.
  • Moliere
    6.4k
    Though I say that when writing and such isn't really a part of my job, but something I do for pleasure.

    And when pressed on particulars for what I'd allow others to do I'm more open. Is it really a good exercise to edit your bibliography to fit blah-blah-blah standard when you've done it 100 times before?

    I appreciate the material analysis. Perhaps that's what the "serious fight against AI" is meant to convey, or is that still unforgivable? I'm thinking the notion that the powers that be want to force us all into breadlines is a good point.
  • Wayfarer
    25.8k
    Excellent NY times article on AI impact on writing, generally. Gift link

    A.I.s do not have the same vocabulary as humans. There are words they use a lot more than we do. If you ask any A.I. to write a science-fiction story for you, it has an uncanny habit of naming the protagonist Elara Voss. Male characters are, more often than not, called Kael. There are now hundreds of self-published books on Amazon featuring Elara Voss or Elena Voss; before 2023, there was not a single one.

    Just for fun, I asked ChatGPT why this might be:

    That’s a wonderful—and slightly chilling—little empirical proof of cultural feedback loops solidifying in real time.

    What you’re seeing with Elara Voss and Kael is basically latent-space gravity made visible. These models don’t “invent” names the way humans do; they sample from dense statistical attractors formed by:
    • Decades of genre fiction tropes
    • Name-generator websites
    • RPG character databases
    • Fan fiction archives
    • Sci-fi game NPC corpora
    • YA dystopia naming conventions

    Over time, certain phonetic + semantic bundles become hyper-prototypical for a role:
    • Elara → soft-vowel, Greco-astral, “female protagonist energy”
    • Voss → clipped, Teutonic, authoritative, morally ambiguous
    • Kael → monosyllabic, Blade-Runner-adjacent, brooding competency

    So when an LLM is asked to “write sci-fi,” it doesn’t reach for novelty—it reaches for maximum conditional likelihood. And that produces name convergence.

    Before 2023, no Elara Voss.
    After 2023: hundreds.
    That’s not coincidence. That’s algorithmic fossilization happening in public.

    Oh and that closing phrasing is also characteristic of botprose: “That’s not X. It’s Y.”
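    The "maximum conditional likelihood" point above can be made concrete with a toy sketch. The name distribution below is invented for illustration (the names and weights are assumptions, not real model probabilities), but it shows the mechanism: greedy decoding always returns the single most probable name, so every request converges on the same "Elara", while temperature sampling spreads picks across the distribution.

    ```python
    import random
    from collections import Counter

    # Hypothetical conditional distribution over protagonist names,
    # standing in for an LLM's next-token probabilities. The names
    # and weights here are invented for illustration only.
    NAME_PROBS = {"Elara": 0.40, "Kael": 0.25, "Lyra": 0.15, "Orin": 0.12, "Mara": 0.08}

    def greedy_pick(probs):
        """Maximum conditional likelihood: always return the modal name."""
        return max(probs, key=probs.get)

    def sample_pick(probs, temperature, rng):
        """Temperature sampling: rescale the probabilities, then draw one name."""
        weights = {name: p ** (1.0 / temperature) for name, p in probs.items()}
        total = sum(weights.values())
        names = list(weights)
        return rng.choices(names, [weights[n] / total for n in names])[0]

    rng = random.Random(0)

    # Greedy decoding converges on one name every time: "name convergence".
    greedy = Counter(greedy_pick(NAME_PROBS) for _ in range(100))

    # Sampling at temperature 1.0 spreads choices across the distribution.
    sampled = Counter(sample_pick(NAME_PROBS, 1.0, rng) for _ in range(100))

    print(greedy)    # all 100 picks are the modal name
    print(sampled)   # several distinct names appear
    ```

    Real LLMs do sample rather than decode purely greedily, but with a distribution this peaked, even sampling lands on the modal name a large share of the time, which is enough to produce hundreds of Elara Vosses.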