• T Clark
    14k
    tl;dr, I fully agree with the proposed site rules amendment, which seems to me warranted regardless of the degree of accuracy or reliability of LLM outputs.Pierre-Normand

    You clearly have put a lot of thought and effort into how LLMs work and how to make them work better. That seems like a useful exercise. It also raises a question. Do you actually use LLMs to solve problems, answer questions, or discuss issues in the non-LLM world, or only those directly related to the LLMs themselves?
  • Outlander
    2.2k
    AI LLMs make naught but verbal cardboard. There is an all-pervasive ploddingness and insubstantiality to their cadence that betrays their empty core. They are the McDonald's of the written word, churning out processed verbiage that can truly impress no one but those amputated of taste, of inner poetry. They do not strike, they pose. They do not inspire, they anaesthetise. They are one more incursion of the commercial juggernaut into the beauty of human existence. And their aim is to crush it.

    They are an ugly black hole of nothingness that wants our souls, which some of us will gladly trade for anything new and shiny in this new shiny world of second class techno toys our masters will keep tossing to us until we babble and drool our way to mental oblivion.

    I do not want their shit-encrusted footprints mucking up this site.

    Resist.

    My rant is over. Give me a cookie. And a hammer.
    Baden

    Wow. That was hands down the most epic piece of short literature I've read here since @Hanover's short story about his childhood navigating neighborhood sewers at 5 o'clock in the morning to get to and from school. Bravo, old top. What a good day to not have feelings. Because that tirade was unrelenting. You should write professionally, if you don't already. :up:
  • frank
    16k
    AI LLMs make naught but verbal cardboard.Baden

    The next time you're homeless you're going to wish you had some verbal cardboard.
  • fdrake
    6.7k
    Do I interpret it correctly that we can use ChatGPT in arguments as long as we mark it as a ChatGPT reference? Like, supporting reasoning, but not as a factual source?Christoffer

    Consult with it, then write your own post. You can use it to help you write the post. You need to check what it's saying if it comes up with citations or references or whatever. You need to check what it says in the source, too. Do NOT trust its word on anything.

    Behave Socratically toward it. Ask it questions. Challenge it. Ask for clarification. If you must use it for content, make your post the result of a conversation with it, and put in information you know is right.

    Seed it with your own perspective etc etc.

    Don't just put what someone says on the forum into it and get it to write your response. That's the kind of stuff which will eventually make us have to enforce a no-tolerance policy on it.
  • fdrake
    6.7k
    Do you actually use LLMs to solve problems, answer questions…T Clark

    I use it for programming. It's also okay-ish at regurgitating commonly known things that are everywhere on the internet. I sometimes use it to come up with questions for my students, and worked solutions for those questions. I won't use it for anything I can't verify.

    Edit: I've occasionally used it as a barometer for an opinion. It's pretty good at coming up with banal counterarguments to things you say, "lowest common denominator" responses. You can get it to generate alternatives like that, and it'll make OK guesses at what people will actually say in knee-jerk response.
  • jorndoe
    3.7k
    I've come across groups where it has become almost like a sport to use generative AI tools to disprove biological evolution.
    They tend to present the generated text in the style of scientific papers, except published on their own sites.
    Some were caught with unrelated, fictional authors generated by the tool. :D
    Didn't deter them though.
    Not the kind of thing I would want here on the forums.
  • Janus
    16.5k
    You had me almost believin' for a moment there!
  • Janus
    16.5k
    I wasn't thinking clearly. I should have said "foster laziness", not "prevent laziness". I find nothing to disagree with in what you've said. Perhaps an analogy could be drawn with the use of a calculator. It's a timesaving device, and perhaps no harm is done if the user can perform the functions unaided, but if it becomes a substitute for personal abilities, I think that's a detriment.
  • T Clark
    14k

    That makes sense. I've thought about how I might have used it if it had been around while I was still working. I'm glad I don't have to worry about it.
  • Pierre-Normand
    2.4k
    You clearly have put a lot of thought and effort into how LLMs work and how to make them work better. That seems like a useful exercise. It also raises a question. Do you actually use LLMs to solve problems, answer questions, or discuss issues in the non-LLM world, or only those directly related to the LLMs themselves?T Clark

    I occasionally use it to troubleshoot technical issues. I've also used it (GPT-4) to write new functionality for Oobabooga — a web-based graphical user interface (webui) for locally hosted LLMs — relying on it to reverse engineer the existing project and write all the new code, without my needing to relearn Python. (The task was to create a tree-like structure to record and save the deleted and regenerated branches of a dialogue.)
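    To give a rough idea of what that involved, here is a simplified sketch of the kind of branching structure in question (this is not the actual Oobabooga code; every name below is illustrative only):

```python
# Simplified sketch of a dialogue tree for branching chat histories.
# NOT the actual Oobabooga code; every name here is illustrative.
import json
from dataclasses import dataclass, field


@dataclass
class DialogueNode:
    role: str                                  # "user" or "assistant"
    text: str
    children: list = field(default_factory=list)

    def add_branch(self, role, text):
        """Append a reply as a child; a regenerated reply becomes a sibling branch."""
        child = DialogueNode(role, text)
        self.children.append(child)
        return child

    def to_dict(self):
        return {"role": self.role, "text": self.text,
                "children": [c.to_dict() for c in self.children]}


# Regenerating a reply adds a sibling branch instead of overwriting the old one.
root = DialogueNode("user", "What is the Tractatus about?")
root.add_branch("assistant", "It maps the limits of language...")
root.add_branch("assistant", "Wittgenstein's early picture theory...")  # regenerated branch

with open("dialogue_tree.json", "w") as f:
    json.dump(root.to_dict(), f, indent=2)
```

    The point of the tree (rather than a flat list) is simply that deleted or regenerated replies stay recoverable as sibling branches instead of being lost.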

    I do use it a lot for exploring all sorts of philosophical issues other than the phenomenology of AI. My preferred method is the Socratic maieutic one I alluded to earlier, to help me unpack inchoate insights. I usually already know what region of the literature my intuitions draw from. Although it occasionally misunderstands my request in some subtle way, the misunderstanding is very human-like rather than machine-like. I often only need to provide very vague hints about the nature of the misunderstanding to lead it to correct itself and grasp exactly what I meant (which also isn't very machine-like, and is rather unlikely to happen nearly as fast when my interlocutor is human). The LLMs sometimes remind me of relevant features of the thinking of the philosophers I had in mind that I had either forgotten, overlooked, or been ignorant of. It is actually very good at sourcing. It can pinpoint the exact paragraph in the Tractatus, the Philosophical Investigations, or Aristotle's Nicomachean Ethics (and quote it verbatim) that an idea comes from, even when I present the idea in very abstract form and it isn't one of those Wittgenstein or Aristotle are most famous for. "Turns out LLMs don't memorize that much"
  • Banno
    25.3k
    Seems to me to leave you wide open to being misled.
  • SophistiCat
    2.2k
    I mean even banning it for simple purposes such as improving grammar and writing clarity. Of course this will rely on the honesty of posters since it would seem to be impossible to prove that ChatGPT has been used.Janus

    One learns to write better primarily through example and practice, but having live feedback that points out outright mistakes and suggests improvements is also valuable.

    As a non-native speaker, I owe much of my learning to reading and writing, but that is because I am pretty long in the tooth. Once spell- and grammar-checkers came to be integrated into everything, I believe they did provide a corrective for some frequent issues in my writing. I've briefly experimented with some free AI tools for improving style, but so far I haven't been very impressed by them.
  • Pierre-Normand
    2.4k
    I've briefly experimented with some free AI tools for improving style, but so far I haven't been very impressed by them.SophistiCat

    As a child and teen, lacking any talent for foreign languages, I was completely unable to learn English in spite of its being taught to me every single year from first grade in primary school until fifth grade in secondary school. Until I was 21, I couldn't speak English at all and barely understood what was spoken in English-language movies. I thereafter learned on my own, by forcing myself to read English books I was interested in that had not yet been translated into French and looking up every third word in an English-to-French dictionary. Ever since, I've always struggled to construct English sentences and make proper use of punctuation, prepositions, and marks of the genitive.

    Oftentimes, I simply ask GPT-4 to rewrite what I wrote in better English, fixing the errors and streamlining the prose. I have enough experience reading good English prose to immediately recognise that the output constitutes a massive improvement over what I wrote without, in most cases, altering the sense or my communicative intentions in any meaningful way. The model occasionally substitutes a better word or phrase for expressing what I meant to express. It is those last two facts that most impress me. I still refrain from using LLMs to streamline my prose when posting to TPF without disclosing it, in part for the reasons I mentioned above regarding the unpacking of insights and the aim of philosophical dialogue.
  • Wayfarer
    22.8k
    Perhaps a clause could be added: Users are encouraged, as a matter of etiquette and transparency, to acknowledge input from chat engines when appropriate.
  • Pierre-Normand
    2.4k
    Seems to me to leave you wide open to being misled.Banno

    It does. Caveat emptor. LLMs, in virtue of the second stage of their training (reinforcement learning from human feedback), aim at being useful and agreeable to their users. They can therefore help make users feel more secure and comfortable within their epistemic bubbles. What constitutes a good reason not to believe something, or a good criticism of it, is oftentimes only visible from the standpoint of an alternative paradigm, outside of this bubble. I've already commented above on the unsuitability of using LLMs to source philosophical claims (regardless of their reliability or lack thereof) due to the fact that an LLM doesn't stake its own grounds. But the very fact that LLMs don't have any skin in the game also means that they've soaked up reasons for and against claims from all the practical and theoretical paradigms represented in their training data. They also, by design, aim at coherence. They therefore have the latent ability to burst epistemic bubbles from the outside in, as it were. But this process must be initiated by a human user willing to burst their own epistemic bubbles with some assistance from the LLM.
  • Banno
    25.3k
    You attribute intent to LLMs. That's at best premature. LLMs have no idea what it is to tell the truth, any more than they know how to lie. They do not soak up reasons, stake grounds or make claims.

    This will not end well.
  • Pierre-Normand
    2.4k
    You attribute intent to LLMs. That's at best premature. LLMs have no idea what it is to tell the truth, any more than they know how to lie. They do not soak up reasons, stake grounds or make claims.Banno

    Well, I did single out, as a distinguishing feature of theirs, that they don't stake grounds. As for the issue of attributing cognitive states or cognitive skills to them, that would be better discussed in another thread.
  • praxis
    6.6k
    Don't just put what someone says on the forum into it and get it to write your response.fdrake

    I understand what you mean. It's important to engage thoughtfully and considerately, especially when responding to others online. Taking the time to craft responses that reflect understanding and respect for others' viewpoints is key to meaningful conversations.
  • praxis
    6.6k


    It sounds like something might have struck a nerve. Want to talk about what's going on?
  • jorndoe
    3.7k
    You attribute intent to LLMs.Banno

    It's common to use words like "understands" about trained LLMs, too.
    I can understand why that's used, but maybe better words are needed.
    "Can-process-but-not-really-understand"?
    "Appears-to-understand-but-doesn't-grasp"?
  • Pierre-Normand
    2.4k
    "Appears-to-understand-but-doesn't-grasp"?jorndoe

    Grasps but doesn't hold.
  • fdrake
    6.7k


    That's a good clarification. I'll add it.
  • Christoffer
    2.1k
    You need to check what it says in the source, too. Do NOT trust its word on anything.fdrake

    Isn't this true for any source? Isn't the correct way of using any source to double-check and verify it rather than treat it outright as a source of facts? If we objectively compare ChatGPT with unverified human sources or pseudo-media tailored to pass as factual sources, I find ChatGPT safer than uncritically using the online flow of information. People use unverified sources that are sometimes malicious and deliberate in their pursuit of manipulating online discourse, to the point of reducing truth to obscurity.

    I'm of the opinion that, regardless of the source, everything needs to be double-checked and verified in discourse, but I haven't seen this kind of doubling down on other uses of sources.

    Why do we value other unverified sources that may very well be constructed with malicious intent, or by people who want to pose as factual? Just because they look and sound like news articles or papers? I've seen blog posts used as "factual sources" without any attempt to dissect them before making them critical pillars of a conversation.

    On top of that:

    The intent of the rule is to stop people from using it to spread misinformation and from generating reams of undigested content.fdrake

    This could be a worse problem with human-generated sources that are malicious or that dress up belief in the appearance of factual reporting. It's an ongoing problem that a lot of what is found online is generated by ChatGPT. So even sourcing "real" articles could be just as bad as, or worse than, using GPT directly, since we don't know what the prompts behind those seemingly "human-written" articles were intended to do.

    Don't just put what someone says on the forum into it and get it to write your response. That's the kind of stuff which will eventually make us have to enforce a no-tolerance policy on it.fdrake

    This is something I strongly agree with. Using it that lazily really just shows that the person is here to pretend to think about different topics and to chase the emotional dopamine of winning and performing in debates at a higher level, rather than being curiously and honestly interested in the actual subject.

    The problem is still that it's impossible to know whether someone does this going forward. The more advanced these systems become, the less obvious their responses will be, especially if the person using them is good at prompt engineering, since they could just engineer away the quirks and patterns of language that give away a specific model.

    On the other hand, going by how capable the o1 model has proven to be at reasoning and analysis, I also think it's not good to over-correct in all of this. There might soon come a time when these models are much more reliable in their responses than anything found online in the traditional way, especially in the context of philosophy, science, and literature, when the material can't be found on traditional academic sites, or once the models themselves are able to sift through academic sources.

    None of this is an argument against the rule, only a conversation about it and what parameters it should possess.

    I see a lot of conversations online that draw hard lines between human-generated content and LLMs, without underscoring just how bad most human sources online really are. Statistically, very little of the information people produce online is factual, but it's still used as grounds for arguments.

    Posts should not be written by an LLM, but using something like an o1 analysis and clearly marking it as such wouldn't be any more problematic than using unverified links to blogs or other texts online.

    So I think the right sentiment about how to use LLMs is to always mark output as LLM-generated, and to treat such analysis or text not as the main supporting pillar of an argument, but rather as a source of a different perspective, giving clues about where to look for further answers and information.

    And referencing Pierre:

    They therefore have the latent ability to burst epistemic bubbles from the outside in, as it were. But this process must be initiated by a human user willing to burst their own epistemic bubbles with some assistance from the LLM.Pierre-Normand

    It may also be good to have instructions for those who feel they want to use LLMs, because what Pierre writes here is possibly why some have a good experience with LLMs while others just generate trash. Asking the LLM to analyze something critically, including opening yourself to criticism by asking for it, produces a much more balanced output that often prompts better self-reflection, since there's no human emotion behind the generated criticism. There's no one getting angry at you and criticizing you because you said something they don't like. The LLM, when prompted to be critical of your own writing, often cites sources that specifically underscore the weaknesses in your argument, and that makes for quite a powerful kind of ego-death in reasoning, bringing you back to a more grounded state from the high of your own writing.

    In my own research on LLMs and in testing them out, I've found they can act as great Socratic partners for testing ideas. And when I get stuck on certain concepts, they often help break those concepts down and show the problems with my own biases and fallacies in reasoning.

    So, while the rules point out how not to use LLMs, we can't ignore the fact that LLMs will keep evolving and being used more and more, so tips on how to use them, and for what, could also benefit the forum: things like what types of questions to ask and how to ask them, and how to take what you've written and question your own arguments with the LLM so as to better understand your own opinions and ideas before posting.

    There's so much polarization and extremely binary thinking around AI today that I think the nuances get lost. It's either "ban it" or "let it loose", rather than banning certain uses and finding the uses that are beneficial.

    Since LLMs will only grow in use and popularity, it might even be important to have pinned information on "how to use LLMs", clarifying what not to use them for, as well as tips on which models are preferred and how to prompt them correctly in order to get balanced outputs that don't play into the user's own biases and bubbles.

    Telling people how not to use a tool is just as important as telling them how to use it correctly.
  • fdrake
    6.7k


    The primary difference, as I see it, is that if someone uses a shite source but puts it in their own words, the person has spent a shitload of time doing something which will get easily rebuked, which incentivises engagement and reflection. You can even refute the source. In contrast, chatbot output doesn't provide the source for its musings (unless you're asking it to find quotes or whatevs), and you can use it to generate screeds of quite on-topic but shallow text at little time cost to its users.

    Judicious use of chatbots is good. As far as I see it, you're defending responsible use of them. That's fine. Unfortunately there are plenty of instances, even on this forum, where people have not been responsible with their use. In my book I'm putting this ruling under "I'm sorry this is why we can't have unrestricted access to nice things".

    If people used it exclusively the way you and Pierre do, there would be little need for the ruling. And perhaps in the future people will. With the kind of use you both put it to, it does produce posts which are at least indistinguishable from human-generated creativity, and perhaps even better than what you would produce without the assistance. That's true for me in my professional life as well.

    tldr: you cannot trust the generic end user to use it responsibly. I wish this were not true, but it is.
  • bongo fury
    1.7k
    Should we have some guidelines on acceptable use of plagiarism on the forum?

    Oh, we already do?
  • Pierre-Normand
    2.4k
    With the kind of use you both put it to, it does produce posts which are at least indistinguishable from human-generated creativityfdrake

    Yay! We passed the Turing test!
  • fdrake
    6.7k


    Bah. It borrows your intentions.
  • Christoffer
    2.1k
    you cannot trust the generic end user to use it responsibly. I wish this were not true, but it is.fdrake

    Then we return to the problem of how to distinguish such use. The more advanced these get, the less likely it is that their use for post-generation can be spotted.

    This whole thing really becomes very philosophical in nature... fitting for this forum. It all becomes a P-zombie problem for written posts: if you cannot distinguish someone's own writing from that of someone who knows how to use LLMs to perfectly mimic a user's writing, how can the rules be enforced?

    It's similar to the problem of spotting generated images and deepfakes. Since the tech advances so fast, the solution ends up being another AI used to analyze whether an image is generated.

    At some point we might need proper and judicious use of AI to counter posts that can't otherwise be judged as generated or not, either by analyzing the language used or by using it to deconstruct the merits of the argument in order to find the sources, or lack of sources.

    But then we end up in a situation in which the intention is to spot the misuse of LLMs, but the method ends up being a proper philosophical debate, with LLMs pitted against each other.

    I don't know if it's feasible, but one of the only concepts I can think of that would properly pinpoint the use of LLMs is a timer coded into the forum, tracking how long it took to write a post. The only flaw would be if someone writes posts outside the forum and then pastes them here, but I do think most members write directly in the forum in order to properly use the quote tools and such.

    Point being, if the forum can track how long a post took to write, and whether it was formed with keystrokes within the range of how people normally type, then that would be some evidence that a post was actually written rather than just generated and copy-pasted into the forum.

    At least food for thought on coding new features and functions in a post-LLM world of discussions online. :chin:
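    To make the idea a bit more concrete, here is a rough sketch of the kind of server-side check such timing data could feed (purely hypothetical; the names and thresholds below are invented, and it presupposes keystroke timing the forum doesn't currently record):

```python
# Hypothetical heuristic for flagging posts that look pasted rather than typed.
# The names and thresholds below are made up for illustration; a real version
# would need client-side timing data the forum does not currently collect.
from dataclasses import dataclass


@dataclass
class TypingProfile:
    char_count: int          # length of the submitted post
    editing_seconds: float   # how long the composer window was open
    keystrokes: int          # keystrokes recorded while composing


def looks_pasted(profile, max_chars_per_second=15.0, min_keystroke_ratio=0.5):
    """Return True if the post was more likely pasted than typed live.

    Two simple signals: an implausibly high characters-per-second rate,
    or far fewer keystrokes than characters in the final text.
    """
    if profile.editing_seconds <= 0:
        return True
    chars_per_second = profile.char_count / profile.editing_seconds
    keystroke_ratio = profile.keystrokes / max(profile.char_count, 1)
    return (chars_per_second > max_chars_per_second
            or keystroke_ratio < min_keystroke_ratio)


# Example: 4,000 characters submitted after 90 seconds with only 40 keystrokes.
print(looks_pasted(TypingProfile(char_count=4000, editing_seconds=90.0, keystrokes=40)))  # True
```

    Of course, such a heuristic would only ever be suggestive, not proof, for exactly the reason given above: people can legitimately draft posts elsewhere and paste them in.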

    Should we have some guidelines on acceptable use of plagiarism on the forum?bongo fury

    If you mean the concept of LLMs generating plagiarism: the more I've dug into the technology and compared it to how humans generate something, the less I think we can blame LLMs for plagiarism. Much of the evidence has been attributed to quirks of ongoing technological development, and the arguments keep sliding into cult-like behavior online by people who try to weaponize language in order to fight against AI technology. Terms like plagiarism and theft are being used so haphazardly that valid points of criticism risk being easily dismissed, due to an overreliance on the terms as factual descriptions of the technology when in fact no such definitions have yet been established.

    The overlap between how the tech operates and how humans operate in creating something makes it hard to reliably and properly define the boundaries. It's similar to how some concept artists blamed the initial models for plagiarism when they themselves traced photographs grabbed from a Google search, which technically is a much more direct use of someone else's work without credit.

    And for text generation, the problem with LLMs usually comes down to accidental rather than intentional plagiarism. Accidental plagiarism mostly occurs when sources aren't cited properly and the sourced text ends up as part of the author's text. This often happens in academic writing and is sometimes hard to spot. But new reinforcement learning for models like o1 seems to combat these accidents better (not perfectly), and in time they might do even better in this regard than the majority of human writers.

    Point being that any text written as a new sentence cannot be considered plagiarism, even if the communicated information and context of that sentence come from memorized information. Human language would become a mess if we had to double-check everything we write like that. We assume that when we write something, the processes in our brain count as enough creativity for it not to be plagiarism. Yet we have the same ability to accidentally plagiarize, even when writing normally, and we aren't aware of any of it until someone points it out. Like, how do I know that what I write here hasn't been written somewhere else; some lines I read at some time in the past, and I'm accidentally typing up the same sentence because my memory happened to form it around the same contextual information I intend to communicate?

    We source others' information constantly. Anything we hear, see, read, or even taste becomes part of a pool of data we use to create something new, a remix of it all. The problem with simply dismissing all AI models as plagiarism or theft is that the terms aren't used properly within the context of the criticism. It's the cart before the horse: people want to criticize and take down the AI models first and try to apply a reason for it as an afterthought. But for the terms to apply correctly, they must have a contextual definition of how they actually apply to the AI models, and there isn't one, since people only use them haphazardly. By the definitions we use when judging human output, the AI models would most likely be freed of plagiarism and theft accusations rather than found guilty, especially since we don't attribute a person's single act of plagiarism to all the text they have ever written and every new sentence they will ever write.

    It is semantics, but semantics are always important when defining law and morality in discussions about uncharted territories like this. What it boils down to is that until the criticism of AI models can find solid philosophical ground, maybe even as a newly defined account of how humans and AI models will co-exist legally and morally going forward, all the AI criticism just ends up being "I don't like AI, so I want it banned". It's OK not to like it, and it's OK to fear the abuse it can be put to, but "I don't like it" has never been enough to properly ban a technology or to help structure a working foundation and boundary for it. It ends up becoming just another Luddite argument to ban machines, rather than the necessary philosophical argument for how we can co-exist with this new technology.