Comments

  • Banning AI Altogether
    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.Janus

    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
  • Banning AI Altogether
    Should we argue...Joshs

    What was being argued was that the research required to put together an idea is tedious and outsourceable, and that what one should do is outsource that research, take the pre-made idea from the LLM-assistant, and "get on with the task of developing the idea to see if it works." Maybe try responding to that?
  • How to use AI effectively to do philosophy.
    According to who?Fire Ologist

    The Puppeteer, of course.
  • Banning AI Altogether
    OK. So somewhere between black and white, thus not a blanket ban. :up:apokrisis

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    To be clear, my approach would be pretty simple. It is not concerned with plagiarism, but with the outsourcing of one's thinking, and it is not implemented primarily by a rule, but by a philosophical culture to which rules also contribute. The rule itself would be simple, such as this:

    "No part of a post may be AI-written, and AI references are not permitted"Leontiskos

    I've argued elsewhere that it doesn't really matter whether there is a reliable detection-mechanism (and this is why I see the approach as somewhat nuanced). The rule is supporting and reflecting a philosophical culture and spirit that will shape the community.

    But I don't begrudge anything about @Baden's approach. I actually hope it works better than what I would do. And our means are not at odds. They are just a bit different.

    Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful.apokrisis

    My purpose is quality philosophical dialogue, not plagiarism. I think a focus on sources rather than intermediaries improves philosophical dialogue, and that's the point. Analogously, focus on primary rather than secondary sources also improves philosophical dialogue, independent of whether the primary sources are receiving insufficient royalties.

    The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail.apokrisis

    Yes, I agree.

    What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind?apokrisis

    To put it concisely, I think philosophical dialogue is about thinking our own thoughts and thinking our (human) interlocutor's thoughts, and that this is especially true in a place like TPF. LLMs are about providing you with pre-thought thoughts, so that you don't have to do the thinking, or the research, or the contemplation, etc. So there is an intrinsic incompatibility in that sense. But as a souped-up search engine LLMs can help us in this task, and perhaps in other senses as well. I just don't think appealing to an LLM qua LLM in the context of philosophical dialogue is helpful to that task.

    And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively.apokrisis

    I think that's all true, but I think what I said still holds.

    Maybe you are implying that LLM-appeals would improve the philosophical quality of TPF? Surely LLMs can improve one's own philosophy, but that's different from TPF on my view. I can go lift dumbbells in the gym to train, but I don't bring the dumbbells to the field on game day. One comes to TPF to interact with humans.

    So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say.apokrisis

    If someone sees a crackpot post; goes to their LLM and asks it to find a source demonstrating that the post is crackpot; reads, understands, and agrees with the source; and then presents that source along with the relevant arguments to show that the post is crackpot; then I think that's within the boundary. And I have no truck with the view which says that one must acknowledge their use of the LLM as an intermediary. But note that, on my view, what is prohibited is, "My LLM said you are wrong, therefore you are wrong. Oh, and here's a link to the LLM output."

    But I am not a mod so there is no need to focus especially on my view. If I've said too much about it, it is only because you thought I endorsed @Baden's approach tout court.
  • Banning AI Altogether
    I agree in spirit. But let's be practical.

    A blanket ban on LLM generated OPs and entire posts is a no brainer.
    apokrisis

    Okay, we agree on this.

    I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it.apokrisis

    I tried to argue against appeal-to-LLM arguments in two recent posts, here and here.

    In general I would argue that LLMs are a special kind of source, and cannot be treated just like any other source is treated. But a large part of my argument is found here, where the idea is that an LLM is a mediatory and private source. One may use an LLM, but the relevant sourcing should go to the LLM's sources, not the LLM itself, and if one is not familiar with the LLM's sources then one should not be taking a stand with regard to arguments based on those sources.

    Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element.apokrisis

    Possibly, but I care less about transparency and more about not promoting a forum where thinking is outsourced to LLMs. I see plagiarism as a small matter compared to the outsourcing of one's thinking.

    Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach?apokrisis

    Rules must be black and white to a large extent. I would argue that your approach is less nuanced than mine, and this is because you want something that is easier to implement and less unwieldy. The key is to find a guideline that is efficacious without being nuanced to the point of nullity.

    I appreciate your input. I have to get back to that other thread on liberalism.
  • How to use AI effectively to do philosophy.
    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?Leontiskos

    Another aspect of this is scarcity. LLM content is not scarce in the way human content is. I can generate a thousand pages of LLM "philosophy" in a few minutes. Someone who spends considerable time and energy on an OP or a post can therefore be met with a "This LLM output says you're wrong," generated lazily in a matter of seconds.

    Forums already have a huge struggle with eristic, showboating, and falsification-for-the-sake-of-falsification. Give such posters free access to a tool that allows them to justify their disagreement at length at the snap of a finger, and guess what happens?

    I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of @Jamal's arguments, it may become more obvious that there is a problem at stake.

    (@Baden, @Jamal)
  • The Old Testament Evil
    - Some examples of accounts that have been given in the past are spiritual accounts and also genetic accounts. The basic idea is that humankind is more than just a number of irremediably separate individual parts; that there is a real interconnection. I am not exactly sure of the mechanism, but in fact this idea is quite common historically, and especially outside of strongly individualistic cultures like our own. In Christianity the idea is taken for granted when it is said that at the Incarnation God took on human nature, and thus elevated all humans in that event.
  • Banning AI Altogether
    - The context here is a philosophy forum where humans interact with other humans. The premise of this whole issue is that on a human philosophy forum you interact with humans. If you do not accept that premise, then you are interested in a much broader discussion.
  • Banning AI Altogether


    ...a similar argument could be given from a more analytic perspective, although I realize it is a bit hackneyed. It is as follows:

    --

    The communal danger from AI lies in the possibility that the community come to outsource its thinking as a matter of course, constantly appealing to the authority of AI instead of giving organic arguments. This danger is arguably epistemic, in the sense that someone who is interacting with an argument will be doing philosophy as long as they do not know that they are interacting with AI. For example, if Ben is using AI to write his posts and Morgan does not know this, then when Morgan engages Ben's posts he will be doing philosophy. He will be—at least to his knowledge—engaging in human-to-human philosophical dialogue. Ben hurts only himself, and Morgan is (mostly) unaffected.

    --

    There are subtle ways in which this argument fails, but it does point up the manner in which a rule need not "catch" every infraction. Ben can lie about his posts all he likes, and Morgan will not be harmed in any serious way. Indeed it is salutary that Ben hide his LLM-use, both for Morgan and the community, but also for Ben.
  • Why do many people belive the appeal to tradition is some inviolable trump card?
    Why do many people belive the appeal to tradition is some inviolable trump card?unimportant

    Tradition is not infallible; it's just better than most things. Humans are intelligent; they do things for reasons; the things they do over and over tend to have very sound or deep reasons; therefore tradition is a reliable norm. Most thinking is faddish, and therefore tradition is a good rule of thumb.
  • Banning AI Altogether
    We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it.Baden

    Good stuff.

    And if the product is undetectable, our site will at least not look like an AI playground.Baden

    The "undetectability" argument turns back on itself in certain respects. Suppose AI-use is undetectable. Ex hypothesi, this means that AI-use is not detrimental, for if something cannot be detected then it cannot be detrimental (or at least it cannot be identified as the cause of any detriment). But this is absurd. The whole premise of a rule against AI-use is that excessive and inappropriate AI-use would be detrimental to the forum, and what is detrimental to the forum is obviously also detectable. There is an equivocation occurring between being able to detect every instance of AI-use, and AI-use being a detectable cause given certain undesirable effects.

    So I want to say that one should think about generating a philosophical culture that is averse to outsourcing thinking to AI, rather than merely thinking about a rule and its black-and-white enforcement. It shouldn't be too hard to generate that culture, given that it already exists in anyone remotely interested in philosophy. This is precisely why it is more important that the general membership would heed such a rule, whether or not the rule could be enforced with some measure of infallibility. The rule is not heeded for mere fear of being found out and punished, but rather because it is in accord with the whole ethos of philosophical inquiry. This is in accord with Kant's idea of respect for a law, rather than obeying out of fear or self-interest.

    In order to be effective, a rule need not be infallibly enforceable. No rule achieves such a thing, and the rules are very rarely enforced in that manner. It only needs to track and shape the cultural sense of TPF with respect to AI.

    Of course it goes far beyond AI. The fellow who is mindlessly beholden to some particular philosopher, and cannot handle objections that question his philosopher's presuppositions, does not receive much respect in philosophical circles, and such a fellow does not tend to prosper in pluralistic philosophical settings. So too with the fellow who constantly appeals to AI. The TPF culture already opposes and resists the outsourcing of one's thinking, simply in virtue of the fact that the TPF culture is a philosophical culture. The rule against outsourcing one's thinking to AI is obvious to philosophers, and those who aspire towards philosophy certainly have the wherewithal to come to understand the basis for such a rule.

    But I should stress that a key point here is to avoid a democratization of the guidelines. On a democratic vote we will sell our thinking to AI for a bowl of pottage. The moderators and owners need to reserve this decision for themselves, and for this reason it seems fraught to have an AI write up a democratic set of guidelines, where everyone's input is equally weighed (or else weighed in virtue of their post-count).
  • The Old Testament Evil
    - I think the ontological reality would ground juridical judgments, such as those in question. In traditional, pre-Reformation Christianity God does not make juridical judgments if there is no ontological basis for the judgments.
  • How to use AI effectively to do philosophy.
    Reflectivity and expressivity, along with intuition and imagination are at the heart of what we do here, and at least my notion of what it means to be human.Baden

    I would agree. I would want to say that, for philosophy, thinking is an end in itself, and therefore cannot be outsourced as a means to some further end.

    And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel.Baden

    I'm encouraged that you're willing to put in the work.

    As above, I don't see how the line can be drawn in such a way that mere appeals to AI authority—whether an implicit appeal as found in a post with nothing more than a quoted AI response, or an explicit appeal where one "argues" their position by mere reference to AI output—are not crossing the line. If one can cite AI as an authority that speaks for itself and requires no human comment or human conveyance, then it's not clear why the AI can't speak for itself tout court.

    We could envision a kind of limit case where someone queries AI and then studies the output extensively. They "make it their own" by agreeing with the arguments and the language to such an extent that they are committed to argue the exact points and words as their own points and words. They post the same words to TPF, which they have "baptized" as their own and are willing to defend in a fully human manner. Suppose for the sake of argument that such a thing would be formally permissible (even if, materially, it would be sanctioned or flagged). What then would be the difference when someone posts AI output to justify their claims? ...And let us suppose that in both cases the AI-sourcing is transparent.

    If one wants members to think in a manner that goes beyond AI regurgitation, then it would seem that quote-regurgitations of AI fall into the same category as first-person regurgitations of AI. Contrariwise, if I love Alasdair MacIntyre, imbibe his work, quote him, and begin to sound like him myself, there is no problem. There is no problem because MacIntyre is a human, and thus the thinking being emulated or even regurgitated is human thinking. Yet if someone imbibes AI, quotes it constantly, and begins to sound like AI themselves, then the "thinking" being emulated or regurgitated is non-human thinking. If I quote MacIntyre and appeal to his authority, I am appealing to the authority of a thinking human. When Banno quotes AI and appeals to its authority, he is appealing to the authority of a non-thinking language-piecing algorithm.

    The laissez-faire approach to sourcing leads to camps, such as the camp of people who take Wittgenstein as an authority and accept arguments from the authority of Wittgenstein, and those who don't. The laissez-faire approach to AI sourcing will lead to the same thing, where there will be groups of people who simply quote AI back and forth to each other in the same way that Wittgensteinians quote Wittgenstein back and forth to each other, and on the other hand those who do not accept such sources as authorities. One difference is that Wittgenstein and MacIntyre are humans whereas AI is not. Another difference is that reading and exegeting Wittgenstein requires philosophical effort and exertion, whereas LLMs were basically created to avoid that sort of effort and exertion. Hence there will be a much greater impetus to lean on LLMs than to lean on Wittgenstein.

    Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking? If so, then it doesn't matter whether we provide the proper citation to the LLM source.* What matters is that we are letting the LLM do our thinking for us. "It's true because the LLM said so, and I have no need to read the LLM's sources or understand the underlying evidence."

    (Cf. The LLM is a private authority, not a public authority, and therefore arguments from authority based on LLMs are invalid arguments from authority.)


    * And in this case it is equally true that the "plagiarism" argument is separate and lesser, and should not be conflated with the deeper issue of outsourcing thinking. One need not plagiarize in order to outsource their thinking.
  • How to use AI effectively to do philosophy.
    If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address.Baden

    So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated?

    How is this in line with the human-to-human interaction that the rule is supposed to create?
  • How to use AI effectively to do philosophy.
    Arguably the most important part of the job is very often the "calculator" task, the most tedious task.Jamal

    The point is that you've outsourced the drafting of the guidelines to AI. Whether or not drafting forum guidelines is a tedious, sub-human task is a separate question.

    But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as".Jamal

    You are claiming that, "We encourage using LLMs as assistants for research, brainstorming, and editing," means, "If one wishes to use an LLM, we would encourage that they use the LLM in X way rather than in Y way." Do you understand that this is what you are claiming?

    It is very helpful when those who enforce the rules write the rules. When this does not happen, those who enforce the rules end up interpreting the rules contrary to their natural meaning.
  • The Preacher's Paradox
    Faith translates into Russian as "VERA."Astorre

    It's an interesting discrepancy: Etymologically, Latin "fides" means 'trust', but Slavic "vera" (related to Latin "verus") means 'truth'.baker

    This looks to be a false etymology. The Latin fides and the Slavic vera are both translations of the Greek pistis, and vera primarily means faith, not truth. The two words do share a common ancestor (were-o), but vera is not derived from verus, and were-o does not exclude faith/trustworthiness.
  • The Preacher's Paradox
    I was surprised by the depiction of what is said to be "Socratic" in your account of the Penner article.Paine

    Well that sentence about "standing athwart" was meant to apply to Kierkegaard generally, but I think Fragments is a case in point. The very quote I gave from Fragments is supportive of the idea (i.e. the Socratic teacher is the teacher who sees himself as a vanishing occasion, and such a teacher does not wield authority through the instrument of reason).

    If I do try to reply, it would be good to know if you have studied Philosophical Fragments as a whole or only portions as references to other arguments.Paine

    I am working through it at the moment, and so have not finished it yet. I was taking my cue from the Penner article I cited, but his point is also being borne out in the text.

    Here is a relevant excerpt from Piety's introduction:

    The motto from Shakespeare at the start of the book, ‘Better well hanged than ill wed’, can be read as ‘I’d rather be hung on the cross than bed down with fast talkers selling flashy “truth” in a handful of proposition’. A ‘Propositio’ follows the preface, but it is not a ‘proposition to be defended’. It reveals the writer’s lack of self-certainty and direction: ‘The question [that motivates the book] is asked in ignorance by one who does not even know what can have led him to ask it.’ But this book is not a stumbling accident, so the author’s pose as a bungler may be only a pose. Underselling himself shows up brash, self-important writers who know exactly what they’re saying — who trumpet Truth and Themselves for all comers. — Repetition and Philosophical Crumbs, Piety, xvii-xviii

    He goes on to talk about Climacus in light of the early Archimedes and Diogenes images. All of this is in line with the characterization I've offered.

    I want to say that Penner's point is salutary:

    One stubborn perception among philosophers is that there is little of value in the explicitly Christian character of Søren Kierkegaard’s thinking. Those embarrassed by a Kierkegaardian view of Christian faith can be divided roughly into two camps: those who interpret him along irrationalist-existentialist lines as an emotivist or subjectivist, and those who see him as a sort of literary ironist whose goal is to defer endlessly the advancement of any positive philosophical position. The key to both readings of Kierkegaard depends upon viewing him as more a child of Enlightenment than its critic, as one who accepts the basic philosophical account of reason and faith in modernity and remains within it. More to the point, these readings tend to view him through the lens of secular modernity as a kind of hyper- or ultra-modernist, rather than as someone who offers a penetrating analysis of, and corrective to, the basic assumptions of modern secular philosophical culture. In this case, Kierkegaard, with all his talk of subjectivity as truth, inwardness, and passion, the objective uncertainty and absolute paradox of faith, and the teleological suspension of the ethical, along with his emphasis on indirect communication and the use of pseudonyms, is understood merely to perpetuate the modern dualisms between secular and sacred, public and private, object and subject, reason and faith—only as having opted out of the first half of each disjunction in favor of the second. Kierkegaard’s views on faith are seen as giving either too much or too little to secular modernity, and, in any case, Kierkegaard is dubbed a noncognitivist, irrationalist antiphilosopher.

    Against this position, I argue that it is precisely the failure to grasp Kierkegaard’s dialectical opposition to secular modernity that results in a distortion of, and failure to appreciate, the overtly Christian character of Kierkegaard’s thought and its resources for Christian theology. Kierkegaard’s critique of reason is at the same time, and even more importantly, a critique of secular modernity. To do full justice to Kierkegaard’s critique of reason, we must also see it as a critique of modernity’s secularity.
    — Myron Penner, Kierkegaard’s Critique of Secular Reason, 372-3

    I find the readings that Penner opposes very strange, but they are nevertheless very common. They seem to do violence to Kierkegaard's texts and life-setting, and to ignore his affinity with a figure like J. G. Hamann (who is also often mistaken as an irrationalist by secular minds). Such readings go hand in hand with the OP of this thread, which takes them for granted even without offering any evidence for the idea that they come from Kierkegaard.
  • The Old Testament Evil
    I apologize for the incredibly belated response!Bob Ross

    No worries.

    I see what you are saying. The question arises: if God is not deploying a concept of group guilt, then why wouldn’t God simply restore that grace for those generations that came after (since they were individually innocent)?Bob Ross

    Yes, good. That is one of the questions that comes up.

    What do you think?Bob Ross

    That's an interesting theory, with a lot of different moving parts. I'm not sure how many of the details I would want to get into, especially in a thread devoted to Old Testament evil.

    My thought is that there must be some ontological reality binding humans one to another, i.e. that we are not merely individuals. Hence God, in creating humans, did not create a set of individuals, but actually also created a whole, and there is a concern for the whole qua whole (which does not deny a concern for the parts). If one buys into the Western notion of individualism too deeply, then traditional Christian doctrines such as Original Sin make little sense.

    annihilation is an act of willing the bad of something (by willing its non-existence)...Bob Ross

    That's an interesting argument, and it may well be correct. Annihilation is certainly unheard of in the Biblical context, and even the notion of non-being is something that develops relatively late.
  • How to use AI effectively to do philosophy.
    the difference between consulting a secondary source and consulting an llm is the following:
    After locating a secondary source one merely jots down the reference and that’s the end of it.
    Joshs

    Well, they could read the secondary source. That's what I would usually mean when I talk about consulting a secondary source.

    When one locates an argument from an llm...Joshs

    Okay, but remember that many imbibe LLM content without thinking of it as "arguments," so you are only presenting a subclass here.

    When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote.Joshs

    Right, and also reading the reference. If someone uses an LLM as a kind of search engine for primary or secondary sources, then there is no concern. If someone assents to the output of the LLM without consulting (i.e. reading) any of the human sources in question, or if one is relying on the LLM to summarize human sources accurately, then the problems in question do come up, and I think this is what often occurs.

    The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all.Joshs

    What do you mean, "The danger of falsehood doesn't come up at all?"

    It seems to me that you use LLMs more responsibly than most people, so there's that. But I think there is a very large temptation to slip from responsible use to irresponsible use. LLMs were built for quick answers and the outsourcing of research. I don't find it plausible that the available shortcuts will be left untrodden.

    If the LLM is merely being used to find human sources, which are in turn consulted in their own right, then I have no more objection to an LLM than to a search engine. I have elsewhere given an argument to the effect that LLMs should not be directly used in philosophical dialogue (with other humans). I am wondering if you would disagree.
  • The Preacher's Paradox
    - Have you offered anything more than an appeal to your own authority? I can't see that there is anything more, but perhaps I am missing something.
  • How to use AI effectively to do philosophy.
    Again, you have not even attempted to show that the AI's summation was in any way inaccurate.Banno

    True, and that's because there is no such thing as an ad hominem fallacy against your AI authority. According to the TPF rules as I understand them, you are not allowed to present AI opinions as authoritative. The problem is that you have presented the AI opinion as authoritative, not that I have disregarded it as unauthoritative. One simply does not need some counterargument to oppose your appeal to AI. The appeal to AI is intrinsically impermissible. That you do not understand this underlines the confusion that AI is breeding.
  • How to use AI effectively to do philosophy.
    The AI is not being appealed to as an authorityBanno

    But it is, as I've shown. You drew a conclusion based on the AI's response, and not based on any cited document the AI provided. Therefore you appealed to the AI as an authority. The plausibility of the conclusion could come from nowhere else than the AI, for the AI is the only thing you consulted.

    This goes back to what I've pointed out a number of times, namely that those who take the AI's content on faith are deceiving themselves when they do so, and are failing to see the way they are appealing to the AI as an authority.
  • How to use AI effectively to do philosophy.
    It's noticeable that you have not presented any evidence, one way or the other.

    If you think that what the AI said is wrong, then what you ought do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.

    But that is not what you have chosen to do. Instead, you cast aspersions.
    Banno

    I am pointing out that all you have done is appealed to the authority of AI, which is precisely something that most everyone recognizes as a danger (except for you!). Now you say that I am "casting aspersions" on the AI, or that I am engaging in ad hominem against the AI (!).

    The AI has no rights. The whole point is that blind appeals to AI authority are unphilosophical and irresponsible. That's part of why the rule you are trying to undermine exists. That you have constantly engaged in these blind appeals could be shown rather easily, and it is no coincidence that the one who uses AI in these irresponsible ways is the one attempting to undermine the rule against AI.
  • How to use AI effectively to do philosophy.
    No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites...Banno

    But you didn't read the papers it cited, and you wrote, "So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random."

    If you were better at logic you would recognize your reasoning process: "The AI said it, so it must be true." This is the sort of mindless use of AI that will become common if your attempt to undermine the LLM rule succeeds.
  • How to use AI effectively to do philosophy.
    So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok.Banno

    We both know that the crux is not unenforceability. If an unenforceable rule is nevertheless expected to be heeded, then there is no argument against it. Your quibble is a red herring in relation to the steelman I've provided. :roll:

    Baden? Tell us what you think. Is my reply to you against the rules?Banno

    I would be interested, too. I haven't seen the rule enforced despite those like Banno often contravening it.

    It is also worth noting how the pro-AI Banno simply takes the AI at its word, as a blind-faith authority. This is precisely the end game.
  • How to use AI effectively to do philosophy.
    With intended irony...

    Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.

    The result.

    "...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."

    So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random.
    Banno

    That's not irony. That's incoherent self-contradiction. It's also against the rules of TPF.
  • How to use AI effectively to do philosophy.
    I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iphone to type this.Joshs

    You wouldn't see this claim as involving false equivalence?

    If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i while engaging in TPF discussions.Joshs

    No, not really. There are primary sources, there are secondary sources, there are search engines, and then there is the LLM. Consulting a secondary source and consulting an LLM are not the same thing.

    It is worth noting that those who keep arguing in favor of LLMs seem to need to make use of falsehoods, and especially false equivalences.

    ---

    A pissing contest, combined with quasi-efforts at healing existential anxiety.baker

    Lol!

    ---

    Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different.Banno

    Which is the same thing, and of course the arguments I have given respond to this just as well. So you're quibbling, like you always do. Someone who is so indisposed to philosophy should probably not be creating threads instructing others how to do philosophy while at the same time contravening standing TPF rules.

    For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding.Banno

    The sycophantic appeal-to-AI-authority you engage in is precisely the sort of thing that is opposed.
  • Banning AI Altogether
    Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.Jamal

    This is a good point.

    Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.Jamal

    I don't think this is right. It separates the thinking of an idea from the having of an idea, which doesn't make much sense. If the research necessary to ground a thesis is too "tedious," then the thesis is not something one can put forth with integrity.

    But perhaps you are saying that we could use the LLM as a search engine, to see if others have interpreted a philosopher in the same way we are interpreting them?

    Part of the problem with the LLM is that it is private, not public. One's interaction history, prompting, etc., are not usually disclosed when appealing to the LLM as a source. The code is private in a much starker sense, even where the LLM is open source. Put differently, the LLM is a mediator that arguably has no place in person-to-person dialogue.

    If the LLM provides you with a good argument, then give that argument yourself, in your own words. If the LLM provides you with a good source, then read the source and make it your own before using it. The interlocutor needs your own sources and your own arguments, not your reliance on a private authority. Whatever parts of the LLM's mediation are publicly verifiable can be leveraged without use of the LLM (when dialoguing with an interlocutor). The only reason to appeal to the LLM itself would be in the case where publicly verifiable argumentation or evidence is unavailable, in which case one is appealing to the authority of the LLM qua LLM, which is both controversial and problematic. Thus a ban on LLMs need not be a ban on background, preparatory use of LLMs.
  • How to use AI effectively to do philosophy.
    The arguments are similar to what I see here. "AI is inevitable and therefore" etc. Some teachers---the good ones---are appalled.Baden

    I think it goes back to telos:

    I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.Leontiskos

    What is the end/telos? Of a university? Of a philosophy forum?

    Universities have in some ways become engines for economic and technological progress. If that is the end of the university, and if AI is conducive to that end, then there is no reason to prevent students from using AI. In that case a large part of what it means to be "a good student" will be "a student who knows how to use AI well," and perhaps the economically-driven university is satisfied with that.

    But liberal education in the traditional sense is not a servant to the economy. It is liberal; free from such servility. It is meant to educate the human being qua human being, and philosophy has always been a central part of that.

    Think of it this way. If someone comes to TPF and manages to discreetly use AI to look smart, to win arguments, to satisfy their ego, then perhaps, "They have their reward." They are using philosophy and TPF to get something that is not actually in accord with the nature of philosophy. They are the person Socrates criticizes for being obsessed with cosmetics rather than gymnastics; the person who wants their body to look healthy without being healthy.

    The argument, "It's inevitable, therefore we need to get on board," looks something like, "The cosmetics-folk are coming, therefore we'd better aid and abet them." I don't see why it is inevitable that every sphere of human life must substitute human thinking for machine "thinking." If AI is really inevitable, then why oppose it at all? Why even bother with the half-rules? It seems to me that philosophy arenas such as TPF should be precisely the places where that "inevitability" is checked. There will be no shortage of people looking for refuge from a cosmetic culture.

    Coming back to the point, if the telos of TPF is contrary to LLM-use, then LLMs should be discouraged. If the telos of TPF is helped by LLM-use, then LLMs should be encouraged. The vastness and power of the technology makes a neutral stance impossible. But the key question is this: What is the telos of TPF?
  • How to use AI effectively to do philosophy.
    I've mentioned this in the mod forum, so I'll mention it here too. I disagree with diluting the guidelines. I think we have an opportunity to be exceptional on the web in keeping this place as clean of AI-written content as possible. And given that the culture is veering more and more towards letting AI do everything, we are likely over time to be drowned in this stuff unless we assertively and straightforwardly set enforceable limitations. That is, I don't see any reward from being less strict that balances the risk of throwing away what makes us special and what, in the future, will be even rarer than it is now, i.e. a purely human online community.

    The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line.
    Baden

    :up: :fire: :up:

    I couldn't agree more, and I can't help but think that you are something like the prophet whose word of warning will inevitably go unheeded—as always happens for pragmatic reasons.

    Relatedly:

    It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well...Jamal

    Why does it matter that LLMs are going to be used? What if there were a blanket rule, "No part of a post may be AI-written, and AI references are not permitted"? The second part requires that someone who is making use of AI find—and hopefully understand—the primary human sources that the AI is relying on in order to make the salutary reference they wish to make.

    The curious ignoratio elenchi that @Banno wishes to rely on is, "A rule against AI use will not be heeded, therefore it should not be made." Is there any force to such an argument? Suppose someone writes all of their posts with LLMs. If they are found out, they are banned. But suppose they are not found out. Does it follow that the rule has failed? Not in the least. Everyone on the forum is assuming that all of the posts are human-written and human-reasoned, and the culture of the forum will track this assumption. Most of the posts will be human-written and human-reasoned. The fact that someone might transgress the rule doesn't really matter.

    Furthermore, the culture that such a rule helps establish will be organically opposed to these sorts of superficial AI-appeals. Someone attempting to rely on LLMs in that cultural atmosphere will in no way prosper. If they keep pressing the LLM-button to respond to each reply of increasing complexity, they will quickly be found out as a silly copy-and-paster. The idea that it would be easy to overtly shirk that cultural stricture is entirely unreasonable, and there is no significant motive for someone to rely on LLMs in that environment. It is parallel to the person who uses chess AI to win online chess games, for no monetary benefit and to the detriment of their chess skills and their love of chess.

    Similarly, a classroom rule against cheating could be opposed on @Banno's same basis: kids will cheat either way, so why bother? But the culture which stigmatizes cheating and values honest work is itself a bulwark against cheating, and both the rule and the culture make it much harder for the cheater to prosper. Furthermore, even if the rule cannot be enforced with perfection, the cheater is primarily hurting themselves and not others. We might even say that the rule is not there to protect cheaters from themselves. It is there to ensure that those who want an education can receive one.

    that will lead people to hide their use of it generally.Jamal

    Would that be a bad thing? To cause someone to hide an unwanted behavior is to disincentivize that behavior. It also gives such people a string to pull on to understand why the thing is discouraged.
  • How to use AI effectively to do philosophy.
    This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all.Leontiskos

    All of which makes using AI for philosophy, on one level, like using any one else’s words besides your own to do philosophy.Fire Ologist

    So if you use someone else's words to do philosophy, you are usually appealing to them as an authority. The same thing is happening with LLMs. This will be true whether or not we see LLMs as a tool. I got into some of this in the following and the posts related to it:

    This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories which they then assess the validity of in a posterior manner, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word.Leontiskos

    -

    Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position.Fire Ologist

    I tend to agree, but I don't think anyone who uses AI is capable of using it without giving it that sort of authority (including myself). If one did not think AI added authority to a position then one wouldn't use it at all.

    The presence and influence of AI in a particular writing needs to never be hidden from the reader.Fire Ologist

    I would argue that the presence and influence of AI is always hidden from us in some ways, given that we don't really know what we are doing when we consult it.

    You need to be able to make AI-generated knowledge your own, just as you make anything you know your own.Fire Ologist

    LLMs are sui generis. They have no precedent, and that's the difficulty. What this means is that your phrase, "just as you make anything you know your own," creates a false equivalence. It presumes that artificial intelligence is not artificial, and is on par with all previous forms of intelligence. This is the petitio principii that @Banno and others engage in constantly. For example:

    Unlike handing it to a human editor, which is what authors have been doing for yonks?
    — SophistiCat

    Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human duo is the same as interacting with a human and AI duo.
    Leontiskos

    Given all of this, it would seem that @bongo fury's absolutist stance is in some ways the most coherent and intellectually rigorous, even though I realize that TPF will probably not go that route, and should not go that route if there are large disagreements at stake.
  • How to use AI effectively to do philosophy.
    So in this case the LLM carried out the tedious part of the task;Jamal

    But is your argument sound? If you have a group of people argue over a topic and then you appoint a person to summarize the arguments and produce a working document that will be the basis for further discussion, you haven't given them a "calculator" job. You have given them the most important job of all. You have asked them to draft the committee document, which is almost certainly the most crucial point in the process. Yet you have re-construed this as "a calculator job to avoid tedium." This is what always seems to happen with LLMs. People use them in substantial ways and then downplay the ways in which they are using them. In cases such as these one seems to prefer outsourcing to a "neutral source" so as to avoid the natural controversy which always attends such a draft.

    It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI,Jamal

    It could have been made more irenically, but @bongo fury's basic point seems uncontroversial. You said:

    We encourage using LLMs as assistants for research, brainstorming, and editing. — Deepseek

    To say, "We encourage X," is to encourage X. It is not to say, "If you are doing Y, then we would encourage you to do Y in X manner." To say "allow" or "permit" instead of "encourage" would make a large difference.
  • How to use AI effectively to do philosophy.
    I like this. I asked Deepseek to incorporate it into a set of guidelines based on the existing AI discussions on TPF. Below is the output. I think it's a useful starting point, and I encourage people here to suggest additions and amendments.Jamal

    Isn't it a bit ironic to have AI write the AI rules for the forum? This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all. In this case one might think that by allowing revisions to be made to the AI's initial draft, or because the AI was asked to synthesize member contributions, one has not outsourced the basic thinking to the AI. This highlights why "responsible use" is so nebulous: because everyone gives themselves a pass whenever it is expedient.

    3. Prohibited Uses: What We Consider "Cheating"

    The following uses undermine the community and are prohibited:

    • Ghostwriting: Posting content that is entirely or mostly generated by an LLM without significant human input and without disclosure.
    • Bypassing Engagement: Using an LLM to formulate responses in a debate that you do not genuinely understand. This turns a dialogue between people into a dialogue between AIs and destroys the "cut-and-thrust" of argument.
    • Sock-Puppeting: Using an LLM to fabricate multiple perspectives or fake expertise to support your own position.
    — Deepseek

    I like the separating out of good uses from bad uses, and I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque.

    A sort of core issue here is one of trust and authority. It is the question of whether and to what extent AI is to be trusted, and guidelines etch the answer to that question in a communal manner. For example, it is easy to imagine the community which is distrustful towards AI as banning it, and the community which is trustful towards AI as privileging it. Obviously a middle road is being attempted here. Transparency is a good rule given that it allows members to navigate some of the complexities of the issue themselves. Still, the basic question of whether the community guidelines signify a trust or distrust in AI cannot be sidestepped. We are effectively deciding whether a specific authority (or perhaps in this case a meta-authority) is to be deemed trustworthy or untrustworthy for the purposes of TPF. The neutral ground is scarcely possible.
  • The Preacher's Paradox
    Inspired by Kierkegaard's ideasAstorre

    What primary or secondary Kierkegaard sources do you base your argument upon? So far I've only seen you quote Wittgenstein as if his words were simple truth. I would suggest reading Kierkegaard's Philosophical Fragments where he speaks to the idea that all teaching/learning is aided by temporal occasions (including preaching), and that the teacher should therefore understand himself as providing such an occasion:

    From a Socratic perspective, every temporal point of departure is eo ipso contingent, something vanishing, an occasion; the teacher is no more significant, and if he presents himself or his teachings in any other way, then he gives nothing... — Kierkegaard, Philosophical Crumbs, tr. M. G. Piety

    This is why what I've already said is much more Kierkegaardian than the odd way that Kierkegaard is sometimes interpreted by seculars:

    But is the problem preaching, or is it a particular kind of preaching?Leontiskos

    Kierkegaard wishes to stand athwart the Enlightenment-rationalist notion of self-authority, preferring instead a Socratic approach that does not wield authority through the instrument of reason. Myron Penner's chapter/article is quite good in this regard: "Kierkegaard’s Critique of Secular Reason."
  • The Preacher's Paradox
    - Fair enough. I realize I may have been too curt, both in my haste and because I know I will not be able to respond for a few days. On the other hand—and this is what you apparently wish to deny—the OP is a pretty straightforward argument against preaching, complete with responses to objections. I have been trying to present reasons against the conclusion of the OP's argument. I don't deny that it could be interesting to leisurely explore the particular form of preaching in which the paradox resides.
  • The Preacher's Paradox
    - I'm actually out for a few days. I just wanted to submit my responses. If your idea is as "interrogative" as you claim, you may want to ask yourself where all the defensiveness is coming from. It looks as though the idea is averse to interrogation.
  • The Preacher's Paradox


    The preacher who thinks he has to make his listeners believe something that they cannot be made to believe is faced with a contradiction, yes. But to hold that all preachers think such a thing, and that the contradiction is intrinsic to preaching, is to have made a canard of preaching. Or so I think.

    In general I think you need to provide argumentation for your claims, and that too much assertion is occurring. Most of your thesis is being asserted, not argued. For example, the idea that all preachers are trying to make their listeners believe mere ideas is an assertion and not a conclusion. The claim that the preacher is engaged in infecting rather than introducing is another example.

    I encountered the preacher's paradox in my everyday life. It concerns my children. Should I tell them what I know about religion myself, take them to church, convince them, or leave it up to them, or perhaps avoid religious topics altogether?Astorre

    I would suggest giving more credence to the Biblical testimony and the testimony of your Church, and less credence to Kierkegaard's testimony. Faith is something that transcends us, not something we control. It is not something to be curated, either positively or negatively.

    Part of the question here is, "Do you want your children to be religious?" Is it permissible to want such a thing?
  • The Preacher's Paradox
    I was drawn to this topic by conversations with so-called preachers (not necessarily Christian ones, but any kind). They say, "You must do this, because I'm a wise man and have learned the truth." When you ask, "What if I do this and it doesn't work?" Silence ensues, or something like, "That means you didn't do what I told you to do/you didn't believe/you weren't chosen."Astorre

    But is the problem preaching, or is it a particular kind of preaching? Someone whose preaching attempts to connect someone with something that is dead (such as an idea) instead of something that is living (such as a friend or God) will fall into the incoherences that the OP points up. But not all preaching is like that. If someone tries to persuade others to believe things that one cannot be persuaded to believe, then their approach is incoherent. But not all preaching is of that kind.
  • The Preacher's Paradox
    Question: Which of these judgments conveys the speaker's belief that the Sistine Chapel ceiling is beautiful, or proves it?Astorre

    I think this is the same error, but with beauty instead of faith. So we could take my claim and replace "faith" with "beauty": "The temptation is to try to encompass [beauty], both by excluding it from certain spheres and by attempting to comprehend its mechanism." To have the presupposition that one can exhaustively delineate and comprehend things like faith or beauty is to already have failed.

    "What cannot be spoken of, one must remain silent about."Astorre

    False. And self-contradicting, by the way.

    Language is incapable of exhaustively expressing subjective experienceAstorre

    And, "So long as the recipient understands that the conveyance of faith is only a shadow and a sign, there is no danger." But the idea that faith is only a subjective experience is another example of the overconfident delineation of faith.

    And here a paradox arises: infecting another person with an idea you don't fully understand yourself...Astorre

    "Infecting" is an interesting choice of word, no? Petitio principii?

    Communicating supernatural faith is communicating something that transcends you and your understanding. If someone thinks that it is impossible or unethical to communicate something that transcends you and your understanding, then what they are really doing is denying the object of faith, God. They don't think God exists, or they don't think faith in God can or should be engendered via preaching because they don't think faith is sown that way. I think the whole position is based on some false assumptions.

    Preaching is a bit like introducing someone to a friend, to a living reality. The idea that one cannot introduce someone to a friend unless they have a comprehensive knowledge of the friend and the way in which the friend will interact with the listener is quite silly. In this respect Kierkegaard is a Cartesian or a Hegelian in spite of himself. His attempted inversion of such systems has itself become captured by the larger net of those systems. The religious rationalist knows exactly what faith is and how to delineate it, and Kierkegaard in his opposition denies the rationalist claims, but in fact also arrives at the point where he is able to delineate faith with perfect precision. The only difference is that Kierkegaard knows exactly what faith isn't instead of what it is. Yet such a punctuated negation is, again, a false form of apophaticism, a kind of false humility.
  • Banning AI Altogether
    It would be unethical, for instance, for me to ask a perfect stranger for their view about some sensitive material I've been asked to review - and so similarly unethical for me to feed it into AI. Whereas if I asked a perfect stranger to check an article for typos and spelling, then it doesn't seem necessary for me to credit them...Clarendon

    Okay sure, but although the OP's complaint is a bit vague, I suspect that the counsel is not motivated by these sorts of ethical considerations. I don't think the OP is worried that we might infringe the rights of AI. I think the OP is implying that there is something incompatible between AI and the forum context.

    Yes, it would only be a heuristic and so would not assume AI is actually a person.Clarendon

    I myself would be wary to advise someone to treat AI as if it is a stranger. This is because strangers are persons, and therefore I would be advising that we treat AI as if it is a person. "Heuristically pretend that it is a stranger without envisioning it as a person," seems like a difficult request. It may be that the request can only be fulfilled in a superficial manner, and involves a contradiction. It is this small lie that we tell ourselves that seems to be at the root of many of the AI problems ("I am going to pretend that it is something that it isn't, and as long as I maintain an attitude of pretense everything will be fine").

    Someone might ask, "Why should we pretend that AI is a stranger?" And you might answer, "Because it would serve our purposes," to which they would surely respond, "Which purposes do you have in mind?"

    Perhaps what is being suggested is a stance of distrust or hesitancy towards the utterances of LLMs.