I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority. — Janus
Should we argue... — Joshs
According to who? — Fire Ologist
OK. So somewhere between black and white, thus not a blanket ban. :up: — apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach? — apokrisis
"No part of a post may be AI-written, and AI references are not permitted" — Leontiskos
Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. — apokrisis
The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. — apokrisis
What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind? — apokrisis
And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively. — apokrisis
So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say. — apokrisis
I agree in spirit. But let's be practical.
A blanket ban on LLM generated OPs and entire posts is a no-brainer. — apokrisis
I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it. — apokrisis
Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible, would be enough to preserve the human element. — apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach? — apokrisis
So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated? — Leontiskos
Why do many people believe the appeal to tradition is some inviolable trump card? — unimportant
We have nothing to lose by going in that direction, and I believe the posters with most integrity here will respect us for it. — Baden
And if the product is undetectable, our site will at least not look like an AI playground. — Baden
Reflectivity and expressivity, along with intuition and imagination are at the heart of what we do here, and at least my notion of what it means to be human. — Baden
And, while AIs can be a useful tool (like all technology, they are both a toxin and a cure), there is a point at which they become inimical to what TPF is and should be. The line for me is certainly crossed when posters begin to use them to directly write posts and particularly OPs, in full or in part. And this is something it is still currently possible to detect. The fact that it is more work for us mods is unfortunate. But I'm not for throwing in the towel. — Baden
If you had asked the AI to write your reply in full or in part and had not disclosed that, we would be in the area I want to immediately address. — Baden
Arguably the most important part of the job is very often the "calculator" task, the most tedious task. — Jamal
But I'll keep "encourage", since the point is to encourage some uses of LLMs over others. In "We encourage X," the X stands for "using LLMs as assistants for research, brainstorming, and editing," with the obvious emphasis on the "as". — Jamal
Faith translates into Russian as "VERA." — Astorre
It's an interesting discrepancy: Etymologically, Latin "fides" means 'trust', but Slavic "vera" (related to Latin "verus") means 'truth'. — baker
I was surprised by the depiction of what is said to be "Socratic" in your account of the Penner article. — Paine
If I do try to reply, it would be good to know if you have studied Philosophical Fragments as a whole or only portions as references to other arguments. — Paine
The motto from Shakespeare at the start of the book, ‘Better well hanged than ill wed’, can be read as ‘I’d rather be hung on the cross than bed down with fast talkers selling flashy “truth” in a handful of propositions’. A ‘Propositio’ follows the preface, but it is not a ‘proposition to be defended’. It reveals the writer’s lack of self-certainty and direction: ‘The question [that motivates the book] is asked in ignorance by one who does not even know what can have led him to ask it.’ But this book is not a stumbling accident, so the author’s pose as a bungler may be only a pose. Underselling himself shows up brash, self-important writers who know exactly what they’re saying — who trumpet Truth and Themselves for all comers. — Repetition and Philosophical Crumbs, Piety, xvii-xviii
One stubborn perception among philosophers is that there is little of value in the explicitly Christian character of Søren Kierkegaard’s thinking. Those embarrassed by a Kierkegaardian view of Christian faith can be divided roughly into two camps: those who interpret him along irrationalist-existentialist lines as an emotivist or subjectivist, and those who see him as a sort of literary ironist whose goal is to defer endlessly the advancement of any positive philosophical position. The key to both readings of Kierkegaard depends upon viewing him as more a child of Enlightenment than its critic, as one who accepts the basic philosophical account of reason and faith in modernity and remains within it. More to the point, these readings tend to view him through the lens of secular modernity as a kind of hyper- or ultra-modernist, rather than as someone who offers a penetrating analysis of, and corrective to, the basic assumptions of modern secular philosophical culture. In this case, Kierkegaard, with all his talk of subjectivity as truth, inwardness, and passion, the objective uncertainty and absolute paradox of faith, and the teleological suspension of the ethical, along with his emphasis on indirect communication and the use of pseudonyms, is understood merely to perpetuate the modern dualisms between secular and sacred, public and private, object and subject, reason and faith—only as having opted out of the first half of each disjunction in favor of the second. Kierkegaard’s views on faith are seen as giving either too much or too little to secular modernity, and, in any case, Kierkegaard is dubbed a noncognitivist, irrationalist antiphilosopher.
Against this position, I argue that it is precisely the failure to grasp Kierkegaard’s dialectical opposition to secular modernity that results in a distortion of, and failure to appreciate, the overtly Christian character of Kierkegaard’s thought and its resources for Christian theology. Kierkegaard’s critique of reason is at the same time, and even more importantly, a critique of secular modernity. To do full justice to Kierkegaard’s critique of reason, we must also see it as a critique of modernity’s secularity. — Myron Penner, Kierkegaard’s Critique of Secular Reason, 372-3
I apologize for the incredibly belated response! — Bob Ross
I see what you are saying. The question arises: if God is not deploying a concept of group guilt, then why wouldn’t God simply restore that grace for those generations that came after (since they were individually innocent)? — Bob Ross
What do you think? — Bob Ross
annihilation is an act of willing the bad of something (by willing its non-existence)... — Bob Ross
the difference between consulting a secondary source and consulting an llm is the following:
After locating a secondary source one merely jots down the reference and that’s the end of it. — Joshs
When one locates an argument from an llm... — Joshs
When one locates an argument from an llm that one finds valuable, one decides if the argument is something one can either defend in one’s own terms, or should be backed up with a direct quote from either a secondary or primary source. This can then be achieved with a quick prompt to the llm, and finding a reference for the quote. — Joshs
The fact that proper use of a.i. leaves one with the choice between incorporating arguments one can easily defend and elaborate on based on one’s own understanding of the subject, and connecting those arguments back to verifiable quotes, means that the danger of falsehood doesn’t come up at all. — Joshs
Again, you have not even attempted to show that the AI's summation was in any way inaccurate. — Banno
The AI is not being appealed to as an authority — Banno
It's noticeable that you have not presented any evidence, one way or the other.
If you think that what the AI said is wrong, then what you ought do is to present evidence, perhaps in the form of peer-reviewed articles that say that humans can reliably recognise AI generated text.
But that is not what you have chosen to do. Instead, you cast aspersions. — Banno
No, I presented what the AI had to say, for critique. Go ahead and look at the papers it cites... — Banno
So you cannot see the difference between "A rule against AI use will not be heeded" and "A rule against AI use cannot be enforced". Ok. — Banno
Baden? Tell us what you think. Is my reply to you against the rules? — Banno
With intended irony...
Prompt: find peer-reviewed academic studies that show the effectiveness of any capacity to recognise AI generated text.
The result.
"...there is peer-reviewed evidence that both humans... and automated tools can sometimes detect AI-generated text above chance. Effectiveness is highly conditional. Measured accuracy is often only modest."
So yes, I overstated my case. You may be able to recognise posts as AI generated at a slightly better rate than choosing at random. — Banno
I no more pay attention to the fact that I am using a machine when I consult a.i. than when I use the word-processing features of my iphone to type this. — Joshs
If you have ever been prompted to seek out relevant literature to aid in the composing of an OP, or your response to an OP, then your telos in consulting that textual material is the same as that of the many here who consult a.i while engaging in TPF discussions. — Joshs
A pissing contest, combined with quasi-efforts at healing existential anxiety. — baker
Here is a case in point. I have not made the argument he here attributes to me. I have, amongst other things, pointed out that a rule against AI cannot be reliably enforced, which is quite different. — Banno
For those who think philosophy consists in a series of appeals to authority, AI must be quite confounding. — Banno
Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry. — Jamal
Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works. — Jamal
The arguments are similar to what I see here. "AI is inevitable and therefore" etc. Some teachers---the good ones---are appalled. — Baden
I think it would be helpful to continue to reflect on what is disagreeable about AI use and why. For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque. — Leontiskos
I've mentioned this in the mod forum, so I'll mention it here too. I disagree with diluting the guidelines. I think we have an opportunity to be exceptional on the web in keeping this place as clean of AI-written content as possible. And given that the culture is veering more and more towards letting AI do everything, we are likely over time to be drowned in this stuff unless we assertively and straightforwardly set enforceable limitations. That is, I don't see any reward from being less strict that balances the risk of throwing away what makes us special and what, in the future, will be even rarer than it is now, i.e. a purely human online community.
The idea that we should keep up with the times to keep up with the times isn't convincing. Technocapitalism is definitive of the times we're in now, and it's a system that is not particularly friendly to human creativity and freedom. But you don't even have to agree with that to agree with me, only recognize that if we don't draw a clear line, there will effectively be no line. — Baden
It seems to me difficult to argue against the point, made in the OP, that since LLMs are going to be used, we have to work out how to use them well... — Jamal
that will lead people to hide their use of it generally. — Jamal
This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all. — Leontiskos
All of which makes using AI for philosophy, on one level, like using any one else’s words besides your own to do philosophy. — Fire Ologist
This becomes rather subtle, but what I find is that people who tell themselves that they are merely using AI to generate candidate theories which they then assess the validity of in a posterior manner, are failing to understand their own interaction with AI. They are failing to appreciate the trust they place in AI to generate viable candidate theories, for example. But they also tend to ignore the fact that they are very often taking AI at its word. — Leontiskos
Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position. — Fire Ologist
The presence and influence of AI in a particular writing needs to never be hidden from the reader. — Fire Ologist
You need to be able to make AI-generated knowledge your own, just as you make anything you know your own. — Fire Ologist
Unlike handing it to a human editor, which is what authors have been doing for yonks? — SophistiCat
Nah. You are engaging in the same basic equivocation between a human and an AI. The whole point is that interacting with humans is different from interacting with AI, and the two should not be conflated. You've begged the question in a pretty basic manner, namely by implying that interacting with a human duo is the same as interacting with a human and AI duo. — Leontiskos
So in this case the LLM carried out the tedious part of the task; — Jamal
It didn't occur to me that anyone would interpret those guidelines as suggesting that posts written by people who are using AI tools are generally superior to those written by people who don't use AI. — Jamal
We encourage using LLMs as assistants for research, brainstorming, and editing. — Deepseek
I like this. I asked Deepseek to incorporate it into a set of guidelines based on the existing AI discussions on TPF. Below is the output. I think it's a useful starting point, and I encourage people here to suggest additions and amendments. — Jamal
3. Prohibited Uses: What We Consider "Cheating"
The following uses undermine the community and are prohibited:
[*] Ghostwriting: Posting content that is entirely or mostly generated by an LLM without significant human input and without disclosure.
[*] Bypassing Engagement: Using an LLM to formulate responses in a debate that you do not genuinely understand. This turns a dialogue between people into a dialogue between AIs and destroys the "cut-and-thrust" of argument.
[*] Sock-Puppeting: Using an LLM to fabricate multiple perspectives or fake expertise to support your own position. — Deepseek
Inspired by Kierkegaard's ideas — Astorre
From a Socratic perspective, every temporal point of departure is eo ipso contingent, something vanishing, an occasion; the teacher is no more significant, and if he presents himself or his teachings in any other way, then he gives nothing... — Kierkegaard, Philosophical Crumbs, tr. M. G. Piety
But is the problem preaching, or is it a particular kind of preaching? — Leontiskos
I encountered the preacher's paradox in my everyday life. It concerns my children. Should I tell them what I know about religion myself, take them to church, convince them, or leave it up to them, or perhaps avoid religious topics altogether? — Astorre
I was drawn to this topic by conversations with so-called preachers (not necessarily Christian ones, but any kind). They say, "You must do this, because I'm a wise man and have learned the truth." When you ask, "What if I do this and it doesn't work?" Silence ensues, or something like, "That means you didn't do what I told you to do/you didn't believe/you weren't chosen." — Astorre
Question: Which of these judgments conveys the speaker's belief that the Sistine Chapel ceiling is beautiful, or proves it? — Astorre
"What cannot be spoken of, one must remain silent about." — Astorre
Language is incapable of exhaustively expressing subjective experience — Astorre
And here a paradox arises: infecting another person with an idea you don't fully understand yourself... — Astorre
It would be unethical, for instance, for me to ask a perfect stranger for their view about some sensitive material I've been asked to review - and so similarly unethical for me to feed it into AI. Whereas if I asked a perfect stranger to check an article for typos and spelling, then it doesn't seem necessary for me to credit them... — Clarendon
Yes, it would only be a heuristic and so would not assume AI is actually a person. — Clarendon
