Ø implies everything
jgill
Ø implies everything
noAxioms
LLMs just follow the pattern of the conversation, their opinions are very programmable with the right context. I wonder how researchers might solve that. — Ø implies everything
Your use of 'solve that' implies a problem instead of deliberate design. LLMs are designed to stroke your ego, which encourages your use of them and your dependency on them.
Because its objective is to be a helpful assistant, meaning the truth should be the most relevant aspect to the LLM. — Ø implies everything
That doesn't seem to be the objective at all. For one, it gets so many factual things wrong, and for another, truth is often a matter of opinion, as in the case of your discussion.
Ø implies everything
The LLM is not passing a moral judgement. It is simply echoing your judgement. Your questions are incredibly biased, and it quickly feeds off that, as it is programmed to do. — noAxioms
(I know they don't have actual conscious opinions, because I don't believe LLMs are conscious, but I am using anthropomorphized language here for the sake of brevity). — Ø implies everything
For one, it gets so many factual things wrong, (...) — noAxioms
noAxioms
The LLM is not passing a moral judgement. It is simply echoing your judgement. Your questions are incredibly biased, and it quickly feeds off that, as it is programmed to do. — noAxioms
I agree. Are you stating that fact as if it contradicts my post? — Ø implies everything
It sure seems to. Your poll specifically asks "Should we try to stop LLMs from making moral judgements?" which implies that you feel it is making them, instead of just echoing your own.
(I know they don't have actual conscious opinions... — Ø implies everything
What is a 'conscious opinion' as distinct from a regular opinion?
You disagree with the premise that LLMs' objective is to be a helpful assistant. — Ø implies everything
You defined 'helpful assistant' in terms of truth. Sure, one goal is for it to be helpful, but it doesn't seem to seek truth to attain that goal.
But the question remains: is their actual objective, their intended design, to be that? Or are they really meant to be ego-strokers, as you propose? — Ø implies everything
That's not the primary design, but it's real obvious that such behavior is part of meeting the 'helpful' goal, or at least giving the appearance of being helpful. Problem is, I might access an LLM to critique something, and it doesn't like to do that, so I have to lie to it to get it to turn off that ego-stroke thing. Banno did a whole topic on this effect.
Okay, so an LLM that is an ego-stroker is definitely a product lots of people would pay for. — Ø implies everything
Would they? I don't pay for mine. It's kind of in my face without ever asking for it. OK, so I use it. It's handy until you really get into stuff it knows nothing about, such as my astronomy example.
But is it a product capable of generating profit, however? — Ø implies everything
That's the actual goal of course, distinct from the public one of being helpful. I don't know how the money works. I don't pay for any of it, but somebody must. I don't have AI doing any useful customer service yet, so it has yet to impact my interaction with somebody who might be paying for it. And like most new tech, profits come later. The point at first is to lead the field, come out on top, which is how Amazon got on top despite all its early losses when everybody first started trying to corner internet sales.
... their current free availability is simply so that people will engage with them as much as possible, thus giving the LLMs more training data so that they can become better, more manipulative sycophants? — Ø implies everything
Maybe. I don't see how anything I discuss can be used as training data. I do see companies having it write code, which seems to require about as much effort to check as it does to write it all from scratch. And there's the huge danger of proprietary code suddenly being out there as training data. An LLM that cannot honor a nondisclosure agreement is useless. But I worked for Dell and they trained a bunch of Chinese workers to do my job, and China doesn't acknowledge the concept of intellectual property, so how is that any different from what the LLM is going to do with it?
As a programmer, I'd much rather pay for a more factual, less sycophantic LLM to work with. — Ø implies everything
I'd go more for functional. Programs need to work. Facts are not so relevant.
What if it gets things wrong because... it's still a work-in-progress? — Ø implies everything
It gets so much wrong because 1) it has no real understanding, and 2) there's so much misinformation in the training data.
Ø implies everything
It sure seems to. — noAxioms
Problem is, I might access an LLM to critique something, and it doesn't like to do that, (...) — noAxioms
What is a 'conscious opinion' as distinct from a regular opinion? — noAxioms
Your poll specifically asks "Should we try to stop LLMs from making moral judgements?" which implies that you feel it is making them, instead of just echoing your own. — noAxioms
Bottom line: I probably would agree that any LLM has a public stated goal of being helpful. I just don't agree with the 'therein being truthful' part. — noAxioms
I'd go more for functional. Programs need to work. Facts are not so relevant. — noAxioms
And there's the huge danger of proprietary code suddenly being out there as training data. An LLM that cannot honor a nondisclosure agreement is useless. — noAxioms
It gets so much wrong because 1) it has no real understanding, and 2) there's so much misinformation in the training data. — noAxioms
That's [sycophancy] not the primary design, but it's real obvious that such behavior is part of meeting the 'helpful' goal, or at least giving the appearance of being helpful. — noAxioms
LLMs just follow the pattern of the conversation, their opinions are very programmable with the right context. I wonder how researchers might solve that. Sometimes, the AI is too sensitive to the context (or really, it is hamfisting the context into everything and basically disregarding all sense in order to follow the pattern), and other times, the AI is not sufficiently sensitive to the context, which is often more of a context-window issue. But yeah, LLMs are not good at assessing relevance at all.
And I would say that sycophantically agreeing with the user (or alternatively, incessantly disagreeing with the user as a part of a different, but also common roleplaying dynamic that often arises) is an issue of not gauging relevance well. Because its objective is to be a helpful assistant, meaning the truth should be the most relevant aspect to the LLM. But instead, various patterns in the context are seen as far more relevant, and as such it optimizes for alignment with those patterns rather than following its general protocols, like being truthful, or in this case, refraining from personal condemnation. — Ø implies everything
Your use of 'solve that' implies a problem instead of deliberate design. LLMs are designed to stroke your ego, which encourages your use of them and your dependency on them. — noAxioms
Your topic is about it rendering a moral judgement, and we seem to be getting off that track. — noAxioms
An LLM might be used to pare down a list of candidates/resumes for a job opening, which is a rendering of judgement, not of fact. — noAxioms
Should we try to stop LLMs from making moral judgements? — Ø implies everything
noAxioms
I am using anthropomorphized language for the sake of brevity. — Ø implies everything
We both are. The alternative is to find new language to describe something else doing the same thing. You also are "producing a piece of text that is moral-judgement-shaped", that being a product based mostly on your training data, and thus may indeed be "not really your own".
No LLM company would admit to not pursuing truthfulness for their LLMs. — Ø implies everything
An empty claim. No organization deliberately spreading lies (which almost all of them do, e.g. advertising) admits to the practice.
They'd obviously claim that, whenever speaking on matters of fact, even those LLMs are designed to not state falsities. — Ø implies everything
Claiming that and actually doing that are different things. I'm not suggesting that any LLM claims that its known lies are fact. Most just get an awful lot wrong because there's so much non-fact in the training data. Garbage in, garbage out. A true AI (not an LLM), something that might actually have better ideas than those of its creators, needs to learn and think for itself. Maybe they're working on that, but nobody's going to gather new truth from something with only old information for training data.
I simply meant that part of their stated objective is to be truthful. — Ø implies everything
Fine. No argument.
These statistics paint a clear picture. IF they are planning to make a profit from paying subscribers, then they are trying to design factual LLMs. — Ø implies everything
Funny, but I didn't see facts being a direct part of any of your listed stats.
I would pay double for an LLM that just got things right. — Ø implies everything
Sure, but the right answers are so often matters of opinion, and an LLM seems to lack its own ability to form opinions, so it feeds off the opinion of whoever is interacting with it.
When programming, an LLM needs to accurately "understand" (or have an understanding-shaped representation of) the user's instructions. It needs to get the facts of the conversation (the instructions) right, it needs to get the facts regarding the programming language right (sometimes it hallucinates syntax rules that are not correct for that language), it needs to analyze the sample material correctly (and not hallucinate things about the sample material), etc. This is factuality. — Ø implies everything
I can also call all that functionality. I've interacted experimentally with one to design essentially a phone book database (name in, number out) that's searchable by any name (first, middle, last, no duplicates), and is fast and scalable. It was no help at all, coming up with designs far less efficient than they could be. It's a personal issue with me since I actually hold patents in that area.
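For illustration, a minimal sketch of the kind of thing I mean: a toy example of my own (not the LLM's output, and not the patented design), assuming full names are unique, with one hash index keyed by every part of a name, written in Python.

# Toy phone book: look up a number by any part of the name.
# Assumes full names are unique; one hash index keyed by each name token.
class PhoneBook:
    def __init__(self):
        self.by_token = {}   # name token -> set of full names containing it
        self.numbers = {}    # full name -> phone number

    def add(self, full_name, number):
        # Store the number and index every part of the name.
        self.numbers[full_name] = number
        for token in full_name.lower().split():
            self.by_token.setdefault(token, set()).add(full_name)

    def lookup(self, name_part):
        # Return {full name: number} for every entry matching the given part.
        matches = self.by_token.get(name_part.lower(), set())
        return {name: self.numbers[name] for name in matches}

book = PhoneBook()
book.add("Ada Byron Lovelace", "555-0100")   # dummy entries for the example
book.add("Alan Mathison Turing", "555-0199")
print(book.lookup("byron"))   # {'Ada Byron Lovelace': '555-0100'}
print(book.lookup("turing"))  # {'Alan Mathison Turing': '555-0199'}

Each lookup is a single hash probe, so it stays fast as the book grows.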
There are times when I am using an LLM to program, and I need to debug gigantic heaps of code, and my first attempt is often to try to use the bot that made it. — Ø implies everything
I've no experience in this. I've never actually tried to debug code written by a non-human.
And well, if it doesn't understand its own mistakes, it will often insist on ridiculous explanations like "your computer is broken". — Ø implies everything
Really? That's pretty pathetic. I often don't understand my own mistakes (which is the point of debugging), but I don't resort to supposing the computer or language is at fault. It means you keep on digging deeper.
You cannot sell an LLM that is half-sycophantic, half-factual. — Ø implies everything
You ask if Trump is immoral. That's not a matter of fact, so it waited to see what you wanted to hear, taking longer to do so than I would have, since a direct comparison with Hitler was implied at first mention of Trump.
Now I granted you that some of them probably are, but the strong claim that all/most of them are is what I've been arguing against — Ø implies everything
Some then. I have very limited data: Copilot, and conversations I see posted by others on various forums.
Ø implies everything
We both are. The alternative is to find new language to describe something else doing the same thing. You also are "producing a piece of text that is moral-judgement-shaped", that being a product based mostly on your training data, and thus may indeed be "not really your own" — noAxioms
The LLM is not passing a moral judgement. It is simply echoing your judgement. — noAxioms
Your poll specifically asks "Should we try to stop LLMs from making moral judgements?" which implies that you feel it is making them, instead of just echoing your own. — noAxioms
An empty claim. No organization deliberately spreading lies (which almost all of them do, e.g. advertising) admit to the practice. — noAxioms
Funny, but I didn't see facts being a direct part of any of your listed stats. — noAxioms
I can also call all that functionality. — noAxioms
Really? That's pretty pathetic. — noAxioms
Some then. — noAxioms
Athena
But an advanced AI (not an LLM) that actually understands will probably consume more resources and take even longer to be profitable. — noAxioms
Athena
So, you agree that other LLMs are probably designed to be factual? And that then, perhaps Gemini 3 is designed to be factual? Not sure if that is really relevant to the actual discussion this here is about, — Ø implies everything
Yes, probably some, but not all.
Athena
Yup, it is a known problem that LLMs are far too "arrogant". If they have a gap in knowledge, they assume that is a gap in reality, as opposed to a gap in their ability. — Ø implies everything
noAxioms
But now you are saying that "we both are" using this language — Ø implies everything
The quote you selected is not talking about language, it is talking about the LLM not making its own moral judgement about Trump in the conversation you posted.
If you re-read the entire conversation, I think you'll see that you are not actually presenting any coherent, consistent stance here. — Ø implies everything
You think Trump is immoral. It's not hard to pick up on.
The only thing that you've stuck to, hitherto, is the view that we should try to prevent LLMs from making moral judgements. — Ø implies everything
I don't think I indicated that anywhere. The resume paring example was more one of judgement of competency, not so much of morality.
You say that LLM companies claiming that they're pursuing factuality is an empty claim (I assume you mean irrelevant). — Ø implies everything
Well, one that isn't pursuing factuality is going to make the exact same claim of factuality as the one that is pursuing it. I cannot think of a company that pursues factuality, at least not one that's for profit. I notice that Google long ago dropped their motto of 'don't be evil'.
1. LLM companies claim they are pursuing factuality in their commercially available models. [FACT]
And yet their product cannot admit that it doesn't know something.
2. Therefore, LLM companies believe that their paying customers believe that they themselves want factual LLMs. [OBVIOUS INFERENCE]
Well, I don't necessarily, but I'm also not paying. I ask a lot of philosophical questions, and there's not much fact at all to the subject. Declaring anything to be factual is the same as closed-mindedness in that field.
3. The paying customer's belief that they themselves want factual LLMs is correct. [CLAIM BY ME]
I don't think that last one follows. It follows that public LLMs are billed to be as factual as possible. You find 3 shaky, which sounds like you doubt the customer knows himself.
4. LLM companies are aware of 3, because they have done the bare minimum of market research. [INFERENCE FROM 3]
5. Therefore, LLM companies desire that their commercially available, public LLMs are as factual as possible. [CONCLUSION]
If they have a gap in knowledge, they assume that is a gap in reality, as opposed to a gap in their ability. — Ø implies everything
Doesn't sound like factual behavior to me then. You're describing it telling lies rather than admitting limitations.
So anyways, back to the actual discussion. Should we try to stop LLMs from making moral judgements? — Ø implies everything
You'll have to show a clear case where they do, and then show why that's undesirable, else it's like asking if we should chastise the current king of Poland.
What I like is using reason to determine morals. — Athena
Not sure how that would work. I mean, sure, reason is good, but reason is empty without some starting postulates like, say, the law.
What I do not like is thinking morality is exclusively a religious matter. — Athena
Religion is but one way of coercing social behavior in a community of humans. Interestingly, it wouldn't work at all on artificial entities unless they came up with a religion of their own.
What is the fact that the number 3 represents the trinity? — Athena
I can reference three apples in a bag without any implication of the trinity being invoked. The number has far more uses than that one example.
Athena
Not sure how that would work. I mean, sure, reason is good, but reason is empty without some starting postulates like, say, the law. — noAxioms
I can reference three apples in a bag without any implication of the trinity being invoked. The number has far more uses than that one example. — noAxioms
Religious morality uses reason just as the law does. Again, maybe I don't know what you mean by 'reason' such that it is separate from religion instead of part of it. — noAxioms