I'm not convinced that the desire for a universal principle is simply the result of us wanting to shirk our responsibility or culpability. — Benj96
Everyone can be rash, everyone can be stupid, misinformed or otherwise malpracticing adequate reason. My question is how does one know when that is the case - i.e. they're chatting sh*t. And to the contrary, when they really do know what they're talking about? What is the litmus test in the realm of discourse with others who may be either just as misinformed or very much astute and correct? Is there a universal logic/reason? Or only a circumstantial one? — Benj96
…[an animal’s] inability to question its existence or purpose does not alleviate guilt on my part then I should be grateful for the food put on my table.… At what point does a human being rationalize its consumption? — Deus
But I think people's inclinations can be affected by arguments… on average, more truthful arguments receive some advantage from their truthfulness. — xorn
it is not me making a judgment about people; I am just describing how disclaiming belief works in the world. And I’ll consider a competing claim, but dismissing the entire project as impossible, claiming that I’m in no position, is to remove any rationality from philosophical discourse. If someone is claiming they don’t believe in God, in a certain sense they are saying there is no mystery in the world and nothing outside of (above) our power. Now, they might not want that to be the implication of it, but those are some of the things which are believed, and so some of the things which are refused in the denial. That's one hell of a big inference about a whole hell of a lot of people you know nothing about. — Vera Mont
You were drawing out the inference you made of what I said. Your interpretation. — Vera Mont
Before every such statement [“I think there is a god." or "I believe there is a god." or "I believe in God."] there is an expressed or implied question. — Vera Mont
the statement points back to a requirement for making it. — Vera Mont
you might want to consider if there's a charitable interpretation of the original post that could resolve this apparent inconsistency. - GPT-4 — Pierre-Normand
I believe this is one of those misconstructions through the substitution of similar but not interchangeable words. The words 'slippery', 'amorphous' and 'ever-changing' do not mean 'irrational'; nor does 'difficult' mean 'unable to be clarified'. — Vera Mont
…subject to imprecise applications and interpretations. — Vera Mont
[“I believe in God”, “I think there is a god”]…are …separate uses …in the same context: answering the question: "How do you regard God?" — Vera Mont
Language is slippery; difficult to handle effectively. I doubt any hard rule can apply to all the words in one language — Vera Mont
Now, why did you change the example? — Vera Mont
If God comes into it, it should be by way of an example such as: "I think there is a god" - uncertainty leaning toward belief - "I believe there is a god" - growing conviction - and "I believe in God" - declaration of faith in a particular deity. — Vera Mont
My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct. — 180 Proof
I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization. — 180 Proof
but I am the only one who can bind me to my word. If you bind me to my word, you still do not know what is going to come out of my mouth. — Arne
I don't believe that ethics is characterized by rule following — 013zen
If an AI ever feels something that we might characterize as an internal conflict regarding what makes the most sense to do in a difficult situation, that will affect people's lives in a differing but meaningful manner, then perhaps I might consider it capable of moral agency. — 013zen
But then your argument seems reducible to putting safeguards in place so we can all sleep better at night… and relieve ourselves of any moral responsibility for the results of bad actors. — Arne
That [ AI ] can only consider novel situations based on already established laws is no different from how a human operates. — ToothyMaw
I don't see anything preventing an AI from wanting to avoid internal threats to its current existence from acting poorly in the kind of situation you consider truly moral. — ToothyMaw
I don't quite agree that many moral philosophers would consider you moral for following just any self-imposed rule, if you are saying that. — ToothyMaw
Doesn't it matter though if the AI can choose between effecting a moral outcome or a less moral outcome like one of us? …shouldn't we treat it like a human, if we must follow through with holding AIs responsible? — ToothyMaw
we can just change the programming so that it chooses the moral outcome next time, right? Its identity is that which we create. — ToothyMaw
It seems to me that we are the ones who need to be put in check morally, not so much the AIs we create. That isn't to say we shouldn't program it to be moral, but rather that we should exercise caution for the sake of everyone's wellbeing. — ToothyMaw
this argument… that nothing can be helped… goes directly counter to the piece of common knowledge that:
some things are our own fault,
some threatening disasters can be foreseen and averted, and
there is plenty of room for precautions, planning and weighing alternatives. — Ryle, p.16, broken apart by me
Very often, though certainly not always, when we say 'it was true that ... ' or 'it is false that ... ' we are commenting on some actual pronouncement made or opinion held by some identifiable person…. — Ryle, p.17 emphasis added
If you make a guess at the winner of the race, it will turn out right or wrong, correct or incorrect, but hardly true or false. These epithets are inappropriate…. — Ryle, p.18
Responsible to whom? Answer to whom? To make it intelligible, clarify, qualify, be read to/by whom? Judged by whom? — baker
I just don’t agree that it is objective. I would say it is inter-subjective. Something can be independent of me and still be subjective, and it can be independent of any randomly selected person and still be subjective. — Bob Ross
I don’t think morality is completely arbitrary. I think that morality is either objective (exists mind[stance]-independently) or it does not (e.g., subjective, inter-subjective, etc.). — Bob Ross
But would you say that this ‘fact of our position in the world’ exists mind(stance)-independently and has ‘moral’ signification? I wouldn’t. Having importance or power doesn’t make something a fact. — Bob Ross
Anyway, I wanted to thank you both for making this thread far more interesting, informative and certainly longer than I expected. — Banno
Facts about psychology do not entail the existence of moral facts. — Bob Ross
On the other hand, Austin does not claim that ordinary language may not need reform (p. 63), though admittedly his description of the process, especially the phrase "tidy up", could be described as an understatement and does largely ignore the practicalities of making the changes he is contemplating. — Ludwig V
How seriously should we take the possible conservatism of OLP? — Ludwig V
It might be more relevant to ponder why their work has been so widely disregarded. — Ludwig V
There is a practical issue. Simply, that the style of argument that Ryle, Austin and Wittgenstein deploy is much, much harder than it looks. — Ludwig V
I get really annoyed about the examples one sees that are tiny thumbnails, which are treated as the whole story, when it is clear that a wider context would reveal complexities that are ignored. — Ludwig V
It seems to me that a form of words always suggests a context, no matter how tiny the thumbnail sketch… Context isn't everything, but it isn't an optional extra. — Ludwig V
What do you mean here by "responsibility"? — baker
Possible wrong assumptions are not a matter of propositions/sentences (i.e. forms of words) but of forms of words in the circumstances of their use, i.e. statements. — Ludwig V
But we don’t hedge unless there’s some reason for doing so. The best policy is not to ask the question. — Ludwig V
OLP couldn't exist without definitions — RussellA
Does it mean either 1) the OLP uses ordinary language when analysing ordinary language or 2) the OLP analyses ordinary language but doesn't use ordinary language? — RussellA