• Judaka
    1.7k

    Honestly, I use meta-ethical relativism to say that moral positions don't have a truth value; they're not objectively true. When I read about meta-ethical universalism, it seems capable of saying exactly the same thing. I do disagree with a one-size-fits-all approach to ethics, but there are definitely scenarios where I would think a universal approach is correct. Since I am still learning the correct terms, I am not in the best position to comment deeply.

    I think that morality is a conflation of our biological proclivity for thinking in moral terms, the intellectual positions that we create, and the personal versus social aspects of morality. Hence, people say "you need a basis for your intellectual position to be rational", but to me, morality is not based on rational thought.

    I don't believe a supercomputer A.I. could reach the moral positions that we do; I think it would really struggle to invent meaningful fundamental building blocks for morality, which for us just come from our biology. When we look at people who commit really horrible crimes, they are often just dysfunctional in some way. We have psychopaths who are simply wired differently and cannot understand why we think the way we do. Why would someone cry at seeing a stranger suffer? That doesn't make any sense, but it's how many of us are.

    Morality is often just you being you; the relativity of morality frames it as exactly that. You can be logical, but your base positions aren't logical; they're just you being you. Morality is not simply an intellectual position. My reasoning is based on feelings, which discounts any possibility of objectivity; my feelings aren't dependent on reasoning.

    Reasoning becomes a factor when we start to talk about the implications of my feelings. I may instinctively value loyalty, but we can create hypothetical scenarios which challenge how strong those feelings are. I may value loyalty, but we can create scenarios where my loyalty is causing me to make very bad decisions. That's the intellectual component of morality: interpretation, framing, decision-making and so on. I find all of this happens very organically regardless of your philosophical positions. Even for a normative relativist, I imagine it changes very little in how morality functions for that person.
  • Pfhorrest
    4.6k
    It seems to me Pfhorrest is a meta-ethical relativist, as long as he thinks that everyone has, in fact, the same terminal goals such that rational argument can always in principle result in agreement.bert1

    No, I just “have terminal goals” (i.e. take morality to be something*) that involves the suffering and enjoyment, pleasure and pain, of all people.

    Whether or not other people actually care to try to realize that end is irrelevant for whether that end is right. Some people may not care about others’ suffering, for instance; that just means they’re morally wrong, that their choices will not factor in relevant details about what the best (i.e. morally correct) choice is.

    *One’s terminal goals and what one takes morality to be are the same thing, just like the criteria by which we decide what to believe and what we take reality to be are the same thing. To have something as a goal, to intend that it be so, just is to think it is good, or moral, just like to believe something just is to think it is true, or real.
  • Gregory
    4.6k
    Whether completely amoral people were born that way or not is debatable. Some psychologists believe toddlers have free will, and the corollary to this is that toddlers form the personality they will later have. I wish this issue were clearer, but it's not.
  • Isaac
    10.3k
    I criticize the rules and make a great effort to be as sure as I can that they really are the correct rules.Pfhorrest

    And how do you carry out this criticism and effort? By following rules.
  • Isaac
    10.3k
    I'm just surprised to be so misread.Banno

    Really? Happens to me all the time.

    Isaac throws out reality every second day.Banno
  • Pfhorrest
    4.6k
    And how do you carry out this criticism and effort? By following rules.Isaac

    Which I in turn criticize and reformulate, ad infinitum.
  • Isaac
    10.3k

    Your ad infinitum is impossible. You must simply accept one set of rules; you do not have infinite mental resources to devote to actually questioning everything. You might have some whimsical theoretical notion of being able to question everything, but you cannot actually do so, which means that in practice you're selecting a set of rules and blindly following them. The fact that you might 'one day' question them when you get time is pragmatically irrelevant.
  • Pfhorrest
    4.6k
    You don't have to (and can't, and shouldn't) finish any infinite series of questioning before proceeding with your life. But being open to seeing problems with the rules you live by and revising them as needed, as often and for as long as needed, is the exact opposite of following them blindly.
  • Isaac
    10.3k
    But being open to seeing problems with the rules you live by and revising them as needed, as often and for as long as needed, is the exact opposite of following them blindly.Pfhorrest

    No, it isn't. If you do not question a rule then you are following it blindly. The fact that you might theoretically be open to questioning it sometime in the future, if you get time, is irrelevant. It just puts a gloss of 'cold, hard calculation' on the exact same gut feeling that everyone else is using.
  • Echarmion
    2.5k
    Honestly, I use meta-ethical relativism to say that moral positions don't have a truth value; they're not objectively true.Judaka

    That much I understand. But, in the case where you are faced with a moral dilemma, don't you then run into a performative contradiction? In order to solve the dilemma, you employ reasoning, and that reasoning will, presumably, reject some answers. What is that rejection if not assigning a truth value?

    I think that morality is a conflation of our biological proclivity for thinking in moral terms, the intellectual positions that we create, and the personal versus social aspects of morality. Hence, people say "you need a basis for your intellectual position to be rational", but to me, morality is not based on rational thought.Judaka

    From a descriptive perspective, I agree. Seen from the outside, morality is an evolved social capability of humans, probably based on our capacity for empathy, that is, the mirroring of feelings.

    I don't believe a supercomputer A.I. could reach the moral positions that we do; I think it would really struggle to invent meaningful fundamental building blocks for morality, which for us just come from our biology.Judaka

    This is an interesting scenario, actually. Is an A.I. independent from human morality even possible? An A.I. would, in the first instance, just be an ability to do calculations. In order to turn it into something we'd recognize as intelligence, we'd need to feed it with information, and that would presumably include our ideas on morality. Given that we don't have any intelligences to model an A.I. on other than our own, it seems likely that the outcome would actually be fairly similar in outlook to humans, at least in the first generations.

    Morality is often just you being you; the relativity of morality frames it as exactly that. You can be logical, but your base positions aren't logical; they're just you being you. Morality is not simply an intellectual position. My reasoning is based on feelings, which discounts any possibility of objectivity; my feelings aren't dependent on reasoning.Judaka

    But isn't it the case that, while you may intellectually realize that your basic moral assumptions, your moral axioms, are merely contingent, you are nevertheless employing them as objective norms when making your moral decisions? To me it seems rather analogous to the free will situation: You can intellectually conclude that free will is an illusion, but you cannot practically use that as a basis for your decisions.

    It seems to me that this dualism - that of the internal and the external perspective - is fundamental and unavoidable when decision-making is involved.

    Reasoning becomes a factor when we start to talk about the implications of my feelings. I may instinctively value loyalty, but we can create hypothetical scenarios which challenge how strong those feelings are. I may value loyalty, but we can create scenarios where my loyalty is causing me to make very bad decisions. That's the intellectual component of morality: interpretation, framing, decision-making and so on. I find all of this happens very organically regardless of your philosophical positions. Even for a normative relativist, I imagine it changes very little in how morality functions for that person.Judaka

    I would agree that, in general, your meta-ethical stance has limited bearing on how you make moral decisions in everyday life. We cannot reason ourselves out of the structures of our reasoning.
  • Pfhorrest
    4.6k
    This just seems to be an argument about what “blindly” means now. I’m taking that to mean what I call “fideism”: holding some opinions to be beyond question. You’re taking it to mean what I call “liberalism”: tentatively holding opinions without first conclusively justifying them from the ground up. But the latter is fine; it’s no criticism of me to say I’m doing that, and I’m not criticizing anyone else for doing that. It’s only the former that’s a problem, and you seem to want to impute that problematicness to me, perhaps because you conflate the two, as so many do. Just like you conflate objectivism (which entails liberalism) with transcendentalism (which entails fideism).
  • Judaka
    1.7k

    That much I understand. But, in the case where you are faced with a moral dilemma, don't you then run into a performative contradiction? In order to solve the dilemma, you employ reasoning, and that reasoning will, presumably, reject some answers. What is that rejection if not assigning a truth value?Echarmion

    Those answers rejected aren't being described as untrue; they're being judged in other ways. An emotional argument like "it is horrible to see someone suffering" for why you should not cause suffering may or may not be a logically correct argument; it is based on my assessment. Everything about my choice to call a thing moral or immoral is based on me: my feelings, my thoughts, my interpretations, my experiences. The conclusion is not a truth; the conclusion can be evaluated in any number of ways. Is it practical, pragmatic, fair? The options go on. For me, it is never about deciding what is or isn't true.

    As for A.I., I don't agree; intelligence doesn't require our perspective. I think it is precisely due to a lack of any other intelligent species that this is conceivable for people. It's much more complicated than being based on empathy; one of the biggest feelings morality is based on is fairness - even dogs are acutely aware of fairness, so it's not just an intellectual position. We are also a nonconfrontational species; people need to be trained to kill, not the other way around. All of these things play into how morality functions, and morality looks very different without them. An A.I. would not have these biases: it's not a social species that experiences jealousy, love, hate, and empathy, and it has no proclivity towards being nonconfrontational or seeing things as fair or unfair.

    But isn't it the case that, while you may intellectually realize that your basic moral assumptions, your moral axioms, are merely contingent, you are nevertheless employing them as objective norms when making your moral decisions?Echarmion

    I don't consider morality to be mainly an intellectual position; we can look at other species and recognise a "morality" to their actions. Lions have a clear hierarchy in their pride. There is a really interesting guy called Dean Schneider who raised some lions and spends a lot of time with them. Here's a video of what I'm about to talk about:
    https://www.youtube.com/watch?v=cnTlNKZYFjQ (check 1:40 specifically).

    He physically smacks a lion to teach it that clawing him is not okay, and explains that this is how the pride develops boundaries of right and wrong. It's okay to play around, but if you actually hurt me, that's not okay and I'll give you a smack. Surprisingly, the lions just accept it as fair. You see a similar thing with dog trainers: they explain that the dog is acutely aware of its position in the pack; it has a very specific sense of who should eat first, when it should look for permission from the pack leader to do things, and so on.

    I've heard that when rats wrestle each other for fun, the bigger rat will sometimes let the little rat win, because otherwise the little rat won't play anymore; it's boring to lose all the time. I draw parallels between these kinds of behaviours in animals and the behaviours we can see in humans. It's only much more complicated for humans due to our intelligence.

    As humans, we can go beyond mere instincts and intellectually debate morality, but that's superfluous to what morality is. Certainly, morality is not based on these intellectual debates or positions. I think people talk about morality as if they have come to all of their conclusions logically, but in fact, I think they would have ended up very much where they are even if they had barely thought about morality at all. One will be taught right from wrong in much the same way as lions and dogs.

    Since morality isn't based on your intellectual positions, it doesn't really matter if your positions are even remotely coherent. You can justify that suffering is wrong because you had a dream about a turtle who told you so, and it doesn't matter; you'll be able to navigate when suffering is or isn't wrong as easily as anyone else. The complexity comes not from morality but from interpretation, characterisation, framing, knowledge, implications, and so on.
  • Isaac
    10.3k
    I’m taking that to mean what I call “fideism”: holding some opinions to be beyond question. You’re taking it to mean what I call “liberalism”: tentatively holding opinions without first conclusively justifying them from the ground up. But the latter is fine; it’s no criticism of me to say I’m doing that, and I’m not criticizing anyone else for doing that. It’s only the former that’s a problemPfhorrest

    How is it a problem? The only problem with fideism that I can see is that one might be wrong about some belief and because one does not question it, one will persist in that falsity. Is there some other problem you're thinking of?
  • Pfhorrest
    4.6k
    How is it a problem? The only problem with fideism that I can see is that one might be wrong about some belief and because one does not question it, one will persist in that falsity. Is there some other problem you're thinking of?Isaac

    That is exactly the problem, yes.
  • Isaac
    10.3k
    That is exactly the problem, yes.Pfhorrest

    So how does the fact that you're open to them being questioned alter the issue? We've just agreed that the problem is related to whether you actually do question them, not whether you might do so in future if and when you get the time.
  • Pfhorrest
    4.6k
    So how does the fact that you're open to them being questioned alter the issue?Isaac

    Because if reasons to question them come up, I will. Someone who does otherwise won't. That's the "blindly" part of "blindly follow": turning a blind eye towards reasons to think otherwise.
  • Isaac
    10.3k
    Because if reasons to question them come up, I will. Someone who does otherwise won't. That's the "blindly" part of "blindly follow": turning a blind eye towards reasons to think otherwise.Pfhorrest

    But we've just established that the single issue is that you might be wrong about something and not correct that error. That is, you agreed, the only thing that is at fault with fideism. You're still not altering that by saying that you might question something in future if the matter arises.

    Notwithstanding that, you have not yet met the burden of demonstrating that your method here is at all practically achievable. When faced with simple moral decisions which do not yield the expected results ("don't lie", for example), you say that the 'right' moral decision is heavily context-dependent, the right choice for that person at that time in that context. If so, there are 7 billion people in several billion different contexts at several billion different times. Given the raw numbers, the onus is on you, I think, to demonstrate that this idea of yours is pragmatically any different from fideism. It seems clear to me that the vast majority of the time, in the vast majority of cases, your moral decisions will be made without going through this process, because the time within which a decision has to be made falls far short of the time it would take to reach anything like the kind of context-specific conclusion you're advocating.

    Basically, you'll go through life making almost all of your moral choices on the same gut-feeling, peer-group, social-norms basis that everyone else does, because you haven't the time or the mental bandwidth to actually do the calculation. The only difference is you get to act holier-than-thou simply on the grounds that you're open (one day, maybe) to changing your mind.
  • Echarmion
    2.5k
    Those answers rejected aren't being described as untrue; they're being judged in other ways. An emotional argument like "it is horrible to see someone suffering" for why you should not cause suffering may or may not be a logically correct argument; it is based on my assessment.Judaka

    Yeah but "it's horrible" is not the saying the same as "it's feels horrible to me". That's my point: We treat the outcome of any deliberation on morality as more than just emotion.

    Everything about my choice to call a thing moral or immoral is based on me: my feelings, my thoughts, my interpretations, my experiences. The conclusion is not a truth; the conclusion can be evaluated in any number of ways. Is it practical, pragmatic, fair? The options go on. For me, it is never about deciding what is or isn't true.Judaka

    Doesn't the ability to evaluate anything in any way require assigning truth values? Even the question "do I feel that this solution is fair" requires there to be an answer that is either true or false.

    As for A.I., I don't agree; intelligence doesn't require our perspective. I think it is precisely due to a lack of any other intelligent species that this is conceivable for people. It's much more complicated than being based on empathy; one of the biggest feelings morality is based on is fairness - even dogs are acutely aware of fairness, so it's not just an intellectual position. We are also a nonconfrontational species; people need to be trained to kill, not the other way around. All of these things play into how morality functions, and morality looks very different without them. An A.I. would not have these biases: it's not a social species that experiences jealousy, love, hate, and empathy, and it has no proclivity towards being nonconfrontational or seeing things as fair or unfair.Judaka

    How do you suppose an A.I. would gain consciousness without human input? It's a bunch of circuits. Someone has to decide how to wire them, and will thereby inevitably model the resulting mind on something. And in all likelihood, in order to create something flexible enough to be considered "strong A.I.", you'd have to set it up so it started unformed to a large degree, much like a newborn child.

    As humans, we can go beyond mere instincts and intellectually debate morality, but that's superfluous to what morality is. Certainly, morality is not based on these intellectual debates or positions. I think people talk about morality as if they have come to all of their conclusions logically, but in fact, I think they would have ended up very much where they are even if they had barely thought about morality at all. One will be taught right from wrong in much the same way as lions and dogs.

    Since morality isn't based on your intellectual positions, it doesn't really matter if your positions are even remotely coherent. You can justify that suffering is wrong because you had a dream about a turtle who told you so, and it doesn't matter; you'll be able to navigate when suffering is or isn't wrong as easily as anyone else. The complexity comes not from morality but from interpretation, characterisation, framing, knowledge, implications, and so on.
    Judaka

    I agree with all that, but it's notably an intellectual position looking at morality from the outside. It's not how morality works from the inside. Internally, you do have to keep your positions coherent, else you'll suffer from cognitive dissonance. Knowing, intellectually, that your moral decisions are ultimately based on feeling doesn't help you solve a moral dilemma. The position "it's right because I feel like it" is not intuitively accepted by the human mind.
  • Judaka
    1.7k
    Doesn't the ability to evaluate anything in any way require assigning truth values? Even the question "do I feel that this solution is fair" requires there to be an answer that is either true or false.Echarmion

    If I say that "I don't like bob", that's not something I put a truth value on but if you ask "is it true you don't like bob" I will say yes. So, it is not involved in my decision making in the event of a moral dilemma.

    How do you suppose an A.I. would gain consciousness without human input?Echarmion

    The A.I. is just an illustration of my point; no need to get too bogged down in the details. The point is that humans are biologically predisposed to think in moral terms; we are predisposed to have particular feelings about children, violence, fairness, pain, and all of this plays a part in how morality is developed. Often when it comes to meta-ethical relativism, there are worries about how morality is going to be able to function, given that meta-ethical relativism strips it of all authority and meaning. One of the ways it retains those things is through how morality functions organically in healthy people, due to the influence of our biology.

    As for the moral dilemma: when I listen to people talk about moral dilemmas, I hear "it's just wrong" and "what is being done is horrible" more than anything else. Not just on dilemmas but on actual moral issues, most people cannot explain why something is wrong without appealing to their feelings. People won't say "it's right because I feel like it"; obviously, feelings don't work that way. Feelings are highly complicated; reason and feelings aren't separate, they mix, and you can't take them apart and examine them.

    One great example of how morality works is the meat industry and dogs: many Asian countries eat dog, and many meat-eaters consider it truly awful; it crosses a line. My explanation is that the dog has acquired an image or status in some societies as "man's best friend". Certain cultures view dogs as loyal, loving, great pets, not food. This is where things become complex: the characterisation of the dog is what makes eating the dog evil. Mass-poison some rats and it's fine, but mass-poison dogs and you're a monster. The rat is a pest; the dog is a loyal and loving friend - how can you betray and eat a loyal friend? That's wrong.

    As for cognitive dissonance, it is naturally occurring; reducing it requires a conscious effort.
  • Edgy Roy
    19
    It is far better to act moral than it is to be called moral.
  • bert1
    1.8k
    No, I just “have terminal goals” (i.e. take morality to be something*) that involves the suffering and enjoyment, pleasure and pain, of all people.Pfhorrest

    Thanks, this is interesting. It seems to me that that is consistent with meta-ethical relativism. FWIW, I share these terminal goals. However, I don't think that other people who do not share these terminal goals have necessarily made a mistake (although they might have done). They are just my enemy. Do you think they have made a mistake such that they could be reasoned with?

    Whether or not other people actually care to try to realize that end is irrelevant for whether that end is right. Some people may not care about others’ suffering, for instance; that just means they’re morally wrong, that their choices will not factor in relevant details about what the best (i.e. morally correct) choice is.Pfhorrest

    It means they are morally wrong from your or my perspective. But I can't get away from the need to specify a perspective when evaluating the truth of moral claims. What is right and wrong just changes depending on whose point of view one takes. This just seems like a fact that follows from the idea that people can simply have divergent terminal goals.

    *One’s terminal goals and what one takes morality to be are the same thing, just like the criteria by which we decide what to believe and what we take reality to be are the same thing. To have something as a goal, to intend that it be so, just is to think it is good, or moral, just like to believe something just is to think it is true, or real.Pfhorrest

    Yes, I agree with that.
  • Pfhorrest
    4.6k
    Do you think they have made a mistake such that they could be reasoned with?bert1

    I think they've made a mistake, but I don't know that they can be reasoned with. It seems to me completely analogous to someone like a solipsist, or anyone else who doesn't believe that there is a reality apart from themselves that can be known from sense experience (whether that be because they think reality is something completely apart from sense experience, or because they think there is nothing to it besides their own). I don't know if I could reason a solipsist out of their position (I have ideas for how to try, but it really depends on them listening), but I think they're wrong; and likewise someone who denies that morality is something apart from their own interests that has to do with hedonic experiences like pain and pleasure (whether that be because they think morality is something completely apart from such experiences, or because they think there is nothing to it but their own).

    EDIT: My reasons for rejecting solipsism, transcendent reality, egotism, and transcendent morality are all the same pragmatic ones: if any of those are true, we could not know them to be true, nor know them to be untrue, but we also could not help but act on the assumption that either they are or they aren't. If we assume any of them are true, then we just run into an immediate impasse from which there is no reasoning out; while if we assume they're all false, and that both reality and morality are beyond merely our individual selves but accessible to all of our experiences, then we can actually start making practical progress sorting out which things are more or less likely to be real or moral. It might always be the case that actually nothing is either, but we can never be sure of that; we can only either assume it prematurely and so give up, or assume the contrary and keep on trying to figure out what each is.