• Christoffer
    1.8k
    I've kept working on an argument I posted long ago that has to do with supported and unsupported belief as a basis for morality, so I'd like to post the current version here for discussion. The idea of the argument is to underline a way of thinking that leads to acting with good morals. It is not about moral absolutism, as it doesn't answer what to do but how to understand what to do. Since morals shift with the times, there has to be a way to find the best possible morals for a given time, with the help of that time's knowledge and information.

    EDIT: The argument is outdated and invalid based on feedback, so I will update it in time.

    The argument is built in parts, each establishing premises for its final conclusion.

    First, morality and defining levels of acceptable belief.

    Belief
    p1 Choices made from unsupported belief have a high probability of chaotic consequences.
    p2 Supported belief with evidence has a high probability of arriving at calculated consequences.
    p3 Chaotic consequences are always less valuable to humanity than those able to be calculated.

    Conclusion: Unsupported belief is always less valuable to humanity than supported belief.

    Morality based on value
    p1 What is valuable to humans is that which is beneficial to humanity.
    p2 What is beneficial to a human is that which is of no harm to mind and body.
    p3 Good moral choices are those that do not harm the mind and body of self and/or others.

    Conclusion: Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity.

    Combining belief and morality
    p1 Unsupported belief is always less valuable to humans and humanity than supported belief.
    p2 Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity.

    Conclusion: Moral choices out of unsupported beliefs are less valuable and have a high probability of no benefit for humans and humanity.

    So how do we arrive at what is valuable and beneficial to humans and humanity, if that is what good moral choices should provide?

    The scientific method for calculating support of belief
    p1 The scientific method (verification, falsification, replication, predictability) is always the best path to objective truths and evidence that are outside of human perception.
    p2 Human perception is not adequate to decide what is true or be used as evidence for truths.
    p3 A belief starts with a hypothesis based in internal human perception, experience and what has been learned.
    p4 A hypothesis becomes external truth when it has been put through the scientific method and has survived it.

    Conclusion: When a belief has been put through the scientific method and survived as truth outside of human perception, it is a human belief that is supported by evidence.

    A scientific mindset
    p1 A person who day to day lives and makes choices out of ideas and hypotheses without testing and questioning them is not using the scientific method for their day-to-day choices.
    p2 A person who day to day lives and makes choices by testing and questioning their ideas and hypotheses is using the scientific method for their day-to-day choices.

    Conclusion: A person using the scientific method in day to day thinking is a person living by a scientific mindset, i.e. a scientific mind.


    Leading to the final argument.

    A scientific mind as a source for moral choice
    p1 Moral choices out of unsupported beliefs are less valuable and have a high probability of no benefit for humans and humanity.
    p3 When a belief has been put through the scientific method and survived as truth outside of human perception, it is a human belief that is supported by evidence.
    p4 A person using the scientific method in day to day thinking is a person living by a scientific mindset, i.e. a scientific mind.

    Final conclusion: A person living by a scientific mind has a higher probability of making good moral choices that benefit humans and humanity.

    Therefore, to have good morals is to have a scientific mind and to always seek supported belief as the foundation for moral choices.

    Good morals cannot be absolute, they can only be a probability. To use a scientific mind does not equal always making good moral choices, but always maximizing the probability of making good moral choices. Living with a higher probability of good moral choices is to be a person with good morals.



    This argument is a work in progress and is changing as objections are raised.
  • DingoJones
    2.8k
    Belief
    p1 Choices made from unsupported belief have a high probability of chaotic consequences.
    p2 Supported belief with evidence has a high probability of arriving at calculated consequences.
    p3 Chaotic consequences are always less valuable to humanity than those able to be calculated.
    Conclusion: Unsupported belief is always less valuable to humanity than supported belief.
    Christoffer

    Your conclusion should be that unsupported belief has a high probability of being less valuable to humanity (where chaotic consequences are bad for humanity). The “always” doesn't follow from the rest of your equation.
    Also, you can have calculated consequences which are bad for humanity, so P3 doesn't follow either.
  • DingoJones
    2.8k
    Morality based on value
    p1 What is valuable to humans is that which is beneficial to humanity.
    p2 What is beneficial to a human is that which is of no harm to mind and body.
    p3 Good moral choices are those that do not harm the mind and body of self and/or others.
    Conclusion: Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity.
    Christoffer

    P1 is not true at all. Many large groups of humans value things that are not beneficial to all humanity. It's arguable that humanity as a whole doesn't value what is beneficial to humanity as a whole, so I would say you need more support for p1.
    P2 seems weak as well, as it's quite a stretch to claim that everything that does no harm to mind and body is beneficial to humanity. Don’t you think there are some things which do no body/mind harm but do not necessarily benefit mankind? Or vice versa... the sun harms your body but is beneficial to humanity.
  • Christoffer
    1.8k


    Thanks for the replies.

    Your conclusion should be that unsupported belief has a high probability of being less valuable to humanity (where chaotic consequences are bad for humanity). The “always” doesn't follow from the rest of your equation.
    Also, you can have calculated consequences which are bad for humanity, so P3 doesn't follow either.
    DingoJones

    Makes sense.

    P1 is not true at all. Many large groups of humans value things that are not beneficial to all humanity. It's arguable that humanity as a whole doesn't value what is beneficial to humanity as a whole, so I would say you need more support for p1.DingoJones

    What if I change it to "objectively valuable"? It seems that, within a context where something is objectively valuable for one person, the benefit for the many includes that one person. So for something to have value objectively it needs to be of benefit to the whole? Or am I attacking this premise from the wrong direction?

    P2 seems weak as well, as it's quite a stretch to claim that everything that does no harm to mind and body is beneficial to humanity. Don’t you think there are some things which do no body/mind harm but do not necessarily benefit mankind? Or vice versa... the sun harms your body but is beneficial to humanity.DingoJones

    What things are beneficial to humanity and humans that do harm to the body or mind? The sun only does damage when you are exposed to it too much, which means overexposure to the sun is not beneficial to humans and humanity, while normal exposure to the sun is.

    So what is beneficial is valuable, as too much exposure to the sun is neither beneficial nor valuable. The premise also specifically points to one human, so not humanity as a whole, but it could be applied with expansion to it. But it's hard to see anything beneficial to a human that is at the same time harming the body and/or mind. Even euthanasia can't be harming the mind or body if the purpose is to relieve the body or mind from suffering.
  • DingoJones
    2.8k
    What if I change to "objectively valuable"? Seems that within a context of objectively valuable for one the benefit for the many includes that one person. So to have a value objectively it needs to be of benefit for the whole? Or am I attacking this premise in the wrong direction?Christoffer

    Well, I'm not sure how that would change the fact that there are exceptions to your claim that haven't been accounted for. What exactly do you mean by objectively valuable?

    What things are beneficial to humanity and humans that do harm to the body or mind? The sun does only damage when exposed to it too much, so that means overexposure to the sun is not beneficial to humans and humanity while normal exposure to the sun is.

    So what is beneficial is valuable as too much exposure to the sun is not beneficial or valuable. The premise also specifically points to one human, so not humanity as a whole, but could be applied with expansion to it. But it's hard to see anything beneficial to a human that is at the same time harming the body and/or mind. Even euthanasia can't be harming the mind of body if the purpose is to relieve the body or mind from suffering.
    Christoffer

    Ya, that example doesn't hold up. Ok, so let me try another in the interest of testing your claim further. I suspect your syllogism can be applied to these as well, so I'm prepared to stand corrected on that last criticism, but I'll give it a shot.
    What if there are two harms, smoking and stress? The smoking relieves the stress but harms the body; then again, so would the stress. In that case, the smoking is harmful to the body but it's also beneficial to the human.
    On a macro scale, what about decisions that benefit more people than they harm? Wouldn't any kind of utilitarian calculation be an exception to your rule?
  • Christoffer
    1.8k
    Well Im not sure how that would change that there are exceptions to your claim that haven't been accounted for. How exactly do you mean objectively valuable?DingoJones

    By objectively valuable I mean things that do not have to do with preferences but with necessities: the value of things that reduce harm and suffering while increasing well-being. A smoker values smoking, but it isn't objectively valuable to that person, as the smoking harms him. The objectively valuable thing is to stop smoking.

    p1 What is objectively valuable to humans is that which is beneficial to humanity.

    The value of things that do no harm and increase well-being is that which is beneficial to humanity. What is good for one is good for all.

    Or maybe this premise needs to be phrased differently? Maybe the intention of the premise is weakened by its rhetoric? What might be a better premise that says what is good for one is good for all?
    Maybe I need to rephrase the entire value-based morality argument?

    What about if there are two harms, smoking and stress. The smoking relieves the stress, but harms the body, but so would stress. In that case, the smoking is harmful to body but its also beneficial to the human.DingoJones

    In this example, I would argue that the long term is an important factor as well: smoking relieves stress, but the harm isn't visible until later. If a cigarette directly caused instant harm, no one would use it to relieve stress; if someone got cancer after a single cigarette smoked to relieve the harm of stress, no one would smoke to relieve stress. Human ignorance is the only thing arguing that smoking is good for them. What about yoga? Yoga has scientific support for relieving stress, so why choose a cigarette to battle stress when yoga has no side effects? By breaking down acts we can find, to the best of our ability and time, what is most beneficial to humanity.

    From here, one could object that there are some things people feel are beneficial to them even if those things harm them. Meaning, a smoker just likes to suck smoke and would gladly trade a few years of their life to reap the benefit of that smoke rather than doing yoga. But that would not be beneficial to others: people who need to deal with the consequences of this person's declining health or death, or people affected by secondhand smoke.

    So they are related. But in terms of one person, what is beneficial to them is not always something they agree with, yet that still doesn't take away the fact that beneficial in an objective sense needs to be defined as not doing harm to body/mind. It's beneficial to be in good health, and doing something that has the consequence of putting you in bad health is not beneficial to you.

    p2 What is beneficial to a human is that which is of no harm to mind and body.

    The counter-argument has to prove that there are beneficial things that do harm to the mind and/or body. What things are good for us, short and long term, that are harmful to the mind and/or body?

    On a macro scale, what about decisions that benifit more people than it harms. Wouldnt any kind if utilitarian calculation be an exception to your rule?DingoJones

    This one is trickier, but I don't think it's really an exception. It can be argued as an extension of the argument, and I think the final conclusion I'm trying to build to has to do with using a scientific mindset in order to calculate the best moral choice, and that the intention to use the method will lead to the most probable good moral choice. So in terms of utilitarianism, if you calculate case by case that killing one to save ten actually has merit, it is the good moral choice to make. The method is supposed to bypass absolutism and utilitarianism, as both are valid and invalid depending on the individual case. It's a form of epistemic responsibility: not to be enslaved by broad moral concepts and/or teachings, but to calculate each situation with a scientific-method mindset based on basic objective properties of benefit and harm. I guess it's a form of nonconsequentialism?

    It's more about the probability of good or bad rather than objectively good or bad. You calculate the probability of an outcome, choose the option with the highest probability of good, and in calculating and choosing that way, you are acting with good morals.

    Maybe this moral theory needs another name. Something like Probabilitarianism (though that term belongs to another area of philosophy), or Moral Probabilitarianism?
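
    To make "calculating the probability of most good" a bit more concrete, here is a minimal sketch in Python of the kind of comparison I have in mind. The options, probabilities and harm scores are made up purely for illustration (reusing the smoking/yoga example from above); nothing here is an actual measurement, only the shape of the calculation.

    # Toy sketch of "Moral Probabilitarianism": prefer the option whose
    # probability-weighted harm (to body/mind of self and others) is lowest.
    # All options, probabilities and harm scores are invented for illustration.

    options = {
        "smoke to relieve stress": [
            (0.9, 2.0),   # (probability, harm score): short-term relief, long-term damage
            (0.1, 8.0),   # serious illness
        ],
        "do yoga to relieve stress": [
            (0.8, 0.5),   # stress relieved, no side effects
            (0.2, 1.5),   # stress remains
        ],
    }

    def expected_harm(outcomes):
        """Probability-weighted harm of one option."""
        return sum(p * harm for p, harm in outcomes)

    for name, outcomes in options.items():
        print(f"{name}: expected harm = {expected_harm(outcomes):.2f}")

    best = min(options, key=lambda name: expected_harm(options[name]))
    print("Lowest expected harm:", best)

    The point is only that the moral work lies in doing the comparison as honestly as the available knowledge allows, not in the numbers themselves.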
  • DingoJones
    2.8k


    Ok, so your central claim seems to be that what is good for the individual is what's good for the group, as long as the good is defined as not doing harm to the body/mind. Is that correct?
  • Christoffer
    1.8k
    Ok, so your central claim seems to be that what is good for the individual is what's good for the group, as long as the good is defined as not doing harm to the body/mind. Is that correct?DingoJones

    Central claim to that part of the argument, yes, as what is good for the individual will eventually also be good for the group. What is good for the machine is good for each of the cogs in that machine, and what is good for one cog will benefit the entirety of that machine. So a good moral choice is that which is good for the individual and the group. The idea for this part of the argument is that what is considered morally good can be calculated in a basic form in terms of harm/well-being; the variation in how we know what is good is the real problem with morality, which is tackled in the other parts of the argument.
  • DingoJones
    2.8k


    Have you read “the Moral Landscape” by Sam Harris?
  • ztaziz
    91
    I'd agree with the OP partially.

    Morality is contained within the universe. It does not exist in purely, space, though it could, in this context it doesn't.

    With which the universe has enough good to work.

    I'm experiencing an event where this is explained to me visually.

    An analogy...

    If Gods came down from heavens, the law that prevents them destroying/dominating everything is contained within the logic of 'sending of Gods'. Suddenly we all start having an hallucination of this one God, and he cannot attack us more than some petty attack for how it lives in such condition.

    Another analogy. If suddenly a glitch happened in a video, of a man who takes the place of certain characters, if not logical, like a hallucination - phantasmal - then it would break the video.
  • Christoffer
    1.8k
    Have you read “the Moral Landscape” by Sam Harris?DingoJones

    I have not, but I'm familiar with his thinking. The problem is that he tries to expand the idea of the objective to areas where that is questionable (and he's also pushing the argument to favor his anti-Islam ideas, which seems to be his primary goal rather than creating a moral theory). And the heavy focus on neurological facts about well-being seems to demand that we fully understand the mind before his ideas can be applied, which we can't yet. So before we know everything about the mind, he can't really claim that science concludes what is moral. That's why I'm trying to tackle this from another direction.

    I'm grounding this more in the idea of epistemic responsibility as a foundation for morality: that choices should be made by the individual opening their mind to a scientific method of questioning their choices before a choice is made, scrutinizing all options until the option that makes the most sense in terms of well-being can be decided on.

    It's the trolley problem. Five people against one. You have 30 seconds to decide, so you choose according to utilitarianism because that makes the most sense given the situation; even pushing the fat man does. But with more time you can ask: who is the one person against the other five? Is that person someone of such importance to humanity that the utilitarian approach is to let the five die and the one survive to do the deeds that make that the better utilitarian choice? It's a variable method of morality based on the probability of a very basic definition of well-being.
  • DingoJones
    2.8k


    That book is very similar to what you’ve talked about so far. You are operating around one of the “peaks” of the moral landscape, you just do not realise it. I highly recommend the book, as it is essentially the same as what you are proposing here. Also, you are wrong about Harris’s focus on Islam over moral theory. Moral theory is his primary focus, which is why he takes issue with religions.
    Anyway, let's focus on one thing at a time. It's always a temptation, when presenting a theory, to jump around between all the explanations and arguments and supporting arguments and premises because you are uniquely familiar with them. I'm not though, so one thing at a time.
    For example, you've jumped into utilitarianism and some other concepts and I'm not really sure how that even matters as of yet.
    So, as to "good for the individual is good for the group"... let me set up a scenario.
    Say there is a threat to a person, or maybe their family: a murderer has declared you or your family his next target. He has done this before with other targets, and has always followed through with his threat. I would say you are protecting yourself from mind/body harm by killing the murderer before he kills you. So that seems like it qualifies as good in your view, since the individual mind/body harm is at stake. Is that right?
  • ztaziz
    91
    Man has a right to steal bar other men.

    There's nothing immoral about theft, which is in fact good, or moral stimulating. Murder is possibly wrong and theft too is possibly wrong but generally, no. It is part of life of consciousness.
  • deletedusercb
    1.7k
    I would need to see evidence that people with scientific minds are as empathetic as other people, have emotional intelligence, have good introspective skills so they know what biases they have when dealing with the complicated issues, where testing is often either unethical or impossible to perform, that are raised around human beings. And I am skeptical that the scientific minds are as good, in general, as other people when it comes to these things. I mean, jeez, look at psychiatry and pharma related to 'mental illness', that's driven by people with scientific minds and it is philosophically weak and also when criticized these very minds seem not to understand how skewed the research is by the money behind it, the PR in favor of it, selective publishing and even direct fraud. Scientific minds seem to me as gullible as any other minds, but further often on the colder side.
  • Christoffer
    1.8k
    It's always a temptation, when presenting a theory, to jump around between all the explanations and arguments and supporting arguments and premises because you are uniquely familiar with them. I'm not though, so one thing at a time.DingoJones

    Agreed.

    So that seems like it qualifies as good in your view, since the individual mind/body harm is at stake. Is that right?DingoJones

    In a sense, yes, but the mind/body harm is only a springboard toward how to tackle the situation. So, we can first assess that because we have a murderer who always follows through on his threats, there is a real possibility that my family and I will be harmed; the murderer is acting out of bad morals, and I can prevent this act by doing the same kind of harm. Now, if the choice were either to be killed or to kill, we could easily conclude that the one killing the other to prevent a killing is the best moral choice. But reality is never that black and white, and what I propose is that we have an obligation to gather as much "data" as possible about the situation to know which choice is the moral one.

    1. Why does the killer want to kill me and the family?
    2. What is the timeframe for me to act upon knowledge of the threat?
    3. Are there any other preventive measures that can be taken instead of killing the killer?
    4. If other actions are taken to prevent the killer, will the killer keep trying until he succeeds?


    1. If the killer wants to kill me and my family because of something I have done to him, I must have already done something to justify this action from him, and if so, is it proportional, reasonable? Is it out of rage or out of thoughts of justice? Or has the killer chosen me and my family randomly, and is he acting out of mental health issues?

    2. Will this act of killing me and my family happen five minutes from now, a day from now or a week from now? If the answer is unclear, or if it's just a minute before the killer bursts into my home, then acting in deadly defense is justifiable since there's no time to find other options. If it's a week from now, I have an obligation to seek answers that can help me decide whether killing the killer is the moral thing to do or not, and killing him makes less sense in such a long timeframe.

    3. Can I call the police? Can I get protection? If I know the killer's whereabouts well enough that I could kill him, then I can take other precautions and actions to stop the killer instead of killing him.

    4. If I have options to prevent the killer, but the killer will always return and try again, or do anything to succeed, then I have exhausted all the options and would need to permanently stop him.

    Taking all of these into consideration, you can find the best moral action to take. That could be to kill the killer, based on the parameters of the problem, but it could also be that the initial thought was to kill the killer while there were options only seen when looking closer at the problem. And even if there were such options, if the timeframe was too short it's not a morally bad thing to kill the killer, since there wasn't any time to act upon further research.

    The idea of the method is to always ask questions and research the moral choice to be made, and that this act of research is the morally good thing to do. The intention of figuring out the best outcome with respect to harm to the body/mind of everyone involved, including the killer, is the morally good path to take.
  • Christoffer
    1.8k
    I would need to see evidence that people with scientific minds are as empathetic as other people, have emotional intelligence, have good introspective skills so they know what biases they have when dealing with the complicated issues, where testing is often either unethical or impossible to perform, that are raised around human beings. And I am skeptical that the scientific minds are as good, in general, as other people when it comes to these things. I mean, jeez, look at psychiatry and pharma related to 'mental illness', that's driven by people with scientific minds and it is philosophically weak and also when criticized these very minds seem not to understand how skewed the research is by the money behind it, the PR in favor of it, selective publishing and even direct fraud. Scientific minds seem to me as gullible as any other minds, but further often on the colder side.Coben

    I see your point and I agree that there are problems with viewing scientists as morally good, but that's not really the direction I'm coming from. It's not that science is morally good, it's that the method of research used in science can create a foundation of thinking in moral questions. Meaning, that using the methods of verification, falsifiability, replication and predictability in order to calculate the most probable good choice in a moral question respects an epistemic responsibility in any given situation.

    It does not simplify complicated issues and does not make a situation easy to calculate, but the method creates a morally good framework to act within rather than adhering to moral absolutes or utilitarian number calculations. So a scientific mind is not a scientist, but a person who uses the scientific method to gain knowledge of a situation before making a moral choice. It's a mindset, a method of thinking, borrowed from the scientific method used by scientists.
  • DingoJones
    2.8k


    Sure, let's say that all moral due diligence is done in that scenario, so, as you laid it out, that act is justified morally; it is good for the individual. To be clear, I mean that in the sense that the person is morally justified to murder that guy first, basically morally permissible vigilantism. Are you agreeing that under a certain set of circumstances, after all due consideration of all options (there is a scenario where police are not the best option, for example) etc., it's good (avoiding mind/body harm) to go kill this guy?
  • Christoffer
    1.8k
    Are you agreeing that under a certain set of circumstances, after all due consideration of all options (there is a scenario where police are not the best option, for example) etc., it's good (avoiding mind/body harm) to go kill this guy?DingoJones

    If the inductive thinking about the situation leads to the best option being to kill the killer, and the killer doesn't have any justification for that killing other than malice or mental illness that is impossible to change, then yes, it is justified since you are defending lives from a morally bad choice another is taking.

    But the research into the situation also requires understanding the reasons the killer has for killing you and your family. So what if you actually caused the death of the killer's family or other people around him? Then surely the killer has morality on his side in taking out that justice. Well, not really, since it's an act of retaliation, and such an act doesn't hold up in terms of harm, since my family didn't do anything, I did. And there isn't really anything to say that my death, if I've done such a thing, is bad. But that requires insight into who I am today compared to who I was when I caused this killer's family to die. So it almost always becomes a bad moral choice when balancing factors in terms of retribution against someone. The killer also has the obligation to validate his actions through this method of thinking. Will I kill more families? Am I a changed man? Is the better act for him to propose that I help others as justice for all the harm I caused?

    But if I didn't do anything and this killer is on his way to kill my family out of malice or mental illness, and I have no help from others and no option other than to act in the situation to defend other lives (my family and myself in this case), then there's no time to change the laws of mental health care etc. and the only option is to kill the killer. The morally good thing to do here is to kill, but also perhaps to push for better mental health care so that situations like these don't happen to other families, and also to argue for better handling by the police and protectors who couldn't handle the situation I ended up in.

    So, even after the act of killing the killer, I could find further moral actions to be taken that are proxy-choices to the initial moral choice of killing him.
  • DingoJones
    2.8k
    If the inductive thinking about the situation leads to the best option being to kill the killer, and the killer doesn't have any justification for that killing other than malice or mental illness that is impossible to change, then yes, it is justified since you are defending lives from a morally bad choice another is taking.Christoffer

    Ok, so does that individual good translate to the group? I would argue that it doesn't, that the group consideration is different since now you also have to weigh the cost to the group, which you never have to do with the individual consideration. That's why we have laws against vigilantism: because people can lie about their moral reasons or moral diligence in concluding that killing the murderer is correct. Hopefully the possibilities are fairly obvious.
    So that would be an example of what's good for the individual not being good for the group.
    I think that this part of your argument is foundational, and it will all fall apart unless you can alter the premise to exclude exceptions to the rule like we did above.
  • Christoffer
    1.8k
    Ok, so does that individual good translate to the group? I would argue that it doesn't, that the group consideration is different since now you also have to weigh the cost to the group, which you never have to do with the individual consideration. That's why we have laws against vigilantism: because people can lie about their moral reasons or moral diligence in concluding that killing the murderer is correct. Hopefully the possibilities are fairly obvious.
    So that would be an example of what's good for the individual not being good for the group.
    I think that this part of your argument is foundational, and it will all fall apart unless you can alter the premise to exclude exceptions to the rule like we did above.
    DingoJones

    Yes, but it is good for the group if I defend the group (more people, i.e. the family and possibly other families after mine). There are also proxy-choices: if I try to help people with mental illness not be handled wrongfully by society in a way that makes them end up like the killer, then I follow up my killing of this person by helping others not end up in a situation of being killed in self-defense. The choices accumulate, and if more and more people act through the method I propose, they all help each other.

    The act of killing is also, in a sense, forced vigilantism. The moral choice of killing the killer only occurs because the police and other systems fail to protect. Killing the killer is only a last resort when looking at the options in front of you. So lying about moral reasons is in itself morally bad, and vigilante actions fail when using the method I propose. It's not an act of vigilantism if it's the only option left to protect others from harm. But it is vigilantism if done without regard to other options and solutions.

    Doing something as a vigilante is to act against the possibility of other options. You can't be a vigilante if there isn't a choice to act outside of a legal system and second, you can't be a vigilante if there isn't a system to begin with. To act as a vigilante is to actively act against a system that's supposed to protect you. If such a system fails or doesn't exist, then you are not a vigilante.

    Think about a real situation with the current legal system, police and everything we have for justice. No court would charge you with murder or call you out as morally wrong if the police failed to act, there were no protective measures against a known killer whose whereabouts were known, and the threat of the killer acting out his threats was a concrete truth based on previous killings. Your action is, in that case, not the act of a vigilante, since there was no system to balance your act against. If the system was there and worked, then the police would have acted upon it, and if the police were misled and you ended up in a situation of self-defense, then it would be self-defense, not vigilantism.
  • Echarmion
    2.5k
    Morality based on value
    p1 What is valuable to humans is that which is beneficial to humanity.
    p2 What is beneficial to a human is that which is of no harm to mind and body.
    p3 Good moral choices are those that do not harm the mind and body of self and/or others.
    Conclusion: Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity.
    Christoffer

    I am not sure why you're equating benefit and value in P1. Both "beneficial" and "valuable" are value judgements, and there doesn't seem to be any obvious reason to use one term or the other.

    Furthermore, what you mean by "humanity" remains vague. Is humanity the same as "all current humans"? When you write "valuable to humans" do you mean all humans or just some?

    In P2, it's questionable to define a benefit as the mere absence of harm, but it's not a logic problem. What is a logic problem is that P1 talks about benefits to humanity, and p2 about benefits to a single human. That gap is never bridged. It shows in your conclusion, which just makes one broad sweep across humans and humanity.

    P3 is of course extremely controversial, since it presupposes a specific subset of utilitarianism. That significantly limits the appeal of your argument.

    Combining belief and morality
    p1 Unsupported belief is always less valuable to humans and humanity than supported belief.
    p2 Good moral choices are those considered valuable to humans because they are beneficial to humans and humanity.
    Conclusion: Moral choices out of unsupported beliefs are less valuable and have a high probability of no benefit for humans and humanity.
    Christoffer

    Again, I am confused by your usage of valuable and beneficial here. Since P1 already talks about what's valuable, it doesn't combine with P2, which defines value in terms of benefit. So the second half of P2 is redundant.

    The scientific method for calculating support of belief
    p1 The scientific method (verification, falsification, replication, predictability) is always the best path to objective truths and evidence that are outside of human perception.
    Christoffer

    I have to nitpick here: the scientific method works entirely based on evidence within human perception. It doesn't tell us anything about what's outside of it. The objects science deals with are the objects of perception. What the scientific method does is eliminate individual bias, which I assume is what you meant.

    A scientific mindset
    p1 A person that day to day live and make choices out of ideas and hypotheses without testing and questioning them is not using a scientific method for their day to day choices.
    p2 A person that day to day live and make choices out of testing and questioning their ideas and hypotheses is using the scientific method for their day to day choices.
    Conclusion: A person using the scientific method in day to day thinking is a person living by a scientific mindset, i.e a scientific mind.
    Christoffer

    That's not a syllogism. Your conclusion is simply restating P2, so you can omit this entire segment in favor of just defining the term "scientific mind".

    A scientific mind as a source for moral choice
    p1 Moral choices out of unsupported beliefs are less valuable and have a high probability of no benefit for humans and humanity.
    p3 When a belief has been put through the scientific method and survived as truth outside of human perception, it is a human belief that is supported by evidence.
    p4 A person using the scientific method in day to day thinking is a person living by a scientific mindset, i.e a scientific mind.

    Final conclusion: A person living by a scientific mind has a higher probability of making good moral choices that benefit humans and humanity.
    Christoffer

    While I understand what you want to say here, the premises just don't fit together well. For example P1 is talking only about what is less valuable and has a high probability of no benefit. It's all negative. Yet the conclusion talks about what has a high probability for a benefit, i.e. it talks about a positive. And p4 really doesn't add anything that isn't already stated by p3.

    Your conclusion is that knowing the facts is important to making moral judgement. That is certainly true. Unfortunately, it doesn't help much to know this if you are faced with a given moral choice.

    What you perhaps want to argue is that it's a moral duty to evaluate the facts as well as possible. But that argument would have to look much different.
  • tim wood
    8.7k
    This argument is a work in progress and is changing as objections are raised.Christoffer

    And a handsome work it is, too! But I wonder: many of the legs holding up your argument are either themselves unsupported claims or categorical in tone when it seems they ought to be conditional. In terms of your conclusions it may not matter much. The question that resounds within, however, is of how much relative value a "scientific mind" is with respect to the enterprise of moral thinking. It's either of no part, some part, or the whole enchilada. If it's not the whole thing, then what are the other parts?
  • deletedusercb
    1.7k
    I see your point and I agree that there are problems with viewing scientists as morally good, but that's not really the direction I'm coming from. It's not that science is morally good, it's that the method of research used in science can create a foundation of thinking in moral questions. Meaning, that using the methods of verification, falsifiability, replication and predictability in order to calculate the most probable good choice in a moral question respects an epistemic responsibility in any given situation.Christoffer
    In a sense I wasn't questioning whether they are morally good, but whether they have all the necessary kinds of skills and knowledge needed to make decisions.
    It does not simplify complicated issues and does not make a situation easy to calculate, but the method creates a morally good framework to act within rather than adhering to moral absolutes or utilitarian number calculations. So a scientific mind is not a scientist, but a person who uses the scientific method to gain knowledge of a situation before making a moral choice. It's a mindset, a method of thinking, borrowed from the scientific method used by scientists.Christoffer
    My concern here is that the scientific mind tends to ignore things that are hard to track and measure. For example, let's take a societal issue like drug testing in the workplace. Now, a scientist can readily deal with the potential negative issue of false positives. This is fairly easy to measure. But the very hard to track effects of giving employers the right to demand urine from their employees, or teachers/administrators the right to demand that from students, may be very significant over the long term and in subtle but important ways, and are often, in my experience, ignored by the scientific mind. And I am thinking of that type of mind in general, not just scientists, including non-scientists I encounter in forums like this. A lot of the less easy to measure effects, for example, tend to be minimized or ignored.

    A full range mind uses a number of heuristics, epistemologies and methods. Often scientific minds tend to not notice how they also use intuition for example. But it is true they do try to dampen this set of skills. And this means that they go against the development of the most advanced minds in nature, human minds, which have developed, in part because we are social mammals, to use a diverse set of heuristics and approaches. In my experience the scientific minds tend to dismiss a lot of things that are nevertheless very important and have trouble recognizing their own paradigmatic biases.

    This of course is extremely hard to prove. But it is what I meant.

    A scientific mind, a good one, is good at science. Deciding how people should interact, say, or how countries should be run, or how children should be raised requires, to me, at the very least, also skills that are not related to performing empirical research, designing test protocols, isolating factors, coming up with promising lines of research and being extremely well organized when you want to be. Those are great qualities, but I think good morals or patterns of relations need a bunch of other skills and ones that the scientist's set of skills can even dampen. Though of course science can contribute a lot to generating knowledge for all minds to weigh when deciding. And above I did describe the scientific mind as if it was working as a scientist. But that's what a scientific mind is aimed at even if it is working elsewhere since that is what a scientific mind is meant to be good at.
  • TheMadFool
    13.8k
    The scientific method consists of the following:

    1. collecting unbiased data
    2. analyzing the data objectively to look for patterns
    3. formulating a hypothesis to explain observed patterns

    How exactly do these 3 steps relate to ethics?

    What would qualify as unbiased data in ethics? Knowing how people will think/act given a set of ethical situations.

    What is meant by objective analysis of data and what constitutes a pattern in the ethical domain? Being logical should make us objective enough. Patterns will most likely appear in the form of tendencies in people's thoughts/actions - certain thoughts/actions will be preferred over others. What if there are no discernible patterns in the data?

    What does it mean to formulate a hypothesis that explains observed patterns? The patterns we see in the ethical behavior of people may point to which, if any, moral theory people subscribe to - are people in general consequentialists? Do they adhere to deontology? Both? Neither? Virtue ethicists? All?

    Suppose we discover people are generally consequentialists; can the scientific method prove that consequentialism is the correct moral theory? The bottom line is that the scientific method applied to moral theory only explains people's behavior - are they consequentialists? Do they practice deontological ethics? And so forth.

    In light of this knowledge (moral behavioral patterns) we may be able to come up with an explanation of why people prefer or don't prefer certain moral theories, but the explanation needn't reveal to us which moral theory is the correct one; for instance, people could be consequentialists in general because it's more convenient or because they were indoctrinated by society or religion to be thus, and not necessarily because consequentialism is the one and true moral theory.

    All in all, the scientific method, what it really is, is of little help in proving which moral theory is correct: the scientific method applied to morality may not lead to moral discoveries from which infallible moral laws can be extracted for practical use. Ergo, the one who employs the scientific method to morality is no better than one who's scientifically illiterate when it comes to making moral decisions.

    That said, I can understand why you think this way. Science is the poster boy of rationality and we're so mesmerized by the dazzling achievements it has made that we overlook the difference between science and rationality. In my humble opinion, science is just a subset of rationality and while we must be rational about everything, we needn't be scientific about everything. In my opinion then, what you really should be saying is that being rational increases the chances of making good decisions, including moral ones and not that being scientific does so.
  • Wolfman
    73
    p2 What is beneficial to a human is that which is of no harm to mind and body.
    p3 Good moral choices are those that do not harm the mind and body of self and/or others.
    Christoffer

    In another possible world people play Tetris all day. They are otherwise physically and psychologically healthy people, but they make the decision to play Tetris, in a room by themselves, for 10 hours per day. Now, this decision doesn't seem to harm their mind or body, nor the minds or bodies of anyone else; however, making the decision to play Tetris all day doesn't seem like the sort of decision we would normally categorize as "moral" either. But by the lights of your own theory, we would have to do that. How would you account for that?
  • David Mo
    960
    Rather than discussing the details of the proposal, I would like to point out two difficulties of principle that apply to any attempt to solve moral problems scientifically. And this is true for Sam Harris and other illustrious predecessors such as Spinoza, the utilitarians or B. F. Skinner.

    1. A technical impossibility: human affairs are not predictable. You cannot objectively predict effects from causes as in physics. If that were the case it would be awful. Imagine such predictive tools in Hitler's hands. Human slavery would be warranted.
    2. There is no logical contradiction in preferring the fall of the whole world to my having a toothache. That is to say, you cannot deduce "ought" from "is". Unless you scientifically establish that the lowest good of the highest number is preferable to the highest good of the lowest number. And with what yardstick do you measure the greater or lesser good? The utilitarians have been trying to solve this question for centuries, without success so far.

    That is why I am afraid that in ethics we will always find approximate answers that will convince more or less good people.
  • Christoffer
    1.8k
    I am not sure why you're equating benefit and value in P1. Both "beneficial" and "valuable" are value judgements, and there doesn't seem to be any obvious reason to use one term or the other.

    Furthermore, what you mean by "humanity" remains vague. Is humanity the same as "all current humans"? When you write "valuable to humans" do you mean all humans or just some?

    In P2, it's questionable to define a benefit as the mere absence of harm, but it's not a logic problem. What is a logic problem is that P1 talks about benefits to humanity, and p2 about benefits to a single human. That gap is never bridged. It shows in your conclusion, which just makes one broad sweep across humans and humanity.

    P3 is of course extremely controversial, since it presupposes a specific subset of utilitarianism. That significantly limits the appeal of your argument.
    Echarmion

    As per the discussion that followed with DingoJones, I'm aware of the problems in my premises for this part of the argument, so I'm reworking this (I should maybe mark it).

    Again, I am confused by your usage of valuable and beneficial here. Since P1 already talks about what's valuable, it doesn't combine with P2, which defines value in terms of benefit. So the second half of P2 is redundant.Echarmion

    Agreed. I think the main culprit is the value-morality part of the argument, and reworking it would change the part that combines the two conclusions.

    I have to nitpick here: the scientific method works entirely based on evidence within human perception. It doesn't tell us anything about what's outside of it. The objects science deals with are the objects of perception. What the scientific method does is eliminate individual bias, which I assume is what you meant.Echarmion

    Scientific theories are still the best measure of truth that we have, and some of them bypass our perception through pure math. I guess if you apply Cartesian skepticism you could never know anything, but the theories we arrive at in science still have practical applications that further prove their validity outside of mere perception.

    But, as you say, my point here is how the scientific method eliminates the individual bias and it's important for epistemic responsibility.

    That's not a syllogism. Your conclusion is simply restating P2, so you can omit this entire segment in favor of just defining the term "scientific mind".Echarmion

    Makes sense, I might have been so focused on making the argument foolproof that it fooled itself :)

    While I understand what you want to say here, the premises just don't fit together well. For example P1 is talking only about what is less valuable and has a high probability of no benefit. It's all negative. Yet the conclusion talks about what has a high probability for a benefit, i.e. it talks about a positive. And p4 really doesn't add anything that isn't already stated by p3.Echarmion

    I think that the first premise needs rework since it's based on the flawed first part of the argument. So if I rework that I think it's gonna be more logical.

    Your conclusion is that knowing the facts is important to making moral judgement. That is certainly true. Unfortunately, it doesn't help much to know this if you are faced with a given moral choice.

    What you perhaps want to argue is that it's a moral duty to evaluate the facts as well as possible. But that argument would have to look much different.
    Echarmion

    I think you're right, and I might have bogged down the argument with parts that are unnecessary. The conclusion I'm arguing for is that using a scientific method of thinking about day to day moral choices is how you act morally good. Since morals shift and change, we either have to say that there are no morals and there's no point in discussing morals if there aren't any, or conclude that there are some basic things about which we think morally. If there are such things, what are they, and how do we act morally good according to them?

    So the argument needs to prove that we have an objective need for well-being and an objective need to avoid harm to body and mind. Then it needs to show how we can arrive at the most probable truth possible using methods found in the scientific method, and how this method can be applied to look for the best outcome in any given choice. The point being that borrowing the scientific method as a framework for how to think, and applying it to a set of basic objective human needs, is a moral strategy that we can consider morally good.

    We cannot define good morals by acts or consequences alone; only by maximizing our understanding of a complex moral issue and acting to the best of our ability using such a method can we maximize the probability of doing good, and therefore the act of doing this is what it means to act with good morals.
  • Christoffer
    1.8k
    And a handsome work it is, too! But I wonder: many of the legs holding up your argument are either themselves unsupported claims or categorical in tone when it seems they ought to be conditional. In terms of your conclusions it may not matter much. The question that resounds within, however, is of how much relative value a "scientific mind" is with respect to the enterprise of moral thinking. It's either of no part, some part, or the whole enchilada. If it's not the whole thing, then what are the other parts?tim wood

    Thank you :) And yes, it's obvious when reading comments that there's more work to be done on this.

    As I see it, we can definitely find some truths about well-being and harm for humans and humanity. But those truths are still shifting with the tides of new findings about human health. Still, the intention to draw upon the current knowledge of well-being and harm is the foundation, and the method of thinking, the mindset built upon that foundation, is the morally good act. So what I'm proposing is that you can never create moral axioms or calculate moral absolutes, but the intention and method behind choosing morally can be the defining form of what good morals are. The method of finding out how to act is what morality is about, not the act itself. So finding a method that excludes biases is finding a system for good morals that objectively works for that purpose. It's my hypothesis that such a system can exist and should be the foundation for how we act morally.
  • Christoffer
    1.8k
    In a sense I wasn't questioning whether they are morally good, but whether they have all the necessary kinds of skills and knowledge needed to make decisions.Coben

    No, they don't, but the method scientists use is focused on bypassing biases and perception to arrive at truths outside of the human mind. If such a process can be reframed as a mindset, a way to filter your choices so as to arrive at moral acts that exist outside of your biases, based on foundational definitions about harm and well-being, then the method itself is what defines good morals, not the act.

    My concern here is that the scientific mind tends to ignore things that are hard to track and measure. For example, let's take a societal issue like drug testing in the work place. Now a scientist can readily deal with the potential negative issue of false positives. This is fairly easy to measure. But the very hard to track effects of giving employers the right to demand urine from its employees or teachers/administrators to demand that from students, also, may be very significant, over the long term and in subtle but important ways, is often, in my experience, ignored by the scientific mind. And I am thinking of that type of mind in general, not just scientists, including non-scientists I encounter in forums like this. That a lot of less easy to measure effects for example tend to be minimized or ignored.

    A full range mind uses a number of heuristics, epistemologies and methods. Often scientific minds tend to not notice how they also use intuition for example. But it is true they do try to dampen this set of skills. And this means that they go against the development of the most advanced minds in nature, human minds, which have developed, in part because we are social mammals, to use a diverse set of heuristics and approaches. In my experience the scientific minds tend to dismiss a lot of things that are nevertheless very important and have trouble recognizing their own paradigmatic biases.

    This of course is extremely hard to prove. But it is what I meant.

    A scientific mind, a good one, is good at science. Deciding how people should interact, say, or how countries should be run, or how children should be raised require, to me at the very least also skills that are not related to performing empirical research, designing test protocols, isolating factors, coming up with promising lines of research and being extremely well organized when you want to be. Those are great qualities, but I think good morals or patterns of relations need a bunch of other skills and ones that the scientist's set of skills can even dampen. Though of course science can contribute a lot to generating knowledge for all minds to weigh when deciding. And above I did describe the scientific mind as if it was working as a scientist. But that's what a scientific mind is aimed at even if it is working elsewhere since that is what a scientific mind is meant to be good at.
    Coben

    I think there's a fundamental misinterpretation of how I use the idea of a scientific mind as a method. Maybe it's closer to traditional rational inductive methods. The idea isn't to science the hell out of day to day moral choices, but to have the scientific method in mind as a guide for how to arrive at moral choices.

    Meaning, if I have a moral problem to solve, I need to factor in "data" and think about the moral problem with my biases in mind. Can I verify my hypothetical act? Does it hold up against falsification (is it the act that arrives at the best conclusion in terms of well-being and reduction of harm)? Is my hypothetical choice open to replication (can it be universal in other situations, or is it only a gain for me)? And can I, through this thinking, predict the outcome (even past the obvious consequences)? As I filter my hypothetical act through the cornerstones of the scientific method, does it hold up as the best choice on the foundation of well-being and minimizing harm for me, another individual and humanity as a whole? The best choice does not mean the act is objectively good, but the process of arriving at that conclusion is objectively good, since it's the limit of how well humans can arrive at truths outside of their own minds. It's a way of thinking that maximizes our ability to rationally find an answer to a moral question, and by doing it, we act with good morals regardless of the consequences, as we cannot factor in things that haven't happened yet, only what we know at the time of calculating the choice. To calculate the best moral choice is the good moral act, not the calculated act itself.
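
    As a very rough way of picturing that filtering, here is a small sketch (my own toy construction, not part of the argument itself) that treats the four cornerstones as a checklist applied to a candidate act; an act is only kept if it survives every question. The example act and the yes/no answers are placeholders for the actual reasoning a person would have to do.

    # Sketch: the four cornerstones as a checklist for a candidate act.
    # The questions stand in for real reasoning; the answers are placeholders.

    from dataclasses import dataclass

    @dataclass
    class CandidateAct:
        description: str
        verified: bool                 # can I support the act with facts about the situation?
        survives_falsification: bool   # does it still look best after trying to refute it?
        replicable: bool               # would the same choice hold up in similar situations?
        predictable: bool              # can I foresee consequences beyond the obvious ones?

    def passes_scientific_mindset(act: CandidateAct) -> bool:
        """Keep an act only if it survives all four questions."""
        return all([act.verified, act.survives_falsification,
                    act.replicable, act.predictable])

    act = CandidateAct("call the police instead of confronting the killer",
                       verified=True, survives_falsification=True,
                       replicable=True, predictable=True)
    print(passes_scientific_mindset(act))  # True for this made-up example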
  • Christoffer
    1.8k
    ↪Christoffer The scientific method consists of the following:

    1. collecting unbiased data
    2. analyzing the data objectively to look for patterns
    3. formulating a hypothesis to explain observed patterns

    How exactly do these 3 steps relate to ethics?

    What would qualify as unbiased data in ethics? Knowing how people will think/act given a set of ethical situations.

    What is meant by objective analysis of data and what constitutes a pattern in the ethical domain? Being logical should make us objective enough. Patterns will most likely appear in the form of tendencies in people's thoughts/actions - certain thoughts/actions will be preferred over others. What if there are no discernible patterns in the data?

    What does it mean to formulate a hypothesis that explains observed patterns? The patterns we see in the ethical behavior of people may point to which, if any, moral theory people subscribe to - are people in general consequentialists? Do they adhere to deontology? Both? Neither? Virtue ethicists? All?

    Suppose we discover people are generally consequentialists; can the scientific method prove that consequentialism is the correct moral theory? The bottomline is that the scientific method applied to moral theory only explains people's behavior - are they consequentialists? do they practice deontological ethics? and so forth.

    In light of this knowledge (moral behavioral patterns) we maybe able to come up with an explanation why people prefer and don't prefer certain moral theories but the explanation needn't reveal to us which moral theory is the correct one; for instance people could be consequentialists in general because it's more convenient or was indoctrinated by society or religion to be thus and not necessarily because consequentialism is the one and true moral theory.

    All in all, the scientific method, what it really is, is of little help in proving which moral theory is correct: the scientific method applied to morality may not lead to moral discoveries from which infallible moral laws can be extracted for practical use. Ergo, the one who employs the scientific method to morality is no better than one who's scientifically illiterate when it comes to making moral decisions.

    That said, I can understand why you think this way. Science is the poster boy of rationality and we're so mesmerized by the dazzling achievements it has made that we overlook the difference between science and rationality. In my humble opinion, science is just a subset of rationality and while we must be rational about everything, we needn't be scientific about everything. In my opinion then, what you really should be saying is that being rational increases the chances of making good decisions, including moral ones and not that being scientific does so.
    TheMadFool

    I'm not sure you have read my reasoning in this thread carefully; there are a lot of things mentioned in the responses to others that further explain my point. I argue that the scientific mind is about how a scientist tackles the scientific method, and that borrowing this mindset for how we tackle moral questions, on a foundation of definitions around well-being and harm, creates an act of unbiased epistemic responsibility in thinking, and that this mindset is how we act morally good, not the act itself. To rationally reason past our own biases and arrive at the best moral choices is how we act with good morals; the act and its consequences cannot be considered objectively good or bad, only how we arrive at the choice we make.

    What you are describing is closer to using the scientific method to arrive at the best moral system, which isn't what my argument or conclusion is about.

    But your last sentence touches upon what I'm saying: that being rational, through using a method of thinking, is how we define good morals. What I'm defining the method as is borrowing how scientists tackle their questions in order to tackle the moral questions you encounter. Being moral is to detach yourself, examine the choices through the scrutiny of unbiased reasoning and arrive at a choice. By doing so you act with good morals, however immoral the chosen act seems to be at a surface level.

    The fundamental question in ethics is: how do we choose/act good or bad? My answer is, you don't; you calculate the probability of a good choice/act, and that calculation is what is morally good, not the resulting act itself, since trying to answer what objectively good morals are is impossible.