Well, I'm not sure how that would change the fact that there are exceptions to your claim that haven't been accounted for. What exactly do you mean by objectively valuable? — DingoJones
By objectively valuable I mean things that have to do not with preferences but with necessities: the value of things that reduce harm and suffering while increasing well-being. A smoker values smoking, but it isn't objectively valuable to that person, since the smoking harms him. The objectively valuable thing is to stop smoking.
p1 What is objectively valuable to humans is that which is beneficial to humanity.
The value of things that do no harm and that increase well-being is that which is beneficial to humanity. What is good for one is good for all.
Or maybe this premise needs to be phrased differently? Maybe the intent of the premise is weakened by its rhetoric? What might be a better premise to express that what is good for one is good for all?
Do I need to rephrase the entire argument for value-based morality?
What if there are two harms, smoking and stress? The smoking relieves the stress but harms the body, but so would the stress. In that case, the smoking is harmful to the body, but it's also beneficial to the human. — DingoJones
In this example, I would argue that the long term is an important factor as well: the smoking relieves stress, but the harm isn't visible until later. If smoking caused instant, direct harm, no one would use it to relieve stress; if someone got cancer after a single cigarette smoked to relieve stress, no one would smoke for that purpose. Human ignorance is the only thing arguing that a cigarette is good for the smoker. And what about yoga? Yoga has scientific support for relieving stress, so why choose a cigarette to battle stress when yoga has no such side effects? By breaking acts down we can find, to the best of our ability and time, which is most beneficial to humanity.
From here, one can go on to object that there are some things people feel are beneficial to them even when those things harm them. Meaning, a smoker just likes to inhale smoke and would gladly trade a few years of their life to reap that benefit rather than do yoga. But that would not be beneficial to others: the people who have to deal with the consequences of this person's declining health or death, or the people affected by secondhand smoke.
So the two are related. In terms of a single person, what is beneficial to them is not always something they agree with, but that still doesn't change the fact that "beneficial" in an objective sense needs to be defined as doing no harm to body or mind. It's beneficial to be in good health, and doing something that has the consequence of putting you in bad health is not beneficial to you.
p2 What is beneficial to a human is that which is of no harm to mind and body.
The counter-argument has to prove that there are beneficial things that do harm to the mind and/or body. What things are good for us, short and long term, that are harmful to the mind and/or body?
On a macro scale, what about decisions that benefit more people than they harm? Wouldn't any kind of utilitarian calculation be an exception to your rule? — DingoJones
This one is trickier, but I don't think it's really an exception. It can be treated as an extension of the argument. The final conclusion I'm trying to build toward has to do with using a scientific mindset to calculate the best moral choice, and with the idea that the intention to use that method will lead to the most probably good moral choice. So in terms of utilitarianism, if you calculate case by case that killing one to save ten actually has merit, then it is the good moral choice to make. The method is supposed to bypass both absolutism and utilitarianism, treating each as valid or invalid depending on the individual case. It's a form of epistemic responsibility: not to be enslaved to broad moral concepts and/or teachings, but to adopt a scientific-method mindset of calculating each situation based on basic objective properties of benefit and harm. I guess it's a form of nonconsequentialism?
It's more about the probability of good or bad rather than the objectively good or bad. You calculate the probability of each outcome and choose the one with the highest probability of good; in calculating that and choosing that, you are acting with good morals.
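One rough way to state that rule more formally (my own sketch, assuming only that outcome probabilities can be estimated and that benefit minus harm can be scored at all): choose the act $a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, B(o)$, where $P(o \mid a)$ is the estimated probability of outcome $o$ given act $a$ and $B(o)$ is that outcome's benefit-minus-harm score. The point is not the exact numbers but the intention to estimate and compare at all.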
Maybe this moral theory needs another name. Something like Probabilitarianism (though that name already belongs to another field of philosophy), or Moral Probabilitarianism?