• _db
    3.6k
    Moral philosophy is filled to the brim with talk of deontic principles: duties, responsibilities, obligations, requirements, imperatives. If you have a special deontic obligation, you are expected to act so as to fulfill it. If you do, you are praised, and if you do not, you are condemned. But where do these deontic obligations come from?

    My claim is that the only situation in which a second-order (explained later) moral agent (an individual capable of making rational decisions) has deontic responsibilities is when they voluntarily enter a mutual agreement based on reasonable terms and conditions. Those familiar with social contract theory will see the similarities.

    For example, say I promise my friend I will help her with her math homework later this evening. She agrees and plans her schedule around the event. We are now committed to meeting up and working on math homework together. I have a special duty to show up prepared and ready to work on math homework, as does she.

    But say the country I live in (without any realistic alternative) decides to go to war with another country, for a reason I find to be illegitimate. Do I have a responsibility to support my country and even fight and die for it? It doesn’t seem so. This is an instance of coercion - the ascription of special obligations without any prior consent.

    Many consequentialists like to argue that we have some special responsibility to reduce suffering and promote happiness. I myself am a consequentialist of sorts (but actually more of an axiological welfarist), and I’ll agree that we have genuinely good reasons to consider the value of maximizing sentient welfare. What I will not concede is that we have a special deontic obligation to do so.

    In other words, I claim that there is nothing wrong with maximizing sentient welfare, but neither is there anything (technically) wrong with not doing so. To be more precise: sentient welfare ought to be maximized, but not necessarily by us. I see no incoherence in stating that something ought to happen, even if we aren’t obligated to make it happen. Something can be a problem, but it doesn’t have to be our problem. We have no responsibility to clean things up - janitorial service is optional.

    I believe a couple of piggy-backing reasons justify this view:

    The Argument from Technicality (or the pointing-out of Affirmative Hypocrisy)

    1.) Sentient existence is, in the words of Hannah Arendt, both mortal and “natal” - that is to say, sentients have both mortality and “natality” in the sense that they exist in virtue of a contingent event of creation (birth) and are destined to be annihilated (death).

    2.) Birth is manipulatively asymmetrical: those affected cannot be asked for approval before being born. The very capability of giving consent is dependent upon an event that transpired without consent.

    3.) Special deontic obligations are legitimate only when grounded in agreement.

    4.) Sentient life is filled with eventual promises and agreements that inevitably lead to special deontic obligations.

    5.) However, although those alive right now may consent to the terms and conditions of intra-worldly agreements, no sentient being had the opportunity to consent to the terms and conditions of having terms and conditions in the first place.

    6.) Therefore, no sentient agent has any inherent special deontic obligations. All special deontic responsibilities are second-order and contingent.

    Summed up: unless you are eternal, you cannot possibly have any special, intrinsic deontic responsibilities.


    The Argument from Legitimate Rational Self-Preservation (or the Negation of Affirmative Aggressiveness)

    1.) Again referencing Arendt, sentient existence is mortal. We also have the (unfortunate) capability to experience suffering. And although it’s sometimes easy to talk about suffering in an abstract and impersonal manner, Arendt tells us that suffering is perhaps one of the most personal of all experiences. The upshot is that continued existence is a legitimate threat to the welfare of the sentient being - paradoxically, we can either choose to die right now, or choose to persist and risk dying anyway (in perhaps very painful and traumatic ways).

    2.) The future, despite what we like to tell ourselves, is never completely known. There is always the possibility of unknown, unexpected contingencies arising.

    3.) An inherent special deontic obligation prescribes a path of action for an agent who did not consent to the terms and conditions of the obligation - a path that may lead the agent into unknown terrain and has a possibility of severely harming them.

    4.) No person can be expected to sacrifice their own life and ultimate well-being without any approval on their part. Alternatively, no person can be expected to lower their own well-being below that of those who they are assisting.

    5.) Therefore, no agent can legitimately be expected to comply with apparently-inherent special deontic responsibilities.

    Take, for example, the drowning child thought experiment. Although at first glance it seems as though the man standing by the water has a moral obligation to get his shoes and pants wet and save the child, there are unknown contingencies at play. Perhaps there is a crocodile in the lake, and if the man enters the lake, he’ll get eaten alive by the beast. Now it seems ambiguous whether or not the man has any special obligation. Not saving the child because the man might ruin his shoes is a poor reason, but not saving the child because the man might himself die in agony does seem like a pretty fair and good reason. Thus, simply being in the right place at the right time doesn’t seem to be enough to entail special obligations. Since perfect, omniscient knowledge of all possible moves is impossible, a rational agent cannot be expected to obey apparently-inherent special deontic responsibilities.

    To deny this is strikingly similar to the optimist demanding that all dissenting pessimists “suck it up”. Indeed, when one is confronted with the objection that you simply have to imagine how horrible someone else’s suffering is to realize there’s an obligation to assist, one can reply that one simply has to imagine oneself undergoing that very suffering. If it’s happening to someone else, it can happen to you.

    The failure to recognize a fundamental breach of consent in the creation of a life, whether paired with the inherent risk of personal persistence or taken on its own, leads to the uncomfortable conclusion that life, or Being for that matter, is not adequate for morality. The paradox is that morality arises within life, but, when taken to its radical conclusion, ends up condemning its own origins. This is, more or less, the kernel of the meta-ethics of Julio Cabrera in his A Critique of Affirmative Morality. Life, or perhaps even Being, produces ideals (possibilities) that are inevitably disappointed by imperfect and degraded actuality.

    Affirmative morality, for Cabrera (and henceforth myself as well), is that sort of system that is pseudo-legitimate: it appears to be justified, but is actually very shallow in its roots. The value of life (as a whole) is not questioned. The inevitable inconsistencies and contradictions are swept away as unimportant contingencies: accidents that apparently don’t represent life as a whole, like the awkward stepson at a family reunion. As such, affirmative morality is aggressive and hypocritical. The radical questioning brought up by people like Cabrera is seen as unimportant - the show must go on, for whatever reason.

    A first-order moral agent does have special, intrinsic deontic responsibilities - but none of us qualify as first-order moral agents, and neither do we live in a world in which first-order morality is even possible. Second-order morality is all that can be fruitfully pursued, and because of this, intrinsic deontic responsibilities must be jettisoned.
  • Thorongil
    3.2k
    I claim that there is nothing wrong with maximizing sentient welfare, but neither is there anything (technically) wrong with not doing so.

    Take, for example, the drowning child thought experiment. Although at first glance it seems as though the man standing by the water has a moral obligation to get his shoes and pants wet and save the child, there are unknown contingencies at play. Perhaps there is a crocodile in the lake, and if the man enters the lake, he’ll get eaten alive by the beast. Now it seems ambiguous whether or not the man has any special obligation. Not saving the child because the man might ruin his shoes is a poor reason, but not saving the child because the man might himself die in agony does seem like a pretty fair and good reason. Thus, simply being in the right place at the right time doesn’t seem to be enough to entail special obligations. Since perfect, omniscient knowledge of all possible moves is impossible, a rational agent cannot be expected to obey apparently-inherent special deontic responsibilities.
    darthbarracuda

    This sounds like a development in your thought from the last major conversation we had, one I cannot fail to notice is in the direction of what I argued. :P
  • _db
    3.6k
    Perhaps a similar conclusion, but for different reasons I believe.
  • Moliere
    4k
    I'm not familiar with the distinction between first- and second- order moral agents.
  • _db
    3.6k
    This is understandable, as it's not really a "thing" in moral philosophy. A first-order moral agent can act in first-order moral ways. A second-order moral agent can recognize first-order morality, but can only act in a second-order, or bastardized first-order, moral way.
  • Moliere
    4k
    What is it to act in a first-order moral way, and how does that differ from the second-order moral way?
  • _db
    3.6k
    Second-order morality takes its ground to be unquestionably justified, and as such suffers from inconsistencies, hypocrisy, aggressiveness, and tendencies to compromise.
  • Moliere
    4k
    OK, I think I have a handle on your distinction now. It didn't pop out for me in reading your OP, so thanks.


    I think your first argument is question begging. P3 basically states the conclusion you said you wanted to defend in your 2nd paragraph.

    Special deontic obligations are legitimate only when grounded in agreement.
    darthbarracuda

    Which is what you define second-order moral agents as being able to do -- and they are basically defined as second-order moral agents by this, no? At least, as you clarify later, it seemed to me that the contractual nature of morality is derived from the fact that it is inconsistent when taken unquestionably -- which we must necessarily do, hence why we are second-order moral agents.

    But do you see the circle in the reasoning there?



    The second argument is more interesting, in my opinion. Since we are ignorant of the future, and inherent special deontic responsibilities are the sorts of responsibilities one must follow regardless of what the future may entail, and because it is unreasonable to expect a person to act against their well-being, it is unreasonable to expect people to act on inherent special deontic responsibilities, as doing so may lead them -- due to their ignorance of the future -- to act against their well-being (with specific reference to acts which risk life, it seems to me you are saying). (I'm just restating it to make sure that I have you right.)

    But just because it is unreasonable to expect people to act on such and such, that doesn't mean they don't have such and such. It may be unreasonable to expect everyone to follow the law of their land, but they have the legal obligation to follow the law, or be punished, all the same. We know people will break the law, of course. Similarly, the deontologist could argue that we know people will be immoral, but that does not then mean that they don't have these responsibilities, even though it is unreasonable to expect people to follow them.
  • Numi Who
    19


    ONLY APPLICABLE TO THE UNENLIGHTENED

    Your entire body of thought (terms included) is only applicable to the unenlightened - that is, to those who have not yet identified an Objective Value, which humanity has not sufficiently done yet (though I've sufficiently identified three).

    In a subjective haze, your body of thought here can be given consideration. With an Objective Value (a universal value, a core value, or an ultimate value in life), your body of thought (and all of your terms) is rendered trivial, misguided, blind, futile, and a big waste of time and energy.

    Consider the Ultimate Value of Life - Higher Consciousness (I'll consider it for you): Given the overwhelming evidence, higher consciousness (that capable of extended reasoning and proactive action based solely on that, which humans currently embody) 'evolved' from mere consciousness (the level of current animals), which 'evolved' from non-conscious life, such as vegetation and microbes (life with no central brain). So now we have the three Objective Values of Life - Higher Consciousness (in whatever species attains it), mere Consciousness (the level of animals - which can attain Higher Consciousness), and ditto for currently Non-Conscious Life (vegetation and microbes, from which we theoretically started).

    Now, thinking further (which I have done), what comes with a value? A 'goal' comes with a value - i.e. securing that value. Here, the 'goal' (the Ultimate Goal of Life) is to secure higher consciousness against a harsh and deadly universe. Note that from this goal you can now clearly distinguish good from evil (these being goal-driven), and with this ability you can now build worthwhile individual lives and relevant civilizations (finally).

    Now you can go back and consider all that you have presented with the new ability of giving it proper value - i.e. what value does it all have with respect to achieving the Ultimate Goal in Life - securing higher consciousness in a harsh and deadly universe.

    Just to note, another issue is "Why bother?", which happens to be the Greatest of the Great Questions of Life (and one which science will never address - hence the relevance of philosophy, and Stephen Hawking's error in stating that 'Philosophy is Dead' - it is just in the toilet right now).

    So have I arrived at the Answer to the Greatest of the Great Questions of Life, as well as the Answers to all the Lesser Great Questions of Life (such as "Why are we here?" and "What is our purpose in life?" - lame in comparison to the Greatest Question)? Yes, I have. I only have room for the Answer to the Greatest of the Great Questions of Life, which is (should I give it away for free... hmmm... why not - it needs to be disseminated...) "Because consciousness is a good thing" (consider the alternative). Note that higher consciousness takes priority, the assumptions being that it increases the odds of broader survival in a deadly universe, and that there is no guarantee, in a chaotic universe, that any other species on earth will attain it.
  • TimeLine
    2.7k
    But say the country I live in (without any realistic alternative) decides to go to war with another country, for a reason I find to be illegitimate. Do I have a responsibility to support my country and even fight and die for it? It doesn’t seem so. This is an instance of coercion - the ascription of special obligations without any prior consent.

    Many consequentialists like to argue that we have some special responsibility to reduce suffering and promote happiness. I myself am a consequentialist of sorts (but actually more of an axiological welfarist), and I’ll agree that we have genuinely good reasons to consider the value of maximizing sentient welfare. What I will not concede is that we have a special deontic obligation to do so.
    darthbarracuda

    I have my own concessions to axiological welfarism, and though I appreciate that our happiness is exclusive, I fear it may contain characterisations of benefitism rather than considerations as to the greatest distribution of well-being; the latter, in addition, is just as convoluted and needs to capture descriptively how it represents and measures this; economically this is feasible, but philosophically it is rather loose.

    If your country decides to go to war to protect its internal state of affairs against outside aggression, then you are in a position of responsibility, despite the misery it may inflict on your own happiness, since the risk or outcome of failing may render a permanency - viz., domestic instability - that would ultimately impact your capacity to achieve happiness. Otherwise you would need to revoke your citizenship and go live on a mountain by yourself like Jean-Baptiste Grenouille, maybe grow carrots or poke sticks in your face. But if it is an act of aggression on the part of your country toward another, one that would create the same risks and instability to this greater well-being even though you've signed the social contract, then you should not.

    Ultimately, maximising sentient welfare may require what would appear to be a disadvantage to one's own welfare and happiness - or at least the path towards reaching it may appear so - but the aim itself is still the same.