Comments

  • Pragmatic Justification question??
    Yeah, thank you for that. I admit my phrasing is quite confusing, and I think it's a product of my being confused. I'm trying to understand something specific, but even the author isn't very clear. Your answer was helpful, though.
    I think what I'm trying to ask is whether there is an indispensability criterion built into pragmatic justification. In other words, does a theory need to be the only game in town in order to be pragmatically justified? The author I'm reading seems to be saying "yes"...but my intuitions just don't agree with that. I think (though I don't have a good argument for why) it is possible for two competing theories to both be pragmatically justified.
  • The Argument from the Scientific Test of Reality
    I'd take the moral naturalist route and argue that we are justified in our scientific postulations insofar as they play an indispensable explanatory role in our scientific theorizing. Scientific and moral theorizing, though different, are sufficiently analogous in that if we are justified in our scientific postulations, then we are likewise justified in our moral postulations...but again, only insofar as those moral postulations are ineliminable in explaining moral practice, experience, etc.


    Regarding your argument from scientific testability, it raises the question of why we ought to limit our ontology and only admit entities describable in terms of the natural sciences. What motivates one to limit their ontology in this way? Is it a fear of metaphysical profligacy? If so, it's unclear to me how adopting a sort of property dualism in order to admit moral properties into our ontology is any more metaphysically profligate than the postulations in the special sciences that are not amenable to strict reductions.
  • What are some utilitarianistic analysis with regard to morality of pet keeping?
    Just for clarification, the above list is what a classical utilitarian believes. Other utilitarians have rejected one or more of those claims.
  • What are some utilitarianistic analysis with regard to morality of pet keeping?
    If the dog can't help it, do you think we would have a moral obligation? (according to utilitarianism, of course). It would seem yes. Hurricanes "can't help it," but if we had the technological capacity to steer hurricanes away from cities, I think utilitarianism would say we have an obligation to do so.
    Maybe one way a utilitarian could disagree would be by focusing on the practical results. Maybe a utilitarian could argue that the logistics of rounding up all carnivores and eliminating them would produce more suffering, especially when it comes to biodiversity and ecological stability. But as for your own, personal decision to kill your dog...I need to think about it more.
  • What are some utilitarianistic analysis with regard to morality of pet keeping?
    If I say he has the right to eat meat because he can only survive on it, I'm appealing to the "Intrinsic Rights" school of morality which is incompatible with utilitarianism.

    I'd also be very careful here. Classic utilitarianism is a view about alternative choices, about what makes one action right over another. It does not necessarily recommend a method of arriving at the right action. A more sophisticated utilitarian may consistently believe that one ought to adopt an "intrinsic rights" perspective because doing so will produce the greatest amount of good (however it's defined).
  • What are some utilitarianistic analysis with regard to morality of pet keeping?
    I think most utilitarians abide by "ought implies can." If you cannot not eat meat, then the claim that you ought not to cannot govern you. You would have no such obligation. In your dog's case, you would not have an obligation to prevent him from eating meat either, since what you ought to do for your dog is derived from what he is capable (and not capable) of doing. Perhaps you have other obligations about where you get the meat, etc., but a carnivore's act of eating meat (without a viable alternative) is not one open to moral criticism.
  • Personal vs Doxastic Justification
    I'm not entirely sure how this lends insight into the question.
  • Personal vs Doxastic Justification
    can you unpack that more please (if you have time of course).
  • Philosophy is ultimately about our preferences
    Preferences would be irrational as they aren't reasoned positions.
    I don't see how that follows. I'm asking why you think preferences are irrational if they aren't "reasoned." Are all non-reasoned things irrational? Is my TV set irrational? My taste for Indian food? These are non-rational...not irrational. Also, rationality and reasons can split apart, as my examples illustrated. Furthermore, rationality is complex; it is an achievement of sorts. There are no default rational items. In order to arrive at "rational" beliefs, we will need justifiers. Justifiers themselves need not be justified. Insofar as our preferences can act as justifiers (I can think of several accessibility internalists who think this), they would be part of the process of rational belief formation. That they themselves are non-rational elements of one's justifying base does not make them irrational.
  • Philosophy is ultimately about our preferences
    What if you were an internalist about justification, and maintained that non-belief states, themselves not justified, can act as justifiers?
  • Would Plato have approved...?
    Would Plato have approved of the current US administration?
    probably not.
  • Why be rational?
    Consciousness does not drive the mind, it follows along with a notebook and writes things down.

    This is going to be a really random question. But are you Andy Clark? lol I think it's unlikely, as you said you were a civil engineer. But I'm still asking haha.
  • Why be rational?
    In the coherence sense, one is rational if one acts according to reasons that, regardless of their reality, cohere with the rest of a person's beliefs and desires.
    I think this is a plausible analogy. There are many senses of the word "reason." An exhaustive taxonomic breakdown may be too much to ask for, but I think what you point out is relevant to the question. Sometimes we ask for the reason why someone did something, and all we are looking for is what motivated them. Other times we are looking for a justification. When I think of normative reasons, it's the justificatory role that is primary. So in my example of a person (call him Joe) mistakenly believing a murderer was in his house, one could respond that Joe was responding to "reasons," and what we mean by that is that Joe was responding to what Derek Parfit called apparent reasons. Still, this would be an example in which rationality is not tied to actual normative, justifying reasons.


    The rules are set up, and the agent follows them correctly, but they don't really mean anything.
    This has always bothered me, but I can't quite seem to figure out why I find it so bothersome. Do our intuitions demand that rationality be a thick concept with a non-arbitrary connection to the world? Because my intuitions about that annoyingly oscillate back and forth. I guess I fear that if we make the criteria for rationality external to mental processes, then the criteria themselves become arbitrary. How do we come to know which external criteria actually count as genuine requirements of rationality? To avoid that problem, I adopt a more limiting, less thick conception of rationality, relegating it to consistency amongst beliefs, conative states, etc.

    If morality is aligned with rationality, so that what is rational is also what is moral, then the question "why be moral" is eclipsed by the question "why be rational?"

    I think what matters is how exactly morality is aligned with rationality. If we think of morality as necessarily reason-generating (if something is morally wrong, then there is a reason not to do it), and if rationality is tracking actual reasons, then it would seem one must be moral to be rational. I personally don't think this is plausible, as it is way too demanding (unless of course we are ok with admitting we are all irrational and rationality is rarely, if ever, achieved). As a result, I reject the tie between rationality and reasons to avoid that issue. But even if true, I don't think this would replace the OP's question, as one can still ask for actual reasons to be rational. I also don't think this question can be jettisoned as a failure to grasp the concepts involved.
  • Philosophy is ultimately about our preferences
    we go the way we are drawn
    this is actually quite beautiful. Is this from somewhere or did you make this up? Either way, I like it.
  • Why be rational?
    What other definition of rationality is there apart from having reasons?
    One that doesn't have to do with reasons. A rather intuitive one is the view that rationality is a property of persons; it supervenes on the mental. If two individuals in different universes are mentally equivalent, then they both have the same degree of rationality. Under the assumption we can be mistaken about reasons, if I mistakenly think a murderer is in my house, truly genuinely believe it, and I do not wish to die, then it is rational for me to try to escape whether or not I actually have a reason to escape. I can lack a reason to act and be rationally permitted to act. It is also rather natural for us to say that I would be irrational if I did not intend to escape given my beliefs and desires, because rationality, according to this conception, is more a matter of consistency between our beliefs, intended goals, etc., and not actual reasons.
  • Why be rational?
    Are you asking for a rational justification for being rational? Isn't that circular?
    No. I'm asking for a reason to be rational. I'm wondering if people think of rationality as normative. It would be circular if one adopted a reason-loaded conception of rationality, which I'm leaving open-ended.
  • Philosophy is ultimately about our preferences
    Ah thank you for clarifying. Now, let us assume you're right, that which argument we adopt is based on our preferences. Why would that not be rational?
  • Philosophy is ultimately about our preferences
    A lot has been said, but it seems the underlying assumptions in your post haven’t been quite extracted yet.

    The interesting thing is that any two conflicting stances are reasoned positions.
    I’m not sure how this is the case. Two conflicting stances can both be reasoned positions, but the mere fact that they are conflicting does not mean they are reasoned. Perhaps I have misunderstood you, and you are describing a hypothetical scenario in which there are two reasoned conflicting positions, both logically adequate.
    Therefore the difference between thesis and antithesis must lie with the axioms of the arguments offered in support of them
    This language is rather confusing, because reasons are those things we cite when we try to support a position. If axioms support arguments, then axioms could act as reasons. From there, your overall concern is rendered moot, because we could maintain a position by citing said axiomatic reasons. When you say:

    Differences in choice of axioms must originate with our preferences (likes and dislikes).

    If axioms are reasons, then citing them can be “rational.”

    philosophy is not so much about rationality as it is about our personal preferences.
    Are you saying if it is rational for me to do X, then there must necessarily be a reason for me to X? Or are you saying if there is no reason for me to X, then doing X cannot be rational? I’d be very careful about employing a reason-loaded conception of rationality.
    It seems you think reasons are required for rationality when you say:
    Axioms, by definition, have no supporting reasons. So, can't be rational
    But I'd examine this claim if I were you. If I smell smoke upon waking, and believe, genuinely, that my house is on fire, and I desire to live, is it not rational of me to save myself? Now suppose there is no fire. There is no reason for me to flee. Am I rendered irrational because I have no reason to flee? You could say my own beliefs and desires give me reason to act, but then you'd be admitting our preferences/mental states can act as reasons, which is contrary to your point.

    Also, why is something irrational if it is chosen by our preferences?
  • Philosophy is ultimately about our preferences
    A lot has been said, but it seems the underlying assumptions in your post haven’t been quite extracted yet.

    The interesting thing is that any two conflicting stances are reasoned positions.
    I’m not sure how this is the case. Two conflicting stances can both be reasoned positions, but the mere fact that they are conflicting does not mean they are reasoned. Perhaps I have misunderstood you, and you are describing a hypothetical scenario in which there are two reasoned conflicting positions, both logically adequate.
    Therefore the difference between thesis and antithesis must lie with the axioms of the arguments offered in support of them
    This language is rather confusing, because we cite reasons within arguments, not axioms. Axioms are the foundations of certain mathematical or logical universes; they don't show up as premises when we argue. It is possible you are using "axiom" more casually to mean a starting point in an argument, e.g. its first premise. If that's what you mean, then axioms could act as reasons, because we cite reasons in argumentation; axioms would just be those starting reasons we cite. But this way of interpreting things would render your question moot, as our choice of positions would be determined by reasons/axioms.

    Differences in choice of axioms must originate with our preferences (likes and dislikes).

    Do we choose axioms? We certainly choose arguments. Perhaps that is what you were trying to say.

    philosophy is not so much about rationality as it is about our personal preferences.
    Are you saying if it is rational for me to do X, then there must necessarily be a reason for me to X? Or are you saying if there is no reason for me to X, then doing X cannot be rational? I’d be very careful about employing a reason-loaded conception of rationality.

    Also, is something irrational if it is chosen by our preferences?
  • Why be moral?
    I know the question was asked a year ago, but I'll give my two cents.

    In this post I'm assuming morality supplies us with reasons to act or not act (if something is morally wrong, then you have a reason not to do it). If reasons are to count as reasons for an action and play a role in justifying behavior, then it is the appeal to those reasons (not our beliefs in those reasons) that does the justifying. I say "justifying" because I'm assuming that's what you're after in your question of "WHY be moral?"

    We may be ignorant of those reasons, and we may be wrong about what we ultimately claim we ought to do, all things considered. World 2 and World 3 differ in many ways; I'm not sure why you would think they are the same. Unless we adopt extreme skepticism about our epistemic access to moral truths, our beliefs in World 3 would be aligned with the fact that killing babies is morally permissible. (I'm also assuming you mean "morally permissible" or "supererogatory," and not "morally required," when you write "it is moral to kill babies.") Given that a large chunk of the world believes it is morally permissible to kill babies, World 3 would have a lot more dead babies in it.

    "Or would you convert to baby killing if you'd found it to be moral?"

    Well... for the sake of the thought experiment, I'd imagine people would not find it morally problematic and so wouldn't really make concerted efforts to stop it from happening.

    "In the unlikely case you'd say yes: then it's your belief that matters, not the fact-of-the-matter"

    I'm not sure how this follows at all. It is my discovery of the fact that causes my belief. Without the discovery I would not believe it, or at least I would not have a reason to believe it. If rational people respond to reasons, morality gives us justifying reasons to act or not act, and it is possible to know the fact that it is morally permissible to kill babies, then there really isn't any moral reason for me to refrain from killing babies.

    If all that mattered were beliefs, then none of our actions would be justified. Why? Because the fact that I have a belief does not justify the belief. It's true that having a belief, plus a desire or other conative state perhaps, may give a motivating explanation of my behavior. My mental states in World 2 and World 3 may also be identical: I may believe in World 2 that killing babies is moral, and I may believe in World 3 that killing babies is moral. My beliefs and conative states would explain my behavior in both worlds. They may be "reasons" (as in motivating reasons) for why I act a certain way, but my having beliefs or certain pro-attitudes does not justify anything.

    "Would you accept a morality that stands in stark opposition to your personal values? What would it mean for you if you'd found this to be the case?"

    This seems rather contrived. If I were in World 3 and didn't believe the truth that killing babies is moral, then of course I wouldn't accept it. If I came to discover the truth of it, then how would that be in opposition to my personal values? Categorical imperatives (if they exist) are principles that we have most reason to adopt. If I discovered I had an all-things-considered, sufficient reason to adopt the view that killing babies is morally required, permissible, or whatever, then that would not be in opposition to my values. In fact, it would be in direct alignment with them. After all, it is what I have determined I have most reason to do. That means I have weighed my reasons already, and killing babies won out.