• Depression and 'Doom and Gloom' Thinking vs Positivity: What is 'Self-fulfilling Prophecy' in Life?


    I certainly think we can answer your questions in the affirmative. Here are just two historical examples:

    First, being very interested in Marxism, I have read a lot of history focusing on communist movements. Here, activists' faith in the eventual triumph of communism, its inevitability, was often a potent force motivating their persistence in the face of adversity.

    Second, many wars have been started because one side sees war as "inevitable" and also assumes that it will become weaker vis-à-vis its enemy in the future (due to shifting demographics, disparate economic growth, etc.). This sort of thinking played a role in the outbreak of the American Civil War, the Third Reich's decision to invade the Soviet Union, the Lebanese Civil War, the Japanese decision to strike Pearl Harbor, etc.

    Essentially, the fear that war must come motivates people to actualize that very fear, hoping to start such a conflict on more favorable terms, rather than seeking to avoid a conflict, since they see time as "on the side of the enemy."

    This sort of decision-making plays a key role in the tendency of "rising powers" to engage in wars with the dominant hegemon, what Graham Allison terms the "Thucydides Trap." Historically, it is rare for a hegemonic power to be replaced peacefully (although it does happen, e.g. the US and UK in North America—a few attempts to invade Canada aside lol*—and the US war with Spain to take on total hegemonic control of the Western Hemisphere was not particularly bloody.)

    *One of the oddities of history is that, before the other colonies had even entered the Revolutionary War, New York and Massachusetts attempted a cross-alpine winter invasion of Canada, and actually managed to capture Montreal. It always cracks me up to envisage the thinking that went into that one.
  • Objectivity and Detachment | Parts One | Two | Three | Four


    Ah yes, by "he" I meant Taylor. I feel like the empirical approaches to faith in the article have much to do with adopting the "disengaged" frame. It's a frame that very much lends itself to a "method epistemology," although there is obviously a certain virtue of apatheia elevated here too.

    You could contrast it with the virtue epistemology of someone like St. Maximus the Confessor, where the attainment of truth is itself dependent on a deep personal transformation, and where all virtue is ultimately connected to knowledge (and love!).

    From my knowledge of Eastern philosophies, this is a common thread there too. One does not recognize the Dao or come to Enlightenment by withdrawing into a widely accessible frame and applying method to sense data; it is rather an internal process.

    This sort of admonition that spiritual/ascetic therapy must come prior to understanding is pretty typical of Eastern Christianity, for example:

    17. If wounds in the body have been neglected and left unattended, they do not react to medicine when the doctors apply it to them; but if they have first been cleansed, then they respond to the action of the medicine and so are quickly healed. In the same way, if the soul is neglected and wholly covered with the leprosy of self-indulgence, it cannot experience the fear of God, however persistently it is warned of the terror and power of God's judgment. When, however, through great attentiveness the soul begins to be purified, it also begins to experience the fear of God as a life-giving medicine which, through the reproaches it arouses in the conscience, burns the soul in the fire of dispassion. After this the soul is gradually cleansed until it is completely purified; its love increases as its fear diminishes, until it attains perfect love, in which there is no fear but only the complete dispassion which is energized by the glory of God. So let us rejoice endlessly in our fear of God and in the love which is the fulfilling of the law of perfection in Christ (cf. Rom. 13:10).

    From the Philokalia - St. Diadochos

    Eastern Christianity generally tended to avoid the sweep towards empiricism, although this is in part because it often seems suspicious of anything that isn't at least close to a millennium old as "innovation" (which can have its own difficulties :rofl: )
  • Objectivity and Detachment | Parts One | Two | Three | Four


    Interesting. Charles Taylor covers some of the same ground in "A Secular Age." An interesting point he makes re empiricism/verification and its relation to historical events and notions of Providence is that people are not "forced to the facts" (as they might be by some experimental conclusion) when it comes to rewriting history on secular terms. It's rather a particular sort of framing.

    And it is a framing people get comfortable with. It's fairly uncommon to see historical drama/fiction not depict all the lead characters, or at least the heroes, as post-modern agnostics with contemporary class consciousness, etc.

    He has a pretty compelling diagnosis of the psychological impetus for the "disengaged" frame of Hume and Gibbon vis-à-vis questions of religion as well. It represents a sort of control and insulation. At the same time, I think critics (including the post-moderns) have a point that it is always somewhat illusory, while also not being appropriate for all epistemic situations.
  • What is faith
    22. The deep waters of faith seem turbulent when we peer into them too curiously; but when contemplated in a spirit of simplicity, they are calm. The depths of faith are like the waters of Lethe, making us forget all evil; they will not reveal themselves to the scrutiny of meddlesome reasoning. Let us therefore sail these waters with simplicity of mind, and so reach the harbor of God's will.

    Saint Diadochos of Photiki
    On Spiritual Knowledge and Discrimination - One Hundred Texts
  • Quine: Reference and Modality


    That was in reference to Quine though. "This system implies 'Aristotelian essentialism,' (at least as he understood it), therefore it is flawed."

    Now that is a fine point to make. If formalism implies a position one takes to be false, it is indeed deficient. The point is rather that one cannot then turn around and point at an "approved" formalism as evidence of the rightness of a metaphysical position.

    This same point is made re Quine's methods in metaontology. That "there exists at least one Hercules that is strong" implies an ontological commitment to the existence of Hercules, but not strength, is obviously not neutral.
  • The Boom in Classical Education in the US


    I think it could be a helpful aid and comparison. If one accepts the idea that the goal of education is to produce virtue and help individuals to "be good people" and "live good lives," then differing approaches to virtue are probably helpful.

    The problem is that there is so much that could go on a curriculum, making it difficult to know what to add or drop. This is why one needs to consider an organic "approach." Plus, there is always the issue that it is often better to cover one thing well than many things in a shallow fashion.

    The whole idea of intellectual and epistemic virtue is that one should be able to learn things on one's own, "learning how to learn," and also to judge what is worth investing time in. This is obviously a good deal different from chasing utility and "the needs of the current job market," which comes from conflating consumption and well-being.
  • Quine: Reference and Modality


    A lot of weight must rest on intuition. A rational argument in support of rational argument must presuppose the authority of rational argument. You cannot rationally justify reason in a non-circular manner. One cannot justify all the laws of thought, or one's inference rules, without at least starting from accepting some of them. Like Gadamer says, one needs prejudices to even begin.

    The classical inference rules are not counterintuitive. They are so intuitive that man studied them for millennia and largely came to the conclusion that they could not be otherwise. What is counterintuitive is having to translate things into logical form and properly apply the rules.

    But more to your point, the reason I bring up probability is simply because it is a good analogy in terms of the sort of disagreement here. Were I a subjectivist, a frequentist, etc., I could give exactly the sort of response you've given. "Ah, but that can be framed in frequentist terms." Indeed, both subjectivists and frequentists have some sort of explanation to cover every case. If they didn't, it would be a decisive deficiency. Nonetheless, some explanations require much more "stretching" than others. And there is also a similarity here in that both sides make use of the same methodology and formalisms, and then sometimes point to the formal apparatus to say "see, this works, so the interpretation must be correct." But of course, the fact that Bayes' Theorem is useful doesn't really say much to undercut frequentism.

    This is similar to Spade's point re Quine's sort of austere Platonism. Perhaps he is wrong about why Quine chooses this sort of theory of predication, but supposing he is right, it would be an example of the cart pulling the horse. In this case, metaphysics would be dictated by the particular formalism one is familiar with. This is "I have a hammer, so the world must be composed of nails."



    And, to the contrary, Klima and other Aristotelians claim that this framing of modality, and of essence in terms of modality, is in fact hostile to Aristotle. But this shows the problem of pointing to formalisms to attempt to adjudicate metaphysics. Here we see an influential philosopher claiming that a formalism must be abandoned precisely because it seems to him to lead to metaphysical conclusions he disagrees with.

    Klima's point is that contemporary modal framings of essence seem to be leading to a sort of conceptual blindness vis-à-vis classical notions of essence, and that one difficulty is the demand that realist theories be translated into systems made by nominalists with a nominalist bias. And right here we can see just one example of a nominalist saying "let's ditch this system because it isn't consistent with the proper metaphysics."

    Yet then philosophers will turn around and point to their preferred formalisms, designed with these biases and aims in mind, and try to call on them to adjudicate metaphysical questions. "Look, truth is not relational, it cannot ultimately apply to the adequacy of intellect to being, because in this formalism it applies to propositions and has an arity of one."

    Obviously, this sort of appeal to formalism will be particularly inadequate if the camp using it has themselves claimed that such formalisms are just a few among an infinite number of possible creations that must be selected on the basis of some vague criteria of usefulness.



    The first point which arises about this usage is that it seems to rely for its truth on certain beliefs about the physical world. I'm thinking of something like: "The causal 'flow of time' is unidirectional, toward what we call the future. Nothing can reverse this causality, and nothing can return to a previous moment in the flow and 're-cause' something in a different manner."

    I think the Principle of Non-Contradiction is enough. Something cannot have happened and not have happened. George Washington cannot have been the first US President and not have been the first US President (p and ~p).

    The idea of the "past changing" would seem to imply some sort of second time dimension by which there is a past that exists at some "second time" ST1 and then changes at some later time ST2. But if the past changes and George Washington was not the first president then he was never president.

    Philosophers of time have discussed whether such a notion is even coherent, but at any rate I see no reason why we should trouble ourselves too much about it. It seems on par with questions like: "but what if the world was created 5 seconds ago and all our memories shall change in another 5 seconds?," "what if an evil demon has messed with all my concepts and I don't really even know what a triangle is?," or the misologist's "what if reason has no authority and does not lead to truth?" or "what if nothing is really true or false?"

    My personal thoughts are that, if one walks down the cul-de-sac of radical skepticism, there is no "certain" way out. The various coping mechanisms created by modern thought's love affair with radical skepticism are all subject to the challenge: "but what if it is radically wrong?" Reason has the capacity to question anything. Yet I also think that philosophy is under absolutely no obligation to start from radical skepticism. And I think claims like the following:

    "It is possible that giving my child milk tonight shall transform them into a lobster.";

    "It is possible that if I recite this incantation, Adolf Hitler will become the first President of the USA, retroactively changing history."; or

    "It is possible that I did not eat this dinner that I just ate"

    are all out in the realm of "radical skepticism." Certainly, on a common-sense usage of "possible," I should not worry about the possibility that giving my child milk will transform them into a lobster, nor do I think the actual necessity in play here is inaccessible to the human mind (else the project of the sciences and philosophy would be doomed). Likewise, my having both eaten and not eaten my dinner seems straightforwardly contradictory.

    Do we know this to be true? I would say we do not -- we know so little about how time functions, physically -- but let's grant it. Is it, then, a necessary truth? This, notice, would be a necessary truth that guarantees a whole host of other necessary truths, but on quite different grounds. Do we need it to be a necessary truth? Could the (in 2025 allegedly necessary) truth that "Washington was born in 1732" depend for its necessity on a contingent truth that "Nothing can be uncaused or re-caused"? Well, why not?, we might reply. Why shouldn't a contingent truth ground a necessary truth? Isn't it the same case as the (contingent) truth that GW was born in 1732 causing the (now necessary) truth that "GW was born in 1732"?

    But there's a flaw here. We're equivocating. We don't want to say that GW's birth in 1732 caused anything here other than the truth of a subsequent statement to that effect. Whereas, with a law about "causality and the flow of time," we do want to say that this law, whether necessary or contingent, literally causes events to become necessary subsequent to time T1 -- that is, when they in fact occur.

    So, pausing again before I go on -- do you think this is a reasonable analysis of some of the issues involved in "necessity" statements involving the past?

    I am not sure if this is a good way to go about it. You're splitting everything up into individual propositions. So, you have it that the specific individual proposition involving Washington's birth is necessarily true in virtue of the particular event of Washington's birth. This is not how it is normally put at least. The necessary relationship between the truth of a proposition and the fact that what it describes obtains is generally framed as a general principle. It's true for all true propositions, in virtue of the fact that they are adequate to being. Causes are many and are instantiations of principles, but necessity "flows" from principles.

    Existential, metaphysical, and physical necessity are normally described as ordered, which I think is the right way to look at it. One is able to describe physical necessity through appeals to more general principles, rather than as some sort of heap of propositions that can either be true or false and necessary or contingent. That George Washington can't have been both born in 1732 and not born in 1732 is explicable by the more general principle that a thing cannot both be and not be in the same way, at the same time, without qualification.
  • The Boom in Classical Education in the US
    I didn't feel it warranted a new thread, but I also thought, re the complaints on "diversity," that the classical tradition probably lends itself to studying other cultures as a useful source of synthesis and comparison. Whereas the similarities across cultures as one goes further back might denote a need for a sort of temporal diversity.

    For example:

    To my mind the ethics of Gautama Buddha can best be interpreted as a virtue ethics. Confucius' view of the moral person as an artistic creation resonates well with Plato's view of the unity of reality, the good, and the beautiful. Agreeing with his Greek contemporaries, the Buddha also established an essential link between goodness and truth on the one hand and evil and untruth on the other. Both the Buddha and Christ, however, would have asked for two major changes in Greek virtue ethics. In both Buddhism and Christianity pride is a vice, so the humble soul is to be preferred over Aristotle's "great soul" (megalopsychia). (Aristotle's megalopsychia may even be too close to megalomania for the comfort of most contemporary persons.) Both the Buddha and Christ would also not accept Aristotle's nor Confucius' elitism. For Aristotle only a certain class of people (free-born Greek males, to be exact) could establish the virtues and attain the good life. (Greek eudaimonism has been called "an ethics of the fortunate.") In stark contrast, the Dharmakaya and the body of Christ contain all people, including the poor, the outcast, people of color, and women. For Buddhism we will perhaps have to change the definition of virtue ethics from "the art of making the soul great and noble (megalopsychia)" to "the art of making the soul balanced and harmonious..."

    A. J. Bahm's more literal translation of samyag- as "middle-wayed" view, "middle-wayed" conception, etc. brings out the parallel with Aristotle's doctrine of the mean even better. Bahm observes that the Buddha's mean "is not a mere, narrow, or exclusive middle [limited by strict rules or an arithmetic mean], but a broad, ambiguous, inclusive middle." Therefore, the virtues of the eight-fold path are seen as dispositions developed over a long time, and they are constantly adjusted with a view to changing conditions and different extremes. Bahm acknowledges that the translation of "right" is acceptable if, as it is in both Buddhist and Greek ethics, it means

    that which is intended to result in the best [i.e., the summum bonum]. . . . However, right, in Western thought, tends to be rigorously opposed to wrong, and rectitude has a stiff-backed, resolute, insistent quality about it; right and wrong too often are conceived as divided by the law of excluded middle. But in samyag- the principle of excluded middle is, if not entirely missing, subordinated to the principle of the middle way."

    Neither the Buddha nor Aristotle gives up objective moral values. They both agree, for example, that it is always wrong to eat too much, although "too much" will be different for each individual. It is also impossible to find a mean between being faithful and committing adultery or killing and refraining from doing so. But even with this commitment to moral objectivity, we must always be aware that the search for absolute rightness and wrongness involves craving and attachment. Besides, developing the proper virtues will make such a search misdirected and unnecessary.

    Or another interesting similarity I am aware of (since Confucius seems to clearly be in the camp of virtue ethics):

    Among the traits connected to ethical nobility are filiality, a respect for and dedication to the performance of traditional ritual forms of conduct, and the ability to judge what the right thing to do is in the given situation. These traits are virtues in the sense that they are necessary for following the dao, the way human beings ought to live their lives. As Yu (2007) points out, the dao plays the kind of role in ancient Chinese ethics that is analogous to the role played by eudaimonia or flourishing, in ancient Greek ethics. The junzi is the ethical exemplar with the virtues making it possible to follow the dao.

    Besides the concepts of dao and junzi, the concept of ren is a unifying theme in the Analects. Before Confucius’s time, the concept of ren referred to the aristocracy of bloodlines, meaning something like the strong and handsome appearance of an aristocrat. But in the Analects the concept is of a moral excellence that anyone has the potential to achieve. Various translations have been given of ren. Many translations attempt to convey the idea of complete ethical virtue, connoting a comprehensive state of ethical excellence. In a number of places in the Analects the ren person is treated as equivalent to the junzi, indicating that ren has the meaning of complete or comprehensive moral excellence, lacking no particular virtue but having them all. However, ren in some places in the Analects is treated as one virtue among others such as wisdom and courage. In the narrower sense of being one virtue among others, it is explained in 12.22 in terms of caring for others. It is in light of these passages that other translators, such as D.C. Lau, 1970a, use ‘benevolence’ to translate ren. However, others have tried to more explicitly convey the sense of ‘ren’ in the comprehensive sense of all-encompassing moral virtue through use of the translation ‘Good’ or ‘Goodness’ (see Waley, 1938, 1989; Slingerland, 2003). It is possible that the sense of ren as particular virtue and the sense of comprehensive excellence are related in that attitudes such as care and respect for others may be a pervasive aspect of different forms of moral excellence, e.g., such attitudes may be expressed in ritual performance, as discussed below, or in right or appropriate action according to the context. But this suggestion is speculative, and because the very nature of ren remains so elusive, it shall be here referred to simply as ‘ren’.
  • Objectivity and Detachment | Parts One | Two | Three | Four


    I believe that altered states of consciousness, epiphanies, and what are called religious experiences are certainly possible; they do sometimes, under certain conditions, happen. I know this from personal experience. But I cannot demonstrate even that possibility to anyone who has not experienced an altered state themselves, and then I don't need to demonstrate anything—my experience is irrelevant to them. It is their own experience that might lead them to believe.

    But, if I am understanding your objections properly, wouldn't this equally apply to knowing that anyone else is having any experiences at all?

    How do you "demonstrate" that someone else is experiencing red, enjoying a song, or in pain, for instance?

    For example, some Christians believe that Jesus caused Lazarus to return to life when he had been dead, that Jesus walked on water, and that Jesus himself "rose from the dead". How would you verify such claims? 'Verify' does not mean merely 'convince others'.

    Presumably the same way we "verify" other historical claims. But if your problem is not the plausibility of particular Christian claims, but rather our capacity to verify these sorts of claims at all, it would seem that the problem of verification you identify here would apply equally to virtually all fact claims about historical events.

    How does one "verify" that Hannibal won the Battle of Cannae through a double envelopment, for instance? Or that the Germans started World War II with a false flag attack? Or that St. Augustine was a Manichee in his youth? Or that St. Thomas studied in Paris?
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    Faith or intuition are valid ways of knowing—simply because inhabiting a faith or intuition is a knowing. It is a knowing of a certain kind of experience. It is not, however, a propositional knowing—although it might lead to propositional beliefs, those beliefs cannot be verified by the faith or intuition. And note, this is not to say that the faith or intuition cannot be convincing to the one inhabiting it, it is just to say that it cannot provide sufficient grounds for an argument intended to convince others.

    If others are convinced by your intuition-based conviction then it will be on account of their being convinced by your charisma, or they are sufficiently lacking in critical judgement to buy an under-determined argument, or they can relate to the experience you describe because they have had similar experiences and feel the same way. In other words, they are being convinced on the basis of rhetoric or identification, not reason.

    I think this is a good distinction. One deficiency in contemporary thought is the tendency to collapse all knowledge into propositional belief (and to include "justification" in the definition of knowledge). To be sure, this tendency has never been absolute, but it has been strong in some contexts.

    The deficits here are most obvious when one considers "knowing that..." versus "knowing how to..." Knowing how to ride a bike does not seem to involve mere assent to certain propositions with proper justification, and this distinction seems to recover the older notion of "techne." Yet certain elements of religious life seem to involve a sort of "know how."

    In terms of the bolded above, I would just note that "similar experiences that feel the same way," might undersell the strength of the arguments that can be mustered in support of a defense of intuition or noesis. After all, how does one demonstrate that reason itself is valid or has any authority, or demonstrate the Principle of Non-Contradiction, etc.? It seems quite impossible to give a non-circular argument in favor of reason, one that does not already assume the authority of reason.

    So, this is a "feeling" that underpins the authority of argument itself, and one might suppose that because of this it is better known than knowledge that is achieved through rational demonstration.

    This does not, however, imply that all noesis is equally easy for all people to come to. Indeed, if it is akin to dianoia, to discursive knowledge, we shouldn't expect this sort of democratization. The challenge then is that Plato, the Patristics, Eastern philosophies of Enlightenment, etc. claim that this sort of noesis or gnosis is in fact not easy to achieve, but usually quite arduous. To use the framework of the Patristics, the nous is damaged and in need of significant healing and therapy before it can properly attain to the truth.

    Which is all to say that to collapse faith into assent to propositional knowledge will tend to totally miss this and will mean just talking past numerous other traditions (e.g. Neoplatonism, Orthodoxy, etc.).
  • Quine: Reference and Modality


    Bad judgement can apply to any interpretation of probability. Infamous examples include people being sent to prison for years, having their lives ruined, because of poor interpretations of probability. Perhaps these examples only tend to involve frequentism because it is already dominant, or perhaps it speaks to its being truly counterintuitive?

    A famous case from the UK involved a woman being convicted of murdering her own children after two of them died of SIDS. The lead witness in the case, an expert in statistics, argued for conviction on the grounds that the frequency with which a woman of her demographic background could be expected to lose two kids to SIDS was incredibly low, meaning the odds of foul play should be considered far higher. But this is simply bad reasoning, since the question should be "given a woman has already lost one child to SIDS, what is the chance that they will lose another?"

    Actually, families that experience SIDS are much more likely to experience it again, and there are causal explanations for this that don't involve foul play (although, it seems obvious that people who murder their kids are also more likely to do so again in the future). The explanation that the prosecution offered in terms of population frequencies was clearly deficient (a deficiency that eventually contributed to her exoneration).

    A proponent of frequentism might argue, however, that the problem is simply that the wrong population was chosen by the expert. The population in question should have been "mothers who have already lost their first child to SIDS." So, the frequentist can say the mistake is looking at frequency in the wrong population. The obvious rebuttal here is that the population of "mothers who lost their first child to SIDS" is relevant because this population has a much higher propensity to experience SIDS. That is, population selection often has an implicit notion of propensity that is built in.
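    To make the gap concrete, here is a back-of-the-envelope sketch in Python. The rates are invented placeholders for illustration, not the actual figures from the case:

```python
# Illustrative numbers only: both rates below are assumptions made up for
# the sake of the arithmetic, not the real statistics from the case.
p_first = 1 / 8500              # assumed baseline chance of a first SIDS death

# Prosecution-style reasoning: treat the two deaths as independent and square.
p_both_indep = p_first ** 2     # roughly 1 in 72 million

# If a first SIDS death signals an elevated propensity, the relevant figure is
# the conditional probability of a second death given the first.
p_second_given_first = 1 / 100  # assumed elevated conditional rate
p_both_cond = p_first * p_second_given_first  # roughly 1 in 850,000

print(round(1 / p_both_indep))  # about 72,250,000
print(round(1 / p_both_cond))   # about 850,000
```

    The two readings differ by roughly two orders of magnitude, and that difference is the whole dispute: the choice between multiplying unconditional frequencies and conditioning on propensity is not settled by the formalism itself.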

    You see the same thing with the Monty Hall Problem, Mr. Brown's kids, etc. Originally you had PhDs focusing on probability writing in to give the wrong answer to this question. The answer only seems obvious now because everyone gets taught it in intro stats. But of course, if you use Bayes' Theorem, something you can teach to a middle schooler, the correct answer is easy to come by.
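    For instance, the standard Monty Hall computation via Bayes' Theorem takes only a few lines (this assumes the usual setup: the host always opens an unchosen door with a goat, choosing at random when he has a choice):

```python
from fractions import Fraction

# Player picks door 1; the host then opens door 3, revealing a goat.
prior = {d: Fraction(1, 3) for d in (1, 2, 3)}  # car equally likely behind any door

# Likelihood that the host opens door 3, given where the car actually is:
likelihood = {
    1: Fraction(1, 2),  # car behind 1: host picks door 2 or 3 at random
    2: Fraction(1, 1),  # car behind 2: host is forced to open door 3
    3: Fraction(0, 1),  # car behind 3: host never reveals the car
}

# Bayes' Theorem: posterior is proportional to prior times likelihood.
evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}

print(posterior[1], posterior[2])  # 1/3 2/3: switching doubles your chances
```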



    For example, if there are three possible worlds of different colours, then why should the existence of these three distinct possibilities automatically imply that each colour is equally likely or frequent? In my opinion, the fallacy that logical probability implies frequential or even epistemic probability is what gave rise to the controversial and frankly embarrassing Principle of Indifference.

    It doesn't, at least not in the Principle of Indifference as described by Laplace, Keynes, etc. It's the simplest non-informative prior. Obviously, it cannot be applied to all cases, but rather only to a special set of them. But the general reasoning used here tends to be at work in more complex non-informative priors.

    Anyhow, part of the reason why subjectivist probability has made such a comeback is through information theory. On a frequentist account, the question of "what is the relevant distribution" vis-à-vis information becomes extremely fraught. For the (now, I believe, minority) group that wants to deny information any "physical reality," the argument is that, for every observation/message, the values of each variable just are whatever they happen to be, occurring with p = 1. Hence, mechanism is all that is needed to explain the world. I think Jaynes' work is particularly instructive here.
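    One way of cashing out "non-informative" along Jaynes' lines is maximum entropy: over a finite set of outcomes, with no further information, the uniform assignment of the Principle of Indifference is exactly the distribution that maximizes Shannon entropy. A minimal sketch (the skewed distribution is an arbitrary example for contrast):

```python
import math

def entropy_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

uniform = [1 / 6] * 6                     # indifference over a six-sided die
skewed = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]  # any other assignment encodes extra information

print(entropy_bits(uniform))  # log2(6), about 2.585 bits: the maximum attainable
print(entropy_bits(skewed))   # strictly less
```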



    Exactly. There are indeed plenty of ways to misapply the Principle of Indifference, or cases where it will not be appropriate. There are other non-informative priors; PI is just the easiest to teach for simple examples. However, critiques of it often simply include information in the example that would necessarily preclude using PI in the first place, which doesn't really say anything more than "if you misapply a rule, it doesn't work right."
  • Quine: Reference and Modality
    Or more simply, if something is impossible, this is in some sense necessary, since the impossible, being "not possible," necessarily cannot occur. It will occur in 0% of potential futures.

    If you try to sprout pinto beans by putting them in an incinerator, this will not work. Anyone is free to rebut this by successfully starting a garden by first incinerating their beans. Otherwise, the claim seems pretty secure.

    So why commit ourselves to a conceptual apparatus where we must say: "Actually, the impossible is actually possible because we can string together the words 'I incinerated my beans to sprout them'" and thus be committed to the "existence" of some "possible worlds" where the impossible is possible?

    Why collapse all necessity into one sort? It seems clear that there are different sorts. A triangle cannot have four sides. This is impossible in a way that seems to differ, however, from the way incinerating beans cannot possibly result in their sprouting.
  • Quine: Reference and Modality


    It might be expressible in terms of accessibility (although I would say you are losing things); that's not really the point. Framing modality in terms of possible worlds requires a radical, counterintuitive retranslation of counterfactual reasoning into terms speakers themselves are unlikely to recognize as true to their intentions, while at the same time requiring either a bloated ontology of "existing" possible worlds, or some other sort of explanation of what they are.

    Why must we be under a commitment to understanding modality in these terms? Certainly not because this is how modality has been historically or widely conceived, or because it's what most people mean by the common usage of the term.

    I will throw out a very similar example. You can also explain probability in terms of frequency alone. This will work quite well in some situations, when you are picking colored jelly beans blindly out of a jar for instance. "A randomly chosen jelly bean has a 25% chance of being red" just means "25% of the jelly beans are red."
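    As a toy sketch (the jar's contents are hypothetical, purely illustrative), the frequentist reading of the jelly bean claim is just a counting exercise:

```python
from collections import Counter

# A hypothetical jar: 25 red beans out of 100 total.
jar = ["red"] * 25 + ["green"] * 40 + ["yellow"] * 35

counts = Counter(jar)
p_red = counts["red"] / len(jar)  # "probability" as relative frequency
print(p_red)  # 0.25
```

    The difficulty, of course, is that one-off events like a single election offer no such jar to count over.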

    Frequentism is not the only way to understand probability however. It only really becomes popular in the 20th century due to some quite contingent events (I don't think its eventual dominance is unrelated to the switch to viewing modality in terms of possible worlds either). One can claim "probability is just frequency" just as one could try to claim that "modality is just possible worlds."

    But there are several other views of probability: propensity, subjectivism, logical, etc. and it's far from obvious that these aren't better ways to look at things. Frequency can, for instance, be explained as the result of propensity. Frequentism often leads to grave mistakes because it is very counterintuitive for certain sorts of issues.

    For instance when we say "Trump had only a 20% chance of winning the 2016 election," do we (must we?) mean something somehow parsable into frequentist terms? E.g., "in only 20% of possible worlds including the election did Trump win," or "if we ran the election 100 times these polls suggest Trump would win 20 of the 100."

    These are, IMO, bizarre rewriting exercises that dogmatic frequentists have to engage in as a means to hold up the assertion that probability and potentiality just are frequency. A propensity view suits one-time events far better, or the Bayesian view. Possible worlds sometimes looks a lot like frequentism, only of a sort particularly concerned with what occurs with 0% or 100% frequency. It also has to rely on bizarre rewriting exercises.

    It also often seems to get things completely backwards. There are no possible worlds without x because x is necessary, not "x is necessary because no possible worlds exclude it" (this is essentially just a special case of the frequentist dogma that probability just is frequency, which has been appropriately lampooned in recent years). This, in turn, leads to having to explain complex cases (although perhaps fairly simple in naive counterfactual reasoning) with ever finer webs of relations. This is the opposite of the goal of explaining complex things in terms of more general principles, e.g. principles like "conscripts who aren't soldiers don't spontaneously know how to be good soldiers without being taught because potential isn't spontaneously actualized without a cause sufficient to its actualization; a cause is necessary."
  • Quine: Reference and Modality


    In fact, before I develop this any further, let me ask whether you think (2) is a fair elaboration of what you meant by "If it is not possible, then it is in some sense necessary."

    Obviously, you can rename a pet, but it seems accurate in the sense that something that has happened cannot possibly have not happened. It has already been actualized.

    However, I don't think we'd want to limit this sort of consideration only to the past.

    Consider: "In order for the green conscripts to be effective in battle it was necessary for Napoleon to train them into a disciplined army first."

    This is the sort of sentence historians commonly write. Are they deficient in their understanding or does this make sense?

    I think it makes sense, but it can be taken in ways that don't. For instance, it can hardly mean "in all possible worlds this set of individuals needed to be trained by Napoleon to become a combat-effective fighting force." It seems possible that this set of men might have received training at some other point, by some other means, or that they might have some sort of preternatural aptitude as soldiers and not require formal training.

    Rather, I think we can correctly interpret it as: "Given the conscripts lacked combat skills, it was necessary for this potentiality (to be good soldiers) to be brought into act, because potentialities necessarily do not go from potency to act without some sufficient cause." Basically, people who lack skills necessarily don't spontaneously gain them for no reason at all, but will only gain them through certain actualities (a sort of physical necessity). This falls under the more general principle that actuality must lie prior to any move from potency to act, else things could happen for "no reason at all."

    And this sort of relationship between actualities and potentialities can be layered on in many ways, which is what we often see in complex counterfactual reasoning.
  • What should the EU do when Trump wins the next election?


    Europe is absolutely capable of defeating Russia in terms of war-making capacity. Russia, even at its more rapid pace of gains in recent months, would have to spend over a millennium at war to conquer all of Ukraine. They are down to sending out men to conduct frontal assaults with golf carts and passenger cars instead of armored vehicles. Their artillery advantage has shrunk dramatically, etc.

    What Europe lacks is the political will and courage to defeat Russia, and to make the sacrifices that would come with actual wartime defense spending and actually cutting off Russian energy sales. German defense spending remains below half of pre-1990 levels, as does French spending. The more comparable situation, given an active war in Europe, would be the 50s and 60s, and defense spending relative to GDP is now about 25-33% of those rates, which were more in line with active deterrence.

    It's not the case that, three years into the war, Europe and the US were incapable of mobilizing the resources to give Ukraine artillery superiority, armor superiority, or even air superiority. They simply decided it was too expensive, a decision they stuck with long after it was clear that "escalation" was not a real issue.

    What is Michel Houellebecq's phrase on mainstream secular French culture, "a civilization that has lost its will to live?"
  • What do you think about Harris’ health analogy in The Moral Landscape?


    In Aristotelian language we would say that certain first principles are readily known even if there is disagreement about some entailments of those first principles. We do not disagree on the foundation, even though we can disagree on the more speculative matters which are not as easy to see as the foundation.

    This is a good way to frame it.

    I think with art it's similar, but significantly more difficult. Often, when people point to difficulties in aesthetics, they start by doing things like comparing Beethoven and Bach, Homer and Dante, etc. Yet it seems wise to start with the easiest comparisons (i.e. the worst and the best), and to then work our way up to the difficult ones.

    Art has many goals, and that's why it is such a difficult topic. One common goal of art is to be edifying. In this area, which art is "better" will depend on the receiver. Some people are more in need of edification in some areas than others, or may not be ready for certain moves, etc. The Chronicles of Narnia might be an excellent work for children according to this criterion, but what is suitable for children is not necessarily what is suitable for adults.
  • Donald Trump (All General Trump Conversations Here)


    It's not a strawman; everyone was pardoned. Nowhere in the article is there any indication that he does not think this is a good thing. Actually, he seems to heavily imply that if anything bad happened it was some sort of conspiracy, a "false flag."

    There was a strange energy on the west side of the Capitol, but it was not a mood of revolution. A friend likened it to stepping onto a movie set with a troupe of paid actors. He witnessed activist “theater kids” dressed in black changing costume into Trump gear, and sensed a difference between the organic crowd at the rally and the melodrama of paid provocateurs. The scaffolding, flashbangs, colored smoke, and flags seemed staged for cinema, and my friend felt like an “unwilling extra” for a Hollywood production: Insurrection Day: A National Disgrace.

    Actually, forget "imply"; he outright calls those engaged in violence "paid provocateurs" (obviously under deep, deep cover, since many had long histories of far-right activism, some with prior convictions for political violence).

    Which is, frankly, contradictory. This would imply that the people being pardoned, the ones caught on tape attacking law enforcement, were agents provocateurs... (so we shouldn't be happy about the pardons).

    Or, if it wasn't a false flag (after all, he seems happy about everyone being pardoned) it was actually the police's fault for making people assault them and then baiting them into forcing their way in:

    As one got nearer to the Capitol on the west side, one could see people climbing scaffolding and hear yelling. Even closer, and there was an acrid smell of teargas, which was enough to make most people keep their distance. Why did the teargas start? Was it a desperate attempt of an understaffed security to quell a riot?



    The question here is, "Why was Gold charged with a 20-year evidence-tampering sentence?"

    Now this is a strawman. Mentioning the statutory maximum and then pretending that prosecutors were charging for the maximum is a bad-faith argument. Vandalism can also be charged as a felony, and simple check or mail fraud also comes with multi-year maximums, but these are for serious versions of the crime. From what I understand, she got a fine and 60 days in jail.
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    The problem fdrake has with this thinking is that it's utterly totalising despite pretending not to be, and can't be articulated without reducing every aspect of human comportment to a single existential-discursive structure. It's everything it claims not to be, all the time. The utter hypocrisy of the perspective is nauseating. Everything mediates everything else, "there is no ontological distinction between discourse and reality" {because the distinction is a discursive one}. It's The One with delusions of being The Many.

    :up:

    Yes, it's a strange sort of dogmatism, sometimes manifesting as the dogmatic presupposition that everyone who disagrees is necessarily a dogmatist. I honestly don't think this would bother me so much if it was put forward in a straightforward manner. Yet it's an area of philosophy where things seem to often be put forward in as abstruse a manner as possible.

    It actually reminds me of debates in esoterica. Anyone who disagrees cannot possibly have truly fathomed it, and of course it will prove near impossible to show what "truly fathoming" the doctrines entails.




    Not exactly, a quantum of force cannot actually be weaker than it is... you and T Clark have made me consider my perspective a bit more, and what I'm coming to is that ... but say St. Thomas's Quantum of Force in faith is already this grand mountain... we can say his Faith is still as strong... but say instead of St. Thomas being 100% faith-based, he's 60% Faith and 40% logic and perhaps a lack of clarifying here has caused all sorts of equivocations, perhaps of myself even... due to the quantum of force not actually being lesser... just because a persons intellect may be divided in a 60/40 split doesn't necessarily mean that because a persons thought moves to 55/45 split that the quantum of force behind faith grew less... but that the quantum of force behind reason grew more...
    there IS a nuance to it... so for some people a quantum of force of faith may not be phased by reason...

    Not a blanket quality for all or even most though...

    That's probably a better way to look at it. Although again, this will really depend on how one conceives of "faith." I feel that disagreements here often stem from people using the word in quite different ways. Classically conceived, the fruits of the light of faith are generally taken to produce the highest level of understanding, as opposed to assent in the absence of understanding (noesis being superior to dianoia). On this view, the two might not be so easily separable, because both involve the actuality of the intellect and, in the end, the same Logos.

    That aside, a common distinction in modern philosophy is "faith in" and "faith that." Faith in others is not reducible to reason, but can certainly be aided by it.

    For example, I've had some employees I've had a great deal of faith in. They were great, very diligent. But they liked to go over their work with me, and so we'd run through things step by step. So then, I knew everything tied out, that it was correct. But I certainly didn't lose faith in them through this process, even though it gave me ample, rational evidence that their work was correct. I suppose then one way of putting it is that "faith in" ties to a person, and not to any particular facts.

    Perhaps this is also easier to see as respects practical reason. Often, we are not able to get anything like remotely decisive conclusions vis-à-vis moral reasoning. Yet we might have a good deal of faith in certain moral principles.
  • Between Evil and Monstrosity


    First, I'll throw out a proposal: the general issue of "how do norms and duties evolve?" and "is there progress in this dimension?" (and "if there is 'progress' how is it achieved?") are more properly questions for the philosophy of history. Much "moral progress" is not the work of heroic individuals qua individuals, but of institutions, which serve to shape individual identity and action, and which give context and shape individual acts of heroism.

    Of relevance, M.C. Lemon's introduction to his survey of the philosophy of history has some pointed questions for the skeptic of speculative history:

    For example, one might ask sceptics whether they at least accept the notion that, on the whole, ‘history has delivered’ progress in the arts, sciences, economics, government, and quality of life. If the answer is "yes," how do they account for it? Is it chance (thus offering no guarantees for the future)? Or if there is a reason for it, what is this ‘reason’ which is ‘going on in history’?

    Similarly, if the sceptics answer ‘no’, then why not? Again, is the answer chance? Or is there some ‘mechanism’ underlying the course of history which prevents overall continuous progress? If so, what is it, and can it be defeated?

    I would just add that the person who denies progress tout court has the additional difficulty of explaining why, presumably, they still think that a biology textbook from 2025 is likely to be more correct than one from 1925, and the one from 1925 more accurate than the one from 1825. It seems that one must either deny scientific and technological progress, which seems absurd, or offer some explanation for why it is unique in terms of "progress" as a whole.

    I think the answer to "how does an ethics of individuals motivate social change?" is: "it doesn't." Or at least, "that shouldn't be its focus." Individuals are no doubt important in the gyre of history, but the degree of influence they wield is almost always largely a function of the larger structures they play a role in. It's a historically situated and contingent influence. To be sure, some individuals qua individuals are important as exemplars, as say Socrates or St. Francis are, but their ability to be exemplars is contingent on certain institutions existing and persisting as well (else historical memory of them would simply be lost. Most saints are forgotten).

    Anyhow, I think the classical perspective Lewis draws on so much would have it that what is truly best for us is the perfection of virtue and freedom (the former being necessary for the latter). This obviously is a process that can be aided or hindered by our environment, by our economic, cultural, and historical context.

    Hence, I think the question is a difficult one. How does the individual act as part of the gyre of history? If we can answer the questions:

    Is there progress? And;
    If there is, how does it occur?

    We will have a better idea about how to relate to the demand to go beyond current cultural, historical, and economic conditions, and how monstrous this must be. I would say that we do indeed have a corporate duty to transcend these factors, but the prudent and wise way to do so is a difficult question.

    So, perhaps one way of dealing with monstrosity is to look to higher levels of organization. For example, it is one thing for one man in a standoff to disarm, a heroic sacrifice, another for all involved to drop their weapons (perhaps of obvious benefit to all).

  • Donald Trump (All General Trump Conversations Here)


    Well, first, I just realized this is a different anti-vaccine female doctor who was convicted. The one I recalled from earlier is literally on bodycam footage punching police officers in the face, hence my incredulity.

    Is it possible that some people wandered into the scene of the riot? Certainly. Just as I am sure some people arrested on the Brooklyn Bridge really did think they were being allowed to march across it (I was there that day, it seems one could easily have just followed the crowd on). And just as some people who were at George Floyd protests that descended into rioting might not have seen any rioting.

    Does the fact that some George Floyd protestors didn't come face to face with any rioting or police brutality mean it didn't happen, or that all rioters should be given pardons?

    Or more relevant: if someone with obvious political motivations and a vested personal interest in downplaying a riot was present at an extremely well-documented riot, and claims they saw no rioting, should we really presume they are credible until decisively proven otherwise? This is someone who was arrested at the scene of a riot and who pled guilty, but their claims that nothing untoward was happening should be taken at face value?

    I think it's entirely possible that some people were overcharged, or perhaps should not even have been charged in the first place. This is very common during violent protests and riots, the wrong people get punished, or punished too severely. In fact, I would even judge this likely. This isn't a conspiracy so much as a function of how the justice system works and the chaos inherent in such events.

    However, the original discussion here was the charge that all the prosecutions were unjust and thus that the wholesale pardons were just.

    What's the idea here, that no crimes were committed that day? That all those pardoned were innocent? That the police somehow tricked people into bringing weapons and writing emails about disrupting the certification, then baited protestors into attacking them on video?

    This seems to me every bit as outrageous as "no crimes were committed during the Minneapolis, Ferguson, or LA riots, and all involved should be pardoned." Some of those pardoned were in for felony assault they are on tape clearly committing and have prior convictions for other violent crimes, rape, etc. It seems ridiculous to me that the blanket pardon could be considered anything but a gross violation of the rule of law.
  • Donald Trump (All General Trump Conversations Here)


    Because she's lying lol. It's incredibly common for people to make up politically biased lies about police. Just like police "invited Occupy Wall Street" to march across the Brooklyn Bridge before arresting them midway through (never happened, they just decided to cede ground to angry demonstrators and arrest them when reinforcements arrived, granted some of those arrested who were further back might have simply followed the crowd on), and just as very many people claimed to see Michael Brown gunned down in Ferguson who were later proved by surveillance tapes to be blocks away, or "police just began spontaneously clubbing people at the 2012 NATO summit" (also not what happened). Politicos lied about all those events, or else suffered serious amnesia about events they claim to have been feet away from.

    The New York Times had an entire article on "misremembering" after numerous people took to social media to claim they had seen an unarmed black man gunned down in Times Square while running from police. But video released later that day showed a man swinging a hammer at police before being shot at close range. Did they hallucinate? This was the excuse. Yet I highly doubt people hallucinated seeing Mike Brown killed from half a mile away, they just lied about it because it suited their politics.

    There is ample video of what happened on the 6th and ample evidence that at least some of those charged and pardoned had made plans explicitly to breach the building and disrupt the election certification.

    Yes, people lie about events they attended in person all the time, and they edit video to support their lies. It's hardly shocking; activists are found doing this constantly. People who get arrested and charged with crimes (as is the case here) also lie about it, all the time; it shouldn't strain credulity.

    But clearly the juries forced to sit through all the evidence and video unanimously found some of the pardoned guilty.

    BTW, 147 rioters went to trial, and two (2) were found not guilty by juries. That's a conviction rate that sits rather awkwardly with "actually, no crimes happened."
  • The alt-right and race


    Some of the findings supporting eugenics turned out to be wrong. Others are quite robust. For instance, some mental illnesses are indeed quite heritable. The question of whether eugenics is a good public policy is quite different from "are some conditions/behavior patterns heritable," which was the key idea motivating eugenics.

    Eugenics has, in some sense, always been a thing. Some people have decided whom to have children with, at least in part, based on a folk understanding of heritability from time immemorial. Plato was writing about a state-led program of intentional breeding millennia ago. Similarly, today people who know they carry recessive genes for serious disorders often do consider this sort of thing. The whole burgeoning field of genetic counseling gets at precisely this concern.

    The most obvious place where this plays out is with disorders like Down Syndrome, which are now screened for early in most pregnancies in some countries. People terminate these pregnancies at vastly higher rates, leading to very stark declines in prevalence in some countries (e.g. Iceland).

    So, in a sense, eugenics is alive and well and a big industry. However, the term is now largely associated with state-mandated programs that involved extremely invasive state action, such as forced sterilization for criminals. It's also often associated with a racial component, although in many cases the focus was on health (which I suppose is where it also still thrives today). Where people advocate for state programs today (which is rare given the history), it is normally instead in terms of incentives to have, or not to have, more children.

    However, we are now at a point where it is possible to screen embryos not only for disease, but for sex, genes associated with intelligence, height, eye color, etc. This is where the sorts of things you see in films like Gattaca are a bit more plausible in the short term. Many embryos are fertilized, and then the "best" is selected. Although sex and disease are, from what I understand, overwhelmingly what is selected for when this sort of thing is done.

    Anyhow, the Alt-Right is fairly broad, and in some sense the idea of "race realism" is more tangential than I think a lot of coverage suggests. Since immigration is such a huge focus, one might think these sorts of heavily racialized ideas play a huge role here, but I don't think this is quite accurate. Rather, it's the most obviously objectionable thing for critics to focus on, but many of the Alt-Right's arguments against migration have nothing to do with "race realism" or anything that seems particularly explicitly racist.

    You can see this with alarmism over "Replacement Theory." There are indeed people who say that there is a vast Jewish conspiracy to replace White populations across the globe. However, there are also think tanks and government agencies, for instance the UN, that have put out memos on "replacement migration" as a solution to aging populations, and some liberal parties have explicitly pointed out in their internal strategizing that this could be a windfall for their long-term electoral prospects (assuming demographics continue to dictate party alignment in the same ways, which increasingly seems to have been a bad assumption).

    For instance, the New York Times just had a (fairly unconvincing) op-ed claiming that the solution to Germany's Far-Right problem was in fact more migration of this sort. But the political response from critics was to conflate any mention of replacement with the extreme, fringe Neo-Nazi theories, and I would at least agree with some of the targets of these charges that they are in bad faith. The fact that many of Europe's largest countries will be minority European by the time today's children are middle-aged is a historically huge shift, and it hardly seems that all concerns about the pace of change can be dismissed as racism or conspiracy-theory fever dreams. But there is a political incentive for both left-wing critics and far-right racists to try to pivot discussion towards things like "race realism."
  • Between Evil and Monstrosity


    I enjoyed their commitment to the inherent beauty and moral value of nature, though we ended up having a lot of heated discussions regarding whether brutal tragedies, like miscarriages, should be seen as other parts of God's artwork. I was of the impression that all of creation meant all of it, the nun agreed. Neither of us could quite stomach loving the majesty of suffering and indifference. The damnedest thing we spoke about was that it was ultimately our senses of compassion and espirit de corps with humanity that stopped both of us from also loving pain.

    Metaphysical optimism, the idea that we must live "in the best of all possible worlds," is, as far as I can tell, a concern that largely arises during the Reformation. I think David Bentley Hart's book on the 2004 Indian Ocean tsunami, The Doors of the Sea, is quite a good response to this. He uses Dostoevsky's Brothers Karamazov (and particularly the "Grand Inquisitor" episode nested inside it) to address this issue.

    It will suffice to say that the position of "metaphysical optimism," so well lampooned in Voltaire's Candide, has little in common with the idea of a corrupt and fallen cosmos that has been degraded by man's free choice to sin and which is ruled over by freely rebellious archons and principalities. This is a world where St. John can say that Satan is the "prince of this world" (John 12:31) and that "the entire cosmos is under the control of the Evil One" (I John 5:19), or where the messenger of the Lord is delayed by a corrupt dominion in Daniel, etc.

    Hart says:

    Now we are able to rejoice that we are saved not through the immanent mechanisms of history and nature, but by grace; that God will not unite all of history’s many strands in one great synthesis, but will judge much of history false and damnable; that he will not simply reveal the sublime logic of fallen nature but will strike off the fetters in which creation languishes; and that, rather than showing us how the tears of a small girl suffering in the dark were necessary for the building of the Kingdom, he will instead raise her up and wipe away all tears from her eyes – and there shall be no more death, nor sorrow, nor crying, nor any more pain, for the former things will have passed away and he that sits upon the throne will say, ‘Behold, I make all things new...'

    …of a child dying an agonizing death from diphtheria, of a young mother ravaged by cancer, of tens of thousands of Asians swallowed in an instant by the sea, of millions murdered in death camps and gulags and forced famines…Our faith is in a God who has come to rescue His creation from the absurdity of sin and the emptiness of death, and so we are permitted to hate these things with a perfect hatred…As for comfort, when we seek it, I can imagine none greater than the happy knowledge that when I see the death of a child, I do not see the face of God, but the face of his enemy. It is…a faith that…has set us free from optimism, and taught us hope instead...

    For, after all, if it is from Christ that we are to learn how God relates himself to sin, suffering, evil, and death, it would seem that he provides us little evidence of anything other than a regal, relentless, and miraculous enmity: sin he forgives, suffering he heals, evil he casts out, and death he conquers. And absolutely nowhere does Christ act as if any of these things are part of the eternal work or purposes of God.

    A key distinction then is that what "God wills" and what "God permits" are not identical.
  • "Underlying Reality" for Husserl


    Both perceptual synthesis and biological unity resist full reduction to mechanistic explanations as they're intrinsically holistic.

    Hence the soul as the form/actuality of man and the notion of essence/nature/formal causality (and thus final causality) :cool: .
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    So, St. Thomas's Five Ways demonstrate a lack of faith and are contrary to Church doctrine?

    Faith not being obtained through reason does not imply "Faith in God requires belief without reason-based thought," nor that "using reason-based thought for God is necessarily a showing of a lack of faith in God."

    You've got to be kidding me. He's clearly saying that if you require argumentation for belief in God, you lack faith.

    Yes, that's the first premise, which I labeled "true/consistent with doctrine." Others follow that are false/inconsistent.

    I feel like you're not even reading my posts, since you have interpreted me as dissenting from a premise I specifically affirmed, and also interpreted a post containing "faith is not achieved through reason" as somehow claiming that "reason is the source of faith." You also keep asserting "reason is the source of faith" as my position, despite my specifically clarifying: "I certainly didn't assert that 'reason is the source of faith.'"



    St Anselm was very well aware that faith is not based on reason.

    Indeed, what's the point? Anselm is quoted in the Catechism: "faith seeks understanding," to which you replied "no it doesn't."

    You're returning to a point that, as far as I can see, no one has made in this thread.
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    Believing, not knowing...

    And knowing is not an act of the intellect?

    Consider I Corinthians 2:

    Even so no one knows the things of God except the Spirit of God. Now we have received, not the spirit of the world, but the Spirit who is from God, that we might know the things that have been freely given to us by God. These things we also speak, not in words which man's wisdom teaches but which the Holy Spirit teaches, comparing spiritual things with spiritual. But the natural man does not receive the things of the Spirit of God, for they are foolishness to him; nor can he know them, because they are spiritually discerned. But he who is spiritual judges all things, yet he himself is rightly judged by no one. For "who has known the mind of the LORD that he may instruct Him?" But we have the mind of Christ.

    The above sentence is great because the author obviously forgot that paradox "this sentence is a lie." Because his faith was stronger than his knowledge about Truth and its paradoxes...

    Did they "obviously forget it?" Most philosophers throughout history did not think the Liar's Paradox demonstrates that truth can contradict truth (i.e. that LNC does not obtain).

    No it doesn't.

    Apparently, you know more about how to instruct Christians on their own faith than St. Anselm.
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    Au contraire, the argument has two premises inconsistent with doctrine (see the Catechism above).



    all of you who do require reason-based thought, have a severe lack of faith in God. (true/consistent with doctrine)

    Faith in God requires belief without reason-based thought. (false/inconsistent with doctrine)

    A logical argument for God is an attempt to provide reason-based thought. (true/consistent with doctrine)

    Therefore using reason-based thought for God is necessarily a showing of a lack of faith in God. (false/inconsistent with doctrine)

    Faith does not require belief without reason-based thought. One can have both. Indeed, if "reason-based thought" is taken to mean "understanding" then it is a fruit of illumination, not contrary to it.

    "Using reason-based thought for God" is consistent with both faith and a lack of faith. It does not necessarily demonstrate a lack of faith, else St. Augustine, St. Thomas, etc. would all be examples of a lack of faith.

    "The assent of faith is 'by no means a blind impulse of the mind'"
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    Nothing I've said is at odds with the official doctrines of the Church. I certainly didn't assert that "reason is the source of faith." I said explicitly that "faith is not achieved through reason." However, it involves the intellect and understanding, and faith is not contrary to work on external proofs.

    This is right in line with the Catechism:

    155 In faith, the human intellect and will co-operate with divine grace: "Believing is an act of the intellect assenting to the divine truth by command of the will moved by God through grace."27

    Faith and understanding

    156 What moves us to believe is not the fact that revealed truths appear as true and intelligible in the light of our natural reason: we believe "because of the authority of God himself who reveals them, who can neither deceive nor be deceived".28 So "that the submission of our faith might nevertheless be in accordance with reason, God willed that external proofs of his Revelation should be joined to the internal helps of the Holy Spirit."29 Thus the miracles of Christ and the saints, prophecies, the Church's growth and holiness, and her fruitfulness and stability "are the most certain signs of divine Revelation, adapted to the intelligence of all"; they are "motives of credibility" (motiva credibilitatis), which show that the assent of faith is "by no means a blind impulse of the mind".30

    157 Faith is certain. It is more certain than all human knowledge because it is founded on the very word of God who cannot lie. To be sure, revealed truths can seem obscure to human reason and experience, but "the certainty that the divine light gives is greater than that which the light of natural reason gives."31 "Ten thousand difficulties do not make one doubt."32

    158 "Faith seeks understanding":33 it is intrinsic to faith that a believer desires to know better the One in whom he has put his faith, and to understand better what He has revealed; a more penetrating knowledge will in turn call forth a greater faith, increasingly set afire by love. the grace of faith opens "the eyes of your hearts"34 to a lively understanding of the contents of Revelation: that is, of the totality of God's plan and the mysteries of faith, of their connection with each other and with Christ, the centre of the revealed mystery. "The same Holy Spirit constantly perfects faith by his gifts, so that Revelation may be more and more profoundly understood."35 In the words of St. Augustine, "I believe, in order to understand; and I understand, the better to believe."36

    159 Faith and science: "Though faith is above reason, there can never be any real discrepancy between faith and reason. Since the same God who reveals mysteries and infuses faith has bestowed the light of reason on the human mind, God cannot deny himself, nor can truth ever contradict truth."37 "Consequently, methodical research in all branches of knowledge, provided it is carried out in a truly scientific manner and does not override moral laws, can never conflict with the faith, because the things of the world and the things of faith derive from the same God. The humble and persevering investigator of the secrets of nature is being led, as it were, by the hand of God in spite of himself, for it is God, the conserver of all things, who made them what they are."38

    This seems clearly at odds with:



    Faith in God requires belief without reason-based thought.

    Particularly if "reason-based thought" is taken to mean understanding tout court, and not merely demonstration. To say that "x is not y" is not to say "x requires the absence of y."
  • the basis of Hume's ethics


    Surely, the will is not intellect. But when you refer to "rational wish", you are referring to a problem, not a conclusion. If all wishes were rational, it would not be possible to act irrationally.

    Rational wish relates to the rational soul, but there are also desires of the sensible and vegetative soul. Still, we can make choices involving the intellect that we might deem "irrational," and this occurs when a person is ignorant about what is truly good or suffers weakness of will. This seems like a sensible explanation to me. Plato, for his part, actually seems to deny the possibility of weakness of will in a number of places, the Protagoras being where he discusses this at greatest length. However, this makes more sense when one considers the extremely high standard Plato has for knowledge, and for knowledge of what is truly best.

    A key difference here is that not all desire comes from the sensible and vegetative souls (or spirited and appetitive parts of the soul for Plato). Of course some desires do come from these. Hume argues from a false dichotomy where it is either one or the other. This makes sense only if the intellect has been reduced to "the means by which one moves from premise to conclusion." In that case, Hume would be correct, reason can never motivate action.

    That's what the classic puzzle about the practical syllogism is about. Is the conclusion words/thoughts? They are not action.

    They are thoughts that imply action. "X is choice-worthy" implies "choose x." This makes perfect sense in the Aristotelian psychology, where the will (natural or absolute will in some terminology) is always directed towards the good in a general sense and requires the intellect to inform it. The intellect informs the will (the conditioned will) vis-à-vis which particular goods to seek. Crucially, the will is also itself an intellectual power, part of the rational soul (Aristotle says "the will is in reason," De Anima III.9; see also Aquinas, Summa Theologiae I, Q. 82).

    Plato, on the other hand, does supply a bridge in his third element, thumos. Thumos differs from appetite in that it is capable of submitting to reason (or better, nous is capable of training thumos). When that doesn't work out, reason is incapable of controlling both "thumos" and appetite. That's why I prefer the translation "emotion" for "thumos", since emotions include a cognitive element and so can be seen to bridge the gap.

    I'm not sure what you mean by "bridge" here. For Plato, the rational part of the soul has its own desires, which can motivate us to action. Indeed, it is reason's desire to know truth, and to know what is truly good, as opposed to what merely appears to be good (appetites) or is said to be good by others (spirited part), that drives his entire psychology. This desire of reason is what allows us to transcend current belief and desire, to become more than what we already are, and so to be self-determining wholes instead of a mere bundle of external causes and warring desires. The spirited part of the soul is not needed as a "bridge" if this is to mean that the rational part of the soul cannot induce us to action.

    However, Plato does indeed say that the spirited part of man is the "natural ally" of the rational part in a number of places (e.g. Republic). In the chariot image of the Phaedrus, the charioteer of reason trains/breaks both horses (the black horse of the appetites is treated much more violently). To know the good, the "whole person" must be turned towards it, including the appetites. The higher part of a person must shape the lower. This certainly carries over in Aristotle, who thinks we can be trained, habituated to, and educated in either virtue or vice.

    That's what the entire model of reflexive freedom and self-determination hinges on: the idea that we can shape our own appetites, that reason can and ought to be the master of the passions and appetites. The movement from vice, to incontinence, to continence, to virtue involves shaping desire. The virtuous person enjoys virtue, as opposed to the merely continent person. This comes out strongly in the Neoplatonic/Aristotelian thought of Dante, who centers the entire Commedia around discourses on choosing good as opposed to bad loves, a process in which the intellect is deeply involved.

    This is obviously very different from modern views where "no desire is bad, only acts," and where the influence of Freud has generally led people to see "repressing" desire as a chief cause of all our woes. And of course there is the view, in Hume and much stronger in Nietzsche, that the ascetic disciplines and efforts to cultivate this older ideal are misguided, which I responded to above.

    Plato, on the other hand, does supply a bridge in his third element, thumos. Thumos differs from appetite in that it is capable of submitting to reason (or better, nous is capable of training thumos). When that doesn't work out, reason is incapable of controlling both "thumos" and appetite. That's why I prefer the translation "emotion" for "thumos", since emotions include a cognitive element and so can be seen to bridge the gap.

    :up: The spirited part of the soul is often later called the "passions," but then in modern discourse the appetites (and the desires of reason) sometimes get collapsed into this label, such that all desire is "a passion." It's also associated with the irascible as opposed to concupiscible appetites.

    Finally, I accept that there are some things that are good for human beings as such. But it doesn't follow that there are not other things that are good for some human beings, but not for others. That even applies to some foods. In addition, there are some foods that become poisons in excessive amounts. When you try to implement the generalization, you very quickly get into trouble.

    Sure, but the demand for "universal maxims" is not so much a centerpiece for pre-Enlightenment ethics. Running is healthy for man, but not for the man with a broken leg. However, to object that "having mercury in kids' drinking water is bad for them," because "what if some deranged tyrant blood tests every kid and enslaves and tortures everyone whose mercury levels are below a certain threshold," would be a facile objection. There, we are just replacing one known ill with a greater one. To know facts about what is truly good for some being does not require a reduction to universal maxims.

    "Goodness" is an extremely general principle, on the Scholastic view among the most general. We should not expect that ethics can be reduced to general maxims or any great deal of precision (Aristotle for his part warns against this at the outset of the Ethics).

    This doesn't save Hume though. For it still seems we can know facts about what is truly most choice-worthy in some situations. However, if Hume is not to be an implicit anti-realist (or at least a skeptic) he is in the position of having to argue that we can know, as a fact, through reason, that "x is truly most choice-worthy," but must then turn around and claim that "x is most choice-worthy" does not ever imply "choose x," which is absurd. It's like claiming that reason cannot pick out the greatest number in a set because, though reason can know the greatest number, it cannot move to pick it out. At best, this is simply confusion about how the will and intellect interact (or, IMO, an implausible psychology where knowing is completely divorced from action).

    More seriously, that argument does enable one to work out what is good for some beings, at least. As it happens, I'm content with that relativistic notion of good, but it may be that you are looking for something higher or deeper, such as "what is good?". I don't have any idea how to answer that question and doubt whether it has an answer. What's worse is that people often think they have an answer to that question when they do not, and that is the source of much evil. (I don't blame reason as such for that. I do blame the difficulty in being sure that one has not made a mistake.)

    My rejoinder would be that paralysis born of the fear of error can often be every bit as damaging as error itself. There is what Hegel, in the preface to the Phenomenology, termed the "fear of error" that becomes "fear of truth."

    Things fall apart; the centre cannot hold;
    Mere anarchy is loosed upon the world,
    The blood-dimmed tide is loosed, and everywhere
    The ceremony of innocence is drowned;
    The best lack all conviction, while the worst
    Are full of passionate intensity.


    - William Butler Yeats, "The Second Coming"
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    "Faith" there is the theological virtue of faith, which does not seem in line with OP's usage.

    This is the section on faith in the Catechism:

    Faith

    1814 Faith is the theological virtue by which we believe in God and believe all that he has said and revealed to us, and that Holy Church proposes for our belief, because he is truth itself. By faith "man freely commits his entire self to God."78 For this reason the believer seeks to know and do God's will. "The righteous shall live by faith." Living faith "work(s) through charity."79

    It depends on what you mean by "through reason." Reason in contemporary thought is often restricted to nothing more than demonstration and computation, the means by which man moves from premises to conclusions. However, reason is sometimes still used for the intellect, i.e., the rational part of the soul or nous. Faith is not achieved through reason, but neither is it unrelated to it.

    The "light of faith," illumination, involves the nous, and the regeneration of the nous. The attainment of understanding and spiritual knowledge (gnosis) is a key element of the spiritual life. However, illumination is not an "achievement" of the nous, but something that happens to the nous, although the cooperation of man is often deeply intertwined in this. Progress towards theosis is generally seen as involving ever greater degrees of understanding, certainly not its absence. As man is deified there is a greater and greater coincidence of the divine will and man's will, joined in love, but this could hardly be a free, self-determining movement if the intellect remained ever blind to the Good sought by the will. Love/Beauty was generally related to both will and intellect (Goodness and Truth), which is why the great text of Orthodox spirituality is titled "The Love of Beauty" (Philokalia).

    Apologetic arguments were generally seen as removing barriers to faith, not instilling it. St. Thomas is, of course, not the official philosopher of the Catholic Church, being one doctor among many, but he is as close as you can get. In the Summa Contra Gentiles he has a chapter titled "Why man's happiness does not consist in the knowledge of God had by demonstrations." Nonetheless, he spent an immense amount of time on such demonstrations because they are not without their purposes and merits.

    Perhaps a disconnect here is the modern tendency in Anglo-American thought to consider knowledge as a type of belief, namely that which is justified and true. This would imply that all knowledge is had through justification via demonstration and inference, moving from premises to conclusions. Yet the Church Fathers, and most theologians following them, see noesis as superior to discursive reasoning (although the latter may sometimes "set the ground" for the former). Illumination involves knowledge, but not knowledge had through discursive argument.

    Even St. Augustine, Calvin's main inspiration, dedicated significant efforts to apologetics and philosophy as well. Faith here is not will uninformed by intellect. Hence the credo: "I believe that I might understand."
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    Yes, that is not the same thing as what OP is saying at all. The Catechism has an entire section on the role of apologetics. To claim the Catholic Church thinks doing apologetics is a sign of lack of faith is frankly absurd and demonstrates a total ignorance of the topic.

    The Church runs a number of discount philosophy programs precisely because they see this as important to spreading and defending the faith.
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    Exactly, the human spirit is the rope between two opposites: faith and reason...

    Though I suppose I could have clarified "absolute" faith. The more you require reason and knowledge for God the less faith you have.

    The more I know/understand that my wife won't cheat on me the less faith I have in her? This seems bizarre to me.

    This would imply that I have more faith in friends I have just met and less in those who have stood by me through thick and thin.
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    It is traditionally held that Paul believed that faith is a gift from God. This scripture is interpreted as saying that:

    Which is not to say "faith is irrational" and "faith does not involve understanding," even from the standpoint of Evangelicals. St. Paul, like St. Luke, doesn't think we do anything entirely on our own; it is God in whom we "live and move and have our being" (Acts 17). Hence, Paul often describes people doing things, and then God doing the same thing to them, in reciprocal pairs.

    The traditional reading of that passage is that "being saved" is "not your doing," not necessarily "having faith" (some in the Gospels come to faith through signs and wonders, yet "blessed are those who have not seen and yet have believed," John 20:29).

    What you are putting forth is a common Evangelical reading, but it isn't a common thread until 1,500+ years later, and is obviously a minority view today. The "faith" in question is also often taken to be the faith of God/Christ as well (e.g., in Orthodox readings). Even if one accepts Sola Fide, this does not necessitate that faith is the result of a sort of supernatural, arational autopilot. Such a view shows up only in the Reformation (since it only makes sense given a modern nature/supernatural distinction) and has always been a minority view.

    The OP has a view of faith that is only consistent with an austere sort of fideism. This has been far from the norm in Church history, although it has always been a minor thread. It's certainly far from the norm in the largest denominations both historically and today. It might be the norm in the most vocal set of Anglo Protestants, but even there I think this is probably not actually the case.

    I will put it this way: apologetics, the reasoned defense of the faith, has been part of Church history and among the main works of many of its great saints and doctors pretty much from the Apostolic Fathers on. If "Pauline Theology" means abandoning a reasoned defense of the faith, then "Pauline Theology" was largely lost to the world for some 1,500 years, until recovered by people who spoke Paul's Greek as a second, dead language, and who apparently understood what he meant much better than native speakers learning from people taught directly by him or his close successors.
  • Logical Arguments for God Show a Lack of Faith; An Actual Factual Categorical Syllogism


    If you have faith that your spouse will not cheat on you, does proper faith require that you not understand why they would not choose to cheat on you?

    Argument, discourse, proof—these are all means of understanding. "Believe that you might understand." "Faith seeking understanding." Etc.

    The assertion that faith precludes understanding, or attempting to understand, seems odd to me.



    What he expressed was Pauline doctrine.

    Certainly not according to most Christians through most of history. Consider: Faith and Reason or Philokalia.

    Faith and reason are like two wings on which the human spirit rises to the contemplation of truth; and God has placed in the human heart a desire to know the truth—in a word, to know himself—so that, by knowing and loving God, men and women may also come to the fullness of truth about themselves (cf. Ex 33:18; Ps 27:8-9; 63:2-3; Jn 14:8; 1 Jn 3:2).

    St. Paul thinks the existence and glory of God is manifest in the signs of creation:

    Romans 1

    18 For the wrath of God is revealed from heaven against all ungodliness and unrighteousness of men, who hold the truth in unrighteousness;

    19 Because that which may be known of God is manifest in them; for God hath shewed it unto them.

    20 For the invisible things of him from the creation of the world are clearly seen, being understood by the things that are made, even his eternal power and Godhead; so that they are without excuse:

    21 Because that, when they knew God, they glorified him not as God, neither were thankful; but became vain in their imaginations, and their foolish heart was darkened.

    22 Professing themselves to be wise, they became fools,
  • the basis of Hume's ethics


    Right, the will is not the intellect, that's what the passage gets at. However, Aristotle has motivating desire coming from appetite, spirit, and rational wish, from the rational soul/intellect. Hume has it coming only from the appetites and passions because the intellect/nous has been deflated to just ratio.

    But perhaps more important is the idea that facts about what is good for beings, their telos, can be reasoned about from the nature of things, because the world isn't value free. "Food is good for man" or "water is good for plants" are accessible to theoretical reason as facts.

    It leads to a radically different ethics. "Reason is, and ought only to be the slave of the passions" is an inversion of the rule of the rational soul, not a souped-up version of it.

    Arguably, the "scientific" view of the world as value-neutral is a specialized stance, adopted in certain contexts, but abandoned completely when we return to ordinary lif

    This is "scientific" only in the sense that proponents of this view tend to want to conflate it with "science" in order to give it legitimacy. And yet science cannot exist without distinctions of value, as between good evidence and bad evidence, good argument and bad argument, science and pseudoscience, good scientific habits and bad ones, etc. Notions that one ought not simply falsify one's data, or argue in bad faith for whatever is expedient is, or turn science into power politics, etc. are of course, value-laden.
  • Quine: Reference and Modality


    Sorry for any misunderstanding. There are, of course, views that deny any such sufficiency, on the grounds that cause is just observed constant conjunction that may vary at any time. As was pointed out, and as I said earlier, one can get at this with accessibility. The benefit I see in conceptualizing modality in terms of potentiality and actuality is that you are then explaining modality in terms of a principle that is already in play and useful throughout metaphysics, philosophy of nature, and epistemology, and because it seems to hew much closer to the necessity involved in common-sense counterfactual reasoning.

    And yes, the example with Washington would not involve essences. Physical necessity involving natures would come into play with something like counterfactual reasoning about growing a bean plant. Watering the plant is a necessary condition for its sprouting and growing. We can well imagine a world where this is not the case, where Jack throws the beans on the ground and a beanstalk reaching into the clouds sprouts up. Yet watering your beans seems to be a necessary prerequisite for their growing in reality.

    Likewise, to St. Thomas' point on Metaphysics IX, if we come across a dead man, we know that there is necessarily some cause of death. It might be foul play or it might have been a heart attack or stroke. However, he won't have died "for no reason at all." This seems trickier to capture in terms of accessibility, but in terms of potentiality and actuality it is just the notion of entelécheia, "staying-at-work-being-itself."



    Indeed, but what is this internal coherence? It's asserted but not explained, other than that it's needed to say this object is this and not that.

    But it isn't? One might say the Physics, and the 2,000+ years of commentaries and extensions on it, are flawed, but they certainly present both explanation and argument.
  • Quine: Reference and Modality


    Well, you could follow Quine and try to get rid of proper names and say that: "there is some X that gandalfizes." Spade's article, which is quite good, points out some of the ways in which Quine's approach is more similar to Platonism. The variable, being a sort of bare particular (substratum, bearer of haecceity) sort of takes on the role of matter (the chora), with properties fulfilling the role of forms.

    Sheer "dubbing" runs into the absurdities of the "very same Socrates" who is alternatively Socrates, a fish, a coffee mug, Plato, a patch on my tire, or Donald Trump, in which case we might be perplexed as to how these can ever be "the very same" individual.

    The problem with the broadly "Platonic" strategy is that it does indeed have difficulty explaining how particulars exist and if the substratum lying beneath them to which properties attach is either one or many. This is complicated even more by certain empiricist commitments that would seem to make proposing an unobservable, propertyless substratum untenable. Without this substratum though, you often end up with an ontology that supposes a sort of "soup" prior to cognition, with the existence of all "things" being the contingent, accidental creation of the mind (e.g. The Problem of the Many, the problems of ordinary objects, etc.).

    Hence, the Aristotelian idea of particulars as more than bundles of properties, as possessing an internal principle of intelligibility, self-determination, and unity (although they are not wholly self-subsistent).

    The problems of broadly Platonist approaches are perhaps less acute in philosophies with a notion of "vertical reality" (described quite well in Robert M. Wallace's books on Plato and Hegel). They seem particularly acute in physicalist ontologies that want to be "flat."

    One solution is essentially hyper voluntarist theology with man swapped in for God. So, instead of "a deer is whatever God says it is," we get "a deer is whatever man says it is."
  • Quine: Reference and Modality


    You might capture this in terms of accessibility, yes. The question then is if we might want some notion of physical necessity (i.e., related to changing, mobile being) as an explanatory notion.

    My last response to J above points out part of the case for this. If "George Washington was the first US President" is true, and it is not possible for it to become false, it is in a sense necessary. However, it is clearly not necessary in terms of being true de re. Being president is a relation. And it is not true in every imaginable possible world.
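
    The de dicto / de re contrast being leaned on here can be written out explicitly. This is a standard textbook gloss, not anything specific to the post:

    ```latex
    % de dicto: the necessity attaches to the whole proposition
    \[
    \Box\,(\text{the first US President was the first US President})
    \quad\text{-- true, but trivial}
    \]
    % de re: the necessity attaches to the individual, however designated
    \[
    \exists x\,\big(x = \text{Washington} \;\wedge\; \Box\,\mathrm{FirstPresident}(x)\big)
    \quad\text{-- false: holding an office is a contingent relation}
    \]
    ```

    "Cannot now become false," a necessity of the past once the fact obtains, is then a third thing again, distinct from both readings.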

    In terms of essences, the articles Leo posted are quite good, particularly the ones by Spade and Klima.
  • Quine: Reference and Modality


    But if something could stop the sun from rising -- or, in the case of the rock and window, prevent the rock from breaking the window -- why would we call the event "necessary"? You can of course stipulate that "necessity" can refer to something that is overwhelmingly likely, such as the sun rising tomorrow, but I can only reply that this isn't what discussions about necessity are usually about.

    That wouldn't be it though. Necessity is not just a case of high probability. Obviously, many things might stop a ball that has been thrown at a window from breaking the window. It could hit a bird, like that time Randy Johnson accidentally killed a pigeon with a fastball. The point is rather that if none of those things happen, and the ball goes through the window, then the window will necessarily break.

    It might be easier to think in terms of the "necessary versus sufficient" conditions of counterfactual reasoning. If a plant is to grow, it is necessary that it receive water. If causes are sufficient to bring about a seed's germination and growth, it will necessarily occur.

    Or, you could consider St. Thomas' framing in terms of act and potency in the commentary on Book IX of the Metaphysics. Here is an example he uses: a human body will naturally tend towards health (and homeostasis) if nothing hinders it. Medicine is thus in some sense primarily the removal of external impediments of the movement from potency to act. This is a case of necessity involving natures.

    This is what I meant by ceteris paribus conditions. Sure, if certain conditions hold steady, then certain results will occur. This is the same as saying that in some possible worlds the sun will rise, while in others it may not -- which is hardly "necessity". This has nothing to do with denying that the past determines the future; if some unlikely intervening event occurs, that will be the past in that possible world.

    Yes, the synchronic view of possible worlds is different. One can of course collapse any distinction between metaphysical, existential, physical, etc. necessity and try to explain it solely in terms of frequency across synchronic possible worlds. In which case "necessary" only applies to what is true in all possible worlds (and hence whatever is possibly necessary is necessary tout court). This seems to me like an impoverishment of concepts though.
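
    Incidentally, the "whatever is possibly necessary is necessary tout court" collapse mentioned here is the characteristic mark of S5. A sketch in Kripke-frame terms (my own gloss, in standard notation):

    ```latex
    % In S5 the accessibility relation is an equivalence relation, so every
    % world in a cluster "sees" exactly the same worlds; modal status cannot
    % vary from world to world:
    \[
    \Diamond\Box p \rightarrow \Box p
    \]
    % Sketch: if \Box p holds at some world w' accessible from w, then p holds
    % at all worlds accessible from w'; by symmetry and transitivity these are
    % just the worlds accessible from w, so \Box p holds at w too.
    % Restricting accessibility instead (e.g., to worlds sharing our physical
    % laws) yields weaker systems in which physical necessity does not
    % collapse into necessity simpliciter.
    ```

    That restriction is one way to keep the distinct grades of necessity (metaphysical, physical, existential) that the synchronic, all-worlds picture flattens out.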

    Even phrasing it this way seems contrary to the idea of what "necessary" is supposed to mean, but let's grant it.

    Sure, if what "necessary" is supposed to mean is just whatever a narrow clique of Anglo-American Baby Boomers decided it must mean in their infallible wisdom :grin: (I am being facetious here, to some degree, since Leibniz did have similar notions earlier). Avicenna or Al Farabi would disagree.

    They're different views of modality; they will not agree in every respect.

    You want to say that, in our world, the sun rising tomorrow is physically necessary.

    I never said that though. I said that if conditions are sufficient to bring about the sun's rising then it will necessarily rise, and that this can be explained in terms of physical necessity in that things necessarily act according to their nature. It has nothing to do with things that might not happen being necessary. Likewise, if the sun rose, it is necessarily true that it rose.

    "The former case" refers to "9 is necessarily greater than 7", yes? Are you positing "7" as being in the present, and "9" in the future? And that 7 thus causes 9? I must not be understanding your meaning here.

    No, sorry I was referring to the sun rising. The reference to popular theories in physics might make more sense now :rofl: .

    Certainly. As Kripke helps us understand, this could become false in two different ways. 1) We might discover that someone else briefly held that office, but this fact was suppressed for conspiratorial purposes. 2) We might discover that the man who first held the office was not the man we designate as "George Washington". It turns out that the real George Washington was murdered as a young man, and replaced with an impostor.

    This just seems bizarre to me. A lie is true if enough people believe it and then becomes false when people discover it is false?

    A misattribution is correct until it is corrected?

    I don't recall Kripke ever advancing such a claim, but it would essentially amount to defaulting on truth being anything other than the dominant current opinion. "Adolf Hitler was the first US President" would "become true" if enough people thought it was true, which seems to veer towards a sort of Protagorean relativism.

    These are absolutely ridiculous suppositions. But something doesn't become necessary just because the possible counterexamples are ridiculous. Necessity is supposed to mean that there are no counter-examples -- that it is not possible for the truth to be other than it is.

    I think you're missing the point by focusing on epistemic issues. Discovering that something you thought was true is not true is not the same thing as facts about past events becoming true or false. The latter implies that becoming occurs in the past, not just in the present, which seems like a contradiction in terms.

    Suppose it is indeed true that George Washington was the first president. There is no conspiracy, no misattribution, no epistemic issue. This is true and we know it. Is it possible for this to become false in the future? Might it one day be true that Adolf Hitler was in fact the first US President?

    The sun rose yesterday. Is it possible that it did not rise yesterday, that the proposition "the sun rose yesterday" should become false by sunset today?

    If it is not possible, then it is in some sense necessary. If you just look at frequency over possible worlds, where "possible worlds" gets loosely imagined as "whatever we can imagine" then it will be impossible to identify this sort of necessity though. But what then, are all facts about the past possibly subject to change in the future?

    Why would we think that? It seems obviously false. Hence, a notion of physical and accidental necessity is needed.

    Is there a "possible world" where the sun didn't rise yesterday and we just think it did? Only for the radical skeptics.
  • Quine: Reference and Modality


    Those all seem like physical necessity to me. Some things can pass through others without breaking them. For instance, a beam of light from a flashlight will pass through a window just fine, and a rock could pass through a layer of water without "breaking it" (it might break the surface tension, but this would reform).

    The fact that a rock's passage through a window entails it breaking has to do with what rocks, windows, and the surrounding cosmos are. Rocks and windows, being more or less heaps of external causes, wouldn't be the sorts of things that we would tend to think of as possessing self-determining natures (although possession of a nature/essence is also not a binary).

    "The sun must necessarily rise tomorrow"?

    Why not exactly? To be sure, there might conceivably be something that could stop the sun from rising. A large exo-planet could utterly destroy the Earth, leaving nothing for the sun to rise on, I suppose. Perhaps similar cosmic-scale events could occur as well. But barring any of these, the sun will rise. To deny this would be to deny that the past determines the future, and it's hard to think of any relation for which we have more evidence. Yet that's part of the nature of physical necessity: what comes before dictates what comes after. Indeed, even when we speak of non-temporal dependence and causality we nonetheless use the language of temporal ordering, e.g. "prior," "consequent."

    And, given how we learn about the world, how we learn to speak and reason, etc., it seems fair to assume we learn about consequence and entailment through the senses. This sort of relation is abstracted from the senses.



    These necessities, if that is indeed what they are, seem very different from either "9 is necessarily greater than 7" or "Water is necessarily composed of H2O". Why would all three be described as "necessary"?

    Right, hence the distinctions in terms of the modes of predication or something like Avicenna's necessity per aliud versus per se.

    The latter two examples involve what is true of multitudes and water intrinsically. The former case is necessary in the sense that the present appears, in some sense, to contain the future. Causes contain their effects in a way akin, perhaps, to how computational outputs are contained in the combination of input and function. It has to do with how the cosmos is, as a whole.

    On the pancomputationalist view, which is fairly popular in physics, past physical states determine future ones (or a range of them) in exactly the way the output of some algorithm is determined (and necessary) given some input. Perhaps this is so, but the philosophy of information has so many open questions that it's hard to know what this really means, and the use of digital computers as the model for being seems pretty suspect to me. Back when the steam engine was new, the universe was conceived of as a great machine and the human body as a great engine, and while this got something right, it left out quite a bit.
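    The analogy can be sketched with a toy deterministic dynamics (a hypothetical illustration; the update rule and initial state are invented): given the present state and a fixed transition rule, the future state is fully determined, just as an algorithm's output is determined by its input.

```python
# Toy deterministic dynamics: a 1-D binary cellular automaton (rule 110).
# The rule and initial state are chosen arbitrarily for illustration; the
# point is only that the "output" (future state) is contained in the
# "input" (present state) plus the fixed update rule.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Advance the whole state one tick; wraps around at the edges."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

def run(state, ticks):
    """Iterate the rule: the future is a function of the present alone."""
    for _ in range(ticks):
        state = step(state)
    return state

initial = [0, 0, 0, 1, 0, 0, 0, 0]

# Determinism: rerunning from the same present always yields the same future.
assert run(initial, 5) == run(initial, 5)
```

    Whether physical states really stand to their successors as inputs stand to algorithmic outputs is, as noted above, an open question; the sketch only shows what the claimed relation would look like.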

    You could consider "George Washington was the first President of the United States." Is it possible for this to become false? If not, then it seems it is in some sense necessary, although it also seems to be something that was contingent in the past. A way this might be explained is to say that it is not possible for any potency to have both come into act and not come into act. So if Washington was the first president (and he was) this is necessary de dicto (although not de re, since president is not predicated of Washington per se).

Count Timothy von Icarus
