• On religion and suffering


    Am I? What language was that quote originally written in? If one is to be a literalist about this, then one has to take into consideration the fact that the passage in question was not really written in English. And whatever word was originally used there, it most certainly was not etymologically related to the Latin word Ratio.

    Yeah.

    It's in Greek, like all of the NT: λογίζομαι (logizomai), here in the participle form logisamenos, a middle-voice verb derived from logos, word/reason, from which we get "logic." It is used throughout the period to denote reasoning, philosophizing, or calculating.

    Mystics would disagree

    Which ones? Anthologies of Christian mystics and spiritual guides for monks like the Philokalia are packed with the essence/energies distinction. Dionysius the Areopagite, St. Gregory Palamas, St. Bonaventure, etc.

    The author from which we get the term "mystical theology" is famous for clarifying this distinction.

    And mystikos, and the mystics in the Christian tradition, tended to be heavily involved in anagogic readings of Scripture. The term first refers to hidden/secret meanings, which is not exactly modern literalism (itself very much a modern phenomenon).

    It only places the word of the Bible at odds with the word of science.

    No, it also places it at odds with places where different Biblical authors interpret the Bible allegorically, including Christ.

    But this has nothing to do with Kierkegaard's thought at all. Kierkegaard was not a fundamentalist.

    "Yes" to both questions.

    Well now I can't take you seriously. :rofl:
  • p and "I think p"


    Sure, because of the sheer number of scribbles and rules for putting them together in strings, not because of some special power the scribbles have apart from representing things that are not scribbles. When communicating specifics, do the scribbles invoke more scribbles in your mind, or things that are not just more scribbles, but the things the scribbles represent? To represent specifics you must already be able to discern the specifics the scribbles represent. Do the names of new colors for crayons create those colors, or do they refer to colors that we can already discern?

    This seems to be a common issue. A conflation of sign vehicles and signified, and of sense/interpretant and referent.

    My hunch is that the dominance of the computational theory of mind and of computational theories of reason/rationality is sort of the culprit here, since they can be taken to imply that everything, all of consciousness, is really just symbols and rules for shuffling them. Logic gets demoted to computation in this way too, and on some views the whole of physics as well.

    I'm not saying these theories don't get something right, but they seem inadequate, and might be misleading when it comes to language, meaning, perception, etc. Nor does it seem they can all be right: if pancomputationalism in physics is right, then saying the brain works by being a computer, as CTM does, explains nothing, because everything "is a computer."

    Of course, when steam engines were the hot new technology, the universe and the body were said to work like a great engine, and while this wasn't entirely wrong, it also doesn't seem to have been particularly accurate.
  • On religion and suffering


    I didn’t say a coastline or an ant didn’t exist until painted.

    No, but you suggested a coastline does not exist separate from the act of measuring it, and then used painting as a follow up example, and that one can "imagine" that a coastline exists independent of our concepts, but that it doesn't exist separate from our interactions and anticipations vis-a-vis it, no? It only has a "dependent independence?" Hence my confusion. Is it the coastline or the "notion" we're talking about?

    The word coastline implies a particular sense of meaning, and there are as many senses of meaning for it as there are contexts of use.

    The word "coastline" refers to coastlines. You are collapsing sense and reference here, which I find confusing. If "coastline," "tiger," and the like only implied/referred to our own sense of meaning then it would be impossible to ever speak of anything but our own perceptions and judgements. But we make the distinction between the actual things we are speaking of and our thoughts, perceptions, and speech about them all the time. This distinction is normally essential for explaining the phenomenon of error.

    Is this distinction itself an error?

    Animals who interact with a coastline produce their own senses of meaning for it, even though they don’t perceive it in terms of verbal concepts.

    Sure, but North America has one coastline, not one for every species that experiences it.

    If we only experience and know concepts and senses, our own "anticipations," how is this not recreating the very representationalism you were complaining about? It strikes me as very similar, just using different language. And representationalists never denied that we interact with things, or come to know things through our interactions with them. They also don't want to affirm the existence of any independent things (or at least anything about them, save your bare "placeholder"), since all we have access to are "mental representations." Yet as far as I can see your "notions/concepts" and "anticipations" seem to be filling the exact same role as "mental representations" here, and some sort of diffuse soup of "constraints" that is only known through concepts/notions looks to be something like a rebranded noumena.



    Yes, this is not how I would phrase the issue myself, but I "get your point", so to speak. What I would say, is that if the catholicity of reasons exists (and if catholicity simpliciter exists), then it pre-dates the foundation of the Catholic church. Catholicity, if it exists, existed before the Catholic church existed. That's what I would say. And if this is so, then it follows that the Catholic church does not, and cannot, have a monopoly on catholicity. Which is why one can be a catholic outside the Catholic church. Agree or disagree? I feel like you disagree with me on this specific point, among others

    I'm not sure what it means to "be a catholic." To affirm the catholicity of the Church? Then sure. I didn't intend to suggest anything about the Roman Catholic Church. I'm part of an Orthodox church, but we still recite the Creed, "one holy, catholic, and apostolic."

    Same with "catholicity simpliciter." I'm not sure what you mean. It's a property; I don't think it can "exist simpliciter."


    Yes, it is. At the end of the day, it is

    I just don't see it. Or rather, your use of "blind faith" is perhaps anachronistic. I have a friend who is a very skilled mechanic. I know he's good with cars; I've seen the cars he's rebuilt. If I trust his authority on automobiles, I don't see how this is necessarily "blind."

    For example, I have blind faith in my feet, in the sense that I completely trust them when I absent-mindedly step up and walk towards the kitchen.

    Presumably you have a lifetime of experience walking. Again, I am not seeing how this is blind. This is like saying it's "blind faith" to assume that you'll get wet when you jump in a pool.

    I feel like that's not sound reasoning on your part. It seems like you are appealing to the majority. Kierkegaard is in the minority here, sure. But that doesn't mean that he's necessarily wrong. Majorities can make mistakes, especially interpretative mistakes. That's why there is a literal use of the language to begin with: so that there are no interpretative mistakes, you just read what it says.

    Where does Kierkegaard ever say Abraham isn't being tested? I don't think he does.

    In any case, this view is right there in Scripture; you can't appeal to literalism and deny the interpretation.

    Hebrews 11:17 By faith Abraham, when he was tested, offered up Isaac. He had received the promises, yet he was ready to offer up his only son. 11:18 God had told him, “Through Isaac descendants will carry on your name,” 11:19 and he reasoned that God could even raise him from the dead..."

    If you're committed to the literalist view you're committed to Abraham reasoning in this case.

    then I would ask: What is God testing here in the first place, if not Abraham's faith?

    Sure, it's a test of faith. Even if it was a test of wholly irrational faith, that wouldn't make the test or the person giving the test irrational. The test is not given "for no reason at all."

    And we might distinguish between "faith in," and "faith that." I hardly see how it is irrational and "blind" to ever have faith in anyone. I have faith in some of my friends because they are good friends, good people, and have always supported me. I fail to see how that is irrational. But the same is true for God.

    Anyhow, fideism is not the view that faith is important, or even most important (although St. Paul puts love above faith). Lots of people affirm that. It's the view that religious beliefs are entirely based on faith alone.

    not the one who tries to rationalize what God is,

    But that isn't what most theology does. One cannot know God's essence, only His energies. That's all over the Church Fathers. One can only approach the divine essence through apophatic negation, the via negativa, or analogy. Which is what Kierkegaard also ends up affirming, he basically works himself painfully towards Dionysius (painfully because his blinders stop him from referencing all the relevant thought here).

    And that is exactly the sort of discussion that I point to, when I say that things cannot be metaphors and figurative language all the way down.

    Were the followers who abandoned Christ in the right, when they thought he was advocating cannibalism after he told them they must eat his flesh and drink his blood (John 6)? Why does Christ himself primarily teach in parables and allegory?

    Or did Christ come to save livestock (the lost sheep of Israel) and will the Judgement really be of actual sheep and goats? Is St. Paul breaking the rule of faith when he interprets Genesis allegorically in Galatians 4?

    "The Spirit gives life; the flesh counts for nothing" (John 6:63).

    "He has made us competent as ministers of a new covenant—not of the letter but of the Spirit; for the letter kills, but the Spirit gives life" (II Corinthians 3:6).

    The Gospels are full of references to Christ fulfilling OT prophecies, often in counterintuitive ways that would be completely lost in a literalist reading. So, to at least some extent, a hyper-literalist reading is self-refuting.

    Then why should anyone listen to Christ instead of Epicurus? For Epicurus also had a concept of friendship.

    On the Christian account, because those who have had faith come to understand, as the Apostles did, that Christ is God and Epicurus, if Christians are correct, is badly deluded.
  • Mathematical platonism
    I heard an argument related to this recently.

    Bertrand Russell says something like: "mathematics is the field where we believe we know things most certainly, and yet no one knows what mathematics is about." By contrast, earlier mathematicians often did think their subject had a clear subject matter. Were they simply mistaken? Naive?

    Here is the argument: the difference is one of equivocation. Barry Mazur has this really nice article called "When is one thing equal to some other thing?": https://people.math.osu.edu/cogdell.1/6112-Mazur-www.pdf

    If you read older mathematics, though, it might seem like it could be talking about a different subject, because there is often a strong distinction between magnitude and multitude, and both are primarily derivative of/abstracted from things. That is, there can be a multitude of things, e.g. 6 cats, or magnitudes related to things, e.g. a wood board that is twice as long as another. Mathematics is the consideration of the properties of magnitude and multitude in the absence of any other properties. For instance, a ratio would be understood specifically as a relationship of magnitude, never as a number.

    Because of this, metaphysics and the philosophy of perception/epistemology end up bearing a closer relationship to mathematics.

    Anyhow, at first glance, if one accepts this view of mathematics as the "study of magnitude and multitude," it seems like it may make various flavors of realism more plausible (immanent realism or platonism).
  • What does Quine mean by Inscrutability of Reference


    Presumably if it specifies the things in virtue of which all tigers are tigers, while not having anything that isn't a tiger fall under the definition. "Animal" for instance, seems essential. DNA, by contrast, won't work (or won't work alone) because a tiger liver or tiger blood has tiger DNA, but is not a tiger.

    How this is accomplished might vary. Aristotle, for instance, allows for many types of definition. One way, given certain metaphysical assumptions, would be through a substance's genus and specific difference. Another way, provided one assumes that reality is adequately mathematically describable, might be to look at things as information-theoretic structures and identify all the morphisms shared by some type of thing. This is impossible in practice though. The other difficulty here is that the things we might think most properly have essences are living things, and they have natures precisely in that they are goal-directed, but how to get goal-directedness, let alone intentionality, from information is anyone's guess (if it can be done). So we might well be missing a key component. Likewise, any phenomenological aspect of something seems difficult to account for in this way.

    I'd argue that a key part of what makes discrete things discrete is their resistance to divisibility (unity) and capacity for self-organization. And while this might be greatest in living things, it also shows up in atoms and molecules. These are divisible, but it normally isn't easy to divide them, which is part of why they are often offered up as the paradigmatic "natural kinds" outside the example of living things (although stars, planets, galaxies, etc. might be similar in this respect).

    The periodic table furnishes a fine example. "Atom with 79 protons" seems to cover gold pretty well. It also seems possible to give a definition of stars such that it doesn't allow in anything that isn't a star, nor exclude any stars. But it's also important to note that a definition doesn't need to be something like a set or some sort of mathematical description. Whether such things would be appropriate depends, I suppose, on metaphysical assumptions. I'd argue that, at least given current tools, these methods fail because they cannot capture the quiddity of things and so are a poor match for defining the "what-it-is-to-be" (essence) of things.
  • On religion and suffering


    Is this what you call "the catholicity of reason"?

    No, perhaps I should have specified since the word is uncommon. I mean it in the original sense, as in "all-embracing and unified, one." This is the sense in which the Orthodox and many Protestants still affirm: "I believe in one, holy, catholic and apostolic Church," at every service, when they recite the Nicene or Apostles' creed.

    I would say: there are many truths, they are not sui generis, and they are not potentially contradicting truths. In Henological terms: There are Many Truths, and none of them contradict each other. Contradictions only arise in Opinion (Doxa), not in Episteme.

    :up:

    The catholicity of reason is just this, plus the assumption that this applies to the logos by which truth is known (although some might want to take the further step to claiming that the two are deeply related).

    Kierkegaard also pointed out (and rightly so) that God gave Abraham a fideist order when he ordered him to sacrifice his son. Do you disagree with that?

    Yes, particularly your earlier point that the order itself was "irrational." That is not how the story has generally been read, either by the Patristics, later theologians, or Jews who say that God has a purpose in the command, or rather several. The most common purpose offered up is to test Abraham (e.g. St. Athanasius). Also popular is the idea that God is forcing Abraham to test Him, in a continuation of Abraham's pleading/testing of God re sparing any righteous souls in Sodom. Further, God's purposes in the Bible are not taken to be solely, or even mostly about those immediately involved in many cases. The Patristics tend to see Isaac as a type prefiguring Christ. That is, the purpose is also prophetic, and this includes God substituting the atoning sacrifice and sparing the children of men.

    Here is a summary of early Christian accounts for instance:

    The account is interpreted as the drama of faith as opposed to the natural affections, a drama that applies to the reader (Origen). Not only is Isaac a figure of Christ in the Spirit, but also the ram symbolizes Christ in the flesh (Origen, Ambrose). Even Chrysostom abandons his customary moralizing and employs a typological interpretation. That Isaac was a type and not the reality is seen in the fact that he was not killed (Caesarius of Arles). Readers are also invited to interpret the story spiritually and apply it to themselves, so as to beget a son such as Isaac in themselves (Origen).

    Second, is Abraham blind at this point? God has been very active in his life, working wonders for his benefit. He only has a son because God worked a miracle that allowed his post-menopausal wife to bear him a son. He has seen God destroy cities. Does he have any reason to think that he can defend his son from God if God wants Isaac dead? Does he have any reason to think God is out to play a trick on him?

    Is all deference to authority "blind faith," or is there proper deference to authority that is rational? We would balk if a random man on the street says he wants to crack open our child's skull and remove part of their brain, but might readily accept this if a neurosurgeon recommends it, despite having no relevant expertise in the matter ourselves. And yet sometimes doctors perform unnecessary, dangerous procedures to make money, and aren't acting for our child's benefit. Is God less trustworthy than a board certified physician though?

    We might also consider that not all the acts of the Biblical heroes are supposed to be good. Jacob is a deceiver. David is an adulterer who kills Bathsheba's husband to cover up his adultery, etc.

    Things cannot be poetry and figurative language all the way down.

    But it isn't, it's allegorical and anagogic.

    Why? That's exactly what it is. Believe, so that you might understand. It's a conditional statement: if P, then Q. In this case, the antecedent is Believe, just that, Believe, and that is 100% fideist. It's absolute blind faith, without an ounce of reason to it.

    St. Augustine says something like this in many places. The most famous quotation is from the Tractate on John (it is a paraphrase of Isaiah), however he makes the case for it more fully in Contra Academicos. There he is arguing against radical skepticism, the doubt of all things.

    For instance, doubting the senses, and doubting that we can learn things from them. One must first believe in the reliability of the senses, at least tacitly, in order to take empirical inquiry seriously. But is trusting that what you see in front of you "blind" faith?

    And the point is that one believes in order to understand, whereas fideism tends towards "you cannot understand, but you must have faith and obey." Yet Christ tells the Apostles: "No longer do I call you slaves, for the slave does not know what his master is doing; but I have called you friends, because all things that I have heard from My Father I have made known to you," (John 15:15) and "the Lord would speak to Moses face to face, as one speaks to a friend," Exodus 33:11.

    We could also consider here how Plato has it that one must "turn the entire body" towards the Good before one can know it. The turning must come before the knowing, but it does not exclude the knowing.

    Can you explain it to me in simpler terms, please?

    It just means that "I believe because it is absurd" is a later invention loosely based on Tertullian, and that he has been rather selectively read at times.
  • What does Quine mean by Inscrutability of Reference


    How would we know when one was correct?

    Well, suppose someone gave a definition of "tiger" as: "a large purple fish with green leaves, a tap root, and horns." Clearly, this is off the mark and we can do better or worse (although in this case, not much worse).

    Anyhow, to return to the difference between words/signs and what they signify, we could consider "Samuel Clemens" and "Mark Twain," which would seem to pick out the same person, having the same referent, such that everything that is true of one is true of the other.

    Yet:
    "Samuel Clemens's pen name was Mark Twain"
    Cannot be swapped with:
    "Mark Twain's pen name was Samuel Clemens"
    And remain true.

    Likewise: "Mark Twain topped the best seller list for much of the late-19th century" is true. Swap in "Samuel Clemens" and we might still consider it true, but in another sense it isn't, since one could search the lists and find nary a mention of "Samuel Clemens."

    It's obvious that people aren't their names. "Samuel Clemens" is 13 letters long, but the man is not composed of letters or syllables; nor is "Mark Twain" 13 letters long. And obviously we might replicate some of this with man and homo sapiens, etc.

    Sense versus reference. But in natural language, reference is often ambiguous, and for abstractions like, say, "justice," some will claim that there either is no reference or that the reference and sense collapse. Whereas a realist would presumably claim that there is a referent, be it an "abstract object/form" or else a principle. I would argue Socrates generally wants to get to the reference of "piety," "justice," etc., and is dealing with something like muddled senses/intentions. Thrasymachus wants to refer to justice, but what he means by "justice" isn't justice, or is a cloudy, inadequate sense of justice.

    Or, to introduce other terms, neo-scholastics might grant Hegel and co. that something like "concepts" evolve. But they instead like to say our "intentions" evolve, hopefully becoming more clear. Or as Sokolowski puts it, we "more fully grasp the intelligibility of things through the course of the 'Human Conversation.'" But for them, the "concept" stays the same, because we're thinking about the same thing. For instance, when we say "water is H2O," we still are referring to the same water our cave man ancestors knew quite well.

    With a principle, we might have it unequally realized in a diverse multitude, as with beauty, goodness, justice, etc. And we might want to predicate this term analogously of different things, and I guess that's where the use of modern terminology breaks down because analogy has proven difficult to formalize (but also began to be neglected on primarily theological grounds originally).

    So, if the Good is "that to which all things aim," and what is "choiceworthy," it might still be the case that things are good in very different ways, as signs of goodness, symptoms of goodness, etc. And obviously goodness will be contextual. I think St. Thomas uses the example of "walking being healthy for man," (and so presumably good for man), but obviously not if you have a broken ankle. Yet it is good to walk on a broken ankle if you need to escape an artillery barrage.

    Anyhow, confusingly, I think Plato (or at least Platonists) would often want to have it that there is one referent, a Good, referred to in all goodness, even as respects what merely appears good, yet also that there are many goods. There is "the human good," and "finite goods," plural, and these can also be referents in some sense. I don't think the idea of unequal "possession," "participation," or "virtual quantity," plays all that nice with a lot of modern terminology here. Plato's analogy of the sun might be best. Everything is light in virtue of the sun's light, but they all reflect light differently and in doing so reflect their own image, and they really do have their own image, but it's also only in virtue of the sun that they can possess and reflect this image.
  • On religion and suffering


    The word ‘bus’ implies a system of interactions with the object ‘bus’ based on our understanding of what it is and what it does. Someone who doesn’t know about automobiles or even carriages would see it as a very different kind of object and interact with it in different ways as a result. If you want to see how different people interact differently with the same coastline, ask them to sit down and paint a painting of the scene as accurately as possible. There will be similarities among the paintings, but none will look identical. This is not just due to different skill levels but to the fact that each person’s procedure for measuring and depicting it makes use of a slightly different process. Objective space is derivative of our subjective determination of space.

    First, a bus is a poor example because it is an artifact.

    Second, your claim is that the coastline changes because different people paint or think of it differently, and that it doesn't exist until painted, mapped, etc. Nothing you've said supports this claim; it doesn't follow from the premises. No one disagrees that different people will paint a coastline differently, or that birds interacted with coastlines before men did. However, most would disagree that the coastline didn't exist until it was painted. Again, you seem to need a premise like: "things are entirely defined by their relations and all relations and properties are essential." But I don't see why anyone would agree to premises like this, because it implies things like: "you change when someone lights a picture of you on fire," and "ants didn't exist until people developed an abstraction of 'ant.'"

    The ordering seems bizarre here too. Wouldn't it make more sense that people mapped a coastline or developed an abstraction of "ants" because they encountered coastlines and ants?


    Let me give an example of why the idea that concrete particulars change when people's ideas about them change is ridiculous. Suppose that in the far future people have a very poor understanding of our epoch of history. Due to a loss of sources, they have come to conflate Adolf Hitler and George Washington. They know of the USA, and Germany, and they think America was founded by Hitler after he fled Germany after losing World War II and ordering the Holocaust. Is it now true that: "Adolf Hitler, perpetrator of the Holocaust, was the first President of the United States?"

    But that's a patently absurd commitment, as is "mosquitos didn't exist until man experienced them." We have plenty of evidence to suggest mosquitos were around and interacting with things long before man.

    It depends on the system of convictions that underlie your beliefs concerning what is good and what is bad for a baby, just as what constitutes genital mutilation depends on such guiding assumptions. Archeologists found tiny tools and weapons dating back 1700 years.

    A completely facile counterexample: toys are not real weapons. People today let toddlers have toy guns and swords too. They might even let them play with an unloaded gun. They don't load a revolver, cock it, and then throw it in a crib with a nine-month-old unless they're trying to kill their child (or play a unique form of Russian roulette). Not to mention these are clearly for older children, who might very well be given duller knives to help prepare food even today. An infant isn't honing any skills besides basic grasping. This is another obvious constraint: you cannot teach a three-month-old to ride a bike or dress a deer.

    Circumcision, scarification, tattooing, foot binding, etc. all have reasons, even if they might be abhorrent ones. Letting a child randomly maim themselves by accident doesn't fit the mold. And at any rate, absolutely none of that matters, because it's still the case that one wouldn't do it unless one wanted their child to accidentally slash themselves, which is the constraint in question. If one wants to give a baby a toy they will actually enjoy, a razor-sharp knife will never be appropriate.






    If one thinks a brain is a physical organ that generates perceptual events, then it has to be explained how it is possible that these events can be about objects in the world.

    But a naturalist with a proper understanding of perception wouldn't say that. Brains don't generate experiences of objects by themselves. This is what I mean by inappropriate decomposition and reductionism. Take a brain out of a body and it won't be experiencing anything. Put a body in a vacuum and what you'll have is a corpse, not experiences. It's the same thing if you put a body on the surface of a star or the bottom of the sea. Nothing looks like anything in a dark room, or in a room with no oxygen, etc.

    In physicalist explanations of perception the objects perceived and the environment are all essential.

    But I said it is far worse. If causality cannot deliver "knowledge about" this means ALL that stands before me as a knowledge claim--explicit or implicit, a ready to hand pragmatic claim or a presence at hand (oh look, there is a cat) claim, or just the general implicit "claims" of familiarity as one walks down the street---requires something entirely other than causality to explain how it is possible.

    Ok, but you haven't, as far as I can tell, done anything to justify the claim that we cannot know things through their causes or effects, you've just stated it repeatedly. Prima facie, this claim seems wrong; effects are signs of their causes. Smoke, for instance, is a natural sign of combustion.

    If effects didn't tell us anything about their causes, or causes about their effects, then the main methods of the empirical sciences should be useless. But they aren't. Likewise, if pouring water into my gas tank caused my car to die, it seems that I can learn something about my car from this.

    But the above seems plainly false, for the only way for an exemplification to exemplify is to assume a particular causal series that demonstrates this. This is rare, and when it comes to a causal matrix of neurons, synapses, and axonal connectivity, well: my cat in no way at all "is exemplified" by this.

    I'm sorry, I couldn't parse this. Nothing can exemplify anything?

    I couldn't really understand the rest of the post either.
  • p and "I think p"


    This must come up for translators of epics all the time as a more practical concern. They all make a habit of referring to people, places, etc. by circuitous names: "Son of...," "he who was last upon the battlements ere the Achaeans breached the gates of fair Ilium," "that long-bearded warrior, fiercest among the Franks," etc., where the phrase is primarily serving as a name.

    Virgil identifies himself initially with this whopper:


    ‘Sub Julio’ was I born, though it was late,
    And lived at Rome under the good Augustus,
    During the time of false and lying gods.

    A poet was I, and I sang that just
    Son of Anchises, who came forth from Troy,
    After that Ilion the superb was burned.


    Is the last tercet equivalent to "I am the poet who wrote the Aeneid"? (Which would be equivalent to "I am Virgil"?) Can we consult the truth tables?

    It would be fun to see the Iliad or Beowulf rendered in logical form.
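    For fun, here is a rough Russellian rendering of the tercet as a definite description (a sketch only; "P" for "is a poet" and "S" for "sang the just son of Anchises" are my own labels):

```latex
\exists x \,\big( P(x) \land S(x) \land \forall y\,\big( (P(y) \land S(y)) \rightarrow y = x \big) \land x = \mathrm{Virgil} \big)
```

    That is, "there is exactly one poet who sang the just son of Anchises, and he is Virgil," which on Russell's analysis is equivalent to "I am Virgil" only given the uniqueness clause.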
  • On religion and suffering


    Also, is it supposed to be a vice to "assert with bold certainty" that a knife is a bad toy to give a baby?

    Yes, I'm quite certain you shouldn't throw a razor-sharp object into a baby's crib. Anyone whose philosophy has led them to think that they mustn't lean too hard into the courage of their convictions on this has adopted a "philosophy" that seems to be a far cry from the "love of wisdom."

    Are you sure this isn't Hegel's "fear of error become fear of truth?"

    No doubt, it would be more acceptable to say merely that it "wouldn't be true for me that razors are good toys for 6-month-olds," and to allow that others might justifiably disagree. The ol' tyranny of bourgeois metaphysics I suppose—temperance, prudence, fortitude, and justice all subservient to tolerance.
  • On religion and suffering



    Kierkegaard didn't believe in the catholicity of reason, he was a protestant from Denmark. He was essentially a Christian Viking, from a theological POV. That's why he emphasizes irrationality (i.e., "berserk") and the knight of faith (i.e., "berserk-er").


    Yes, but Kierkegaard believes in a transcendent orientation towards the Good in the same way that Plato, St. Augustine, or Hegel did. Our desire for—and to know—what is truly good is what allows us to transcend the given of what we already are.

    IMO, Kierkegaard's problem is that he has inherited the deficient presuppositions of his era and leaves them unchallenged. For him, the desire for the Good cannot be the desire of reason (practical reason) because all desires relate only to the passions and appetites. This is the same presupposition that leads Hume to posit his Guillotine (the is-ought gap), and to declare that "reason is and ought only be the slave of the passions." (Lewis speaks to this in the passage quoted here: https://thephilosophyforum.com/discussion/comment/956012)

    Hence, reason is sterile and inadequate for Kierkegaard because his era has already deflated it into mere calculation, and so the infinite sought by the soul must be sought in passion, as set against reason.

    However, even ignoring this, what I would also consider to be his error is to suppose that this transcendence could only apply to practical reason/passion (whose target is the Good) and not to theoretical reason. He essentially grants his opponents their deficient premises on theoretical reason, and in doing so sets the "subjective" against the "objective" in a sort of contest where one must prevail. Much of the prior tradition, by contrast, makes them both part of the same Absolute. The Good, the Beautiful, and the True are all equally Transcendentals, practical, aesthetic, and theoretical reason part of a unity. The desire to know what is "really true" is also a source of transcendence, pushing us beyond the given of current belief and opinion, just as practical reason pushes us beyond current desire.

    For him, you mean? Or for anyone in general? If it's the latter, then I agree with Kierkegaard on this point: how do we even know that human reason has catholicity? It could just be secular universality for all we know.

    Are there many sui generis, potentially contradicting truths or just one truth? Likewise, are there many unrelated, perhaps contradictory reasons? Can one give reasons for reason that are not circular?

    Kierkegaard is a Christian, and so he should recognize that there is one "Way, Truth, and Light," (John 14:6) and one Logos (John 1). Yet he is also the inheritor of Luther, who told Erasmus:

    "If it is difficult to believe in God's mercy and goodness when He damns those who do not deserve it, we must recall that if God's justice could be recognized as just by human comprehension, it would not be divine."

    ...opening up an unbridgeable chasm of equivocity between the "goodness of God" and anything known as good by man. Calvin does something similar with his exegesis of I John 4:8, "God is love," such that it is [love] for the elect, and inscrutable, implacable hatred for all else.

    I already gave you a Dante allusion, so here is another. In Canto IX, Dante and Virgil are barred from entering the City of Dis by the demons. Virgil is a stand-in for human reason. The Furies who taunt Virgil irrationally claw at themselves, as misologists also strike out without reason. Then they threaten to call for Medusa, to turn Dante to stone.

    Virgil is so scared of this threat that, not trusting Dante to keep his eyes closed, he covers the Pilgrim's eyes himself. Then Dante the Poet bursts into an aside to the reader to mark well the allegory here.

    There are a few things going on. The angel who opens the gates of Dis for them is reenacting the first of the Three Advents of Christ, the Harrowing of Hell (all three show up), but I think the bigger idea is that one risks being "turned to stone" and failing to progress if one loses faith in reason after it is shown to be defenseless against the unreasoning aggression of misology (D.C. Schindler's Plato's Critique of Impure Reason covers this "defenselessness" well).

    The very next sinners Dante encounters are the Epicureans, who fail to find justification for the immortality of the soul and so instead focus on only worldly, finite goods. It's an episode filled with miscommunication, people talking over one another, and pride—exactly what happens when reason ceases to be transcendent and turns inward, settling for what it already has. This is the Augustinian curvatus in se, sin as being "curved in on oneself." Dante himself was seduced by this philosophy for a time, and was seemingly "turned to stone" by it.

    Anyhow, one would misread St. Augustine's "believe that you might understand" if it were taken to be some sort of fideist pronouncement of blind faith. In context, it is very practical advice. One cannot learn anything if one doubts all one's teachers and refuses to accept anything. This is as true for physics as theology. We can even doubt that our parents are truly our parents. We might have been switched at birth. But we will never understand, be it physics, or what it is to be a good son, if we do not transcend such skepticism.


    What do you think of Tertullian's (or whoever "really" said it): Credo quia absurdum, "I believe because it is absurd."?

    Tertullian never said it. He said "prorsus credibile est, quia ineptum est," "It is completely believable because it is unfitting," and the context is Marcion claiming that it would be unfitting for Christ to die a bodily death. The point is more that it makes sense because only God's radical, unfitting condescension can bridge the chasm between Creator and creature. As St. Athanasius says, "God became man that man might become God."

    Post-Reformation anti-rationalists glommed on to Tertullian because of "a plague on Aristotle," and "what has Jerusalem to do with Athens?" but fundamentalists would do well to note that two paragraphs after this part of Prescriptions Against the Heretics he says: "no word of God is so unqualified or so unrestricted in application that the mere words can be pleaded without respect to their underlying meaning," and that we must "seek until we find" and then come to believe without deviation. Also worth considering, the things they like most about Tertullian seem like they would be precisely those things that made him prey to the Montanist heresy.




    To say that America has a coastline is to assume some configurative understanding of what a coastline is, which is to say, a system of anticipations concerning what it means to interact with it.

    No, it's to assume that there is a difference between land and sea and a place where the two meet. Words, concepts, models, I'd contend these are a means of knowing, not what we know. Hence, when a concept or model changes, it does not imply that what is known through them changes. This is for the same reason that if I light a photograph of myself on fire I don't suffer burns, or if I unfocus my telescope, the craters on the Moon aren't smoothed away.

    whenever we use the word we commit ourselves to a particular implied system of interaction

    Yes, a system of interaction where the ocean is not a cliff or a beach. But these interactions don't depend on us knowing about them.


    "America did not have a coastline until it was mapped," and "penguins and cockroaches didn't exist until man experienced them," are prima facie implausible claims. Extraordinary claims require solid evidence. Yet as noted above, one can easily accept enactivist premises, reject the "view from nowhere," and recognize the epistemic primacy of interaction without having to suppose any of this. You seem to need additional premises to justify this sort of claim, not merely dismissing other views.

    As it stands, this looks akin to saying "three and three doesn't make five, thus it must make seven." Well, the first premise is right. The conclusion is extremely counterintuitive though and it's unclear how it is supposed to follow.

    Alicia Juarrero explains:

    Forgive me, but I am at a loss for how this is supposed to support the suppositions in question.

    Nor should the meanings of these examples be reified as epistemological truths, as G.E. Moore tried to do when he attempted to demonstrate an epistemological certainty by raising his hand and declaring ‘I know that here is a hand’.
    You’re doing the same thing by asserting with bold certainty ‘a knife is a bad toy to give a baby!’, ‘one can't mate a penguin and a giraffe!’ and ‘one cannot take flight by flapping one's arms vigorously like a bird’! Are these certainties that need to be justified, and if so, is there an end to justification, a bedrock of belief underlying their sense and intelligibility? And what kind of certainty is this bedrock?

    I didn't say anything about certainty; I said one could explain the nature of some constraints very well without recourse to cognitive science and dynamical systems.

    But to the point, I would simply reject the unchallenged assumption made by many critics of Moore that all knowledge is demonstrative knowledge, or that knowledge is merely justified opinion. Yes, if all knowledge requires justification then one has to traverse an infinite chain of syllogisms to know anything, this was a going concern of the skeptics as far back as ancient Athens. But here is a syllogism:

    P1: If all knowledge were demonstrative, we would need an infinite chain of justifications to know anything, and one cannot consider an infinite number of syllogisms in a finite lifespan (making knowledge impossible).
    P2: But we do know things.
    C: Therefore, not all knowledge is demonstrative.

    If one rejects P1, they have rejected the grounds for complaining about "justification stopping somewhere." Either they affirm that we can consider an infinite chain of syllogisms or that we don't need to.

    If they reject P2, then they are committed to the claim that they don't know anything, in which case they can hardly know that either P1 or P2 is false.
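    In propositional form (with D for "all knowledge is demonstrative" and K for "we know things"), the argument is a simple modus tollens:

```latex
\begin{align*}
P1:&\quad D \rightarrow \neg K\\
P2:&\quad K\\
C:&\quad \therefore \neg D
\end{align*}
```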
  • On religion and suffering



    Scientific advances in understanding gravity, mass and energy from Newton to Einstein changed the meaning of these concepts in subtle ways. The notion of coastline doesn't exist independently of the actual processes of measuring it, and these processes conform to conventions of measurement.

    Sure, the concepts/notions might change (or we might say our intentions towards them). That seems fine. What seems implausible is that all the interactions of mass should have changed because our scientific theories did, or that North America had no coastline, no place where the land met the sea, until someone measured it.

    Complex dynamical systems approaches applied to cognitive intentionality explain how intentional stances produce specific constraints, constants which do not act as efficient causes.

    How so?

    Anyhow, the fact that a knife is a bad toy to give a baby, that one can't mate a penguin and a giraffe, or that one cannot take flight by flapping one's arms vigorously like a bird does not seem the sort of things that should require recourse to cognitive science to explain.
  • What does Quine mean by Inscrutability of Reference


    To start, it might be helpful to recall that, pace modern practice, when Aristotle is talking about definitions he is talking about the definitions of things, not words. From what I understand, this was common practice, and this certainly seems to be what Socrates is involved in. A key idea here is that definitions can be more or less correct; a definition is not just "however a word is currently used." This is obviously not how dictionaries come up with their definitions. They add a sense when a word begins to be commonly used in an equivocal manner. It's closer to scientific classification, or questions like "are viruses a living organism?" (i.e. proper per se predication re viruses).

    Anyhow, in the Euthyphro I think Plato is getting at knowing what piety is, not what the word piety means. I don't see how he is committed to the idea that some particular combination of syllables or characters uniquely maps to it. Indeed, a big thing he focuses on is that we often fail to reach such concepts in our words and propositional thought.

    The notion of pros hen, analogical predication is his student Aristotle's, but the grounds for it in his own work is pretty clear.

    Indeed, Plato denigrates words in a number of places. Words can only speak to relative good, not the Good. D.C. Schindler has a pretty good treatment of this in "Plato's Critique of Impure Reason," but it can be found most explicitly in Letter VII, where he explains why he has never and will never write something like a dissertation on metaphysics. Rather, such knowledge must be gained by "a long time and a life lived together, as one candle flame jumps to another."

    But think about this re Socrates. I believe he'd dispute it vigorously

    But this would be to elevate the mutable, contingent sign to the level of what it is a sign for (confusing the mutable and immutable/intelligible). IMO, St. Augustine, probably the most influential Platonist, stays pretty true to Plato in his semiotics, in which corporeal signs only direct our attention to what is intelligible. The triangle drawn in chalk that directs our attention in geometry class is not the triangle grasped by the intellect.

    Now, in Augustinian semiotics these problems in translation could be overcome because one understands the intelligible by looking "inwards and upwards," not by comparing sets of behaviors and conducting statistical analysis on them or something of that sort. Knowledge is a sort of self-knowledge. The relationships between mutable and corporeal (not to mention contingently stipulated) signs and mutable objects are decidedly not the sort of thing one "grasps noetically." To focus on them is to swan dive into multiplicity.

    But we might suppose there is also a happy medium between the high flying "noesis-focused" approach of Augustine and limiting ourselves to a "third-person" view that requires us to consider how some sort of blank slate Bayesian AI would come to correlate words with phenomena based on a data feed of empirical measurements. As Gadamer points out, you can't begin any analysis without some prejudices, and so we need not attempt to flee from them, which wouldn't work anyhow.




    I also take MacIntyre's idea that we've lost the meaning of classical terms to exemplify this. The assumption seems to be a kind of "one word, one meaning" theory, so that if A comes along and says, "I'd like to use 'virtue' and 'essence' in the following ways" (giving cogent reasons, we'll assume), B replies, "No, you can't, for that is not what 'virtue' and 'essence' mean."

    It's probably helpful to take a look at MacIntyre's inspiration, A Canticle for Leibowitz. There, people have lost most scientific knowledge and are just aping the forms of science as a sort of a blind tradition.

    On most views, all scientific knowledge claims are not equally correct. Hence, the problem here isn't supposed to simply be one of conceptual drift, with any and all concepts having equal standing and the only difficulty being translation. Rather, the problem is that the degenerated "science" is muddled and incorrect, misunderstanding its subject matter. In some sense, what is left is the form/signs and not the intelligible content.

    But the assumption here isn't "one word, one meaning." It is "there are ways to be more or less correct about virtue." Thrasymachus has his reasons for asserting that justice is whatever is to the advantage of the stronger. He is simply wrong about what justice is. Disputes over the "meaning of justice" are only going to appear totally irresolvable if one already starts off by assuming that there is no way to be more or less correct.

    The essential idea isn't "the word justice → justice" but rather that there is such a thing as justice; it is not simply a bundle of mutable associations.
  • On religion and suffering


    But at the same time, the laws and properties that we ‘discover’ in nature are not external to the ways we arrange and rearrange our relations with that world as knowledge develops.

    It seems easy to agree with your enactivist precepts, agree with the critique of "the view from nowhere," and to agree on the importance of act ("act follows on being"), and on the error of focusing on "things-in-themselves," while not wanting to affirm this.

    Prima facie, does it make sense that scientific advances in understanding gravity change what gravity is and how it works? Did the coastline of North America change when men began to map it?

    It seems other premises would be needed for this assertion. Something like: "things are defined entirely by their relations" (e.g. a bundle type theory). Being known is one such relation. Thus, when our knowledge changes, the thing known changes, and so "things' properties are not external to our knowing."*

    But this would seem to indicate a further premise along the lines of: "Natures and essences do not exist," and following from that "all predication is per accidens, and no predication is per se." That is, nothing is said necessarily of any particular substance/thing. Whereas if any cat or tree necessarily interacts in certain ways (has certain properties) then our knowing cannot change this.

    Yet these premises seem harder to swallow. You mention constraints. The next question is, "from whence these constraints?" Well, one view that might recommend itself is that "things do what they do because of what they are," i.e., natures that explain why things interact as they do, and we might think the case for natures is particularly strong for those substances that are (relatively) self-determining, self-governing, self-organizing wholes (principally organisms, although other dissipative systems might be lower down the scale here).

    *Hegel gets close to this in the Doctrine of the Concept, but this is merely an "unfolding," and so avoids making all predication per accidens.
  • On religion and suffering


    I don't think Kierkegaard was a fideist. I do think that at times he errs by setting practical reason (the "subjective") over and against theoretical reasoning in a pernicious manner, abrogating the catholicity of reason (which is the first step on the road to misology). I don't think this is a road he wants to travel though. One of the things that cracks me up about Kierkegaard is that he seems very much motivated by the same concerns as Hegel, his arch-rival.

    He might have benefited from St. Augustine and St. Anselm's "believe so that you may understand."




    Or did you really think a human brain was some kind of mirror of nature? A brain and its "sensory equipment"--a MIRROR? Let's see, the electromagnetic spectrum irradiates this grass, and parts are reflected while others absorbed, and what is reflected is received by the eye and is conditioned by cones and rods and sent down the optic nerve and....now wait. Have we not entirely lost "that out there" in this?

    No? Where exactly do you suppose we lost it?

    Saying "we only see light that interacts with our eyes, so we never see things," is a bit like saying "it is impossible for man to write, all he can do is move pens around and push keyboard keys."

    One basic, but ignored premise must come to light: there is NOTHING epistemic about causality.

    I already have a quote ready for this: "...every effect is the sign of its cause, the exemplification of the exemplar, and the way to the end to which it leads." St. Bonaventure - Itinerarium Mentis in Deum.

    Rather, when an encounter with an object occurs, it is an event, and must be analyzed as such. What lies "outside" of this event requires a perspective unconditioned by the perceptual act, which is impossible. Unless you actually think that the world intimates its presence to a physical brain...by what, magic? Just waltzes into the brain and declares, here I am, a tree! I assume you do not think like this.

    But since you are concerned with others' unattended to presuppositions, I will just point out a few I think I might be seeing on your end:

    1. Representationalism and correlationalism are the correct ways to view perception and epistemology.

    2. Truth is something like correspondence, such that not being able to "step outside of experience" makes knowledge of the world impossible (and, in turn, this should make us affirm that there is no world outside experience?)

    3. Perceptual relationships are decomposable and reducible such that one can go from a man seeing an apple to speaking of neurons communicating in the optic nerve without losing anything essential (reductionism).

    And then the old "view from nowhere" and "mind as the mirror of nature," which seem to get rolled out to create straw men and false dichotomies far more often than they are ever actually endorsed. "Oh look, it's impossible to know the world as one would know it without a mind. If not-A, then B" (where B is variously anti-realism, pragmatism, deflationism, eliminativism, etc.). But we might reject the premise: "It is either A or B," C or D might be options open to us as well.

    I don't even disagree with the idea that being and thought are two sides of the same coin, but I do think the empiricist assumptions behind "no events occur unless they are witnessed" might be off base.

    Kierkegaard knew very well about this problematic, for he had read Aristotle, Augustine, Kant, Hegel, and so on (he was, of course, literally a genius). One must know in the first place in order to acknowledge the "collision" between reason and existence. Reason cannot, keep in mind, understand what it is, cannot "get behind" itself (Wittgenstein), for this would take a pov outside of logic itself, and this cannot be "conceived".

    It depends on how reason is conceived. Reason for the ancients and medievals is ecstatic and transcendent, "the Logos is without beginning and end." Often today it is not much more than computation. How it is conceived will determine its limits. Is reason something we do inside "language games?" Is it just "rule following?" Or is it a more expansive ground for both? Does reason have desires and ends?

    Important considerations.
  • p and "I think p"


    I don't know what Kant means by unknowable things-in-themselves. What is knowledge then if not something independent of the thing itself? You're assuming that there is more to know about something, when it could be possible that a finite number of sensory organs can access everything there is to know about other things. In fact, there are many characteristics of objects that overlap the senses. You can see, hear, and feel the direction and distance of objects relative to yourself. All three senses confirm what the other two are telling you. Having multiple senses isn't just a way of getting at all the properties of other objects but also provides a level of fault tolerance that increases the level of certainty one has about what they are perceiving.

    Indeed, whatever properties something has when it is interacting with nothing else and no parts of itself are not only epistemically inaccessible, but causally inert and can make no difference to anyone, ever. Hence, we might think that knowledge of "things-in-themselves," far from being the "gold standard," is rather worthless. Things participate in the world by interacting, as the old scholastic adage goes actio sequitur esse, "act follows on being."

    Now, it's obviously true that what we are affects how we interact with things. This is the ol' quidquid recipitur ad modum recipientis recipitur, "everything is received in the manner of the receiver." But this is true of all interactions. Salt dissolving in water only occurs because of what both salt and water are, and just as a ball only "appears red" in the presence of a seer, salt only ever dissolves when placed in an appropriate solvent.

    Direct realism need not be naive. Aristotle, for instance, combines some of the precepts of enactivism with the idea that what we experience is the interaction between our sense organs and things, as mediated through the ambient environment, and that knowledge involves universals that are not, strictly speaking "in" things. Yet he also doesn't have everything taking place in the imagination, through phantasms/representations, as many moderns would have it.





    You can also depend on the process of causation in a deterministic universe as providing another level of certainty. Effects carry information about their causes. You can get at the cause by making multiple observations over time and finding the patterns. This allows you to predict with a higher certainty the cause of some effect you experienced. When billions of people use smartphones every day, almost all day, and 99% of them work as intended, does that not give you a certain level of certainty that your smartphone will work today? Can we be 100% certain? No. Are we more than 0% certain? Yes, depending on the case. You seem to be maintaining that we can only ever be 0% certain of anything.

    :up:

    "...every effect is the sign of its cause, the exemplification of the exemplar, and the way to the end to which it leads." St. Bonaventure - Itinerarium Mentis in Deum.

    You can explain this in terms of modern supervenience theories as well. As you pile up more and more observations the set of possible P-regions (spatio-temporal regions capable of producing some interval of experience) consistent with our experiences gets smaller and smaller.

    To illustrate this, suppose we have three mostly identical systems involving three identical subjects having identical experiences of seeing an apple. In one case, the apple is whole. In another case, the apple has been carefully hollowed out. In a third, the apple is fake, plastic, but it appears indiscernibly similar to a real apple. In this example, we would say all three systems have the same B-minimal properties because they produce the same phenomenal experience. Another way of thinking about this is that the B-minimal properties associated with any given interval of experience could be said to correspond to a set of possible P-regions, different physical ensembles that give rise to the same experience. Our experiences have a direct correspondence to some of the object/environment’s properties, just not all of them.

    But now consider a longer interval of experience where a person sees an apple, walks up to it, picks it up, and takes a bite of it. The B-minimal properties of a system giving rise to this experience must be quite different. It is no longer the case that the hollowed-out apple or the plastic one will be indiscernible from the real apple in these scenarios. As we can see from this example, the subject’s interaction with the environment affects the B-minimal properties required to produce their experience. When we move to an interval where the subject bites into the apple, the B-minimal properties must now be such that they can produce not only the sight of the apple, but the feeling of weight it produces in our subject’s hand, its taste and its smell.

    One way to think of this interaction might be to say that the set of possible P-regions that have the B-minimal properties required to produce our subjects' experiences has been reduced by their walking up to the apple and taking a bite of it. In the case where the subjects merely looked at the apple, we just needed something that looked indiscernibly like a given apple from a fixed angle to produce the experience. In the second scenario, we need something that looks the same from different angles, feels the same, and tastes the same.

    As we interact with objects in our environment, using more of our senses, we greatly reduce the number of possible physical systems that could give rise to our experiences. In turn, the one-to-one correspondence between our experiences and the B-minimal properties that act as their supervenience base comes to apply to a narrower and narrower set of possible physical systems (set of P-regions), and we come closer to uniquely specifying the properties of the objects we interact with.

    Now consider what happens when we conduct scientific experiments, using finely tuned instruments that allow us to probe the properties of objects. In such cases, there is an even greater reduction in the number of possible P-regions that are consistent with the experiences of the experimenter. Moreover, we can consider what happens when a vast number of people are involved in such experiments, leading to a very large number of such experiences. As we consider more and more experiences, there appear to be fewer and fewer ways “the world could be” and still produce the same experiences investigators are having.
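    The winnowing of possible P-regions can be sketched as simple hypothesis elimination (a toy model; the candidate "worlds" and their property sets are invented for illustration, not drawn from any formal supervenience theory):

```python
# Toy model: candidate P-regions ("ways the world could be") are filtered
# by each new observation; only candidates consistent with every
# observation made so far survive.

# Hypothetical candidate worlds, each with the observations it could produce.
candidates = {
    "whole apple":   {"looks like apple", "feels heavy", "tastes like apple"},
    "hollow apple":  {"looks like apple", "tastes like apple"},
    "plastic apple": {"looks like apple", "feels heavy"},
}

def eliminate(candidates, observation):
    """Keep only the candidate worlds that could produce this observation."""
    return {name: props for name, props in candidates.items()
            if observation in props}

# Merely looking at the apple leaves all three candidates standing.
after_looking = eliminate(candidates, "looks like apple")
assert len(after_looking) == 3

# Picking it up rules out the hollowed-out apple.
after_lifting = eliminate(after_looking, "feels heavy")
assert len(after_lifting) == 2

# Biting it rules out the plastic one: a single candidate remains.
after_biting = eliminate(after_lifting, "tastes like apple")
assert list(after_biting) == ["whole apple"]
```

    Each interaction acts as a filter, and the surviving set shrinks monotonically, which is the sense in which more interaction brings us closer to uniquely specifying the object.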

    Such an insight does not, of course, rule out radical skepticism. We could still suppose that such experiences might be the product of some “simulation” or an “evil demon” à la Descartes. Yet representationalists generally allow that our experiences do have something to do with the world. Indeed, their claims are often based on findings in the sciences. Rather, they claim that the relationship between our experiences and the properties of objects is too indirect and dynamic to allow for true knowledge of those things.
  • What does Quine mean by Inscrutability of Reference


    Rather, we're trying to shake up a very common assumption among philosophers, which is that there is some sort of binding action (I called it "metaphysical Superglue" elsewhere) that makes a word inseparable from its object or meaning or concept -- take your pick of these imprecise terms. ("Cannot be grounded in any infallible a priori knowledge," in the words of the SEP article.)

    Who held such a position though? I find this whole area of philosophy to be filled with straw men and ghosts. It's obvious that different peoples use different words for different things and that anything can be said in many ways. Poetry as far back as Homer and the Bible makes use of this.

    This was, if anything, likely more obvious in ancient and medieval times when dialects, language groups, and practices varied over relatively tiny geographic areas. Today we live in a globalized, and so homogenized, world. Whereas Herodotus, Xenophon, or Marco Polo seem acutely aware of the dramatic differences between their culture and the "barbarians." An understanding that meaning varies with context, or that use helps to determine meaning is also very old. One could not identify fallacies of equivocation or develop theories of analogous predication otherwise.

    I take it that Quine is mostly responding to his own immediate tradition, to Russell, the early Wittgenstein, Carnap, etc., yet I find nothing so naive as this in their own understanding, even if I do agree that something like the translation of language into logic or falsification conditions is probably unprofitable.

    Yet if the point is that translation doesn't involve one single string of syllables or characters, the point is trivial. However, the conclusions drawn, e.g. the inscrutability of reference, tend to be much more radical than this. The point of inscrutability isn't that we can also call Rome "the Eternal City," "the capital of Italy," or "the largest city on the Tiber," or New York "the Big Apple," but rather the (initially at least, bizarre) claim that one can never refer exclusively to Rome or New York City, but that we always refer just as much/just as plausibly to very many other things (on some views, an infinite number).

    Does this help us understand the relation of word and object, which I believe is Quine's point with "gavagai"? Not a rhetorical question -- you may well be seeing something here that I'm not.

    Sure. One doesn't even need to assume some strong sort of realism; we could just assume a sort of loose scientific realism and reject mereological nihilism (i.e., hold that there are true/proper part/whole relations). Organs are a great example of proper parts.

    Let's say our linguist is trying to discover the word for "heart." He sees the natives butchering a rabbit and, since he is an active participant, picks up the heart (which has been separated because, being a different organ with a different function, it is made of tough muscle and requires prolonged cooking that would spoil a liver, etc.). He gets a word in reply. He moves around the cook fire and picks up a deer heart. He gets the same word. Then he points to his own chest, and gets an affirmative response.

    I would conclude that it is pretty obvious what the word means now. Different cultures have different words for the same organs because organs are distinct parts. To assume that the word might as well apply to any number of assemblages of empirical observations seems to me to presume that there aren't proper parts for us to identify.

    Now, against this someone might complain that the word could just as well mean blood, or tough (because heart meat is tough), or chest. This already doesn't seem plausible, but we can just consider here that the linguist is going to have a vast number of interactions where they can actively pursue such distinctions. You could reference a pot of blood sausage being prepared for instance, if the concern is that "blood" is what is meant.

    An opponent might backtrack even further and say that there is no way to know when one has received an affirmative or negative gesture. This just seems implausible; one doesn't need a common language to signal assent or dissent. Indeed, we can even understand other animals on this front, because communication is important. Someone who has never seen a dog before doesn't stand in utter confusion as to the dog's attitude towards them when they see it growling, baring its teeth, and readying to pounce, just as a sheep doesn't need to be exposed to dozens of wolves to know it should flee from them. The first is enough.

    There may well be. Rödl devotes an entire chapter to discussing Nagel's "view from nowhere," and one of his criticisms is this problem of the "loss of the viewer" -- what it does to first-person propositions.

    Right. Hearts are the types of things people have words for. The idea that some "set of behaviors" referencing hearts could just as well be applied to any number of bizarre, counterintuitive assemblages of properties needs to take the human out of the learning process, leaving nothing but a "set of observations" to be mapped to other sets of observations.

    But to assume that human language could be arbitrary in this way seems to me to have already implicitly presupposed the very thing in question.
  • What does Quine mean by Inscrutability of Reference



    So leave it.

    What, the fact that you don't seem to have even grasped the very basics of what you're talking about?

    Tell you what, essences and essential properties are still very popular in philosophy. If your argument actually dispatches them in a few sentences, instead of failing to understand what an essence is, you should have absolutely no problem getting it published. It should quickly become one of the most cited articles in metaphysics. Go for it.
  • What does Quine mean by Inscrutability of Reference


    Nowhere, my only point is that the example, with the presuppositions attached to it, assumes what it sets out to demonstrate.

    Also, leaving aside the idea that an essence is a "set of properties" (bundle theory), I don't see how your exercise demonstrates much of anything. The idea of essences is not that some "items" cannot lack certain attributes; it's that they cannot lack those attributes and still be the sort of thing they are. The essence is the "what it is to be" of a certain kind of thing.

    So, if Socrates is a man, and "being human" is his essence, the counterexample would be a possible world where Socrates does not possess the attribute of being human and yet is still a man, an oak tree that is not a tree, etc. On the view that a corpse is not a man (i.e., death as substantial change), it's obvious that "Socrates," as merely an item/body, can cease to have his essence. Socrates can be eaten by a tiger and his flesh and bones can become tiger flesh and bones, which can in turn become part of insects, fungus, plants, rocks, etc.
  • Ways of Dealing with Jihadism


    Which other targets of jihad are you talking about?

    Mali, Niger, Burkina Faso, Nigeria, Chad, CAR, etc. Southeast Asia has had its share of Jihadi groups too.
  • What does Quine mean by Inscrutability of Reference


    Sort of. Since the example concerns two linguistic communities who don't yet share a common translation for "gavagai", what else besides behavior would we have to go on?

    Your own grasp of the intelligibility of things and understanding of what it is to be human. IIRC, Wittgenstein makes the point re the indeterminacy of rules that, when we point at something, we could just as well be indicating that others should look at whatever is behind the shoulder of our extended hand.

    Except that wouldn't make any sense. Our eyes are not on our backs, and so we'd have no idea what we are identifying.

    Let's assume for the sake of argument an older, realist perspective. Things have essences. Our senses grasp the quiddity of things. We all, as humans, share a nature and so share certain sorts of aims, desires, powers, faculties, etc. Given this, given we are already interacting with the same things, with the same abstractions, and simply dealing with them using different stipulated signs, translation doesn't seem like that much of a problem. We might even allow that our concepts (intentions) and understandings of things might vary, but they are only going to vary so much.

    The idea that "all we have to go on is behavior" seems like it could be taken as an implicit assumption of nominalism. Yet then the conclusion seems to be, in some sense, an affirmation of nominalism.

    Anyhow, this scenario has come up very many times. In the age of discovery, people very rapidly became translators between languages that were about as unrelated as languages can get. And children were soon born in contexts where they were native speakers of both languages, and as far as I am aware, this never led to reports by native bilingual individuals that "actually, all these translations are inadequate; they don't really understand each other!"

    One of the weird things about many empiricist doctrines is that the quiddity of things, their intelligible whatness, and phenomenology as a whole, are considered "unobservable." It has always seemed to me that this is sort of strange: what could be more observable? In general, "unobservable" tends to mean something more like "difficult or impossible to quantify or model, and thus something that must be excluded."

    I read him rather as using the gavagai story to show why the word/meaning pair is problematic.

    I don't even necessarily disagree with this, I just don't think it shows much of anything. From the IEP article on this:

    These views of Quine and Davidson have been well received by analytic philosophers particularly because of their anti-Cartesian approach to knowledge. This approach says knowledge of what we mean by our sentences and what we believe about the external world, other minds, and even ourselves cannot be grounded in any infallible a priori knowledge; instead, we are rather bound to study this knowledge from a third-person point of view, that is, from the standpoint of others who are attempting to understand what we mean and believe. What the indeterminacy of translation/interpretation adds to this picture is that there can never be one unique, correct way of determining what these meanings and beliefs are.

    There is a sort of parallel between this and what Rödl is saying about not removing the thinker from thoughts. What exactly is meant by "third person" here? We learn things like language by doing, so already this is potentially falling into the old empiricist mistake of seeing the knowing subject as largely passive. Moreover, to the extent that the goal is a "view from nowhere" or a "view from a blank slate," it also erases the real language learner, leaving behind a set of observations with no observer. The appeal to Cartesian "infallible a priori knowledge" also strikes me as an implied false dichotomy: either we adopt a certain sort of empiricism, or we're with Descartes; but there is plenty of room for a via media here.

    I don't think that's the problem. Rules of math and logic are also extremely general principles, but we don't have trouble finding agreement there.

    As general as "justice" or "beauty?" I don't think so.



    I'll go a step further and suggest that we have overwhelming agreement as to what is true and what is good.

    The stuff we focus on is the stuff about which we disagree. That misleads some to think that we disagree about stuff. But our agreement about what the world is like is overwhelming. And our agreement about what things folk ought and ought not do is pretty broad, too.

    :up:

    Exactly. But the problem also comes up when people want to give a single, comprehensive, univocal definition to these terms, or if it is assumed that they can be decomposed.
  • Ways of Dealing with Jihadism


    Nothing does more for Jihadism, and brings more to its cause, than its oppression.

    Counterpoint: if the US and the rest of the region hadn't rapidly stood up a massive air campaign against IS as they advanced into the Baghdad suburbs in 2014 (and provided significant ground support) it seems fairly obvious that IS would have taken most of Iraq, all of Syria, and likely expanded into Lebanon by 2016.

    Even with a massive amount of air and artillery support it still took almost three years to retake Mosul, with the siege of the city lasting 9 months after the initial encirclement and resulting in civilian losses that were, on some measures, a significantly higher proportion of the population than Gaza to date.

    Essentially, IS wasn't going anywhere without the coalition carrying out an extremely large-scale air campaign. IS was smashing through US-trained and equipped Iraqi divisions despite huge numerical and material disadvantages, fighting Iran and Hezbollah and winning, fighting other Jihadis and winning, and advancing against both Russia and the SAA, all while making progress on a third major front against both the Kurds and the Turks.

    There is very little reason to think the problem would have just "gone away."




    Many of the countries with the largest Jihadi problems are hostile to Israel and have essentially no footprint in the Middle East. Likewise, Iran, Hezbollah, and other "Shia kufar," threatened with Salafi Jihad are not exactly huge fans of Israel.
  • How can one know the ultimate truth about reality?


    To follow up:

    Is this your belief too?

    I think that Plato does get at something essential here. If the good is "that at which all things aim" (Aristotle's definition), or at least that at which anything goal-directed aims, then it seems that all goods are in some sense related. However, good is predicated analogously, not univocally.

    For instance, I don't believe that one could have a "moral calculus" or ascribe some sort of "goodness points" to things or acts. Yet neither do I think all desirability and choiceworthiness break down into completely unrelated categories.

    So, to the example of back treatments: of course what constitutes a "good treatment" is relative and contextual. Surely it is not good medical practice to install steel spikes into someone's spine if they have a perfectly healthy spine. It can be good medical practice to stab someone in the throat or chest; we learned this in my EMT course. It is not, in general, good to go around stabbing people in the throat.

    But this is precisely why goodness is best thought of as a general principle.

    How would we demonstrate that this is the case? It also seems kind of circular: claiming that the absolute encompasses all reality and appearances, doesn't it take for granted what it is supposed to establish?

    It's a definition, not an argument. How would one demonstrate that cows are "cows" either? For something to be transcendent, it cannot fail to transcend. If "absolute" is to mean "all-encompassing," and we posit both reality and appearances, then by definition the absolute cannot exclude one of the things we've posited.

    Perhaps the definition is defective. One can have bad definitions. I don't think it is though.
  • An Analysis of "On Certainty"


    Are you saying that our fixer knows they can, but doesn't believe they can?

    Perhaps, depending on how "belief" is defined. If belief is just something like "the affirmation of a proposition," then one would always believe what one knows (although we might say that it is things/principles that are primarily known, not propositions).

    This doesn't suggest that knowing is a form of believing though. Whenever one is running, one is also breathing, but running doesn't consist in breathing. Similarly, swimming entails but does not consist in not drowning. In the same way, belief might go along with knowledge without being what knowledge consists in.

    The point here is to work through the various ways in which "I know" is used? It would be prejudicial to suppose that any one was paradigmatic.

    Sure. It would be equally prejudicial to suppose they are all unrelated as well though. Are they related? I should think so.
  • Question for Aristotelians


    I suppose it depends on how it is approached. But Rödl maintains the ultimate proper orientation of the individual towards the Good (at least in Wallace's treatment of him). Some of the quotes sound largely consistent with (if not necessarily suggestive of) a more Platonic/Patristic notion of knowledge as self-knowledge.

    Not surprising; he's a Hegel guy, and Hegel was deeply inspired by Meister Eckhart and Böhme, who were very influenced by the whole Augustinian tradition.

    Anyhow, I tend to agree with Kierkegaard that the more common risk in Hegelianism (if not present for Hegel himself, properly understood) is not the elevation of the self and of human particularity/authenticity, but of washing it out and ignoring it.
  • An Analysis of "On Certainty"
    Anyhow, hinge propositions obviously aren't arbitrary. Why do disparate cultures share so many, e.g. "we have bodies," and "there are corporeal objects?" A shared human "form of life," perhaps. However, that term is quite ambiguous. One might suspect that such propositions are accepted rather because they involve non-demonstrative knowledge.

    But then the knowledge is in some sense prior to and constitutive of the language.
  • An Analysis of "On Certainty"


    :up:

    That makes sense. I think that, aside from difficulties from outside "Wittgenstein space," though, there is invariably the difficulty that people read the book in very different ways.



    suggests that Wittgenstein had the contestable view that knowledge is the very same as belief

    I agree, it might suggest this if I had only written the quoted part and not clarified in the next paragraphs. Nowhere, though, do I suggest that the problem is that "not all beliefs are true and justified," but rather that belief does not imply identity between the intellect and what is known, and does not capture what is meant by many uses of "to know" (e.g., sensing as knowledge, "knowing how to ride a bike," or the original example of "carnal knowledge").

    But one must surely believe what one knows? "I know it's raining, but I don't believe it!" is ironic? A play on our expectations?

    People speak this way without irony all the time. "So you believe you could have fixed the problem?"

    "No, I know I could fix it."

    People often get offended when their knowledge is impugned as mere belief/opinion.

    Of course one affirms what one knows. So yes, it wouldn't make sense to essentially affirm and deny the same thing. Generally, the distinction involves understanding, a grasp of the thing known, as opposed to merely holding a justified opinion that also happens to be true.

    For instance, does one "know Jimmy Carter," if one affirms some justified beliefs about the man but has never met him? Certainly one doesn't "know how to fix a car" or "train a horse," or "know horses" through merely holding justified true opinions about them, and the same goes for "knowing what coffee tastes like." And we might question if one "knows justice" or "what is just" by being able to affirm informed, true opinions about just action.

    English is hardly unique in its many senses of the word "to know." Attic Greek, for instance, offered up distinctions between sophia, gnosis, techne, episteme, phronesis, and doxa. And no doubt, there is plenty of analytic thought, particularly more recently, that pays close attention to the distinction between "knowing that" and "knowing how" (and even "knowing why").

    The question is which sort of knowledge is paradigmatic of knowledge in its fullest sense (or maybe none, and we have a sui generis plurality?). In general, accounts of justified true belief have, in part because of the particular philosophy of language and rationality in vogue, tended to focus on the justified/informed affirmation of true propositions.

    However, it seems fair to question if the horse tamer and the horse researcher, who both read about and spend their lives with horses, might know horses in a way that someone who has simply read some books on them (and so holds justified, true beliefs) does not. I suppose the philosophy of perception/imagination is relevant here too.

    A big issue in OC is precisely what comes up when all knowledge is taken to be demonstrative knowledge. This problem is an old one: an infinite regress of (ultimately circular) syllogisms would be required.
  • p and "I think p"


    He’s being a little sarcastic, in my reading, but his meaning is clear: If we continue to allow p to float somewhere in the World 3 of abstracta, without acknowledging its dependence on thought1, we are going to get a lot of things wrong.

    Makes sense, for Big Heg, "the truth is the whole," and the process of knowing and the knower is not excluded from the Hegelian circle.

    I seem to recall from past Rödl exposure that one of the crucial points he makes is that an understanding of action and actors is essential to understanding the world. We are involved in the world (a point going back to Aristotle), not passive recipients, as in many empiricist views. A notion of ends and aims, terminating ultimately in the Good and the True (unified in the Absolute Idea), is required to fully explain this. "All men by nature desire to know," and the first principle of science is wonder.

    It's like Plotinus says, thinking and being are two sides of the same coin and, at the limit, in the One/Absolute, they are not two things.

    But we can slide away from this into confusion and multiplicity, which is what excising any thinker from thoughts does.

    If “the I think accompanies all our thoughts” has been rendered uncontroversial, is it now also uninteresting, unimportant? This is a further question, which I’m continuing to reflect on.

    It seems very important in a context where propositions are often thought to stand in for or "represent" states of affairs (say, as physical ensembles) that bear nothing more than a contingent relation to thought.

    One might think this should be more normative though: "P" should include "I think P." It seems clear that it fails to for some people (or is at least heavily obscured). Rödl might be in danger of, as Big Heg puts it, arguing that the "flower refutes the bud."
  • Question for Aristotelians


    For example, judging that Obama lives in Chicago and judging that I correctly judge that Obama lives in Chicago are not two distinct judgments. This is just one act of judgment, which is at once a judgment that Obama lives in Chicago and a judgment that I correctly judge that Obama lives in Chicago. Often the adverb "correctly" (also "validly" or "rightly") drops out, and Rödl frames the self-consciousness of judgment as the idea that judging that p and judging that I judge that p are not distinct. For example, he says that "the act of the mind expressed by So it is is the same as the one expressed by I think it is so" (6). In any case, the thought is that one cannot pry apart the judgment that things are so and the judgment that I (correctly) judge that things are so. When I judge that Obama lives in Chicago, in that very act of judgment, I also judge that I (correctly) judge that Obama lives in Chicago.

    Hmm, this is exactly what Sokolowski tries to disambiguate. For one, we can fail to be proper "agents of truth." We can live into our nature in this respect more or less well. We can side with Machiavelli over Cicero as respects the practicality of lying and deception. We can attempt to simply abdicate our role as agents of truth. Indeed, we can do so self-consciously, essentially refusing to take on Rödl's notion here.

    This is exactly what lands Pilate in Hell in my preferred reading of the Inferno (https://thephilosophyforum.com/discussion/comment/959841).

    Now, in a certain sense, I feel like Rödl is simply getting at the "agent of truth" idea from a different angle, which isn't surprising because there is a confluence of historical influences here. The resemblance to Neoplatonic readings of Aristotle I mentioned does not seem off-base given: "We need to understand how judgment can be knowledge of what is in such a way that, precisely as knowledge of what is, it is nothing other than self‐knowledge of knowledge, or knowledge knowing itself. This is a formula of absolute idealism."

    However, either this reviewer has missed the mark (which is possible, because I agree that Rödl writes confusingly) or Rödl seems to be collapsing knowing and judgement, while also assuming that all knowing reaches the status of Absolute Knowing (noesis). Maybe it's just the reviewer, but the mutable, contingent fact that "Obama lives in Chicago" is not the sort of thing that is really the proper object of noesis.

    I always read PhS as sort of suggesting, like Aristotle, that Absolute Knowing is more a sort of virtue—and I suppose it might make more sense if the recognition of the self-conscious nature of knowledge is an ideal we are removing roadblocks to attain, as opposed to something clearly applying to all human thought.
  • Question for Aristotelians


    From the other thread:

    I hate to say it, but a great deal of this comes down to how we want to use very ordinary words like "thought" and "accompany."

    It's worth noting that for Aristotle thought is far broader than judgement. Judgement comes in two forms, one involving affirmation and negation and the other definition. "Knowledge" comes in many more forms than in most other thinkers (a good thing IMO, we use "know" in many different senses).

    Thought is necessarily as broad as can be, since the mind is "potentially all things" (Aristotle draws the comparison to a blank slate upon which anything might be written in De Anima, although this cashes out in a way that is radically different from Locke's later invocation of the same image). Thought is, in this sense, a parallel of prime matter, although it is also, as determinate, the parallel of act/eidos.

    Rödl's point on the self-awareness of assertion reminds me of Plotinus and co.'s interpretation and expansion of Aristotle. Except that for them this self-awareness only applies to knowledge, and really only to that knowledge that is most properly called "knowledge," which involves the co-identity of the intelligible and the intellect, as opposed to mere (informed, true) belief. All knowledge of this sort is, in a sense, self-knowledge, whether that be through the "undescended intellect" of Plotinus, or later attempts to put this capacity "within" embodied intellect. St. Augustine, for instance, has all (true) knowledge coming through a process of turning inward and upward. The mind is a "microcosm" of being (St. Bonaventure) and knowing is a sort of conformity, but also a sort of self-knowing; however, the microcosm is not a representation (which would open the Neoplatonists to all the charges of Sextus and the other Empiricists).

    I think Rödl is on much shakier ground though, because it's less obvious that this sort of self-reflection is implied in all judgements, and it does not seem impossible to form distinct, recursive judgements about one's own judgements.

    For one thing, in the broader English sense of "judge," we do things like judge where a line drive hit to us in baseball is going to land and run to catch it without any obvious self-reflection.

    But more importantly, we often do seriously reason about our own judgements when it comes to practical reason. If we are continually courting lust, gluttony, and wrath, we might seriously question if we truly judge these to be bad. Do we really believe what we think we believe? We might affirm it with good justification, but do we know it, do we understand it, do we possess a noetic grasp of it?

    Perhaps we do judge such vices to be bad, but perhaps we do not know this, or perhaps we only know it in a muddled and unclear way (consider here Plato's dual contentions that the person who truly knows Justice does not act unjustly, and that knowing Justice requires turning the "whole person" towards it in the first place).

    As mentioned earlier, all knowledge is knowledge of eidos and of universals. If we predicated unique terms of unique sensations we could not be wrong, and no meaningful knowledge of an unbounded number of causes can be had but through a grasp of finite, unifying principles. But Aristotle tanks the idea of subsistent forms (and it's unclear if Plato even intended this, although he was certainly interpreted this way) and later commentators tank the idea of totally subsistent natures, leading to the idea that all knowledge is ultimately knowledge of the Logos/One/God. Or for Hegel, it is knowledge of the Absolute.

    And self-knowledge is implied here. For Hegel, Spirit is an essential moment, not contingent. However, it's also a moment that needs to be attained. You have to suffer through reading PhS and the Logic to get to Absolute Knowing, which is itself historically situated. But it isn't a facet of every moment. It's decidedly absent from the moment of sense certainty where PhS begins.

    Another way to look at the distinction is the difference between "first person declarative" and "informational" statements. Robert Sokolowski's very interesting phenomenology-centered approach to these same questions (also through the lens of St. Thomas and Aristotle) in "The Phenomenology of the Human Person," plumbs this distinction. In the former, our "I" statements involve us as thinking, agents of truth. However, we can also make merely informational statements about the world and ourselves without asserting ourselves as agents.

    What Sokolowski gets right in the tradition is that demonstration is a means of grasping the intelligibility of things, of knowing, not what knowledge, much less thought, wholly consists in. For instance, science is a virtue, not a set of demonstrations. The idea that to "think p" is to judge p, and also to judge that one judges p, seems to court the reduction of thought to judgement (which does happen in many philosophies; there is a sort of Cartesian theater of sensation and imagination that the "buffered self" thinks about, which is more along the lines of what I think Kant is getting at).
  • An Analysis of "On Certainty"


    They are not equivalent.

    I never suggested they were. I think what is "off the mark" is your reading comprehension. The premise that prior traditions rejected was that knowledge is a type of belief at all.

    Nothing in that post suggests "analytic philosophy tends to assume that all belief is knowledge." That is clearly false and a silly thing to suggest.
  • An Analysis of "On Certainty"


    2. Where do we find Wittgenstein claiming that knowledge is justified true belief?

    The idea that it's absurd to say one "knows" that one has a toothache suggests that "knowing" is about justification. The idea that one can (indeed, must be able to) doubt anything one "knows" also makes it pretty clear that "knowledge" here is something like belief.

    When he is talking about how it is nonsense to discuss whether a rod does or doesn't have length, or asking "are there physical objects," the key idea seems to be that knowledge involves both belief and verification. I don't think Sam is wrong on this interpretation (as noted, I do think many—on solid grounds—might reject Wittgenstein's premises.)

    1. Where in the grammar of ordinary language do we find the idea that knowledge is justified true belief?

    I don't know if we do. It probably varies by time and epoch as well. When someone says: "I know what it is like to lose a parent," they aren't talking about affirming the proposition that one of their parents has died, for instance. And "to know someone well," doesn't seem to be just to have a lot of true beliefs about someone. We might have many true beliefs about someone we have never met and claim not to know them.
  • An Analysis of "On Certainty"


    You implied - stated - that Wittgenstein, and analytic approaches generally, equate belief and knowledge. That is not so.

    They do. "Justified true belief," was and is an extremely common definition of knowledge in analytic philosophy. Do you deny this?
  • An Analysis of "On Certainty"


    Wittgenstein certainly did not equate knowledge and belief. He consistently takes knowledge to be both believed and true, and spends much effort in working through what else is needed.

    Of course knowledge must be true. A true belief is a belief though. The contested position would be that knowledge is merely (justified) true belief.

    If knowledge is just belief, and one can never "step outside belief" to compare belief with the subjects of belief, then all knowledge is uncertain. This problem (and the related infinite regress of representations) is why correspondence definitions of truth had a nadir from late antiquity to the early modern period, and why folks like Hegel still vigorously object to them.

    Truth does not require justification. A proposition may be either true or not true, regardless of its being justified, known or believed.

    Right, that's what I said would be most controversial in Sam's premises.
  • Hinton (father of AI) explains why AI is sentient
    Anyhow, I feel like: "Hinton explains why neither we nor AI are sentient," might be more accurate here. :wink:
  • Behavior and being


    I would say here: things cannot be figurative language and metaphor all the way down. Right?

    Sure. However, a concern with that might denote a conflation of the means of knowing and communicating with what is known and communicated. When we read a book about insects or French history, do we only learn about words since that is all the books contain?

    To be sure, there can be figurative language that is more or less empty, but it is not all empty. Plato and Dante are two of the finest philosophers in history and both make extensive use of imagery and present their works in narratives packed with symbolism and drama. They are successful, in part, not in spite of this technique but because of it.

    Indeed, both suggest that what they most want to speak about cannot be approached directly, through syllogism and dissertation, but must "leap from one soul to another, as a flame jumps between candles."

    IMO, one of the great losses in modern philosophy is its move away from drama and verse. Nietzsche is a standout for this in our own epoch, and yet many of the older great works, from Parmenides, to Plato, to St. Augustine, to Boethius, to Dante, to St. John of the Cross, to Voltaire, are filled with it. Camus and Sartre, it seems, were not enough to start a trend.
  • An Analysis of "On Certainty"


    I think your conclusions work fine. A lot of philosophy would take issue with P1 and P4 (is P4 supposed to be a conclusion rather?).

    Wittgenstein stays within his narrow analytic context (since he never much ventured beyond it), but the idea that:

    A. Knowledge is belief.
    B. Truth (particularly in a "traditional sense") requires justification.

    Are both historically hotly contested issues. For the Platonic and Aristotelian traditions, for instance, knowledge cannot be belief. If it were, this would lead to all the problems of representationalism and towards universal skepticism (of the ancient sort). Knowledge is, for them, rather the co-identity of the intellect and the intelligible that is known.

    Or, in the mutable realm, we could consider how "acquiring carnal knowledge" of another man's wife was considered a sin that had nothing to do with belief.

    Then, more broadly, and more popularly in the modern context, truth is something like "the adequacy of intellect to being." But such adequacy, while it might itself be known through justification, is not defined in terms of justification and does not require it as some sort of "prerequisite."

    And then if "justification" is meant to be something like: "moving from premises to consequents" in speech, propositional thought, writing, or formal logic, there will be further disagreement. Just for one example:

    ...neither are all things unutterable nor all utterable; neither all unknowable nor all knowable. But the knowable belongs to one order, and the utterable to another; just as it is one thing to speak and another thing to know.

    Saint John of Damascus - An Exposition of the Orthodox Faith

    Now John of Damascus is a saint for both Catholics and the Orthodox, but you'll see ideas like this (and going further) embraced a lot more in eastern thought, and it leads to a much different view of justification. In Latin Christendom, by contrast, the cataclysmic Wars of Religion elevated a very specific sort of rigorous, legalistic, and above all written/deductive form of "justification" as the norm.
  • Behavior and being


    I'm not really sure what point you're trying to make. A blanket renouncement of figurative language and metaphor? I don't know the context of the quote, but it certainly seems like it could be plenty meaningful, and an indictment. There is much in correlationism and representationalism's skepticism that might rightly fall into what Hegel terms the "fear of error become fear of truth," in the Preface to the Phenomenology.

    Of course, as this thread well exemplifies, skepticism, doubt, and an aloofness from wonder and truth, once philosophical vices, have become virtues in our era, the highest being the "tolerance" of "bourgeois metaphysics." I don't think this is "howling in a bear trap," so much as being happy to sit in the trap, even as gangrene sets in.

    A metaphorical critique can work here. For instance, the first of the damned that Dante and Virgil encounter in the Commedia are the souls of those who refused to take any stand while on Earth. Barred from Heaven, they are also rejected by the rebellious demons of Hell and forced to spend eternity aimlessly chasing a banner that flees arbitrarily ahead of them all around Hell's vestibule, their ceaseless pace a parody of the vigor and conviction they lacked in life.

    Pointedly, none of these are named; they have no legacy. One might be Saint/Pope Celestine, who abdicated the papacy, but I find it more probable and poetic that the one who "made the great refusal" is supposed to be Pontius Pilate, who, to dodge responsibility for killing an innocent man, responded to Christ—the Logos and the "Way, the Truth, and the Life" itself—"but what is truth?"

    See, plenty of good work to be done by metaphor and image!
  • Hinton (father of AI) explains why AI is sentient


    Reminds me of the opening of the Abolition of Man:

    In their second chapter Gaius and Titius quote the well-known story of Coleridge at the waterfall. You remember that there were two tourists present: that one called it 'sublime' and the other 'pretty'; and that Coleridge mentally endorsed the first judgement and rejected the second with disgust. Gaius and Titius comment as follows: 'When the man said This is sublime, he appeared to be making a remark about the waterfall... Actually ... he was not making a remark about the waterfall, but a remark about his own feelings. What he was saying was really I have feelings associated in my mind with the word "Sublime", or shortly, I have sublime feelings.' Here are a good many deep questions settled in a pretty summary fashion. But the authors are not yet finished. They add: 'This confusion is continually present in language as we use it. We appear to be saying something very important about something: and actually we are only saying something about our own feelings.'

    Before considering the issues really raised by this momentous little paragraph (designed, you will remember, for 'the upper forms of schools') we must eliminate one mere confusion into which Gaius and Titius have fallen. Even on their own view—on any conceivable view—the man who says This is sublime cannot mean I have sublime feelings. Even if it were granted that such qualities as sublimity were simply and solely projected into things from our own emotions, yet the emotions which prompt the projection are the correlatives, and therefore almost the opposites, of the qualities projected. The feelings which make a man call an object sublime are not sublime feelings but feelings of veneration. If This is sublime is to be reduced at all to a statement about the speaker's feelings, the proper translation would be I have humble feelings. If the view held by Gaius and Titius were consistently applied it would lead to obvious absurdities. It would force them to maintain that You are contemptible means I have contemptible feelings: in fact that Your feelings are contemptible means My feelings are contemptible...

    ...until quite modern times all teachers and even all men believed the universe to be such that certain emotional reactions on our part could be either congruous or incongruous to it—believed, in fact, that objects did not merely receive, but could merit, our approval or disapproval, our reverence or our contempt. The reason why Coleridge agreed with the tourist who called the cataract sublime and disagreed with the one who called it pretty was of course that he believed inanimate nature to be such that certain responses could be more 'just' or 'ordinate' or 'appropriate' to it than others. And he believed (correctly) that the tourists thought the same. The man who called the cataract sublime was not intending simply to describe his own emotions about it: he was also claiming that the object was one which merited those emotions. But for this claim there would be nothing to agree or disagree about. To disagree with "This is pretty" if those words simply described the lady's feelings, would be absurd: if she had said "I feel sick" Coleridge would hardly have replied "No; I feel quite well."

    When Shelley, having compared the human sensibility to an Aeolian lyre, goes on to add that it differs from a lyre in having a power of 'internal adjustment' whereby it can 'accommodate its chords to the motions of that which strikes them', 9 he is assuming the same belief. 'Can you be righteous', asks Traherne, 'unless you be just in rendering to things their due esteem? All things were made to be yours and you were made to prize them according to their value.'10

    Of course most people claim they have subjective experiences, that there is a 'whatness' to the objects of experience, etc. But, on this view these assertions really mean something like: "unless my perceptual system is in grave error, I must have subjective experience, apprehend a 'whatness' in my experiences, and have an intelligible content to my thoughts."

    I'm not sure what to make of this. On the one hand, it suggests that most people, most of the time are suffering from delusions, that our sensory systems are generally in profound error down to our most bedrock beliefs. Yet, given this is the case, why is the advocate for this radical retranslation more likely to be correct themselves? Indeed, the thesis itself seems to presuppose that it itself at least does have intelligible content, rather than simply being a string of signs correlated with some given inputs.

    At any rate, this sort of radical retranslation of what folks like Plato, Plotinus, Kant, Nietzsche, etc. really mean seems to land one in the category of hostile, bad faith translations. This is fairly obviously not what they mean. One has to have begged the question and assumed the core premise to begin with to justify such a radical retranslation.

    This is hardly a unique strategy though. Contemporary philosophy of language is filled with claims like:

    "For words to have 'meanings' it must be the case that such meanings can be explained in some sort of succinct formalism (e.g. Carnap-Bar Hillel semantic information based on the number of possible worlds excluded by an utterance). But I can't perfect such a formalism and I don't think anyone else can, thus conventional notions of meaning must be eliminated."

    Or: "Let us begin with the premises of behaviorism. Clearly, it is impossible to discover any such 'meanings,' ergo they must be eliminated."

    Well, in either case the premises in question might very well be rejected. Yet there is a tendency to go about simply assuming the controversial premises (which is essentially assuming the conclusion in question).



    What's an example of an organism choosing its motives, goals, or purposes? Aren't those things we discover rather than determine?

    Something like Harry Frankfurt's "second order volitions," perhaps?

    I would agree that purposes are, in some sense, something discovered. But they are also something we determine. At the limit, in the Platonic or Hegelian "search for what is truly best" (or Kierkegaard's pursuit of the subjective), it would be something like: "it is our purpose/telos to become free to determine our aims," with freedom as the classical "self-determining capacity to actualize the Good."



    You wrote that humans are reflexively aware of themselves. This aligns with the notion of subjectivity as consciousness, and consciousness as self-consciousness (S=S). When God was believed to be the origin of all things, he-she was deemed the true being, the basis on which to understand all other beings. When man eclipsed god, subjectivity and consciousness took on this role of true Being. An object is that which appears before a positing self-affecting subject.

    A different way to think about being is articulated by people like Heidegger. When he says that Dasein is the being who cares about his own existence, he is rejecting the notion of subjectivity as identity, as self-reflective awareness (S=S), in favor of the notion of being as becoming, as practical action. Being as thrownness into a world. This is consistent with Pierre-Normand's suggestion that the appearance of subjectivity 'emerges from the co-constitution of the animal/person with its natural and social environment, or habitat and community.'

    Yes, but a common criticism of Heidegger (e.g. from Gadamer) suggests itself here. Heidegger uses the late-medieval nominalism he is familiar with (e.g. Suárez) as the model for all prior philosophy, reading it back into past thought.

    God is not a being in prior thought though. God doesn't sit on a Porphyrian tree as infinite substance alongside finite substance for the same reason that the Good is not on Plato's divided line. E.g., "If I am forced to say whether or not God exists, I am closer to his truth in saying he does not exist" (St. Maximus), or "it is wrong to say God exists. It is wrong to say God does not exist. But it is more wrong to say God exists" (Dionysius), or: "God is nothing" (Eriugena).

    God as "thought thinking itself" (Aristotle), or as "will willing itself" (Plotinus' expansion), has a very different ring if assessed within the modern presupposition that there is something outside of act/intellect, and that subjectivity is essentially representational.


    If it cannot, then my argument that only humans and other living organisms can change their normative motives, goals and purposes would seem to fail. But I would argue that this way of thinking assumes a split between psycho-social and biological processes, ontogeny and phylogeny, nature and culture. It is now understood that behavior feeds back to and shapes the direction of evolutionary processes directly through its effect on genetic structures. This means that the biological brain-body architecture organizing human motives, norms and purposes exists in a mutual feedback loop with cultural behavioral processes. Each affects and changes the other over time. The same is true of the machines we invent, but in a different way. We produce a particular A.I. architecture, and the spread of its use throughout culture changes the nature of society, and sparks ideas for innovations in A.I. systems.

    But notice that human intelligence functions as interactive coping in contextually specific circumstances as an intrinsic part of a wider feedforward-feedback ecology that brings into play not only our reciprocal exchanges with other humans but also other animals and material circumstances. Machine 'intelligence', by contrast, does not participate directly in this ecological becoming. There is no true mutual affecting taking place when we communicate with ChatGPT. It is a kind of recorded intelligence, a dynamic text that we interpret, but like all texts, it is not rewriting itself even when it seems to respond so creatively to our queries.

    :up:
  • Behavior and being




    Harman rejects Whitehead’s relationalism for two reasons: 1) he worries it reduces ontology to “a house of mirrors” wherein, because a thing just is a unification of its prehensions of other things, there is never finally any there there beneath its internal reflections of others;

    I've only read one of Whitehead's books, but this does seem to be a problem for process philosophy in general. Of course, simply positing objects and essences does very little to fix the issue either. If the question: "why do some sorts of processes just happen to occur?" is problematic (which I'd agree it is), it seems the same sort of question would be problematic for objects, which was Srap's point earlier.

    However, this is not a problem supposing objects with natures/essences that are (perhaps relatively) intelligible in themselves and self-determining (if not self-subsistent). Yet such things, being in the order of becoming, are in some sense processes as well, although processes with an intelligible locus.

    “The ontological structure of the world does not evolve…which is precisely what makes it an ontological structure” [GM, 24]


    This is a deeper problem for process theologians, from what I've seen: the inability to avoid self-refutation by making everything mutable. Also, there is a sort of move from the directed procession of the Absolute in Hegel to an arbitrary progression (because arbitrariness is "more free" and "creative").




    Why is it that people agree on so much? I think this comes down to how norms of judgement are generated. People's eyes agree on object locations very durably, so location within a room works like that. Even if they might disagree on the true locations of objects when the rulers come out - like if my coaster is 30cm or 30.005cm from the nearest edge of my desk to me. If correct assertibility is an assay, truth is crucible.

    This seems to require that what you're saying about how people's eyes and measurements agree is actually true. If it's "norms of assertibility all the way down," then everything you've just claimed is only true relative to some contexts. Is this context universal even for all human beings? Well, according to the radical skeptics, the cognitive relativists, etc. it isn't. They reject your norms because they reject your judgements.

    When people share the same contexts, the overwhelming majority of conduct norms about such basic things are very fixed like that. That includes various inferences, like "if you put your hand in the fire, you'll burn it", "don't put your hand in the fire" comes along with that as the judgement that burning your hand in the fire for no reason is bad is very readily caused by the pain of it.

    Yes, but this doesn't speak to the very many cases where people don't agree, and have radically different assertibility criteria. Consider, for instance, the difference between a radical fundamentalist, who thinks their literalist interpretation of the Bible or Koran is the ultimate standard of truth, and a follower of atheistic scientism. They have incommensurate assertibility criteria. Are they then both speaking truth when they assert contradictory claims that meet their disparate criteria?


    In the latter regard, there's room for a moral realism in terms of correct assertibility, since the conventions are so durable and there's room to claim that "needless harm is bad" is true.

    Sure, there is room to "claim" that "raping and torturing like the BTK killer is bad" is true. But there is clearly room in our modern discourse to claim the opposite, i.e., that moral nihilism or extreme relativism is the case. Indeed, people claim these sorts of things all the time; they are extremely popular assertions in the context of our current norms.

    So is torturing children for fun bad? Is the truth of this something that changes from time to time and place to place, based on the norms in vogue? If norms decide this, then it obviously does, since norms concerning child slaves were extremely permissive through much of human history.

Count Timothy von Icarus
