Comments

  • Inherent subjectivity of perception.
    I wonder, at what point does the agreement, "there is a truth," degenerate into the disagreement "this is the truth?"Pantagruel

    The Peircean answer is when it becomes "my truth" rather than "our truth".

    Language binds us as social animals to a collective identity, a communal point of view, a culturally-constructed model of "the self". So "truth" becomes that to which a community of inquirers practising practical reasoning would tend.

    The community of inquiry is broadly defined as any group of individuals involved in a process of empirical or conceptual inquiry into problematic situations. This concept was novel in its emphasis on the social quality and contingency of knowledge formation in the sciences, contrary to the Cartesian model of science, which assumes a fixed, unchanging reality that is objectively knowable by rational observers. The community of inquiry emphasizes that knowledge is necessarily embedded within a social context and, thus, requires intersubjective agreement among those involved in the process of inquiry for legitimacy.

    https://en.wikipedia.org/wiki/Community_of_inquiry

    Pragmatism navigates the middle path between the extremes of relativism and positivism, or idealism and realism.
  • Inherent subjectivity of perception.
    That's an interesting assumption. Nothing more.Banno

    As I have argued, it seems more like basic psychological science. Understanding cognition and "truth making" has been a major human endeavour of the last 100 years.

    So I am curious. What is your actual model of the psychological processes that are in play when we utter propositions that speak of the world? Show by telling how you are not merely making the familiar errors of the naive realist.

    Psychology feels it has it well worked out. What is it that you dispute and why?
  • Inherent subjectivity of perception.
    Mine fits with my understanding, I'm sure yours fits with yours.Pantagruel

    Heh heh. The flat earther is the ultimate naive realist. Their point of view is the most subjective possible. It avoids all revisionist fact.
  • Inherent subjectivity of perception.
    As perception is the recognition of something already learned, then, how to perceive objective information, when subjectivity (its antithesis) lies in perception?Marax

    One point to consider is that perception is possible because it is the brain forming a model of the world with "you" in it.

    The brain is not trying to see the world "as it is" in some actually objective sense. That would be silly and useless.

    Instead, it is learning to construct a point of view in which "you" are being experienced as being an actor in a "world" that makes sense in terms of all that you could do (or not do).

    So in one sense, it is all subjective - the idealist trope. There is only the model in your head. The model that is of "you" in your "world". But it is also all objective. There is actually the world. And now there is also this further objective fact of a creature with a brain and a set of intentions, running about doing material things in causally effective fashion. The self may be an idea - the construction of a viewpoint - but there are physicalist consequences that flow from the reality modelling.

    This is the view of perception that is now standard in the enactive or embodied approach to cognition.

    An act of perception has to construct two things: the world that is being seen, and the self that is the anchoring locus of that seeing.

    The fact that both are a dynamical construction - two aspects of the one co-construction - can be demonstrated by what happens to you in a sensory deprivation chamber. A lack of feedback from the world also results in a depersonalisation of the self. Our physical boundaries disappear when we no longer feel the world in resistance to our actions.

    So perception is the act that leads to a stable seeming world and a stable seeming self as the two sides of the same process of constructing a "meaningful point of view". A model of the world with us as its centre.

    And that is why in perception the useful "information" is not objective. It is really a semiotic system of signs. The sensory system is set up from the get-go to deliver only the differences that make a difference.

    The brain has a model of the world in terms of what it has learnt to expect. And so it is then primed only to react to the physical events - the patterns of sensory energy - that could count as unpredicted and surprising.

    So this is another reversal of the usual view that the brain is a computer that must crunch its input - that objective physical information.

    In fact it processes the world the other way round. It has already decided how the world should be if nothing surprising or meaningfully different happens. And from that very self-centred perspective, anything actually novel or unexpected must leap out as the aspect of reality to quickly analyse and assimilate as best as possible to the running model of the self~world relationship.

    So the self predicts the world. The world is perceived ahead of time at the level of a perceptual habit or expectation. And then the act of perception is completed by a discovery of what failed to be foreseen and now has to be assimilated to re-stabilise that sense of being in the world as a causally-effective agent.
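
    As a toy illustration of that "predict first, assimilate the surprise" loop - a minimal sketch of the general predictive-coding idea, not any particular neuroscience model - the whole story fits in a few lines of Python:

      # Toy predictive-coding loop: the model predicts its input,
      # and only the prediction error ("surprise") drives updating.
      learning_rate = 0.2
      prediction = 0.0                                # the running expectation

      def perceive(sensory_input, prediction):
          error = sensory_input - prediction          # only differences matter
          prediction += learning_rate * error         # assimilate the surprise
          return prediction, error

      for x in [1.0, 1.0, 1.0, 5.0, 1.0]:             # a stream with one shock
          prediction, error = perceive(x, prediction)
          print(f"input={x} prediction={prediction:.2f} surprise={error:+.2f}")

    The errors shrink as the input becomes predictable; the one novel value leaps out as a large "surprise" term that the model then works to absorb.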

    Perception is more about filtering out the actual world - as that blooming, buzzing, indeterminate confusion - so as to construct a perfectly self-centred point of view where there is a world that makes complete sense in terms of our intentionality or agency.

    Perception is as much a business of making an intelligible self, as making an intelligible world, in short.
  • Inherent subjectivity of perception.
    If you are standing at the antipodes of the globe, and the ball is dropped, then what is it's direction, relative to you?Pantagruel

    Zing!
  • Definitions
    In fact, I could prove to you with fMRI, that Pavlovian response triggers, even if they're words, pass neither through the ventral pathway of object recognition, nor through the areas of the cerebral cortex where we might expect with some concept recognition, but rather straight to the sensorimotor systems to get you to duck.Isaac

    Interesting example. What does it highlight then? For me, it demonstrates the developmental trajectory from iconic to indexical to fully symbolic levels of language. And how this becomes so as novelty (which would demand the whole brain being applied) becomes reduced to the simplest habit (where the brain simply emits a response without conscious deliberation).

    You have to wonder how “duck” became a word that could mean get your head out of the way fast? I would guess it arose iconically. The image I have is of the way a duck bobs its head. So there would have to have been some process of habituating that image within a language community - distilling it down to a learnt motor pattern where not stopping to consider the imagistic analogy was a major part of the deal.

    Shouting “magpie” might be a more meaningful command where I live. They have a habit of actually going for heads.

    But anyway, a key thing about symbols is in fact their lack of direct representation of anything they might represent. We call a duck a duck rather than a quack quack. The four letters and the sound they make could be the symbol for any habit of thought or behaviour. And that is precisely why they are so meaningful once we associate them with just the one (general) habit of interpretation. If some word noise is intrinsically meaningless, then that makes our employment of it the most purely symbolic. It is rid of the iconicity or indexicality it might otherwise have.

    So my point seems to be that we have a process of refinement going on. And the different views on what language is can arise from focusing on either pole of its developmental trajectory. Both sides can feel right as there is evidence for opposing views depending on whether one focuses on the early iconic and imagistic stages, or the late symbolic and unthinkingly habitual stages.
  • Evolution of Logic
    It’s been 30 years since I was digging into that particular literature so I am sketchy on the details. And I chucked out all the papers long ago.

    This is an example of the kind of thing though - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4206216/#!po=6.09756

    The point was not that great apes couldn’t master a first step of reasoning - the equivalent of a disjunctive syllogism where the ape could tell that if one food reward cup was empty, then the treat was hidden in the other. It was that once you started adding one such rule on top of another, performance fell off fast. It was too much working memory load to keep more than one contradiction in mind.
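
    For reference, the single step being tested is the disjunctive syllogism, which in standard notation is just:

      $(A \lor B), \quad \lnot A \;\vdash\; B$

    That is: the treat is in cup A or cup B; cup A is empty; so it is in cup B. Chaining such rules means holding each intermediate conclusion in working memory as a premise for the next step - and that is where performance collapsed.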
  • Evolution of Logic
    Is fundamental logic instinctual to organic cognition as a function for processing certain types of spontaneous causality?Enrique

    Experiments have been done to test apes for a capacity to learn simple logic rules. The evidence is they struggle to master more than a step or two of reasoning depth even with training.

    This is what we would expect if logic basically piggybacks on the human capacity for language. We have the neurology for syntactic structure - the recursive grammar trick. We can stack up the if/then steps in our working memories.

    Just speaking is proto-logical in forming our thoughts as grammatically structured causal tales of who did what to whom - the canonical subject-verb-object pattern that organises all language (if not necessarily in that order).

    And if speech acts can be true, then they can be false. Think of the social advantages that came with the invention of lying. A lot of the elements of logic as an explicit reasoning discipline are there once we have speech.

    But logic itself is then a culturally developed habit. Anthropological research showed that illiterate Uzbek herdsmen resisted categorising the world in ways that seem “obvious” to any educated modern person - like putting a set of tools such as an axe and hammer separate from the things the tools acted on, like nails and wood. In their experience, those things went together and there was no abstract distinction that made sense.

    Anyway, this has been a topic of a fair amount of research. Human brains are preadapted for logic due to the recursive or nested structure of grammar. And then actual logic has developed as a useful cultural habit of thought. It becomes embedded through the standard modern childhood. (With various degrees of success perhaps.)
  • Is space/vacuum a substance?
    Probability is not consistent with the three laws, when maintained as three, because identity of an object gives us determinateness.Metaphysician Undercover

    The conclusion I draw is that yes, we can't presume complete determinism. But nor do we then need to lapse into complete indeterminism.

    Pragmatism is the middle path of constructing a theory of logic in which indeterminism is what gets constrained.

    As an ontology, that says reality is foundationally indeterminate, and yet emergently determinate. And the determinate aspect is not merely something passively existent (as often is taken to be the case with emergence - ie: supervenient or epiphenomenal). It is an active regulatory power. The power of emergent habit. The power of formal and final cause to really shape indeterminate potential into an actualised reality.

    So it is a logical system large enough to speak of the world we find ourselves in - complete with its indeterminate potentials and determining constraints.

    Further, the author of your referred article, Robert Lane, explains how Peirce allows that the term of predication might be defined in a multitude of ways.Metaphysician Undercover

    Again, I am taking the systems view of ontological reality. So the internalist approach that Peirce takes on this would be the feature, not the bug. I'm still digesting that aspect of Lane's argument, but that was one of the sharp ideas that grabbed me.

    Notice how Robert Lane provides no indication, throughout that article, as to how Peirce shows any respect whatsoever to the law of identity in his discussion of the LNC and LEM.Metaphysician Undercover

    There is equivocation here on Peirce's part because his logic of vagueness was a project still in progress.

    His early work was couched in terms of Firstness - free fluctuations. But as we have discussed, a fluctuation already seems too concrete and individuated. Formal and final cause appear already to be playing a part by that point. A fluctuation has to be a fluctuation in something - or so it would seem.

    This is precisely the obvious hole in the vogue for accounts of the Big Bang as simply a rather large quantum fluctuation. Even if a quantum field is treated as the most abstract thing possible, the field seems to have to pre-date its fluctuation. Verbally at least, we remain trapped in the "prime mover" and "first efficient cause" maze you so enjoy.

    But he was recasting Firstness as Vagueness in later work. And we can see that in his making a triad of the potential, the actual and the general - as the mirror of the three stages of the laws of thought.

    A fluctuation is really a possibility. A spontaneous act, yet one that can be individuated in terms of the context it also reveals. We are nearly there in winding our way back to bootstrapping actuality.

    A step further is "potential" properly understood as a true vagueness. A fluctuation is a spontaneity that is not caused by "the past". It is called for by the finality of its own future - the world it starts to reveal. This is one of the things that smashes the conventional notion of time you prefer to employ.

    But anyway, when it comes to the law of identity, it is enough for everyday logic that reality is already reasonably well individuated - at least in the ways that might interest us enough to speak about it. The law of identity can work even if any instance of individuation is merely a case of uncertainty being sufficiently constrained.

    However when we get to ontological questions about the machinery of creation, then this background to the laws of thought becomes relevant. The details of how things really work can no longer be brushed under the carpet, or shoved in a black box labelled "God".
  • If Brain States are Mental States...
    I don't see the issue. You haven't refuted that brain state language is scientific. You seem to be saying why it's scientific. I'm just claiming it IS scientific.RogueAI

    It's a big deal to me as it was the central issue I was dealing with when I first ventured into mind science as a youth. :grin:

    Artificial intelligence was the first great disappointment. The guys were only talking about machines it turned out. And then brain imaging promised to be the new revolution. Consciousness would be put on the neuroscientific agenda at last as a concord had been agreed with philosophy of mind.

    We would all be starting off in humble fashion by merely identifying the neural correlates of consciousness (NCC) - a dualistic approach where the material explanation in terms of a physical state would be married to the reportable phenomenology produced by a mental state.

    But that great and expensive exercise produced remarkably little directly. It just brought home how muddled people were in their conventional Cartesian division of reality into physical and mental states. All that could result was a doubling down on the underlying dualistic incompatibility of descriptive languages.

    So science did have to question what was "scientific". And for a start, that meant no longer speaking of the brain as a machinery with physical states. It had to be some kind of embodied information process - but information processing is another domain of jargon founded on a mechanical "state-based" ontology. Neuroscience couldn't make progress by swapping out a biological mechanism and wheeling in a computational one. That still just left it chasing the phantom of a purely mechanical explanation.

    Long story short, you now have generic models of the "mind~brain" in terms of Bayesian Brain theory and the enactive turn within cognitive psychology, not to mention social constructionism being brought into play to account for the extra features of the human mind~brain system in particular.

    So the science here is a shifting beast.

    Neuroscience was doing a god-awful job in the 1970s as it was basically a branch of medical science and so absolutely wedded to a mechanist ontology. To fix your schizophrenia, the best theory might be to kick your head hard enough that you might repair it like thumping a TV (back when TVs had vacuum tubes and loose connections, so it could work).

    But does modern neuroscience still try to explain the mind - as we humans like to say we experience it - as "talk of neurotransmitters, synaptic gaps, certain chemicals, etc"?

    I would certainly question that. A big picture scientific account would use the jargon appropriate to the whole new level of theorising that has emerged over the past 20 years or so.

    If someone tells you they're in pain (a mental state word, obviously), they've communicated information to you. You know more now than you did before they talked to you. That's meaningful communication.RogueAI

    Exactly. This would be defining "mental vocabulary" in terms of what works in ordinary social and culturally appropriate settings. It is a way of co-ordinating and regulating "other minds" within a shared "mental space" of pragmatically social meaning.

    The problem lies with the extent to which this folk psychology - very useful in the business of existing as a social creature - gets reified as some kind of deep philosophical wisdom. I have "a mind". I can see you have "a mind". Maybe a cockroach has "a mind". Maybe the Cosmos too? Maybe "mind" is another substantial property of reality - a soul stuff - like Descartes suggested.

    So what contrasts with the scientific vocabulary? Is it a folk psychology vocabulary? A religious vocabulary? A mystical vocabulary? Where does all this mind talk come from?

    Good anthropological studies show just how culturally specific our own philosophically-embedded mind talk actually is. The Ancient Greeks played a large hand in inventing it as it had a pragmatic use - it gave birth to the cultural notion of a person as a rational individual who could thus play a full part in a rationally-organised society. It was a way of thinking with powerful results in terms of evolving human social structure. A seed was planted that really took off with the Enlightenment and Scientific Revolution (and which, with its flowering, engendered its own counter-revolution of Romanticism and Idealism).

    So mind talk also has its instructive history. It has its pragmatic uses and has continued to evolve to suit the expression of them.

    A greater compatibility between the two sources of language might be a good thing.

    But for me, the irony there is that mind science has to move away from the machine image and become more used to discussing the brain in properly organic terms. While folk psychology also needs to make its shift away from the "dualistic substance" shtick that mostly just ends up aping the errors of an overly-mechanical model of reality.

    Even to oppose the subjective to the objective means you have to buy into the existence of the objective (and vice versa).

    The distinction may be pragmatically useful. People seem to like that sharp separation between the world of machines and the realm of the mind. But the mind~brain question is about whether this distinction is real or merely our pragmatic social model of the reality.

    Neuroscience has pressed on to deliver answers I am much more comfortable with these days. Dropping talk of "states" is part of that change. Or rather, always framing the word states in quotes to acknowledge the presumptions we have put into play just there.
  • If Brain States are Mental States...
    No, but every chemical state is identical to some physical state. But not the other way around: not every physical state is identical to some chemical state.Pfhorrest

    That's just restating supervenience as a claim. The claim only holds if "states" actually exist in the world rather than in the scientific imagination.

    The language of states - as part of the language of machines - is certainly a pragmatically useful way of looking at reality. If we frame the facts that way, we have an engineering blueprint we can deal with.

    But "states" is a pragmatic construct. And the reality we encounter often doesn't fit that construct so well. The map ain't the territory. And so claims of supervenience must be regarded as having a logical force only within a particular reality-modelling paradigm.
  • If Brain States are Mental States...
    2. Brain state vocabulary is scientific.
    ...
    6. Bob and Sheila can meaningfully communicate about mental states.
    RogueAI

    The flaw in the argument would be the suppressed premise: what kind of communication is the second kind?

    If brain state vocabulary is "scientific", it needs to be said what class of vocabulary is instead employed to talk about mental states. Is it merely "unscientific" (a vague contrary claim)? The argument needs to clarify in what way such communication could be meaningful.

    Scientific vocabulary is meaningful in its pragmatic application. If we talk about the world generally as a machine, and thus the brain as a specific kind of mechanism, then the pragmatic effect of this form of language is that - implicitly - we should be able to build this damn thing.

    We are viewing the conscious brain as an example of technology - natural technology - that we can thus hope to replicate once we put what it is and what it does into the appropriate engineering language.

    So "scientific" vocabulary isn't neutral. It has meaning in terms of what it allows us to build. It is all about learning to see reality as a machine (a closed system of material and efficient causes).

    Of course, science is a broad enough church that it doesn't have to reduce absolutely everything to mechanism. And the aim can be also to regulate flows in the world as a substitute for making a machine. Engineering covers that gamut.

    But you see the issue. Brain state language is itself a reflection of a particular reason for describing nature. It aims to extract a blueprint of a machine.

    Then where does mental state vocabulary fit into the picture? In what sense is it meaningful to someone or some community of thinkers? What is the larger goal in play?

    To be commensurate, the two linguistic communities would have to share the same goal. And they are going to be talking at cross-purposes to the degree that they don't. And in both cases, they may be talking meaningfully (ie: pragmatically), but also, they are both just "talking". They are both modelling the noumenal from within their own systems of phenomenology.

    10. Therefore, (1) is false.RogueAI

    The conclusion can't be so definite, as "mental state vocabulary" is too ill-defined here. What makes it meaningful?

    [Note that a social constructionist - as a scientist - would have plenty to say about how humans do use "mental state" language as a pragmatic means of regulating their (social) environment. We talk about our emotions all the time - love, jealousy, boredom, happiness. But are these "feelings" or "culturally meaningful rationalisations"? Even a phenomenologist would examine "feelings of love" and find a whole lot of unreferenced physiological responses that seem fairly aligned with a counter view of the brain and body as "a machine".]
  • Definitions
    The whole "meaning is use" shtick is not wrong, but clearly also not the final word on language as a semiotic phenomenon. We want a properly general theory that covers "everything" in a nicely totalising fashion.

    Such a general theory is that language is about placing limits or constraints on uncertainty. And precision - a "definitional" strength usage - boils down to a sign (a verbal construction) dividing the world with a logical force. The sign has to split the thing in question into what it is, in terms of what it is not. If the goal is precision, this is arrived at via the logical machinery of a dichotomy.

    So this gets at the pragmatics of what language does, why it has such extraordinary power, but also why it is at root a vague business.

    The meaning of any locution is a game. The words could be taken to mean "anything". But what they mean this time is how they function to divide the uncertain world into some binary Gestalt opposition of figure and ground, event and context.

    If there is pointing going on - and in some sense there always is - it is a pointing to some relatively defined thing, but a pointing that involves also pointing away from its "other", the holistic context needed to construct the thing as "that thing".

    So a locution is relative in its logical claim. It is only precise to the degree that it precisifies the "other" - the negative space, the context - that must also be "spoken of" in a definite fashion.

    The fact that language is often not used with that level of precision is a reason why meanings or definitions - the "right habits" of interpretation - feel unstably communicated.

    And there is then the deeper ontological point that the world itself is uncertain or probabilistic. It resists accurate definition because it is not some naive realist "state of affairs" or "set of concrete particulars". It actually is vague or unstable. And language - aiming at crisp bivalence - is simply cutting it to fit its Procrustean bed.

    Offering a definition, at least in part, is informing people of how a word is intended to be used.bert1

    My argument is that language - as a semiotic tool - has this natural goal. It wants to do the powerful thing of regulating nature. And power is maximised by binary precision - the logic of the dialectic. Being able to present a "precise definition" is thus a demonstration of one's mastery of language as just such a tool.

    But we should also remember that we can only point towards something if we are simultaneously seen to be pointing away from something - its dichotomous "other". That is the act that reduces the most uncertainty or entropy, creates the most meaning or information.
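
    To put a hedged number on that claim, in Shannon's terms the information gained from resolving a set of alternatives is the entropy

      $H = -\sum_i p_i \log_2 p_i$

    and a single yes/no sign can resolve at most one bit - a maximum achieved exactly when the dichotomy cuts the possibilities evenly, at $p = 1/2$. The even binary division is the most informative cut available to a single sign.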

    And we should remember that the world we speak about is not itself so crisply divided into a clutter of parts. We do violence if we cut across the actual holism of the world which - pansemiotically - can be regarded as itself a system of signs. A "conversation" nature is having with itself so as to impose a relatively bivalent state of organisation on its own fundamental uncertainty.

    The semiotic view of language is thus a truly general explanation of language as a phenomenon. It is how the Cosmos itself organises in principle.

    Anyway, summing up, language is a technology of reality stabilisation. For us humans, words allow us to co-ordinate our Umwelts - share our points of view.

    There is always irreducible uncertainty in every stab at creating such a state with words. And yet the dialectical logic - the way words can act as a binary switch - means it is possible to aim as high as we like in asserting some precise state of affairs. While also, the pragmatism of language - the fact that we are social beings operating at many levels of "world-making" - is where the "language games" shtick comes in. Our practical purposes may be quite low brow when shooting the shit with mates.

    Language gives us the means to aim as high as we like. But by dichotomistic definition, the same means can be used to be as vague and ambiguous as one wishes. On the surface, the two can pretend to look like the same thing. :wink:
  • Definitions
    What's your problem in answering this question exactly?

    Does one say “not everything” to mean “almost nothing”? Or to mean “well, there are exceptions”?

    No need to get so huffy. Just tell us what your words mean. Point to the right answer. :grin:
  • Definitions
    Do you have a point?Banno

    Clearly. Will you pretend not to see it? Undoubtedly.
  • Definitions
    If...Banno

    Does one say “not everything” to mean “almost nothing”. Or to mean “well, there are exceptions”. A simple exercise in the logic of quantifiers one might have thought. Apparently not.
  • Definitions
    If you say that isn’t the logical corollary of your proposition, then you are agreeing the statement was vague.

    Sounds legit.
  • Definitions
    Sure. But not every thing we do with words is pointing.Banno

    So the corollary is that every thing we do is largely pointing? Harry is thus largely correct?
  • "Turtles all the way down" in physics
    It confirms my interpretation that when you evoke "a sea of U1 photons", you are talking about the unreachable limits between which time and the universe happen.Olivier5

    I’m not sure what you mean there. For what I have in mind, refer to figure 6.2 in this excellent paper - https://www.mso.anu.edu.au/~charley/papers/LineweaverChap_6.pdf
  • "Turtles all the way down" in physics
    I was trying for a simple explanation. Obviously too simple.

    As the gif illustrates, the plane of rotation is orthogonal to the plane of translation. The sine wave is then observed as a trace on the plane of translation. So it is a helical path mapped on to a side view using complex numbers. A combination of the two kinds of motion taking place and the projection of that onto a plane of observation.
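
    In the usual complex-number notation - a standard identity, nothing specific to the gif - the picture is:

      $z(t) = e^{i\omega t} = \cos\omega t + i\sin\omega t$

    A point rotating in the complex plane while translating along an orthogonal axis traces the helix $(\cos\omega t, \sin\omega t, vt)$, and the side view - one real component of that helix - is the sine wave.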
  • "Turtles all the way down" in physics
    A dot on a circle spinning along a straight line traces a cycloid.Olivier5

    The disc rotates around its own centre while rolling along the line. Check the gif I linked to.
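
    For completeness, a mark on the rim of a disc of radius $r$ rolling along a line traces the standard cycloid:

      $x(t) = r(t - \sin t), \qquad y(t) = r(1 - \cos t)$

    Note that the height $y$ is already sinusoidal in the rolling angle $t$; only the horizontal coordinate distinguishes the cycloid from a pure sine trace.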
  • Is space/vacuum a substance?
    This is exactly what I was talking about. If you take the laws of non-contradiction and excluded middle out of context, remove them from their relationship with the law of identity, you no longer have anything to ground truth or falsity in, no substance. Without identity truth and falsity is not relevant.Metaphysician Undercover

    You are reading it backwards. A logical definition of vagueness (and generality) is what helps ground your desired "truth-telling" apparatus. It tells you the conditions under which the laws of thought will fail - ensuring you do what is needed to fix those holes.

    So you have to establish that you are dealing with a concrete case where a binary judgement can apply. The thing in question has to be that thing and no other thing. You can't simply presume it. You have to check it.

    But that is then why you need a pragmatic definition of "truth". One that has measurable consequences.

    Theism routinely by-passes that constraint on logicism. God becomes a concept so general that nothing is impossible of Him, a concept so vague that anything can be taken as evidence of Him.

    There is evil in the world? It's put there as a test. You recovered from your heart attack? It was the power of prayer. But your dead neighbour prayed too? God probably knew he was a paedo.

    You are treating the laws of thought as if they are Platonic abstractions. Peirce was concerned with rooting them in the reality of the world. And so defining when a rule does not apply is necessary to being able to define when it actually does.

    We can never simply assume that the law of identity has been truthfully applied, Peirce was correct in this, and it's the starting point for skepticism.Metaphysician Undercover

    Exactly. But having started skepticism going, we then need to rein it in appropriately. And that is what this is about.

    We want to avoid the two errors of credulity and unbounded skepticism. We want to be like scientists and say, as far as our model goes in terms of the measurements it suggests, the theory is probably true.

    Peirce also did critical work on probability theory so that exact numbers could be put on the relative likelihood of something being false rather than true. His was a system of logic with methodological consequences.

    It makes no sense to conclude that the law of identity cannot be applied, because that just demonstrates a lack of effort.Metaphysician Undercover

    Again, yes. And what does that effort look like?

    (Reveal: Pragmatism rather than theism!)
  • "Turtles all the way down" in physics
    The way I understand your take, this perfect simplicity at at begining and end of time is still an unreachable limit, a state of affairs that never actually happened at any point in time.Olivier5

    It is more complicated. The simplicity is about the Universe being in a state of perfect thermal equilibrium. That means its expansion - or its "cooling by expanding" - is "adiabatic". The system grows, but does so in such a smooth and even way that its internal balance isn't broken or disturbed. It retains its simplest possible state.

    At the very moment of the Big Bang - given we are assuming it is represented accurately by the Planck limits - it would have had that thermal balance. But it almost immediately got disrupted by the quick succession of symmetry-breaking steps represented by the Standard Model of particle physics - SU(3), SU(2), U(1), and the Higgs. And so the smooth flow - the development of the Universe as a spreading space containing a cooling material - got disrupted.
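
    The adiabatic claim has a standard quantitative form. For a radiation-dominated universe with scale factor $a$:

      $T \propto \frac{1}{a}, \qquad S \propto a^3 T^3 = \text{constant}$

    The bath cools as it expands while its total entropy stays fixed - growth without any disturbance of the internal balance.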

    This is important for how we even imagine "time". A universe that is just the simplest thing of a spreading bath of radiation is essentially timeless. All action happens at the speed of light - c. And so there just isn't anything different to measure that temporal rate against.

    It is only when particles become "massive" - once the Higgs field in particular gets switched on by the messy symmetry-breaking and particles begin to "weigh" something - that time has the kind of meaning it has for us conventionally. Mass makes it possible for particles to go slower than c. They can even be "at rest" from the right inertial frame perspective.

    In effect, mass makes particles fall out of the general adiabatic flow - the spreading bath of mass-less radiation. There is some part of the initial heat of the Big Bang now lagging behind as crumbs of matter. A gravitating dust, as cosmology puts it. And that - emergently - gives us a new kind of temporal potential to be unwound.

    The mass is moving about slowly as lumps of energy density. It is taking a variable time to do things - have interactions like lumps crashing into each other - while radiation, as the constant backdrop, continues to move at its single rate of c.

    Ultimately - to erase this particular complexity and return to the maximal simplicity of a bath of adiabatically expanding radiation - all the lagging mass particles need to be swept up and boiled away into radiation by black holes. Time as we know it - a spectrum of possible rates between "rest" and c - will then disappear. There will be only a simpler kind of time that is the universal rate of an unchanging thermal equilibrium - a world of event horizons formed by light cones.

    Now we get into the really head-spinning topic of de Sitter models. So I won't start that.

    But the point is that "time" is defined by change. Or the ability for change to happen at a variety of different rates within the one world. And time as we know it only exists from soon after the Big Bang until about the Heat Death - that being the period of Cosmic history during which particles could be massive and so go relatively slower or relatively faster within the gradient of rates defined by the opposing limits of "rest" and c.

    At the Heat Death, this kind of gradient will have been erased. Only a continuing c-rate flow will continue. But that is a kind of frozen state of no effective change. A vanilla and featureless state. So a state that is both eternal and timeless - at least viewed relative to our current "timeful" view of things where differences in rate are a thing that can matter.

    Yet when viewed overall - as a trajectory from a point-like Big Bang to a de Sitter light cone Heat Death space that "freezes out" at a scale of 36 billion light years in diameter - there is clearly some other notion of "time as a global change in state" to be had here.

    Something did happen. One view of it is that a lot of "hot stuff" - the initial Big Bang energy density - got exchanged for a matching amount of "cold stuff", all the vast empty space that could act as a sink in which that heat content could be "wasted".

    But even that is a rather too simple description of the actual deeper simplicity in which the two things of spacetime and energy density would be unified under the description of a theory of quantum gravity or "theory of everything".

    Time, as we conventionally imagine it - a Cartesian axis marked by divisions, an unbounded sequence of instants - gets radically rewritten as we go along here. Time becomes merely an effective "dimension", an emergent distinction. Time is only another way of talking about the possibility of change. And change is always relative to the possibility of stasis.

    The simpler the state of the Universe, the less meaningful difference there is between change and stasis. The definition of an equilibrium is a state where differences don't make a difference. In an ideal gas, every particle changes place freely. But overall, the temperature and pressure stay the same. The gas is effectively in a timeless state.

    So our very notion of time has concealed complexity. It is not a physically simplest state as we normally conceive it - living in a world that has lumpy mass blundering about at any old speed between rest and c.

    At least with space, we accept it has the complexity of three dimensions. Maybe many more with the higher symmetry states modelled by String Theory.

    But with time, we brush all its complexity under the carpet by just modelling it as a single extra "dimension" against which everything can be measured in some abstracted fashion.

    This is why time is the issue to unpick in arriving at a final theory. And thermodynamics - as the probabilistic view of nature, the laws that deal with emergent statistics - would be the key to that.
  • Is space/vacuum a substance?
    Further, the laws of non-contradiction and excluded middle, provide guidelines as to what we can truthfully say about any identified object.Metaphysician Undercover

    That is rather the point. Peirce was highlighting the presumption you have “truthfully” identified an object. Some concrete particular under the first law. And he was drawing out the logical implications of the corollary - the case when the principle of identity doesn’t apply.

    Perhaps a more scientific pair of definitions would be that anything is 'general' in so far as the principle of the excluded middle does not apply to it and is 'vague' in so far as the principle of contradiction does not apply to it.

    Thus, although it is true that "Any proposition you please, 'once you have determined its identity', is either true or false"; yet 'so long as it remains indeterminate and so without identity', it need neither be true that any proposition you please is true, nor that any proposition you please is false.

    So likewise, while it is false that "A proposition 'whose identity I have determined' is both true and false", yet until it is determinate, it may be true that a proposition is true and that a proposition is false.

    C.S. Peirce, 'Collected Papers', CP 5.448
  • Is space/vacuum a substance?
    If you can show me how knowing the truth is possible when the PNC is violated, then I might give up that necessary presumption.Metaphysician Undercover

    The PNC is not about "truth". It is about "validity". Or indeed, merely about "computability".

    So let's take the deflationary tack here.

    The PNC could apply to a world of definite particulars - a mechanical realm of being. It just is the case (it is the ontological truth) that identity has this binary quality of having to be one thing and not its "other". If that is how we find reality, the PNC is a good metaphysical model. We might build in that strong presumption as a given.

    But it is quite reasonable to question the claim the world in fact is divided quite so crisply. Indeed, that is the very thing that quantum indeterminism has challenged in the most fundamental way. If two particles are entangled, there is no fact of the matter as to their individual identity. They happily embody contradictory identities - until the further thing of a wavefunction collapse. A thermal measurement.
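
    The canonical textbook example is the two-particle spin singlet:

      $|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle\big)$

    Neither particle owns a definite spin of its own; only the pair as a whole has a definite (anti-correlated) description, until a measurement collapses it into one branch or the other.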

    So right there is a canonical modern example of how reality is vague (a quantum potential in which identity is accepting of contradictions). But then - emergently - it can also evolve a binary crispness. The PNC now applies. A definite measurement one way, and not the "other", can be entered in the ledger of history.

    So a logic of vagueness, in which the PNC becomes an emergent feature of classical reality, has direct empirical proof now. Peirce was right in his willingness to question some ancient metaphysical "truths".

    The PNC remains a useful tool because we also know that wavefunctions do get collapsed. Well, that is if you can move past the no-collapse quantum interpretations and accept a thermal decoherence model of time itself. :wink:

    But anyway, wavefunctions do collapse and so the PNC does apply from a classical perspective. Yet we then need a logic of vagueness to account for how the PNC could emerge from a ground of being, a ground of quantum indeterminism, where it patently doesn't.
  • Is space/vacuum a substance?
    Your approach is, who cares if this naturalist metaphysics leads us into contradiction...Metaphysician Undercover

    It only contradicts some assumptions you take as axiomatic to your theism. The PNC is a case in point. A belief in some Newtonian and non-thermal model of time being another.

    It is good that your theism is constrained by the attempt at a self-justifying metaphysics - a rational logical structure. And I agree that conventional scientific metaphysics - being overly reductionist - fails palpably to have this kind of causal closure.

    But that is why pragmatism – particularly in the Peircean sense - is the royal route to "truth". It combines that causal closure of the formal metaphysical model with the empirical checks that are needed to be able to say the resulting metaphysical model indeed predicts the world as we can observe or measure it.

    Your reaction to Peirce's relaxation of the PNC is telling. He makes the PNC an emergent limit whereas you cling to it as a brute fact. You need it as an input to construct your system. Peirce showed it to be a natural outcome of any kind of systematic development of a "rational cosmos".

    Sure, you can have an argument against that. But it has to be better than: "I don't like the challenge it creates for my necessary presumptions".
  • "Turtles all the way down" in physics
    though i'm unclear about the U1 part.Olivier5

    U(1) is just the simplest possible symmetry group. It is the symmetry of a rotating circle. And nothing is more symmetric than a circular object.

    If you have a sphere, it always looks the same no matter how you rotate it. That is why the Greek Atomists imagined atoms as little spheres - the simplest material form.

    A triangle (or tetrahedron) would have a more complex symmetry. The smallest turn of a triangle makes a visible difference. You can see right away something has moved. It is only after a 120 degree rotation that the triangle maps back on to itself as if nothing in fact changed.

    Compare that to spinning a circular disc - one that has no marks to give the game away. Nothing visible ever changes no matter how furiously it is turned. The disc could be standing still for all you can tell.

    Photons - as avatars of electromagnetism - have this simplest rotational symmetry. A sine wave is the trace carved out by a mark on the circumference of a disc as it rolls along for a length. So a photon - understood as a ray with a frequency - is just the simplest way to break the simplest state of symmetry.

    It makes use of the two irreducible freedoms of nature under Noether's Theorem - rotational and translational symmetry. A photon rotates once and rolls one length - as the minimal definition of its existence.
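
    In standard notation, U(1) is just the group of phases, and a plane-wave photon mode combines exactly those two freedoms:

      $U(1) = \{\, e^{i\theta} : 0 \le \theta < 2\pi \,\}, \qquad \psi(x,t) \sim e^{i(kx - \omega t)}$

    with $\omega t$ supplying the rotation and $kx$ the roll along the direction of travel.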

    At the Planck scale, such an electromagnetic event - a U(1)-expressing rotation + roll that marks a single wave-like beat of "hot action", something energetic happening - clearly happens in an unusual place.

    Being confined to a spin and roll limited to a single Planck distance, it would also be the shortest-wavelength, hence hottest, event ever to exist. And energy being equivalent to mass, it would also be the most gravitationally massive possible material event - so would curve the spacetime around it to a black hole extreme.
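
    The Planck limits being invoked are the usual dimensional combinations (standard rounded values):

      $\ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\,\mathrm{m}, \qquad T_P = \sqrt{\hbar c^5 / G k_B^2} \approx 1.4 \times 10^{32}\,\mathrm{K}$

    A wavelength confined to $\ell_P$ carries the Planck energy, and the Schwarzschild radius of that energy is itself of the order of $\ell_P$ - the gravitational closure just described.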

    So it all becomes self-defining. To break the simplest symmetry takes the simplest asymmetry - the combination of a spin and a roll that creates the mark, the trace, that is a spacetime-filling and energetic event. A single hot beat. The heat of that event defines the size of that spacetime (due to gravitational closure). And the size of that available spacetime in turn defines the heat that that event must have (due to the severest shortening of its wavelength).

    Ah, good point: there is a mathematical limit in terms of mass to infinite spliting, a limit that is equal to 0 mass, just as there is a solution in the form if a mathematical limit in Zeno's paradox.Olivier5

    So what I have just described is different in that instead the zero is about the zero sum game by which we can get "something from nothing" due to a symmetry-breaking that is based on a reciprocal balancing act.

    A photon expresses the world of the circle. We can't tell if a circle is rotating. So that means that if reality is constrained by a generalised demand for maximum symmetry, then the ultimate best solution to that demand is to arrive at the shape of a circle. It is the most stable shape in that it always must look the same.

    A circle has translational symmetry as well because - without the help of outside reference marks - we can't tell if it is rolling along. This is the standard relativity. Motion is only detectable if the symmetry of the reference frame is broken in some way.

    And as I say, putting a dot on the edge of a circle free to rotate + roll then counts as the most minimal mark, the simplest symmetry-breaking. The result is the "energetic event" of a spacetime frame that now contains a single sine wave.

    We thus have a toy world described in reciprocal limits. There is both near perfect symmetry (U(1)) and near perfect symmetry-breaking - the dot on the circumference that reveals the still unconstrained "Noether" freedoms of the ability to spin, the ability to roll. The ability to thus mark an empty space constrained to perfect circularity with a sine wave event that the constraints can't in fact eliminate. And what can't be eliminated, must happen.

    Real physics is more complex as the real Big Bang could not access the great simplicity of a U(1) world so directly. It actually had to constrain all the other possible symmetries - the many higher or more complex symmetries of group theory - and remove them from the fray first.

    That created the shower of other particles with more complex rotational actions – the particles of SU(3) and SU(2) symmetries, according to the Standard Model. And even to get to U(1) perfection involves the Higgs kluge that cracks SU(2).

    But the basic picture of what reality is seeking to achieve is to arrive at its greatest state of simplicity - as defined by the complementary limits of a perfect U(1) symmetry broken by a matchingly-perfect least form of asymmetry. The slightest blemish on the Cosmic cheek. :grin:

    The Heat Death tells us that this perfection is where we will arrive in the future.

    Given the discovery of Dark Energy (a new unexplained ingredient in the story), we at least know that the Universe is coasting towards a destiny where spacetime will be best described by a reciprocal U(1) structure of holographic event horizons and their "as cold as possible" black body radiation. That is, a de Sitter cosmology.

    Spacetime will be devoid of matter. Black holes will have gobbled up all remaining gravitating matter and spat it out as electromagnetic radiation. So spacetime will be empty with an average temperature of zero kelvin. But it will also be filled with the even radiance of a cosmic bath of photons produced by the quantum holographic mechanism - the Unruh effect.

    These would be photons that - in effect - span the width of the visible universe in just a single wavelength. Their wavelength would be measured in multi-billions of lightyears. A single rotation + roll that spans the gap that the speed of light can traverse.
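
    Taking the standard de Sitter results at face value - a rough estimate, hanging on the measured value of the Hubble constant - the horizon temperature and its matching wavelength would be:

      $T_{dS} = \frac{\hbar H}{2\pi k_B} \sim 10^{-30}\,\mathrm{K}, \qquad \lambda \sim \frac{c}{H} \sim 10^{26}\,\mathrm{m}$

    That is, thermal photons whose single wavelength is of the order of the horizon scale itself.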

    So spot the connection. The beginning of spacetime - the Big Bang - and the Heat Death are mirror images.

    Both are defined by that single U(1) based rotation + roll deal. Except at the Big Bang, the spacetime extent is the smallest possible, making the energy of the frequency as hot as possible. And at the Heat Death, it all has unwound to arrive at the complementary state of the largest possible spacetime extent and thus the coldest possible photon, the lowest possible energy wavelength.

    Simplicity is always the goal. But because complexity has to be constrained first - all the other available higher symmetry states have to be got rid of along the way - it is only by the end of time that U(1) perfection (in terms of a simple circle and its irreducible symmetry breakings) is achieved.

    That is why it isn't turtles all the way down. Existence is a push towards the limiting extreme that is simplicity. And that push is self-terminating in that the constraints (an insistence on arriving at maximal symmetry) contain within them the terminating thing - those irreducible symmetry breakings.

    Every kind of difference can be eliminated by U(1) circular symmetry, except a rotation and a roll. So already, the necessary blemish is built in to break that symmetry (in the simplest way).

    Reality can go no further as there is no further splintering of the system arrived at. The constraining towards a symmetrical nothingness gets hung up on an irreducible grain of being. Things can go that far and no further - leaving reality as the coldest-possible fizzle of holographic event horizon radiation. Photons with the physical wavelength of the visible universe - that sea I speak of.

    Charlie Lineweaver at ANU has written a bunch of decent papers about all this.

    And as a caveat, Dark Energy remains a fly in the ointment. It is necessary to explain why spacetime expansion will get truncated by the de Sitter event horizon mechanism. But we need some further bit of machinery - another Higgs-like field or irreducible entanglement - to fold that into the final theory of everything.

    As someone once said, explanations ought to be as simple as possible. But not too simple.

    U(1) is the simplest possible story. But getting there was not a simple process as all other symmetries had the possibility of being the case. And the way they would then interact and entangle with each other becomes part of the story of where things actually wound up hung up in practice.

    The world of quark/SU(3) symmetry and lepton/SU(2) symmetry, plus the Higgs mechanism, is how we are all still hung up at that more complex level of things at the moment. The Universe is still breaking its way down through all those entanglements along the ultimate path.

    The more complex symmetries have more complex spin states - chiral spin. And they thus have their own equivalent irreducible rotational symmetries. Higher level Noether freedoms that can't be eliminated directly.

    By rights, in a symmetric world, matter ought to be annihilated by anti-matter leaving only radiation. But these complex spins produce uneven outcomes. So some matter survives. Quarks can then protect themselves by forming triplet structures like protons and neutrons.

    And so that complexity could last forever. Proton and neutron crud messing up the empty perfection of a cooling and expanding void. A flood of ghostly neutrinos as well, messing up reality with their pointless SU(2) weak force interactions.

    So long as black holes perform as advertised - hoovering up the crud and evaporating it into photons - the universe can get there in the end. SU(3) and SU(2) will be rendered relic memories. Maybe surprisingly, the Cosmos will arrive at the mathematically ultimate state of simplicity in terms of its symmetry - and the symmetry-breaking events, the holographic U(1) photons, needed to reveal that that symmetry in fact "exists".

    The edge of the disc has to be marked to reveal the world within which it can rotate + roll. The blemish is needed to complete the deal that conjures "something from nothing".
  • Is space/vacuum a substance?
    The scientific community hijacks and restricts the use of "time" to conform to their empirical observations, i.e. they define time in relation to the material world.Metaphysician Undercover

    Yes, we can consider this a contest between pragmatic naturalism and dogmatic theism if you like. One holds consequences here in the real world. The other not so much.
  • Mathematics as a way to verify metaphysical ideas?
    And there are logically consistent but mutually inconsistent theories out there.fishfry

    But Euclidean geometry was merely shown to be a special limit case of non-Euclidean geometry.

    The two were consistent. The advance was to find a parameter - a constraint on parallel lines - that could be relaxed to arrive at an even more symmetric or generalised mathematical structure.
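
    One way to see Euclid as the limit case: on a surface of constant curvature $K$, a geodesic triangle of area $A$ has angle sum

      $\alpha + \beta + \gamma = \pi + KA$

    and the familiar Euclidean rule of exactly $\pi$ is recovered as the special case $K = 0$ - the relaxed parameter in question.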

    And in that, the maths rather exactly mirrored the physics.

    The everyday world looks flat and Euclidean - at the inertial scale we are likely to be measuring it. That only proves to be the special case as we become able to step back and factor in gravitational and quantum "curvature". The flatness becomes relative to the dynamics of spacetime and energy density.

    So the abstraction in terms of mathematics - the relaxation of some critical constraint on the model - mirrors the actual thermal evolution of the hot Big Bang. Go "further back" towards the Planck scale and relativistic and quantum effects intrude. The geometry loses its cold and expanded Euclidean flatness. It becomes the chaos of quantum gravity - equal parts black hole strength curvature and quantum strength uncertainty.

    Mathematics can not say what the truth of the universe is.fishfry

    What the steps from Newton to Einstein and so on tell us is that the Ancient Greeks were already on the right track with Euclid. His geometric model of space was the correct starting point. It was simply over-specified in not having time and energy explicitly included in the dynamics.

    Once the symmetries were expanded to include these - once space could be bendy, and then bend as a precise reciprocal of its energy density - we had stumbled into the model that could describe a GR world.

    Quantum field theory adds another dimension of plasticity to the frozen Euclidean realm. Instead of being all bendy, now the spacetime manifold is all "grainy" - composed of fluctuations that need to be collapsed.

    QG is already clear as the final step that would unite GR and QFT in a still higher state of geometric symmetry. Bendy space would be unified with grainy space as two sides of the one dynamical coin.

    The great metaphysical project has been working out really well. Ancient Greeks got the ball rolling. Modern mathematical science has now got the hang of its deep logic.

    All aboard for the ride. :up:
  • Is space/vacuum a substance?
    Again, you are ignoring the contradiction involved in "time emerges". Time must already be passing for anything to emerge, so time is necessarily prior to emergence.Metaphysician Undercover

    If time is what is emergent, then nothing can have been happening before it gets started. It is the idea of "before" that becomes the incoherent claim here.

    You presume time to be eternal. Thus there is always a "before". Hence time is "proven" to be eternal. Your argument is a simple circularity.

    A thermal model of time is about the emergence of a global asymmetry - an arrow of time pointed from the now towards the after - the present towards the future. So the past, the before, is a backwards projection. It is imagining the arrow reversed. And reversed to negative infinity.

    Yet the reality - according to science - is that time travel (in a backward direction) is unphysical. And the Big Bang was an origin point for a thermal arrow of time.

    Yes, we can still ask where the heat to drive that great spatial expansion and thus create an arrow of time, a gradient of change, could have come from. What was "before" that?

    But this is no longer a conventional notion of a temporal "before", any more than it is a conventional notion of "what could have been hotter" than the Planck heat, or "shorter" than the Planck distance, or "slower" than the speed of light.

    Every such conventional notion fuses at the Planck scale - the scale of physical unification. The asymmetries are turned back into a single collective symmetry. There is no longer a before, a shorter, a hotter, a slower. All such definite coordinates are lost in the symmetry of a logical vagueness - that to which the principle of non-contradiction (PNC) fails to apply.

    Before the PNC applied, there was a "time" when it didn't. That is the "before" here. :wink:

    You are not properly distinguishing the active from the passive.Metaphysician Undercover

    Another of the many co-ordinates that are erased if you wind back from their current state of divided asymmetry to recover their initial perfect symmetry - at which point the PNC fails to apply. The logic you want to argue with suddenly runs out of road.

    In the beginning, the active and the passive (along with the stable and the plastic, the necessary and the accidental, the global and the local, etc, etc) were a symmetrical unity. Both halves of the dichotomy had the same scale and so were indistinguishable as being different. The PNC might feel as though it ought to apply, but - being indistinguishable - it can't.

    It is only as they grow apart that a proper asymmetric distinction can develop. The passive part of nature is that which is less active. And vice versa. Taken to the limit, you get the passive part of nature as that with the least possible activity. Or the reciprocal relation where passive = 1/active. And vice versa. The active = 1/passive.

    The immaterial is separate and distinct from the material in the very same way that the future is separate and distinct from the past.Metaphysician Undercover

    That can only be meant ... as a religious and unphysical belief.

    It is a claim of a theistic model. And it is the naturalistic model that has produced all the useful physics here.

    What we understand, in a mysticism based metaphysics, is that the entire material universe is created anew with each passing moment of time. This is a necessary conclusion derived from the nature of freewill. The freewill has the power to interfere with the continuity of material existence at any moment in timeMetaphysician Undercover

    Epicycles to explain away a metaphysics that is demonstrably unphysical. It feels like an explanation being expanded, but it is a confusion being compounded.
  • Mathematics as a way to verify metaphysical ideas?
    Some might say that String theory is metaphysics, others might disagree.jgill

    This is a good example of the tensions now developing because mathematical research looks to be cutting deeper than empirical research.

    We simply can't recreate the Big Bang to test out our theories. But we can computer simulate the Big Bang - or at least check the self-consistency of a mathematical model.
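
    As a toy illustration of what such a self-consistency check involves - a minimal sketch with illustrative parameter values, not a real cosmological code - one can integrate the standard Friedmann equation for the cosmic scale factor and confirm the model runs forward coherently:

        import math

        # Toy flat-LambdaCDM expansion history: a self-consistency check of
        # the model's mathematics, not a simulation of actual Big Bang physics.
        # All parameter values are illustrative assumptions.
        H0 = 70.0 / 3.086e19           # Hubble constant: ~70 km/s/Mpc in 1/s
        omega_m, omega_l = 0.3, 0.7    # matter and dark-energy fractions

        def a_dot(a):
            """da/dt from the first Friedmann equation for a flat universe."""
            return H0 * a * math.sqrt(omega_m / a**3 + omega_l)

        # Forward-Euler integration from an early hot, dense state (a << 1)
        # up to the present day (a = 1).
        a, t, dt = 1e-3, 0.0, 1e13     # dt in seconds (~300,000 years)
        while a < 1.0:
            a += a_dot(a) * dt
            t += dt

        print(f"Model age at a = 1: {t / 3.156e7 / 1e9:.1f} billion years")

    The point is only that a model's internal logic can be run forward and checked for coherence - which is what a simulation team does at vastly greater sophistication.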

    So the tensions are about the trivial thing of academic careers. Are your skills obsolete if CERN has no new super-collider, but Stanford has this new supercomputer simulation team? You will be quick to start name-calling - "That's not real science! That's just metaphysics!" - if your own job is on the line.

    Academia is a social game. People have to construct their in-groups by "othering". Calling someone a metaphysician can seem like the worst kind of insult.

    It is not helped by the fact that many then take advantage of the social notion that metaphysics means "unbounded speculation" rather than logically rigorous argumentation. Any crackpot will claim to be a metaphysician as convenient cover.

    But it is all games. No need to worry. The simple fact is that the traditions of metaphysics, maths and science have always been fused at the cutting edge of human inquiry. There are few examples of great scientists who didn't combine the three in productive fashion.
  • Mathematics as a way to verify metaphysical ideas?
    Could it be possible to translate metaphysical concepts into mathematical language so that we would be able to prove some theories as valid and refute others?Eremit

    I certainly find that any good metaphysics has a mathematical clarity. So there is a deep connection. But not a simple one.

    In general, good metaphysics provides a putative model of reality in terms of some self-consistent logical argument. So it is mathematical in the sense of having some logic at work. And that logic can be checked out in terms of the mechanics of how it maps the inputs of an argument to its outputs. There is a system of rules. And so an argument can be checked for validity or internal consistency under those rules.

    So - as with maths - you can prove the validity of the argument. But then if metaphysics wants to be saying something fundamentally true of reality - ontology being its ultimate goal - it has to start engaging with science and empiricism.

    The wrinkle is that maths can now be regarded as itself the science of patterns. It is actual research into the nature of being - if Being itself is constrained by a generalised demand of self-consistency.

    That is why fundamental physics and the maths of symmetry are in such tight alignment. Metaphysically, both treat their "realities" - the worlds they seek to explain - as "systems". And symmetries are the key that unlocks that door.

    So that is an example of how science, maths and metaphysics become joined at the hip once they seek out a particular line of self-consistent logic - a holism that makes maths "unreasonably effective" for the physicist.

    There are lots of domains of maths that have very little metaphysical application. They lack the kind of logical (hence causal) closure that metaphysicians seek in making their ontological models of reality.

    And it is not generally helpful to think it is just a case of translating verbal descriptions into mathematical language. Metaphysics is generally going to be in advance of some mathematical notation. The mathematical concepts have to come first. And when first understood, they may still be hazy. But good metaphysics always has that quality of someone describing an essentially mathematical structure. There is some holistic arrangement of parts that forms a holistic process. And the solidity of that discovered structure - as with a symmetry operation - is something that can then be turned into a standardised mathematical notation.

    Do you think that we could describe metaphysical theories by geometric constructions?Eremit

    I am arguing that all good metaphysics is basically about the holism of structures. So that is geometric in that you would have to "picture" both the relata and the relations.

    But every good geometric idea can be described algebraically (and vice versa). So it is not that geometry is primary. The key again is the presumption that we are trying to discover the hidden structure of reality - its deep skeleton. And the maths that is about the science of patterns, the logic of structures, is naturally going to be treading the same path.

    It will be making concrete the useful mathematical language that can then be employed by mathematical physicists to write their fundamental theories. And the fundamental physicists are today's actual metaphysical community.

    The academics working in philosophy departments can contribute plenty to a history of metaphysical ideas, but not much to the development of new ideas beyond some useful commentary from the sidelines.

    In summary, metaphysics, maths and science are in practice already fused at the cutting edge of human inquiry. Mathematical-strength thought has always stood behind the best metaphysics. And science created the empirical connection between such metaphysical models and the reality they might claim to model. A lot of bad metaphysics fell by the wayside as a result. And the mathematical scientists were left as the ones closest to the new action.
  • Is space/vacuum a substance?
    The only way that the thing could come into existence as the thing it is, and not some random other thing, is that its material existence is preceded by its form.Metaphysician Undercover

    I agree with that argument too. Which is why I say the matter of origination can only be solved by adding a logic of vagueness to our metaphysical tool kit.

    Both formal and material cause have to arise in the same moment. They in fact must emerge as the two aspects of a shared symmetry breaking. And time (as spacetime) also emerges.

    Big Bang cosmology describes that. At the Planck scale, matter and spacetime are clearly dual. The smallest coherent distance is also the greatest energy density: being so confined, it can contain only a single beat of the shortest possible wavelength. And that is the hottest thing possible. A material event of the highest energy.

    So the duality of matter and spacetime is written into the heart of physics by the reciprocal mathematics of the Planck scale. Material cause and formal cause are two halves of the same symmetry. All that happens is that the Cosmos expands and cools from there.
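
    That reciprocity can be read straight off the standard Planck units:

    $$\ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \qquad T_P = \sqrt{\frac{\hbar c^5}{G k_B^2}} \approx 1.4 \times 10^{32}\ \mathrm{K}$$

    Confining a quantum wave to a length $\ell$ costs an energy of order $E \sim \hbar c/\ell$, so the shortest coherent distance and the hottest possible temperature are the same physical limit approached from its two reciprocal directions.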

    There is then no time before this first moment as time is part of the onset of metric expansion and thermal cooling. There is change with an emergently coherent direction.

    The Hartle-Hawking no-boundary story is based on that. The Planck scale is a general cutoff as it is the point where energy density and spacetime are indistinguishable. They are a symmetry not yet broken. Vagueness rules until they each establish the mutually reinforcing directions in which to grow apart from the other.

    Energy density can become energy density by virtue of thinning and cooling. Spacetime can become spacetime by expanding and becoming a frame on energy densities. Crisp difference can become possible as not everything needs to be all the same temperature and all the same size any longer.

    So the key is to stop asking the usual question of what came first. Hylomorphism starts already as a package deal where both material and formal cause exist, doing their jobs, as the complementary aspects of a holistic transition from a vague everythingness to a crisp somethingness.

    Without accepting this principle, that form is prior in time to material existence (and this is the principle which necessitates the proposal of divinity), you cannot claim that your metaphysics is consistent with Aristotle's.Metaphysician Undercover

    It is an over-interpretation to claim Aristotle was consistent himself. What I say is that he still broke the story apart into the many elements that are still useful today.

    And the conceptual tool he really lacked was a notion of vagueness (as opposed to crispness). This leads to problems where one half of a dichotomy must always precede the other half. And of course, that can never be the case if each half is effectively the cause of the other in being its Hegelian “other”.
  • Definitions
    Are you dismissing what I say without argument?unenlightened

    Yep. Guess so.
  • Definitions
    Where was your question?

    Were you wanting a definition of force as a term of art in modern physics as opposed to the many other ways of using the same word in colloquial language?

    Seems like you just wanted to rant and blow off steam.