Comments

  • Causality conundrum: did it fall or was it pushed?
    My question was about a poised situation and how we think about its symmetry breaking. The apparent problem is that a first cause has to pop up out of nowhere to set the ball in motion. But that problem can be removed by imagining a world where nothing is ever absolutely at rest. The second metaphysical picture is less conventional, but better fits the facts, I would suggest.
  • Causality conundrum: did it fall or was it pushed?
    I see now what you mean. It was a mistake to link to that paper, as Norton’s dome is a special case. Do you buy his story? I don’t get where his sudden spontaneous motion could come from, nor why it is a solution permitted by his particular curve. I need to read more yet. This helped - https://blog.gruffdavies.com/tag/the-dome/

    But anyway, I just meant to talk about the standard example as an illustration of spontaneous symmetry breaking.
  • Causality conundrum: did it fall or was it pushed?
    I told you that I cannot think of a good reason to accept that it is impossible to eliminate all disturbances. Do you have one?DingoJones

    Quantum mechanics.
  • Causality conundrum: did it fall or was it pushed?
    I cant think of a reason that it would be impossible.DingoJones

    But that is the easy presumption that is under attack here. Most people probably see no reason even to question the possibility of eliminating every possible source of perturbation in some physical system.

    The habit is to think of a world that is essentially clean and simple. A blank slate. A void. And then you start populating this world with its little pushes and pulls, its atomistic play of events.

    But I say: why isn't the inverse of that a closer match to observed reality? Why don't we start with a world already chock full of pushes and pulls, and then see if we can imagine subtracting them all completely away?

    Quantum mechanics tells us we can't in fact achieve a void. There are always going to be infinitesimal or virtual fluctuations.

    So in fact we have a well-motivated reason for taking the opposite view - the one that presumes the impossibility of suppressing all physical disturbances. And so - as a metaphysics - that would flip the usual comfortable view on its head.

    Where before there was no reason to think an absence of fluctuation was impossible, now there is no reason to think it might be possible. Hence the idea of a triggering cause loses its previously fundamental-seeming metaphysical status. The interesting condition is the one where such causes have become so suppressed that all their particularity has been lost and there is only now the generic concept of "the inevitability of spontaneous outcomes". Perturbation itself becomes a primal feature of "the void".
  • Causality conundrum: did it fall or was it pushed?
    Not so, as I've already explained to Bitter. The shape of the dome is such that, as the ball is getting infinitesimally close to the apex, the second derivatives of its horizontal motion tends towards zero; and hence, also, the horizontal component of the force.Pierre-Normand

    But that relies on the ball starting on a slope, not on the flat. It is only infinitesimally close to the apex and so also infinitesimally inclined towards rolling down in some direction. The forces acting upon it are already sufficiently off-kilter.

    So again, my essential point remains. The ball can't be placed with exact precision at the apex. Our modelling incorporates that infinitesimal "swerve" as something that can't be eliminated.

    As a way of thinking about what causes the ball to start to roll, the answer becomes that we couldn't prevent it: any placement on the apex had to involve infinitesimal error.

    Another way to view it is to imagine the time reversal of the process where the ball is being sent rolling up the dome with just enough speed so that it will end up at rest at the apex, after a finite time. Thereafter -- and this is unmysterious -- it may remain at rest for an arbitrary period of time. If this is a valid solution to Newton's equations, then, so is the time reversal of this process where it remains at rest for some time and then "spontaneously" starts rolling (with an initial instantaneous null acceleration).Pierre-Normand

    But now if you time reverse the story, you still can only arrive infinitesimally close to the apex, not actually perched exactly on it. So if the ball seems at rest, that is a mistake. It is only ever decelerating and then beginning to accelerate again. Inertia and friction might slow that transition in the real world. It might get stuck a while. But in the model - which presumes frictionless action and an actual perfect balance of forces, with no infinitesimal errors of location at the apex to break the symmetry in advance - traditional thinking would seek a triggering push. And it is that framing of the situation which is the OP's target.
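
    To keep the formalism in view, here is the usual presentation of Norton's result (a sketch following his own paper; the constants vary between write-ups). The dome's height below the apex is $h = \tfrac{2}{3g} r^{3/2}$ as a function of the radial arc distance $r$, and Newton's second law along the surface reduces to

        \frac{d^2 r}{dt^2} = \sqrt{r}

    which admits both the trivial solution $r(t) = 0$ for all $t$ and, for any arbitrary time $T$,

        r(t) = \begin{cases} 0 & t \le T \\ \tfrac{1}{144}\,(t - T)^4 & t \ge T \end{cases}

    Both satisfy the equation at every instant, which is why the "spontaneous" rolling counts as a valid Newtonian solution - the formal nub of the dispute above.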
  • Causality conundrum: did it fall or was it pushed?
    I cast my vote for "it fell" rather than "it was pushed". For one thing, all the environmental forces operating on the rock are cumulative, and have been operating for a long time.Bitter Crank

    Yes, but this would be a case of innumerable accidents, as you say. The "push" advocates would still reply that there was one last event that finally did the trick. So it could have been the unlucky tourist who leant on it, or that lightning bolt, which was the straw that broke the camel's back.

    I want to focus on the most extreme example where absolutely anything would be enough to be that straw. And so we can't really blame some particular straw anymore.
  • Causality conundrum: did it fall or was it pushed?
    What's interesting about the dome is that the ball's starting from rest, and, after a finite time, rolling in an arbitrary direction, is a valid solution to Newton's laws of motion.Pierre-Normand

    How so? If the ball has mass, it has inertia. A push is required to set it moving.

    Of course, if the ball is already "in motion", then that acceleration already exists. But that then becomes my alternative story of the physical impossibility of eliminating such accelerations. A physical mass has to have some kind of internal thermal jitter and so - even when placed perfectly at rest - it is going to just throw itself over the edge and roll.

    So we can't rely on the formalisms if the formalisms simply leave out the crucial physical facts.
  • Causality conundrum: did it fall or was it pushed?
    Its not necessarily either. ... There are reason(s) why the ball bearing was displaced, and with enough sophistication in the instruments of measure (whatever they may be) these reasons can be revealed and your question answered.DingoJones

    But here you seem to both say the right answer doesn't really matter, and yet also that the right answer is that there will be some particular triggering cause accounting for the "why". In the end, it will be a push that did it. And that is what counts most for your general view of the world.

    In practical terms, you might be right. It seems we could always measure nature more closely and put our finger on some individual environmental disturbance as the guilty party.

    But the essence of my question was metaphysical - what do we really want to believe about the truth of nature? So something is at stake. We ought to come down on one side or the other.

    Could you imagine ceasing to care about the individual pushes and instead accepting that the generic impossibility of eliminating all disturbances is this deep truth?

    I am guessing you would resist that alternative view strongly. The question becomes why? With what good justification?
  • Why shouldn't a cause happen after the event?
    This would be a devastating reply, if it wasn't based upon something other than what I've been arguing. Then, it would even be true. But alas, it is and it's not.creativesoul

    Anyone here understand what @creativesoul's position is? Someone care to hazard a guess?
  • Why shouldn't a cause happen after the event?
    Not a free creation of the mindJanus

    I was just citing Einstein there. I think he knew what he was talking about.

    It is basic to pragmatism that you could say absolutely anything about the world as a hypothesis. And that modelling freedom is what Einstein was stressing. It is science because you don't have to start with "the truth", just some reasonable conjecture.

    Of course the corollary is that models must be testable. The world does have to be able to constrain the conjecture in some measurable fashion.
  • Where does logic get its power?
    I like defining things so, logic: A method by which humans go from premise to premise that seems to reflect reality if the premises do. What was the "origin" of logic.khaled

    Logic arises quite naturally out of getting into the habit of imagining nature organised like a machine. So as soon as humans were building huts with doors, or fields with fences and gates, already a mechanical conception of things was taking shape.

    A switch is the canonical logical device. It is either off or on, open or shut. And doors and gates are kinds of switches. The world is divided into inside or outside. In the hut or paddock, or outside it. They are machines for the organisation of living. So the origins of logic as a useful way of conceiving of nature go right back to human technological inventions. If we could impose a rigid circuit-like pattern on the unconstrained flows of nature, then we would be laughing.
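
    As a toy illustration of the switch point (my own sketch, nothing more): wiring two switches in series gives you an AND, wiring them in parallel gives you an OR, and all of Boolean logic can be assembled from such gates.

        # Doors and gates as logic devices: a minimal sketch.
        def series(a: bool, b: bool) -> bool:
            # Current flows only if both switches are closed - AND.
            return a and b

        def parallel(a: bool, b: bool) -> bool:
            # Current flows if either branch is closed - OR.
            return a or b

        print(series(True, False))    # False
        print(parallel(True, False))  # True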

    Logic thus arose as a way to regulate natural flows rather than as a close description of nature. It was a way to impose our artificial mechanical schemes on the world.

    Why is it that we are simply born with a "rule for deriving rules" and why does it work so well?khaled

    But we are not born with a brain designed to think mechanically. We are instead taught from the earliest age to think that way because we depend so much on artificial ways of regulating an otherwise fairly unruly nature.

    It works well in that a logical turn of mind grants humans a good dose of control over material events. But also, machines are brittle things. They break very easily or can give very wrong outcomes. So strict logic - of the kind you are describing - can be just as useless as it is advantageous. Garbage in, garbage out.

    Good job we do have properly evolved brains to fall back on when the literalism of logic lets us down.
  • Why shouldn't a cause happen after the event?
    QM is our invention. Causality is not.creativesoul

    And so you dumbly repeat something that I never said? I said classical physics might give us one model of causality. QM might give us another.

    And I wouldn't call a model an "invention" exactly. It might be a free creation of the mind, but it also has to show itself to work in the real world. It is not yet clear whether you would dispute or agree with this obvious qualification.

    Likewise, I would point out that "causality" is a linguistic term. As such - by your own account even - it is a concept, a model, an interpretation of "the world". But you struggle so much with writing clear posts that who knows what you really want to commit to on this score.

    QM did not exist - in it's entirety - prior to our discovery. Causality does.creativesoul

    Again you are guilty of conflating epistemology and ontology. Or simply of being utterly confused in your thoughts.

    QM didn't exist as a model until we invented it (although for some reason you now say we "discovered" it). But clearly, we have no reason to think that the world wasn't always "QM".

    And likewise causality didn't exist before we invented/discovered/modelled it - at least not as an articulated conception. And again clearly, however we understand causality with any clarity, there would be no reason to think the world wasn't always "that way".

    So you have to stick to some consistent epistemic story. Either you are talking about QM as a model of reality, and so causality is also a modelling construct, or you are shifting registers to talk about QM as the putative ontology of reality, just as you seem to be employing the term "causality" as being a noumenal fact of the world.

    To try to maintain that QM is just an invention while causality is just a fact is to conflate an epistemic linguistic register with an ontic linguistic register.

    It makes no sense. And that incoherence would indeed explain why your posts just seem a confused babble - the sound of naive realism wrestling with its own demons to no useful end.
  • Why shouldn't a cause happen after the event?
    Incoherency...

    Next?

    You want to claim that you're not... then draw and maintain the distinction between causality and a report thereof. Then, do the same with QM...
    creativesoul

    In what way am I failing to distinguish between model and world by drawing close attention to the mediating role played by "the report"?

    The sign (or measurement, observation, witness statement, report, fact) is the basis of the semiotic mechanism by which the model and the world are kept apart, and thus why they can then stand in some relation.

    So - dealing with your conflations - we have the three elements of the world as it may reveal itself to our inquiries, our conceptions that form the generic basis to our inquiries, and then the reports that seem the right kind of particular evidence in favour of some habit of belief we might be forming.

    If you keep just talking about the two things of the report and the world, you are collapsing the account of the epistemology to the point it can make no sense. You are going to remain stuck in the usual dualistic confusion of the realists vs the idealists.
  • Why shouldn't a cause happen after the event?
    I'm pointing out the inherent conflation in your position, namely that you're not drawing the distinction between your report and what's being reported upon(causality, in this case).creativesoul

    Maybe you still just fail to understand pragmatism then?

    How could I be conflating the model with the reality when I am talking about our models of reality? But then the "report" in your terminology must be the mediating thing of a measurement, or observation, or sign. And the "report" does underwrite the conception. It is the particular that inductively confirms the generality of some theory.

    That is how it is meant to work. You haven't shown any problem with it.

    If you get hit by a rock falling out of the sky, you could assign that physical fact to various theories. It could be a malicious god taking careful aim at you. Or it could be a random accident - a bit of falling space junk.

    In the one example, you would report being struck by a divine missile. In the other, you would report being struck by a fluke accident.

    So "what is being reported" is some particular ... that relates interpretatively to some generality. You have two possible causal interpretations. You wind up reporting the version you have some habit of believing.

    Pragmatism draws out this full triadic relation of theory, measurement and world. It does the opposite of conflating in doing so.

    To hold that QM is the basis of causality is asinine.creativesoul

    But that is just your weird phrasing of what is being said.

    I said QM challenges the kind of classically linear, cause-and-effect model of causality which you would appear to hope to assimilate all your experiences to.

    So you have some habit of mind. You think you know what causality actually is in its true natural form. But as I've argued at length, even classical physics conceals basic challenges to that. The least action principle doesn't fit that story.

    And then QM really rocks any remaining faith in it. We know that causality can't be locally real. Or at best, that it is only a macroscale emergent phenomenon. Like the liquidity of water, it is a collective state of order that arises when the Universe has got so large and cold that any lingering QM uncertainty or weirdness has been shrunk mostly out of sight.

    So what is asinine is pretending that simple linear causal logic ever really applied to the observable physical world. Even Newtonian physics knew there was more to the story. QM proves there has to be much more.
  • Why shouldn't a cause happen after the event?
    If QM can undermine causality,Harry Hindu

    QM undermines classical causality. QM puts forward its own causal story. Experiment determines which story we are inclined to believe. It's really simple.
  • Why shouldn't a cause happen after the event?
    A model of our invention is not something that causality can be.

    I would warn here against conflating a report(conception if you prefer) of something with that something.
    creativesoul

    Yep, so you are making some confused epistemic point about our models of reality.

    Now of course we can presume that reality exists as whatever it is, independent of our thoughts, wishes or conceptions about what it might be. But that kind of realism is still a presumption, even if it seems pretty reasonable. And then one problem that QM poses is that the observer no longer seems independent of the observables. As in the two-slit experiment, the choices an observer makes become part of the reality in terms of the statistical outcomes.

    So even if you personally choose not to believe this fact of QM, it remains something that has now been witnessed time and again. The fact is not disputed, only how it might best be interpreted in the light of what we might want to believe when defending more classical notions of causality, such as ones that still model events in terms of the principle of locality.

    So whatever causality is "in the noumenal raw", we are going to understand that in terms of a model. And that is fine if our modelling is based on a desire to model the world as accurately as we can. Which pragmatically, cashes out as a measurable minimisation of our uncertainty or surprisal when it comes to the physics of the world. We never "know" causality, but we sure as heck can work towards the models that make the best possible predictions and so leave us with the least possible surprises.

    On that score, we know that classical models of causality work fine when the scale of the Cosmos is cold and large, but not when the Cosmos is small and hot. That is when the quantum model of causality would have to take over - and the Big Bang tells us it is the more fundamental story, being the condition that ruled at the beginning.

    So there is no danger of conflating our models of reality with that reality if we are pragmatists. But what is clear is that science has found that different models sum up the story of causality at different scales of being. And yet you are insisting you can go beyond the models to see how things really are. You can believe in a classical causality as the true story rather than as merely one of a couple of models we can usefully employ to measurably good effect.

    From a scientist's point of view, this is a little crazy. Even classical Newtonian determinism is known to be full of causal paradox. The principle of least action is as basic a physical axiom as the principle of locality, and yet it involves its own "spooky action at a distance". Every event would have to know its future outcome so as to follow the path with the least action. Even Newtonianism has this "effect dictates the cause" back-arsewardness to it.
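
    For reference, the principle at issue: of all conceivable paths $q(t)$ between two fixed endpoints, nature realises the one making the action stationary,

        S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0

    The path is selected by a global condition on the whole trajectory, endpoints included - which is what gives the principle its teleological flavour.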

    And even if we now have quantum theory as our most accurate predictive model, we know it doesn't completely capture the causal story. QM has been relativised. The need to account for observer collapse has been worked around by tacking on statistical mechanics - the contextual thermodynamic decoherence story - as a kluge. But including gravity and thus spacetime fluctuations in the formalism is work in progress.

    So the point is that classical physics never actually supported a simple cause and effect ontology. It relied on some weird least action principle to actually determine every trajectory. And then QM brought least action to the fore as one of the causal things it was going to fix. The path integral formalism showed how reality must in some sense take every possible path and then sum over the possibilities. But QM can't yet deal with the contributions of gravitational fluctuations - at least right down to the Planck scale limit where they start to completely overwhelm any conventional causal structure.

    Science thus tells us that we don't actually understand causality, but we have gone a long way towards telling a more complete-feeling story. We are acknowledging the modelling gaps and seeking to plug them with mathematical machinery.

    Yet you, in contrast, seem to be saying you can see cause and effect with your own eyes. Every question you could have about the way the world is has already been answered.
  • Why shouldn't a cause happen after the event?
    Yes. QM means no local realism. As a matter of interpretation, you can then explain that in various ways.

    On circularity, there is obviously plenty of speculation about wormholes and what they would do to causality - https://www.iflscience.com/physics/wormholes-could-solve-a-key-problem-with-quantum-mechanics/
  • Why shouldn't a cause happen after the event?
    In your own little world on this one.
  • Why shouldn't a cause happen after the event?
    As far as I know, retrocausality isn't ''possible''.TheMadFool

    Look up Cramer's transactional interpretation or the Wheeler/Feynman absorber theory.

    But I tried to make clear that I am talking about retrocausality only in terms of backwards-acting constraints on probabilities. The future can determine the past to the extent that future experimental choices will limit the statistics of some past process. So the future doesn't produce the event in a determining fashion. It just affects the shape of the probability that existed back then.

    The classic experiment is the quantum eraser. I can decide whether to measure an event as either a single slit or two slit situation. And even after the particle has passed through the slits - by a normal temporal view - I can make that decision and still see either kind of statistics.
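
    A toy calculation shows the two kinds of statistics at stake (my own illustrative model, not the actual experimental setup):

        import numpy as np

        # Equal amplitudes from two slits with a position-dependent
        # relative phase across the screen (illustrative numbers only).
        x = np.linspace(-3, 3, 7)
        phi = np.pi * x
        psi1 = np.exp(1j * phi / 2) / np.sqrt(2)
        psi2 = np.exp(-1j * phi / 2) / np.sqrt(2)

        marked = np.abs(psi1)**2 + np.abs(psi2)**2  # which-path known: flat
        erased = np.abs(psi1 + psi2)**2             # which-path erased: fringes

        print(np.round(marked, 2))  # all 1.0 - no interference
        print(np.round(erased, 2))  # swings between 0 and 2 - interference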

    So normal causality says that is impossible. The difference couldn't be imposed on nature after the fact. But in quantum theory, it is routine. Systems can be entangled across time.

    The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.

    What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

    https://aeon.co/ideas/you-thought-quantum-mechanics-was-weird-check-out-entangled-time
  • Why shouldn't a cause happen after the event?
    To conclude that quantum mechanics operates on a more fundamental level is very questionable. It becomes apparent that that is gravely mistaken if and/or when we continue on to say that randomness is fundamental in it's relationship with causality.creativesoul

    It is a witnessed fact that the quantum account beats the classical one in terms of its predictive accuracy. I think it is only you who finds it questionable that it is the more foundational one.

    Though if you followed my own position, I am indeed arguing it isn't "foundational" in the conventional sense. It is indeed a less constrained picture of reality. My ontology is boot-strapping. So I am taking the conversation in quite a different direction there.

    Then as to randomness, again a boot-strapping metaphysics expects a stable ontology to arise out of dynamical contrasts. So it is not that randomness is fundamental. Randomness is simply the dialectical complement to its "other" of deterministic constraint. You have two polar tendencies which together give rise to the third emergent thing of a structured reality - one which has the stability of a statistical system.

    It is only because constraint is a thing that freedom is also a thing. So the more constrained a system, the more definite or fixed its freedoms. You can count the probability of a coin toss because you know that the coin can only either land heads or tails. Flip a quantum coin (or more accurately, a pair of them - the equivalent of two particles with spins) and the statistics are different because there is a loss of information due to the entanglement of the outcomes.
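
    To make the difference concrete, a minimal sketch using the textbook singlet formula (the angles are my own choice):

        import numpy as np

        # A classically anticorrelated coin pair disagrees at every setting.
        # A spin-singlet pair measured at relative analyser angle theta
        # agrees with probability sin^2(theta/2).
        for deg in (0, 60, 90, 180):
            theta = np.deg2rad(deg)
            print(deg, "P(same outcome) =", round(np.sin(theta / 2)**2, 3))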

    So perfect randomness can't exist by itself. It needs a matching degree of absolute constraint to define it as being a counterfactually definite possibility. If there are only two answers on offer - heads and tails - then a game of perfect chance becomes possible.

    If it were the case that randomness is more fundamental then we would need to ignore overwhelming fractal evidence to the contrary in order to sincerely assert this. Fractals are patterns. Pattern cannot happen without sequences of events. Sequence cannot happen without predictable time increments.creativesoul

    Fractals are a bad example for supporting your case because they in fact show that behind ordinary "classical" probability spaces - the kind described by a Gaussian bell curve - lie the less constrained probability spaces of scale-free systems.

    It is just like how QM was found hiding behind classical physics, and imaginary numbers behind real numbers. If you relax a major dimensional constraint, you still get organised structure. And now an actual mathematical structure that does a better job of accounting for nature "in the raw".

    So fractals are the mathematical story of many natural random processes - especially dissipative thermodynamical ones, such as river branching and coastline erosion, because the spatiotemporal scale drops out of the physical picture as a constraint on the expression of randomness or spontaneity.

    Deterministic chaos and fractals were a big deal because they revealed that chaotic nature was in fact predictably random even though any constraints were as minimal as could be imagined. So they speak to nature that has the simplest possible balance of chance and necessity. Gaussian systems are by contrast far more artificial in being overly-constrained (by the Universe's own foundational standards).
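
    The logistic map is the stock illustration of that minimal balance - one deterministic rule, noise-like output:

        # Deterministic chaos from a single constraint: x -> 4x(1 - x).
        # The orbit is fully determined yet statistically noise-like.
        x = 0.2
        orbit = []
        for _ in range(12):
            x = 4 * x * (1 - x)
            orbit.append(round(x, 4))
        print(orbit)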

    Pure randomness has no predictable sequence. Randomness falls well short of the mark. It cannot produce what we do witness.creativesoul

    Pure randomness is pure vagueness. There couldn't even be a sequence to judge.

    As I say, chaos theory was a big deal as it gave a mathematical understanding of what a minimal state of constraint looks like, and thus what a maximal state of randomness looks like. You had to have both to have either. Each becomes the effect of the other's cause.

    It is this contextual mutuality that is a big part of the conceptual shift towards a holistic ontology here. QM showed that we have to take complementarity seriously. Chaos theory said the same thing.

    Here's my take...

    When observable entities are smaller than a planck length and the act of observing them includes shining light on them then the observation itself begins a causal chain of events as a result of the mass of the photon influencing the path(location) and movement speed(acceleration) of the subatomic particle being observed.
    creativesoul

    That's one familiar pop science explanation. But why does the Planck scale create a sharp cut-off long before location or momentum are driven towards infinity?

    Sure, the maths says things start to bend off sideways as you approach the Planck limit. Your effort to measure a system becomes so strenuous that at some point it produces such an energy density that the whole region of spacetime is going to collapse into a black hole.

    But that is long before you approach infinite efforts. So you haven't actually explained anything about the causality of what is going on. You don't have the kind of holistic/contextual story that quantum gravity is seeking to establish.
  • Why shouldn't a cause happen after the event?
    The results were witnessed.creativesoul

    I'm still none the clearer about the distinction you wish to uphold.

    What you said was....

    We did not arrive at causality by virtue of inventing and/or imagining it. We arrived at causality by virtue of witnessing it happen... over and over and over again...creativesoul

    And my reply is that we did invent a classical model of causality. And now a quantum model would challenge its predictions. We expect to witness a different statistics. And indeed we do, time and again.

    I take it for granted that inventing a model and testing that model are two aspects of the one intellectual enterprise.

    And then from the point of view of the scientifically-informed philosopher, one would be dubious about any "commonsense" claims that we instead just look out and see the world as it actually is. Any such folk theory of causality is only going to be an unthinking acceptance of the "evidence" of a history of classical physics and the logical tropes it has fostered.

    So what are you trying to say? That our belief in classical causality is just some kind of direct "witnessed" knowledge and not instead a socially constructed belief (albeit a belief that "really works")?

    Or do you have a different point? I can't follow what you might want to say.
  • Why shouldn't a cause happen after the event?
    Continuing a bit, I take the view that existence, and thus causality, is fundamentally probabilistic. Atomism is emergent. And we have two formal statistical models - the classical and the quantum - that capture that fact.

    An irony is that Boltzmann settled the argument in favour of atomism by establishing a statistical mechanics view of reality. His famous dictum was “If you can heat it, it has microstructure.”

    The equipartition law says there is a direct link between macroscopic and microscopic physics: every microscopic degree of freedom carries, on average, the same share of thermal energy. So if you know a body's temperature and its total thermal energy, you can calculate the number of microscopic degrees of freedom it must contain. Hence Avogadro's constant.
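
    The arithmetic of that bridge is strikingly compact (standard CODATA values):

        # Boltzmann's macro-micro bridge: the gas constant R is just
        # k_B counted per mole, so N_A = R / k_B.
        R = 8.314462618     # J/(mol K) - macroscopic gas constant
        k_B = 1.380649e-23  # J/K - microscopic fluctuation scale
        print(R / k_B)      # ~6.022e23 - Avogadro's constant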

    So atomism was "proved" by spacetime having a well-behaved statistics. A given volume could contain a given number of degrees of freedom. And then - the ontological leap of faith - those countable degrees of freedom would be taken to be actual definite particles ... as that is what our causal interpretation most naturally would want to assume.

    But who in particle physics believes in "actual particles" anymore? What we actually know to exist is the statistical formalism that describes the prototypically classical situation. We have equations that cough out results in terms of countable microstates or degrees of freedom.

    So the classical picture and the quantum picture are pretty much aligned on that score. They boil down to the kind of statistics to expect given a physical system with certain global or macro constraints on local possibilities. Going beyond the statistics to talk about "actual particles" - conventional atomism - is a reach.

    So in this way, quantum weirdness should cause us to go back and revisit the classical tale. Classical thermodynamics had already created an approach where atoms were modelled as the limit of states of constraint. The basic degrees of freedom of a system - the very "stuff" it was supposed to be constructed from - were emergent.

    And getting back to the quantum level of the story, Thanu Padmanabhan is pursuing this way of thinking as a way to understand dark energy and spacetime geometry -
    http://nautil.us/issue/53/monsters/the-universe-began-with-a-big-melt-not-a-big-bang

    So Boltzmann's argument - if it can be heated, it has "atoms" - can be used to impute a quantumly grainy structure to spacetime itself.

    But it is not that spacetime is actually composed of fundamental causal particles. Instead, it is the reverse story that regular spatiotemporal causal structure has a smallest limit. There is not enough contextuality to continue to imprint its regularity on events once you arrive at the Planck scale. You are foiled by all directions turning symmetric at that point - principally in the sense that there is no thermal temporal direction in which events can move by dissipating their localised heat.

    So again, the temptation is to read off our successful statistical descriptions the literal existence of hard little atomistic parts. Our conventional notions of causality encourage that. Possibility itself is understood atomistically - which is what makes an added degree of quantum uncertainty rather a mystery when it starts to manifest ... and eventually completely erases any definite atoms by turning everything in sight vanilla symmetric. A quark-gluon fluid or whatever describes a primal state of material being.

    But we can turn it around so that atoms are always emergent. And classical atoms reflect another step towards maximal counterfactual constraint - one that takes a step beyond a looser quantum level of constraint, but then even a quantum level is still pretty constrained.

    It is exactly the story with algebras. Normal classical number systems operate as points on a 1D line. Quantum number systems operate in the one step more complex/less constrained realm of 2D imaginary numbers. Yet there are further algebras beyond - the 4D quaternions and 8D octonions, and then eventually right off into the barely constrained structures of the even higher-dimensional exceptionals.
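
    A minimal sketch of one step up that ladder: quaternions already shed commutativity.

        # Hamilton product of quaternions (w, x, y, z). Each step up the
        # 1D -> 2D -> 4D -> 8D ladder relaxes a constraint; at 4D,
        # multiplication stops commuting.
        def qmul(a, b):
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                    w1*x2 + x1*w2 + y1*z2 - z1*y2,
                    w1*y2 - x1*z2 + y1*w2 + z1*x2,
                    w1*z2 + x1*y2 - y1*x2 + z1*w2)

        i, j = (0, 1, 0, 0), (0, 0, 1, 0)
        print(qmul(i, j))  # (0, 0, 0, 1)  = k
        print(qmul(j, i))  # (0, 0, 0, -1) = -k: order now matters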

    So classical counting uses fundamental particles - 0D points on 1D lines. That is the emergent limit case if you were to fully constrain the freedom of the act of counting. But then quantum counting leaves you chasing your number around a 2D plane, which winds up behaving like an added rotation. When it comes to actual particles - like an electron - you have to in some sense count its spin twice to arrive at its spin number. To fix its state with classical counterfactual definiteness, you have to add back an extra constraint that eliminates the extra quantum degree of freedom it has from "inhabiting" a larger background space of probability.

    Everywhere you look in modern fundamental physics, this is what you find. Classicality is emergent - where you arrive at the end of a trail of increasing constraint on free possibility. So causality needs to be understood now in these same terms.

    And when it comes to quantum mechanics, it isn't even really that "weird", as it is already way more constrained in its dimensionality than the more unconstrained dimensional systems that could lie beyond it in "algebra-space". Quantum mechanics just has ordinary classical time baked into it at a background axiomatic level. That is why it is possible to calculate a deterministic wavefunction statistics for any given initial conditions. A definite basis has been assumed to get the modelling started.

    But to move beyond QM, to get to quantum gravity, it seems clear that time itself must become an output of the model, not an input. And if you give up time as being fundamental, if you presume it to be merely the emergent limit, then of course conventional notions of causality are dead - except as useful macroscopic statistical descriptions of nature.
  • Why shouldn't a cause happen after the event?
    Unpredictability doesn't imply a violation of causality. Without knowledge or control of the underlying physical causes coin flips are also unpredictable.Andrew M

    Right. So what I am arguing is that there are two models of causality here - the conventional atomistic/mechanical one, and a holistic constraints-based one. And there doesn't have to be a metaphysical-strength "violation" if the mechanical story is understood as the emergent limit of the underlying holistic constraints story.

    In a nutshell, all events are the constraint on some space of probabilities. An "observation" is some set of constraints that restricts outcomes to a fairly definite and counterfactual result. So contextuality rules. And you can have relatively loosely constrained states - like entangled ones - or very tightly constrained ones, such as when the whole course of events is being closely "watched".

    Atomistic causality presumes that everything is counterfactually definite from the get-go. Any uncertainty is epistemic. As with a coin flip, it is because you toss the coin without watching closely that you don't see the micro-deterministic story of how it rotates and eventually lands.

    But a holistic causality says uncertainty or indeterminacy is the ontological ground zero. Then it is the degree to which a process is "watched" - contextually constrained by a decohering thermal environment - that places restrictions on that uncertainty. Effectively, in a cold and expanded spacetime, there is such a heavy weight of context that there is pretty much zero scope for quantum uncertainty. It all gets squished out of the system in practice and classical causal sequence rules.

    So there is no violation of the classical picture from taking the holistic route. It simply says that the classical picture was never fundamental, only ever emergent.

    Conceptually, that is a big shift though. It means that cause and effect are entangled in root fashion. When we come to talking about time as being a universal direction for change, a passage from past to future, we are talking about the emergent thermal view. The effective bulk condition. On the quantum microscale, past and future are "talking" to each other in a nonlocal fashion. Decisions an experimenter might make about which constraints to impose on the evolution of an event a million years in the future will then "act backwards" to restrict the possibilities as they looked to have taken shape a million years ago in the past.

    Of course, respecting relativity, this retrocausal impact of constraints on probabilities can't be used to actually do any causal signalling. Time - as an emergent bulk property - does have a conventional causal structure in that sense. But it is a property that is emergent, not fundamental. That is the "violation" of conventional ontology.

    The Schrodinger equation is deterministic and so, in principle, can predict when a particular neutron will decay.Andrew M

    It is only deterministic because some definite constraints have been put in place to limit some set of probabilities. The big problem for conventional causality is that the constraints can be imposed at some distant date in the far future, as with a quantum eraser scenario - while still having to be within the lightcone of those "initial conditions". (So the lightcone structure is itself another highly generalised constraint condition on all "eventing" - causality is never some wild free-for-all.)

    Another quantum result is the quantum zeno effect. Just like a watched pot never boils, continually checking to see if a particle has decayed is going to stop it from decaying. Observation becomes a constraint on its usual freedom.

    This is another "weirdness" from the point of view of causality. But it illustrates my key point. Neutrons that are left alone exhibit one extreme of possibility - completely free and "uncaused" decay. And the same neutron, if constantly monitored, will exhibit the opposite kind of statistics. Now it can't decay because it is no longer free to be spontaneous. It is being held in place as it is by a context of observation.
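
    In toy form (unitary two-level evolution with my own parameters): checking N times during an interval that would otherwise guarantee a flip drives the survival probability towards 1.

        import numpy as np

        # A two-level system that would fully flip over time T if left
        # alone (survival cos^2(wT/2) = 0). Measuring N times resets the
        # state each time: survival becomes cos(wT/2N)^(2N) -> 1.
        w, T = 1.0, np.pi
        for N in (1, 10, 100, 1000):
            print(N, round(np.cos(w * T / (2 * N)) ** (2 * N), 4))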

    So a mechanical view of causality presumes an ontology of separability. The OP experiment's demonstration of indefinite causal order shows that causal non-separability is a more fundamental physical condition. It is direct evidence for quantum holism. Spontaneity rules, but counterfactuality is what emerges, as an environment of constraints gets built up.

    Quantum computing is bringing the issue into focus. Ordinary causality can be described in terms of familiar logic circuits. There everything is strictly determined to follow a "normal" causal sequence. But quantum computing is now developing the kind of process matrix formalism which this latest experiment illustrates. If you relax the constraints, allow paths to be logically entangled, then you get the kind of causal indeterminism reported.
  • Imagination, Logical or Illogical?
    Is imagination logical or illogical?BrianW

    Is association logical or illogical?

    So maybe the basis of imagination doesn't crisply fall into either category. Maybe logic itself is a little mad in its demand for exact determinism that maps one informational state on to another with no loss - and thus no creation - of information.
  • Why shouldn't a cause happen after the event?
    We arrived at causality by virtue of witnessing it happen... over and over and over again...creativesoul

    But with quantum mechanics, what is witnessed is violations of this simple classical model of causality "over and over and over again".

    Why did the neutron decay? If its propensity to decay is steadfastly random, any moment being as good as another, then how could you assign a cause to that effect? It is a spontaneous event and so causeless in any specific triggering sense.
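
    "Any moment being as good as another" has an exact statistical expression - the decay law is memoryless:

        P(\text{survive to } t) = e^{-\lambda t}, \qquad P(\text{decay in } [t, t+dt] \mid \text{alive at } t) = \lambda\, dt

    The conditional rate never changes with the particle's age, so no moment carries any special causal mark.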

    And the retrocausality implied by quantum eraser effects is just as big a challenge to classical locality. The decision the experimenter makes in the future becomes a constraint that affects the probabilities taking shape in the past. There is something spooky acting backwards in time - again, not as a triggering cause, but still as a probabilistic constraint on what is then observed to happen.

    Entanglement happens across time as well as space. And the OP-cited experiment is another example of QM challenging any simplistic cause~effect model of events "over and over and over again".

    So sure, causes being followed by their effects is a model we might impose on reality quite successfully at a classical macroscale of observation. But with QM, we are still seeking to find some other way of understanding causality.

    And we already know it must be the more fundamental model, classicality merely being the emergent description.
  • The human animal
    (all the interior life is dependent on something that isn't interior) ... The meaningful also often seems to be essentially illusory.darthbarracuda

    But that is taking the position that to be meaningful, it would have to come from within in some strong sense. And naturalism would instead see the individual as a plastic state of adaptedness. Being is always contextual. And so meaningfulness is what emerges as a functional or adapted relation between a "self" and a "world".

    Meaning is always going to be exterior to the self as the self is what is contextually being constructed by a functional relationship. From that, adaptedness can be presumed. And merely coping or living in some kind of denial - as in terror management theory - would be the pathological state, not the philosophical baseline.
  • The human animal
    Who needs to snap out of it? Me or them?frank

    Maybe there is truth in both views given that we humans are both biologically and culturally evolving animals.

    So yes. The naturalistic lens applies overall. But humans have also gone beyond conventional nature if we are talking about straight biological/ecological level evolutionary games. We have added social and even artifactual levels of developmental outcomes.

    We are socially constructed - which means morality is "real" in some stronger sense. We can do weird things like decide to be vegetarian because we feel there is some more generic principle at stake. And then there is a whole realm of machinery and technology that we are unleashing that may become its own still higher level evolutionary game.

    So naturalism rules. But naturalism is also creatively open-ended. And humans ceased to be merely biological organisms as soon as they developed symbolic speech and opened up all the creative possibilities that entailed.
  • Are we doomed to discuss "free will" and "determinism" forever?
    If agency rather is viewed as a natural (and social) phenomenon that can only be disclosed as intelligible from an empathetic and engaged participatory perspective, then there is nothing problematic in asserting that the will is a power that is being freely exercised by mature and responsible fellow rational agents.Pierre-Normand

    Indeed. We are neither meat machines nor ensouled creations but the third thing of socially-constructed and biologically embodied agents.
  • Are we doomed to discuss "free will" and "determinism" forever?
    So, merely scrubbing dubious notions (such as the purely mental acts of 'volitions') because they are tainted by their theological origins will leave the roots that currently nourish the philosophical confusions on the topics surrounding rational agency and personal responsibility firmly in place.Pierre-Normand

    Yep. Surely it is Newtonian determinism that sustains the now neurological-level debate?

    Science's mechanical view of nature is what has been at issue. Freewill just becomes the most convincing argument against the modern understanding of the mind being a product of machine-like information processes.

    A good dose of Spinoza - superior by far to Aristotle on this issue - would do everyone alot of good.StreetlightX

    Spinoza is pretty irrelevant to the causal issue here. Aristotelian biology sorts it.
  • Are we doomed to discuss "free will" and "determinism" forever?
    Why yes I am aware of the prevalence of third-rate scholarship on the issue....StreetlightX

    You miss the point. Sure, you have the theistic willing agent coming eventually into hard opposition with scientific determinism during the Enlightenment. But Aristotelian metaphysics already took a position that was more complex than this simple dualism.

    Simple material determinism was itself already wrong for Aristotle. He argued for the reality of chance or tychism as well. And then still there had to be the Tertium Quid - the insertion of agency into the story. Which today we would understand in terms of semiotics or embodied modelling relations - the information dimension.

    So there was something to be said way back then. But also the right kind of answer was on offer, if you are charitably inclined.
  • Are we doomed to discuss "free will" and "determinism" forever?
    'Free will' wasn't even a thing until some boofhead Church father decided to make it the cornerstone of his theology.StreetlightX

    Aristotle was the first philosopher to identify the tertium quid beyond chance and necessity as an autonomous agent power.

    Aristotle knew that many of our decisions are quite predictable based on habit and character, but they are no less free nor we less responsible if our character itself and predictable habits were developed freely in the past and are changeable in the future.

    One generation after Aristotle, Epicurus argued that as atoms moved through the void, there were occasions when they would "swerve" from their otherwise determined paths, thus initiating new causal chains. Epicurus argued that these swerves would allow us to be more responsible for our actions, something impossible if every action was deterministically caused. For Epicurus, the occasional interventions of arbitrary gods would be preferable to strict determinism.

    http://www.informationphilosopher.com/freedom/tertium_quid.html
  • Do Concepts and Words Have Essential Meanings?
    Do words have inessential meanings then? Curious how we manage to communicate ideas and concepts with such alacrity.
  • Interpretive epistemology
    So is taking something as absolutely certain the same as believing it to be minimally uncertain?

    The essence of pragmatism is a willingness to act on beliefs without requiring the absolute absence of doubt. You are doing what is reasonable having applied a process of reasoning. It is the scientific method in a nutshell.
  • Interpretive epistemology
    And QM can in turn quantify that actual uncertainty about the battleship’s location to many decimal places.

    So as I said early on, uncertainty is nothing to lose sleep over if you have the kind of knowledge that minimises it.

    It’s not me that is throwing out the important part of what has been said here.
  • Mental Compartmentalization
    His beliefs are simply at odds with the majority of people, and hence his mind is compartmentalized in such a way that it shows when talking about slavery or races.Posty McPostface

    Surely his beliefs are those of the majority with whom he mixes? That's why compartmentalisation hasn't been much needed as a psychic defence.
  • Mental Compartmentalization
    So, you think it's a matter of performative utility to resort to compartmentalizations of concepts or things?Posty McPostface

    Not sure that you got my point. People don't build a city if they only mean to camp the night.

    Was your OP describing someone who had systematically compartmentalised their rationalisations so as to avoid the logical inconsistencies involved? That would take a lot of prior work.

    Or did they camp on the edge of a pleasant stream and wake the next morning to find it had become a swamping flood? Was there never a compartment and only a hurried packing up the prejudices to go enjoy them somewhere else less challenging?

    Folk who don't like their inconsistencies being fingered just tend to check out because they were never trying to defend some larger coherent territory anyway. Being comfortable is the first priority.
  • Do Concepts and Words Have Essential Meanings?
    I dunno, maybe I've missed something but this move of essentializing (it's a real word, fight me) the meaning of some word doesn't seem to really move the debate along at all unless all parties involved already agree on the same meaning.MindForged

    Meanings are too slippery, too inherently viewpoint-dependent, to be concretely defined. So words are just ways to limit the scope of possible understandings to the point where they can be usefully shared.

    To use words properly, you need to be willing to do two things. Accept they do intend to narrow the scope for interpretation to some habitual conceptual essence. And then also show tolerance or charity for the vagueness that must always remain.

    The sharing of a viewpoint or meaning doesn't have to be exact, complete, or exhaustive. Indeed, there is no other choice than to accept a fit that is going to be fuzzy at the edges, varied in its precise boundaries, creatively open in the understandings it still admits.

    So I see meanings like an unruly herd of cats that you can lock up in a room. And maybe the occasional small dog or big rat gets swept up as well. If it works out well enough for some particular purpose, then that's fine.

    Of course, you think technical words need to obey tighter standards. The proper understandings would be those shared by the technical community that employs them.

    And that is completely reasonable. Yet the same combination of tolerant constraint has to apply. It is quixotic to try to give words completely defined meanings. No definition could exhaust what is essentially the open-ended thing of an act of interpretation. All you can do is create some habitual limit to interpretation. And that then includes the other thing of some habitual limit where it is agreed that differences in interpretation will no longer matter.

    The story is rather different once you move up to an actually mathematical level of speaking. Any scientist knows the difference between trying to understand a concept in words versus actually understanding its equations.

    But is one better than the other in a fundamental way?

    I think here it is interesting to point to a contrast. Ordinary language is good at taking the messy physical world and restricting our focus to some conversationally limited aspect. It suppresses all the other possibilities, but does not require their elimination.

    Mathematical speech on the other hand likes to start with a completely empty world and then start to construct a space of reference. So it is not limiting what already exists. It is starting with nothing and constructing whatever there is to be spoken about. It is an axiomatic approach.

    So one is messy and organic. The other is clean and mechanical. I think the greatest advantage is being able to employ both well rather than take either as being the canonical case. They can complement each other, as each has its strengths and weaknesses.

    The problem with the thread you mentioned is that the difference isn't recognised - and furthermore, that the difference might have to be reconciled if maths indeed aspires to talk about real physical things.

    There are lots of people who reason about the world in folk terminology. And then a lot who are trained to reason in technical terminologies. But those technical terminologies inhabit their own constructed worlds, as I say. So there is yet another step to show that the constructions really can say anything complete about the real world when they come to discuss it.

    The technical approach wipes the slate clean so as to build up an understanding as a set of elements. So how does it ever discover that it missed out key possibilities? Ordinary language only sweeps all the mess under a carpet. Eventually you could still stumble across it.

    So you could defend a commonsense notion of infinity, or a technically constructed notion of infinity. But especially for a scientist or philosopher, the fruitful thing would be to allow the two styles of language to play off each other - accept they are in tension for good structural reason. The definiteness of the one can complement the open creativity of the other.

    Having said that, using ordinary language to create shared understandings rather than defend "alternative facts" seems too much to ask of many posters. So I can understand the basic frustration you are expressing. ;)
  • Mental Compartmentalization
    What I'm wondering here, is how does compartmentalization occur, also?Posty McPostface

    Does one build the compartments or does one merely fail to build the generalised coherence?

    The white nationalist would seem to be the standard thing of back-filling a justification for your actions or attitudes by constructing some story. So you inherit a prejudice from your social context and then explain it whatever way you can get away with.

    It is like setting up a small defensive encampment wherever you find yourself with whatever is at hand. Bricolage.

    The tougher thing would be to be completely systematic in your thinking - to assimilate everything to a thought-through universal structure.

    So it is not that people have to construct a lack of coherent connections. They just get away with not having to live life according to a generally coherent philosophical position.
  • Interpretive epistemology
    You use "psychology" or variant six times.tim wood

    Yes, to emphasise that there is a brain involved. So we know experience of the world is indirect in that it involves the kind of cognitive processing that science reveals.

    Pragmatism, as I read your posts, is a model, an explanation. And it works. But at a price. You seem to surrender whatever must be surrendered in favour of pragmatism.tim wood

    You say there is a price to pay. But what exactly? What am I having to surrender? Let's see if it is something I actually would value.

    Are we talking about absolute certainty? If so, surely not to have to worry about perfection is a form of liberation.

    I'm absolutely certain that with respect to certain axioms, that 2+2=4, and more besides. It's all a giant if-then, but within the if-then we can have our certainty.tim wood

    Sure. We can imagine perfect machines that are so constrained in their actions that there are no possible uncertainties in their outcomes. So you can have your absolute certainty about physics-free syntax. If you say the bishop only moves on the diagonal, the bishop only ever moves on the diagonal.

    It is the step between your syntactical reality and your physical reality that becomes problematic. Is nature always so linear that 2+2=4 even as a modelling description of some set of natural events? If you measure a coastline with a ruler, don't you get a different result depending on the size of that ruler?
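
    Richardson's coastline law makes that last point quantitative (the prefactor here is made up; D of about 1.25 is the classic figure quoted for Britain's west coast):

        # Measured length grows without limit as the ruler shrinks:
        # L(s) = c * s**(1 - D), with fractal dimension D > 1.
        c, D = 10.0, 1.25
        for s in (100.0, 10.0, 1.0, 0.1):
            print(s, round(c * s ** (1 - D), 2))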

    So the realm in which you can claim any absolute knowledge is a highly artificial one - our human inclination to imagine a world of perfect, rule-bound, machines. The lack of uncertainty is precisely what was made axiomatic. It was an input, not an output.

    But would you stick your neck out and say physical reality is itself axiomatically certain? Quantum mechanics tells us it is not. The main axiom of an absolutely deterministic mechanics - the principle of locality, or local realism - has had to be abandoned.