Comments

  • Who here believes in the Many World Interpretation? Why or why not?
    What I was told on Physics Forums is that whether the particles are fired one at a time, or whether they are fired all together, the end result is the same. So I am saying, it can't really be a result of 'interference', can it?Wayfarer

    So are you saying the two paths of an individual quantum event don't interfere due to superposition?

    And bear in mind that we are talking about the interference of probability waves. And also that interference is about the additive or cancelling effect of wave peaks and troughs arriving at a point of space and time - the detector screen.

    Given that, in what sense is it not analogous to wave interference in classical mechanics?

    And given that, why would you expect the rate of producing individual events to make some kind of difference to the accumulation of an interference pattern at the detector?
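
    For what it's worth, the rate-independence is already visible in the formalism: each detection is an independent draw from the same Born-rule distribution |ψ₁+ψ₂|², so the accumulated pattern cannot depend on how the events are spaced in time. A minimal sketch (Python; the slit separation, screen distance and wavelength are arbitrary illustrative numbers, not any particular experiment):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Far-field two-slit pattern: the two path amplitudes add, and the
    # detection probability is |psi1 + psi2|^2, i.e. cos^2 of half the
    # path-difference phase. All units are arbitrary.
    d, L, lam = 10.0, 1000.0, 1.0          # slit separation, screen distance, wavelength
    x = np.linspace(-100, 100, 401)        # positions on the detector screen
    p = np.cos(np.pi * d * x / (lam * L)) ** 2
    p /= p.sum()                           # normalise into a sampling distribution

    # Fire particles strictly one at a time: every detection is an
    # independent draw from the SAME distribution, so the accumulated
    # histogram is identical whether events arrive nanoseconds or
    # centuries apart - nothing in the sampling depends on the rate.
    hits = rng.choice(x, size=20000, p=p)
    counts, _ = np.histogram(hits, bins=401, range=(-100, 100))
    ```

    With these numbers the fringes build up event by event, with maxima at x = 0 and ±100 and nulls at x = ±50, regardless of the firing rate.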
  • Who here believes in the Many World Interpretation? Why or why not?
    Do you get the complementarity principle? Is one description right and the other wrong? Or are both a reflection of some chosen measurement basis?
  • Who here believes in the Many World Interpretation? Why or why not?
    Rather than getting upset, show that you understand what you are talking about.

    Again, in what way does the event by event accumulation of a twin slit interference pattern (or even single slit diffraction pattern) depend on the rate at which one event follows another? Where does the formalism require such a dependence?

    You seem to think that the interference pattern is caused by some kind of dependency of one outcome on all the others - as if the particles were all physically interfering with each other's statistics in some kind of spooky, nonlocal, time and space defying fashion.

    But that is wrong. It is about how each event is affected by its (observational) context. So it is about a single event and the exact set-up of the apparatus for that run. And it is the human experimenters who control the state of the emission source and the apparatus, so ensuring that the interference pattern will accumulate over multiple trials replicating "the same event".

    What's annoying is that this is the issue that Orzel was highlighting - the impossibility of perfect repeatability in the real thermal world. Something is always slightly different about the world. And that to me is a promising angle from which to attack the absolutism of MWI.
  • Who here believes in the Many World Interpretation? Why or why not?
    The pattern is dictated by the wave function. That will be so regardless of which apparatus or set-up you're using. The equation which describes the distribution is not dependent on the apparatus, although I imagine that the particulars of each set-up might produce variations because of the distances involved etc.Wayfarer

    ...and you can't see how you just contradicted yourself?

    In the real world, every set-up is particular, and so a particularisation of the wavefunction equation.

    But the underlying determinative cause is the wave equation itself - however the wave equation is not a material cause, as it is not something which exists, it's simply a pattern of probabilities, as the name says.Wayfarer

    The equation is the useful generalisation or abstraction - true only in the limit - which describes no actual world until some numbers are plugged into it, just like the laws of motion.

    You are making the mistake of reifying it and then treating that reification as a mysterious further concrete part of nature. Platonism redux.

    So it is not simply a pattern of probabilities until some actual numbers have been plugged into the equation.

    So I think the real sticking point is, how can a probability be causally efficacious. Isn't that what the whole argument is about? That's what Einstein kept saying to Bohr - 'God doesn't play dice'. He made a slogan out of it.Wayfarer

    The sticking point is that probability is irreducible. The wavefunction is the tool that limits the extent of the weirdness in useful fashion. But in the end, it can't be eliminated by just an equation. The equation - even if total information is plugged into it - can only point its finger to roughly where to expect a particle to be. So the question is how does that residual uncertainty ever get eliminated by a "real collapse".

    The reason why the rate-independence is significant, is that the behaviour of individual 'particles' (not that they're actually particles) is described by the wave-function, whether they're together or separate. In other words, whatever is causing that, is independent of space/time, or, that duration and the proximity of 'particles' are not factors in determining the result. Or so it seems to me.Wayfarer

    Well you are wrong. It is an important point that the particle "goes both ways" even if it was a one-off, never to be repeated, experiment.

    The problem is you can't see that just from observation of the one event.

    That's not confusing super-position and entanglement, although what I'm starting to think is that the 'rate-independence' of the pattern, and the so-called 'entangled states', are actually two aspects of the same underlying cause.Wayfarer

    So with next to no demonstrable understanding of the theory, you have convinced yourself you have stumbled on the missing link which has eluded a century of physicists?

    Isn't that the definition of crackpot?
  • Who here believes in the Many World Interpretation? Why or why not?
    So if the pattern is not rate-dependent, then by implication the cause of the pattern is not a function of time.Wayfarer

    You mean the pattern isn't a function of other particle histories. The pattern is simply a function of the fact the same maze, the same apparatus, imposes its constraints on a sequence of highly identical events.

    So it is the design of the system that makes it rate-independent. You could stick the equipment in a cupboard for a thousand years, pull it out, and the quantum statistics would be unchanged.

    Perhaps you are confusing entanglement and superposition in your understanding of what is going on?

    ...http://backreaction.blogspot.co.nz/2016/03/dear-dr-b-what-is-difference-between.html
  • Who here believes in the Many World Interpretation? Why or why not?
    The question seems to be: how can a particle 'interfere with itself'?Wayfarer

    The issue of course is that we have no good explanation in terms of concrete commonsense notions. So you are not going to get the kind of answer you are seeking in terms of things you think you understand.

    But such caveats aside, there is no particle travelling through the apparatus. Instead there is an evolving wave of probability of detecting a particle that reflects the shape of the apparatus. If there are two slits that the wave has to pass through, then it "goes through both" and you get the resulting wave-like interference effect.

    Remember that you get a wave-like diffraction effect even if only a single slit is open. The slit causes particles to spray out across the detection screen. If the particles acted exactly like particles, they ought to just go straight through and burn a crisp hole in the one spot, not get smeared out across the screen.

    So that is the wave~particle duality. We know that only one particle gets emitted, one particle gets detected. But on its travels, it acts like a classical wave and responds to the shape of the experimental apparatus accordingly.

    It seems as if the probability distribution is itself like the so-called 'pilot wave' - in other words, it determines all the possibilities, but only in the sense of constraining the possible paths that any particle takes, whether individually or as part of beam.Wayfarer

    But don't forget the causal role being played by the apparatus here. There are some specific - classical - set of constraints being placed on the particle event. That may include experimenters making complicated delayed choice measurements with half silvered mirrors, or whatever.

    So in the decoherence view, we can see this as being about the hierarchical nested constraint of quantum potential. The causality is contextual.

    Start with a naked vacuum - the Universe in its most unconstrained state, without any kind of experimental apparatus. You still have some probability of an emission and absorption event. But it would be very random and patternless. The wavefunction would represent very little causal or contextual information beyond some probability of a patch of vacuum having an energetic fluctuation.

    But now start to assemble an apparatus. You have a photon gun, or some other particle producing machine that heats up and is designed to produce quantum events at some controllable rate. Now you have a wavefunction that is becoming quite highly constrained by its "classical" context. It is like corralling pigs in a pen and then creating a small gate which you can open. Pigs will start to fly out in a predictable direction.

    Then start adding in slits. This is like creating another pen with another gate. The pigs that happen to make a straight bee-line towards the next gate will fly through, but are then free to bend off once they are past. If there are two such gates, the pigs will form an interference pattern as they eventually smash into some distant wall set at an appropriate angle across their path of flight - a further act of constraint.

    So the wavefunction itself is the product of some environmental arrangement, some set of constraints that give shape to a "process". And the collapse is then just taking that constraint a further step. It is placing an end-stop by insisting that absorption happens "right now" due to some overwhelming constraint, like a particle detector screen.

    The quantum weirdness then comes in because no detector can ask every question of nature that you might expect of the one event.

    In the classically-imagined world, the particle (or wave) would have some exact position and momentum at all times. But in the quantum reality, you can't answer both questions at the same time with complete certainty. So the weirdness lies in the fact that the environment can causally constrain events up to a point. But that ability to create exactness runs out before classicality believes it should.

    And as I say, even a single slit results in quantum uncertainty. The narrowness makes it certain that any particle had to come through it. But at the detector, you have to pay for that certainty by losing certainty about the momentum.

    There you are standing waiting for your pigs to come flying straight through your maze of gates. But while you know the pigs can only come through the gate, they are then free to veer off randomly once their path is not constrained. And so veer off they do.
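
    That single-slit tradeoff can also be put in Fourier terms: squeezing the position wavefunction into a narrow slit necessarily widens its momentum distribution. A toy sketch, assuming a top-hat wavefunction across the slit and ħ = 1 (my illustrative numbers, not a specific experiment):

    ```python
    import numpy as np

    HBAR = 1.0

    def central_lobe_halfwidth(a):
        """Half-width of the central momentum lobe for a particle known to
        have passed through a slit of width a. A top-hat position
        wavefunction has momentum amplitude phi(p) ~ sinc(p*a / 2*hbar),
        whose central lobe ends at the first zero, p = 2*pi*hbar / a."""
        p = np.linspace(1e-6, 50.0, 200_000)
        # np.sinc(x) = sin(pi*x)/(pi*x), hence the extra pi in the argument
        prob = np.sinc(p * a / (2 * np.pi * HBAR)) ** 2
        return p[np.argmax(np.diff(prob) > 0)]   # first turning point = first zero

    wide = central_lobe_halfwidth(2.0)     # wider slit -> narrower momentum spread
    narrow = central_lobe_halfwidth(1.0)   # halve the slit -> double the spread
    ```

    Halving the slit width doubles the momentum spread: the certainty about where the particle was is paid for at the detector, exactly as described above.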
  • Who here believes in the Many World Interpretation? Why or why not?
    In other words, whether the protons are fired singly or as a beam, makes no difference to the interference pattern.Wayfarer

    The interference is from the fact that the particle can take two possible paths through the twin slits. So it is about the particle and the apparatus, not the particle and all the other particles.

    And the fact that you see a particle hitting the detector screen is the destruction of that wave function. The particle shows up at some place, and you can attach a probability to that place ranging from very low to very high.

    So to "see" an interference pattern on the screen requires we collect some reasonable number of wavefunction collapses - the story of many individual particles' trips through the probabilistic maze. But the point of the experiment is that even a single particle will behave like a wave - a superposition of a pair of probability waves.
  • Who here believes in the Many World Interpretation? Why or why not?
    Well, glad we got to the bottom of that, although it directly contradicts an answer you gave just above it.Wayfarer

    That was another Tom in another world breaking through. By his own logic, his every possible state of belief is a real macrostate. So it hardly matters if he is contradictory. That's going to be the case no matter what. :)
  • Who here believes in the Many World Interpretation? Why or why not?
    My apologies, I misread where Wayfarer's quote came from. It came from an article that Orzel did NOT like apparently.tom

    So I'm guessing you didn't even read that Orzel link you posted in rebuttal?

    I actually asked a serious question. You might have had some worthwhile points to make about Orzel's angle.
  • Who here believes in the Many World Interpretation? Why or why not?
    Everett (actually bare quantum formalism) claims that any environment that interacts with the cat in superposition will itself enter a superposed state,tom

    Not sure why entering a macro superposition state is any less magical than exiting it.

    The bare quantum formalism still requires its "observer", even if the tacking on of the further formalism of statistical mechanics - in the guise of the decohering environment - is certainly the way to deflate the notion of the "observer".

    So collapse folk have the problem of getting rid of entanglement. No-collapse folk have the problem of initiating it. Sure the wavefunction evolves in the required fashion. But observers are then the necessary element to create the context that results in some actually specified wavefunction.

    Meanwhile decoherence as a general machinery helps out both in making it clear that "observation" is not about conscious human experimenters but about the concrete existence of a thermalising environment. The Universe has the means to "observe itself" in that it has a definite past that acts as a general constraint on the indefiniteness of its future.
  • Who here believes in the Many World Interpretation? Why or why not?
    Nope, nowhere in MW is the claim made that measuring the spin of an electron means "you have to build an entire parallel universe around that one electron, identical in all respects except where the electron went".tom

    And how does that connect with what Orzel (or I) have argued?

    The point is that even if you step back from many actual worlds - Tegmark's parallelism - MWI proponents still seem to believe in crisply real branches (and branching points). But it makes more ontological sense to treat that too as a mathematical idealisation.

    So the argument is that the universe, as a whole, could never have the definiteness required to create itself as some entangled mass of matchingly definite world branches. You can get something like that occasionally - with a sufficiently isolated system. But even ordinary quantum experiments have a bit of jiggery-pokery going on in that they don't control for the fact that actual environmental isolation is physically impossible.

    And physical reality ought to trump mathematical idealisation in this regard.
  • Who here believes in the Many World Interpretation? Why or why not?
    Orzel is also wrong, by the way.tom

    So it is clear Orzel indeed has you stumped because you can offer no analysis at all. But just saying "nope" is not going to get you out of the hole here. ;)
  • Who here believes in the Many World Interpretation? Why or why not?
    But insofar as physics is purported to be about what is real, then dorm-room bull is inevitable, as far as I am concerned.Wayfarer

    But physics can't claim to talk directly about what is real. All it can claim is to talk in a fashion that is systematically constrained by "the evidence". So it is ultimately a social practice. And its philosophy accepts that. But what a physicist can rightfully say is that s/he is better constrained by the evidence than most of the people who want to waffle on about metaphysical reality, employing half-baked traditional belief systems.

    So the real issue here - as I believe Orzel illustrates - is that people take hardline positions on quantum interpretations because they are locked into either/or binary thinking. It must be the case either that wavefunctions are ontic or epistemic - a definite fact of the world, or a useful fiction of the mind. The same with wavefunction collapse. Or in a more general way, either classicality or quantumness is the illusion, the other the truth. Either everything is secretly hard and definite behind the scenes, or it is fuzzy and probabilistic - an eternal spawning confusion.

    So there are two familiar alternatives when it comes to existence - actuality vs potentiality, being vs becoming. And a quantum interpretation must settle ultimately into one or other general category.

    But why not instead see those two choices as the complementary limits on the notion of existing? Reality is never fully definite, nor fully probabilistic, always somewhere in between the static hardness of actuality and the soft fluidity of uncertainty.

    So there are two ways of looking at quantum weirdness. Either you can take an internalist perspective - as I do - and see the classical world as a system that confines it and dissipates it. Or you can take an externalist view where quantum weirdness is essentially unconfined and spills out to take over everything. You get people saying the entirety of existence is not just a single giant superposition, but one that branches in unrestrained fashion, growing forever more byzantine.

    Now the mathematics of quantum theory doesn't provide any machinery to collapse the wavefunction. So there is nothing in the bare formalism to constrain all the world branching, all the ever-expanding weirdness.

    But as Orzel argues, properly speaking, this weirdness applies strictly only to isolated systems - parts of the world that are essentially disconnected from the thermal bulk. To get entanglement and quantum coherence, you have to be dealing with the very small and the very cold. And that takes special equipment. Generally the world is too hot and messy for quantum effects to manifest. The weirdness is always there, but classicality is about it becoming heavily suppressed.

    So as I say, actual quantum weirdness can exist only at the very limit of the classical. The wavefunction defines that boundary where hot messy contextuality eventually peters out and all that is left - trapped inside a small and isolated spatiotemporal region - is your fundamental-level indeterminacy.

    So yes, indeterminacy exists. We've manufactured it by very careful control over experimental set-ups that produce the level of thermal isolation that permit it to be the case. But to then do the MWI trick of claiming "unconfined isolation" would turn the whole universe into a giant unbroken and coherent superposition is to ignore how the world really is - so hot and messy that indeterminacy is always and everywhere in practice highly confined.

    And the corollary is that the same applies to classical reality, the hot and messy bit. It doesn't have hard solid existence in the way that conventional materialist metaphysics imagines. It is everywhere and always that tiny bit quantum and indeterminate.

    And the whole shebang has evolved. At the Big Bang, the Universe was basically in a generalised quantum state. It was 99.999% quantum, only fractionally classical. And now that the Universe is so cool and large, it has become 99.999% classical - at least at the scale we care about, the interactions between big and still warm lumps of mass. This is the era of the hot and messy.

    Roll forward to the Heat Death and the balance shifts back to the quantum pole of existence. The contents of the Universe will only be describable in terms of a black-body quantum fizzle of ultra-cold photons being emitted by the cosmic information horizons.

    From a MWI point of view, calling the Heat Death a multiplicity of worlds in superposition would be like comparing scrambled bags of sand. Technically you might claim every bag to represent some unique possible state or arrangement of sand grains/quantum events. But in fact every bag is just another bag in a way that makes no useful difference. Every bag of sand world is unexcitingly similar due to the thermal inevitability imposed by the second law.

    So my view is that this is the best metaphysical basis for interpretation - the real and possible are not two categories, one of which must be made to stick, but instead they represent the complementary bounds that form existence. The classical and the quantum mark the two ends of a spectrum. That means neither reality nor possibility are going to be 100% pure states.

    And yet all interpretations try to force the issue and give an absolute categorisation in terms of various binaries. That is why all of the interpretations seem to be saying something right, yet none of them could ever get it all right because of the way they go about striving after a single definite metaphysical categorisation.

    But now that quantum theory is being married to thermodynamics and information theory, now that it is importing a proper systems ontology in which you can model the kind of contextuality and scale effects that I'm talking about, things seem to be getting somewhere.

    That is why I am a fan of decoherence even if I don't go along with the fanatical MWI view which wants to treat actual decoherence as a 100% illusion (resulting in completely unconfined superpositions), whereas I say that in our hot and messy classical reality, decoherence is pretty real in being only 0.0001% - or some sensible fraction - an ontic illusion.

    So I would be an effective realist about both the collapse and wavefunction issue. Determinacy can approach 100% at one end of the scale, indeterminacy can approach 100% at the other. And neither in fact ever completely rules. If we are going to construct a metaphysics of existence, then the fact that everything is always a messy mix, some balance on a spectrum, becomes the new foundation for interpretations.
  • Who here believes in the Many World Interpretation? Why or why not?
    Orzel's understanding of Many Worlds has improved over the years:tom

    Glad you think so. But I note you are avoiding saying whether you agree with his essential point. Whereas many MWI proponents get quite fundamentalist about universal wavefunction realism, Orzel is treating it more as a matter of pragmatic limits. It is pretty much impossible in practice to repeat measurements in the exact fashion that would give you a single crystalline mass of sharply branched world-lines. That version of MWI - which is pretty widespread - is a misunderstanding.

    How do you measure an interference effect? Well, you look for some oscillation in the probability distribution. But that’s not a task you can accomplish with a single measurement of a single system– you can only measure probability from repeated measurements of identically prepared systems.

    If you’re talking about a simple system, like a single electron or a single photon in a carefully controlled apparatus, this is easy. Everything will behave the same way from one experiment to the next, and with a bit of care, you can pick out the interference pattern. As your system gets bigger, though, “repeated measurements of identically prepared systems” become much harder to achieve. If you’re talking about a big molecule, there are lots more states it could start in, and lots more ways for it to interact with the rest of the universe. And those extra states and interactions mess up the interference effects you need to see to detect the presence of a superposition state. At some point, you can no longer confidently say that the particle of interest is in both states at once; instead, it looks like it was in a single state the whole time.

    And that’s it. You appear to have picked out a single possibility at the point where your system becomes too big for you to reliably detect the fact that it’s really in a superposition.

    http://scienceblogs.com/principles/2015/02/20/the-philosophical-incoherence-of-too-many-worlds/

    And note how he ends for good measure....

    (Finally, the above probably sounds more strongly in favor of Many-Worlds than my actual position, which shades toward agnosticism. But nothing makes me incline more toward believing in Many-Worlds than the gibberish that people write when they try to oppose it.)

    So he is trying to challenge the more conventional MWI interpretations.
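
    Orzel's point about larger systems can be given a toy quantitative form: uncontrolled interactions with the environment act like random phase kicks between the two arms, and the fringe visibility collapses exponentially as the phase noise grows. A hedged sketch of that pure-dephasing picture (Gaussian phase jitter; the noise levels are my illustrative numbers, not Orzel's):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fringe_visibility(sigma, trials=200_000):
        """Two-path interference P(theta) = cos^2((theta + delta)/2),
        averaged over Gaussian phase kicks delta of std sigma (radians).
        Analytically the visibility decays as V = exp(-sigma^2 / 2)."""
        theta = np.linspace(0, 2 * np.pi, 64)      # scanned phase across the pattern
        delta = rng.normal(0.0, sigma, size=(trials, 1))
        p = np.cos((theta + delta) / 2) ** 2       # one run per noisy phase kick
        mean = p.mean(axis=0)                      # the accumulated pattern
        return mean.max() - mean.min()             # visibility (mean level is 1/2)

    clean = fringe_visibility(0.05)   # well-isolated system: near-perfect fringes
    noisy = fringe_visibility(3.0)    # 'big, warm' system: fringes washed out
    ```

    The superposition is still there in the noisy case; you have simply lost the practical ability to detect it, which is exactly the crossover Orzel describes.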
  • Who here believes in the Many World Interpretation? Why or why not?
    Now, I've realised what I think is wrong about this view. This is that science views reality through theories and hypotheses. And what I think Einstein is forgetting (and, hey, he's Einstein, so I know I'm saying a lot!) is that the kinds of purported facts that he is arguing about are only disclosed by a rational intelligence who is capable of interpreting the facts. So 'the facts' - and by extension, even the moon - don't exist irrespective of whether one is looking or not. 'Looking' is inextricably intertwined with what is being observed. That has always been the case, but it took 'the observer problem' for it to more or less come up and punch us in the nose!Wayfarer

    I think you are agreeing with me on pragmatism. And as I say, that is what comes through from Heisenberg. But also Einstein got it in that he said (in a co-authored book) that scientific concepts are free creations of the human mind. So his issue was more one of metaphysical principle. He was loath to sacrifice a concept that had worked as well as the principle of locality.
  • Who here believes in the Many World Interpretation? Why or why not?
    We went along with collapse was real, and it was the "observation" which made it real.Moliere

    But what does it mean for the collapse to be real (and the wavefunction not real) in Heisenberg's Kantian view? He talks about an epistemic collapse - a change in your state of knowledge. So the emphasis is on pragmatic modelling. We can only have knowledge about reality via our conceptions.

    Our models encourage us to create certain measuring devices and experimental set-ups with which to probe. But whatever we learn is always in terms of those familiar conceptions. We don't get outside our own self-created observer bubble to grasp the thing-in-itself. All we have is a system of signs that seems well behaved. We can stick our thermometer into the bath and read off some numbers. We understand that combination of events as evidence there exists "a temperature". Likewise we can probe the quantum realm with quantum set-ups and read off observations in terms of the behaviour of a particle. Or of a wave. Depending on the choices made as the observer.

    So you set up questions in a certain way - a human way and not necessarily nature's way. That will result in a reading, a sign, that "collapses" your ignorance. But what went on "out there" is another mystery.

    Heisenberg: ....we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning. Our scientific work in physics consists in asking questions about nature in the language that we possess and trying to get an answer from experiment by the means that are at our disposal.

    So yes. CI is often considered to claim collapse realism. But Heisenberg appears to aim at a sophisticated epistemic position that instead takes as primary the Kantian impossibility of naked realism in any form.

    And the mystery unsolved is then why the quantum mode of inquiry works so "objectively". The collapse of our ignorance when we conduct our probes is so reliable that it tempts us to make a stronger causal connection than our epistemological limitations would warrant. We want to say we ourselves collapsed the wavefunction by touching reality with our minds. Or that collapse really is objective and caused by the physical aspect of our probing - the way we jarred the wavefunction with our material devices.

    Thermal decoherence seems to offer now a fairly natural view of how macroscale observers can act as the decohering contexts that "collapse" quantum-scale possibilities. But unfortunately decoherence is quite tied up with MWI fundamentalism about wavefunction realism and no collapses.
  • Who here believes in the Many World Interpretation? Why or why not?
    We always differentiated between instrumental interp from CI, though.Moliere

    As you say, even if you go back to Bohr and Heisenberg, you can't recover some pure CI position. And perhaps I should have said pragmatist or logical positivist rather than instrumentalist initially - even though instrumentalism is only Dewey and Popper trying to strip Peircean pragmatism of its "unfortunate" metaphysical leanings.

    So I find that when I talk to modern proponents of CI, they are essentially arguing pragmatism - all we can know is that the maths sure works. And then the metaphysics that lingers at the back of this is the idea that the mind of the observer works on the classical side of the equation, so something that sure looks like a definite collapse of quantum weirdness must be the case in that we manage to extract classically understood measurements from the world (within the bounds of uncertainty).

    So even the instrumentalism relies on a background metaphysics which I would say should be troubling. And it certainly was for Peirce, who was working on a "fuzzy logic" view of metaphysics for just that reason.

    Generally, I struggle to draw sharp lines between interpretations. But when given some central issue - like wavefunction collapse - people are going to divide quite logically into the three camps of (1) it must do, (2) no, it can't, and (3) can't know so learning not to care.

    CI as people currently use it seems more 3 than 1 these days.
  • Who here believes in the Many World Interpretation? Why or why not?
    What bugged Einstein is his native faith that reality was 'there anyway', whereas the Copenhagen advocates all said that in this matter, the line between observer and observed was no longer clear-cut. And I'm with them on that, as far as I can understand it.Wayfarer

    CI itself comes in a rainbow of variants. But the central idea in my view is that in the end what we can be sure of is that we don't know how to define what looks like a necessary division between observers and observables when it comes to quantum scale observations of observables. So we know there is an explanatory gap, but can see no watertight way to fill it.

    So CI says there is a line for sure. We sit on its classical side. And where that line gets drawn to rule off the quantum side is something we can't answer.

    And my response to that is that it is this notion of there being a definite line which is questionable. Instead, I see the classical and the quantum as complementary models of the two perfect limits on existence. So CI gets it wrong in persisting in believing in a dividing line. Although, as I say, CI comes in so many varieties that it can be seen as a "shrug of the shoulders" instrumentalism even about hard line vs fuzzy line ontologies. Who cares, because we can use the maths to deal with the world and build great machinery?

    Then Peirce comes in here because he was already dealing with this precise problem - the nature of observers. And he extended that epistemological question to make it an ontological answer. His semiosis is a way of defining soft dividing lines in worlds where observers and observables are fundamentally entangled, but can - thermally - develop robust habitual divisions.

    So you keep claiming Peirce to be an idealist - someone somehow arguing that divine mind conjures the world into being. And at stages in his life, he may have well wanted to believe that.

    But if you look at his actual metaphysics - his semiotic approach - then he was talking about signs rather than minds. Observers weren't localised experiencers but contextual habits of interpretance. The difference might be subtle, but it is also huge.
  • Who here believes in the Many World Interpretation? Why or why not?
    Rather than being defensive, why not critique Orzel from your point of view? That would be more interesting.
  • Who here believes in the Many World Interpretation? Why or why not?
    That's close to my own thinking, but was obviously written before the discovery that the second law of thermodynamics is violated more frequently the smaller anything becomes and completely ignores the Quantum Zeno Effect.wuliheron

    I'm hardly ignoring the quantum zeno effect. Remember that it too requires "perfect watching" to stop the particle ever decaying. So while it is remarkable that we could slow down a decay, it is impossible to create the energy-demanding experimental conditions that would stop a decay altogether.

    The simplest explanation is that time can flow both forwards and backwards because a context without significant content and any content without a greater context is a demonstrable contradiction.wuliheron

    I'm all for some version of retrocausality. But you are invoking a globally general version that again betrays perfect world thinking and not the fuzzy logic approach that I would take.

    So our bulk model of time and causality is best described by this kind of thinking I would say - http://discovermagazine.com/2015/june/18-tomorrow-never-was

    This thermal view of time says the past is pretty much solid and decohered, the future is a bunch of open quantum possibilities. And then quantum retrocausality would be about very local and individual events which are criss-crossing this bulk picture.

    The bulk seems definitely sorted in having a sharp split between past context and future events. But on the fine grain, past and future are connected because - as with quantum eraser experiments - the context can take a "long time" to become fixed in a way that then determines the actual shape of the wavefunction. It is only in retrospect that we can see all that went into its formation.

    So again, rather than time/causality being either absolute in a uni-directional classical sense, or instead absolute in a quantum non-local or "both ways" sense, the real world dangles somewhere between these two perfect limits. It emerges as the equilibrated bulk behaviour.

    It's enough to make Zeno's head spin, but it's a more Asian metaphoric take on the issue.wuliheron

    The trouble with Asian metaphors is that culturally they lack mathematical development. So they are inherently fuzzy in being verbal descriptions. At best, using proto-logical arguments, they are proto-mathematical.

    So yes, it is my own argument that all early civilisations shared a fairly organic, symmetry-breaking, perspective on metaphysics. There are strong parallels between Anaximander and Tao.

    But you can't claim quantum physics to be the triumph of the Eastern way over the Western way. It was Zeno who crystallised the mathematical paradoxes of a way of thinking, and thus made possible their equally sharp counter-reaction. You couldn't develop calculus unless you knew there was some sharp problem when it came to differentiating a curve. And you couldn't develop quantum mechanics if Lagrangian mechanics wasn't already a result of being able to do such differentiation.

    So Zeno sparked something usefully concrete in Western thought. It allowed us to speak mathematically about the opposing limits on being. Asian philosophy just spoke about the fact that Yin and Yang gave you the I Ching - a proto-maths that was too fuzzy to ever go anywhere after that.

    So I don't undervalue Eastern metaphysics. But there are reasons why Western metaphysics - in its built-in capacity to be "utterly wrong" via axiomatic mathematical claims - became the actually productive intellectual tradition.

    Zeno made everyone's head spin for the next 2300 years. Asian metaphysics has since gone down the nearest toilet/rabbit hole even in Asia. Universities over there don't teach quantum theory any differently.
  • Who here believes in the Many World Interpretation? Why or why not?
    You do realise that that makes you a super-strength pragmatist? :)

    You aren't thinking of yourself as a CI proponent in the "consciousness causes collapse" sense? - https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Wigner_interpretation
  • Who here believes in the Many World Interpretation? Why or why not?
    The branching is an artifact of quantum mechanics still being formulated using classical mathematics when all the evidence, including macroscopic evidence, indicates nature is fundamentally analog and what is required, at the very least, is some sort of fuzzy logic variation on the excluded middle. That includes modern quantum mechanics which are formulated as wave mechanics according to the Schrodinger Equation.wuliheron

    Yep. But then MWI seems to be an example of applying fuzzy logic interpretations to those successful mathematical formalisms. Which would be ironic.

    So the maths can't provide an actual (ie: real) wavefunction collapse. Your interpretive choice then is whether (1) to affirm that there must be a collapse to one-world classicality that has so far escaped our mathematical models, or (2) to argue for a no-collapse reality and ride that to wherever it logically leads, like MWI, or (3) to argue for strong agnosticism about the true nature of reality, as with an instrumentalist version of Copenhagen.

    And we are seeing MWI being defended in very fuzzy terms with talk of interactions, correlations, interferences, branches, and other such stuff happening causally across world lines. So concrete sounding mechanisms are being invoked, while at the same time the latest decoherence versions of MWI seem to get squirrely about what any of this talk means in a definite physical sense. The other worlds "don't really exist", just as the collapse "doesn't really happen".

    So the charitable view is that MWI is part of the exercise of giving up fairly completely on our classical expectations about how reality works. In some way, the whole of existence is a thermal ensemble of evolving possibility with an emergently classical character. But nothing can be completely pinned down or localised.

    So in some sense the very notion of "to exist" has to reflect that reality is fundamentally contextual and can feel the shadowy presence of all its alternatives - all its possible worlds - even as it hovers fitfully around some general emergent equilibrium balance of that ocean of possibility.

    In that light, both hard and definite collapse scenarios, and hard and definite no-collapse/many real worlds scenarios, are too strong as interpretations. Existence is to be found somewhere between the bounds of the one and the many.

    An approach to MWI I find appealing is Chad Orzel's - http://scienceblogs.com/principles/2008/11/20/manyworlds-and-decoherence/

    He emphasises that in a twin slit experiment, every photon has a slightly different thermal history or context as it passes through the array.

    What you get depends on exactly what went on when you sent a particular photon in. A little gust of wind might result in a slightly higher air density, leading to a bigger phase shift. Another gust might lower the density, leading to a smaller phase shift. Every time you run the experiment, the shift will be slightly different.

    So at a deep level, every photon has a spooky "completely entangled" connection. Yet at the emergent quasi-classical level, the world is varying enough to wash away the effect of these entanglements. Although you can arrange your experiment to also stop the entanglements being washed away and - now tilting the statistical ensemble the other way - present an accumulation of photon events that have the spooky connected pattern.
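    Orzel's point about per-run phase shifts can be sketched numerically. The toy model below (my own illustration - all names and parameters are assumptions, not Orzel's code) accumulates single "photon" detections one event at a time from a cos-squared two-slit distribution, with a random per-event phase standing in for each run's slightly different thermal context. Small jitter leaves sharp fringes; large jitter washes the pattern toward uniformity, even though every run is still a one-event detection.

    ```python
    import math
    import random

    def detection_histogram(n_events, phase_jitter, bins=50, seed=1):
        """Accumulate single 'photon' detections one event at a time.

        Each event samples a screen position from a two-slit probability
        density ~ cos^2(4*pi*x + noise), where the Gaussian noise models
        the varying context (air density, vibration) of that single run."""
        rng = random.Random(seed)
        hist = [0] * bins
        for _ in range(n_events):
            while True:  # rejection-sample one detection position in [0, 1)
                x = rng.random()
                phase = rng.gauss(0.0, phase_jitter)  # per-event context shift
                if rng.random() < math.cos(4 * math.pi * x + phase) ** 2:
                    hist[int(x * bins)] += 1
                    break
        return hist

    def fringe_visibility(hist):
        """Standard (max - min) / (max + min) measure of fringe contrast."""
        hi, lo = max(hist), min(hist)
        return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

    # Near-identical contexts: the event-by-event accumulation shows fringes.
    stable = fringe_visibility(detection_histogram(20000, phase_jitter=0.05))
    # Strongly varying contexts: the same accumulation flattens out.
    noisy = fringe_visibility(detection_histogram(20000, phase_jitter=3.0))
    ```

    The contrast between the two visibilities is the sense in which a varying context "washes away" the interference, while a tightly controlled context lets the spooky pattern accumulate over many individually random events.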

    So the warring interpretations want to have it clean cut as being one or the other. Either there is one world spun of definite collapses, or many worlds spawned because of no-collapse. But a thermal realism says no world is perfect for any actual photon. It is always either relatively strongly entangled or relatively weakly entangled, depending on the amount of "perfect control" there is over the identicality of real world conditions.

    That is, the context itself is varying or fuzzy at all times. Only an impossibly perfect and regular context could "manufacture" the kind of pure spookiness that hard-line approaches to MWI would demand. The "world" is itself never certain enough to justify the ontic demands of the no-collapse camp, just as much as an actual collapse view yielding a single classical world is also out of the question.

    A parallel in thermodynamics might be the opposing notions of absolute thermal order that would be represented by the two possible minimum entropy organisations of a perfect gas. A highest state of order would be all the particles collected in the one corner of the jar - from where they would spread out randomly. But then the opposite perfect bound would be to start with every particle having an exact grid-like spacing - spread out as regularly as possible. Again, as soon as released, randomness would scramble that initial state very quickly (and much more quickly in fact than if the gas has to diffuse from one corner).
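    That contrast between the two "perfect" starting states can be illustrated with a toy random walk - a sketch only, using blindly diffusing particles on a ring of lattice sites rather than real molecular dynamics, with all names and parameters my own illustrative choices. A coarse-grained occupancy variance measures how far each start is from the equilibrated spread-out state.

    ```python
    import random

    L, CELLS = 100, 10  # ring of 100 sites, coarse-grained into 10 cells

    def coarse_variance(positions):
        """Variance of coarse-grained cell occupancy: near zero means the
        gas looks evenly spread, large values mean it is still clumped."""
        counts = [0] * CELLS
        for p in positions:
            counts[(p % L) * CELLS // L] += 1
        mean = sum(counts) / CELLS
        return sum((c - mean) ** 2 for c in counts) / CELLS

    rng = random.Random(0)

    def step(positions):
        """One +/-1 random-walk step for every particle (toy dynamics)."""
        return [(p + rng.choice((-1, 1))) % L for p in positions]

    corner = [i % 10 for i in range(100)]  # 100 particles packed in one cell
    grid = list(range(100))                # one per site: maximally regular

    for _ in range(5):  # release both gases for a few ticks of randomness
        corner = step(corner)
        grid = step(grid)
    ```

    After only a few steps the grid start is already statistically indistinguishable from equilibrium at the coarse grain - its regular spacing scrambles essentially immediately - while the corner start remains measurably clumped because it still has to diffuse across the whole volume.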

    So that is an example of how real thermodynamics is about equilibrium states - some thermal balance measured relative to two opposing perfect bounds. And with MWI, the collapse vs no-collapse positions on the quantum maths represent the single perfectly classical world and the unlimited perfectly entangled quantum world-lines; our own world is the messy actual reality that exists between those two impossible states of perfection.

    There is huge uncertainty/contextuality at the local particle event level. But also that context has an always present residual uncertainty itself.

    So as Orzel argues, we have to both accept spookiness as fundamental, but then not jump to treating it as itself something that has absolutely definite existence. Even the spookiness is relative to what emergently exists. The world in effect exists by suppressing the spookiness. It is not the spookiness that rules in a way that produces some unlimited number of actually branching world-lines, with their then fundamentally mysterious multiple "observers" experiencing different "collapses".

    Orzel again...

    Why do we talk about decoherence as if it produced “separate universes?” It’s really a matter of mathematical convenience. If you really wanted to be perverse, and keep track of absolutely everything, the proper description is a really huge wavefunction that includes pieces for both photon paths, and also pieces for all of the possible outcomes of all of the possible interactions for each piece of the photon wavefunction as it travels along the path. You’d run out of ink and paper pretty quickly if you tried to write all of that down.

    Since the end result is indistinguishable from a situation in which you have particles that took one of two definite paths, it’s much easier to think of it that way. And since those two paths no longer seem to exert any influence on one another– the probability is 50% for each detector, no matter what you do to the relative lengths– it’s as if those two possibilities exist in “separate universes,” with no communication between them.

    In reality, though, there are no separate universes. There’s a single wavefunction, in a superposition of many states, with the number of states involved increasing exponentially all the time. The sheer complexity of it prevents us from seeing the clean and obvious interference effects that are the signature of quantum behavior, but that’s really only a practical limitation.

    Questions of the form “At what point does such-and-so situation cause the creation of a new universe?” are thus really asking “At what point does such-and-so situation stop leading to detectable interference between branches of the wavefunction?” The answer is, pretty much, “Whenever the random phase shifts between those branches build up to the point where they’re large enough to obscure the interference.” Which is both kind of circular and highly dependent on the specifics of the situation in question, but it’s the best I can do.
  • A Theory about Everything
    I think it can be seen as triadic but it need not be seen so. The belief in a world beyond your experiences can be seen as ultimately the same belief as the belief that there is a self. Each (the world beyond your experience and the self “inside” your experience) is merely a different version of the Noumenon. The belief in a world beyond your experience is simultaneously the belief that your experience has the character of “I-ness” about itDominic Osborn

    That's still a triadic move. My point was that to speak about "pure experience" is already to have leapt from the monadic position of "just experiencing" to talking triadically about the I-ness of being a self having experience of a world.

    The belief in a self beyond your experience (or, as I suppose we all imagine it: the belief in a self inside experience or on this side of experience) is simultaneously the belief that your experience has the character of “world-ness” about it. (Apologies for these awkward expressions.) There are two versions of duality here, not three things.Dominic Osborn

    Yes, speaking about experience as itself a "thing" is to claim - triadically - that the experience has world-ness along with the I-ness. The whole point is that we are now thinking about experience in this meta-fashion where it is something distinctive - a state of mind, a field of qualia, a mental representation - that mediates between a witnessing self and a material world.

    So in reality we jump straight from one to three in talking about "just experience". It has this complex structure that involves both I-ness and world-ness as its basic division - and hence, potential relation.

    What I think I am saying is that Reality is Indeterminacy, Vagueness. Or, what I am saying, to put it another way, is: you can’t say anything about Reality. I then go on to say that all you can say is what Reality is not. So I then say, Reality is not many things, Reality is not one thing; Reality is not the Physical World; Reality is not the Mind; Reality is not this, Reality is not that, etc..Dominic Osborn

    I'm not sure on what basis you are claiming to say these things. It doesn't seem to be on the basis of either rational argument or probable evidence. It involves the awkward epistemic manoeuvre of first believing we are in a triadic modelling relation with reality - recognising qualia as a mediating level of sign - and then dropping the modelling part to then claim that the mediating signs might be all that exist.

    My approach is the consistent one. It accepts that we are in a modelling relation with reality and, from there, draws the practical conclusion that dreams of absolute knowledge are an epistemic pipedream. We can only hope to minimise our uncertainty in regard to the noumenal.

    So you want to doubt the world. I only want to doubt our knowledge of the world.

    I think being “lost in the flow of events or actions in unselfconscious fashion” is knowledge (of those events or actions). I don’t consider Knowledge and Being to be separate. I think your definition of Knowledge mirrors your (dualistic) conception of existence: an existence essentially consisting of a knower and a known, a self and its experience (with the possibility of a third thing too, the Noumenon). I think Knowledge is non-dual and Being is non-dual.Dominic Osborn

    If they are so non-dual, why do you call Knowledge and Being by different names? (Yes, I realise you will now call them two aspects of experience - and so we circle back to the necessarily triadic structure that betrays the discursive nature of idealism.)

    I think you, and Kant, and Peirce have swallowed an absurdity, an absurdity however that is so widely and deeply felt and held that it almost passed into the realm of fact.Dominic Osborn

    For an absurdity, it is unreasonably effective, wouldn't you say? Science is founded on it for a start.

    The positing of a Noumenon is an absurdity: something that exists but is not felt. If whether something is perceived or not has no bearing on whether or not it exists, why are there not spooks and pixies dancing on my desk here? The positing of the Noumenon is the conceiving of Ignorance. But the conceiving of two realms, the Known and the Unknown simply proposes Duality again. Why do you accept the notion of "Ignorance" uncritically?Dominic Osborn

    There might be spooks and pixies dancing on your desk. All you can know is that you have minimised your uncertainty about that to the extent that that seems possible.

    So the positing of the noumenal is simply the rational acceptance that perceptual experience has its limits. One shouldn't claim absolute knowledge about reality even if we seem to have a pretty damn useful handle on this thing we call reality.

    The conceiving of the Definite and the Possible simply proposes Duality again. Why do you accept the notion of "Possibility" uncritically?Dominic Osborn

    But my point was that all categorisation has to proceed dichotomously. You have to have an intelligible division into a this vs a that to play the game.

    So it is not dualism - a division lacking a bridge. It is a dichotomy - a division that is self-defining in that each half defines its "other".

    Knowledge is a lack of ignorance - a minimisation of uncertainty, as I say. And ignorance is the opposite - a maximisation of uncertainty or a lack of knowledge. Likewise, the definite is that which lacks indeterminacy, and vice versa.

    So it is hardly uncritical. A dichotomy is the definition of critical thinking - the sharp division that renders the world generally intelligible.

    It can’t be the case that there are two different things, an existence in which the world is real and an existence in which the world is idealistic illusion, but each looks the same to me.Dominic Osborn

    But that is the possibility which your idealism requires.

    My pragmatism can see the bent stick in the water as a straight stick that only looks bent because of the water. I accept that there is a phenomenal vs noumenal distinction which is a difference that can make a difference.

    But you are arguing that a stick that looks bent is always a bent stick. Appearances are all there are to reality.

    Either the two parts (Phenomenon and Noumenon) are in some way joined, in which case they are not really two after all, or they are not joined, in which case there must be a thing, nothingness, between them, which is at once an existing thing, and must be, in order to hold the two things apart, and also a non-existing thing because, were it to exist, it would join the two things up. But there cannot be a thing that both exists and does not exist.Dominic Osborn

    Again, my ontology is based on things needing to be first separated in order that they then can interact. So it is irreducibly triadic and doesn't fall into the paradoxes inherent in dualism.

    There can't be an interaction without a difference. Thus you need at least two different things. And furthermore, for such a state of being to persist (and thus be said to "exist"), the interaction must serve to maintain the difference that is the basis of the interaction. The interaction can't be a passing fluke. It must develop into a systematic habit.

    Triadic semiosis in a nutshell.
  • Metaphysics as Selection Procedure
    Matter to what?
    And to what degree have you enquired?
    Are you sure you can see through the mist?
    Punshhh

    Sorry, which of those questions is about pragmatism rather than being an expression of pragmatism?
  • Metaphysics as Selection Procedure
    Yes I see that but, we are blind to what we are in some sense.Punshhh

    But if that is so, then we are only dealing with a known unknown. And if my epistemology accepts that there can be unknown unknowns, then it is reacting to that very known unknown. It builds in the fact that we could be blind - and explains the degree to which it could then matter.

    I am concerned with other or unconventional ways of knowing and other means of seeing and witnessing and the development of wisdomPunshhh

    Fine. But you are not showing that they have a demonstrable advantage - except as a way to block open minded, publicly conducted, ontological inquiry.
  • Metaphysics as Selection Procedure
    Nothing much one could say to that gibberish.
  • Why are superhero movies so 'American'?
    Superheroes spell out the American Dream - transcendent individualism. So it is the logical endpoint of the romantic mythologising of the "self-made self".

    The contrast with, say, Japanese culture and the films of Miyazaki, is pretty stark.

    Another strikingly US trope is the smartarse kid. There is always the child that says the cute and clever things to show up the adult characters. Again it is obvious how this plays up something core to US values.

    The adult superhero is the guy with the concealed physical powers. The child hero is the kid with the exhibited social dominance. Both spell out the same message. The individual can always transcend the mundane constraints of the collective norm. One can aspire not merely to succeed, but to exceed.

    And from there, the failure to exceed becomes the new failure.
  • Metaphysics as Selection Procedure
    How is the actual experience experienced by the experiencer "framed"?schopenhauer1

    The clue is in the fact you have to mention the experiencer.
  • Metaphysics as Selection Procedure
    What this boils down to is we don't know if we are actually doing metaphysics, or just playing at it.Punshhh

    You are just repeating what I've already dealt with. Of course beyond the known knowns, and the known unknowns, there could be the unknown unknowns. Pragmatism takes that for granted.

    But the point then is twofold.

    First, if there really are unknown unknowns, they still remain open to being discovered if they make a difference.

    Then second, they would have to be unknown unknowns about which we could care. Pragmatism is also about truth in terms of the purposes that can define being a self, being an observer. So it is a Janus-faced epistemology in defining both observer and observables in a fully consistent fashion.

    Thus there could be differences that don't make a difference - to us. In fact, to now switch to the ontological view relevant to the OP and its confusions about selections and hinges, the world is presumed to be full of potential difference. Variety begins unconstrained - the definition of vague. And then "self-interested" constraints or habits develop to regulate variety, turning it into a crisp contrast between signal and noise, meaning and irrelevance.

    So again, pragmatism has no interest in denying the unlimited possibilities of difference. And that is because it speaks to the regulatory possibility that is the separation of that kind of vague potential into differences that make a difference, and the differences that don't.
  • Metaphysics as Selection Procedure
    Yeah, but as soon as your private experience is framed by yourself as an argument, it is social, even if never in fact articulated publicly. So to be mapped is already crossing the line that is the epistemic cut upon which human introspective "self consciousness" is constructed. It invokes the "self" as the interpreter of a sign, the sign being now the observable, the claimed phenomenon.

    You seem to imagine that naive experiencing of experiences is possible. But to talk about the self that stands apart from his/her experiences is already to invoke a pragmatist's sign relation.
  • Metaphysics as Selection Procedure
    Why would I dispute the very problem pragmatism sets out to resolve?
  • Metaphysics as Selection Procedure
    Perhaps you don't understand what it means when I say I am defending a pragmatist epistemology? If you believe instead in private revelation, go for it.
  • Metaphysics as Selection Procedure
    I think this notion that if there are no counterfactuals, it has no value or useful understanding is skipping over a large amount of phenomena.schopenhauer1

    It is illogical to claim that there could be phenomena that aren't distinct and therefore counterfactual, in the sense that, given different conditions yet to be discovered, they wouldn't be there.
  • Metaphysics as Selection Procedure
    You forget that I am arguing the pragmatist view and so Occam's razor applies. You can pretend to worry about invisible powers that rule existence in ways that make no difference all you like. You are welcome to your scepticism and all its inconsistencies. But as I say, if whatever secret machinery you posit makes no difference, then who could care?
  • Metaphysics as Selection Procedure
    Like Von Neumann's measuring tools, the model is both map and territory. But it's kind of this unstable thing, right? like it's both - but it can't be both at the same time.csalisbury

    I don't understand your objection. The model describes a territory that is itself being viewed as a modelling relation. Seems simple enough.

    that recursive explosion - where one would need a new tool, M', to measure M+S, and so forth - requires an indefinite expanse which would allow one to keep 'zooming-out'.csalisbury

    But that is the argument for the epistemic cut or semiotic sign relation. It is because the measurement function - the observer - can't be understood as "just physics" (because recursion ensues) that the observer/measurement has to be understood in terms of a symbolic level of action.

    So the passage you cite identifies the fundamental problem of physicalist explanation. And that homuncular regress is what semiosis fixes.

    well, yes, that which constrains has to be atemporal, but it's a weird kind of atemporality isn't it? It's out of time, yet of time - precipitated from temporal dynamic material processes (tho always implicit within them), yet able to turn around, as it were, and regulate them.csalisbury

    Again, there seems no problem at all. That is how a memory functions. You have all these regulative habits you've learnt - like perhaps the rules of cribbage. Then along comes a cribbage playing situation and all your dormant skill gets a chance to do its thing.

    But a model qua TOE isn't merely constraining and controlling a local set of dynamic processes - it envelops everything - both the dynamic processes and the atemporal. It is somehow outside of the dialectic, touching the absolute**, and invites the very idea of the transcendent mind you rightfully decry. It's a fixed thing - a holy trinity of sorts - which explains the fixity/nonfixity/relation-between-the-two which characterizes everything.csalisbury

    A TOE would be maximally general. And it would then encompass all the more constrained physical models.

    A model of quantum gravity unifies quantum field theory and general relativity. General relativity unifies special relativity and Newtonian gravity. So physics already is organised in this nested hierarchical fashion.

    And it is definitional of a TOE that spacetime becomes an emergent feature, not a fundamental ingredient. That is the point.

    So being "outside" of time, and space, and matter, are all desirable properties.

    And that in turn is the argument for pansemiosis. The fundamental problems of physics can't be fixed with just "more physics". That risks the recursion that can only be "solved" by the appeal to mystic transcendent causes.

    And so the trick that worked for human self consciousness and biological autonomy - semiosis/the epistemic cut - would be the way to fix physics as well.

    Physics is at an impasse with quantum theory because it cannot offer a formal model of the observer that collapses the wavefunction. And semiotics is precisely that - a formal model of observers.
  • Metaphysics as Selection Procedure
    Has the symmetry always existed?darthbarracuda

    Silly question. You already know that my position isn't tied to a mechanical notion of time.
  • Metaphysics as Selection Procedure
    But you are ignoring the known distinction between dimensionless and dimensionful physical constants.

    So at the singularity, we are talking about the Planck triad of constants. And these are pure ratios that thus encode a naked reciprocal or dichotomous relation.

    The singularity is thus not singular. It is not a dimensionless point. The very first moment of existence is already divided, even if that division is yet to be expressed. The symmetry is broken, even if it hasn't yet moved off from its own absolute symmetry.

    So the basic dichotomy the Planck triad captures is that between spacetime extent and quantum action - the size of the container versus the density or temperature of its contents. At the Big Bang, these two aspects of physical being were "symmetric" - both at their material limit in terms of compactness. And the difference was pure, not something that could be measured in terms of some numbers that would speak of transcendent reference frames.

    The stuff Rees was talking about was mostly the after-the-fact constants that emerged against the fundamental Planckian backdrop. And the hope of modern physics is to account for these too as pure mathematical constants that reflect further structurally inevitable symmetry breakings.
  • Metaphysics as Selection Procedure
    Is it a brute fact that the third category of vagueness is the land of no brute fact?darthbarracuda

    Don't be silly. It is a deduced fact. Just like the complementary notion of there being instead a first cause. Both derive from the axiom of sufficient reason. One just puts reasons in the past, the other puts them in the future.

    So the mechanical view says there is some thing that exists. Therefore it must have a further thing sufficient to cause its existence.

    My organicism instead acts from the observation that existence is always dichotomised. Therefore there must be a prior state in which such dichotomisation is dissolved. Symmetry breaking implies the symmetry that got broken. And so, following that logic through, we arrive at the ontic definition of a vagueness or Apeiron.
  • Metaphysics as Selection Procedure
    In actual fact, speculation about a purported first cause is still alive and well in the form of arguments about the fine-tuned universe.Wayfarer

    Yep. The cosmology that captures the public's attention is exactly that which taps straight into the mechanical thinking that has become endemic via technology in modern society.

    So of course multiverses, string landscapes, eternal recurrence, and so forth are what everyone talks about. It seems like the sort of thing science ought to be saying. Existence is utterly contingent. Structure can only be a cosmic accident.

    So all you are pointing out is how far my organicism is from the populist mainstream that would find it easier to believe we all exist inside a Matrix simulation.
  • Metaphysics as Selection Procedure
    ↪apokrisis But surely if something must be stopped, it must have begun before. Unless it is just a brute fact that something is the case, which sounds suspiciously like adarthbarracuda

    Hence the third category of vagueness - the land of no brute fact which can give rise to the yin and yang of mutually co-arising brute facts such as stasis and change.