• Donald Hoffman and Conscious Realism
    Was there some actual question there? I couldn't get anything definite. Perhaps you just wish to assert dualism but are unable to muster the appropriate argument?
  • How a Ball Breaks a Window
    Does the ball still break the glass if no one is around to observe? (I am trying to imagine things as you are describing them.)
  • Donald Hoffman and Conscious Realism
    On the one hand, Hoffman is just making a standard semiotic or modelling argument. We understand the world through a system of signs. But on the other hand, he seems to miss the crucial thing - the epistemic cut - where signs are an actual act of mediation in which a material reality becomes an informational interpretation.

    So he makes a big thing about reality having no objective features. And that is where he starts to sound like an idealist, not an indirect realist or pragmatist.

    However the semiotic view says there is a real world out there of matter and energy. It is objective, and indeed utterly recalcitrant, in its existence. Then the epistemic cut says there follows an act of translation. With our sensory receptors and habits of perception, patterns of physical energy are turned into informational activity - the signs of our qualitative experience.

    Consider an interaction without this translation. Shine some red light on a dead sheep eyeball. All that will happen is that the dead flesh might start to heat up after a while. Energy can make an energetic change and that is as far as it goes. There is no hue to this interaction as such. Saying the light is "red" is a meaningless claim from the point of view of the physics. Red just isn't an objective property of reality while we are talking of it as material being. Whereas light being able to heat up the eyeball is a recalcitrant fact of nature. It just happens.

    By contrast, when light falls on a live eyeball, we don't experience the heating but instead the construction of some representational pattern of signs. The visual field is divided into hues, like green and red, that "stand for" some neural circuit judgement about relative wavelength frequency information. Green and red are only fractionally different in wavelength as a physical fact, but are experienced as ontically the exact opposite of each other. It is impossible (in any normal way) for red and green to be found together at the same point of experience. Our circuitry is designed so that their informational state is signalling one or the other in a logically mutually exclusive fashion.
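    As a toy sketch of that mutually exclusive signalling - the cone responses and threshold here are invented for illustration, not real neurobiology - the judgement could be caricatured like this:

    ```python
    # A toy "opponent channel" decision: the circuit signals red or green,
    # never both. The cone response values are invented for illustration.
    def red_green_signal(l_cone: float, m_cone: float) -> str:
        """One dichotomous judgement: which way does the difference fall?"""
        return "red" if l_cone - m_cone > 0 else "green"

    print(red_green_signal(0.8, 0.3))  # red
    print(red_green_signal(0.2, 0.7))  # green
    ```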

    So the semiotic view is that what we construct is an interpretation of reality where our own biological interests are part of the information that shapes the sign. Evolution doesn't want the sign to be "realistic" in that it is some pure token of the material world - like a reading of a scientist's light meter that wants to give an objective reading of an energy level or wavelength. Evolution wants the sign itself to be a sharply dichotomous judgement.

    The receptors have to make a simple decision - say green or say red. Break the complexity of physical energy relations into simple pixels of yes/no acts of discriminative judgement. Then that epistemic cut means we can get on with building up our own fully self interested model of the world.

    We have already made the first necessary act of interpretance to separate our interests from the material constraints the world seeks to impose on our physical being. In our little neural cocoon of self interested world modelling, we can then construct a whole realm of plans and ideas founded on our system of signs.

    The computer interface analogy Hoffman offers does get at this epistemic cut. Our interests are served by seeing an icon on the screen we can click. We don't want to have to care about all the physical complexity in terms of the hardware actions that a little picture might represent. So when we see a symbol that looks like a floppy disk, what we see is our own desire to save a file. The sign appears to be directly representative of the material world, yet really it is a fragmentary reflection of our own internal realm of felt intentionality. It is a little bit of us.

    The trick is to see how the same is true of all phenomenology, like our experience of hues such as red and green. They are shards of self interested judgement hardwired down at the neurobiological level. Energy and matter are exactly what get left at the doors of perception. Consciousness starts with a logical transformation, an epistemic cut, where a digital decision has been made and now we can talk of a selfish realm of signs.
  • How a Ball Breaks a Window
    Remember that the maths was developed to deal with idealised point objects. So the Zeno-style paradox of jumping to the first next point to get moving is an artefact of that maths.
  • How a Ball Breaks a Window
    Anyhow it is just a bunch of crackpottery, so why bother.Rich

    So you don't even believe in it yourself enough to try to defend it?
  • How a Ball Breaks a Window
    So the answer is no?
  • How a Ball Breaks a Window
    My take would be this based upon the universe being a holographic field in nature.Rich

    What is a holographic energy field? The only reference I can find is to crackpottery - http://ambafrance-do.org/spirituality/24334.php
  • Explaining probabilities in quantum mechanics
    The universe has 1 trillion years to live.Rich
    Don't you mean that the Heat Death is eternal? That's quite a surprising conclusion if you think about it.
  • Is linear time just a mental illusion?
    The muon can be said to be at rest and thus there is no effect on its decay.Rich

    So you can subtract away all acceleration to arrive at an inertial frame. But after constraining second derivative motion to get first degree motion, how do you then arrive at actual zeroth degree motion - this "proper rest" you want to talk about?

    This is why the reciprocity is between the second and first degree derivatives of motion, not between some absolute frame with matchingly absolute resting coordinates.

    To put any rate on a muon's decay, some reference frame must be established as your chosen coordinate basis. Conventionally one can make that the global cosmic backdrop. That seems safe enough for SR purposes. A muon's decay could then be measured against that as its inertial frame. A slow muon could be compared to a fast muon from some general cosmic point of view.

    But to claim baldly that a muon has some proper spacetime coordinate all of its own - a zeroth derivative - shows you haven't really thought this relativistic measurement business through.

    You know, you are, like good scientist, suppose to say that STR does not apply to accelerating systems...Rich

    You seem really confused. Perhaps you don't get the consequences of SR adopting a Minkowski geometry framework where spacetime is united as four dimensions? That means we can map one inertial frame onto another via a Lorentz transformation. Length contraction and time dilation - the reciprocality - get handled automatically by now being built into the mathematical structure. The constant c - lightspeed - is now the scaling factor. So you get that inverse relation, c/1 vs 1/c, hardwired in as a new universal constraint.

    Rest mass arises as the possibility of going slower than c - so slow as to be "at rest". And a muon at rest experiences the most time - no dilation - so its decay comes as fast as it possibly can. A muon travelling at near the speed of light has a clock that ticks slower, so its decay is stretched out. All this of course being the view once we have fixed our frame of reference so that one muon is not moving relative to us while the other is moving at near lightspeed.
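    For concreteness, here is a minimal numerical sketch of that dilation. The muon's proper lifetime of about 2.2 microseconds is the textbook figure; the speeds are picked purely for illustration:

    ```python
    import math

    # Time dilation of a muon's decay clock, as seen from the lab frame
    c = 299_792_458.0   # lightspeed, m/s
    tau = 2.2e-6        # s, mean lifetime in the muon's rest frame

    for v in (0.0, 0.9995 * c):
        gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
        print(f"v = {v / c:.4f}c: lab-frame lifetime = {gamma * tau * 1e6:.1f} microseconds")
    ```

    At rest you get the bare 2.2 microseconds; at 0.9995c the gamma factor of about 32 stretches the decay out to around 70 microseconds.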

    Which nicely brings me back to my first reply to the OP.
  • Explaining probabilities in quantum mechanics
    You are very flattering. But it's just standard cosmology. You can read all about it yourself.
  • How a Ball Breaks a Window
    I can take a photo of the baseball sitting on the glass and one of the baseball at the exact instant it contacts the glass and the photos are identical, and yet one will impart energy sufficient to break the glass and the other will not.MikeL

    But in one, the glass is not bending, nor the ball flattening. In the other, the snapshot looks completely different. The electrostatic bonds holding the glass atoms together are being visibly stretched towards the point where they could break. Even the ball is being tested on that score. It could have been the one to shatter instead.
  • Explaining probabilities in quantum mechanics
    Most people probably can follow simple math. If the Universe had a temperature of 10^32 degrees at the Big Bang, and the Heat Death is defined by it being asymptotically close to 0 degrees, then it being currently 2.7 degrees tells us what?

    Is it: A) We are pretty much at the end of the journey. Yes siree, 32 orders of magnitude is quite a big drop. We are not even talking nanoseconds to midnight (nano being merely 9 orders of magnitude). The arithmetic check is below.

    Or: B) Bibble, bibble, bibble. Blub, blub, blub....
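    The arithmetic check for option A, using the figures above:

    ```python
    import math

    # Orders of magnitude of cooling so far, from the figures in the post
    T_big_bang = 1e32   # degrees at the Big Bang
    T_now = 2.7         # degrees today (the CMB)

    print(f"{math.log10(T_big_bang / T_now):.1f} orders of magnitude")  # ~31.6
    ```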
  • Is linear time just a mental illusion?
    The humans are the ones accelerating. Muon at rest.Rich

    So you support your position of there being no preferred frame by stating your preference for a frame? You support the reciprocality principle by denying it applies between two frames?

    Wow. Your grades at school must have been spectacular. You can't seem to make a single point that isn't a self-contradiction.
  • Explaining probabilities in quantum mechanics
    You are like someone plunging off a skyscraper, now inches from the ground, shouting out: I'm not dead yet, you don't know what you're talking about, my future has not been foretold, there are no grounds to predict my imminent demise.
  • Explaining probabilities in quantum mechanics
    I am sure many members are awaiting breathlessly for the v final verdict on what will happen billions and billions of years from now as science refines it's calculations.Rich

    Just measure the cosmic background radiation. It's 2.7 degrees above absolute zero. The average energy density is down to a handful of protons per cubic metre.

    Again you reveal the vastness of your ignorance of routine scientific facts. The Heat Death is a done deal even if you might also say the clock has another tick or two to actually reach midnight.
  • Boys Playing Tag
    I'm just confused about whether you're telling me to quit taking that second mechanism-seeking step, or whether it's just that you're talking metaphysics and I'm usually not.Srap Tasmaner

    Sure. I get that you want to get going with real world modelling. That is where correlations between variables start to mess up attempts to model in terms of assuming independent variables.

    But my response to that is you have to start with the clean basic models. You have to have a (metaphysically general) foundation which sorts out what you even mean by independent or random. And as I say, it is a huge thing to discover that the statistical world is larger than just the central limit theorem. It is indeed really huge to realise that powerlaw statistics is the more general natural case (as being a system with the fewest actual constraints).

    So you have to establish the baseline that legitimates any modelling. And then you can start building back in the kind of sophistication that starts to deal rigorously with messy domains with possible internal correlations you might want to talk about.

    Systems with correlations or coordination dynamics have been a big deal for statistical mechanics for a good 50 years. That is what phase transition models are all about. Remember your interest in the logistic function or S-curve - the reason why transitions can be sudden as global correlations suddenly kick in? Rene Thom's Catastrophe Theory? Spontaneous symmetry breaking? Autocatalytic networks? Ising models? There's a thousand variations of statistical mechanical models that start with a clean baseline of "no interactions", and then find ways to model the realistic emergence of those interactions or collective behaviour.

    Take the Ising model - the story of a bar magnet. When it is hot, all the iron atoms jiggle and don't line up. Their magnetic fields point every which way in an effectively non-interacting fashion. But cool the metal and it hits a point where the thermal jiggling gets suddenly overtaken by the potential local attractions. Correlation goes from zero to infinite in a flash. Voila. The bar has a fixed global magnetic field in which all individual variety is completely constrained.
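    A minimal Metropolis simulation of the 2D Ising model shows this flip from disorder to global order. This is an illustrative sketch only - the lattice size, temperatures and sweep counts are arbitrary choices:

    ```python
    import numpy as np

    # 2D Ising model with single-spin-flip Metropolis dynamics.
    # The critical temperature is about 2.27 (in units of J/k_B).
    rng = np.random.default_rng(0)
    N = 32      # lattice side length
    J = 1.0     # ferromagnetic coupling strength

    def sweep(spins, T):
        """One Metropolis sweep: attempt N*N single-spin flips."""
        for _ in range(N * N):
            i, j = rng.integers(N, size=2)
            # Energy change if spin (i, j) flips, from its four neighbours
            nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j] +
                  spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
            dE = 2 * J * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    for T in (5.0, 2.0):    # hot (above Tc) vs cold (below Tc)
        spins = rng.choice([-1, 1], size=(N, N))
        for _ in range(200):
            sweep(spins, T)
        # Magnetisation: near 0 when hot, typically large when cold
        print(f"T = {T}: |magnetisation| = {abs(spins.mean()):.2f}")
    ```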

    It's the usual story. Modelling has to break the world apart to put it back together. You have to work out the baseline simplicity before you can hope to model the real world complexity.

    So it is not about quitting your second mechanism step. My point is that statistical mechanics - at its fundamental level - has only been able to move forward with a new era of thermodynamically-inspired models (ones that deal with coordination or constraint-making as itself emergent within a system) by realising that Gaussian statistics are the special case, not the general one.

    For me, I agree, this has metaphysical import. I like to think about what it means for existence itself.

    You might be just interested in baseball statistics or whatever Nate Silver has in mind as some particular domain. And fair enough.
  • Is linear time just a mental illusion?
    No preferred frame of reference.Rich

    Ho, ho! Trying to slip out from under the rubble of the wreck of your own argument.

    Reciprocality says there is a "preferred" and absolute connection between two inertial frames. So if it looks like acceleration going from frame A to frame B, it looks like a matching deceleration going in the other direction.

    You do understand the difference between inertia and acceleration? You got an A for that back at school?
  • Explaining probabilities in quantum mechanics
    Unless it collapses back into another singularity, and then expands again. Guess we'll have to wait and see ;-)Wayfarer

    No we bloody don't. Dark energy is a fact. The Heat Death is gonna happen.

    Of course we now have to account for dark energy. And again - in my view - decoherence is the best hope of that. Because quantum level uncertainty can only be constrained, not eliminated, the fabric of spacetime is going to have a built-in negative pressure. It is going to have a zero-point energy that causes quantum-scale "creep".

    Unfortunately we don't know enough particle physics to do an exact calculation of this "creep". We can't sum all the contributions in an accurate way to see if they match the dark energy observations. And the naive level calculation - where things either all sum or all cancel - produces the ridiculous answer that the dark energy value should be either zero or Planck-scale "infinite". An error of some 120 orders of magnitude, and so another of your often cited "crises of modern physics".
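    The scale of that mismatch is a back-of-envelope calculation. The numbers below are the usual rough textbook values, not a real QFT summation:

    ```python
    import math

    # Naive Planck-scale vacuum energy density vs the observed dark energy
    hbar = 1.054571817e-34   # J s
    G    = 6.67430e-11       # m^3 kg^-1 s^-2
    c    = 2.99792458e8      # m/s

    planck_energy_density = c**7 / (hbar * G**2)   # ~ 5e113 J/m^3
    observed_dark_energy  = 6e-10                  # J/m^3, roughly

    orders = math.log10(planck_energy_density / observed_dark_energy)
    print(f"discrepancy is about 10^{orders:.0f}")  # ~ 10^123
    ```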

    Other calculations going beyond the most naive have got closer to the observed value. But admittedly, they have still not come nearly close enough.

    But at least, as a mechanism, it could be bloody wrong. ;)
  • Explaining probabilities in quantum mechanics
    He, like myself, can't accept the idea of 'parallel universes', but the point I'm trying to make is that it is an inevitable consequence of Everett's 'relative state formulation', like it or not. So, let's move on.Wayfarer

    I reject parallel worlds and parallel minds because immanence has to be more reasonable than transcendence when it comes to metaphysics.

    An immanent explanation could at least be wrong. A transcendent explanation is always "not even wrong" because it posits no actual causal mechanism. It just sticks a warning sign at the edge of the map saying "here be dragons".

    And the Everett formulation is just an interpretation - a metaphysical heuristic. It itself becomes subject to various metaphysical interpretations as I just described. You can get literal and concrete. Or you can take a vaguer approach where the worlds and branches are possibilities, not really actualities. Or you can go the whole hog and just accept that the foundation of being is ontically vague and so any counterfactual definiteness is an emergent property.

    The real advance of "MWI" is the uniting of the maths of quantum mechanics with the maths of thermodynamical constraints - the decoherence formalism.

    This is a genuine step in the development of quantum theory. And it has sparked its own wave of interpretative understanding - even if ardent MWIers claim to own decoherence as their own thing.
  • Explaining probabilities in quantum mechanics
    To be fair to Deutsch, he wrote that book back in the 1990s. Many people got carried away and were taking the most literal metaphysical view of the newly derived thermal decoherence modification to quantum formalism.

    Now most have moved to a more nuanced take of talking about many world-line branches. But my criticism is that this simply mumbles the same extravagant metaphysics rather than blurting it out loud. Many minds is as bad as many worlds.

    On the other hand, listen closely enough to MWI proponents, and they now also start to put "branches" in quotes as well as "worlds". It all starts to become a fuzzy ensemble of possibilities that exist outside of time, space and even energy (as preserved conservation symmetries). The MWIers like Wallace start to emphasise the decision making inherent in the very notion of making a measurement. In other words - in accepting metaphysical vagueness and the role that "questioning" plays in dissipating that foundational uncertainty - MWI is back to just the kind of interpretative approach I have advocated.

    There is now only the one universe emerging from the one action - the dissipation of uncertainty that comes from this universe being able to ask ever more precise questions of itself.

    In ordinary language - classical physics - we would say the Universe is cooling/expanding. It began as a fireball of hot maximal uncertainty. As vague and formless as heck. Then it started to develop the highly structured character we know and love. It sorted itself into various forces and particles. Eventually it will be completely definite and "existent" as it becomes maximally certain - a state of eternal heat death.

    Only about three degrees of cooling/expanding to get to that absolute limit of concrete definiteness now. You will then be able to count as many worlds as you like as every one of them will look exactly the same. (Or rather you won't, as individual acts of measurement will no longer be distinguishable as belonging to any particular time or location.)
  • Is linear time just a mental illusion?
    Remember: all frames of reference are reciprocal.Rich

    So they have an inverse relationship, and yet are the same? Did 1/x = x/1 when you went to school, getting your mighty fine grades while apparently parroting nothing and questioning everything?

    I think you need to add reciprocals as another basic principle you completely fail to understand.
  • Boys Playing Tag
    Frank - https://stevefrank.org/reprints-pdf/09JEBmaxent.pdf

    I don't get your objection. If you observe powerlaw statistics, then that is when you should suspect this free orthogonality to be at work. And the very fact we are so accustomed to mapping the world like this - with an x and y axis which models reality in terms of two variables - should tell you a lot.

    And as I say, the alternative is that the correct interpretation might be that it ought to be a Gaussian plot - normal/normal axes rather than log/log. You can of course then have log/normal distributions as a mixed outcome.

    So it is curve fitting. You have some bunch of dots that mark the location of an observable in terms of two orthogonal or independent variables - whatever labels make sense for your x and y axis. Then either you can draw a good straight line through the middle of them when the relationship is fixed and linear using normal/normal scaling (just counting up 1, 2, 3... on both axes), or you find that the relationship is a flat line plot only when you use axes that count up in orders of magnitude (so 1, 10, 100...).

    Both Gaussian and powerlaw distributions presume two independent variables. The question then is whether the axes need linear or exponential counting when it comes to making the resulting equilibrium balance a simple flat line.
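    As a sketch of that curve-fitting logic, with entirely synthetic data that has a built-in powerlaw relationship:

    ```python
    import numpy as np

    # Synthetic power law y ~ x^-1.5 with a little multiplicative noise
    rng = np.random.default_rng(1)
    x = np.linspace(1, 1000, 200)
    y = 5.0 * x ** -1.5 * rng.lognormal(0, 0.1, x.size)

    # Fit a straight line to log10(x) vs log10(y): the relationship is a
    # flat line on log/log axes, and the slope recovers the exponent
    slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)
    print(f"log/log slope is about {slope:.2f}")   # ~ -1.5
    ```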

    Back in the real world, yes one might need extra knowledge about the domain as more complicated stuff will usually be going on. Your independent variables might in fact not be so independent.

    But my point - and Frank's point - is that domain issues wash out at the grand metaphysically general scale.

    This is the truth we can derive from the maths of hierarchy theory - Stan Salthe, whom I have already mentioned. At a large enough scale of observation, the local and global bounds of a system are far enough apart that any coordination - any deterministic connections - becomes so fine-grained as to drop out of the picture.

    It is the law of large numbers. Eventually local differences cease to matter as you zoom out far enough. All that domain detail becomes just a blur of statistical sameness. You can now see a system in terms of its general characteristics - like whether it is closed and single scale Gaussian, or open and multi scale fractal or powerlaw.
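    A quick simulation makes the blurring concrete. Sum a few hundred quite different variables - all the parameters here are invented - and the local detail washes out towards a featureless bell curve:

    ```python
    import numpy as np

    # Sum many heterogeneous variables (uniform, exponential, binary -
    # plenty of local "domain detail") and check the total for Gaussianity.
    rng = np.random.default_rng(3)
    n_vars, n_samples = 300, 20_000

    totals = np.zeros(n_samples)
    for i in range(n_vars):
        if i % 3 == 0:
            totals += rng.uniform(0, 1, n_samples)
        elif i % 3 == 1:
            totals += rng.exponential(0.5, n_samples)  # skewed on its own
        else:
            totals += rng.integers(0, 2, n_samples)    # a coin flip

    z = (totals - totals.mean()) / totals.std()
    print(f"skewness of the sum is about {(z ** 3).mean():.3f}")  # ~ 0
    ```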
  • Explaining probabilities in quantum mechanics
    They are being metaphysically extravagant in a way the mathematics of decoherence doesn't require.

    To build an actual quantum computer will require a lot of technical ingenuity to sort out practical problems like keeping the circuits in a cold enough, and isolated enough, condition for states of entanglement to be controllable.

    Do you think those basic engineering problems - which may be insurmountable if we want to scale up a circuit design in any reasonable fashion - are going to be helped by a metaphysical claim about the existence of many worlds?

    Unless MWI is also new science, new formalism, it is irrelevant to what is another engineering application of good old quantum mechanics.
  • Explaining probabilities in quantum mechanics
    Many worlds is used by many to avoid the physical reality of wavefunction collapse or an actual epistemic cut. Or rather, to argue that rather than local variable collapse, there is branching that creates complementary global worlds.

    So as maths, many worlds is fine. It has to be as it is just ordinary quantum formalism with the addition of thermodynamical constraint - exactly the decoherent informational view I advocate.

    But it gets squirmy when the interpretation tries to speak about the metaphysics. If people start thinking of literal new worlds arising, that's crazy.

    If they say they only mean branching world lines, that usually turns out to mean they want to have their metaphysical cake and eat it. There is intellectual dishonesty because now we do have the observer being split across the world lines in ways that beg the question of how this can be metaphysically real. The observer is turned back into a mystic being that gets freely multiplied.

    So I prefer decoherence thinking that keeps observers and observables together in the one universe. The epistemic cut itself is a real thing happening and not something that gets pushed out of sight via the free creation of other parallel worlds or other parallel observers.
  • Explaining probabilities in quantum mechanics
    Modest or radical? The Copenhagen Interpretation is metaphysically radical in paving the ground to acknowledge that there must be an epistemic cut in nature.

    The "modest" understanding of that has been the good old dualistic story that it is all in the individual mind of a human observer. All we can say is what we personally experience. Which then leads to folk thinking that consciousness is what must cause wavefunction collapse. So epistemic modesty quickly becomes transcendental confusion. We have the divorce in nature which is two worlds - mental and physical - in completely mysterious interaction.

    I, of course, am taking the other holistic and semiotic tack. The epistemic cut is now made a fundamental feature of nature itself. We have the two worlds of the it and the bit. Matter and information. Or local degrees of freedom and global states of constraint.

    So CI, in recognising information complementarity, can go three ways.

    The actually modest version is simple scientific instrumentalism. We just don't attempt to go further with the metaphysics. (But then that is also giving up hope on improving on the science.)

    Then CI became popular as a confirmation of hard dualism. The mind created reality by its observation.

    But the third route is the scientific one which various information theoretic and thermodynamically inspired interpretations are working towards. The Universe is a system that comes to definitely exist by dissipating its own uncertainty. It is a self constraining system with emergent global order. A sum over histories that takes time and space to develop into its most concrete condition.
  • Boys Playing Tag
    Models simplify. They shed information. A plot of a frequency distribution is a snapshot of the state of affairs that has developed over time. It is not a plot of how that state of affairs developed. That is the bit that the plot treats as accidental or contingent - information that is ignorable and can be shed.

    So it seems you want to track the deterministic train of particular events in the minecraft scenario. One mod was really good. A lot of others just stank. The outcome over time would be explained in terms of the individual merit of each mod.

    But that defeats the point of a probabilistic analysis. The surprise - to the determinist - is that all that locally true stuff is still irrelevant in the largest view where we are inquiring into the fundamental set up of the system. The same global patterns emerge across all kinds of systems for the same general global reasons. The deterministic detail is irrelevant as it doesn't make a difference. What creates the pattern is the simple thing of two free actions orthogonally aligned.

    Again, read Frank. The Gaussian~powerlaw dichotomy works just the same whether we are talking about a host of independent deterministic variables, or random variables. If you step back far enough - if the ensemble size is a large number - then what seems like an essential metaphysical distinction (random vs determined) becomes just a statistical blur. Now we are just talking about the nature of the global constraints. Are they the set up for a closed Gaussian single scale system, or an open powerlaw multi-scale system?

    Local determinism - like some objective judgement that a mod either stinks or works, hence the "feedback" that determines its popularity - just drops out of the picture. It makes no difference to the answer. What we are now talking about is the limits on indeterminism itself. Randomness at a global systems level turns out itself to be constrained to fall between the two bounds described by normal and powerlaw distributions.

    Folk like to talk about chaos. But chaos turned out to be just powerlaw behaviour. Mess or entropy has its upper limit.

    Which then leads to the next question of what lies beyond messy? Again, back to Peirce, the answer becomes vagueness or firstness or Apeiron. Or rather, vagueness is the ground from which maximum mess and maximum order co-arise as the dichotomisation of an ultimate unformed potential.
  • Explaining probabilities in quantum mechanics
    If that can be explained in a causal framework then it restores the idea that probability reflects a lack of knowledge about the world, it's not fundamental.Andrew M

    I like the quantum information approach where the view is that uncertainty is irreducible because you can't ask two orthogonal questions of reality at once. Location and momentum are opposite kinds of questions and so an observer can only determine one value in any particular act of measurement.

    So this view accepts reality is fundamentally indeterministic. But also that acts of measurement are physically real constraints on that indeterminism. Collapse of the wavefunction happens. It is only that you can't collapse both poles of a complementary/orthogonal pair of variables at once. Maximising certainty about one maximises uncertainty about the other, in good Heisenberg fashion.
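    For reference, the standard Heisenberg statement of that trade-off - squeeze one spread and the conjugate spread must grow:

    $$\Delta x \, \Delta p \ge \frac{\hbar}{2}$$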

    What this interpretation brings out sharply is the contextual or holistic nature of quantum reality. And the role played by complementarity. Eventually you are forced to a logical fork when trying to eliminate measurement uncertainty. You can't physically pin down two completely opposite quantities in a single act of constraint.

    Grab the snake by the head, or by the tail. The choice is yours. But pin one down and the other now squirms with maximum freedom and unpredictability.

    Of course, when talking of observers and measurements, we then have to grant that it is the Universe itself which is doing this - exerting the constraints that fix a history of events. So it becomes a thermal decoherence interpretation with the second law of thermodynamics being the party interested in reducing the information entropy of the Universe as a physical system.
  • What is motivation?
    It is an assertion of immanence and a rejection of transcendence.

    The self is the system as a whole. And it is a whole in that all four causes evolve via mutual interaction. They arise within the system itself. Top-down constraints shape the bottom-up degrees of freedom. And those bottom-up degrees of freedom in turn construct - or rather reconstruct - those prevailing global states of constraint.

    Holism is pretty simple once you get your head around the fact it is not the usual unholy mix of reductionism and transcendentalism that folk try to apply to metaphysical questions. The machine and its ghost.
  • Is linear time just a mental illusion?
    You did badly in science class? Is that what causes you to blather on here?
  • What is motivation?
    Yep, machines need a creator. That is why organisms need explanation in terms of a logic of self organisation.
  • Boys Playing Tag
    A powerlaw distribution plots as a straight line on log/log axes. So it is the result of exponential growth or uninhibited development in two contrasting or dichotomous directions.

    In the case of minecraft mods, mods can be freely added to the pool and freely selected from the pool. So popularity of any mod will have a powerlaw distribution in that you have two contrasting actions freely continuing. And then the frequency with which any mod is both added and selected is simply "an accident".

    The model treats the choice as a fluctuation that can have any size (there is no mean). But also there is a constant power expressed over every scale. So you should expect a fat tail of a lot of mods with very few takers, and then also a few mods which almost everyone adopts.
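    A quick sampling sketch shows the shape of that claim. The Pareto exponent and pool size here are invented for illustration:

    ```python
    import numpy as np

    # Draw "mod popularity" from a heavy-tailed Pareto distribution
    rng = np.random.default_rng(2)
    popularity = rng.pareto(a=1.2, size=100_000) + 1   # power law, x_min = 1

    print(f"median popularity: {np.median(popularity):.1f}")   # tiny
    print(f"max popularity:    {popularity.max():,.0f}")       # enormous
    # Most mods get almost no takers; a few get adopted by nearly everyone.
    ```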
  • Is linear time just a mental illusion?
    Not just the nuts. The absolute coconuts.
  • Boys Playing Tag
    Surely whether you want unbounded growth or a steady state is what would determine whether you decide to engineer for a powerlaw or Gaussian situation.

    Most nations want unbounded growth. It would be a major change to switch to a steady state ambition as things stand. Even if natural environmental constraints say we should.

    So for immigration, what globalisation has produced is worker mobility. The smart choice - if you can manage it as a nation - is to try and import the creative educated elite on a permanent basis, and then take advantage of imported cheap labour - Filipino construction workers, Mexican fruit pickers - on temporary labour visas.

    Of course - going in the other direction - you want to export all your polluting and slave wage jobs to the developing nations. They can run the extractive industries and call centres. You only have to import temporary workers to be the nannies, the care home staff, the dairy workers.

    So the true immigration picture does look dictated by an economic growth agenda. Then within that might come the discussion whether the desired balance of social stability/social creativity is best served by assimilationist vs multicultural migrant policies.
  • Is linear time just a mental illusion?
    And yet a muon travelling at relativistic speed takes much longer to decay. Fancy that.
  • Gettier's Case I Is Bewitchment
    On my view, correspondence doesn't lie between thought/belief and fact/reality.creativesoul

    I have no idea what you mean here. It sounds like some brand of idealism if you so clearly rule out fact/reality. Although fact and reality could also be talking about the "thing in itself" and the observables we take as its signs.

    So what is fact/reality? And what is thought/belief? Is that conception, interpretation or what? Translation please.

    Rather, it is necessarily presupposed within all thought/belief formation and the attribution of meaning itself. Correlation presupposes the existence of it's own content, no matter how that is later qualified as 'real', 'imaginary', and/or otherwise...creativesoul

    Now here it sounds like you want to talk about the packaged deal instead of the analysed justified belief relation.

    It seems of course obvious that we need to "presuppose" both the existence of the world, and the veracity of our signs, or sensory observables. So correlation or inference can work because it is really doing something. And the way we become sure of that in practice is that presupposing this triadic truth relation has a definite outcome. Over time we come to reliably see a sharp difference between what is just our imaginings or wishful thoughts - the stuff we assign to our "self" - and then the recalcitrant category of experience which we then assign to the "existence of a real world".

    So the presuppositions may seem necessary to start the ball rolling. But they remain in force only because we find they actually seem to do something very definite in partitioning our experience into a realm of thoughts and a realm of physical reality.

    Thus the packaged deal of "inference to the best explanation" does not predetermine what it finds. And this is historically obvious as we have learnt that reality is not actually coloured. The hues we experience sit on the side of our ideas as constructed signs. Over the other side is electromagnetic energy. Or at least that is where our scientific-strength inferencing has got us in conceiving the thing-in-itself through a system of signs (that is, theories and measurements, ideas and impressions).