Comments

  • Explaining probabilities in quantum mechanics
    To be fair to Deutsch, he wrote that book back in the 1990s. Many people got carried away and were taking the most literal metaphysical view of the newly derived thermal decoherence modification to quantum formalism.

    Now most have moved to a more nuanced take, talking about many world-line branches. But my criticism is that this simply mumbles the same extravagant metaphysics rather than blurting it out loud. Many minds is as bad as many worlds.

    On the other hand, listen closely enough to MWI proponents, and they now also start to put "branches" in quotes as well as "worlds". It all starts to become a fuzzy ensemble of possibilities that exist outside of time, space and even energy (as preserved conservation symmetries). The MWIers like Wallace start to emphasise the decision making inherent in the very notion of making a measurement. In other words - in accepting metaphysical vagueness and the role that "questioning" plays in dissipating that foundational uncertainty - MWI is back into just the kind of interpretative approach I have advocated.

    There is now only the one universe emerging from the one action - the dissipation of uncertainty that comes from this universe being able to ask ever more precise questions of itself.

    In ordinary language - classical physics - we would say the Universe is cooling/expanding. It began as a fireball of hot maximal uncertainty. As vague and formless as heck. Then it started to develop the highly structured character we know and love. It sorted itself into various forces and particles. Eventually it will be completely definite and "existent" as it becomes maximally certain - a state of eternal heat death.

    Only about three degrees of cooling/expanding to get to that absolute limit of concrete definiteness now. You will then be able to count as many worlds as you like as every one of them will look exactly the same. (Or rather you won't, as individual acts of measurement will no longer be distinguishable as belonging to any particular time or location.)
  • Is linear time just a mental illusion?
    Remember: all frames of reference are reciprocal.Rich

    So they have an inverse relationship, and yet are the same? Did 1/x = x/1 when you went to school, getting your mighty fine grades while apparently parroting nothing and questioning everything?

    I think you need to add reciprocals as another basic principle you completely fail to understand.
  • Boys Playing Tag
    Frank - https://stevefrank.org/reprints-pdf/09JEBmaxent.pdf

    I don't get your objection. If you observe powerlaw statistics, then that is when you should suspect this free orthogonality to be at work. And the very fact we are so accustomed to mapping the world like this - with an x and y axis which models reality in terms of two variables - should tell you a lot.

    And as I say, the alternative is that the correct interpretation is a Gaussian plot - normal/normal axes rather than log/log. You can of course then have log/normal distributions as a mixed outcome.

    So it is curve fitting. You have some bunch of dots that mark the location of an observable in terms of two orthogonal or independent variables - whatever labels make sense for your x and y axis. Then either you can draw a good straight line through the middle of them if the relationship is fixed and linear using normal/normal scaling (just counting up 1, 2, 3... on both axes), or you find that the relationship is a flat line plot only when you use axes that count up in orders of magnitude (so 1, 10, 100...).

    Both Gaussian and powerlaw distributions presume two independent variables. The question then is whether the axes need linear or exponential counting when it comes to making the resulting equilibrium balance a simple flat line.
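
    Here is a minimal sketch of that diagnostic in Python. The Pareto sample and every parameter are my own illustrative assumptions; the point is just that the same cloud of dots only straightens out when both axes count in orders of magnitude:

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic observable with powerlaw statistics (Pareto-distributed sizes)
      sizes = np.sort(rng.pareto(1.5, 10_000) + 1.0)[::-1]
      rank = np.arange(1, sizes.size + 1)

      # How straight is the dot cloud on normal/normal axes (1, 2, 3...)?
      lin_r = np.corrcoef(rank, sizes)[0, 1]

      # And on log/log axes (1, 10, 100...)?
      log_r = np.corrcoef(np.log10(rank), np.log10(sizes))[0, 1]

      print(lin_r, log_r)  # the log/log correlation is near -1: a flat-line fit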

    Back in the real world, yes one might need extra knowledge about the domain as more complicated stuff will usually be going on. Your independent variables might in fact not be so independent.

    But my point - and Frank's point - is that domain issues wash out at the grand metaphysically general scale.

    This is the truth we can derive from the maths of hierarchy theory - Stan Salthe, whom I have already mentioned. At a large enough scale of observation, the local and global bounds of a system are far enough apart that any coordination - any deterministic connections - becomes so fine-grained as to drop out of the picture.

    It is the law of large numbers. Eventually local differences cease to matter as you zoom out far enough. All that domain detail becomes just a blur of statistical sameness. You can now see a system in terms of its general characteristics - like whether it is closed and single scale Gaussian, or open and multi scale fractal or powerlaw.
  • Explaining probabilities in quantum mechanics
    They are being metaphysically extravagant in a way the mathematics of decoherence doesn't require.

    To build an actual quantum computer will require a lot of technical ingenuity to sort out practical problems like keeping the circuits in a cold enough, and isolated enough, condition for states of entanglement to be controllable.

    Do you think those basic engineering problems - which may be insurmountable if we want to scale up a circuit design in any reasonable fashion - are going to be helped by a metaphysical claim about the existence of many worlds?

    Unless MWI is also new science, new formalism, it is irrelevant to what is another engineering application of good old quantum mechanics.
  • Explaining probabilities in quantum mechanics
    Many worlds is used by many to avoid the physical reality of wavefunction collapse or an actual epistemic cut. Or rather, to argue that rather than local variable collapse, there is branching that creates complementary global worlds.

    So as maths, many worlds is fine. It has to be as it is just ordinary quantum formalism with the addition of thermodynamical constraint - exactly the decoherent informational view I advocate.

    But it gets squirmy when the interpretation tries to speak about the metaphysics. If people start thinking of literal new worlds arising, that's crazy.

    If they say they only mean branching world lines, that usually turns out to mean they want to have their metaphysical cake and eat it. There is intellectual dishonesty because now we do have the observer being split across the world lines in ways that beg the question of how this can be metaphysically real. The observer is turned back into a mystic being that gets freely multiplied.

    So I prefer decoherence thinking that keeps observers and observables together in the one universe. The epistemic cut itself is a real thing happening and not something that gets pushed out of sight via the free creation of other parallel worlds or other parallel observers.
  • Explaining probabilities in quantum mechanics
    Modest or radical? The Copenhagen Interpretation is metaphysically radical in paving the way towards acknowledging that there must be an epistemic cut in nature.

    The "modest" understanding of that has been the good old dualistic story that it is all in the individual mind of a human observer. All we can say is what we personally experience. Which then leads to folk thinking that consciousness is what must cause wavefunction collapse. So epistemic modesty quickly becomes transcendental confusion. We have the divorce in nature which is two worlds - mental and physical - in completely mysterious interaction.

    I, of course, am taking the other holistic and semiotic tack. The epistemic cut is now made a fundamental feature of nature itself. We have the two worlds of the it and the bit. Matter and information. Or local degrees of freedom and global states of constraint.

    So CI, in recognising information complementarity, can go three ways.

    The actually modest version is simple scientific instrumentalism. We just don't attempt to go further with the metaphysics. (But then that is also giving up hope on improving on the science.)

    Then CI became popular as a confirmation of hard dualism. The mind created reality by its observation.

    But the third route is the scientific one which various information theoretic and thermodynamically inspired interpretations are working towards. The Universe is a system that comes to definitely exist by dissipating its own uncertainty. It is a self constraining system with emergent global order. A sum over histories that takes time and space to develop into its most concrete condition.
  • Boys Playing Tag
    Models simplify. They shed information. A plot of a frequency distribution is a snapshot of the state of affairs that has developed over time. It is not a plot of how that state of affairs developed. That is the bit that the plot treats as accidental or contingent - information that is ignorable and can be shed.

    So it seems you want to track the deterministic train of particular events in the Minecraft scenario. One mod was really good. A lot of others just stank. The outcome over time would be explained in terms of the individual merit of each mod.

    But that defeats the point of a probabilistic analysis. The surprise - to the determinist - is that all that locally true stuff is still irrelevant in the largest view where we are inquiring into the fundamental set up of the system. The same global patterns emerge across all kinds of systems for the same general global reasons. The deterministic detail is irrelevant as it doesn't make a difference. What creates the pattern is the simple thing of two free actions orthogonally aligned.

    Again, read Frank. The Gaussian~powerlaw dichotomy works just the same whether we are talking about a host of independent deterministic variables, or random variables. If you step back far enough - if the ensemble size is a large number - then what seems like an essential metaphysical distinction (random vs determined) becomes just a statistical blur. Now we are just talking about the nature of the global constraints. Are they the set up for a closed Gaussian single scale system, or an open powerlaw multi-scale system?

    Local determinism - like some objective judgement that a mod either stinks or works, hence the "feedback" that determines its popularity - just drops out of the picture. It makes no difference to the answer. What we are now talking about is the limits on indeterminism itself. Randomness at a global systems level turns out itself to be constrained to fall between the two bounds described by normal and powerlaw distributions.

    Folk like to talk about chaos. But chaos turned out to be just powerlaw behaviour. Mess or entropy has its upper limit.

    Which then leads to the next question of what lies beyond messy? Again, back to Peirce, the answer becomes vagueness or firstness or Apeiron. Or rather, vagueness is the ground from which maximum mess and maximum order co-arise as the dichotomisation of an ultimate unformed potential.
  • Explaining probabilities in quantum mechanics
    If that can be explained in a causal framework then it restores the idea that probability reflects a lack of knowledge about the world, it's not fundamental.Andrew M

    I like the quantum information approach where the view is that uncertainty is irreducible because you can't ask two orthogonal questions of reality at once. Location and momentum are opposite kinds of questions and so an observer can only determine one value in any particular act of measurement.

    So this view accepts reality is fundamentally indeterministic. But also that acts of measurement are physically real constraints on that indeterminism. Collapse of the wavefunction happens. It is only that you can't collapse both poles of a complementary/orthogonal pair of variables at once. Maximising certainty about one maximises uncertainty about the other, in good Heisenberg fashion.
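
    In the standard formalism, that trade-off is the Heisenberg uncertainty relation. Position and momentum operators fail to commute, so their spreads are jointly bounded:

      [\hat{x}, \hat{p}] = i\hbar \quad\Longrightarrow\quad \Delta x \, \Delta p \geq \frac{\hbar}{2}

    Squeeze \Delta x towards zero and \Delta p must blow up to keep the product above \hbar/2, and vice versa.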

    What this interpretation brings out sharply is the contextual or holistic nature of quantum reality. And the role played by complementarity. Eventually you are forced to a logical fork when trying to eliminate measurement uncertainty. You can't physically pin down two completely opposite quantities in a single act of constraint.

    Grab the snake by the head, or by the tail. The choice is yours. But pin one down and the other now squirms with maximum freedom and unpredictability.

    Of course, when talking of observers and measurements, we then have to grant that it is the Universe itself which is doing this - exerting the constraints that fix a history of events. So it becomes a thermal decoherence interpretation with the second law of thermodynamics being the party interested in reducing the information entropy of the Universe as a physical system.
  • What is motivation?
    It is an assertion of immanence and a rejection of transcendence.

    The self is the system as a whole. And it is a whole in that all four causes evolve via mutual interaction. They arise within the system itself. Top-down constraints shape the bottom-up degrees of freedom. And those bottom-up degrees of freedom in turn construct - or rather reconstruct - those prevailing global states of constraint.

    Holism is pretty simple once you get your head around the fact it is not the usual unholy mix of reductionism and transcendentalism that folk try to apply to metaphysical questions. The machine and its ghost.
  • Is linear time just a mental illusion?
    You did badly in science class? Is that what causes you to blather on here?
  • What is motivation?
    Yep, machines need a creator. That is why organisms need explanation in terms of a logic of self organisation.
  • Boys Playing Tag
    A powerlaw distribution plots as a straight line on log/log axes. So it is the result of exponential growth or uninhibited development in two contrasting or dichotomous directions.

    In the case of Minecraft mods, mods can be freely added to the pool and freely selected from the pool. So the popularity of any mod will have a powerlaw distribution in that you have two contrasting actions freely continuing. And then the frequency with which any mod is both added and selected is simply "an accident".

    The model treats the choice as a fluctuation that can have any size (there is no mean). But also there is a constant power expressed over every scale. So you should expect a fat tail of a lot of mods with very few takers, and then also a few mods which almost everyone adopts.
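
    To see that mechanism in miniature, here is a toy Python simulation. The entry rate and the popularity-weighted selection rule are my own illustrative assumptions - a standard rich-get-richer setup, not anything measured from actual mod repositories:

      import random

      random.seed(1)
      downloads = [1]  # downloads[i] = current takers of mod i

      for _ in range(100_000):
          if random.random() < 0.05:
              downloads.append(1)  # a new mod is freely added to the pool
          else:
              # an existing mod is freely selected, weighted by popularity
              i = random.choices(range(len(downloads)), weights=downloads)[0]
              downloads[i] += 1

      downloads.sort(reverse=True)
      print(downloads[:3])  # the few mods that almost everyone adopts
      print(sum(d <= 2 for d in downloads), "mods with hardly any takers")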
  • Is linear time just a mental illusion?
    Not just the nuts. The absolute coconuts.
  • Boys Playing Tag
    Surely whether you want unbounded growth or a steady state is what would determine whether you decide to engineer for a powerlaw or Gaussian situation.

    Most nations want unbounded growth. It would be a major change to switch to a steady state ambition as things stand. Even if natural environmental constraints say we should.

    So for immigration, what globalisation has produced is worker mobility. The smart choice - if you can manage it as a nation - is to try and import the creative educated elite on a permanent basis, and then take advantage of imported cheap labour - Filipino construction workers, Mexican fruit pickers - on temporary labour visas.

    Of course - going in the other direction - you want to export all your polluting and slave wage jobs to the developing nations. They can run the extractive industries and call centres. You only have to import temporary workers to be the nannies, the care home staff, the dairy workers.

    So the true immigration picture does look dictated by an economic growth agenda. Then within that might come the discussion whether the desired balance of social stability/social creativity is best served by assimilationist vs multicultural migrant policies.
  • Is linear time just a mental illusion?
    And yet a muon travelling at relativistic speed takes much longer to decay. Fancy that.
  • Gettier's Case I Is Bewitchment
    On my view, correspondence doesn't lie between thought/belief and fact/reality.creativesoul

    I have no idea what you mean here. It sounds like some brand of idealism if you so clearly rule out fact/reality. Although fact and reality could also be talking about the "thing in itself" and the observables we take as its signs.

    So what is fact/reality? And what is thought/belief? Is that conception, interpretation or what? Translation please.

    Rather, it is necessarily presupposed within all thought/belief formation and the attribution of meaning itself. Correlation presupposes the existence of its own content, no matter how that is later qualified as 'real', 'imaginary', and/or otherwise...creativesoul

    Now here it sounds like you want to talk about the packaged deal instead of the analysed justified belief relation.

    It seems of course obvious that we need to "presuppose" both the existence of the world, and the veracity of our signs, or sensory observables. So correlation or inference can work because it is really doing something. And the way we become sure of that in practice is that presupposing this triadic truth relation has a definite outcome. Over time we come to reliably see a sharp difference between what is just our imaginings or wishful thoughts - the stuff we assign to our "self" - and then the recalcitrant category of experience which we then assign to the "existence of a real world".

    So the presuppositions may seem necessary to start the ball rolling. But they remain in force only because we find they actually seem to do something very definite in partitioning our experience into a realm of thoughts and a realm of physical reality.

    Thus the packaged deal of "inference to the best explanation" does not predetermine what it finds. And this is historically obvious as we have learnt that reality is not actually coloured. The hues we experience sit on the side of our ideas as constructed signs. Over the other side is electromagnetic energy. Or at least that is where our scientific strength inferencing has got us to in terms of conceiving the thing-in-itself in terms of a system of signs (that is, theories and measurements, ideas and impressions).
  • Is linear time just a mental illusion?
    If time means nothing to a photon, should it mean anything to any of us?Mike Adams

    Time means nothing to the photon, but it does mean something to any particle with mass. Relativistically speaking, mass has a meaningful temporality because it can go slower than lightspeed. It enjoys a range of temporal rates that is bounded by absolute rest and the speed of light.

    For a photon, we can say that its journey is over the same instant it began. Time as we think of it doesn't really exist for a massless particle. But a particle with mass "experiences" a range of clock speeds. So it makes a difference if my twin heads off in a rocket at near light speed while I remain at rest in an inertial frame. One of us is going to look a lot older than the other the next time we meet.
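
    The clock-speed arithmetic here is just the standard time dilation formula. Over a coordinate time t, a clock moving at speed v accumulates the proper time

      \tau = t \sqrt{1 - v^2/c^2}

    At v = 0 it runs at full rate, \tau = t. As v \to c the elapsed proper time goes to zero - the photon's journey over the same instant it began.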
  • What is motivation?
    Yeah. And so already you are pointed in the direction of seeking that missing thing as a satisfaction.

    If satisfaction is actually impossible, then it can't really be said to be missing. Motivation remains the direction you want to take because it is "leaving something definitely behind by definitely heading in the exact other direction".
  • Gettier's Case I Is Bewitchment
    Instead of two beliefs, it is two separate notions of justification.

    On one hand, there is what seems to be your idealist approach - a dyadic correspondence between what you believe and what you observe. And this is contrasted with the realist approach where the dyadic correspondence is between what you observe and how the world really is.

    So what is needed to ground truth is the pragmatic or semiotic account which is triadic, and so can embrace both these correspondence claims.

    Now it becomes the case that our belief or interpretation is connected to the world outside via the mediation of a sign.

    So here, it is the number of coins in a pocket that has been proposed as the sign - the observable. And a mistake in counting shows that this sign could be an idea rather than a reality.

    This seems a real problem. But a semiotic view says that is how signs work. They have to - somehow - have a foot in both camps and thus stand as the third thing in the modelling relation going on.

    It is the same with dreams, illusions, imaginings, fictions and all the other ways we can have "sensations" that are "unreliable" when it comes to a correspondence with reality.
  • What is motivation?
    Genes play a part in determining the characteristics of the individual, but that's only a part.Metaphysician Undercover

    In biology, they are the determining part. What happens during growth or development is then that this finality gets mixed with a lot of particular accidents.

    So the acorn is a one-off genetic template - a particular form that can only deliver that one adult tree. Sexual reproduction ensures a shuffling of the genetic cards to create a unique hand.

    But then the tree grows. The acorn happens to have fallen on a stony hillside. One year as a sapling there is a big drought, another year it is hit by a pest invasion, eventually it gets hit by lightning.

    So the mighty oak ends up a bit mangled in ways that the acorn's genome couldn't envision. Constraints may be top-down determining, but also the development of actuality is subject to irreducible contingency. There are many particular accidents of fate that get woven into the final form of the genetic intention.

    So roughly the original envisioned oak emerges. And if not too beaten up, it can produce its crop of new acorns. Its biological goal has been achieved on the whole, to the extent it matters.

    Again, you have brought the discussion back to a reductionist way of thinking where constraints must be absolutely determining. But organically, constraints only have to regulate contingency to the degree it really matters. Doing more than that is pointless overkill.

    Once you get that naturalistic systems principle, it is pretty easy to apply that to the discussion of brains and habits we were having.
  • What is motivation?
    I already made the point this is a one-eyed view - https://thephilosophyforum.com/discussion/comment/100361

    You always stress the escape from negativity and ignore the approach to positivity. But it takes two to tango.

    So what we really have here is an inherent direction that points the way to progress. For things to be even meaningfully understood as bad, the logical corollary is that they could be good. Thus your pessimism collapses due to its own first premise.
  • Boys Playing Tag
    On a personal note, the first aha! moment for me was reading a 1976 SciAm article on Rene Thom's Catastrophe Theory as a biology undergrad.

    http://www.gaianxaos.com/pdf/dynamics/zeeman-catastrophe_theory.pdf

    You can see it covers abrupt changes of state in stockmarkets or rage/fear responses in dogs. So it gets at the kind of dynamics you raised in the OP.

    I was really terribly bored by what I was studying in class. It was so reductionist. Catastrophe theory was an almost mystical blast of something utterly different.

    But it was like a solitary trumpet call. At least given that one just did not have access to the "whole world of academia" back in those days. If your prof didn't know about it, you were hardly going to find out.

    Then 10 years later, it all came spilling out of the closet as deterministic chaos theory, fractals, complexity theory, far from equilibrium thermodynamics, etc. Everyone was talking about it. And the spreading continues.

    I'm glad you are checking out Life's Ratchet. To me, that is another such trumpet blast when it comes to establishing the physical basis of biosemiosis.

    Powerlaw stuff is all about sorting out the story of self-organising material dynamics. Semiosis is then the follow-on issue of how life and mind can constrain that dynamics in entropically fruitful fashion using information.

    And biophysics is identifying how this informational trick is something that "must happen" emergently at a particular nanoscale of being - the "edge of chaos" or transition zone which is the quasi-classical scale of atomic behaviour.

    Such a beautiful and satisfying story.
  • Boys Playing Tag
    Am I getting this right?Srap Tasmaner

    That's it. The popular account of all this has been the talk about fat-tail distributions, or six degrees of separation - http://www.nytimes.com/2009/02/08/magazine/08wwln-safire-t.html?mcubz=1

    Or as I mentioned, Taleb's Black Swan. Or "disintermediation" - https://en.wikipedia.org/wiki/Disintermediation

    So everyone is picking up on what is happening with the internet and now social media. People are inventing terminology left, right and centre.

    That makes it rather hard to see that this is not just about the web. It is an absolutely generic self-organisational story.

    Again, that Frank article is excellent in identifying the fact that we are talking of two contrasting probabilistic regimes in nature - where before, people thought there was really only the one, the good old bell curve. Now we are seeing that powerlaw (or log/log) statistics are not some kind of weird exception, but the other natural limit.

    Instead of trying to assimilate all structure to Gaussian outcomes, we should expect nature to be fractal, scalefree, hierarchical, to exhibit fluctuations over all scales, simply because of emergent probability.

    Powerlaw behaviour is in fact more normal or basic as it is less constrained. It is the first stage of order that you get because "free growth", or dissipative structure, is the simplest form of emergent organisation. It takes the addition of limits on growth to then start to get Gaussian closed system behaviour where fixed limits force the system towards a single-scale mean.

    Metaphysically, this is revolutionary. The Second Law of Thermodynamics would no longer be fundamental as it describes an already closed world in which entropy has an average. The lid has been put on the pot, as it were. Instead, you need a modified law - one based on dissipative structure, or Prigogine's "far from equilibrium" systems - that starts with powerlaw behaviour.

    So Gaussian probability - the central limit theorem - was worked out first. But it is the more constrained statistical situation. We are now working out the models of statistics with the least possible constraints. And so while powerlaw behaviour seems weird and exceptional, it is really the more generic case in nature.
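
    The contrast between the two regimes can be checked directly. A hedged Python sketch, with distribution parameters that are purely my own choices: the Gaussian sample mean settles as the ensemble grows (central limit behaviour), while a heavy enough powerlaw sample mean never does.

      import numpy as np

      rng = np.random.default_rng(0)

      for n in (10**3, 10**5, 10**7):
          gauss = rng.normal(1.0, 1.0, n)
          heavy = rng.pareto(0.9, n) + 1.0  # shape < 1: no finite mean exists
          print(n, round(gauss.mean(), 3), round(heavy.mean(), 1))

      # The Gaussian column converges on 1.0; the powerlaw column keeps growing.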
  • Boys Playing Tag
    What type of predictions can be expected of complex system modelling with regard to cultural development in stratified societies?Galuchat

    Well stratification or nested hierarchical organisation is itself predicted by Barabási's scalefree networks. The emergent powerlaw statistics of airports will be a familiar example. Eg: http://www.pnas.org/content/104/39/15224.full.pdf

    And now Bejan's Constructal Theory is pushing explicitly into social modelling. This marks a shift from purely statistical models to thermodynamical ones. Introductory chapter here:
    http://www.springer.com/la/book/9780387476803

    In general, hierarchy theory, which has been going strong since the 1970s, does explain hierarchical organisation in emergent terms. But that was more heuristic explanation than mathematically developed modelling. Now the general mathematical models are arriving, as in the above.

    You seem to be asking about cultural trends in particular. I would say that remains at the heuristic stage of argument. If you could pinpoint some trend of interest, that might jog my memory on relevant mathematical strength modelling.

    But one obvious trend explained is how modern life is polarised by the contrasting pulls of specialisation and generalisation. We are both more homogenous and more diverse at the same time because we all get exposed to Trump/Kardashians/Bieber as our universal shared culture, and yet also the same social media lets us dive into the most obscure interests shared by a few.

    Fifty years ago, everyone was clustered on a middle ground because TV had just a few channels. And homes, a single device. Now the internet has created a scalefree sociocultural environment. Going viral is now a thing - an emergent behaviour that is perfectly familiar.

    I guess my particular slant here is then making the connection between emergence/hierarchy theory and Peircean, or even Hegelian, semiosis and dialectics.

    So Peirce makes the logical and metaphysical point that all emergence must be grounded in Firstness or Vagueness - a state of pure potential or pure symmetry.

    Then there is a symmetry-breaking or dichotomisation. One becomes two, as in dialectical thesis and antithesis. You get complementary bounds emerging - as in canonically, the local and global scales that are the basis of a triadic hierarchical organisation. See Stan Salthe on his basic triadic system in his classic, Evolving Hierarchical Systems.

    So the emergent model is the Peircean one of an unbroken potential that breaks and separates in opposite directions, and having done that, becomes stratified because the two tendencies thus created get mixed - go to statistical equilibrium - across all available scales.

    And it is very easy to read this into current world affairs. For instance, we have had 30 years of economic globalisation. And the natural response to that is a new call for economic localisation.

    This is being read as a pendulum swing in politics. We went one way, now we must go the other. But really, political attention should be focused on the systems fact that an economic agenda predicated on liberated growth is going to go strongly in both these directions anyway. That is predictable. What will vanish from the system is the middle ground. Or rather, any proper mean or average scale of economic action.

    So it is not either/or, but both - and both being expressed across all available scales of organisation. And we can then measure a "fully stratified" hierarchical organisation in terms of its approach to this powerlaw ideal of having no actual mean.

    A non-growth system would be characterised by approaching the Gaussian limit of a precisely specified mean. A free-growth system does the opposite. And understanding this is pretty important if you want to have a sensible political conversation about the emergence of radical wealth inequality, or the "surprising" disappearance of the middle class.
  • What is motivation?
    If the acorn grows, it will construct an oak tree (in general), but not any particular oak tree, the intent is something general.Metaphysician Undercover

    Perhaps get someone to explain genes to you sometime.
  • What is motivation?
    As I explained, "top-down constraint" is formal cause, but this is inconsistent with "final cause" which gives the thing acting (the agent) freedom to choose a goal.Metaphysician Undercover

    No, no. Top-down constraint is formal and final cause bound up. Although - following the logic of dichotomies - we would also follow Aristotle in dividing constraint into its generality and particularity. So goals are general imperatives. And forms are particular states of constraint that would serve those imperatives.

    Take the Platonic solids. You can place the general constraint on geometric possibility of limiting volumes to regular-sided polygons. So the goal is maximised regularity. And then you have the five forms that meet the requirement. These forms in turn can be used as actual limits which shape lumps of matter that have efficient causes, like the property of whether they stack nicely, or not.
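
    For what it is worth, the counting argument behind the five forms is a nice worked example of constraint doing the causal work. If m regular p-gons meet at each vertex, the corner angles must sum to less than a full turn:

      m \left(1 - \frac{2}{p}\right)\pi < 2\pi, \qquad p \geq 3, \; m \geq 3

    The only integer solutions are (p, m) = (3,3), (3,4), (3,5), (4,3), (5,3) - tetrahedron, octahedron, icosahedron, cube, dodecahedron. The general imperative alone forces exactly five particular forms.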

    The acorn becoming a tree, is a bottom-up action.Metaphysician Undercover

    Hardly. The acorn packs a genome - the product of millennia of evolved intentionality. You couldn't pick a worse example. The acorn - as a small package of carbohydrate and basic metabolic machinery - has to grow. It must construct an oak by constraining material flows for 100 years. But the fact it will be an oak is already written into its destiny.

    Otherwise the human agent has no freedom to choose one's own goals, and this is inconsistent with observations of human behaviour. We freely choose our goals, they are not enforced through top-down constraint.Metaphysician Undercover

    And so you again ignore all the science that has shown that this kind of "ghost in the machine" freedom is a dualistic illusion.

    That doesn't mean there is no "freedom of choice". It means that we are constrained by our biology and sociology to act intelligently and creatively. We have the capacity to negotiate the balance between our individual wants and our social demands. And we can do that well, or do that badly. Selection will weed out what works and what doesn't.

    Then what is the thing which is active? Global constraints and local degrees of freedom produce effects on what?Metaphysician Undercover

    You are locked into cause and effect thinking. A doer and a done-to. That is the mental habit you need to break. Aristotle ought to be a good start for any systems thinker. His four causes approach was the basis for self-organising entelechy. Material potential becomes actualised as it expresses its functionality.
  • Boys Playing Tag
    This is interesting:Srap Tasmaner

    I think we all know this. But then the paradigm shift is seeing that it is a natural, probabilistic and self-organising thing. It is a mutuality or dichotomy that emerges through "pure statistics". That is why the new wave of system modelling - based on complexity and thermodynamical thinking - offers the right analytic tools.

    And then the other paradigm surprise is that the statistical models themselves are polarised. We get Gaussian vs Powerlaw systems as the two limiting cases of natural probabilistic systems.

    A few years back, Nassim Nicholas Taleb had a best-seller, The Black Swan, that expressed his surprise that the modern social world had become a case of Extremistan vs Mediocristan - http://kmci.org/alllifeisproblemsolving/archives/black-swan-ideas-mediocristan-extremistan-and-randomness/

    Likewise there was an explosion of popular talk about fat-tail distributions.

    But it was a surprise that anybody could be surprised. Once we started to get the computer power to handle non-linear calculations from the 1970s, an abundance of "emergent constraint" mathematical models poured out of science. Fractals, scalefree networks, etc.

    So it offers a metaphysically-general shift in frame. We have got used to thinking of reality as a deterministic Newtonian clockwork. But actually it is all about emergent self-organising probability - the organicist or natural philosophy view.

    That then is how I would analyse your example of a misplayed game of tag. You were seeking to extract some example of how a few strong actors might tip a much larger dynamical social order into a new regime - effect a phase transition. A modern probability based metaphysics makes that a right approach. It is a good starting intuition.

    But then also - the point I made - this particular game could just be a breakdown in self-organisation. The lesson might be more about what now counts as unnatural about the situation described - like the short-run view of the system you were taking, and its very small number of possible interactions.
  • What is motivation?
    This is surprising coming from someone who supports the idea of "final causes". What do you think final cause is, if not a dualist principle?Metaphysician Undercover

    I thought I've explained ad nauseam? It is a dialectical or dichotomistic principle. Final and formal cause are wrapped up in the systems notion of top-down acting constraints. They are matched in complementary fashion by bottom-up acting degrees of freedom - a notion wrapping together material and efficient cause.

    So Aristotle was the great systems thinker. This is what modern systems thinking looks like.

    In any activity, there is always an "agent".Metaphysician Undercover

    Quotemarks are good. The agent should vanish if the systems account is working. We end up with a system that has the property of agency exhibited hierarchically over all scales of its being.

    This is the thing which is acting, the agent produces an effect.Metaphysician Undercover

    But in the systems view, both the global constraints and the local degrees of freedom produce effects. Both the general context and the particular events are causal.

    You claim to support the idea of final causes but then you describe human activities in your neuroscientific way, as if they are all efficient causes.Metaphysician Undercover

    No. That is just how you insist on understanding everything, no matter how regularly I correct you on that.

    Unless you can describe an interaction between efficient causes and final causes within one model, there is no basis to your claim that you both support the idea of final causes, and deny dualism.Metaphysician Undercover

    Yep. You really do never listen.
  • Boys Playing Tag
    I wondered about this, but my guess was what mattered was the percentage. 25% is clearly enough, but my guess is that a much smaller percentage of the population could effect this kind of change. They wouldn't even need to conspire if there was an objective way to choose a target.Srap Tasmaner

    But is your tag game a good model from which to extrapolate? It has unnatural features like that it is a closed system - only these four kids are playing. If there were a large pool of kids and behaviour was observed over time, then more dynamical and self-organising conclusions could be drawn.

    In a realistic game of tag - as nature plays it - would this one kid switch the system? Even in your tag game, the two others are just left out. They don't change their strategy. The smallest kid is the only other one who feels forced into joining in the mutual strategy switch. Wait a little longer, and doesn't the smallest kid get fed up and walk away?

    So you are illustrating the breaking of a system, not the gestation of a new self-balancing state of system organisation. There just is no organisation unless it has a self-perpetuating balance of competition vs cooperation. There is a sort of cooperativity between your 2 and 4 for a while. But it seems one that must soon break down - 4 walks off - rather than being the new stable state with mutual benefits.
  • Boys Playing Tag
    Yet another thought: I'm torn between the idea that cooperation might not be emergent and needs to be a first-class goal alongside competition, and the idea that market theory could be right.Srap Tasmaner

    Again, take notice of the background thought here. There are two views. Either we try to engineer the system like a machine, or we recognise the power of self-organisation based on a probabilistic view of nature.

    So as I say, probability theory sees two kinds of natural attractors when it comes to "fair" outcomes, fairness being really another word for globally random and unbiased.

    This is one of my favourite papers on the issue - https://stevefrank.org/reprints-pdf/09JEBmaxent.pdf

    Should Bill Gates be taxed to bring him back within normal distribution bounds of wealth? Or is his wealth in fact fair because we want to create a social world that is exponentially growing and not stuck in a steady-state equilibrium in terms of consumption?

    Fairness or randomness or cooperation are themselves globally emergent outcomes that can be polarised by two general settings when it comes to thermodynamical balance, or emergent natural patterns.

    In such a story, our player 2 would not be a bully but an iconoclastic hero, the one who says the emperor has no clothes.Srap Tasmaner

    In probability theory, there are always fluctuations. The question is whether the system itself is at a tipping point where the perturbation makes a difference. In a perfectly poised system, like the weather, a butterfly wing flap can be the difference. In a severely constrained system, it would take something huge - bigger than itself - to smash things apart. You can kick the mountain and it won't fall. It would take an asteroid, or millennia of eroding rain drops, to do that. But an avalanche could just "give way" because of the tiniest vibration.

    But again, your tag game seems too small a sample size to really show any emergent dynamics like this. You just have a kid taking it into his head to win in the easiest fashion. And you as an outsider see that as being against the rule of winning in tag being random. You want this tiny sample size to replicate your ideal of social dynamics where everyone has something they can win at, and so - as the corollary - nobody wins at everything.
  • Boys Playing Tag
    1 thought they were all playing 'tag'. 2 was actually playing 'get 4', a quite different game.Cuthbert

    Excellent point.

    This is what I had in mind: there are theories that expect cooperation to be emergent from competition.Srap Tasmaner

    My systems science perspective sees competition and cooperation as the complementary local and global poles of social organisation. So while cooperation is emergent, competition would be so as well. Each needs the other to be able to definitely distance itself from the other. There is an inner drive to bifurcate towards one or the other state. Which seems disruptive as a dynamic. And yet also, hierarchical stability is achieved overall because cooperation is generalised constraint, competition is generalised degrees of freedom. There is a balance when the overall cooperation is forming the "right kind" of locally competitive behaviour. That is the kind of competition that is (re)constructing the higher level general constraints.

    So a "fair" tag game has the implicit rule that individual interactions are randomly targeted. They should approach a normal distribution. One individual then flipping the other way - deterministically targeting one interaction - is going naked competition and that breaks the general rule.

    In a big enough game of tag, there would be room to be a cheater like this and get away with it. You could target the easy to get kids as a group, or target one particular kid while also throwing in enough exceptions to look reasonably random to the rest. But your small sample size means that the distance between chasing fairly and chasing unfairly doesn't offer much room except to completely flip state from cooperative to competitive mode.

    The emergence of strong cooperation in social groups is about the removal of the opportunities to cheat like this. An anthropologist noted that a tribe shared evenly all the food it gathered. That lasted until the anthropologist put up a tent and individuals started wanting to take the opportunity to hide food with the outsider's help.

    So it is a tricky thing. But consider how you are viewing tag as a lesson in social fairness. You want it to be a competitive game without winners. Players should cooperate to randomise the outcome to the degree that it is simply an accident who comes out top. And as parents of kids, that seems like a great lesson in life. Pure cooperation at work. Kids like it too. It is natural to enjoy being part of a crowd having fun, where winning isn't really the thing.

    But then as we get older, games become serious. Now there are meant to be winners. And so targeting the weaknesses of opponents is no longer unfair by the rules. You do need a much tighter game structure. Lines on the ground, enforced turn taking, all kinds of rules to create equality of opportunity. But while the cooperative structure of the game is thus made completely explicit - the chaos of tag becomes the order of Wimbledon centre court - so also the competitive element becomes sharply focused. The whole point becomes that it ends with a winner and a loser.

    So as I said, social organisation is about this natural dynamic of competition and cooperation. Each is emergent from the other as each can only measure itself in terms of its dialectical "other". And this dynamic is vague in a game of kids' tag. Only parents standing outside would start to form the sharp rule that interactions ought to be self-consciously random. For kids, the chaos itself would be more the point - the chance to be inside a moment of learning how sociality works.

    But then mature adult games are what happens when this kind of chaotic learning stage becomes clearly polarised. We form concepts of social equality and social order, as well as the matching concepts of individual striving and acceptable degrees of social cheating. How to be acceptably competitive is also something that clearly emerges for us.

    How this relates to powerlaws or scalefree network models is then another story. A further complication. Enough to say that it is the difference between a steady-state system and an expanding one. A dynamical system that is static or not growing has an equilibrium balance that conforms to a normal distribution. One that is growing freely will conform to a powerlaw distribution.

    So "global fairness" looks quite different in the two regimes statistically. It has a mean in one, and no mean in the other.

    This is the reason why we are conflicted by the 1% and current social inequality. Why should an individual like Bill Gates be worth more than many nations? In a static world economy, wealth ought to be normally distributed. In an accelerating world, then wealth will naturally tend to a powerlaw distribution. That becomes the new random outcome. The bigger question is whether exponential growth is possible for long in a resource-limited world. But there you go.
  • What is motivation?
    I'm not a fan of dualism or homuncular regress. So sue me.
  • What is motivation?
    It would be difficult to identify final cause as constraint, because an agent is free to produce one's own goalsMetaphysician Undercover

    Ah. Again the return of the agent, the mysterious witnessing and deciding self who is conscious. We are back to the ghost in the machine. Who needs neuroscience.
  • 'Quantum free will' vs determinism
    Quantum mechanics definitely challenges Newtonian determinism. But the answer on freewill and causality is more general. What is really needed is an "epistemic cut" between physics and information in a system for it to gain self-regulating autonomy.

    That is, a living and mindful system is modelling its relation with the world informationally or symbolically - through a system of interpretive signs.

    The disconnect or epistemic cut is clear to see in computation. The hardware takes away the physics essentially. There is still an energetic cost to powering the circuits. But the cost of any operation is made the same. And so physics drops out of the equation as a causal constraint. The software is then left free to symbolise any state of affairs. It is free in an absolute way to be anything it likes. It can invent its own private system of causality, like the logic of a programme that computes.

    Now of course computation is just a machine. The epistemic cut is rather too complete. A computer is utterly severed from the world. And so humans have to write the programs, build the hardware, act as a connection between the realms of material physics and immaterial information.

    But organisms - life and mind in general - have an epistemic cut which is also then the basis of a lived entropic interaction. First, the physics and the information are separated. Organisms have various forms of memory that sit back from the metabolic whirl of dissipative physical action, like genes and neurons. But then this separate realm of information is embedded in an active modelling relation with the physical world. All the information is controlling physical processes, directing them towards desired ends.

    So there is a tight feedback loop that spans the physics~information divide that has been constructed. Unlike computation, there is a two-way street where the information may live in its own physics-free environment, but it still has to build its own hardware, pay for its own entropic expenditure. Life doesn't have anyone to plug it into a wall socket. It has to also own the means of self-construction and self-perpetuation.

    But when it comes to freewill debates, the essential point is that there is this basic epistemic cut which makes physics not matter. A zone of freedom is created by setting up an informational realm of causality.

    Now that freedom is then tied back to the general purpose of being a self-sustaining autonomous or autopoietic system. So it is not the absolute freedom of computation, which has no embedded purpose. But it is the practical freedom that is what it is like to be a human concerned with getting by in the world, making smart choices about physical actions that will perpetuate our existence. It is freewill as the modelling relation that allows us to be plugged into the world we are making for ourselves, individually and collectively.

    So really, when it comes to the physics, it makes no difference if that physics is understood as fundamentally deterministic or probabilistic. The epistemic cut upon which life and mind are based already filters that issue out. All that matters is that memory mechanisms can be constructed and material processes thus regulated via a modelling relation.
  • Pragmatism and Wittgenstein
    Yes of course. The basic idea that our conceptions shape our impressions is ancient. The Greeks realised that we have to read the ship on the horizon as a regular-sized ship far away and not some tiny miniature. What we experience is constructed.

    And then the epistemic issue becomes central with the Enlightenment and Scientific Revolution. Once we turn to the practical business of truly knowing the world, then the semiotic relation - the scientific process of reasoning that involves abduction, deduction and inductive confirmation - emerges as something explicit. You get Hume and the rest starting to spell this out.
  • Pragmatism and Wittgenstein
    To my understanding, pragmatism isn't generally considered to be a cohesive body of agreed upon thought that is acceptable to all of its adherents.sime

    I would say the historical situation is that Peirce formed an absolutely coherent view of pragmatism/semiotics. But then because of social forces, that never broke out the way it should have at the time. What came through into the public was the diluted Jamesian understanding of pragmatism (stripped of its semiotic backbone), or the Deweyian version (stripped of the metaphysical ambition).

    The "true Peirce" didn't start to come through until the 1980s and 1990s. The start of his emergence was in theological circles. The radicalness of his metaphysics fitted with those wanting a more idealist philosophical basis. But then a proper interest in Peirce has developed - even if still off the mainstream map. Semiotics has become important in theoretical biology for instance. It is big in Spain and South America and Canada, and other places on the edge.

    Peirce almost broke through in his own time, but philosophy was dominated by the UK, Germany and France. Harvard was some horrible provincial backwater of little account. And the Euro mood was also turning sharply to reductionism/AP. Peirce was offering a grand holistic logical scheme. The UK especially went with the shorn-down logic of Frege. Peirce became just a background unacknowledged influence - a half-heard good idea that echoed but was never fully grasped.

    Again, it is not so surprising as he got into trouble with Harvard, went off into the solitary wilderness, never actually managed to publish a single coherent text to bring his mature vision together. He created no book to study. And it then took about 80 years for scholars to get through all the unpublished notes and papers to present a fair reading of a voluminous output.

    So yes, Peirce's pragmatism/semiotics is utterly cohesive. And that totalising metaphysical view is itself considered a philosophical sin these days. No-one is meant to be able to make sense of it all in the way of an Aristotle or Hegel.

    And then there are a host of non-philosophical reasons why Peirce's impact was only as a whisper in the ear of AP. And why Pragmatism is viewed shallowly in terms of the metaphysically and logically unambitious retellings by James and Dewey.
  • Wittgenstein, Dummett, and anti-realism
    And we do this even if we don't expect to get "yes" or "no", but closer to "yes" or closer to "no", right?Srap Tasmaner

    I would put it the other way round. We know that to dichotomise strongly is the way to be sure that any answer is going to fall within the bounds of the possible. So the concern is about asking the question in the logically broadest sense so to ensure we encompass the whole range of any resulting answer.

    We can't reliably get closer to either limit - yes vs no - unless we are secure about the fact that the limits actually limit. So bivalence is part of that effort of framing questions in ways that ensure answers at least land inside their counterfactual bounds.

    I could say that thing over there is either a gnat or a 7. You can see how hard it would be to assign a truth value to statements that don't properly suggest actual bounds on our uncertainty.

    "AP"?Srap Tasmaner

    Analytic philosophy.