• Continuity and Mathematics
    When you divide a line at a point, the point stays with one segment and not the other. As someone trained in math, it's hard for me to understand how this answer isn't satisfactory.fishfry

    How can that be satisfactory in a philosophical sense? If one cut can divide the line on one side of the point, why can't the next cut divide it on its other side, leaving the point completely isolated and not merely the notion of an endpoint of a continuum?

    And a better paper on the Peircean project is probably...

    http://uberty.org/wp-content/uploads/2015/07/Zalamea-Peirces-Continuum.pdf
  • Continuity and Mathematics
    Continuity can only be relative to discreteness (at least in actualised existence). That is, continuity is defined by the lack of its other. So even spacetime as generalised dimensionality would be only relatively continuous. And that is what physics shows, both at the quantum micro scale and also at the relativistic macro scale (where spacetime is "fractured" by the event horizons of its light cone structure).
  • Zeno's paradox
    Can you (or anyone) supply some of the relevant Bergson and Peirce links that would shed light on the relation between the mathematical real numbers and the philosophical idea of the continuum?fishfry

    You are better off asking aletheist, as that is his argument. And I am certainly no Bergsonite.
  • Zeno's paradox
    I am not convinced that this is true. Two of Peirce's major objectives for philosophy were to make it more mathematical (by which he meant diagrammatic) and to "insist upon the idea of continuity as of prime importance." Surely he must have considered these efforts to be complementary, rather than contradictory.aletheist

    I think Rich is right that maths is generally premised on the notion of atomistic constructability and so is anti-continuity in that sense. (And that is not a bad thing in itself as constructionist models - even of continuity - have a useful simplicity. Indeed, arguably, it is only by a system of discrete signs that one can calculate. And signs are themselves premised on understanding the world in terms of symbolic discontinuities of course - signs are no use if they are vague.)

    So then the holistic reply to this routine mathematical atomism would be a countering mathematics of constraints - of pattern formation calculated via notions of top-down formal and final cause. And that is damn difficult, if not actually impossible.

    This would be why Peirce felt his diagrammatic logic was so important. Like geometry and symmetry maths, it tries to argue from constraints. Once you fence in the possibilities by drawing a picture with boundaries, then this is a way to "calculate" mathematical-strength outcomes.

    So yes, there is no reason why a construction-based maths should not be complemented by a constraints-based maths. And arguably, geometry illustrates how maths did start in that fashion. Symmetry maths is another such exercise.

    However, to progress, even these beginnings had to give way to thoroughly analytic or constructive techniques. Topology had to admit surgery - ways that spaces could be cut apart and glued back together in composite fashion - to advance.

    So that is at the heart of things here. For a holist, it is obvious reality is constraints-based. So regular maths is "wrong" in always framing reality in constructivist terms. And yet in the end maths is a tool for modelling. We actually have to be able to calculate something with it. And calculation is inherently a constructive activity.

    So while we can sketch a picture of systems of constraints - like Peirce's diagrammatical reasoning - that is too cumbersome to turn into an everyday kind of tool that can be used by any schoolkid or universal Turing machine to mechanically grind out results.

    Of course, that kind of holistic reasoning is also then absolutely necessary for proper metaphysical level thinking, and diagrammatical reasoning can be used to advance formal arguments in that way. You have probably seen the way Louis Kauffman has brought together these kinds of thoughts, recognising the connections with knot theory, as well as Spencer-Brown's Laws of Form. And I would toss hierarchy theory into that mix too.

    So construction rules the mathematical imagination as tools of calculation are the desired outcome of mathematical effort.

    While that doesn't make such maths wrong (hey, within its limits, it works, I keep saying), it does mean that one should never take too much notice of a mathematician making extrapolations of a metaphysical nature. They are bound to be misguided just because they hold in their hands a very impressive hammer and so are looking about for some new annoying nail to bang flat.
  • Zeno's paradox
    I have noted, rather, that no matter how big a finite number you specify, it is possible in principle to count up to and beyond that number. In other words, you cannot identify a largest natural number (or integer) beyond which it is impossible in principle to count.aletheist

    Again this is an example of rationally seeking a way for the part to speak for the whole. What can't be achieved via actualisation can be supported by appeal to the existence of a local property - in this case, not bijection but a quick demonstration that any nameable number implies in its own syntactic construction a number immediately larger (or immediately smaller).

    Tom is also employing this local syntactic property.

    So yes, bijection seems a more abstract level of definition because it maps maths to maths rather than maths to physics (ie: syntactic spaces where time is still part of the deal - as in saying any time you name a number, the next higher number awaits). But still, the general mathematical tactic is the same - seek a local property that constructive principles will guarantee stands for the truth of the whole. And thus the very nature of this tactic reveals the deeper questionable presumptions that metaphysics would be interested in.
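
    A minimal Python sketch of that local syntactic property (my own toy illustration, nothing more): naming any number already constructs its successor, yet no finite check ever exhibits the whole.

        def successor(n):
            # The local constructive property: naming n already implies n + 1
            return n + 1

        for n in [0, 7, 10**100]:
            assert successor(n) > n  # holds for any sample we care to name
        print("no largest natural among any we could test")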

    It is the idea that reality is perfectly constructible that is questioned by a synechetic or holistic point of view.

    But then even a simple holism falters - the idea of the continuum being instead "the foundational". The continuum is that to which an infinity of cuts can be made. If a division is possible, another one right next to it ... but spaced by the infinitesimal of some continuum ... must be possible. So simple holism is simply the inverse problem. Although - like division as an arithmetic operation - there is an advantage in that at least it is being flagged that there is a more primitive presumption about there being in fact a pre-existent whole (that gets cut or divided).

    So simple holism brings out the fact that simple constructionism is presuming an infinite empty space that can be filled by an unbounded act of counting. The standard atomistic approach presumes its numerical void waiting to be filled. And even bijection just illustrates the presumed existence of this numerical void as a waste disposal system that can swallow all arithmetical sequences. You can toss anything into the black hole that is infinity and it will disappear without a splash.

    So the simplest view treats infinity as the void required by atomic construction. The next simplest view treats infinity as a continuum - a whole that is in fact an everything, and so able to be infinitely divided.

    Then obviously - as usual - there is the properly complex view where instead of an atomistic metaphysics of nothingness, or even the partial holism of a reciprocal everythingness, we arrive at the foundational thing of a vagueness as that deepest ground which can be divided towards this reciprocal deal of numerical construction vs numerical constraint, the filling of a numberless void vs the breaking of a numberful continuum.

    Of course, none of this deep metaphysics need trouble those only concerned with ordinary maths. They can believe that Cantor fixed everything for atomistic construction and the story ends there.

    But deep metaphysics makes the argument that the very act of trying to cut is what produces the divided halves that appear to either side. The continuum arises because it is cuttable. Which, like the Cheshire Cat's grin, sounds really weird to those only used to everyday notions of logic or causality where something - either everything or nothing - has to be the starting point or prime mover for any chain of events.
  • Zeno's paradox
    If you think to yourself, "The natural numbers, the integers, and the rational numbers are examples of foozlable sets," you will not confuse yourself or others by shifting the meaning of a technical term to its everyday meaning.fishfry

    I think the issue here has been metaphysical - so neither everyday, nor mathematical. Although the mathematics of course has to have some grounds for finding its own axiomatic base "reasonable".

    So the Zeno paradox is about a particular difficulty between a mathematical operation and the world we might want it to describe. The math seems to say one thing, our experience of the world another.
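
    To make concrete what the math says, take the dichotomy version of the paradox. A back-of-envelope Python sketch (my own illustration, assuming the standard halving series) shows infinitely many steps summing to a finite distance:

        # Partial sums of Zeno's halving series: 1/2 + 1/4 + 1/8 + ...
        total = 0.0
        for n in range(1, 31):
            total += 0.5 ** n
        print(total)  # 0.9999999990...: converging on 1, never exceeding it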

    Bijection is great. It replaces the need for a global act of quantification (demonstrating an example of infinity by showing a sequence is measurably unbounded) with a local demonstration of a quality (if bijection works for this little bit of a sequence, then that property ensures the infinite nature of the whole). So bijection doesn't do away with the notion of counting or a syntactic sequence. But it does extract a local property that rationally speaks for the whole.
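
    As a toy illustration of that local property (my own sketch, using the textbook pairing of the naturals with the even numbers):

        def pair(n):
            # A bijection rule: every natural n is paired with the even number 2n
            return 2 * n

        print({n: pair(n) for n in range(8)})
        # Any finite prefix merely exercises the rule; the rule speaks for the whole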

    No problems there.

    And then we get back to the metaphysics on which even the mathematical intuitions are founded. Which was the issue the OP broached, and which you are side-tracking.
  • Perfection and Math
    Here is the text box definition, pulled from one of my statistics course books.Jeremiah

    That would be why probability ranges from 0 to 1 then? Categorical differences are measured relatively in fact?
  • Fractured wholes.
    Positivity doesn't exist unless it's in the company of negativity. So if we're just dealing with 2 dimensions, left and right are negatives of one another. It's more complicated if we add that third dimension.Mongrel

    A symmetry broken simply is a symmetry broken on just a single scale. So it is easily reversed. There is no real separation of what just got separated, and so there is nothing stopping a distinction immediately erasing itself.

    That is what literally happens with "positivity and negativity" when it comes to fundamental particles. They pop out of the quantum vacuum in opposing pairs (as the conservation laws derived from symmetry mandate) and then annihilate so fast that physics ends up calling them virtual.

    To get a persistent symmetry breaking requires a "third dimension" - a breaking over scale that creates an effective state of separation or asymmetry. Stuff has to be put far enough apart from itself so it can do something else while it takes its time to - by the end - just annihilate.

    With our actual Universe, there is a complex charge asymmetry built in because "raw matter" could fall into several different local symmetry-breaking arrangements. You could have the quarks with their eight-fold way that left a sufficient excess of positive protons. Then you had the leptons which - after an entanglement with the further symmetry-breaking of the Higgs fields - eventually left a sufficient excess of negative electrons.

    So right there - in a series of complicated symmetry breakings that turned out to have the makings of an actual asymmetry - you have an illustration of reality being a something because it got separated across scale (the thermal scale: heat all this asymmetric residue enough and you can return it to its Big Bang equilibrium where all particles are simply virtual fluctuations of a vanilla force).
  • Fractured wholes.
    I'm still going to have to think about what you've said more, do a little reading on the subject before I get back to you.Wosret

    No problem. I understand it is a dense issue. But as SX indicates, we can deal with actual similarity and difference in the world with an apparent intuitive ease that belies the underlying metaphysical complexity.

    And that complexity is what gets revealed as soon as we instead start to ask how a difference comes to make a difference. That question is like finding the loose end of a woolly jumper and beginning to pull.

    Well, right, left, up, down are all positive things. "Not-me", "not-us" and "not-shit" are not, and could really conceivably be anything at all except for me, us, and shit.Wosret

    This illustrates particularity. We seem to start metaphysics with a brute something. There is the positivity of some concrete proposition - that is then either true or false.

    But look closer and you can see here that the brute somethingness points to an "otherness" of two possible kinds - the more general, or the more vague.

    All the not-As might be accounted for by a concrete generality of some constraint that then defines the nature of what may count as a certain genus of particular. It might be the "me" that is a subset of the "we", or the "shit" that is defined in contrast to the undigested banquet.

    Or the not-A might simply refer to the indeterminism that is by contrast the generalised lack of such a determining context. Or in other words, it refers to the freedom or contingency that is also an equally inescapable aspect of reality. It might be the random seeming collection of "me, apples, tanks, galaxies". The "other" being spoken of via the logical construction of "not-A" could be just every kind of stuff. So just, in semantic effect, a vagueness.

    Thus once more, a complex triad is revealed at the heart of conventional monistic thought.

    From the particular - viewed as some brute substantial particular - you can talk about the "other", the not-A, as either the vague or the general. So that is something to be further specified in any attempt to apply logic to ontology.

    Peirce made the difference clear enough. He argued that the law of the excluded middle does not apply to generality, while it is the principle of non-contradiction that fails to apply to the vague.

    So within the (triadic) laws of thought, this important distinction between generality and vagueness is perfectly well defined (along with particularity as being that to which all three laws of thought then do apply).
  • Zeno's paradox
    You mentioned the relevance of traversing the Planck scale. And while I applaud taking the physical facts seriously, in fact any exactness of location results in a complementary uncertainty about momentum (or equivalently, duration).

    So if you are talking about a physical continuity on the Planck scale, your attempt to mark the first location would already have your fixed point traversing the whole distance to its resulting destination.

    It is like the way a photon is said to experience no time to get where it is going. Travelling at c means the journey itself is already described by a vector - a ray rather than a succession of points.

    So in the real world, locating your starting point is subject to the uncertainty relation. The Planck scale is the pivot which prevents you reaching your goal of exactitude by diverting all your measurement effort suddenly in the opposite direction. In effect you so energise the point you want to measure that it has already crossed all the space you just imagined as the context that could have confined it.
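
    To put rough numbers on that (a back-of-envelope Python sketch of my own, using the standard Heisenberg bound and the usual published values for the constants):

        # Minimum momentum spread from pinning a position to the Planck length
        hbar = 1.054571817e-34        # J*s (reduced Planck constant)
        planck_length = 1.616255e-35  # m

        dp = hbar / (2 * planck_length)  # Heisenberg bound: dp >= hbar/(2*dx)
        print(dp)  # ~3.3 kg*m/s - a macroscopic momentum for a "point"

    A kilogram-scale momentum spread from a single Planck-length localisation - the explosion in the opposite direction just described.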

    Zeno definitely does not apply in quantum physical reality.
  • Zeno's paradox
    I guess you must deny, then, that the integers are countable, since nothing and no one can actually count them all. And yet it is a proven mathematical theorem that not only the integers, but also the rational numbers are countable - i.e., it is possible in principle to count them - despite the fact that they are infinitely numerous.aletheist

    MU is right that it has to be more complex than that. Talk of actually counting smuggles in the necessity of the maker of the infinitesimal divisions or Dedekind cuts.

    For there to be observables, there has to be an observer. Or for the semiotician, for there to be the signs (the numeric ritual of giving name to the cuts), there has to be a habit of interpretance in place that allows that to be the ritualistic case. Which is why the number line itself is just a firstness or vagueness. In the ultimate analysis it is the raw possibility of continua ... or their "other", the matchingly definite thing of a discontinuity.

    So infinity and infinitesimal describe complementary limits - one is the continuum limit, the other the limit on bounded discreteness, the limit of an isolate point.

    Thus counting presumes an observer then able to stand in between. The counter can count forever because the counter also determines the cuts that pragmatically "do no violence" to the metaphysics, at least as far as the counter is concerned.
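
    For what it is worth, even the "counting forever" of the rationals can be made mechanical. A Python sketch (my own illustration, using the Calkin-Wilf successor formula, which visits every positive rational exactly once):

        from fractions import Fraction
        from math import floor

        def next_rational(q):
            # Newman's successor formula for the Calkin-Wilf enumeration
            return 1 / (2 * floor(q) - q + 1)

        q = Fraction(1)
        for _ in range(6):
            print(q)  # 1, 1/2, 2, 1/3, 3/2, 2/3, ...
            q = next_rational(q)

    Note the observer smuggled in all the same - someone or something still has to keep turning the crank.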

    My point is thus that an observerless metaphysics is as obtuse as an observerless physics, or theory of truth, or observerless anything when it comes to fundamental thought.
  • Fractured wholes.
    Great. The essential thing is not to be scared of complexity.

    Metaphysical analysis always arrives at dichotomous contrasts. Logical intelligibility itself demands a world divided into what is vs what is not. The problem is that this has to work for both sides of the equation. So the "what is not" has to be still something else itself - whatever it is that can make the "what is" what it is.

    So analysis sounds like it demands the resolution of a monadic outcome - the arrival at the fundamental via the rejection of all that is superficial, or contingent, or emergent, etc. Yet the fact is the dichotomy - the dyadic relation - is irreducible. You can't have any notion of the "what is" in the absence of the complementary notion of it being precisely "that which is not what it is not".

    So there is a doubled or recursive negation at work. Monadicity can only arrive at itself via the denial of its own denying. The essentially self defeating nature of monadic metaphysics is thus revealed. It others othering and thus falls into inconsistency even with itself.

    Thus the dichotomy forms the irreducible basis of intelligible existence. It both finds the natural divisions of being, and relates them as each other's other. Each is the other's limit.

    Having established that, we also establish that we are thinking in active and developmental terms, not passive and brutely existent ones. Existence is revealed as having a necessary history - as divisions must both arise and terminate. Which is where you get the thirdness or triadicity that is the ultimately irreducible metaphysical state. So yes, what is fundamental is not twoness, let alone oneness, but threeness. Three is the number of actual complexity.

    A further point is that to cash all this out in terms of some actual world requires a global state of asymmetry - or scale symmetry. That is, if reality is constructed by a symmetry breaking of pure possibility, then this breaking must happen freely and completely across all available scales of being.

    In terms of cosmological theory, the results must be homogeneous and isotropic - invariant with the scale of observation. That is why fractal maths is found everywhere that nature is at its simplest. Zoom in or zoom out, the fractal world looks always exactly the same. And that is because the dichotomy or distinction being expressed is being expressed fully over all possible scales. It is the same damn thing - the same damn seed asymmetry - absolutely everywhere.

    The Koch triangle shows this in its fractal generator, which is the simple asymmetry of natural log 4/natural log 3 (a fractal dimension of about 1.26).

    To unpack this, the Koch triangle fractal is a line divided into three, with the middle segment sprouting the two sides of a further triangular bump. So a line buckles in the simplest imaginable fashion, turning one segment into four at a third of the scale. That gives you the seed ratio - the four against the three. And then the logarithms simply express the growth of that act of buckling over every possible scale. You thus have two exponential actions in a constantly specified balance. The result is a mathematical model of perfectly complete asymmetry - or rather, the emergence of a new axis of scale symmetry, a fractal dimension that stands in the middle of two bounding extremes of action (between the flatness of the line that gets radically broken, and then the curvature of the buckling that is a departure from the now radical thing which is to be instead flat and "a line all the same with itself").
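
    If it helps, here is a throwaway Python sketch of that generator (my own illustration): each pass swaps every segment for four at one-third scale, which is exactly where the log 4/log 3 comes from.

        import math

        def koch_step(points):
            # Swap each segment for the 4-segment Koch generator at 1/3 scale
            out = [points[0]]
            for (x1, y1), (x2, y2) in zip(points, points[1:]):
                dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
                a = (x1 + dx, y1 + dy)
                c = (x1 + 2 * dx, y1 + 2 * dy)
                s = math.sin(math.pi / 3)  # the 60 degree buckle of the bump
                b = (a[0] + 0.5 * dx - s * dy, a[1] + 0.5 * dy + s * dx)
                out.extend([a, b, c, (x2, y2)])
            return out

        curve = [(0.0, 0.0), (1.0, 0.0)]
        for _ in range(3):
            curve = koch_step(curve)
        print(len(curve) - 1, "segments")               # 64 = 4^3
        print("dimension:", math.log(4) / math.log(3))  # ~1.2619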

    So it may seem a bit of an excursion to talk about the maths of fractals. On the other hand, it is a fact that the new maths of complexity (fractals, scale-free networks, universality, criticality, etc) gives a picture of reality that is precisely the kind of irreducibly triadic metaphysics I just described.

    So don't expect monadic metaphysics to be right. Expect the dichotomies or symmetry breakings that point to the broken symmetries or equilibrated outcomes that are then in turn their natural "scale symmetry" limits (or, the same thing, their states of asymmetric or hierarchical final order).

    Ie: The maths of complexity has vindicated this irreducibly triadic vision of nature during the past 40 years.
  • Fractured wholes.
    Similarity and difference are a metaphysical dichotomy. So each is defined in terms of being not the other. Or rather, in practice, as the breaking of a symmetry, each is made as unlike the other as possible - each as far apart as possible as states of being or categories.

    The two poles being differentiated then bring in the further thing, which is the vagueness or firstness that they divide. They are both crisply actual - as limits - of what was the purely possible.

    This furthere "in reference to" also manifests (confusingly) in the crisply divided outcome. The world that emerges between two opposed limits (here the similar and the different) is iteself everywhere some mixture or equilibrium balance of the two categories. So the world itself does sit in the middle - with this concrete mixture of states being found to be the same blend over all observable scales.

    So that is the basic set up - for all metaphysical dichotomies. They speak to the firstness that is their common vague origin (the symmetry that got broken) as well as the thirdness which is their own completely mixed state of being - the further thing of having become broken in the limit and arriving at an equilibrium balance.

    Of course it may sound crazy to talk of similarity and difference as being themselves united and divided. But that is the feature here: they are united in initial vagueness, then concretely divided by a logical symmetry breaking, and then reunited by the emergent symmetry of being as mixed together as they can possibly be. The developmental trajectory involved - firstness, secondness and thirdness - describes itself in terms of itself.

    Anyway, it means that for there to be a world, similarity and difference must be a division concretely respected over all scales of differentiation (and hence integration).

    Now we can get down to the detail of the mechanism.

    SX makes the standard semiotic point that to be an actual difference, a difference has to make a difference. So difference itself is divided into the meaningful vs the meaningless, the signal vs the noise, the teleological vs the contingent. Thus now we do bring in the active or causal nature of being.

    The alternative view is that existence is a passive brute fact. It has no reasons. Difference or similarity has no meaning. It is all just arbitrary labels for a world that has no developmental story and thus no reasons for its apparently definite state of organisation.

    But here I have described a developmental or process metaphysics where existence is an emergent equilibrium state where change keeps changing, but by the end further change can make no difference. It is like a new pack of cards. Once the deck is well shuffled, continued shuffling makes no effective difference. It does make a difference to the exact order, but now such differences are a matter of indifference. When the deck is as random as possible, it can't be made more random.
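
    The deck analogy can even be simulated. A trivial Python sketch (my own, using fixed points as a crude order statistic - nothing rigorous):

        import random

        def fixed_points(deck):
            # Cards still sitting in their original position
            return sum(i == card for i, card in enumerate(deck))

        random.seed(0)
        deck = list(range(52))
        for n in range(1, 6):
            random.shuffle(deck)
            print(f"shuffle {n}: {fixed_points(deck)} fixed points")

    Once randomised, the statistic just hovers around its equilibrium value of about one fixed point. Further shuffles change the exact order, but make no effective difference.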

    So yes, this all seems now a rather mindful or psychological kind of metaphysics. Similarity and difference are relative judgements that are about differences that make a difference (in breaking the symmetry of a state of similarity which is another word now for a state of indifference).

    But again, that bug is really a feature. It brings minds or observers firmly within the metaphysics of actual being. It unites epistemology and ontology in making meanings and thus purposes part of the world.

    The final twist to bring an organic or pansemiotic metaphysics into focus is then understanding the triadic relation in causal terms as the hierarchical contrast between constraints and freedoms. One is top down causality, the other acts in causally bottom up fashion.

    So similarity is enforced on natural possibility by general constraints. Worlds as states form constraining contexts. They limit free possibility in particular ways. And all objects or events thus limited are the same in that fashion. They all participate in that particular form.

    But then difference still exists. That is what freedom means. Spontaneous and unconstrained in some regard. So accidents and contingency are also fundamental in this organic picture of nature. They too exist over all scales of being. (The statistics of fractals or power laws being the signature of actual natural systems for this reason.)

    So now we have that triadic set up. There are general constraints. And there are particular freedoms. Then there is the rule of indifference in operation that marks the emergent boundary where now further differences fail to make a difference to the general state of things - which, dichotomously, also then defines the differences that do make a difference.

    So if enforcing similarity is the telos of a constraint, then that also means that eventually the world becomes equilibrated - like a well shuffled deck - and so apparently only composed of a whole bunch of accidents. The differences that don't make a difference become the apparent ground of being because they are what get left once the development of a world has arrived at the dichotomous satisfaction of its own symmetry breaking desires.

    Contingency rules when organisation has had its say. Existence is a bunch of indifference (a heat death) in the end.

    But the story of how it gets to that fate is the bit that is metaphysically interesting.
  • Zeno's paradox
    A true continuum is infinitely divisible into smaller continua; it is not infinitely divisible into discrete individuals.aletheist

    The story in a nutshell. Points are a fiction here. The reality being modelled is the usual irreducibly complex thing of a vector - a composite of the ideas of a location and a motion...

    My coordinate system only uses the rational numbers. So I ask again; what coordinate does it pass through first?Michael

    ....and the corollary is that what is being counted is not points but (Dedekind) cuts. The numbers count the infinite possibility for creating localised and non-moving discontinua.

    My coordinate system only uses the rational numbers. So I ask again; what coordinate does it pass through first?Michael

    The cut bounds the continuum in question. So the continuum has already been "traversed" in the fact that there is this first cut. You are then asking how near the other end of the cut continuum can be brought in the direction of the first cut in question. The answer is that it can be brought arbitrarily close. Infinitesimally near.
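
    To illustrate "arbitrarily close" (a minimal Python sketch of my own, treating the cut at the square root of 2 as a simple membership test on the rationals):

        from fractions import Fraction

        def below_cut(q):
            # The lower side of the Dedekind cut at the square root of 2
            return q < 0 or q * q < 2

        lo, hi = Fraction(1), Fraction(2)
        for _ in range(20):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if below_cut(mid) else (lo, mid)
        print(float(hi - lo))  # ~9.5e-07: closing in for ever, never touching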

    So you are creating difficulties by demanding that continua be constructed by sticking together a sequence of points. However there is no reason the whole story can't be flipped so that we are talking about relative states of constraint on a continuity - or indeed, an uncertainty - when it comes to the possibility of some motion, action, or degree of freedom.
  • Most over-rated philosopher?
    Yep, simple isn't it. If you actually break things apart, they are no longer in a relation.

    Again, close reading will show that I stress that this is about "directions" and "extents", and so the intrinsic relativity of a logical dichotomy is presumed. Your pretence otherwise is just trolling.
  • Most over-rated philosopher?
    Don't pretend to be so dim. Maximising the separation is night and day different from breaking the connection.
  • Most over-rated philosopher?
    Are your close reading skills really as challenged as you pretend?

    (Of course, living beings can't actually ignore the world. They must live in it. But the point here is the direction of the desires. Rationalism got the natural direction wrong - leading to rationalist frustration and all its problems concerning knowledge. Pragmatism instead gets the direction right and thus explains the way we actually are. There is a good reason why humans want to escape into a realm of "fiction" - and I'm including science and technology here, of course. As to the extent we can do that, we become then true "selves", the locus of a radical freedom or autonomy to make the world whatever the hell we want it to be.)apokrisis
  • Most over-rated philosopher?
    Get back to me when you want to discuss what I actually said and not what you are pretending I said.
  • Most over-rated philosopher?
    If you want to discuss this seriously, define madness properly.

    Are you talking paranoia or bipolar mania or what? A primary symptom of schizophrenia is a breakdown of perceptual predictability. So a loss of control over experience rather than a gain.
  • Most over-rated philosopher?
    You sound threatened somehow.
  • Most over-rated philosopher?
    I think that's quite a mad philosophy.Agustino

    And I find your replies trivial.
  • Most over-rated philosopher?
    So the direction of desire is towards madness and the mad is the most successful of us all? :sAgustino

    Why do you have to drag Trump into every conversation? But yes I guess.
  • Are the laws of nature irreducible?
    So, in all the common interpretations of QM, including "no-collapse" interpretations, there always is a tacit reference to measurement operations, and the choice of the setup of a macroscopic measurement apparatus always refers back to the interests of the human beings who are performing the measurement. The processes of either "decoherence", or "collapse" of the wave function, (or of "projection" of the state vector), amount exactly to the same thing from the point of view of human observers.Pierre-Normand

    Yep. Decoherence - at the level of heuristic principle - says all the troubling indeterminacy disappears in the bulk behaviour. So that probabilistic view gives us an informal account of collapse that fits the world we see.

    Of course, the existing quantum formalism doesn't itself contain a model of "the observer" that would allow us to place the collapse to classical observables at some specific scale of being. But then either one thinks that is the job of a better future model - which seems the metaphysically reasonable choice. Or one can go crazy with the metaphysics and say every possible world in fact exists - a "solution" which still does not say anything useful about how world-lines now branch rather than collapse.

    So the main reason for supporting MWI is that it is ... so outrageous. It appeals because it is "following the science to its logical conclusion" in a way that also can be used to shock and awe ordinary folk. Scientism in other words.
  • Most over-rated philosopher?
    But again, how does this change anything?Agustino

    Simply put, if the error is external, then the mind simply has to make a better effort at knowing the world truly. But if instead the error is internal - the mind has to create the structure of its perceptions - then more effort may only put the mind at an even further distance from the thing-in-itself.

    And this in fact fits with psychological science. It also ceases to be a problem once you give up rationalist dreams of perfect knowledge and accept the pragmatism of a Peircean modelling relation with the world.

    So a striking fact of cognitive architecture is that consciousness is in fact "anti-representational". The brain would rather live with its best guess about the actual state of the world. It would like to predict away all experience if it could - as that way it can start to notice the small things that might matter most to it.

    This would be Kantianism in spades. It is not just a generic structure of space and time, or causality, that we project on to existence. Ahead of every moment we are predicting every material event as much as possible, so we can quickly file it under "ignore" when it actually happens.

    In this sense, we externalise error. Through forward modelling or anticipatory processing, we form strong expectancies about how the world "ought to be". And then the world goes and does something "wrong", something surprising or unexpected. The damn thing-in-itself misbehaves, leaving us having to impose some revised set of expectations that then becomes our new consciousness of its state of being.

    (And until we have generated some new state of prediction, we are not conscious of anything for the half second to second it can take to sort out a state of sudden confusion - or in extreme situations, like a car crash, our memory will be of time slowed or even frozen with a hallucinatory, conceptually undigested, vividness. It is another psychological observation that childhood experience and dreams have this extra vividness because there is not then such a weight of adult conceptual habit predicting all the perception away and rendering it much more mundane.)

    Anyway, as I was saying, Kant was right in understanding that the brain has to come at the world equipped with conceptual habits of structuration if it is to understand anything - in terms of its own pragmatic interests.

    But Kant was still caught up in the rationalist dream of perfect knowledge. And so the gap between mind and world was seen as some kind of drama or failure. We have the right to know the world as it is - and yet we absolutely can't.

    Peirce fixed this by naturalising teleology. Knowledge exists to serve purposes. And so what was a rationalist bug becomes a pragmatistic feature.

    Oh goody! We don't have to actually know the world truly at all if the real epistemic aim is to be able to imagine it in terms that give us the most personal freedom to act. The more we can routinely ignore, the more we can then insert our own preferences into the world as we experience it. Consciousness becomes not a story of the thing-in-itself but about ourselves whizzing along on a wave of satisfied self-interest.

    So Kant turned things around to get the cognitive architecture right. But because he still aspired to rationalist perfection, he wanted to boil down the mind's necessary habits to some bare minimum - ontic structure like space, time and causality.

    This simply isn't bold enough. Brains evolved for entirely self-interested reasons. Which is why an epistemology of pragmatism - consciousness as a reality-predicting modelling relation - was needed to fully cash out the "Kantian revolution". The thing-in-itself is of interest only to the degree that it can be rendered impotent to the mind. The goal is to transcend its material constraints so as to live in the splendid freedom of a self-made world of semiotic sign.

    (Of course, living beings can't actually ignore the world. They must live in it. But the point here is the direction of the desires. Rationalism got the natural direction wrong - leading to rationalist frustration and all its problems concerning knowledge. Pragmatism instead gets the direction right and thus explains the way we actually are. There is a good reason why humans want to escape into a realm of "fiction" - and I'm including science and technology here, of course. As to the extent we can do that, we become then true "selves", the locus of a radical freedom or autonomy to make the world whatever the hell we want it to be.)
  • Are the laws of nature irreducible?
    What's the problem? Is deflection your only defence?
  • Are the laws of nature irreducible?
    That's what allows thought, and life.tom

    Nope. It is the semiotic interaction between the realms of sign and materiality that allow that.

    Computation explicitly rules out the interaction between formal and material causes. So to actually build a computer, the dynamics of the material world must be frozen out at the level of the hardware. Computation is the opposite of the organic reality in that regard. And biophysics is confirming what was already obvious.

    And that is before we even get into the other issue of who writes the programs to run on the hardware. Or who understands that the simulations are actually "of something". Or that error correction is needed because what the computer seems to be saying must be instead that kind of irreducible instability which is the real dynamical world intruding. (Oh shit, my quantum entanglements keep collapsing or branching off into other worlds.)

    But keep on with the computer science sloganeering. I'm well familiar with the sociology of the field. No one cares if people talk in scifi terms there. It is the name of the game - always over-promise and under-deliver.
  • Are the laws of nature irreducible?
    It is how these principles are related to what is outside the category, how we relate an epistemology to an ontology for example, which is where we should make such judgements of good and bad.Metaphysician Undercover

    So you mean ... exactly what I said then?

    Ie: Holism is four cause modelling, reductionism is just the two. And simpler can be better when humans merely want to impose their own formal and final causality on a world of material/efficient possibility. However it is definitely worse when instead our aim is to explain "the whole of things" - as when stepping back to account for the cosmos, the atom and the mind.
  • Are the laws of nature irreducible?
    At the risk of repeating myself, it has been proved that all real universal computers are equivalent. The set of motions of one can be exactly replicated on the other. It has further been proved that any finite physical system can be simulated to arbitrary accuracy, with finite means, on a universal computer. The brain can thus be simulated on a universal computer, whether it is itself universal or not. Whatever a brain can do, a computer can do. There is nothing beyond universality.tom

    Still this dualistic crackpottery.

    A computational simulation is of course not the real thing. It is a simulation of the real thing's formal organisation abstracted from its material being.

    This should be easy enough to see. A computer relies on the physical absence of material constraints. It is cut off from the real world in that it has a power supply it doesn't need to earn. It doesn't matter what program is run as the design of the circuitry means the entropic effort is zeroed in artificial fashion. The whole set-up is about isolating the software from dissipative reality so it can do its solipsistic Platonic thing.

    A brain is quite different in being organically part of the material world it seeks to regulate via semiosis. And you can see this in things like the way it is fundamentally dependent on dissipative processes and instability.

    Where a computer must be made of Platonically stable or eternal parts - logic circuits frozen in silicon - the brain requires the opposite. It depends on the fact that right down at the nanoscale of cellular structure everything is on the point of falling apart. All molecular components are self-assembling in fluid fashion. So they are constantly about to break apart, and constantly about to reform.

    And in having this critical instability, it means that top-down semiotic constraint - the faint nudges to go one way or the other that can be delivered by the third thing of a molecular message - become supremely powerful. This is the reason why a level of sign or biological code can non-dualistically control its world. It is why the "software" can regulate the materiality of metabolic processes, and on a neural scale, the material actions of whole bodies.

    So science has looked at how organisms are actually possible. And the answer isn't computation but biosemiosis.

    Computers are abstracted form. So they have the fundamental requirement that someone - their human masters - freezes out the material dynamics of the real world so they can exist in their frozen worlds of silicon (or whatever super-cooled, error corrected, machinery a quantum computer might get made of).

    And organisms are the opposite. They depend on a material instability - being at the edge of chaos - that then makes it possible for top-down messages to tip stochastically self-organising processes in one direction or another.

    As I say, that is what makes multi-realisability an issue. A Turing Machine can indeed be made out of anything - tin cans and string if you like.

    But biology - in only the past 10 years - has shown how organic chemistry may be a unique kind of "stuff" that can't be replicated or simulated by simpler physical machinery (circuitry lacking the critical instability that then gives semiosis "something to do").

    It is a happy fact that Turing himself was on to it with his parallel work on chemical morphogenesis. He was an actual genius who saw both sides of the story. But sadly UTMs have given licence to decades of academic crackpottery as hyped-up computer scientists have pretended that the material world itself is "computable" - as if an abstracted simulation is not the opposite of existing in a world of material process.
  • Are the laws of nature irreducible?
    By saying that human beings create a group-mind, without attributing this unity to God, you assign to the human race the property of God, and commit the sin of the fallen angel.Metaphysician Undercover

    Cripes. So social constructionism is the work of the Devil.
  • Are the laws of nature irreducible?
    I would also readily grant that mental abilities can be multiply realized in a variety of biological or mechanical media ...Pierre-Normand

    I have to say that the latest understanding of biophysics at the nanoscale is now a serious challenge to multiple realisability. Organic molecules have physically unique properties that allow them to flourish in a dissipative environment and function as various kinds of functional components. So the biologists don't have to grant the computationalists any kind of ground at all anymore if life and mind are semiotic processes rather than information processes.

    And the beauty is that the onus is on computationalists to show that life and mind are "just information processes" now if they want to keep pushing that particular barrow. This is no longer the 1970s. :)

    Peter Hoffmann has done a great book - Life's Ratchet - on this.
  • Are the laws of nature irreducible?
    Of course, if you managed to formulate an argument that the brain is not computationally universal, and that it could not be programmed (e.g. by training), and that therefore the mind could not be an abstraction instantiated on a brain, then you might have a point.tom

    You ought to check Robert Rosen's Essays on Life Itself for such arguments. Also Howard Pattee's paper, Artificial life needs a real epistemology.

    But even just from a good old flesh and blood neuroscience perspective, where's the evidence that the brain is actually any kind of Turing machine (even if you believe that any physical process can be simulated by a UTM)?
  • Are the laws of nature irreducible?
    Reductionists are generally materialist. If there are such philosophers as 'reductionist dualists', I would be interested to hear about them.Wayfarer

    Chalmers?
  • Are the laws of nature irreducible?
    No, I meant that hearing people speak, and reading books are acts of sensation. Don't you agree?Metaphysician Undercover

    Of course not. All my senses actually see is squiggles of black marks. My cat sees the same thing.

    To interpret marks as speaking about ideas is something very different. It is to be constrained not by the physics of light and dark patterns but by a communal level of cultural meaning.

    So without being a substance dualist, the semiotician has all the explanatory benefits of there being "two worlds" - the one of sign, the other of matter.

    I don't read books, or speak to people to gain access to any "group-mind".Metaphysician Undercover

    Exactly. I mean who needs a physics textbook to know about physics, or a neuroscience textbook to know about brains? Just make the damn shit up to suit yourself.
  • Scholastic philosophy
    Hah! Knocked it out of the park.
  • Are the laws of nature irreducible?
    I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem?Metaphysician Undercover

    I always say it is fine in itself. It is only bad in the sense that two causes is not enough to model the whole of things, so reductionism - as a tale of mechanical bottom-up construction - fails once we get towards the holistic extremes of modelling. You need a metaphysics of all four causes to talk about the very small, the very large, and the very complex (the quantum, the cosmological, the biotic).

    a dualist reductionist would not meet the same problem. The dualist allows non-spatial substance.Metaphysician Undercover

    Yep. Olde fashioned magick! Dualism is just failed reductionism doubling down to make a mystery of both mind and matter.

    I don't see this need. We hear people talking, we read books. These are perceptual activities. Why can't we treat them like any other perceptual activity?Metaphysician Undercover

    You meant conceptual activities really, didn't you? :)

    Or at least some of us read books and listen to people talk to gain access to the group-mind. It kind of defines the line between crackpot and scholar.
  • Are the laws of nature irreducible?
    But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic wayWayfarer

    All modelling is reductionist ... even if it is a reduction to four causes holistic naturalism. And as I say, even the brain is a reductionist modeller, focused on eliminating the unnecessary detail from its "unified" view of the world. The brain operates on the same principle of less is more.

    As far as free will (or won't) is concerned, the point from the perspective of a humanistic philosophy is not understanding the determinative causes of human actions from an abstract or theoretical point of view, but what freedom of action means.Wayfarer

    Yep. But that is covered by my point that neuroscience only covers the basic machinery. To explain human behaviour, you then have to turn to the new level of semiosis which is linguistic and culturally evolving. So you can't look directly to biology for the constraints that make us "human" - the social ideas and purposes that shape individual psychologies. You do have to shift to an anthropological level of analysis to tell that story.

    (And I agree that the majority of neuroscientists - especially those with books to sell - don't get that limitation on what biology can explain.)

    Isn't that 'the genetic fallacy'? Anyway, I'm Buddhist and an outed dualist.Wayfarer

    As it happened, Libet told me about his dualistic "conscious mental field" hypothesis before he actually published it in 1994. So I did quiz him in detail about the issue of his personal beliefs and how that connected to the way he designed and reported his various earlier brain stimulation and freewill experiments.

    So I am not making some random ad hominem here. It is a genuine "sociology of science" issue. Both theists and atheists, reductionists and holists, are social actors and thus can construct their work as a certain kind of "performance".

    And believe me, the whole of philosophy of mind/mind science came to seem to me a hollow public charade for this reason. For the last 50 years (starting from the AI days) it has been a massive populist sideshow. Meanwhile those actually doing real thinking - guys like Stephen Grossberg or Karl Friston - stayed well under the radar (largely because they saw the time-wasting populist sideshow for what it was as well.)
  • Are the laws of nature irreducible?
    This has some relationship with the famous Libet experiments, doesn't it? They showed that the body moves before the subject is aware that they want to move it.Wayfarer

    Yep. So what the experiments illustrate is that we have "free won't", rather than freewill. As long as we aren't being hurried into an impulsive reaction, we can - the prefrontal "we" of voluntary level action planning - pay attention to the predictive warning of what we are about to do, and so issue a cancel order.

    So part of the habit-level planning for a routine action is the general broadcast of an anticipatory motor image. As part of the unity of experience, the sensory half of our brain has to be told that our hand is suddenly going to move in a split second or so. And the reason for that is so "we" can discount that movement as something "we" intended. We ignore the sensation of the moving hand in advance - and so then we can tell if instead the world caused our hand to move. A fact far more alarming and deserving of our attention.

    So Libet was a Catholic and closet dualist. As an experimenter, that rather shaped how he reported his work. The popular understanding of what was found is thus rather misunderstood.

    If you turn it around, you can see that instead we live in the world in a way where we are attentionally conscious of what we mean to do in the next second or so. Then at a faster operating habitual level, the detail gets slotted in - which includes this "reafference" or general sensory warning of what it is shortly going to feel like because our hand is going to suddenly move "of its own accord". But don't panic anyone ... in fact just ignore it. Only panic if the hand fails to get going, or if perhaps there is some other late breaking news that means mission abort - like now seeing the red tail spider lurking by the cornflakes packet.

    So the Libet experimental situation was extremely artificial - the opposite of ecologically natural. But it got huge play because it went to the heart of some purely cultural concerns over "the instantaneous unity of consciousness" and "the human capacity for freewill".
  • Are the laws of nature irreducible?
    The purpose of the digital computer analogy also was to show that, in this case also, individual transistors, or logic gates, or even collections of them, need not have the high level software instructions "translated" to them in the case where the implementation of this high level software specification is a matter of the whole computer being structured in such a way that its molar behavior (i.e. the input/output mapping) simply accords with the high level specification.Pierre-Normand

    Real computers are structured in hierarchical fashion. So once you start to talk about operating systems, languages, compilers, instruction sets, microcode and the rest, you are talking about something quite analogous to the organic situation where the connection from "software" to "hardware" is a multi-level set of constraints. Functions are translated from the level of programmes to the level of physical actions in a way that the two realms are materially or energetically disconnected. What the software can "freely imagine" is no longer limited by what the hardware can "constrainedly do".

    Where the computational analogy fails is that there is nothing coming back the other way. The physics doesn't inform the processing. There is no actual interaction between sign and matter as all the hierarchical organisation exists to turn the material world into a machine that can be ignored. That elimination of bottom-up efficient/material cause is then of course why the software can be programmed with the formal/final fantasies of us humans. We can make up the idea of a world and run it on the computer.

    So the computer metaphor - at least the Universal Turing Machine version - only goes so far. The organic reality is rather different in that there is a true interaction between sign and matter going on over all the hierarchical levels. Of course, this is more like a neural network or Bayesian brain architecture. But still, there is a world of difference between a computer - a machine designed to divorce the play of symbols from the play of matter - and a mind/brain, which is all about creating a hierarchically complex, and ecologically constrained, system of interaction between the two forms of play.

    Computers are not "of this world" so can be used as devices to freely imagine worlds.

    Brains are devices constrained by a world. But in making that relationship structurally complex, brains gain the functional degrees of freedom that we call autonomy and subjective cohesion. (The freedom to actually ignore the world being a central one, as I argued.)
  • Are the laws of nature irreducible?
    That is the well-known philosophical conundrum of the 'subjective unity of experience'. There is a vast literature on that, but it remains mysterious.Wayfarer

    It's not that mysterious once you accept that the unity is mostly being unified by what it successfully ignores. (Which is also what makes the computer analogies being used here fairly inappropriate.)

    So attentional processing "unifies" by singling out the novel or surprising. And it does that by suppressing everything else that can be treated as predictable, routine, or irrelevant.

    Well I say attention "does it". But of course it is anticipatory modelling and established habit that defines in advance the predictable, routine, or irrelevant. So attention-level processing only has some mopping up to do.

    Thus the mind does have its strong central division into habit and attention. Everything that can be dealt with without clear conscious knowledge gets sorted out in 150 to 300 milliseconds by "automatic" habit. Then anything left over becomes a focus of "conscious" attentional processing - which takes 300 to 700 milliseconds to form and stabilise. With attention we are now talking about reportable awareness as - having managed to remove so much unnecessary sensory detail from the picture - we have a small enough "point of view" to retain as a persisting state of working memory.

    So when it comes to something like the question of how does one lift one's arm, the usual way is without even attentionally deliberating. Attention is usually focused in anticipatory fashion on some general goal - like getting the cornflakes off the top shelf. Then habit slots in all the necessary muscular actions without need for higher level thought or (working memory) re-presentation. It is only if something goes wrong that we need to stop and think - start forming some different plan, like going to get a chair because our fingers can't in fact reach.

    So - as I have argued through the thread - the key is the hierarchical and semiotic organisation of top down constraints over bottom up degrees of freedom. And even a simple action like lifting an arm is highly complex in terms of its many levels of such organisation.

    I can lift a hand fast and reflexively if I put it on a hot stove. Pain signals only have to loop as far as the spine to trigger a very unthinking response.

    Then I can lift the hand in a habitual way because I am intending in a general way to have my routine breakfast.

    Or then I can lift my hand in a very artificial way - as perhaps in a laboratory experiment where I am wired up with electrodes and I'm being asked to notice when my intention to lift the arm first "entered my conscious awareness".

    At this point, it is all now about some researcher's model of "freewill" and the surprise that a familiar cultural understanding about the "indivisibility of consciousness" turns out to be quite muddled and wrong.

    Not that that will change any culturally prevalent beliefs however. As I say, the mind is set up to be excellent at ignoring things as a matter of entrenched habit. A challenge to preconceptions may cause passing irritation, but it is very easy for prejudice to reassert itself. If - like Querius - you don't like the answer to a question, you just hurry on to another question that you again only want the one answer to.
  • Perfection and Math
    My question is: is math deserving of this respect and trust? Could it not be flawed? What does a mathematical analysis of a given subject deprive us of? Are there some areas of study where math is harmful instead of beneficial?TheMadFool

    Maths is a model of reality as a perfect syntactical mechanism. It predicts the patterns that will be constructed as the result of completely constrained processes. So if reality is also spontaneous and vague in some fundamental way, maths can't "see" that. It presumes an absolute lack of indeterminism to give a solid basis to its story of determinism.

    This isn't a big problem because humans using maths as a tool can apply it with "commonsense". And when humans are actually building "machines" - as they mostly are in maths dominated activities - then the gap between the model and the world being created is hardly noticeable.

    The key issue when it comes to applying commonsense is the making of measurements. We have to use our informal judgement when plugging the numbers meant to represent states of the world into our models or systems of equations. So it is outside the actual maths how much we round numbers up, how we spread our sampling, etc, etc. Garbage in, garbage out, as they say.

    The flipside of all this is when we are dealing with a world that is complex and it is not absolutely clear what to measure. Or worse still, the world may be actually spontaneous or vague and relatively undifferentiated, and so every definite-sounding measurement will be dangerously approximate.

    So the issue is not that maths simply fails to apply to some aspects of life. If you are talking ethics or economics, for example, game theory gives some completely exact models which can be used. However they then have to presume a world of machine humans - perfectly rational actors. Thus judgement has to come in about how much one can rely on this particular modelled presumption. Can the actual model work around the issue by adding some further stochastic factor, or is the real-world variance in some way "untameable"?
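
    As a concrete case of such an exact model, here is a minimal Python sketch of my own, using the textbook prisoner's dilemma payoffs:

        # Payoffs as (row, column); C = cooperate, D = defect
        payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                  ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

        def best_response(their_move):
            # The "machine human": maximise own payoff and nothing else
            return max("CD", key=lambda m: payoff[(m, their_move)][0])

        for their_move in "CD":
            print(their_move, "->", best_response(their_move))  # D either way

    Defection dominates whatever the other player does - a completely exact result, but only for actors as mechanical as the model assumes.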

    So maths works well when the world is made simple - as when building machines. And then complexity can cause fatal problems for this mechanistic modelling when the complexity makes good measurement impractical. For a chaotic system, it may be just physically impossible to measure the initial state of the world with enough accuracy.
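
    The logistic map is the standard toy example of that impossibility. A quick Python sketch (my own illustration): two starting states closer together than any realistic measurement could distinguish diverge completely within a few dozen steps.

        def logistic(x, r=4.0):
            # The logistic map in its chaotic regime
            return r * x * (1 - x)

        a, b = 0.2, 0.2 + 1e-10  # closer than any practical measurement
        for step in range(1, 61):
            a, b = logistic(a), logistic(b)
            if step % 20 == 0:
                print(f"step {step}: separation = {abs(a - b):.2e}")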

    Then where the metaphysical strength issues really bite is if the world is actually spontaneous or vague at a fundamental level - as quantum physics says it is.

    The final source of indeterminism is the semiotic one - the issue of semantically interpreting a sign or mark. We can both see a word like "honesty" or "beauty" written on a page as a physical symbol. But how do we ever completely co-ordinate our understandings or reactions to the word?

    So clearly, to the extent that human lives revolve around the common understanding of systems of signs, there is an irreducible subjectivity that makes maths a poor tool for modelling what is going on. That would be why philosophers would put aesthetics and morality in particular beyond the grasp of such modelling.

    However, as with the probabilistic modelling of chaotic and quantum processes, that is not to say maths couldn't be applied to semiosis. Instead, it may be the case that we just haven't really got going on trying. It is not impossible there would be a different answer here in another 100 years.