apokrisis

  • What is Information?

    then life is not a process of copying, modeling or representing a world,Joshs

    As a result, they do not explain how certain processes actively generate and sustain an identity that also constitutes an intrinsically normative way of being in the world.”(Thompson)Joshs

Why do you exclude modelling along with copying and representing? The biosemiotic approach of biologists like Pattee, Salthe, Rosen and many more stresses the need for the epistemic cut that indeed produces the closure of autonomy.

And Pattee shows how this even goes back to von Neumann’s mathematical treatment of self-reproducing automata. Rosen likewise provides the strong mathematical arguments. So even just for genetic copying, the need for a model that is separated from what it replicates is an axiomatic basis.

    The problem with autopoiesis is that it was fuzzy on this aspect of the story. But there is a good grounding in semiotics to understand how selfhood and autonomy must emerge in life and mind. It is because they are this new thing of a semiotic modelling relation. It is all founded on the logical necessity of making an epistemic cut between self and world so as to start acting as a self in a world.

The informational machinery of a code has its first job in securing a state of enactive organisation. It must have a model of the self in its world so as to organise its metabolic flows and repair its dissipating structures - Rosen’s MR model of anticipatory systems. Then after that enactive relationship is established, there might be some kind of machinery worth replicating by making transmissible copies of a set of genes. The ability to replicate is somewhat secondary - although a logical inevitability because it allows biological development to be joined by biological evolution. And that is a powerful extra.

    Note how words and numbers are semiotic codes that first exist as a way of separating a self from its world. Children have minds that become logically organised as they learn language and become able to self-regulate - as was well understood by symbolic interactionism and Vygotskian psychology. Humans have a heightened sense of selfhood because they must socially construct themselves as actors in a cultural drama, and now in the modern era, actors in a techno-neoliberal drama (the world made by thinking in terms of numbers or pure quantification).

And then words and numbers become something that can be transmitted and copied - turned into information or inert symbols to be decoded - by being rendered as marks on a page or electronic fluctuations on a wire. Human culture developed the power to become copyable and thus fully evolvable - capable of explosive change and growth over time, as history shows, once the digitised habits of writing and counting got started.

    Oral culture is weakly transmissible. You had to be there to hear how the story was told and the gestures were used to really get the message. The machinery of copying was still more enactive than representational. It was not symbolic so much as indexical.

But with alphabet systems and numerals, along with punctuation and the sequestering of these marks in inert substrates - in the same way DNA is zipped up and inert and so physically separated from the molecular storm it regulates - humans continued on to full strength symbolism. Or a proper epistemic cut where the transmissibility of information is separated from the interpretation or enaction of that information.

    So you can see why huge confusion results from not being clear that syntax and semantics are two different things when we want to talk about “information” in some generalised way. An informational system - like a biological organism with genes and neurons, perhaps even words and numbers - is both enactive and representational. It is involved in both development (of a self-world modelling relation) and evolution (of a self-world modelling relation).

    As usual, there is always a dialectic. And academic camps spring up at either pole to defend their end as the right end.

    Again, I stick with systems thinkers or hierarchy theorists who can frame things more coherently.

    Enaction is about the first person particularity of being in some actual selfish state in regard to the world. Representation is about what can be objectively copied and replicated so as to pass on the underlying machinery that could form such a particular state of world adaptedness.

    Genes represent a generalised growth schedule and a list of essential molecular recipes that are the basic machinery for a body having an enactive modelling relation with its world. And genes also are in some active state of enaction when they are part of a body doing and feeling things as it indeed lives and transacts its metabolic flows.

    In any moment of active selfish living existence, the DNA is unzipped and coated with all kinds of regulatory feedback signals so that it is functioning as the anchor to a vast cell and body-wide epigenetic hierarchy of “information”. The code couldn’t be more enactive.

    And then the DNA is zipped tight, reduced to the frozen dialectic of sperm and ovum, mechanically recombined as now part of a different kind of story - one that couldn’t be more representational in being an inert process of information copying and the seeding of a next generation with the syntactic variety upon which the process of evolution depends.

If we talk about neurology or neurosemiosis, the stress is of course more on the enaction than the representation. Nature relies on genes to encode neural structure. So experience is something that can both be enacted and represented if you are dealing with simple intelligence in the form of ants or jumping spiders. Genes can specify the shape of the wiring to the degree that habits of thought are pretty much hard wired.

But large brained animals become more or less dependent on personal development or enaction. Thoughts, feelings and memories - some package of life experience that shaped the mind of a tiger or elephant - are information gained and lost. Only very general parts of being a tiger or elephant, as a self in some particular ecological niche, can be captured and transmitted as evolvable and representational information passed on to the next generation.

Humans became even more enactive and developmental as a large brain species. Our babies are at a real extreme in being born with unformed circuitry awaiting the imprint of life experience - and hence the accumulation of untransmissible states of attentional response and newly forming habits of thought.

    So genetics was strained to its outer limit in this tilt towards the enactive pole.

    But then - hey presto - that paved the way for linguistic culture as a new higher level of semiotic code or information enaction/information representation. We could restore the balance between making minds and being minds with oralism, and then oralism’s continued evolution towards literacy and numeracy.

    So neurosemiosis is sort of a gap in the story. It is where the baton gets passed as the genes get stretched to the limit and suddenly - with Homo sapiens - something more abstract, something arising out of social level systemhood, arises to continue the semiotic journey to a higher level of organisation.

    This is why the neural code is so hard to find, and why we have patent idiocies like integrated information theory or quantum consciousness theories trying to fill the explanatory gap.

    Biology can point to genes as the dual basis of enaction and representation, development and evolution. Social psychology can point to words and numbers in the same way. Brain scientists have to talk in terms of neural network principles to feel they are getting at what makes it all tick in terms of a mind that can be both some particular enactive first person state, and then also the other thing of a genetically transmissible algorithm which a new generation of minds can implement.

    Again, I return to the neuroscientists who are actually homing in on this understanding of the great hunt for the neural code - folk like Friston and Grossberg. It is easy to see why they are on the right track, scientifically speaking.

    The neural code has to be understood not as a train of symbols but as a standard microcircuit design. A bit of computational machinery. An architectural motif. A transmissible algorithm that is the brain’s basic building block.

And the problem there is that Turing machine-based notions of neurology’s canonical microcircuit - the standard approach - are so far off the mark. The only people to pay attention to are the ones who talk the language of anticipatory systems.
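
To make the contrast concrete, here is a toy sketch in Python of what an anticipatory microcircuit looks like as an algorithmic motif - a Friston-flavoured prediction-error loop rather than a Turing-style symbol processor. The code and its parameters are my own illustration, not anyone's published model.

```python
# A toy anticipatory unit (my simplification of the Friston-style idea, not a
# published model): the microcircuit holds a prediction, transmits only the
# error against that prediction, and adjusts itself to suppress future error.

def predictive_unit(signal, learning_rate=0.2):
    prediction = 0.0
    for observation in signal:
        error = observation - prediction      # what arrives vs what was anticipated
        prediction += learning_rate * error   # update the internal model
        yield error                           # only the surprise gets passed on

world = [1.0] * 10 + [3.0] * 10               # a settled habit, then a surprise
for err in predictive_unit(world):
    print(round(err, 3))                      # errors decay as anticipation improves
```
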
  • The Unequivocal Triumph Of Neuroscience - On Consciousness

    "Consciousness is not just a matter of having a subjective perspective within the world; it also includes the sense of occupying a contingent position in a shared world. From within this experiential world, we manage to conceive of the world scientifically, in such a way that it fails to accommodate the manner in which we find ourselves in it. Hence the real problem of consciousness is that of reconciling the world as we find ourselves in it with the objective world of inanimate matter that is revealed by empirical science.” — Matthew Ratcliffe’s paper

    In other words, a science that accounts for experiencing organisms needs a theory of semiosis. It needs to place the epistemic cut (between self and world) at the centre of its inquiry. It needs a general theory of modelling relations to create a meta-theory large enough to encompass both mind and matter, healing the Cartesian rift.

Phenomenology ain't the destination even if it seems the starting point. It might correctly identify the embodied and intersubjective nature of human experience. But as an academic thread of thought, it wanders away into no clear conclusion. It winds up in PoMo plurality and "disclosure of ways of being". Nothing of any great interest results.

    I look for phenomenological projects that get somewhere. Like Peircean semiotics, Pattee's epistemic cut, Rosen's modelling relation, systems science approaches in general.

    The human mind is the product of four levels of semiosis.

    At ground level, there is biology's foundational epistemic cut - the gene~metabolism division by which information regulates entropy. Life as a dissipative structure.

Then also part of biology is neurosemiosis. Genes capture regulatory information over generational timescales, and control only what lies within an organism's own body. Neurons operate to capture regulatory information on the microsecond scale and extend the body's scope as far as the eye can see or ear can hear.

    Humans came along and added the further semiotic levels of words and numbers. The first created our intersubjective or sociocultural model of self~world. The second has created our modern scientific and technological model of self~world. The "real world" was enhanced by a "virtual world".

    So semiotics provides a rich new framework for understanding life and mind in naturalistic terms - ones where the self~world distinction is bridged from the start and so doesn't build in a dualistic Hard Problem.
  • Popper's Swamp, Observation Statements, Facts/Interpretations

    Thoughts?jas0n

    I see no mention of Peirce. Did I miss something?

    If we think of basic statements as facts and theories as interpretations, then facts turn out to be more 'complex' than interpretations (or to be a different kind of interpretation.)jas0n

The tricky bit here is that measurements are where the facts - as theoretical entities - must interface with the physical reality they purport to measure. So there is the further thing of an epistemic cut.

Logically, a measurement is constructed to be a binary switch. What number should I attach to some modelled aspect of the world? And then a measurement is made by plunging the mechanical switch into the boiling flux of the world. The switch is tripped and you pull your measuring implement out to read off the appropriate numerals.

So the mystery is all about the epistemic cut - the ability to make measurements, which depends on being able to produce mechanical switches that interface between the logically/mathematically organised theory and the unbroken physical flow of the world - the thing in itself - so that the world can trip the switch in a suitable fashion, giving some account of itself in terms of digits to be read off dials.

    A measuring stick doesn’t seem immediately like a switch, but it is. You can only read off some definite number and write it down in your log when you decide the analog continuity of the reality looks close enough - for all practical purposes - to one digit and not some other digit.
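
To put the measuring stick point in mechanical terms, here is a toy Python sketch (my own illustration, not Pattee's formalism) of a measurement as a thresholding switch that converts analog continuity into a definite digit:

```python
# Toy sketch: a measurement as a tripped switch. The analog magnitude is only
# ever reported as whichever ruler mark it looks close enough to - for all
# practical purposes.

def measure(analog_value: float, resolution: float = 0.1) -> float:
    """Read off the nearest mark on a ruler graduated in `resolution` units."""
    n_ticks = round(analog_value / resolution)  # the switch trips at the halfway point
    return n_ticks * resolution

reading = measure(3.14159)
print(f"{reading:.1f}")  # 3.1 - the unbroken flux reported as a definite digit
```
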

    The epistemic cut is a further refinement developed by Howard Pattee and Robert Rosen in the 1960s, if you are looking for a formal understanding of the pragmatism that grounds the scientific method - and indeed, life and mind as reality-modelling systems in general.
  • Logical Necessity and Physical Causation

    :chin:Wayfarer

    Pattee's clarity on these gritty matters always makes my soul sing. It also helps that we talked about them most days for five or six years. :grin:

    There is a real conceptual roadblock here. In our normal everyday use of languages the very concept of a "physics of symbols" is completely foreign. We have come to think of symbol systems as having no relation to physical laws. This apparent independence of symbols and physical laws is a characteristic of all highly evolved languages, whether natural or formal. They have evolved so far from the origin of life and the genetic symbol systems that the practice and study of semiotics does not appear to have any necessary relation whatsoever to physical laws.

As Hoffmeyer and Emmeche (1991) emphasize, it is generally accepted that, "No natural law restricts the possibility-space of a written (or spoken) text," or in Kull's (1998) words: "Semiotic interactions do not take place of physical necessity." Adding to this illusion of strict autonomy of symbolic expression is the modern acceptance of abstract symbols in science as the "hard core of objectivity" mentioned by Weyl. This isolation of symbols is what Rosen (1987) has called a "syntacticalization" of our models of the world, and also an example of what Emmeche (1994) has described as a cultural trend of "postmodern science" in which material forms have undergone a "derealization".

    Another excellent example is our most popular artificial assembly of non-integrable constraints, the programmable computer. A memory-stored programmable computer is an extreme case of total symbolic control by explicit non-integrable hardware (reading, writing, and switching constraints) such that its computational trajectory determined by the program is unambiguous, and at the same time independent of physical laws (except laws maintaining the forces of normal structural constraints that do not enter the dynamics, a non-specific energy potential to drive the computer from one constrained state to another, and a thermal sink).

    For the user, the computer function can be operationally described as a physics-free machine, or alternatively as a symbolically controlled, rule-based (syntactic) machine. Its behavior is usually interpreted as manipulating meaningful symbols, but that is another issue. The computer is a prime example of how the apparently physics-free function or manipulation of memory-based discrete symbol systems can easily give the illusion of strict isolation from physical dynamics.

    This illusion of isolation of symbols from matter can also arise from the apparent arbitrariness of the epistemic cut. It is the essential function of a symbol to "stand for" something - its referent - that is, by definition, on the other side of the cut. This necessary distinction that appears to isolate symbol systems from the physical laws governing matter and energy allows us to imagine geometric and mathematical structures, as well as physical structures and even life itself, as abstract relations and Platonic forms. I believe, this is the conceptual basis of Cartesian mind-matter dualism.

    This apparent isolation of symbolic expression from physics is born of an epistemic necessity, but ontologically it is still an illusion. In other words, making a clear distinction is not the same as isolation from all relations. We clearly separate the genotype from the phenotype, but we certainly do not think of them as isolated or independent of each other. These necessary non-integrable equations of constraint that bridge the epistemic cut and thereby allow for memory, measurement, and control are on the same formal footing as the physical equations of motion. They are called non-integrable precisely because they cannot be solved or integrated independently of the law-based dynamics.

    Consequently, the idea that we could usefully study life without regard to the natural physical requirements that allow effective symbolic control is to miss the essential problem of life: how symbolic structures control dynamics.

    Concluding...

    Is it not plausible that life was first distinguished from non-living matter, not by some modification of physics, some intricate nonlinear dynamics, or some universal laws of complexity, but by local and unique heteropolymer constraints that exhibit detailed behavior unlike the behavior of any other known forms of matter in the universe?

    In other words, biology invented the molecular switch. Suddenly physics could be turned on and off "at will". Nothing like this had ever been seen before in nature. A whole new biosemiotic game had been invented.
  • Nice little roundup of the state of consciousness studies

    And as I’ve also said, that is not something which can be framed in scientific terms, because there’s no ‘epistemic cut’ here. We’re never outside of it or apart from it.Wayfarer

    You don't yet understand the epistemic cut. Perhaps I should rename it the epistemic bridge for your benefit.

    The cut is the mechanics of a sign, a switch, a ratchet, that gets inserted so as to make the modelling a reality. Brains do that at their level. Societies do that at the next level up.

You are being too psychology-centric. You think only of the minds of "individuals". But organisms can become entrained to social levels of reality modelling. Ants and humans are the "ultrasocial" extremes of this development, as they could insert the further systems of signs in the form of pheromone signals and verbal signals.
  • There Are No Identities In Nature

    So which is it - do vague and crisp map on to analog and digital or do they not? If they do, in what sense can you claim that the analog/digital distinction is derivative from vagueness (circularity). If they don't, you're back to mythology.StreetlightX

    The answer is the same as before. When we are talking about the ontology of a modelling system, we have two realms in play - the material and the symbolic. And the vague~crisp can apply as a developmental distinction in either. And indeed to the modelling relation as a whole. The vague~crisp is about a hierarchy of symmetry-breakings, a succession of increasingly specified dichotomies.

    So in the symbolic realm, a vague state of symbolism is indexical. A still vaguer state is iconic.

    If you say "look, a cat", that 's pretty definite. If you point at a cat, I might be a little uncertain as to exactly what your finger indicates. If you make mewing and purring noises, I would have to make an even greater guess about the meaning you might intend.

    So as I argued using the example of the wax cylinder, informational symmetry breaking can be weak because it is easily reversible - still strongly entangled in the physics of the situation - or it can be strongly broken in being at the digital end of the spectrum and thus as physics-free as possible.

    If I were to say "look, the universe", then physically the words involve no more effort that talking about a cat. But pointing gets harder, and pantomiming might really work up a sweat.

But then any form of communication or representation has already crossed the epistemic cut Rubicon in creating a memory trace of the world and so made the step to being physics-free. So even vague iconicity is already crisp in that sense. And thus there is another whole discussion about how the matter~symbol dichotomy arose in nature. And a further whole discussion about whether the abiotic world - with its dissipative organisation - has pansemiotic structure, and so this notion of "digitality" as negatively self-reflexive demarcation (or the constraint of freedom) has general metaphysical import there.

    We can see that discrete~continuous is just such a general metaphysical dichotomy - the two crisp counter-matched possibilities that would do the most to divide our uncertainty about the nature of existence. And I would remind you of your opening statement where you said this was all about a generic metaphysical dichotomy that applied to all "systems"....

    Broadly speaking, one can speak of two types of systems in nature: analog and digital.StreetlightX

    So that sweeping claim is what I have been addressing. And my argument is that when it comes to reality as a system, it is just the one system - formed by dividing against itself perhaps.

    This is why I find your exposition confused - although also on the right track. So I tried to show that to resolve the dualism implicit in your framing here, we have to ascend to Peircean triadic semiosis to recover the holism of a systems' monism. We have to add a dimension of development - the vague~crisp - so as to be able to explain how the crisply divided could arise from some common source.

    Your opening statement would be accurate if it made it clear that you are talking about symbolic systems or representational systems - systems that are already the other side of the epistemic cut in being sufficiently physics-free to form their own memory traces and so transcendently can have something to say about the material state of the world.

    But instead you just made a direct analogy between analog~digital signal encoding in epistemic systems and continuous~discrete phenomena in ontic systems.

Now again, there is something important in this move. It has to be done in a sense because the very idea of a physical world - as normally understood in its materialistic sense - just cannot see the further possibility of semiotic regulation, the new thing that is physics-free memory or syntax-based constraints. So you can't extract symbols from matter just by having a full knowledge of physical law. As you/Wilden say, the digital, the logical, the syntactical, appears to reach into the material world from another place to draw its lines, make its demarcations, point to the sharp divisions that make for a binary "this and a that".

    So saying in a general metaphysical way that the material world is analog, and the digital is sprung on this material world from "outside itself" as a further crisply negating/open-endedly recursive surprise, is a really important ontological distinction.

    But then confusion ensues if one only talks about the source of crispness and the fact of its imposition, and neglects to fit in its "other", the vagueness which somehow is the "material ground" that takes the "formal mark" of the binary bit. Or even the analog trace.

So to talk generically about reality as a system - which indeed is a step up from process philosophy in talking about symbol as well as matter, hierarchy as well as flow - is where we probably agree in a basic way. Structuralism was all about that. Deconstructionism was also about that - in the negative sense of trying to unravel all symbolic distinctions. Deleuze was about that, I accept.

    But again, the metaphysics of systems is always going to be muddy without being able to speak about the ontically vague - Peircean Firstness, Anaximander's Apeiron, the modern quantum roil. Sure we can talk about grades of crispness - iconic vs indexical vs symbolic. But to achieve metaphysical generality, we have to be able to define crispness (computational digitality, or material substantiality/particularity/actuality) in terms of what crispness itself is not.

    And to return to your OP.....

    A few quite important things follow from this, but I want to focus on one: it is clear that if the above is the case, the very notion of identity is a digital notion which is parasitic on the introduction of negation into an analog continuum. To the degree that analog systems do not admit negation, it follows that nothing in an analog system has an identity as such. Although analog systems are composed of differences, these differences are not yet differences between identities; they are simply differences of the 'more or less', or relative degrees, rather than 'either/or' differences.StreetlightX

    ...this is where your keenness to just dichotomise, and not ground your dichotomy as itself a developmental act, starts to become a real blinkering issue.

Analog signals are still signals (as Mongrel points out). They are differences to "us" as systems of interpretance. An analog computer outputs an answer which may be inherently vaguer than a digital device, but used to have the advantage of being quicker. And also even more accurate, in that early digital devices were 8 bit rather than 16 bit or 64 bit - or however many decimal places one needs to encode a continuous world in floating point arithmetic and actually draw a digitally sharp line close enough to the materially correct place (if such a correct place even exists in a non-linear and quantumly uncertain world).

So whether variation or difference is encoded analogically or digitally, it already is an encoding of a signal (and thus involves a negation, a bounding, of noise). Then while the digital seems inherently crisp in being a physics-free way to draw lines to mark boundaries - digital lines having no physical width - in practice there still remains a physical trade-off.

The fat fuzzy lines of analog computing can be more accurate, at least in the early stages of technical development. The digital lines are always perfectly crisply defined whether they use 8-bit precision or 64-bit precision - this is so because a continuous value is just arbitrarily truncated (negated) at that number of decimal places. But that opens up the new issue of whether the lines are actually being dropped in the right precise place when it comes to representing nature. Being digital also magnifies the measurement problem - raises it now to the level of an "epistemic crisis". That is, the fallacy of misplaced concreteness.
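
As a small illustration of that truncation point (my own example, using the standard 16-bit and 64-bit float widths as stand-ins for the 8-bit vs 64-bit contrast):

```python
# Digital crispness by arbitrary truncation: the line is always perfectly
# sharp, but more bits only move the question of whether it was dropped
# in the right place.
import struct

x = 1.0 / 3.0  # a value with no finite binary expansion

lo = struct.unpack('e', struct.pack('e', x))[0]  # 16-bit float
hi = struct.unpack('d', struct.pack('d', x))[0]  # 64-bit float

print(lo)  # 0.333251953125     - crisply definite, visibly in the wrong place
print(hi)  # 0.3333333333333333 - still truncated, just further out
```
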

So it just isn't good enough to say analog signals can be signals without the need for negative demarcation and the open-ended recursion that allows. A bell rings a note - produces a sine wave - because vibrations are bounded by a metal dome and so are forced to conform to whole-number harmonics. Identity or individuation does arise in analog processes - in virtue of them being proto-digital in their vaguer way.

    Yes, this is a complication of the simpler starting point you made. It is several steps further down the chain of argument when it comes to a systems ontology. And as I say, you/Wilden are starting with a correct essential distinction. We have to pull apart the realms of matter and symbol to start to understand reality in general as a semiotic modelling relation with the power to self-organise its regular habits.

    But for some reason you always get snarky when I move on to the complexities that then ensue - the complexities that systems ontologists find fruitful to discuss. The vague~crisp axis of development being a primary one.
  • Rasmussen’s Paradox that Nothing Exists

    ...to complete the thought, classical realism is the place we want to get back to as the balance of what gets broken.

    So QM is "weird" as it breaks realism. And folk then take one or either path and extrapolate the weirdness to infinity.

    That gives rise to the different interpretational extremes. You have the Copenhagenism that offers no stopping point until it arrives at the consciousness and freewill of the human observer.

    Or you head in the other no-collapse direction and have the endlessly bifurcating many worlds multiverse.

    Each seems the correct interpretation - compatible with the maths. But that is because the maths doesn't contain a cut-off. Only a quantum gravity theory that absorbs all three Planck constants - the irreducible triad of c, G and h - could introduce such a cut-off to physics. And so nothing formally seems to resist the galloping off towards the infinite horizons of metaphysical irreality in one or other of its available directions.

    The way to avoid the pathological metaphysics is to realise what is going on. Classical reality is emergent from the reciprocality of a pair of local~global limits. The weirdness of one is going to cancel out the weirdness of the other, at the end of the day.

    Which is where we get to with thermal decoherence as a general framework uniting QM vagueness and classical crispness, or counterfactual definiteness.

    And biosemiosis becomes the icing on the cake. It draws the further natural line across reality that is the epistemic cut between organisms and their environments. It shows how the Cosmos already decoheres itself, and how what human observers do is add a new level of machinery to the situation where this decoherence can be experimentally manipulated and even exploited for new technological purposes.

    So at the Copenhagen end of the interpretive spectrum, you get rid of the conscious observer issue entirely. It can be left at the door of the biosemiotic epistemic cut.

    And at the multiverse end of the interpretive spectrum, you can likewise rule out MWI. Decoherence says collapse is real enough due to thermal scale.

    Copenhagenism is a claim about limits being taken - contextualised events becoming collapsed to a-contextual numbers. But then that Copenhagenism is just a human story. The physical reality it is based on is the nanoscale of warm water - the quasi-classical transition zone in which quantum coherence is becoming classical decoherence. Strong entanglement is giving way to strong contextuality.
  • Popper's Swamp, Observation Statements, Facts/Interpretations

    For instance, is human philosophy conceived of as something like reality's self-knowledge?jas0n

Something like that is surely the case. But that is also rather too flowery a way of putting it.

    What does it mean for humans to ascend to a mathematical level of abstraction in semiosis? As science, it has resulted in us trying to de-subjectivise our inevitable first person point of view to recover the objective third person, or God's eye, point of view. Or better yet, following more insightful approaches like Nozick's Invariances, we seek to dissolve our highly particular view of the world in the mathematical acid of universal symmetry.

So to the degree the world is understood as physical - some blend of fundamental material accident and fundamental constraining structure - we can hold a mirror up to that. We can construct a metaphysics that sees the world in this way. And it pragmatically proves itself a correct view because it offers us control over all the physics involved.

    Thus it is not about "knowledge" in some passive Cartesian representational sense. It is instead knowledge in its enactive and pragmatic sense - its modelling relation sense.

This is how we get from semiosis of the actually modelled kind - the biosemiosis of life and mind - to recover some kind of semiosis as the pansemiosis by which the cosmos indeed brings itself into being.

    One flaw in Peirce is he conflated the two - especially in his "transcendental" mid-phase of thought where he wrote his notorious comments about matter as effete mind, not making it clear enough whether this was pansemiotic metaphor or pansemiotic metaphysics.

    It should be clear that I don't subscribe to the Cosmos as having its own model of itself in a biosemiotic encoded sense. And indeed, it is part of the very theory of biosemiosis that the very possibility of a symbolic code only gets born where physicalism reaches its own naturalistic limits.

    A symbol has to be a physical mark, even if just a dot being printed, or a blank being left, on an infinite Turing tape. But the great trick of semiosis is that if you can afford to encode information in a way that seems physically costless, then you - as an organism - can escape all the strictures of the material world.

    This is Pattee's epistemic cut. Life and mind arise because they can make physical marks - like a DNA codon or a new synaptic junction - which look perfectly meaningless to the physical world that they then sneakily turn out to regulate.

    It costs the body as much to code for a nonsense protein as it does for some crucial enzyme. The world - as a realm of rate dependent dynamics - can't see anything different about the two molecules in terms of any material or structural physics. Both are equally lacking in meaning - and even lacking in terms of being counterfactually meaningless as well. The two molecules just don't fit any kind of signal~noise dichotomy of the kind that semiotics, as a science of meaning, would seek to apply.

    But then the body does know the difference as the difference is precisely one it imposes on the physics. It says I could be producing molecular junk or molecular messages. You - the world - can't tell and so just have no say in the matter. I - the body - am thus absolutely free to throw proteins into the bubbling stew of metabolic action and see what sticks as the best evolutionary choice.

    Evolution doesn't just happen to organisms. They invent the binary distinction of sense~nonsense so as to make themselves evolvable as something completely new - a structure of rate independent information - imposed on rate dependent dynamics of the merely physicalist world.

    So yes, the human story has reached the point where it holds up a mirror to the physicalism of the real world. But it can only do that by adding itself as a further trick - the trick of semiotic mechanism - which the physical world does not appear to contain and which is only present because the physical world in fact has strict limits.

The physical world is capable of abolishing all entropic gradients. But it can't even see the negentropy that is the informational structure that an organism accumulates so as to have its own parasitic existence on this world.

It's a splendid irony. A form of transcendence in that a model of the world must transcend that world. And yet the books get balanced as that brief escape from entropy is then paid back to the world with interest. Life and mind earn their way in the cracks of existence by breaking down accidental barriers to maximum entropy - like the way industrialised humans are taking half a billion years worth of buried carbon, slowly concentrated into rich lodes of coal and petroleum, and burning the bulk of it in a 200 year party.

So the answer to your question is that there is further recursion in the physicalist tale as it now has to add the new thing that is life and mind. The mirror we hold up would show the Cosmos a self that now also has us in it - the informational degrees of freedom that its laws of thermodynamics could never forbid, but which they also didn't in any immediately obvious way seem to require.

    It is only because entropification must be achieved in any way possible - and life and mind were the one further way possible - that we can be considered part of the natural order.

    Is reality made of signs that are neither mental nor physical ? For this distinction is itself a cut of the sign ?jas0n

    This is the epistemic cut issue. As I previously said, the central trick of semiosis is that a sign is really - as Pattee makes clear - a switch. And it is then easy to see the connection as well as the cut. A mechanical switch is both a logical thing and a physical thing. It has a foot on both sides of the divide.

    So that fact puts a halt to the homuncular regress. The two worlds - of entropy and information - are bridged semiotically at the scale of your smallest possible physical switches.

    And that is what the biophysics of the past decade has confirmed. All life and mind is based on the ability of proteins - molecular structure - to ratchet the quasi-classical nanoscale of organic chemistry.

    The nanoscale is the tipping point where all the key physical forces converge to have the same scale. It is the "edge of chaos" or zone of criticality. In material terms, it exhibits the maximum thermal instability.

    And in being peak material instability - halfway between the quantum and the classical - it is also the most tippable state. Biological information can get in there and tilt the entropic odds in its own favour.

    But all this is extremely new science. Even in biophysics, the fact is still sinking in.

    Have you looked into Derrida's différance?jas0n

    Yep. But only doing due diligence. :grin:

    Generally post-modernism is the backlash against its own structuralist past. It wants to kill the part of itself that was valuable. It got tangled up in Romanticism, Plurality and Idealism in likewise wanting to distinguish itself from Enlightenment rationalism and the hierarchical views of Natural Philosophy.

    As philosophy, it is a self-parodying mess. Yet of course, take any text in isolation and it often says something that could be seen as reasonable and obvious.

    So between AP and Continentalism, I stick to Pragmatism as the middle path that offers the most coherence.
  • Popper's Swamp, Observation Statements, Facts/Interpretations

    Funny point, but this is as dense and elusive as anything Derrida wrote.jas0n

    Yes, Peirce is jargon-ridden. The difference is that Peirce is thinking mathematically. Just check out the amphek as the epistemic cut switching device that makes possible the whole of Boolean algebra - a fact known to Peirce in 1880 and not rediscovered until Sheffer in 1913. And even Sheffer got no credit until Bertrand Russell stumbled across it in the 1920s and was compelled to incorporate it into the second edition of his Principia Mathematica.
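
The amphek point is easy to verify for yourself. A throwaway Python sketch (mine, not Peirce's notation) showing the one switching operation generating the standard connectives:

```python
# Sketch: the amphek (NAND, aka the Sheffer stroke) is functionally complete -
# every Boolean connective can be built out of this one switching operation.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a):
    return nand(a, a)                      # negation from one self-applied amphek

def AND(a, b):
    return nand(nand(a, b), nand(a, b))    # nand, then negate

def OR(a, b):
    return nand(nand(a, a), nand(b, b))    # De Morgan: a or b = not(not a and not b)

# exhaustive check over all truth assignments
for a in (False, True):
    for b in (False, True):
        assert NOT(a) == (not a)
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
print("all connectives recovered from nand alone")
```
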

    So Peirce is mathematical rigor underneath all the neologisms. Derrida and PoMo in general are more like the blind people in a dark room giving the elephant a good touch up and feeling moved to poetic outbursts.

    don't know much about the physics of the beginning of the universe.jas0n

I meant to add, given your interest in the real numbers, a physicist would these days say that the complex numbers are more foundational than the reals.

If quantum field theory and its non-commutativity is basic, then nature counts in complex numbers rather than real numbers. The ground of being is where the dichotomising starts. And that dyadicity is what the complex plane encodes as the dimensionality where rotations and translations share the same unit 1 starting point.

    Which leads neatly to....

    I did learn Newtonian physics pretty well once.jas0n

    And what was Newtonianism founded on but the (Noether) symmetry of rotation and translation. Or angular momentum and linear momentum.

So you can spin on the spot or roll in a straight line as an inertial degree of freedom. They are the two reciprocal faces of the same unit 1 identity operator that then let you start counting accelerations and decelerations within a coordinate-stabilised inertial reference frame where even being at "rest" is made a strictly relative state of affairs.
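
In code, the point about the shared unit 1 looks like this (my own toy illustration):

```python
# Toy illustration of rotation and translation sharing the same unit 1
# starting point in the complex plane: rotation is multiplication by a
# unit-modulus number, translation is addition of one.
import cmath

z = 1 + 0j                                   # the shared unit 1

rotated = z * cmath.exp(1j * cmath.pi / 2)   # spin on the spot: size unchanged
translated = z + 1                           # roll in a straight line: heading unchanged

print(rotated)      # ~1j    - a quarter turn about the origin
print(translated)   # (2+0j) - one unit further along the line
```
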

Reality is dichotomies, or switches, or ampheks, or signs, or quantum operators acting on infinite Hilbert spaces, or symmetry breaking in general, all the way down to ground. Which is then defined by the Planck triad of constants - that stand in their own final set of reciprocal relations. That becomes the Big Bang cut-off that says you can't go any smaller or hotter as the fabric of reality now becomes just a vagueness - the dissolution that is Wheeler's quantum foam.

    Something you've probably already touched on and seems relevant is the difficult distinction between sign and non-sign. If a sign is not grounded in a 'mental content' (a signified), then it's just 'out there' in the environment. In other words, what separates a salute from wiping the sweat off of one's forehead? The answer is probably something like the 'play' or 'ambiguity' of the sign/non-sign or trace/non-trace distinction. This is why I say the Cartesian 'ghost' is dethroned perhaps rather than annihilated. Our mentalistic language, however misleading, almost needs to remain legible. This is determinate negation, writing under erasure, etc. Less pretentiously we might talk of switching between language games or perspectives.jas0n

    Well Peirce addressed this for language by making a triad out of the steps towards full-blown semiosis. The most hesitant sign is iconic (a relation of Firstness), the more definite sign is indexical (a relation of Secondness), and the fully realised sign is symbolic (the fixity of a habit, or Thirdness).

    For example....

    [Peirce] identifies three types of signs as a function of their representative condition: icons, or signs that resemble their object (an image of fire), indices, or signs that are contiguous with, are caused by, or somehow point to their objects (smoke coming from a fire), and symbols, or signs whose meanings are a function of convention, habit, or law (fire as knowledge in the story of Prometheus). Here again, icons are firsts, indices are seconds, and symbols are thirds.

    https://undcomm504.wordpress.com/2013/02/24/firstness-secondness-and-thirdness-in-peirce/

    And I also addressed this in a more general way by echoing the usual observation that a mark can be granted extrinsic meaning precisely because it lacks intrinsic meaning.

I press my stylus into the wax. It makes a dent. It certainly draws attention to itself as a distinctive physical fact - a small and yet curiously precise effort someone has just made in a world where marks are distributed across the landscape with the maximally generic unconcern of a fractal or scale-free probability distribution.

    And then you learn that the mark is in fact part of some larger mental structure - some community-level habit of interpretation. As more marks get made, you might start to think you could crack this cuneiform code.

    So at the level of some single mark, it could be "just physics" - a complete material accident in a world composed of material accidents over all possible spatiotemporal scales. Or it could be "all mind" in being a purposeful act of encoding information.

    A mark could be a switch. Or not a switch. And so it sits there right at the epistemic cut as an information bit that might also be understood as an entropic microstate.

    Shannon and Gibbs formalised the probabilistic maths that made the two kinds of things equivalent - once you strip reality down to its own natural Planck scale cut-off to discover the Boltzmann constant, k.
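
The bookkeeping behind that equivalence is short enough to spell out - a sketch using the standard constants:

```python
# The Shannon-Gibbs bookkeeping: one probabilistic sum, read either as bits
# of information or as thermodynamic entropy once Boltzmann's k is supplied.
import math

k_B = 1.380649e-23                     # Boltzmann constant, J/K

def shannon_bits(probs):
    """H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

H = shannon_bits([0.5, 0.5])           # one fair binary distinction = 1.0 bit
S = k_B * math.log(2) * H              # the same quantity in J/K

print(H, S)                            # 1.0 bit is ~9.57e-24 J/K of entropy
```
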

Again this is why I would sound impatient with Wittgenstein or anyone who wants to just deal with language alone as the metaphysical issue. It is the principles of codes that are at stake, whether they be verbal, numerical, neural or genetic.

    And computer science, quantum holography, thermodynamics, and all the other new information theoretic approaches to foundational physics now show that semiosis is not just about the actual codes employed to fashion organisms with life and mind, it also can be given the pansemiotic twist where it becomes a physicalist description of nature in its own right.

    Nature is switches or signs all the way down to the ultimate primal dichotomy that is encoded by the intrinsic reciprocality of the Planck constants.

    One metaphysics to rule them all. :smile:
  • Logical Necessity and Physical Causation

    I can't see that in what I've been reading of him.Wayfarer

Pattee, H.H. (2001). "The Physics of Symbols: Bridging the Epistemic Cut". Biosystems, Vol. 60.

In more common terminology, this type of constraint is a structure that we say controls a dynamics. To control a dynamical system implies that there are control variables that are separate from the dynamical system variables, yet they must be described in conjunction with the dynamical variables. These control variables must provide additional degrees of freedom or flexibility for the system dynamics. At the same time, typical control systems do not remove degrees of freedom from the dynamical system, although they alter the rates or ranges of system variables. Many artificial machines depend on such control constraints in the form of linkages, escapements, switches and governors. In living systems the enzymes and other allosteric macromolecules perform such control functions. The characteristic property of all these non-holonomic structures is that they cannot be usefully separated from the dynamical system they control. They are essentially nonlinear in the sense that neither the dynamics nor the control constraints can be treated separately.

    This type of constraint, that I prefer to call non-integrable, solves two problems. First, it answers Lucretius' question. These flexible constraints literally cause "atoms to swerve and originate new movement" within the descriptive framework of an otherwise deterministic dynamics (this is still a long way from free will). They also account for the reading of a quiescent, rate-independent memory so as to control a rate-dependent dynamics, thereby bridging the epistemic cut between the controller and the controlled. Since law-based dynamics are based on energy, in addition to non-integrable memory reading, memory storage requires alternative states of the same energy (energy degeneracy). These flexible, allosteric, or configuration-changing structures are not integrable because their motions are not fully determined until they couple an explicit memory structure with rate-dependent laws (removal of degeneracy).

    The crucial condition here is that the constraint acts on the dynamic trajectories without removing alternative configurations. Thus, the number of coordinates necessary to specify the configuration of the constrained system is always greater than the number of dynamic degrees of freedom, leaving some configurational alternatives available to "read" memory structures. This in turn requires that the forces of constraint are not all rigid, i.e., there must be some degeneracy to allow flexibility. Thus, the internal forces and shapes of non-integrable structures must change in time partly because of the memory structures and partly as a result of the dynamics they control. In other words, the equations of the constraint cannot be solved separately because they are on the same formal footing as the laws themselves, and the orbits of the system depend irreducibly on both (Whittaker, 1944; Sommerfeld, 1956; Goldstein, 1953; Neimark and Fufaev, 1972).

    What is historically amazing is that this common type of constraint was not formally recognized by physicists until the end of the last century (Hertz, 1894). Such structures occur at many levels. They bridge all epistemic cuts between the controller and the controlled, the classifier and the classified, the observer and the observed. There are innumerable types of non-integrable constraints found in all mechanical devices in the forms of latches, and escapements, in electrical devices in the form of gates and switches, and in many biological allosteric macromolecules like enzymes, membrane channel proteins, and ciliary and muscle proteins. They function as the coding and decoding structures in all symbol manipulating systems.

    https://homes.luddy.indiana.edu/rocha/publications/pattee/pattee.html
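
To give a toy feel for "reading a quiescent, rate-independent memory so as to control a rate-dependent dynamics" - my own cartoon in Python, not Pattee's formalism:

```python
# Cartoon of a non-integrable constraint: a quiescent, rate-independent
# symbol string is "read" by a switching constraint that selects which
# rate-dependent lawful dynamics the system actually follows.

memory = [1, 0, 1, 1, 0]       # inert symbols - no dynamics of their own
x = 1.0                        # state of the controlled dynamical system
dt = 0.1

for symbol in memory:
    rate = 0.5 if symbol else -0.5     # the switch couples symbol to law
    for _ in range(10):                # crude integration of dx/dt = rate * x
        x += rate * x * dt
    print(symbol, round(x, 3))         # the trajectory depends irreducibly on both
```
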
  • Physics and computability.

    The OP displays a basic epistemic confusion which is indeed fairly widespread in physics since it has jumped sides and gone from a materialist ontology over to an informational ontology. This bedevils all "interpretations".

This passage from Howard Pattee is a typically lucid analysis of the epistemic issues - and an introduction into how a pan-semiotic metaphysics (one that sees physical existence in terms of matter AND symbol, not matter OR symbol) is the path out of the maze.

    This matter-symbol separation has been called the epistemic cut (e.g., Pauli, 1994). This is simply another statement of Newton’s categorical separation of laws and initial conditions.

    Why is this fundamental in physics? As I stated earlier, the laws are universal and do not depend on the state of the observer (symmetry principles) while the initial conditions apply to the state of a particular system and the state of the observer that measures them.

    What does calling the matter-symbol problem “epistemological” do for us? Epistemology by its very meaning presupposes a separation of the world into the knower and the known or the controller and the controlled. That is, if we can speak of knowledge about something, then the knowledge representation, the knowledge vehicle, cannot be in the same category of what it is about.

The dynamics of physical laws do not allow alternative paths between states and therefore the concept of information, which is defined by the number of alternative states, does not apply to the laws themselves.

    A measurement, in contrast, is an act of acquiring information about the state of a specific system. Two other explicit distinctions are that the microscopic laws are universal and reversible (time-symmetric) while measurement is local and irreversible.

    There is still no question that the measuring device must obey the laws. Nevertheless, the results of measurement, the timeless semantic information, cannot be usefully described by these time-dependent reversible laws (e.g., von Neumann, 1955).

    http://www.academia.edu/3144895/The_Necessity_of_Biosemiotics_Matter-Symbol_Complementarity

    So the gist is that the "space" in which maths or computation takes place is physically real - in the sense that material spacetime is a generalised state of constraint in which all action is regulated to a Planckian degree of certainty ... except the kind of action which is informational, symbolic, syntactic, computational, etc.

    Physics can describe every material characteristic of a symbol ... and none of its informational ones.

    And in being thus an orthogonal kind of space to physical space, information is a proper further dimension of existence. It is part of the fundamental picture in the way quantum mechanics eventually stumbled upon with the irreducible issue of the Heisenberg cut or wavefunction collapse.

    So the mistake is to try to resolve the irreducibility of information to physics by insisting "everything is computation", or alternatively, "everything is matter". Instead, the ontic solution is going to have to see both as being formally complementary aspects of existence.

    Aristotle already got that by the way with his hylomorphic view of substance.

So nature keeps trying to tell us something. Duality is fundamentally necessary because there is nothing without a symmetry breaking. But then we keep looking dumbly at the fact of a world formed by symmetry breaking and trying to read off "the big symmetry" that therefore must lurk as "the prime mover" at the edge of existence.

The logic of the principle of sufficient reason fools us into believing that only concrete beginnings can have concrete outcomes. Therefore if we see a broken symmetry, then this must point back to an equally physical (or informational) symmetry that got broken.

    But that simple habit of thought - so useful in the everyday non-metaphysical sphere of causal reasoning - is what blinds almost all efforts at "interpretation".

    The duality of existence will never make sense until your metaphysics includes a third developmental dimension by which beginnings are vague or fundamentally indeterministic.

    Clinging onto a belief in the definiteness of beginnings, the concreteness of initial states, is just going to result in the usual infinite regress stories of creating gods or universal wavefunctions. Folk are very good at pushing the question they can't answer as far out of sight as possible.
  • Explaining probabilities in quantum mechanics

    Modest or radical? The Copenhagen Interpretation is metaphysically radical in paving the ground to acknowledge that there must be an epistemic cut in nature.

    The "modest" understanding of that has been the good old dualistic story that it is all in the individual mind of a human observer. All we can say is what we personally experience. Which then leads to folk thinking that consciousness is what must cause wavefunction collapse. So epistemic modesty quickly becomes transcendental confusion. We have the divorce in nature which is two worlds - mental and physical - in completely mysterious interaction.

    I, of course, am taking the other holistic and semiotic tack. The epistemic cut is now made a fundamental feature of nature itself. We have the two worlds of the it and the bit. Matter and information. Or local degrees of freedom and global states of constraint.

    So CI, in recognising information complementarity, can go three ways.

    The actually modest version is simple scientific instrumentalism. We just don't attempt to go further with the metaphysics. (But then that is also giving up hope on improving on the science.)

    Then CI became popular as a confirmation of hard dualism. The mind created reality by its observation.

    But the third route is the scientific one which various information theoretic and thermodynamically inspired interpretations are working towards. The Universe is a system that comes to definitely exist by dissipating its own uncertainty. It is a self constraining system with emergent global order. A sum over histories that takes time and space to develop into its most concrete condition.
  • On the transition from non-life to life

    The 'epistemic cut' implies a dualism between matter and symbol and so implies a duality.Wayfarer

    It implies a formally exact complementarity, which is a very different (triadic) thing.

The reason matter~symbol works, and mind~body doesn't, is that we have fundamental physical theories of the relation between physical degrees of freedom and epistemic degrees of uncertainty. I just explained that above - the equivalence of Shannon information and Gibbs/Boltzmann free energy.

    So it is a dichotomy that works. We know how to measure it as a physical reality. We can convert it to bit, and back again. This has become an insight of fantastic power.
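
To give the "convert it to bit, and back again" claim a number - my gloss here, via Landauer's bound rather than anything stated in the thread - erasing one bit costs at least kT ln 2 of free energy:

```python
# Landauer's bound: the going exchange rate between bits and joules.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 310.0                     # roughly the temperature biology works at, K

e_per_bit = k_B * T * math.log(2)
print(e_per_bit)              # ~3e-21 J - the minimum cost of erasing one bit
```
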

    And as I've mentioned with considerable enthusiasm, biophysics has now discovered in the past 10 years how this works for life and mind. There is an obvious reason now why - at the quasi-classical transition zone of the nanoscale - bio-semiosis and neuro-semiosis could take off. Again a unit of biological information and a unit of biological work (the two sides of Pattee's epistemic cut!) are zeroed at that scale for reasons that are just physically transparent (once you understand the physics).

    This is huge. As big as DNA. Science has come through for us once again.
  • Towards a Scientific Definition of Living vs inanimate matter

    With that established, I then define "life" as "self-productive machinery":Pfhorrest

    Ah. But the question when it comes to life is how can a machine self-reproduce. That is the essence of Pattee's epistemic cut issue. It is the central problem that a definition of life must address.

See Pattee's account of von Neumann's famous challenge to quantum theorists... the infinite homuncular regress that arises when we try to avoid accounting for why a machine would have the intent to make the machine that it does.

    The most convincing general argument for this irreducible complementarity of dynamical laws and measurement function comes again from von Neumann (1955, p. 352). He calls the system being measured, S, and the measuring device, M, that must provide the initial conditions for the dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the same laws as S, we may try a unified description by considering the combined physical system (S + M). But then we will need a new measuring device, M', to provide the initial conditions for the larger system (S + M). This leads to an infinite regress; but the main point is that even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.

    This same argument holds also for control functions which includes the genetic control of protein construction. If we call the controlled system, S, and the control constraints, C, then we can also look at the combined system (S + C) in which case the control function simply disappears into the dynamics. This epistemic irreducibility does not imply any ontological dualism. It arises whenever a distinction must be made between a subject and an object, or in semiotic terms, when a distinction must be made between a symbol and its referent or between syntax and pragmatics. Without this epistemic cut any use of the concepts of measurement of initial conditions and symbolic control of construction would be gratuitous.

    "That is, we must always divide the world into two parts, the one being the observed system, the other the observer. In the former, we can follow up all physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless. The boundary between the two is arbitrary to a very large extent. . . but this does not change the fact that in each method of description the boundary must be placed somewhere, if the method is not to proceed vacuously, i.e., if a comparison with experiment is to be possible." (von Neumann, 1955, p.419)

    https://homes.luddy.indiana.edu/rocha/publications/pattee/pattee.html
  • Is 'information' physical?

    The question I asked (also evaded) was that the distinction between the symbolic and the physical that you generally refer to, seems to originate with Von Neumann's idea, as then picked up by Pattee, in the paper, Physics and Metaphysics of Biosemiosis. I am saying, this is distinction that only appears evident in living systems - that is why, in scanning the universe for life, NASA has some idea what to look for. There is a particular order which is characteristic of living systems, is there not? And that is where the symbolic/physical distinction really comes into play.Wayfarer

    More bullshit. I have agreed umpteen times that the epistemic cut is where life and mind properly kick in. There is actual semiotic machinery involved, like receptors, membranes, pumps, channels, let alone the core stuff of codable memories - genes, neurons, language - that can read/write the information that stands for the purposes and constraints of a biological system.

    A non-biological system can still be a dissipative structure. Now the world at large - the thermodynamic context - is the memory structure that represents the purpose and constraints. So there is no localised epistemic cut - nothing internal to a self-describing or self-replicating organism. The cut is now only a distributed pattern of environmental information. This is where we get into the importance of event horizons as encoding the order of nature at a physical level.

    So yes, we can also define pansemiosis as this more generalised type of metaphysics. And physics has been doing exactly that too.

    But stop pretending that I am not clear about the fundamental difference between biosemiosis and pansemiosis in this regard. It gets really tedious.
  • Thoughts on Epistemology

    Take the map/territory example. These two are both objects, so there is no epistemic cut between these two.Metaphysician Undercover

    Yeah. I've said many times now that a dualistic ontology can't cut it. It has to be a triadic relation. So someone has to interpret the map to navigate the territory.

    Thus the further thing of the interpreter must either be addressed by the metaphysics, or else it sets up the familiar homuncular regress.

    A further obvious problem with a map is that it is representational. It is passive. It can't physically do anything to constrain the physics of the world.

    Well it does if you are reading it and saving your legs by not getting lost. But the epistemic cut is about the need for some actual hinge point, or transduction step, where information and physics truly make contact.

    Hence we have Pattee's focus on how a molecule can function as a message - how DNA can code for a protein that is then an enzymatic signal to switch on or off a metabolic process.

    So Peirce gives us the general triadic need to include the notion of interpretance in any modelling relation with the world. And Pattee focuses on the practicality of the machinery that connects the interpretance and the world.

    The usual dualistic bind that plagues representationalism is resolved by this modelling relation where the informational aspect of nature is tied in an interactive feedback loop with the material aspect of nature.
  • The Non-Physical

    Do you think this admits of a purely physical solution?Wayfarer

    Or it could be that Pattee is adopting a useful rhetorical position in which the glass is half-empty rather than half-full.

    It is definitely part of his character that he pushes the expected scientific attitude of: "Well, we don't really know yet. And we may never actually know the answer on abiogenesis because we haven't got a time machine to go back and see what may have been some of the accidental steps along some actual sequence of events."

    Pattee set himself apart from his mostly far more easy-going theoretical biology colleagues on this score. There are always plenty happy to believe they have the answer - RNA world, or whatever. And Pattee's chosen role was to be the one bringing clarity to the actual question to be answered. So he was always saying, hold up, not yet. You will have to go deeper than that to count as a final theory.

    So what you are hearing is the kind of rigour that makes science a metaphysically-responsible exercise worth doing.

    It is certainly not any kind of semi-religious wavering - the thought that the causes of life and mind might not have a naturalistic explanation. I never heard Pattee make the faintest nod in that direction. And the subject did come up as others in his circle, like Robert Ulanowicz, were openly theistic.

    Pattee would be the most hard-nosed of materialists and so resisted Peircean metaphysics and semiotics pretty strongly - until he was converted and came out with his late flood of papers arguing the case elegantly.

    That the epistemic cut, or the distinction between the semantic and the physical, will be erased in due course?Wayfarer

    But the cut exists. The abiogenetic issue is how it could have evolved, as there seems to be a significant gap to leap.

    And now - in just the past decade - that gap has shrunk dramatically, as Nick Lane and Peter Hoffmann can tell you from their frontline position in experimental biology.

    With Hoffmann, the gap is pretty much literally not there. At the quasi-quantum nanoscale, where the entropic cost of converting thermal gradients to negentropic work falls effectively to zero, life is left with no choice but to get started.

    The epistemic cut is simply lying there on the floor, ready to be picked up. It doesn't need to be specially created. You couldn't avoid stumbling into its grip if you were some passing biochemical process. The likelihood of life not breaking out falls to some improbably tiny number that we might as well call zero.
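    Some back-of-envelope numbers show what "effectively zero" means here (standard physics figures, not anything specific to Hoffmann's book):

        k_B T \approx 4.1\ \text{pN·nm at } 300\ \text{K}  \quad \text{(background thermal noise)}
        k_B T \ln 2 \approx 0.7\, k_B T  \quad \text{(Landauer cost of one bit)}
        \Delta G_{ATP} \approx 20\, k_B T \text{ under cellular conditions}  \quad \text{(one unit of biological work)}

    The unit of information, the unit of work, and the thermal churn all sit within an order of magnitude or two of each other at the nanoscale. That is the sense in which the gap between information and dynamics shrinks to nothing at that scale.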
  • Thoughts on Epistemology

    Could you answer the question that was asked, please. What were you agreeing on?

    Musing a bit, that is part of the problem I have with apokrisis's epistemic "cut"; the cut could not be a private thing.Banno

    The cut is another relative thing, never absolute. And it creates the "private" realm from which either communities or individuals would construct meaning in terms of a sign relation.

    So the entirety of your problem is that you haven't understood the concept. That tends to happen when you are lazy about reading the literature.

    What would sharing it with yourself look like?Banno

    As I've said, speaking creates the speaker. A linguistic identity, a psychological construct of self, develops by mastering the habits of language use.

    Being a self is a particular kind of language game. One that is baked into the general communal game. It is right there in the grammar - me, you and them - as Mead pointed out.

    So if I have a beetle in my box, I can talk about it to myself. I can construct the view which says there is this "me" and there is this "other".

    But this is not of course a whole private language. It is some private vocab. It refers to the world that only I see because only "I" could have such a point of view. It is that tightly tied to any claims to identity that "I" might have. Hence why qualia are treated as the height of the private and ineffable.

    In general, our "I" is socially and culturally constructed. It encodes the communal "I" as the point of view from which a generalised and linguistically sharable selfhood arises. So most of our speaking remains speech from a collective cultural identity. As I said about wine-tasters, this becomes true even of talk about ineffable qualia.

    Thus again, this is about degrees of the private or public. In the end, the speaking "I" is still largely a cultural self. But every person lives in a different body. We all have some unique point of view as well. So there is scope for private language to construct that as the private experience of some solipsistic notion of "myself".
  • What is Information?

    Yeah, but no room for epistemic cuts here!Pop

    So the genes don’t measure the state of the body, the state of its metabolism, and turn the dials accordingly? There is no separation between the regulation and the action? An enzyme doesn’t have both its quantum pocket for doing its physical magic and also separately its regulatory receptor site for listening out for its instructions?

    The body is a nested hierarchy of epistemic cuts. And that is only expanded by evolving an immune system and a nervous system.

    It is ridiculous that you now just go boo, hiss in pantomime fashion when the epistemic cut is mentioned. Show that you understand what it even means as a technical term from theoretical biology.
  • Objective Truth?

    What I was trying to get at it is that since the mind-conceived 'mind-independent world' is always, obviously, conceived; then it is always conceptually articulated.John

    Well that is different in focusing on the epistemic angle rather than the ontic. And pansemiosis is an ontic claim in saying, essentially, that epistemology becomes ontology here. The structure of the modelling relation we have with the world (what you are talking about) is in fact the structure by which the Universe also "knows things" - that is, knows things like what its laws say about how its parts ought to be behaving in conformance with developed habit.

    So what I would say in reply here is that while we need - epistemically - to be aware that the "mind-independent world" is in fact a free creation of the mind, just an idea, it is also true that the "mind" is also a construction of this kind. It is also "just an idea" we hold to explain things.

    So both the world and the self that is imagined as its observer are articulated concepts. Together they form the very epistemic relation, the sign relation, which is what "we" then claim to believe in as our "objective truth".

    What we can't get beyond is the need for a conceptually articulated view in general. And talk about the mind vs the world is what that articulation looks like.

    Of course there is, we must imagine, 'something' independently of human being.John

    But strictly, we consider reality to start exactly where imagination fails. Imagination makes experience depend on "us". We can imagine flying for instance. So it is when experience comes to depend on something other than "us" that we can experientially say, well this is not "us" now. And let's call this other thing mind-independent reality.

    If something is conceptualizable, then it is articulated in the same, or an isomorphic, manner as concepts are, i.e. logically. So, it seems that we are committed to thinking there is a logos in nature independently of human being.John

    Now we are back to ontic commitments. And the question is whether the structure of thought and world are the same in some way that is exactly as we conceive it, or whether - because we know we are manifesting an image - in fact it still remains likely that we are just projecting our articulate concepts.

    And my own point about self and world as equally conceptual at root, should point towards the latter, in fact. There is now even less reason for the workings of our minds to be true to the thing-in-itself.

    This is probably surprising, but it is already basic to psychological science. The brain is not there to re-present reality but to ignore it as much as possible. Attention and habit are filters set up to limit our physical connection to the world (so as to achieve the separation which constitutes the modelling relation's epistemic cut). Being a mind is all about constructing some minimal symbolic encoding that simply has the job of leaving us effective physical actors. Like DNA's relation to the metabolism it models, the contents of experience must be essentially unrealistic to be effective as semiosis.

    If you want people to stop at road junctions, you put the stop sign to one side rather than erecting a physical barrier in the middle of the road. Or at least that is the simple and cost-effective way to co-ordinate driving behaviour. The stop sign looks nothing like a physical barrier. It doesn't represent the world. Yet as a symbol, it articulates a concept about how the world "ought to be".

    So this is very tricky stuff. We have every reason to be suspicious of every articulate conception as their whole point is not to be true in some veridical "thing-in-itself" sense. That is not even the ambition. The ambition is to be pragmatically effective. And that is achieved by a capacity to leave just about everything material out of the concepts. Classic reductionism to theory and measurement in other words.

    However then - having properly understood this psychological apparatus, this epistemic truth - that is the structure of the modelling relation which pansemiosis would project onto our imaginings of reality. The thing-in-itself has the form of wanting to self-simplify in terms of concepts like particles or waves ruled by dynamical laws of motion, for instance.

    People always complain that we look at reality but then talk about the abstracta that aren't really there. We end up treating a logos as the essence of the real (while the actual physical stuff is reduced to mere appearance).

    Pansemiosis - in transferring the psychological account into the space of cosmological accounts - gives us a formal way of accounting for just this. It says, nope, logos really is what is most real here. The thing-in-itself is not just some bunch of stuff, a state of affairs. It does boil down to an encoding relation where there is a cosmic purpose expressing the desire to produce the simplest definite actions.

    Anything might be quantumly possible. But semiotically, existence arises due to the collapse of all this potential being to some historic collection of binary-framed choices. Was the electron spin-up or spin-down all along? Who can know. But history remembers some now fixed answer.
  • General purpose A.I. is it here?

    I argue that because this algorithm has to learn from scratch it must discover it's own semantics within the problem and solution to that problem.m-theory

    That is the question. Does it actually learn its own semantics or is there a human in the loop who is judging that the machine is performing within some acceptable range? Who is training the machine and deciding that yes, it's got the routine down pat?

    The thing is that all syntax has to have an element of frozen semantics in practice. Even a Turing Machine is semantic in that it must have a reading head that can tell what symbol it is looking at so it can follow its rules. So semantics gets baked in - by there being a human designer who can build the kind of hardware which ensures this happens in the way it needs to.
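    To make that concrete, here is a toy sketch in Python - my own illustration, not anyone's published model - of a Turing-style machine doing binary increment. Notice that the machine runs only because the transition table, written by a human, already "recognises" every symbol the head can meet. That recognition is the frozen semantics:

        # Toy Turing-style machine: binary increment. The "semantics" -
        # that '0', '1' and '_' are distinct, recognisable marks - is
        # frozen into the transition table by the human designer. The
        # machine itself just shuffles symbols by rule.

        RULES = {  # (state, symbol) -> (new state, symbol to write, head move)
            ("scan", "0"): ("scan", "0", +1),  # skip right over the digits
            ("scan", "1"): ("scan", "1", +1),
            ("scan", "_"): ("add", "_", -1),   # hit the blank: turn back, add 1
            ("add", "1"): ("add", "0", -1),    # 1 + carry -> 0, carry continues
            ("add", "0"): ("halt", "1", 0),    # 0 + carry -> 1, done
            ("add", "_"): ("halt", "1", 0),    # carry past the top digit
        }

        def run(tape):
            tape, state, head = list(tape), "scan", 0
            while state != "halt":
                symbol = tape[head] if 0 <= head < len(tape) else "_"
                state, written, move = RULES[(state, symbol)]  # the "reading head"
                if head < 0:
                    tape.insert(0, written); head = 0
                elif head >= len(tape):
                    tape.append(written)
                else:
                    tape[head] = written
                head += move
            return "".join(tape).strip("_")

        print(run("1011_"))  # -> 1100 (binary 11 + 1 = 12)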

    So you could look at a neural network as a syntactical device with a lot of baked-in semantics. You are starting to get some biological realism in that open learning of that kind takes place. And yet inside the black box of circuits, it is still all a clicking and whirring of syntax, with no contact with any actual semantics - no regulative interaction with material instability - taking place.

    Of course my view relies on a rather unfamiliar notion of semantics perhaps. The usual view is based on matter~mind dualism. Meaning is held to be something "mental" or "experiential". But then that whole way of framing the issue is anti-physicalist and woo-making.

    So instead, a biosemiotic view of meaning is about the ability of symbol systems - memory structures - to regulate material processes. The presumption is that materiality is unstable. The job of information is to constrain that instability to produce useful work. That is what mindfulness is - the adaptive constraint of material dynamics.

    And algorithms are syntax with any semantics baked in. The mindful connection to materiality is severed by humans doing the job of underpinning the material stability of the hardware that the software runs on. There is no need for instability-stabilising semantics inside the black box. An actual dualism of computational patterns and hotly-switching transistor gates has been manufactured by humans for their own purpose.

    Consider the task of creating a robot hand that is as dexterous as the human hand.m-theory

    Yes. But the robot hand is still a scaled-up set of digital switches. And a real hand is a scaled-up set of molecular machines. So the difference is philosophically foundational even if we can produce functional mimicry.

    At the root of the biological hand is a world where molecular structures are falling apart almost as soon as they self-assemble. The half-life of even a sizeable cellular component like a microtubule is about 7 minutes. So the "hardware" of life is all about a material instability being controlled just enough to stay organised and directed overall.

    You are talking about a clash of world views here. The computationalist likes to think biology is a wee bit messy - and it's amazing its wet machines can work at all really. A biologist knows that a self-organising semiotic stability is intrinsically semantic and adaptive. Biology knows itself, its material basis, all the way down to the molecules that compose it. And so it is no surprise that computers are so autistic and brittle - the tiniest physical bug can cause the whole machine to break down utterly. The smallest mess is something a computer algorithm has no capacity to deal with.

    (Thank goodness again for the error correction routines that human hardware designers can design in as the cotton wool buffering for these most fragile creations in all material existence).

    So again...this algorithm, if it does have semantic understanding...it does not and never will have human semantic understanding.m-theory

    But the question is how can an algorithm have semantic understanding in any foundational sense when the whole point is that it is bare isolated syntax?

    Your argument is based on a woolly and dualistic notion of semantics. Or if you have some other scientific theory of meaning here, then you would need to present it.

    Pattee's epistemic cut was not very clear to me, and he seems to have coined this term.m-theory

    Pattee has written a ton of papers which you can find yourself if you google his name and epistemic cut.

    This is one with a bit more of the intellectual history.... http://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html

    But really, Pattee won't make much sense unless you do have a strong grounding in biological science. And much of the biology is very new. If you want to get a real understanding of how different biology is in its informational constraint of material instability, then this is a good new pop sci book....

    http://lifesratchet.com/
  • General purpose A.I. is it here?

    I have read some more and you are right he is very technically laden.m-theory

    Right. Pattee requires you to understand physics as well as biology. ;) But that is what makes him the most rigorous thinker in this area for my money.

    I was hoping for a more generalized statement of the problem of the epistemic cut because I believe that the Partially observable Markov decision process might be a very general solution to establishing an epistemic cut between the model and the reality in an A.I. agent.m-theory

    Good grief. Not Mr Palm Pilot and his attempt to reinvent Bayesian reasoning as a forward modelling architecture?
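    For the record, the machinery being invoked boils down to a Bayesian belief update over hidden states. A minimal sketch of what a POMDP agent actually computes - made-up numbers, nothing from m-theory's link:

        # Minimal POMDP belief update (a sketch with made-up numbers).
        # The agent never sees the hidden state, only observations, and
        # maintains a probability distribution - a "belief" - over states.

        def belief_update(belief, action, obs, T, O):
            """Bayes filter: b'(s') is proportional to O[s'][obs] * sum_s T[s][action][s'] * b(s)."""
            n = len(belief)
            predicted = [sum(T[s][action][s2] * belief[s] for s in range(n))
                         for s2 in range(n)]
            unnorm = [O[s2][obs] * predicted[s2] for s2 in range(n)]
            total = sum(unnorm)
            return [u / total for u in unnorm]

        # Two hidden states, one action, two possible observations.
        T = [[[0.9, 0.1]],   # T[s][a][s']: transition probabilities
             [[0.2, 0.8]]]
        O = [[0.8, 0.2],     # O[s'][o]: observation probabilities
             [0.3, 0.7]]

        b = [0.5, 0.5]                    # start maximally uncertain
        b = belief_update(b, 0, 1, T, O)  # act, then observe o = 1
        print(b)                          # ~[0.26, 0.74]

    Note that the state space and the models T and O are handed to the agent from outside. The updating is elegant Bayesian syntax, but what the states and observations mean is fixed by the designer - which is exactly where the baked-in semantics objection bites.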
  • General purpose A.I. is it here?

    Agency is any system which observes and acts in it's environment autonomously.m-theory

    Great. Now you have replaced one term with three more terms you need to define within your chosen theoretical framework and not simply make a dualistic appeal to standard-issue folk ontology.

    So how precisely are observation, action and autonomy defined in computational theory? Give us the maths, give us the algorithms, give us the measurables.

    The same applies to a computational agent, it is embedded with its environment through sensory perceptions.m-theory

    Again this is equivocal. What is a "sensory perception" when we are talking about a computer, a syntactic machine? Give us the maths behind the assertion.

    Pattee must demonstrate that exact solutions are necessary for semantics.m-theory

    But he does. That is what the Von Neumann replicator dilemma shows. It is another example of Godelian incompleteness. An axiom system can't compute its axiomatic base. Axioms must be presumed to get the game started. And therein lies the epistemic cut.

    You could check out Pattee's colleague Robert Rosen who argued this point on a more general mathematical basis. See Essays on Life Itself for how impredicativity is a fundamental formal problem for the computational paradigm.

    http://www.people.vcu.edu/~mikuleck/rosrev.html

    I also provided a link that is extremely detailed.m-theory

    The question here is whether you understand your sources.

    Pompdp illustrates why infinite regress is not completely intractable it is only intractable if exact solutions are necessary, I am arguing that exact solutions are not necessary and the general solutions used in Pomdp resolve issues of epistemic cut.m-theory

    Yes, this is what you assert. Now I'm asking you to explain it in terms that counter my arguments in this thread.

    Again, I don't think you understand your sources well enough to show why they deal with my objections - or indeed, maybe even agree with my objections to your claim that syntax somehow generates semantics in magical fashion.

    I can make no sense of the notion that semantics is something divided apart from and mutually exclusive of syntax.m-theory

    Well there must be a reason why that distinction is so firmly held by so many people - apart from AI dreamers in computer science perhaps.

    To account for the competence of AlphaGo one cannot simply claim it is brute force of syntax as one might do with Deepblue or other engines.m-theory

    But semantics is always built into computation by the agency of humans. That is obvious when we write the programs and interpret the output of a programmable computer. With a neural net, this building in of semantics becomes less obvious, but it is still there. So the neural net remains a syntactic simulation not the real thing.

    If you want to claim there are algorithmic systems - that could be implemented on any kind of hardware in physics-free fashion - then it is up to you to argue in detail how your examples can do that. So far you just give links to other folk making the usual wild hand-waving claims or skirting over the ontic issues.

    The Chinese room does not refute computational theories of the mind, never has, and never will. It simply suggests that because the hardware does not understand then the software does not understand.m-theory

    Well the Chinese Room sure felt like the death knell of symbolic AI at the time. The game was up at that point.

    But anyway, now that you have introduced yet another psychological concept to get you out of a hole - "understanding" - you can add that to the list. What does it mean for hardware to understand anything, or software to understand anything? Explain that in terms of a scientific concept which allows measurability of said phenomena.
  • There Are No Identities In Nature

    As I've said quite a few times now, the distinction between the digital and the analog is quite precisely defined by the presence of negation and self-reflexivity.StreetlightX

    You are merely choosing to highlight the bit I already agree with in general fashion. From a biosemiotic viewpoint, that states the obvious.

    But what I have been pointing out is that your framing of the issues lacks the further dimensionality that would allow it to be actually developmental in the way a process view needs to be. Your way of talking about the continuum or the analog is fuzzy over the issue of fuzziness. You talk about the analog/continuum as being itself crisply existent (a realm of actualised material being), and then at other times you talk about it as a ground for further development - the less specified basis for the discrete/digital machinery that transcends it so as to have a view of it.

    Of course in your confusion, that becomes the confusion you accuse me of. I'm just patiently taking you back to the source of symmetry-breaking to show how both continuity and discreteness co-arise from pure vagueness. And analog~discrete would have arisen as modes of communication or representation in the same fashion.

    As I have said, it is important that the analog or iconic representation already exists on the other side of the epistemic cut - on the side of the symbolic or "rate independent information". It is a distinction made at the level of the mapping, even if it means to be talking about a distinction in the (computational!!) world.

    And because you set off in the OP to say something logically concrete about metaphysics, you can't just gaily presume that what is true of the map is true of the territory. That further part of the argument must be properly supported.

    Either you don't understand that or you simply want to avoid the issue.

    So it is fruitless to keep trying to return me back to Wilden's perfectly acceptable 1970s analysis of the distinction between analog and digital computation. You know I agree with that.

    The interesting question is then the ontological or metaphysically-general one of how does that fact about representative modes change our conception of nature itself? What new vantage point does it give us for dealing with the central questions of process philosophy, like the mechanics of development and individuation?

    A difference that makes a difference can be described analogically or digitally, represented in terms of what it is, or what it is not. But that does not yet get at the deeper question of how representation itself arises (via an epistemic cut), nor how bare difference arises (as an ontic symmetry breaking).
  • Metaphysics as Selection Procedure

    Like Von Neumann's measuring tools, the model is both map and territory. But it's kind of this unstable thing, right? like it's both - but it can't be both at the same time.csalisbury

    I don't understand your objection. The model describes a territory that is itself being viewed as a modelling relation. Seems simple enough.

    that recursive explosion - where one would need a new tool, M', to measure M+S, and so forth - requires an indefinite expanse which would allow one to keep 'zooming-out'.csalisbury

    But that is the argument for the epistemic cut or semiotic sign relation. It is because the measurement function - the observer - can't be understood as "just physics" (because recursion ensues) that the observer/measurement has to be understood in terms of a symbolic level of action.

    So the passage you cite identifies the fundamental problem of physicalist explanation. And that homuncular regress is what semiosis fixes.

    well, yes, that which constrains has to be atemporal, but it's a weird kind of atemporality isn't it? It's out of time, yet of time - precipitated from temporal dynamic material processes (tho always implicit within them), yet able to turn around, as it were, and regulate them.csalisbury

    Again, there seems no problem at all. That is how a memory functions. You have all these regulative habits you've learnt - like perhaps the rules of cribbage. Then along comes a cribbage playing situation and all your dormant skill gets a chance to do its thing.

    But a model qua TOE isn't merely constraining and controlling a local set of dynamic processes - it envelops everything - both the dynamic processes and the atemporal. It is somehow outside of the dialectic, touching the absolute**, and invites the very idea of the transcendent mind you rightfully decry. It's a fixed thing - a holy trinity of sorts - which explains the fixity/nonfixity/relation-between-the-two which characterizes everything.csalisbury

    A TOE would be maximally general. And it would then encompass all the more constrained physical models.

    A model of quantum gravity unifies quantum field theory and general relativity. General relativity unifies special relativity and Newtonian gravity. So physics already is organised in this nested hierarchical fashion.

    And it is definitional of a TOE that spacetime becomes an emergent feature, not a fundamental ingredient. That is the point.

    So being "outside" of time, and space, and matter, are all desirable properties.

    And that in turn is the argument for pansemiosis. The fundamental problems of physics can't be fixed with just "more physics". That risks the recursion that can only be "solved" by the appeal to mystic transcendent causes.

    And so the trick that worked for human self consciousness and biological autonomy - semiosis/the epistemic cut - would be the way to fix physics as well.

    Physics is at an impasse with quantum theory because it cannot offer a formal model of the observer that collapses the wavefunction. And semiotics is precisely that - a formal model of observers.
  • Explaining probabilities in quantum mechanics

    Many worlds is used by many to avoid the physical reality of wavefunction collapse or an actual epistemic cut. Or rather, to argue that rather than local variable collapse, there is branching that creates complementary global worlds.

    So as maths, many worlds is fine. It has to be as it is just ordinary quantum formalism with the addition of thermodynamical constraint - exactly the decoherent informational view I advocate.

    But it gets squirmy when the interpretation tries to speak about the metaphysics. If people start thinking of literal new worlds arising, that's crazy.

    If they say they only mean branching world lines, that usually turns out to mean they want to have their metaphysical cake and eat it. There is intellectual dishonesty because now we do have the observer being split across the world lines in ways that beg the question of how this can be metaphysically real. The observer is turned back into a mystic being that gets freely multiplied.

    So I prefer decoherence thinking that keeps observers and observables together in the one universe. The epistemic cut itself is a real thing happening and not something that gets pushed out of sight via the free creation of other parallel worlds or other parallel observers.
  • On the transition from non-life to life

    OK, so you hold that consciousness is not substance but rather that some vague matter/info/stuff isjavra

    I would start by reminding that I would see consciousness as a process and not any kind of "stuff". You do think of consciousness as a stuff - substantial being - and so you automatically try to understand my position in the same ontic terms. For you, the critical question becomes what sort of substance am I talking about - aha! Information. Or (vague) matter. Or something (some thing).

    Still, last I recall, we can both agree that life and non-life are qualitatively different.javra

    Again you just translated the discussion into substance terminology. Where I would say we might agree on a difference in process, you say we might agree about a difference in quality - a particular property of a substance.

    To my mind, the physical plane is the closest communal proximity that all co-existent agents hold to the grand finale. It deterministically (again, derived teleologically) constrains our various freewill intentions to a set of possibilities that we all abide by (e.g., nature says: thou shalt not act out one’s fantasies of flying off of tall cliffs/buildings through the flapping of hands lest one fall and loose one’s identity to this world … kind of thing).javra

    It is plausible that when all possible wishes are taken into account, a generalised shared world emerges as the baseline to that. That is also the logic of the "sum over histories" approach in quantum mechanics. The Universe can be understood as emerging from an ensemble of possibilities where the vast mass of those possibilities will self-cancel away, leaving behind only the commonalities that are uncancellable.

    So if we average all "desires" or "acts" in a world where the possibility of turning right is matched by the possibility of turning left, then the shared outcome is a world where what is left uncancellable is the symmetry of being poised between two options.
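    The cancellation logic is easy to see numerically. A toy sketch - mine, with arbitrary phases, not the actual path integral:

        # Toy "sum over histories": contributions with uncorrelated
        # phases self-cancel, while contributions that share a phase
        # survive the averaging.
        import cmath
        import random

        random.seed(0)
        N = 100_000

        # Histories with arbitrary phases: the sum washes out (~ 1/sqrt(N)).
        scattered = sum(cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
                        for _ in range(N))

        # Histories agreeing on one phase: the sum grows with N.
        aligned = sum(cmath.exp(1j * 0.3) for _ in range(N))

        print(abs(scattered) / N)  # ~0.003 - self-cancelled away
        print(abs(aligned) / N)    # 1.0 - the uncancellable commonality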

    The story works for either a mentalistic or physicalist metaphysics.

    Thing is, there’s a bridge that I have a hard time traversing. I’m very set on affirming that life and non-life are substantially different, with the difference being that of awareness. What I’m considering, though, is the possibility of there being an underlying factor to both non-life and life—one that would yet be present in the final end—which when held in large enough degrees forms the gestalt of a first-person point of view as can be defined by perception and perceiver (no homunculus).javra

    This indeed seems a critical problem for your approach. You are wanting to assert that awareness is basic, and yet it only emerges eventually.

    So one solution to that is panpsychism - saying that awareness was always there, just dilute and not properly organised to be a structured state of experience, a point of view.

    The other would be to turn causality on its head and make finality retrospective. In Hegelian fashion, the world is called into being by the desire that is its own end.

    Panpsychism is in fact pretty reductionist - back to primal stuff with primal properties. And the idea of retrocausality is something even physics is having to contemplate, as in Cramer's transactional interpretation of quantum mechanics. Experiments like the quantum eraser show how the future can act backwards to affect events in the past - or at least that something "causality violating" must be the case.

    So for both the mentalistic and physicalist ontologies, the alternatives boil down in similar fashion.

    Here, there’s yet a duality, as you might call it, between the ontically real “agency” and the information that, despite its causal influence upon agency, is nevertheless an illusion which vanishes in the final end. Though this is from my interpretation, I believe you’ll find it parallels your own: in the Heat Death you uphold, information as we know it, together with all natural laws as we know them, all causal processes as we know them, etc., vanish, leaving instead … well, that’s your territory.javra

    The way you describe it sounds too much like the Cheshire Cat's grin. Once more, you are reifying the process of acting agentially - behaving like a self in forming a point of view - as then this thing of "agency". Your claim becomes that an abstraction is left as all that exists. Knock down Oxford University and its essence still persists, hanging over the cleared ground as a real substantial being.

    The Heat Death is a more subtle concept because it is in fact a process that never stops, yet becomes eternally unchanging. Differencing still goes on, but it ceases to make a difference. You are left with the same process producing now only the simplest possible outcome.

    [For those who deny that bacteria hold any awareness and some minimal degree of freewill, the transition nevertheless happened somewhere along the way toward being human; I pick at this level for my own reasons … As for myself, I’ll not here again debate where the transition first occurred, nor on whether reality is all determinist v. indeterminist. Again, the intended theme here is how one can logically go from inanimate matter to conscious agency.]javra

    This is the advantage of a semiotic approach to physicalism. We can now define the bridge as the epistemic cut between - as Pattee puts it - rate independent information and rate dependent dynamics.

    So as soon as proper internalised semiosis occurs - as soon as there is a modelling relation - there is life and mind in some formally-defined degree.

    For a bacterium, this sign-processing may be terribly simple. The mechanics of what is going on is completely transparent. A bacterium with a flagellum - a wiggling tail - connected to a chemo-receptor can swim along a gradient of food scent.

    So long as the receptor is signalling "yes", the molecular motors spin the tail, a collection of strands, one way. The bacterium is driven in a straight line towards its heart's desire. If the receptor's switch is then flipped the other way - no chemicals binding it, causing the receptor's molecular structure to change shape due to a simple alteration in the balance of its mechanical forces - then that in turn signals the flagellum to rotate in the other direction. The bundle of strands untangles and no longer pushes the bacterium in any one direction. It now tumbles about randomly - until it again happens to pick up a scent.
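    That loop is simple enough to cartoon in a few lines of code - a toy sketch of the two-state logic with made-up parameters, obviously not the biochemistry:

        # Cartoon run-and-tumble chemotaxis. A one-bit "receptor"
        # reading drives the "motor": scent improving -> keep running
        # straight; scent worsening -> tumble to a random new heading.
        import math
        import random

        random.seed(1)

        def scent(x, y):
            return -math.hypot(x - 50.0, y - 50.0)  # scent peaks at (50, 50)

        x = y = heading = 0.0
        last = scent(x, y)

        for _ in range(2000):
            if scent(x, y) < last:                        # receptor says "no"
                heading = random.uniform(0, 2 * math.pi)  # tumble
            last = scent(x, y)                            # one-step memory
            x += math.cos(heading)                        # run one step
            y += math.sin(heading)

        # Starts ~71 steps from the source; the blind run-and-tumble
        # loop homes in to within a handful of steps.
        print(round(math.hypot(x - 50, y - 50)))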

    The point is that if we actually look at the ground level of life, there is just no mystery. You get intelligent behaviour due to semiotics. A mechanical chain of events connects information to action as a hardwired interpretive habit.

    This epistemic cut is a small trick. But once established, it can be scaled to be as large as you like. The modelling relation has no limit on its complexity. Physicalism just doesn't have a problem explaining intelligent behaviour. There is no explanatory gap when it comes to semiosis as a model-producing process. The gap arises only once folk start treating the process as something further - an ontological thing, or substantial state.

    Again, the intended theme here is how one can logically go from inanimate matter to conscious agency.javra

    One can't because the dualism is baked in by the chosen terminology. It becomes a word game, not a reasonable inquiry.
  • Semiotics Killed the Cat

    there is in some important sense an incommensurability between the physical and the semiotic. It is precisely this incommensurability which you then claim to have overcome by 'pansemiosis' - when this is actually the point at issue!Wayfarer

    Once more, you can leave pansemiosis out of it if you like. Biosemiosis alone makes the crucial point when it comes to how life/mind can be both a physical process, and then more than physical in being informational.

    Then pansemiosis is the larger view which shows how an informational view can be applied to the purely physical.

    Now the epistemic cut is not due to some internal coding machinery - a memory that provides the constraints that shape the organism - but is a physical feature of material interactions themselves. All interactions are limited by lightspeed. And so this creates an event horizon when it comes to the history that is the shaping context of any material events.

    It is this fact that creates a sharp topological discontinuity. You can't be affected by what hasn't yet had time to affect you. So in a real sense, every physical event is being shaped by a "personal" history. It is seeing the Cosmos from a particular point of view.

    This is what the holographic principle is about. Event horizons aren't real in the sense of being material. They are just the fact that it takes time for distant events to impinge upon you as now part of your history.

    But then this situation is best described as informational. Whether you know or not makes an actual difference. It is meaningful to you. Or if you are a particle, it is the context that determines your state.

    So physics does have a need to make a distinction - draw a line, make an epistemic cut - to mark the event horizon, which is not a physical thing itself but is a definite informational effect.

    Well at least that is where the information theoretic approach begins. As you get properly quantum, and materiality gets totally slippery, the event horizons start to look like they are creating our reality as a holographic projection.

    That is getting crackpot of course. But that is another reason for liking pansemiosis. It stops the metaphysics going that far and becoming nonsense.
  • Emergence is incoherent from physical to mental events

    If we are still discussing the nature of mind, we only need biosemiosis and its epistemic cut.

    Peircean semiosis claims the irreducibility of spontaneity or tychism anyway. Otherwise what is there to constrain or regulate?

    Then this is logical at the metaphysically general level because it is reasonable in a causal sense. If everything tries to happen, much will cancel out. An average will emerge.

    Remember the high esteem in which you hold a statistical principle like natural selection? Well Peirce’s view is that physical existence is probabilistic and falls into the regularity of patterns due to emergent constraints.

    Note also that evo-devo has been replacing the modern Darwinian synthesis in biology. This is a recognition that material self organisation - development - is as important as inheritance and selection, or evolution. ... Just as Pattee’s epistemic cut describes.

    So as a result of the 1980s paradigm shift brought about by chaos theory, dissipative structure theory, self-organised criticality theory, etc, even physics and chemistry seem lively and mindful in that self-constraining order can emerge for purely probabilistic or entropic reasons.

    That makes pansemiosis a reasonable metaphysical framework. And biology certainly now recognises that life is not about bringing dead matter into action. It already wants to develop order. The trick then is to find material processes balanced at the edge of chaos - where they are at the point of critical instability and so easy to tip with just an informational nudge.

    You can’t be a follower of modern biology and not have noted this paradigm shift. The 1960s genecentric view is out. It is now evolution and development because life has to rely on the more fundamental self organising tendencies of a material world.

    Nature is rational or reasonable all the way down in that order cannot help but emerge to make disorder, or entropy, also an actual thing.
  • Is 'information' physical?

    But is inorganic matter on a continuum with life and mind? Or is there a discontinuity there?Wayfarer

    When have I ever not flagged the critical discontinuity? It's the epistemic cut. It is only after that that life and mind become a thing.

    So that then raises the question of whether there is still a continuity that is "semiotic".

    The reply is that if the epistemic cut internalises constraints - this being the information that membranes, genes, neurons, words and numbers encode - then that now raises the definite possibility of constraints which are encoded or remembered externally, out in the world itself. The Cosmos might be understood as a dissipative structure, organised by its historically fixed information.

    And this is what the information theoretic turn of modern physics hinges on. Entropy. Event horizons. Holography. Quantum information. Material cause no longer carries the weight of explaining existence. Instead, formal cause provides the intelligible structure.

    So it is telling that you ask about a continuity that can connect biosemiosis back to "inorganic matter". You assume that real physics can't afford to let go of material causality. But physics has pretty much let go now.

    The symmetries that account for the fundamental forms of reality - the symmetries of spacetime and particle physics - are the part of existence that feel hard, definite, crystalline. They have the force of mathematical necessity.

    The "action" that then animates this mathematical pattern must still be part of the physicalist story somehow. But now it feels like the mysterious ghost in the machine. The metaphysical puzzle has been reversed. Matter seems the most immaterial part of the modern physicalist equation.

    Check out ontic structural realism to see how current metaphysics is trailing along in the wake of this particular turn of events.
