• What is life?
    Not animate. Duhhh...noAxioms

    So the LEM applies to inanimate, but not to animate? Interesting.
  • What is life?
    Would you categorise a tornado as inanimate and on what grounds precisely?
  • What is life?
    A biologist would stress that what is definitional is replication and metabolism. Respiration releases energy, but life also requires the ability to direct some of that into work - the work that rebuilds the body doing the respiring. So somewhere life must have an idea of the material structure it desires to build or maintain. Which is where the immateriality enters the picture.
  • What is life?
    Have you no respect for the advancements made by metaphysicians between then and now? In particular, I refer to those advancements which have created the categories of animate and inanimate things.Metaphysician Undercover

    Define inanimate. What is its essence? :)
  • What is life?
    Do you not see that this is unreasonable?Metaphysician Undercover

    Of course not. It makes a change from calling life and mind a physical machinery.

    I thought you were the one who believed in all the spooky transcendent shit - God, freewill, prime movers. My way of speaking is faithful to the immanence that is the founding presumption of the natural philosopher.

    I conclude that your metaphysics is essentially pantheistic. The Cosmos is a living god. Do you agree with this assessment?Metaphysician Undercover

    As I say, it is essentially pansemiotic rather than pantheistic or even panpsychic. So no, this ain't about gods or minds or anything that requires hard dualism. Semiosis is how physicalism can enjoy all the benefits of dualism without any of its mystic-mongering and question-begging.

    As I have been arguing, existence is the product of the dynamic duo of matter and sign (or matter and symbol). And part of the big shift in the physicalist mindset needed to understand this pansemiotic metaphysics is that matter can't be regarded as inert or passive. This deal only works if matter has critical instability ... and relies on semiosis or habit-taking to grant it the stability of informational constraint.

    The fact that pansemiosis is the case is pretty much proven by the thermodynamic/information theoretic turn that modern physics has had to take. The same general theory - of information entropy - now describes both sides of the one coin of measurement. We can talk in measurable terms about the same thing when talking about physical uncertainty and mental (or rather message) uncertainty. That is Gibbs vs Shannon entropy.

    So pansemiosis is the completely scientific resolution of the ancient dilemma. Yes, the Cosmos has a Mind. And if that sounds whacky, well sorry but this is what we actually mean in terms of modern physical models based on the interchangeability of H and S measures of information entropy. We can now talk about particles and brains in the same essential language.
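The formal parallel being leaned on here is easy to check. As a toy illustration (mine, not from the original post), Shannon's H and Gibbs's S apply the same formula to a probability distribution and differ only by the constant factor of k_B ln 2 per bit:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H in bits: H = -sum(p_i * log2(p_i))."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def gibbs_entropy(p, k_B=1.380649e-23):
    """Gibbs entropy S in J/K: S = -k_B * sum(p_i * ln(p_i))."""
    return -k_B * sum(pi * math.log(pi) for pi in p if pi > 0)

# The same probabilities can describe message symbols or physical microstates.
p = [0.5, 0.25, 0.125, 0.125]
H = shannon_entropy(p)   # 1.75 bits
S = gibbs_entropy(p)

# The two measures differ only by a constant scale: S = k_B * ln(2) * H.
assert abs(S - 1.380649e-23 * math.log(2) * H) < 1e-30
```

The point of the sketch is that nothing changes between the "mental" and "physical" cases except the units the answer is quoted in.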
  • What is life?
    Biophysics finds a new substance

    This looks like a game-changer for our notions of “materiality”. Biophysics has discovered a special zone of convergence at the nanoscale – the region poised between quantum and classical action. And crucially for theories about life and mind, it is also the zone where semiotics emerges. It is the scale where the entropic matter~symbol distinction gets born. So it explains the nanoscale as literally a new kind of stuff, a physical state poised at “the edge of chaos”, or at criticality, that is a mix of its material and formal causes.

    The key finding:
    In brief, as outlined in this paper - http://www.rpgroup.caltech.edu/publications/Phillips2006.pdf - and in this book - http://lifesratchet.com/ - the nanoscale turns out to be a convergence zone where all the key structure-creating forces of nature become equal in size, and coincide with the thermal properties/temperature scale of liquid water.

    So at a scale of 10^-9 metres (the average distance of energetic interactions between molecules) and 10^-20 joules (the average background energy due to the “warmth” of water), all the many different kinds of energy become effectively the same. Elastic energy, electrostatic energy, chemical bond energy, thermal energy – every kind of action is suddenly equivalent in strength. And thus easily interconvertible. There is no real cost, no energetic barrier, to turning one kind of action into another kind of action. And so also – from a semiotic or informational viewpoint – no real problem getting in there and regulating the action. It is like a railway system where you can switch trains on to other tracks at virtually zero cost. The mystery of how “immaterial” information can control material processes disappears because the conversion of one kind of action into a different kind of action has been made cost-free in energetic terms. Matter is already acting symbolically in this regard.
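The claimed convergence of energy scales can be sanity-checked with textbook constants. This sketch uses my own back-of-envelope figures (a 1 nm charge separation in water, and the commonly quoted ~20 k_B T for ATP hydrolysis), and shows thermal, electrostatic and chemical energies all landing within an order of magnitude of 10^-20 joules:

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # roughly the temperature of liquid water, K
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
eps_r = 80.0             # relative permittivity of water (assumed)
r = 1e-9                 # 1 nanometre, m

# Thermal energy: the background "warmth" of water.
E_thermal = k_B * T                                  # ~4.1e-21 J

# Electrostatic energy of two elementary charges 1 nm apart in water.
E_coulomb = e**2 / (4 * math.pi * eps0 * eps_r * r)  # ~2.9e-21 J

# Chemical energy currency: ATP hydrolysis, commonly quoted at ~20 k_B*T.
E_atp = 20 * k_B * T                                 # ~8.3e-20 J

# All three fall within an order of magnitude of the 1e-20 J scale.
for E in (E_thermal, E_coulomb, E_atp):
    assert 1e-21 < E < 1e-19
```

So the different kinds of energy really are of comparable size at this scale, which is what makes the "zero cost" interconversion argument go through.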

    This cross-over zone had to happen because there is a transition from quantum to classical behaviour in the material world. At the micro-scale, the physics of objects is ruled by surface area effects. Molecular structures have a lot of surface area and very little volume, so the geometry dominates when it comes to the substantial properties being exhibited. The shapes are what matter more than what the shapes are made of. But then at the macro-scale, it is the collective bulk effects that take over. The nature of a substance is determined now by the kinds of atoms present, the types of bonds, the ratios of the elements.

    The actual crossing over in terms of the forces involved is between the steadily waning strength of electromagnetic binding energy – the attraction between positive and negative charges weakens proportionately with distance – and the steadily increasing strength of bulk properties such as the stability of chemical, elastic, and other kinds of mechanical or structural bonds. Get enough atoms together and they start to reinforce each other's behaviour.

    So you have quantum scale substance where the emergent character is based on geometric properties, and classical scale substance where it is based on bulk properties. And this is even when still talking about the same apparent “stuff”. If you probe a film of water perhaps five or six molecules thick with a super-fine needle, you can start to feel the bumps of extra resistance as you push through each layer. But at a larger scale of interaction, water just has its generalised bulk identity – the one that conforms to our folk intuitions about liquidity.

    So the big finding is the way that contrasting forces of nature suddenly find themselves in vanilla harmony at a certain critical scale of being. It is kind of like the unification scale for fundamental physics, but this is the fundamental scale of nature for biology – and also mind, given that both life and mind are dependent on the emergence of semiotic machinery.

    The other key finding: The nanoscale convergence zone has only really been discovered over the past decade. And alongside that is the discovery that this is also the realm of molecular machines.
    In the past, cells were thought of as pretty much bags of chemicals doing chemical things. The genes tossed enzymes into the mix to speed reactions up or slow processes down. But that was mostly it so far as the regulation went. In fact, the nanoscale internals of a cell are incredibly organised by pumps, switches, tracks, transporters, and every kind of mechanical device.

    A great example is the motor proteins – the kinesin, myosin and dynein families of molecules. These are proteins that literally have a pair of legs which they can use to walk along various kinds of structural filaments – microtubules and actin fibres – while dragging a bag of some cellular product somewhere else in a cell. So stuff doesn’t float to where it needs to go. There is a transport network of lines criss-crossing a cell with these little guys dragging loads.

    It is pretty fantastic and quite unexpected. You’ve got to see this youtube animation to see how crazy this is – https://www.youtube.com/watch?v=y-uuk4Pr2i8 . And these motor proteins are just one example of the range of molecular machines which organise the fundamental workings of a cell.

    A third key point: So at the nanoscale, there is this convergence of energy levels that makes it possible for regulation by information to be added at “no cost”. Basically, the chemistry of a cell is permanently at its equilibrium point between breaking up and making up. All the molecular structures – like the actin filaments, the vesicle membranes, the motor proteins – are as likely to be falling apart as they are to reform. So just the smallest nudge from some source of information, a memory as encoded in DNA in particular, is enough to promote either activity. The metaphorical waft of a butterfly wing can tip the balance in the desired direction.

    This is the remarkable reason why the human body operates on an energy input of about 100 watts – what it takes to run a light bulb. By being able to harness the nanoscale using a vanishingly light touch, it costs next to nothing to run our bodies and minds. The power density of our nano-machinery is such that a teaspoon full would produce 130 horsepower. In other words, the actual macro-scale machinery we make is quite grotesquely inefficient by comparison. All effort for small result because cars and food mixers work far away from the zone of poised criticality – the realm of fundamental biological substance where the dynamics of material processes and the regulation of informational constraints can interact on a common scale of being.
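The 100 watt and 130 horsepower figures can be turned into a rough power-density comparison. The teaspoon volume and the car-engine numbers below are my own assumed round figures, not from the cited sources:

```python
# Assumed conversion factors and volumes (round figures of my own).
hp_to_watts = 745.7
teaspoon_m3 = 5e-6            # a teaspoon is ~5 mL

# The figure quoted above: a teaspoon of molecular machinery ~ 130 hp.
nano_power = 130 * hp_to_watts             # ~9.7e4 W
nano_density = nano_power / teaspoon_m3    # ~1.9e10 W per cubic metre

# A ~100 kW car engine filling ~100 litres of engine bay, for contrast.
car_density = 100e3 / 0.1                  # 1e6 W per cubic metre

# The nanomachinery comes out about four orders of magnitude denser.
assert nano_density / car_density > 1e4

# Meanwhile the whole ~70 litre human body idles at ~100 W overall.
body_density = 100 / 0.07                  # ~1.4e3 W per cubic metre
assert body_density < car_density < nano_density
```

On these rough numbers, the body as a whole runs cool precisely because its motors only fire with the lightest of nudges, even though the machinery itself is enormously power-dense when it does work.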

    The metaphysical implications: The problem with most metaphysical discussions of reality is that they rely on “commonsense” notions about the nature of substance. Reality is composed of “stuff with properties”. The form or organisation of that stuff is accidental. What matters is the enduring underlying material which has a character that can be logically predicated or enumerated. Sure there is a bit of emergence going on – the liquidity of H2O molecules in contrast to gaseousness or crystallinity of … well, water at other temperatures. But essentially, we are meant to look through organisational differences to see the true material stuff, the atomistic foundations.

    But here we have a phase of substance, a realm of material being, where all the actual many different kinds of energetic interaction are zeroed to have the same effective strength. A strong identity (as quantum or classical, geometric or bulk) has been lost. Stuff is equally balanced in all its directions. It is as much organised by its collective structure as its localised electromagnetic attractions. Effectively, it is at its biological or semiotic Planck scale. And I say semiotic because regulation by symbols also costs nothing much at this scale of material being. This is where such an effect – a downward control – can be first clearly exerted. A tiny bit of machinery can harness a vast amount of material action with incredible efficiency.

    It is another emergent phase of matter – one where the transition to classicality can be regulated and exploited by the classical physics of machines. The world the quantum creates turns out to contain autopoietic possibility. There is this new kind of stuff with semiosis embedded in its very fabric as an emergent potential.

    So contra conventional notions of stuff – which are based on matter gone cold, hard and dead – this shows us a view of substance where it is clear that the two sources of substantial actuality are the interaction between material action and formal organisation. You have a poised state where a substance is expressing both these directions in its character – both have the same scale. And this nanoscale stuff is also just as much symbol as matter. It is readily mechanisable at effectively zero cost. It is not a big deal for there to be semiotic organisation of “its world”.

    As I say, it is only over the last decade that biophysics has had the tools to probe this realm and so the metaphysical import of the discovery is frontier stuff.

    And indeed, there is a very similar research-led revolution of understanding going on in neuroscience where you can now probe the collective behaviour of cultures of neurons. The zone of interaction between material processes and informational regulation can be directly analysed, answering the crucial questions about how “minds interact with bodies”. And again, it is about the nanoscale of biological organisation and the unsuspected “processing power” that becomes available at the “edge of chaos” when biological stuff is poised at criticality.
  • What is life?
    The Nous is the mind which orders all the parts of the cosmos to behave in an orderly fashion. That's what you describe when you say that the universe follows final cause (the intent of a mind), and inanimate things behave according to habits (actions resulting from a mind).Metaphysician Undercover

    Correct. So my use of "mind" is clearly deflationary. Especially as I am explicitly generalising it to semiosis, or sign rather than mind. Semiosis is mind-like - in being the mechanism or process by which formal/final cause are understood as immanent in nature. So the Cosmos is thermodynamic. It is ruled by emergent self-organisation. And thus it has a teleology - the desire to maximise entropy. All material existence - including living and mindful creatures - is entrained to that universal purpose.

    But immanent constraints are looser than transcendent laws. They only limit freedoms to the degree that any differences make a difference. And so the Cosmic level purpose - of achieving entropification - is highly attenuated, especially on very short spatiotemporal scales at which humans engage with the world. It is only in the long-run that human intelligence must be found to have accelerated the cosmos's grand entropification project.

    Now the point in contention here was the difference between biosemiosis and pansemiosis. And a critical difference is one of scale. Physics would say the critical scale for semiosis - as in the collapse of the wavefunction, the symmetry-breaking represented by the Big Bang - would be Planck scale or the scale of the fundamental quantum action. However biophysics has recently found that for life and mind - biological processes - the relevant symmetry breaking scale is instead much greater. It is the nanometre scale of the quasiclassical. The tipping point where sign relations can kick in is the poised point, the zone of critical instability, that lies energetically between the quantum and classical scale.

    Oh, this thermal region also has to be in a body of water. You also need the right chemistry - water being a solvent of complex molecules and so providing the material base of some actual instability. Things are actually building up and breaking down within a complex medium.

    So life and mind are different in that they rely on there being these further "accidents" of nature not foreseen by a purely physical level of semiosis.

    Once the Universe, in its Bang state of being a bath of radiation, cooled/expanded enough to undergo a succession of phase transitions, it had the crud of massive atoms with their classically described motions condensing out and starting to do their own semiotic thing, with their own new laws. And after stars made heavy elements and watery planets were produced, you finally arrive at the rather accidental-looking conditions for organic chemistry, and so organic life and mind as the highly complex avatars of the Second Law.

    So while in a general sense, there is one principle to rule them all - pansemiosis in the general thermodynamic sense of a dissipative structure - it is also clear that biosemiosis is a whole other story in that it requires its own quite different quasiclassical scale of critical instability, and that in turn is quite narrowly defined in terms of its material conditions.

    However this is like nous in granting mind - the power of self-organising purpose - to the cosmos. And it is kind of dualistic in granting fundamental reality to a realm of sign or symbol, as well as matter or physics. But - as I understand it anyway - it is critical that nous is immanent and not transcendent. It is not about some spirit or external hand acting on an inanimate and purposeless world. Instead, pansemiosis is a theory of immanent self-organisation - the taking of habits that forms a cosmos obeying its own accumulated laws.

    I will now add an old post from PF that explains the recent biophysics that now directly supports the biosemiotic side of this argument....
  • What is life?
    You mean Apeiron, or even apokrisis? Or are you mixing up your Anaximanders and Aristotles? Easily done.
  • What is life?
    Anaximander's "Nous"Metaphysician Undercover

    Huh?
  • What is life?
    Any description of mind which uses psychological terms only as metaphor (e.g., accept, desire, rage, self, autonomy, as above) is inadequate, leading to confusion rather than clarity.Galuchat

    I said it seems like it rages ... and then specified why that could only be anthropomorphic projection because there is no internal semiotic model in play.
  • What is life?
    OK, fixed. :)
  • What does it mean to say that something is "heavy"?
    The bowling ball isn't actually heavy, it's just someone's subjective experience of the bowling ball which they find to be heavy. Yet, the statement says that the bowling ball exhibits the property of heaviness, which makes me think the claim is objective.TphalfT

    It's a good question as it does get to the heart of a big controversy.

    A simple answer is that yes, all properties are relative - and that in turn means relative to "some observer".

    So in the case of people lifting things, that observer is the particular thing of being some human making judgments. And two humans can routinely agree the bowling ball is heavier than the ping pong ball while also disagreeing about whether they themselves think a bowling ball is actually heavy - because they are strong and can lift far heavier things by comparison.

    But even in physics, relativity rules. Properties are relative to some context that speaks to what they are. The job of physical modelling is to discover the most invariant or unchanging notion of a stable reference frame from which the necessary measurements can be made. So instead of the observer being subjective - the view from some particular mind - the observer is treated as being objective ... the God's eye view that anyone making the same kind of measurement would see from anywhere in the Universe.

    So we have properties defined "relative to me" and "relative to the world". And when we talk about the bowling ball being heavy, we can be meaning either.
  • What is life?
    Science as it is now practiced is constitutionally incapable of incorporating mind, having gone to great lengths to exclude it from its reckonings.Wayfarer

    That's a bit harsh when science is all about placing empirical or observable constraints on metaphysical speculation. So the observer is included within the very epistemology of science - as the viewpoint which is to be constrained in some pragmatic/semiotic fashion.

    So you are criticising that science does not explain mind. But science exists to shape the mind. It is the reasoning mind in action with the benefit of a sharper method of practice. You want mind incorporated as a scientific output, when it is instead incorporated as the input - a way to refine the modelling that minds are there for.

    Now science can also produce theories of mind. A model of semiosis is a model of modelling. And forming a modelling relation with the world is what minds do. And it seems obvious that to be in such a modelling relation ought to feel like something. I mean logically, why would it not? Why would we expect being in a lived, intimate, modelling relation with the world to be simply zombie-style computation and not some particular expectation-driven point of view?

    So sure, mind science isn't moving towards the discovery of some kind of "mind stuff that lights up with consciousness" - a good old reductionist story of a dualistic mental material with awareness as a property. But mind science already can give a quite reasonable semiotic explanation for "qualia" as what it is like to be in a modelling relation that forms signs of things.

    CogSci had a computational or representational view of consciousness as some kind of data display or abstract symbol processing. But neurocognition has gone back to a more organismic or gestalt psychology understanding of mentality as being "ecological". This makes counter-intuitive but accurate predictions about modelling having the purpose of minimising the physical surprises that the world can impose on the mind, rather than the mind having some need to completely simulate the physical world as some mental simulacrum.

    Minds are maps of territories, so they are all about turning messy reality into some simple arrangements of signs, like the lines on a scrap of paper that simply represent in compact fashion a way to get about with the least effort or even thought.

    So in that sense, science is mind. It is map-drawing taken to another level of simplified habit. What you complain about as a bug - the vast reduction of information that science achieves in forming its models of the world - is its semiotic feature. To be more scientific is to be more mindful - if being a mind is about reducing the physical world's capacity to surprise or confound us to the bare minimum.
  • What is life?
    Apokrisis argument is that biological life perpetuates itself at the most fundamental levels by governing dissipative structures: intelligent data governing engines of the dissipation, but human minds themselves cannot readily be described as dissipative systems/structures.VagabondSpectre

    In my opinion, the best neuroscience model of the mind is Karl Friston's Bayesian Brain approach. And that does describe it as a semiotic dissipative structure - http://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20principle%20-%20a%20rough%20guide%20to%20the%20brain.pdf

    The mind as informational mechanism is all about reducing the uncertainty that a physical/material world has for an organism. So it is all about modelling that is intimately tied to physical regulation. And that is why a lack of such a tie makes artificial intelligence so impoverished - unless it is, as I argue, tied back into human entropic activities as yet a further level of semiosis.
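Friston's free-energy principle itself involves heavy mathematical machinery, but its core move, updating a belief by precision-weighted prediction error, can be shown in a toy Gaussian example. This is my own minimal sketch of that general idea, not Friston's actual model:

```python
def bayes_update(mu_prior, var_prior, observation, var_obs):
    """One Gaussian belief update: shift the prior toward the observation
    in proportion to the relative precision (inverse variance)."""
    prediction_error = observation - mu_prior
    gain = var_prior / (var_prior + var_obs)   # Kalman-style weighting
    mu_post = mu_prior + gain * prediction_error
    var_post = (1 - gain) * var_prior
    return mu_post, var_post

# An agent expecting a temperature of 20 with loose confidence,
# then observing 26 with a fairly precise sensor.
mu, var = bayes_update(mu_prior=20.0, var_prior=4.0,
                       observation=26.0, var_obs=1.0)

# The belief moves most of the way toward the surprising observation,
# and uncertainty shrinks: mean 20 -> 24.8, variance 4.0 -> 0.8.
assert abs(mu - 24.8) < 1e-9 and abs(var - 0.8) < 1e-9
```

Repeated over time, updates like this drive the residual prediction error, the "surprise", toward a minimum, which is the sense in which the mind here is an uncertainty-reducing mechanism.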
  • What is life?
    It is accepting formal and final cause as real at the cosmological level. Even if that is just the general desire for entropification served by the form of dissipative structure. And that does account for life and (actual) mind as biology is ultimately explained as dissipative structure.

    I agree that pansemiosis is still a speculative thought. Does it add anything or systematise our thought in any new useful way? And clearly it is a big difference that the interpretance forming non-living being is information outside that being, not information internalised as a model.

    So as I said about a tornado, it seems rather lifelike as it rages about a landscape. But it is being sustained by boundary conditions, not by any internal model that makes it a self with some degree of autonomy.

    But on the other hand, it feels important to shake up physicalist ontology rather boldly - to show that it is just as weird to call physics a matter of "material" as it is to call it "deadened mind".
  • What is life?
    To use the words means being able to cash them out as acts of measurement. So it is a semiotic coupling of models and measurements, concepts and percepts, interpretants and signs.

    If I use the word "cat" successfully - in ordinary language - it means we agree on some interpretation of a sign. And if there is semantic vagueness, I could draw you a picture of my conception, or point to some "actual cat" - perhaps point to that lion sitting over there on a mat, and say "except a lot smaller and friendlier, without the mane, etc". A whole lot of further measurables to constrain your state of conception.

    So on what side of the concept-percept or model-measurement divide does the semantic essence reside? Is it the theory that defines the cat, or the acts of measurement? Or the two functioning reliably/usefully/pragmatically together - over time? That is, the formal/final cause - as captured by the model - is coupled to the material/efficient cause, as captured by the acts of measurement (or the physical fact of "the sign of the thing" being triggered, so to speak).

    So we start in good Peircean fashion with an epistemology that is personal and covers pragmatic ordinary language use. We all have our private interests and can define our languages of thought. When I see cats in a "perceptual fashion", I might have all sorts of feelings of cuteness and loveableness. But you might look at them with fear, loathing or even indifference. We each come at the world through our own lens of self-interest, our own story of individuated self and private arrangements of purposes or desires.

    But when semiosis, through the syntactic machinery of speech, lifts such mindfulness to a communal level - humans as socially constructed creatures - then of course private meanings have to be now shaped by some common purpose (a group or cultural identity). And that means they must have a common form - the constraint of a common response to hearing a word like "cat" used "the right way".

    So there is an ideational essence of catness - the one that functions at the cultural level of mindfulness to act as the tacit model of what sufficiently conforms to our constraints-based definition of "a cat". And yes, that definition certainly seems hazy. It sort of includes quolls ... or tiger-cats. But that is no big deal. That is how constraints are meant to function - limiting the variety of semiotic interpretation to the point of indifference.

    There might still be further differences if we were to get out our metaphoric boundary atomising microscope - like the three-legged cat - but they don't in fact make an essential difference. At least within some community of ordinary language speakers (as opposed to the good folk running the local cat fanciers show who reject both the quoll and the three-legged critter you turned up with at the competition).

    So right. That establishes the epistemological side of the argument for "essences" as being the information that constrains interpretive uncertainty. And clearly such definitions of essence are loaded with self-interest. That is why they speak to formal/final cause as it plays out in minds. The essence includes our reason for looking at the world in some particular way. It is not necessarily a fact of the thing, just necessarily a fact of the pragmatic relation - the fact that speaking this way achieves a (communal) purpose in terms of interacting with the noumenal world (the thing that is resistant or causally "other" to our wishes - the material/efficient causes that we are seeking to control, in short).

    If you are with me this far - grind, grind, grind - then you will have already remembered how Peircean metaphysics then flips epistemology into ontology.

    If we now want to answer scientific/metaphysical strength questions about natural kinds or essences - talk about the facts of the thing-in-itself, with no distorting human lens of self-interested speech - then we have to have a model of how the physical world is itself a mind doing semiosis. We have to be able to find a way to model formal/final causes "for real". And that is when we start to focus on how nature is in general a self-organising entropic habit. It is modelling itself into existence via acts of measurement.

    So now the essence of a cat is whatever a cat genome says it is. To the degree the genome cares about the details. Then the essence of this cat here is whatever its neural or other developmental information has to say about the matter - to the degree that information sweats the fine print. Is the three-legged cat still a cat? As far as the three-legged cat is concerned, probably yes. And probably functionally for other cats who come across it.

    And then - if we can keep careful track of the information that stands for what is essential and necessary in terms of some individuated identity, not merely accidental or arbitrary differences that don't make a difference - we can cross the boundary between life and non-life to continue to put a finger on natural kinds or essences when talking about non-living systems, like weather patterns, plate tectonics, or stars.

    So for you, as an instinctive reductionist, the issue is where does essence get to enter the picture? And for me, as a holist, the question is turned around so that it is where does essence get squeezed out? If we are now talking about ontology - the real world - what is it like for it to be at its least mindful or purposive, its most accidental or meaningless?

    So we are chalk and cheese. My way sees nature as a unity. Even epistemology = ontology in rigorous fashion. Your way always leads to a division - and a division that doesn't even dare speak its own name at that. This is why your arguments always end up as muffled transcendence while claiming the cover of commonsense realism.

    Look, he exclaims, the cat is on the mat. If everyone's head turns and nods in agreement, honour is then satisfied. Meaning is use. Syntax is sufficient to demonstrate coordinated behaviour. Actual private semantics be damned as unreachable metaphysics.

    Philosophy by dog-whistle. It's just so seductively simple. And just so metaphysically wrong.
  • What is life?
    I had to grit my teeth in order to work my way through that post, ApoBanno

    That is really interesting information Banno - rolleyes....

    The difference seems to be that you continue to call this use, the "essence", while I don't.Banno

    Thanks for again illustrating the narcissistic essence of life and mind. Whatever else you don't know, you know you are right and all that remains to be determined is how everyone else is wrong. Anticipation-based world modelling in a nutshell.

    Get back to me if you have some more interesting reply to my arguments than that. Clue: four causes.
  • What is life?
    I put the case that biology succeeds despite not having a hard definition of the essence of life.Banno

    Definitions are never going to be hard if they have to track the crossing of some critical boundary. It is always going to be the case that the line between non-life and life is going to look hazy under the scientific microscope.

    So that is why your problem with metaphysical essence is so misguided. You think the essential difference has to be marked on reality as some binary borderline. On this side life, on the other side, not-life. And the arbitrary nature of such lines on a map is obvious.

    So yes, biology succeeds because it finds the essential in generalities - the constraints that speak to global or top-down formal/final causes.

    You are imagining the search for essence to be the search for local material/efficient causes - the usual atomist/reductionist approach to understanding "the real". And that then leads to a crazy "natural kinds/rigid designator" style essentialism. That is what promotes the argument that the stuff on one side of a material border must be "non-living", the other side "living", thus provoking a metaphysical implosion and logical crisis.

    But once you accept that generals are real, formal and final cause exist, and existence itself is simply a state of constraint on foundational vagueness, then the problem of "essence" goes away. We know we are trying to talk about different kinds of hylomorphic substances - different forms of material constraint. So it is something globally functional rather than locally material that we mean to pick out as defining the boundary between living and non-living matter.

    So that is why the semiotic approach to definition works. And is the one that theoretical biology keeps picking out, as your reference confesses...

    One working definition of ‘life’ that has become increasingly accepted within the origins-of-life community is the ‘chemical Darwinian’ definition. A careful formulation (Joyce, 1994a;b) is: ‘Life is a self-sustained chemical system capable of undergoing Darwinian evolution.’

    So life is different in that it localises formal/final cause. It is organismic in being able to remember the negentropic shape that is its entropic advantage.

    Non-living matter does not have this internal model of itself. Non-living matter is regular and self-similar only due to global information, or pan-semiosis.

    Dissipative structures do seem lifelike. A tornado seems to chase its way across a plane of temperature gradients, sustaining its vortex by "eating" the differences. Physico-chemical nature is ruled by all sorts of such growth and entropification processes. They have common forms - like vortexes and fractals. And they have a generic purpose - as encoded in the Laws of Thermodynamics. So - like even fire - they are sort of life-like ... in being pan-semiotic, or constrained in a global fashion by formal/final cause. But then the ability to internalise this kind of information - form a self-organising model of "self" - marks a functional crossing of a boundary.

    But again, if we are to put this under a microscope - ask about life as a natural kind - then we have to actually understand the question we want to ask from nature's own point of view.

    The whole point is the functional "having of a self-describing model" - the internalised information that is captured by a whole array of semiotic machinery, but principally genes, neurons, words (and now numbers). So life is semiotic modelling - internally generated constraints over less constrained non-living physico-chemical entropic flows. And now at the material borderline things look hazy because life only needs a stochastic cut-off point between what it - it itself - defines as living vs non-living, self vs non-self, regulated vs haphazard, meaningful vs meaningless.

    That is, in being a system able to interpret the differences that make a difference, the system defines its own border of indifference. We humans can stick life under a microscope and complain that this borderline looks hazy to us. But so what? In the "mind" of the organism, it has set its own probabilistic threshold in terms of what is "good enough" as the constitution of its material/efficient self. It has an idea of its formal/final essence. And that is what it is busy living out as an entropic process.

    So essence is use. ;)

    It is just that life is in fact defined by making essence personal. Essence for the physical world is its global identity in terms of formal/final cause. And essence for the biological world is information that is internalised to "a self". It is a local capacity to add constraints or bounds on entropic activity.

    I personally don't feel much need for "essence" as a term. It suffers from the substantive confusion I outlined. Substance was a theory of metaphysical hylomorphism - a "four causes" story about how formal/final cause acted to constrain material/efficient degrees of freedom. But then along came atomistic reductionism - in competition with Platonic religious spiritualism. That produced the familiar modern confusion of a substance dualism.

    Folk had to pick a side. Either reality was just material/efficient cause, or there was this other mystic stuff called formal/final cause. And Fregean logical atomism picked its side, pretty soon ran into a ditch, and was left to walk away from its own smoking wreck, muttering bitterly about nothing being certain except that if logical atomism couldn't make metaphysics work, that proved no-one could make it work.

    Meanwhile Peirce had already sketched out a much bigger four causes metaphysics that explained the hylomorphic divide in terms of semiosis. Instead of a mind-matter divide, he provided a sign-matter bridge. And now modern thermodynamics is cashing that out in information theoretic terms. We can actually make scientific measurements on both sides of the sign-matter division in terms of entropy or fundamental degrees of freedom.

    And as I've pointed out, definition is theory plus measurement. Definition can be precise to the degree we can make exact measurements of what we claim to believe. The metaphysics of the modern information theoretic approach at last does give us a fundamental measurement basis. And so biology - already a very recent discipline - has started to really move in the last 30 years.
  • What is life?
    So life can be defined as a natural kind, and yet that is not an implicit theory of essence? Ah, how you Fregean scholastics love dancing on your pinheads.
  • What is life?
    How to put it simply? I would say you are far too focused (like all AI enthusiasts) on the feat of replicating humans. But the semiotic logic here is that computation is about the amplification of human action. It is another level of cultural organisation that is emerging.

    So the issue is not can we make conscious machines. It is how will computational machinery expand or change humanity as an organism - take it to another natural level.

    It is still the case that there are huge fundamental hurdles to building a living and conscious machine. The argument about hardware stability is one. Another is about "data compression".

    To simulate protein folding - an NP-strength problem - takes a fantastic amount of computation just to get an uncertain approximation. But for life, the genes just have to stand back and let the folding physically happen. And this is a basic principle of biological "computation". At every step in the hierarchy of control, there is a simplification of the information required because the levels below are materially self-organising. (This is the hardware instability point seen from another angle.)

    So again, life and mind constantly shed information, which is why they are inherently efficient. But computation, being always dependent on simulation, needs to represent all the physics as information and can't erase any. So the data load just grows without end. And indeed, if it tries to represent actual dynamical criticality, infinite data is needed to represent the first step.

    Now of course any simulation can coarse grain the physics - introduce exactly the epistemic cut-offs by which biology saves on the need to represent its own physics. But because there is no actual physics involved now, it is a human engineering decision about how to coarse grain. So the essential link between what the program does and an underpinning physical flow of organisation that supports it is severed. The coarse graining is imposed on a physical reality (the universal machine that is the computer hardware) rather than being the dynamical outcome of some mass of chemistry and molecular structure - a happy working arrangement that fits some minimum informational state of constraint.

    Anyway. Again the point is about just how far off and wrongly orientated the whole notion of building machine life and mind is when it is just some imagined confection of data without real life physics. What is basic to life and mind is that the relation is semiotic. Every bit of information is about the regulation of some bit of physics. But a simulation is the opposite. No part of the simulation is ever directly about the physics. Even if you hook the simulation up to the world - as with machine learning - the actual interface in terms of sensors is going to be engineered. There will be a camera that measures light intensities in terms of pixels. Already the essential intimate two-way connection between information and physics has been artificially cut. Camera sensors have no ability to learn or anticipate or forget. They are fixed hardware designed by an engineer.

    OK. Now the other side of the argument. We should forget dreams of replicating life and mind using computation. But computation can take human social and material organisation to some next level. That is the bit which has a natural evolutionary semiotic logic.

    So sure, ANNs may be the approach which takes advantage of a more biological and semiotic architecture. You can start to get machine learning that is useful. But there is already an existing human system for that further level of information processing to colonise and amplify. So the story becomes about how that unfolds. In what way do we exploit the new technology - or find that it comes to harness and mould us?

    Again, this is why the sociology is important here. As individual people, we are already being shaped by the "technology" of language and the cultural level of regulation it enables. Humans are now shaped for radical physical instability - we have notions of freewill that mean we could just "do anything right now" in a material sense. And that instability is then what social level constructs are based on. Social information can harness it to create globally adaptive states of coherent action. The more we can think for ourselves, the more we can completely commit to some collective team effort.

    And AI would just repeat this deal at a higher level. It would be unnatural for AI to try to recreate the life and mind that already exists. What would be the point? But computation is already transforming human cultural organisation radically.

    So it is simply unambitious to speculate about artificial life and mind. Instead - if we want to understand our future - it is all about the extended mentality that is going to result from adding a further level of semiosis to the human social system.

    Computation is just going to go with that natural evolutionary flow. But you are instead focused on the question of whether computation could, at least theoretically, swim against it.

    I am saying even if theoretically it could, that is fairly irrelevant. Pay attention to what nature is likely to dictate when it comes to the emergence of computation as a further expression of semiotic system-level regulation.

    [EDIT] To sum it up, what isn't energetically favoured by physics ain't likely to happen. So computation fires the imagination as a world without energetic constraints. But technology still has to exist in the physical world. And those constraints are what the evolution of computation will reflect in the long run.

    Humans may think they are perfectly free to invent the technology in whatever way they choose. But human society itself is an economic machine serving the greater purpose of the second law. We are entrained to physical causality in a way we barely appreciate but is completely natural.

    So there are strong technological arguments against AI and AL. But even stronger here is that the very idea of going against nature's natural flow is the reason why the simple minded notion of building conscious machines - more freewilled individual minds - ain't going to be the way the future happens.
  • What is life?
    But some initial responses are: why is maths considered to be the order that arises as a consequence?Wayfarer

    Maths is the science of patterns. It is our modelling of pure form. So maths remains just a model of the thing in itself and not itself the thing.

    So I am not making an actually mystic Platonic point. In fact, our mathematical models are generally terribly reductionist - bottom up constructions with numbers as their atoms. So Scientism rules in maths too. But also, to be able to create these reductionist models - of forms! - maths has to be able to think holistically. So the informal or intuitive part of mathematical argument - the inspiration that makes the connections - does have to see the big picture which then gets collapsed to some more mechanistic description. That is how mathematical thought ends up with equations.

    I would have thought the source of the 'unreasonable efficacy of mathematics' is due to the fact that it is prior to the 'phenomenal domain' rather than a consequence of it - nearer to the source.Wayfarer

    But I am not saying one has to come before the other. Rather both reflect the same process - a summing over everything to discover what doesn't get self-cancelled away by the end. So the first place it has to happen is out there in the real physical world. It starts as ontology. But then epistemology finds itself having to recap the same developmental process - because that just is the essence of development as a process.

    So the surprise is that nature is a process. People normally think of it as a thing - an existence rather than a persistence. And then the process of modelling the world could only develop via the same logic. So that is why maths and reality look like mirror images. Each is a process - one ontic, the other epistemic.

    This is the basis of Peircean metaphysics - the reason why we might call the cosmos semiotic, or - your favourite - consider matter as deadened mind.

    Now I think the reason that this seems backwards is because nowadays it is naturally assumed that intelligence is a result of evolution. It's not something that appears until the last second, in cosmic terms, so intelligence itself is understood as a consequence. Whereas in traditional cosmology the origin of multiplicity is the unborn or unconditioned which is symbolised in various (and often highly divergent) ways in different philosophical traditions but which, suffice to say, is depicted as in some sense being mind-like. Of course that is deprecated nowadays because it sounds religious.Wayfarer

    As we have always agreed, Eastern metaphysics thinks this same general way about existence as a developmental process. Before the mechanistic mode of thought arose (to organise societies by democracy and law, then to harness the world with machines), everyone could see the natural logic of "dependent co-arising" as the basis of any metaphysics.

    And as long as you say intelligence rather than consciousness, then yes, it is quite possible to place that there right at the beginning in some true sense. To me, intelligence means formal and final cause - the having of a purpose and then the organisation that results to achieve it. And even if it is just the second law - a driving desire to entropify which then results in the particular mathematics of dissipative structures - then even scientists are saying that intelligence or intent was there from the start with the Big Bang.
  • What is life?
    You said that computation doesn't produce a steady-state system, and typically it doesn't. But does the mind produce a steady-state? I would say yes and no given the presumption that connected groups of neurons have persistence in some aspects of their structural networks (the neurons and connections approximating "cat" has somewhat coherent or permanent internal structure AFAIK), but parts of neuronal networks also exhibit growth and change over time to such a degree that the dynamics of the entire system also change.VagabondSpectre

    Again, this is why machines and organisms are at different ends of the spectrum (even if it is the same spectrum in some sense).

    So it is because biology can stabilise the unstable that it can easily absorb new learning. It is already a system of self-organising constraint. So it can afford to accept localised perturbations - new learning - without a danger of becoming generally destabilised.

    Machines by contrast are only as stable as their parts. They have to be engineered so their bits don't break. Because if anything important snaps, the machine simply stops. It can't fix itself. Some human has to call in the repair-man with a bag of replacement components.

    In machine learning - even with deliberate attempts at biological designs like anticipatory neural nets - this lack of the ability to stabilise the unstable shows in the central problems with building such machines. Like catastrophic forgetting. The clunky nature of the faux organicism means that a learning system can keep absorbing small differences until - unpredictably - the general state of coherence breaks down.

    A human brain can absorb an incredible variety of learning with the same circuits. A machine's learning is brittle and liable to buckle because the top-down stability only reaches a small way down. At some point, human designers have to introduce a cut-off and take over. Eventually a repair-man has to be there to fix the breakdown in the foundational hardware stability which the computer still needs, even if it has been pretending in software emulation that it doesn't.
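    The catastrophic forgetting point can be made concrete with a deliberately minimal sketch. Everything here is invented for illustration - a lone logistic classifier, two toy tasks with contradictory labelling rules, arbitrary learning settings - not any real ML system. Trained on the second task with no rehearsal of the first, the machine's only weights get overwritten and the old competence collapses:

```python
import math
import random

random.seed(0)

def make_task(sign, n=200):
    # 2-D points; the class label depends only on the sign of feature 0,
    # and that rule is flipped between the two tasks
    X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    y = [1.0 if sign * x0 > 0 else 0.0 for x0, _ in X]
    return X, y

def train(w, X, y, lr=0.5, epochs=200):
    # plain batch gradient descent on the logistic loss
    for _ in range(epochs):
        g0 = g1 = 0.0
        for (x0, x1), t in zip(X, y):
            p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1)))
            g0 += (p - t) * x0
            g1 += (p - t) * x1
        w[0] -= lr * g0 / len(y)
        w[1] -= lr * g1 / len(y)
    return w

def accuracy(w, X, y):
    hits = sum((w[0] * x0 + w[1] * x1 > 0) == (t > 0.5)
               for (x0, x1), t in zip(X, y))
    return hits / len(y)

XA, yA = make_task(+1)   # task A: positive feature 0 means class 1
XB, yB = make_task(-1)   # task B: the contradictory rule

w = [0.0, 0.0]
w = train(w, XA, yA)
acc_before = accuracy(w, XA, yA)   # near-perfect on task A
w = train(w, XB, yB)               # sequential learning, no rehearsal of A
acc_after = accuracy(w, XA, yA)    # task A competence has collapsed
```

    Nothing in the machine stabilises the old habit while the new one is being absorbed. The perturbation is global rather than localised - which is exactly the contrast with the brain.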

    We could train a single artificial neural network to recognize "cats" (by sound or image or something else), and I'm not suggesting that this artificial neural network would therefore be alive or conscious, but I am suggesting that this is the particular kind of state of affairs which forms the base unit of a greater intelligence which is not only able to identify cats, but associate meaning along with it.VagabondSpectre

    And this is always the engineer's argument. If I can build just this one simple stable bit - the cat pattern recognition algorithm - then that gives me the stability to add the next level of computational complexity. Eventually we must replicate whatever the heck it is that life and mind are actually doing.

    But this line of thought is fallacious for the reasons I've outlined. By continually deferring the stability issue - building it in bottom up rather than allowing it to emerge top-down - the engineer is never going to arrive at the destination of a machine in which all its stability comes top-down as stable information regulating critically unstable physics.

    I still don't understand why life and mind needs to be built on fundamental material instability or it ain't life/mind.VagabondSpectre

    OK, you get that information is so immaterial that it can't push the world very hard. So to have an effect, it must find the parts of the world which respond to the slightest possible push. It needs to work with material instability because it itself is just so very, very weak.

    Right. That entropic equation is then only definitional of life/mind as a central logical principle. It explains life/mind as semiotic mechanism. It shows how the price of informational stability is material instability. It is a trade-off - a way to mine a world that is overall rather materially stable by comparison.

    So the definitional strength argument is that life/mind is semiotic dissipative structure. Its essential characteristic is that it takes advantage of this particular informational stability vs material instability trade-off.

    I know why biological life needs extreme material instability, but do minds need it?VagabondSpectre

    Yep. So you can accept biology is semiotic dissipative structure, but you think intelligence or even consciousness is something else - like really complex information processing. The biological or hardware side of the story can be set aside. Computers just deal with the informational realm of symbol manipulation. Syntax can do it.

    But my argument is that all biology is regulated by information. There is "mind" operating even when it is just genetic information and not yet neural information. The genes are an anticipatory model of the organism. The neurons then put that model of "the organismic self" in a larger model of "the world".

    And we can see how that world modelling depends on instability at its very interface between self and world. Sensory receptors wouldn't be sensitive unless they were as unstable as possible. They have to be set up as switches that only respond to change in the world. And which stop responding as soon as the change stops. We don't hear the humming fridge because our neurons have already got bored with it. It is only if the fridge stops - data disappears - that they wake up again.
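    That receptor logic - fire on change, habituate to constancy - is simple enough to sketch in a few lines. This is a toy model; the adaptation rate and firing threshold are arbitrary numbers picked for illustration:

```python
def adaptive_receptor(signal, rate=0.2, threshold=0.5):
    # Fires only when the input departs from a slowly adapting baseline.
    # A constant stimulus fades into silence as the baseline catches up.
    baseline, spikes = 0.0, []
    for x in signal:
        spikes.append(abs(x - baseline) > threshold)
        baseline += rate * (x - baseline)   # habituate toward the input
    return spikes

# a fridge hum that switches on at step 5 and off again at step 25
signal = [0.0] * 5 + [1.0] * 20 + [0.0] * 10
spikes = adaptive_receptor(signal)
```

    The receptor produces a burst of spikes when the hum starts, falls silent while it drones on, then bursts again when it stops - the disappearance of data being just as much of a difference as its arrival.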

    So minds don't need the world to be unstable in the same way. Perception isn't metabolism. Although the way we think is focused on the affordances of the environment. We are evolved to look for the causal levers by which we can move the world with the least effort. So it all comes back to an economy of control. It is great that the world is also materially stable - we don't have to worry about controlling its existence. We can build a house on solid foundations - or a computer with sound engineering - and then just get on with living and dealing with the surprising. Mental instability is reserved for creative problem solving - not being so fixed in our habits that we can't try new smart ways to regulate the world with the least effort.

    And then to be able to have this kind of sensitivity, that has to be built in from the ground up - from the level of individual sensory receptors.

    So it would only really be from the next level up - the sociocultural one - that we get that biological story of informational stabilisation in search of material instability to regulate. A society depends on a bunch of people who might go off in any direction, yet the lightest touch can keep them all bound in some common direction. Just wave a flag - the simplest signal - and the group will follow.

    Again, this has implications for machine intelligence. If DeepMind is not good at having friends, being inspired by leaders, a natural at working in a team - all because it also has all the opposite potential of being moody, going off message, generally getting chaotic - then how is it ever going to simulate any actual human? A machine by definition is engineered for stability. Instability is the last thing we would engineer into DeepMind - or at least the kind of relationship instability that is critical for humans who are social creatures.

    And all our science fiction gets that. Machines are inhuman - the misfit dynamics of teamwork is the last thing they get. They are never in on the jokes, just tagging along with the human gang in bewilderment. Where there is no risk of individual instability, there can be no reward of collectivised stability. Humans by contrast live on a constant knife edge of fractiousness vs compassion. The smallest social thing can tilt them. Which is ... why humans are so fantastically controllable. Just wave a flag, say thank-you, hoist a finger, or offer any other gesture of minimal effort. The results will be hugely predictable. Behaviour is simple to co-ordinate when there is semiosis to regulate the instability and tilt it in the right general direction.
  • What is life?
    You often ask why nature is so mathematical. And the reason would be that maths (especially symmetry maths) can be considered to be the order that emerges once one has abstractly - metaphysically - summed over all possibilities. Maths starts with everything in an abstract way and winds up with what can't be subtracted away. So you arrive at triangles as you try to remove as many corners from a polygon as you can. Or, going in the other direction, at circles as you try to smooth away all the faces.

    The argument then is nature arises the same way. To the extent it is the constraint or erasure of "every possible action over all possible dimensionality", it would find its way to the same mathematical outcomes. Simplicity will out.

    So the standard model only has "problems" in scare quotes. It in fact gives a completely mathematical reason why there are quarks and electrons, for example. One is the result of the "eight-fold way" of breaking SU(3) chiral symmetry (the strong force). The other is the result of breaking the SU(2) symmetry of the weak force - the Higgs mechanism explaining how the four-fold way of SU(2) becomes completely broken down to the ultimate simplicity of U(1), the simplest possible kind of particle spin that is the electron with its electromagnetic field (or neutrino, without).

    So the standard model is a stellar success. But having understood the lowest energy modes, we still need to discover the original more complicated initial symmetry that the whole of the Universe might have cracked with its 3D Big Bang. The "problem" is that there are a lot of candidates, such as SU(5), SO(10) and E(8). And to test the different ways of crumbling this "supersymmetry" into the simpler bits that make our observable world - SU(3) and SU(2) and U(1) - we would have to be able to detect the various other particles that the different candidate Big Bang symmetries predict.

    So we could test for SO(10) say. It would have its own characteristic zoo of high energy particles (or excitation modes that exist because the "cosmic plasma" can still ring in a really complex higher dimensional way, and not just the much cooler and simpler way of a quark or electron).

    Thus the Standard Model accounts for the observed world with mathematical simplicity. It already proves that nature shakes itself down to be as simple as organisationally possible. It arrives at the simplest shapes - just like the Platonic solids.

    But the difficulty is to be able to make observations that then limit the earliest symmetry breaking - the configuration which was at the start of it when all forces (including maybe gravity) were "facets" of some still quite hot and multi-directional "vibration", and so still liable to spew out all sorts of weird higher-symmetry particles along with the much simpler ones that eventually came to dominate in a cold/expanded world.

    As such, the Standard Model is hardly a failure or in crisis. It stands above everything in science to show we have got creation's basic shtick right. Given the practical impossibility of doing experiments at Big Bang energies, we might hope to use pure maths to discover the foundational symmetry. Like string theory tried, we might just be able to figure it out by mathematical reasoning. This is still promising, but of course string theory resulted in an almost infinite number of initial symmetry conditions. And it doesn't yet have any definite mathematical reason to pick out just one. And experiment may never come to the rescue as we are essentially asking about what happened "just prior" to the Big Bang itself.

    So in just 500 years, science has managed to explain the stuff out of which everything observable has been made in terms of Platonically-necessary and maximally-simple mathematical principles. Pretty remarkable.

    And yes, there is still the issue of the physical constants. But at worst, that just means there are as many universes as there are different values for those constants (the majority of which would then be unstable and would rapidly cease to exist anyway). So the formal framework would still be the same - there can only be some "simplest symmetry-breaking" when it comes to the maths. But every viable arrangement of constants - scaling the coupling strengths of forces and the masses of particles - would survive to create a larger multiverse zoo of outcomes.

    On the other hand, the constants of our Universe might turn out to be as mathematically necessary as everything else. And why not? Is there some good a priori argument against it?

    But either way, you can see how maths might describe the Universe if both are the product of "sums over possibility". In each case, we can start with an everythingness that is every possibility. Then because much of that everythingness is then going to be parts contradicting some other part (like positive annihilating negative), pretty much everything falls away until we are only left with the simplest possible forms of organisation - the symmetries and symmetry-breakings which maths describes and the Universe physically embodies.
  • What is life?
    But that has led many people to assume that science somehow can explain those very same regularities, when really why there are such regularities is beyond physics - i.e. meta-physical.Wayfarer

    But that natural order is explicable in terms of an accumulation of history if we understand the mechanism of the critical phase transitions.

    So it is like our Universe being now in its water phase where before it was gaseous. Being watery imposes all sorts of material constraints that we can describe as "the laws of nature". But we can also understand why that is the case if we know about the gaseous phase from which water condensed. We can see how the world was once "a lot less lawful" and so how constraints got added.

    This is why modern physics and cosmology is so focused on symmetry and symmetry-breaking. That is a mathematical strength metaphysics of phase transitions. It describes in a generic fashion what must have been the case before to get what is observably the case after. Or indeed, what we could hope to observe again if we got matter in an accelerator and heated it up enough to reverse the breakings.
  • What is life?
    But Sam L. responded with the claim that matter follows the laws of gravity. That's why I pointed out the category error. The position being argued by VagabondSpectre, and apokrisis as well for that matter, is completely supported by this category error. Simply stated, the error is that existent material can interpret some fundamental laws, to structure itself in a self-organizing way. It is only through this error, that supporters of this position can avoid positing an active principle of "life", and vitalism.Metaphysician Undercover

    My position on the laws of physics is that - to avoid any mystery - laws are "material history". Laws are simply the constraints that accumulate as a system (even a whole Universe) develops its organisation.

    So that is how something global can be felt locally. The Universe has crystallised as some general material state. And that constrains all local actions in radical fashion from then on.

    This again is a big advantage of turning the usual notion of material existence on its head.

    The usual notion is that existence is the result of causal construction. First there was nothing, and then things got added. So that implies someone must have chosen the laws of nature. There was a law-giver who had some free choice and now somehow every object knows to obey the rules.

    But a Peircean semiotic metaphysics - one where existence develops as a habit - says instead everything is possible and then actuality arises by most of that possibility getting suppressed. So the universal laws are universal states of constraint - the historical removal of a whole bunch of possibility. The objects left at the end of the process are heavily restricted in their actions - and by the same token, they then enjoy the equally definite freedoms that thus remain.

    That is what Newtonianism was about. The motion of massive bodies is universally restricted so that it is only free, or inertial, if it is constant motion in a straight line or spinning on a spot (translational and rotational symmetry is preserved). So it is extreme restriction which underpins extreme freedom - the inertia that means a mass has some "actual physical properties", like a quantifiable position and momentum.

    So laws are a mystery in a "something from nothing" metaphysics. There seems no reason for the rules, and no connection between these abstractions and the concrete objects they determine.

    But a constraints-based holistic metaphysics says instead that laws are simply historically embedded material conditions. History fixes the world in general ways that then everywhere impinge as constraints on what can happen. But in doing that, those same constraints also underpin the freedoms that local objects can then call their own.
  • What is life?
    That's all fine, so is there a "unity", a "singularity" in The Big Bang EventPunshhh

    There would be a unity or symmetry. That is implied by the fact something could separate or break to become the "mutually exclusive and jointly exhaustive" two.

    But the further wrinkle is that the initial singular state is not really any kind of concrete state but instead a vagueness - an absence of any substantial thing in both the material and formal sense.

    This radical state of indeterminism is difficult to imagine. But so are many mathematical abstractions. And it is a retroductive metaphysical argument as we are working back from what we can currently observe - a divided world - to say something about what must have been the undivided origins.

    So note how our universe is limited to just three spatial directions. Going on the "everythingness" argument, there seems no reason to limit the number of directions before the Big Bang symmetry-breaking moment, when a vague everythingness became constrained. So the pre-Bang state would have been infinitely dimensional. Anything happening bled into an unlimited number of directions. And so nothing could really happen.

    There are good arguments for why the only stable arrangement of dimensions is three. Forces like gravity and EM dilute with the square of the distance. In a universe of fewer dimensions, force would remain too strong. In more dimensions, it evaporates too fast. So we can argue that there is something Goldilocks about three-dimensionality as having the best balance if you have to build a spacetime that is a dissipative structure, expanding and cooling by a steady thermal spread of its radiation.
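    The dilution argument can be sketched numerically. This is a toy illustration of my own, assuming an idealised inverse-power force law F ∝ 1/r^(d-1) in d spatial dimensions, not a calculation from the post:

```python
# Toy sketch: an inverse-power force spreads over the surface of a
# (d-1)-sphere, so its strength dilutes as 1/r**(d-1) in d dimensions.
def force(r: float, d: int) -> float:
    """Relative force strength at distance r in d spatial dimensions."""
    return 1.0 / r ** (d - 1)

# How much the force falls off between r=1 and r=10:
ratios = {d: force(10, d) / force(1, d) for d in (2, 3, 4)}
# 2D: 10x weaker, 3D: 100x weaker, 4D: 1000x weaker - fewer dimensions
# keep forces "too strong", more dimensions let them thin out too fast.
```

    So three dimensions sit between a regime where force barely thins with distance and one where it drains away almost immediately.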

    So from that, you can imagine the pre-Bang state being simply radiative fluctuations that instantly thermalise. Every attempt at action gets swallowed up instantly as it drains away in infinite directions rather than taking its time spreading out and thinning inside three dimensions.

    The Big Bang is thus more of a big collapse from infinite or unbounded directionality to the least number of dimensions that could become an eternal unwinding down towards a heat death.

    The details of this argument could be wrong of course. But it illustrates a way of thinking about origins that by-passes the usual causal problem of getting something out of nothing. If you start with vague everythingness (as what prevents everything being possible?) then you only need good arguments why constraints would emerge to limit this unbounded potential to some concrete thermalising arrangement - like our Big Bang/Heat Death universe.
  • What is life?
    Yep. I think apo is working on a theory of life that involves an unconscious signaler and an unconscious receiver. But maybe he didn't mean that, because that type of thing is pervasive in electronics.Mongrel

    Just keep making random shit up.
  • What is life?
    Electrical discharge along axons precedes the release of acetylcholine. I'm not sure why you're denying that. It's a science fact, dude.Mongrel

    You can dude all you like. But action potentials are not electron discharges.

    Ion flow regulated by voltage-gated channels is electrical in that a change in membrane potential at a point does cause a change in protein conformation, causing a pore to open. So a changed potential is a signal which the pore mechanically reads to continue a chain reaction of depolarisations.

    But sodium channel blockers don't stop electrons flowing across or along membranes, do they? They block the ability of pores to respond to the signal of a potential difference.
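    The "signal read by a pore" picture can be caricatured in code. This is a deliberately crude toy model of my own devising (the threshold and potential values are merely illustrative, not physiological modelling):

```python
# Toy model: each membrane patch fires only when it *reads* a
# suprathreshold potential next door - propagation is interpretation
# of a signal, not conduction of a current.
THRESHOLD = -55.0  # mV, assumed trigger value
RESTING = -70.0    # mV, assumed resting potential
PEAK = 40.0        # mV, assumed depolarised peak

def propagate(patches: int, blocked: int = -1) -> list:
    """Depolarise patch 0, then let each patch read its neighbour.

    `blocked` marks a patch whose channel (like one hit by a sodium
    channel blocker) cannot respond to the signal.
    """
    potentials = [RESTING] * patches
    potentials[0] = PEAK  # initial stimulus
    for i in range(1, patches):
        if i == blocked:
            continue  # pore can't read the signal; the chain breaks here
        if potentials[i - 1] > THRESHOLD:
            potentials[i] = PEAK
    return potentials
```

    Block one patch and everything downstream stays at rest, even though nothing has stopped any charge from moving - which is the point about channel blockers.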

    And in describing the machinery of neural signalling, the striking fact is not the electrical gradients (why would it be?) but the intricate semiotics of messaging involved.

    Eh.. I was an electronic engineer for 10 years. I've been a nurse for 10 years.Mongrel

    And I've written books on neuroscience.

    I believe you're suggesting that only a particular kind of material can be organized as a living thing. And this is somehow related to your understanding that life involves signs in a way that non-life does not.Mongrel

    It's not just my understanding.
  • What is life?
    It's often referred to as the neuro-endocrine system because the two function pretty thoroughly as a team in governing the body.Mongrel

    I think you will find that is BS. Triggering a gland is different from triggering a muscle. Even if "electrical discharge" is involved in neither.

    So just like botox and muscles, there is a reason why endocrine disruptors are chemicals like dioxins or plasticisers that mimic biological messages. It is not stray EM fields you have to worry about - even if the folk with tin-foil hats might tell you otherwise.
  • What is life?
    So botox works because it blocks tiny amounts of electricity and not large amounts of acetylcholine discharge?

    Cool. I never understood that before.
  • What is life?
    Neurons communicate with muscles, for instance, by electric discharge. Look into it. It's fascinating stuff.Mongrel

    You mean acetylcholine discharge? The muscle fibres know to contract because they get given a molecular message?

    And even if you are getting into the controversy of direct "electric synapses", it is still not about the conduction of an electrical current but a wave of membrane depolarisation - Na+ ions being allowed to flood in through the molecular machinery of membrane pores before being pumped out again to maintain a working gradient.

    So everywhere you look, you see semiotics at work - messages being acted upon as the way the hardware does things - not some simple current flow which has been modulated to carry a "signal" as a physical pattern.

    Think about it. A radio broadcast is a modulated carrier frequency. It encodes music and voices in a physical fashion that is simply a sign without interpretance. That pattern then drives some further set of amplifying circuitry and loudspeakers at the receiver end. So no matter how complicated or syntactic the physical pattern, zero semantics is happening as it flows. There is no "communicating".

    Biology is the opposite. The physics and the message are an interplay happening right where it all starts. The circuits are alive because the flow is a process of communication. The two sides - the electron transfers that drive the production of waste, and the proton gradients that do the meaningful work - are strictly separated so they can also crisply interact.

    So when you talk about "electrical discharge", that again sounds like you being vague so as to avoid getting into the complex semiotics that is actually taking place.

    Computers have electrical circuits. Humans have electrical circuits. So hey. Life is just chemistry and mind is just information processing. [Pats small child on the head and walks away.] :)

    Science fiction writers have long imagined silicon-based life forms, silicon and carbon being similar.Mongrel

    Fiction writers can take poetic licence with science. Science will point out the critical differences between silicon and carbon.

    Like the weakness of silicon bonds, which means you couldn't make large complex organic-style molecules. Or the unsuitability of silicon for redox metabolism, as its oxidised waste product is not a gas like CO2 but instead solid silicon oxides.

    And imagine having to excrete sand rather than CO2, which just leaks out of a cell.

    So your objection is all based on silicon+electricity being the wrong stuff in the sense of being the wrong electrochemical stuff. Do you not get that the "wrong stuff" is about it being the wrong stuff in lacking a potential for semiotic mechanism?

    Even if silicon life was limited by molecular complexity and also energetically constrained by the need to excrete solid waste, it could still exist - if it could implement actual nanoscale communication across an epistemic divide. Or be a semiotic "stuff" in other words.
  • What is life?
    Electricity is extensively utilized by living things.Mongrel

    That's a vague claim. Modern biophysics would agree that electron transport chains are vitally important as "entropic mechanism". But even more definitional would be proton gradients across membranes. It is those which are the more surprising fact at least.

    So it is the ability to separate the energy capture from the energy spending - the flow of entropy vs the flow of work - which is the meaningful basis of life.

    We can talk of a machine being driven by energy - because we are there to turn it off and on. But life has to build in that semiotic difference at the foundational level, down at the nanoscale, where a separation between entropy production and negentropic work has to be maintained via a physical or chemical difference.

    So again, silicon/electrons is just not that kind of stuff.
  • What is life?
    Is it really a fourth option, or just essentially an elaboration of the second option I listed?John

    It is different in that it explicitly embraces the holism of a dichotomy. It says reality is the result of a separation towards two definite and complementary poles of being - chance and necessity, material fluctuation and formal constraint, or what Peirce called tychism and synechism, that is, spontaneity and continuity.

    So you can't merely elaborate chance or fluctuation to build a world. Instead, the world emerges by the dialectic which separates chance fluctuations (like a particle decay) from the global constraints (like the experimental conditions that specify the observational context for said decay).

    So that is how quantum theory works. On the one side (inside the deterministic wavefunction description of a quantum system) you have all the indeterminism. A purity of spontaneity or uncertainty. Then on the other, you have the determining context - the observer's world - that serves to fix the wavefunction and thus give the quantum probability its very certain measurement basis.

    Thus if we are talking ontically - taking quantum theory as our cue - then the particle decays because its probability space was shaped a certain way. And by the same token, that wavefunction defines some scope of pure and unreachable uncertainty. True spontaneity is being manufactured - by virtue of the dialectic or symmetry-breaking which is the other side of things, the determining of an observational context.

    How could we talk about particle decays in the dense heat of the first instant of the big bang? In a thermal chaos without clear divisions, there is nowhere to definitely stand so as to be able to see something else definitely happen. The hot fog has to dissipate for events to become either classed as deterministic or spontaneous. You need a dark, cold void for it to become a thing that a particle has not decayed and so to have a statistical history that says something about the degree of spontaneity exhibited by the fact of its decay.

    So it is not just an elaboration of the claim that nature is fundamentally indeterministic. When considered in full, the argument is really that both spontaneity and its other emerge as crisply definite via a process of dialectical development or symmetry breaking. So quantum weirdness is a thing - only because local classicality is also a thing. And they both become more of a thing together as the cosmos expands and cools.
  • What is life?
    A computer which can work somewhat objectively in translating languages or a camera which takes a picture and records light data are not aware of the meaning contained within the data they manipulate and store, but they somewhat objectively work with that data none the less in a way that retains meaning.VagabondSpectre

    So this is syntax and not semantics.

    A computer can mechanically map a set of constraints specified in one language into the same set of constraints specified in another. A faithful translation like this is semantics-preserving. The constraints would still serve to reduce a mind's uncertainty in the same fashion. "Cat" and "chat" can mean the same thing in different languages because they are both verbal signs meant to limit their users to some common viewpoint, some common state of anticipation, of the feline variety.

    So - in Chinese Room fashion - machines can be constructed that "make the same interpretations" as we would, without having the faintest possibility of being minds that actually understand anything. The ability syntactically to manipulate signs in a "proper" fashion isn't actually functioning as a constraint on informational uncertainty in the machine. The machine has no such information entropy to be minimised. And it is that kind of information which is the semantic "data" that matters.
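    That purely syntactic mapping can be sketched in a few lines (the lexicon and names here are my own toy illustration):

```python
# A purely syntactic translator: it maps one sign to another by lookup,
# preserving the constraint a reader would experience, while the
# machine itself interprets nothing.
LEXICON = {"cat": "chat", "dog": "chien"}  # assumed toy lexicon
REVERSE = {v: k for k, v in LEXICON.items()}

def translate(word: str) -> str:
    """Substitute a sign for a sign - no semantics involved."""
    return LEXICON[word]
```

    The mapping is faithful - translating back recovers the original sign - yet at no point is there any uncertainty in the machine being constrained.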

    Again, you are thinking that computers are doing something that is mind-like. And so it is only a matter of time before that gets sufficiently scaled up that it approaches a real mind. But syntax can't generate semantics from syntactical data. Syntax has to be actually acting to constrain interpretive uncertainty.
    It has to be functioning as the sign by which a mind with a purpose is measuring something about the world.

    So syntax operates only as the interface between mind and world. It is the sign that mediates this living triadic relation.

    If I hear, or read, or think the word "cat", I understand it as a constraint on what I expect to experience, or imagine, or anticipate. I am suddenly feeling radically less uncertain or vague in my state of mind (it is now concretely infused with cat expectations). And so it can become a meaningful surprise that the critter I've just seen raiding the chicken house turns out to be a quoll. What I took to be the sign of a cat can return the truth value of "false" ... sort of, as the quoll is a little cat-like in its essential purpose, etc.
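    The "constraint on uncertainty" point can be put in Shannon's terms. A toy calculation (the prior and posterior numbers are invented purely for illustration):

```python
import math

def entropy(probs) -> float:
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the sign: the chook-raider could equally be cat, quoll, dog, possum.
prior = [0.25, 0.25, 0.25, 0.25]
# After hearing "cat": mostly cat, but a residual quoll possibility remains.
after_sign = [0.9, 0.1, 0.0, 0.0]

reduction = entropy(prior) - entropy(after_sign)
# Uncertainty drops sharply but never to zero - which is why the quoll
# can still deliver a meaningful surprise.
```

    The sign works by removing possibilities, not by adding content; what is left over is exactly the scope for surprise.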

    A computer could be designed to simulate this kind of triadic relation. That is what neural networks do. But they are very clunky or grainy. And getting more biologically realistic is not about the number of circuits to be thrown at the modelling of the world - dealing with the graininess of the syntactic-level representation - but about the lightness of touch or sensitivity of the model's interaction with the world. And so again, it is about a relation founded on extreme material instability.

    The more delicately poised between entropy and negentropy - falling apart and becoming organised - these interactions are, the more semantic information they contain. It is no surprise if a mechanical switch is still in the same position half an hour later, or a week later. That stability is engineered in. But if that switch is an organic molecule in constant thermal jitter, then the persistence of a state has to be deliberate and purposeful - maintained by an interpretive state of affairs that is holistically larger than itself.

    So any AI or AL argument based on "more circuits" is only talking about adding syntactic capacity. To add semantic capacity, it is this triadic or holistic semiotic relationship that matters. And it is "more criticality" that would be key to that. Which is not something to be added in fact. It has to become foundational to the very notion of a circuit or switch. The machine-like stability is something that has to be removed from the very stuff from which you are trying to construct your AI or AL.

    Again, this is not an easy argument to track as neural network approaches do try to simulate critical behaviour. That is why they are good at some tasks like pattern matching. But there is a big difference between faking criticality with software that runs on completely mechanical hardware, and actually doing what life/mind does, which is to exist in an entropically open relation with the world. Semantic information has to be organising the state of the hardware from the ground up. It has to run native, no emulators.

    And biophysics has arguments that only a certain kind of organic chemistry is the "right stuff" when it comes to creating this kind of living and mindful "machinery". AI/AL would have to be the same protoplasmic gunk from the ground up. Silicon and electricity are simply the wrong stuff for biophysical reasons.
  • What is life?
    What is required for deductive logic is that the use on the left be the same as the use on the right.Banno

    As if syntax were semantics.
  • What is life?
    But we have good telescopes. We can see the heat death already. The Universe is only a couple of degrees off absolute voidness. The average energy density is a handful of atoms per cubic metre. Nihilism is hardly speculation.
  • What is life?
    Still struggling with how this is not simply nihilism,Wayfarer

    Why does it have to be not nihilism? My argument is that the goal of the Cosmos is entropification. Then life and mind arise to accelerate that goal where it happens to have got locally retarded. So life and mind are the short-term cost of the Cosmos reaching its long-term goal.

    That's not just nihilism - the idea that our existence is cosmically meaningless. I am asserting we exist to positively pick up the pace of cosmic annihilation. So super-nihilism. :)
  • What is life?
    Given that life is an open system, and that the dissipative structures to which you allude depend on an influx of energy (in order to resist the second law of thermodynamics), where does hard indeterminism actually benefit the model?VagabondSpectre

    I've already said that these are two different issues - that the Cosmos itself might be indeterministic or vague "at base", and that life requires material indeterminism as the condition for being able to control material flows.

    I think both are true, but I am arguing for them separately.

    The electron transport train is what keeps life warm so to speak, but the self-organizing property of life's data goes beyond that to provide innovative direction well beyond mere random variance.VagabondSpectre

    Now you are conflating material states and information states. We might model material states as "data", but that doesn't mean that entropy is just information.

    Instead, the big deal in modern science is we can translate between matter and information using a common unit now. We can count both in terms of degrees of freedom. But that doesn't make them the same thing. Instead, they are opposite kinds of things (atoms of matter vs atoms of form). So there is a subtle duality that we shouldn't ignore by a conflation of terms.
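    One concrete version of that common unit is Landauer's bound, which prices a bit of erased information in joules. A sketch using the standard constant (this is my illustration of the matter/information translation, not something from the post):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact 2019 SI value)

def landauer_cost(temp_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at a given temperature."""
    return K_B * temp_kelvin * math.log(2)

room = landauer_cost(300.0)  # roughly 2.87e-21 J per bit near room temperature
```

    The translation works both ways - joules per bit, bits per joule - without making energy and information the same kind of thing.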

    If we boil this down, life is self-organizing information (and consumes energy to do it, and so requires abundance of fuel).VagabondSpectre

    Again this lumps levels that I want to keep apart. Dissipative structure occurs in non-living systems - like the atmospheric convection cells that are the weather. So we have to be able to distinguish the informational extra that life brings to harness dissipative structure towards private ends. The weather serves no higher person than the second law. Life is still ultimately entrained to the second law but also does form its own local purposes. And that is information of some new level of order. Which is in turn a significant enough disjunction to need its own terminological distinction.

    Learning digital information networks are also physical structures which give rise to physical complexity that can rival the complexity found in nano-scale biological machinery. Even though it all exists materially as stored charges (what we abstract as bits), the connections and relationships between these parts can grow in complexity by more efficiently utilizing and ordering it's bits rather than by acquiring more of them (although more bits doesn't hurt).VagabondSpectre

    But again you are ignoring the evidence that life is fundamentally different in seeking hardware instability of a kind that permits its informational control to exist. Digital hardware is just basically different in that it depends on instability being engineered out. Computers don't create their own steady-state environments. They have to be given them. But life does create its own steady-state environments. It makes them. So apples and oranges in the end.

    We don't have an AI yet capable of taking control over it's own existence (in the way that biological life does as a means of perpetuation), but I think that chasm is shrinking faster than most people realize.VagabondSpectre

    Again, my argument is that the chasm is not shrinking at all. There is no trend towards hardware designs with inherently unstable switches rather than inherently stable ones. Computing remains defined by its progress towards a lack of entropic limits on computation, not its steady progress towards computation that is entropically limited.

    So to sum up, I don't have a problem with the idea that computation can add another level to human semiotics. We can express our desires to build these kinds of "thinking machines" because for us it is meaningful.

    But it is another thing to think we are moving towards artificial mind or artificial life. And I just raise that new point about hardware instability as another definitional reason for how far we are from what we tend to claim about what we are doing in our computer science laboratories right now.
  • What is life?
    if some hidden, more fundamental, thing efficiently causes a particle to decay, then would that not beg the question as to what determines the hidden cause?John

    Yes, any tale of efficient/material causes suffers from infinite regress. Hence the need to posit an "unmoved mover" of some kind to ground being.

    One way to do that is to argue the unmoved mover exists in some foundational sense - like a creating God. But that begs a whole bunch of questions - like who made Him.

    So my own Peircean preference is to put the unmoved mover at the end of things - as the limit on being that development asymptotically approaches. That is, formal/final cause is Platonically what "drives" existence - except it is not a drive but the crystallisation of some "always necessary" state of global constraint.

    So formal/final cause is immanent and emergent - the regularity that results when everything tries to happen, but almost everything then is going to be self-contradicting and thus self-cancelling. If you can go left, you could have gone right. If you could be positively charged, you could be negative. And so as existence tries to express every possibility, it quickly reduces itself to some tiny organised arrangement of that which survives self-negation. A standard quantum path integral or sum over histories ontology in other words.

    That then puts at the beginning - as the initiating conditions, or the material/efficient cause - a state of pure potential or indeterminacy. A Peircean vagueness, firstness or tychism. A sea of unbounded spontaneous fluctuation - sort of like a hot big bang.

    So quantumly, as you approach the Planck scale that defines the Big Bang state, you do find that measurement loses its purchase on events and you are just left with "infinite fluctuation" as the answer to your questions about "what exists". The initiating conditions are not some unmoved mover, but the opposite - the unboundedly moving. The radically unlimited. And thus the purest stuff - a vague everythingness - that is exactly what logic requires as a precursor "state" for any immanent emergence of self-negating limits.

    I just mention all this as there is a fourth metaphysical option which gets beyond the problems presented by the others you mention. And it checks out scientifically - or at least that is what all the quantum evidence, dissipative structure theory, and condensed matter physics should by now suggest.

    So why does the particle decay spontaneously? If you look at it from this constraints based view, the particle is not some stable thing that needs a nudge to fall off some shelf. Instead it is already a bagged-up mess of fluctuations - a locally confined state of excitation. It seethes with necessary nudges. And it persists undecayed due to some wider environmental constraint that imposes a threshold on it just popping off right now. So there is a constant limitation (from a stable classical environment) on its decay that keeps it in existence - with a constant probability that that threshold gets breached by some "lucky" fluctuation among an uncounted number of such fluctuations that characterise the "inside" of the particle.
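    That "constant probability of a threshold breach" picture is just exponential decay, and a toy Monte Carlo makes the point: no hidden nudge is needed to explain any individual decay, yet the ensemble statistics are fixed by the constraint. (The numbers and seed here are arbitrary choices of mine.)

```python
import random

def survival_fraction(particles: int, steps: int, p_decay: float,
                      seed: int = 1) -> float:
    """Fraction of particles surviving `steps` rounds of a fixed
    per-step decay chance - each decay 'spontaneous', the statistics lawful."""
    rng = random.Random(seed)
    alive = particles
    for _ in range(steps):
        # Each survivor faces the same constraint-set probability anew.
        alive = sum(1 for _ in range(alive) if rng.random() > p_decay)
    return alive / particles

frac = survival_fraction(10_000, 20, 0.05)
# Expected to land close to 0.95**20 (about 0.36), the familiar
# exponential fall-off of radioactive decay.
```

    The lawfulness lives entirely in the fixed probability; which particular particle pops off when remains pure spontaneity.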

    Thus when we talk about the essence of a fundamental particle, it is really the environmental limits being imposed on a wild or vague state of material "everythingness" that define it. Its formal/final causes. And at the abstract level, that environment is mathematically described in purely formal terms - the self-limiting ways that a symmetry can be broken. Symmetry modelling speaks to the simplest possible options that would give matter some dichotomously definite identity - like spin left vs spin right, or break positive vs break negative.

    So in this view, the Cosmos as a whole would be a general symmetry breaking in which a vague everythingness became organised into some more limited state of definiteness by becoming crisply divided against itself - exactly as Anaximander outlined it at the dawn of recorded metaphysics.

    The unmoved mover is the simplicity of form that lies at the end of the trail (the Heat Death that is entropy's self-made sink). And the initiating conditions are the very possibility of a material fluctuation (without yet a direction or relative value). All that had to happen was a formless everythingness that negated itself to leave an irreducible residue of somethingness - which in the case of the Heat Death is a spacetime dimensional void filled with the least possible energy, just a blackbody thermal sizzle of quantum fluctuations now with a temperature of (asymptotically) zero degrees.
  • What is life?
    What you call a constraint on a definition I would describe as an additional term, changing the application.Banno

    I prefer my precise terminology. It makes it clear that adding constraints is the subtraction of possibilities. We are talking about the intersection of sets, not the union of sets - if one must resort to set theoretic talk.

    Your way of putting things is ambiguous. The change could be either logical-or or logical-and.

    So quolls are referred to as tiger cats. They are marsupials. We had one a year ago that would come once a month and have takeaway chicken, courtesy of my coop. When the Girl said things like "That cat took another chook last night", the meaning was clear.

    But one might add to the definition of cat "...and is not a marsupial", thus ruling out the use of "cat" to refer to quolls.

    Sure that "apophatic constraint" works for certain purposes, but it rules out a useful way to use the word "cat"; it would be improper to say that one use was "the correct use of cat".

    There is no essence of cat here; only different uses.
    Banno

    Cute story but full of holes. Just look how fast you slid from "tiger cat" - a common colonial term - to "cat". So quoll might equal tiger cat as a valid translation between a mispronounced Aboriginal word and a settler coinage. Both would point at the same animal. But to call a quoll or tiger cat a cat is another whole can of worms.

    The quoll is "sort of like a cat, but not really". We would have to be appealing to some more general notion of the essence of catness to create a union of two sets of observations. So rather than getting more precise - adding constraints to produce an intersection - we would be relaxing constraints to produce a union at a higher level of generality. It is the more abstracted essence of catness that we must have in mind to justify this turn of speech.

    So sure, the correct use of "cat" is flexible. We can step back to higher generality in a way that allows union operations - hey, quolls are rather cat-like in look and habit (or more like cats than rabbits, goats, chickens, and other animals we know from our homeland). Or we can add constraints to do the opposite. We can talk about all the cats that are also marsupials - and find the intersection is in fact the null set.
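    In set-theoretic terms the two moves look like this (toy attribute sets of my own invention):

```python
# Adding a defining constraint is intersection; relaxing to a looser,
# purpose-based grouping like "cat-like" is union at a higher level
# of generality.
cats = {"tabby", "siamese"}
marsupials = {"quoll", "possum"}
cat_like = cats | {"quoll"}  # the looser, niche-based grouping

no_such_thing = cats & marsupials   # "cats that are also marsupials"
broader = cats | cat_like           # the union admits the quoll
```

    The intersection is the null set, exactly as the apophatic constraint predicts, while the union only works by stepping up to the more general "cat-like".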

    Language is great because it doesn't get too caught up in levels of generality and particularity. Although it does of course employ qualifiers and suffixes (like -like and -ness and -icity) to add this logical distinction as necessary.

    But still, the Peircean approach does see the metaphysical essence of things as speaking to their formal and final cause. What unifies particulars is their purpose and rational organisation. So quolls would be like cats because the same body form is good for the same purpose, the same ecological niche. There actually is something in common that we might want to capture as a general X-ness. The needs of a small nocturnal carnivore are a constraint that acts on the genetics of both.

    So "apophatic constraint" doesn't in fact rule out the creative use of language. Instead it underpins it. And this is how I know you don't actually get it. It is only this kind language use that remains open-ended even when constraints are combined. Constraints merely limit proper interpretation.

    If we are talking about black cats, we might still be speaking of Miles Davis. "Black" and "cat" can have a whole host of associated meanings according to the communicative context. This essential open-endedness of a sign is not a problem unless you are wedded to a clunky set theoretic view of meaning where words must refer to some definite collection of things. Constraints can only reduce uncertainty, they don't ever eliminate it. That is why Peircean logic employs vagueness as a modality. It explains the inherent flexibility with which even the strictest syntax determines meaning. Semantics is irreducibly open-ended - yet also perfectly amenable to being apophatically bounded.

    I gather from the parenthetic comment that you are yourself not too happy with this terminology.Banno

    There was a spelling mistake there. I meant communicative intent and not communicative content.

    So again this relates to the Peircean view that essence is final cause or the purpose that shapes things. And the parenthetical point was the positive assertion that even speakers may be vaguer than their rather definite sounding speech acts imply.

    Speech is a creative act and syntax imposes apophatic constraint. We simply have to eliminate a lot of possible qualifications and hesitations we might have in mind to actually say something out loud in a communally acceptable fashion. And in contrary fashion, stating something aloud gives a proposition a crispness that may suddenly make us feel we are thinking with wonderful clarity now. We got our meaning exactly right. We were vague, but now we are not. Our intent is clear to us too because of the way grammar eliminates imprecision ... apparently.