Comments

  • What is life?
    How to put it simply? I would say you are far too focused (like all AI enthusiasts) on the feat of replicating humans. But the semiotic logic here is that computation is about the amplification of human action. It is another level of cultural organisation that is emerging.

    So the issue is not can we make conscious machines. It is how will computational machinery expand or change humanity as an organism - take it to another natural level.

    It is still the case that there are huge fundamental hurdles to building a living and conscious machine. The argument about hardware stability is one. Another is about "data compression".

    To simulate protein folding - an NP-strength problem - takes a fantastic amount of computation just to get an uncertain approximation. But for life, the genes just have to stand back and let the folding physically happen. And this is a basic principle of biological "computation". At every step in the hierarchy of control, there is a simplification of the information required because the levels below are materially self-organising. (This is the hardware instability point seen from another angle.)
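The scale of that computational burden can be made concrete with the standard "Levinthal paradox" back-of-envelope count. This is an illustrative sketch with invented numbers (three rotamer states per residue, a trillion samples a second), not a claim about any real folding simulator:

```python
# Illustrative only: naive conformational search vs. physical folding.
# Assumes ~3 backbone states per residue - a textbook simplification.

def conformation_count(residues, states_per_bond=3):
    """Size of the brute-force conformational search space."""
    return states_per_bond ** residues

def brute_force_years(residues, samples_per_second=1e12):
    """Time to enumerate every conformation at a trillion samples/second."""
    seconds = conformation_count(residues) / samples_per_second
    return seconds / (60 * 60 * 24 * 365)

# A modest 100-residue protein:
print(f"{conformation_count(100):.3e} conformations")
print(f"{brute_force_years(100):.3e} years to enumerate")
# Real proteins physically fold in milliseconds to seconds - the physics
# "computes" the answer without representing the search space at all.
```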

    So again, life and mind constantly shed information, which is why they are inherently efficient. But computation, being always dependent on simulation, needs to represent all the physics as information and can't erase any. So the data load just grows without end. And indeed, if it tries to represent actual dynamical criticality, infinite data is needed to represent the first step.

    Now of course any simulation can coarse grain the physics - introduce exactly the epistemic cut offs by which biology saves on the need to represent its own physics. But because there is no actual physics involved now, it is a human engineering decision about how to coarse grain. So the essential link between what the program does, and whether that is supported by the organisation that results in an underpinning physical flow, is severed. The coarse graining is imposed on a physical reality (the universal machine that is the computer hardware) and is not instead the dynamical outcome of some mass of chemistry and molecular structure which is a happy working arrangement that fits some minimum informational state of constraint.

    Anyway. Again the point is about just how far off and wrongly orientated the whole notion of building machine life and mind is when it is just some imagined confection of data without real life physics. What is basic to life and mind is that the relation is semiotic. Every bit of information is about the regulation of some bit of physics. But a simulation is the opposite. No part of the simulation is ever directly about the physics. Even if you hook the simulation up to the world - as with machine learning - the actual interface in terms of sensors is going to be engineered. There will be a camera that measures light intensities in terms of pixels. Already the essential intimate two-way connection between information and physics has been artificially cut. Camera sensors have no ability to learn or anticipate or forget. They are fixed hardware designed by an engineer.

    OK. Now the other side of the argument. We should forget dreams of replicating life and mind using computation. But computation can take human social and material organisation to some next level. That is the bit which has a natural evolutionary semiotic logic.

    So sure, ANNs may be the architecture which takes advantage of a more biological and semiotic approach. You can start to get machine learning that is useful. But there is already an existing human system for that further level of information processing to colonise and amplify. So the story becomes about how that unfolds. In what way do we exploit the new technology - or find that it comes to harness and mould us?

    Again, this is why the sociology is important here. As individual people, we are already being shaped by the "technology" of language and the cultural level of regulation it enables. Humans are now shaped for radical physical instability - we have notions of freewill that mean we could just "do anything right now" in a material sense. And that instability is then what social level constructs are based on. Social information can harness it to create globally adaptive states of coherent action. The more we can think for ourselves, the more we can completely commit to some collective team effort.

    And AI would just repeat this deal at a higher level. It would be unnatural for AI to try to recreate the life and mind that already exists. What would be the point? But computation is already transforming human cultural organisation radically.

    So it is simply unambitious to speculate about artificial life and mind. Instead - if we want to understand our future - it is all about the extended mentality that is going to result from adding a further level of semiosis to the human social system.

    Computation is just going to go with that natural evolutionary flow. But you are instead focused on the question of whether computation could, at least theoretically, swim against it.

    I am saying even if theoretically it could, that is fairly irrelevant. Pay attention to what nature is likely to dictate when it comes to the emergence of computation as a further expression of semiotic system-level regulation.

    [EDIT] To sum it up, what isn't energetically favoured by physics ain't likely to happen. So computation fires the imagination as a world without energetic constraints. But technology still has to exist in the physical world. And those constraints are what the evolution of computation will reflect in the long run.

    Humans may think they are perfectly free to invent the technology in whatever way they choose. But human society itself is an economic machine serving the greater purpose of the second law. We are entrained to physical causality in a way we barely appreciate but is completely natural.

    So there are strong technological arguments against AI and AL. But even stronger here is that the very idea of going against nature's natural flow is the reason why the simple minded notion of building conscious machines - more freewilled individual minds - ain't going to be the way the future happens.
  • What is life?
    But some initial responses are: why is maths considered to be the order that arises as a consequence?Wayfarer

    Maths is the science of patterns. It is our modelling of pure form. So maths remains just a model of the thing in itself and not itself the thing.

    So I am not making an actually mystic Platonic point. In fact, our mathematical models are generally terribly reductionist - bottom up constructions with numbers as their atoms. So Scientism rules in maths too. But also, to be able to create these reductionist models - of forms! - maths has to be able to think holistically. So the informal or intuitive part of mathematical argument - the inspiration that makes the connections - does have to see the big picture which then gets collapsed to some more mechanistic description. That is how mathematical thought ends up with equations.

    I would have thought the source of the 'unreasonable efficacy of mathematics' is due to the fact that it is prior to the 'phenomenal domain' rather than a consequence of it - nearer to the source.Wayfarer

    But I am not saying one has to come before the other. Rather both reflect the same process - a summing over everything to discover what doesn't get self-cancelled away by the end. So the first place it has to happen is out there in the real physical world. It starts as ontology. But then epistemology finds itself having to recap the same developmental process - because that just is the essence of development as a process.

    So the surprise is that nature is a process. People normally think of it as a thing - an existence rather than a persistence. And then the process of modelling the world could only develop via the same logic. So that is why maths and reality look like mirror images. Each is a process - one ontic, the other epistemic.

    This is the basis of Peircean metaphysics - the reason why we might call the cosmos semiotic, or - your favourite - consider matter as deadened mind.

    Now I think the reason that this seems backwards is because nowadays it is naturally assumed that intelligence is a result of evolution. It's not something that appears until the last second, in cosmic terms, so intelligence itself is understood as a consequence. Whereas in traditional cosmology the origin of multiplicity is the unborn or unconditioned which is symbolised in various (and often highly divergent) ways in different philosophical traditions but which, suffice to say, is depicted as in some sense being mind-like. Of course that is deprecated nowadays because it sounds religious.Wayfarer

    As we have always agreed, Eastern metaphysics thinks this same general way about existence as a developmental process. Before the mechanistic mode of thought arose (to organise societies by democracy and law, then to harness the world with machines), everyone could see the natural logic of "dependent co-arising" as the basis of any metaphysics.

    And as long as you say intelligence rather than consciousness, then yes, it is quite possible to place that there right at the beginning in some true sense. To me, intelligence means formal and final cause - the having of a purpose and then the organisation that results to achieve it. And even if it is just the second law - a driving desire to entropify which then results in the particular mathematics of dissipative structures - then even scientists are saying that intelligence or intent was there from the start with the Big Bang.
  • What is life?
    You said that computation doesn't produce a steady-state system, and typically it doesn't. But does the mind produce a steady-state? I would say yes and no given the presumption that connected groups of neurons have persistence in some aspects of their structural networks (the neurons and connections approximating "cat" has somewhat coherent or permanent internal structure AFAIK), but parts of neuronal networks also exhibit growth and change overtime to such a degree that the dynamics of the entire system also change.VagabondSpectre

    Again, this is why machines and organisms are at different ends of the spectrum (even if it is the same spectrum in some sense).

    So it is because biology can stabilise the unstable that it can easily absorb new learning. It is already a system of self-organising constraint. So it can afford to accept localised perturbations - new learning - without a danger of becoming generally destabilised.

    Machines by contrast are only as stable as their parts. They have to be engineered so their bits don't break. Because if anything important snaps, the machine simply stops. It can't fix itself. Some human has to call in the repair-man with a bag of replacement components.

    In machine learning - even with deliberate attempts at biological designs like anticipatory neural nets - this lack of the ability to stabilise the unstable shows in the central problems with building such machines. Like catastrophic forgetting. The clunky nature of the faux organicism means that a learning system can keep absorbing small differences until - unpredictably - the general state of coherence breaks down.

    A human brain can absorb an incredible variety of learning with the same circuits. A machine's learning is brittle and liable to buckle because the top-down stability only reaches a small way down. At some point, human designers have to introduce a cut-off and take over. Eventually a repair-man has to be there to fix the breakdown in foundational hardware stability which the computer still needs, even if it has been pretending in software emulation that it doesn't.
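Catastrophic forgetting can be shown in miniature. This is a toy, single-weight sketch with invented tasks and learning rates - nothing like a real ANN, but it shows the failure mode: training on a new task silently overwrites the old one.

```python
# Toy illustration of catastrophic forgetting (invented numbers, not a
# model of any real network architecture).

def train(w, pairs, lr=0.1, epochs=200):
    """Plain gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in pairs:
            w -= lr * 2 * (w * x - y) * x
    return w

def error(w, pairs):
    return sum((w * x - y) ** 2 for x, y in pairs)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # target: w = 2
task_b = [(1.0, -3.0), (2.0, -6.0)]  # target: w = -3

w = train(0.0, task_a)
err_a_before = error(w, task_a)   # near zero: task A learned

w = train(w, task_b)              # now train only on task B
err_a_after = error(w, task_a)    # large: task A has been overwritten

print(err_a_before, err_a_after)
```

The point of the sketch is that nothing in the system notices the destruction: the weight happily follows the new gradient, and the old competence just evaporates.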

    We could train a single artificial neural network to recognize "cats" (by sound or image or something else), and I'm not suggesting that this artificial neural network would therefore be alive or conscious, but I am suggesting that this is the particular kind of state of affairs which forms the base unit of a greater intelligence which is not only able to identify cats, but associate meaning along with it.VagabondSpectre

    And this is always the engineer's argument. If I can build just this one simple stable bit - the cat pattern recognition algorithm - then that gives me the stability to add the next level of computational complexity. Eventually we must replicate whatever the heck it is that life and mind are actually doing.

    But this line of thought is fallacious for the reasons I've outlined. By continually deferring the stability issue - building it in bottom up rather than allowing it to emerge top-down - the engineer is never going to arrive at the destination of a machine in which all its stability comes top-down as stable information regulating critically unstable physics.

    I still don't understand why life and mind needs to be built on fundamental material instability or it ain't life/mind.VagabondSpectre

    OK, you get that information is so immaterial that it can't push the world very hard. So to have an effect, it must find the parts of the world which respond to the slightest possible push. It needs to work with material instability because it itself is just so very, very weak.

    Right. That entropic equation is then only definitional of life/mind as a central logical principle. It explains life/mind as semiotic mechanism. It shows how the price of informational stability is material instability. It is a trade-off - a way to mine a world that is overall rather materially stable by comparison.

    So the definitional strength argument is that life/mind is semiotic dissipative structure. Its essential characteristic is that it takes advantage of this particular informational stability vs material instability trade-off.

    I know why biological life needs extreme material instability, but do minds need it?VagabondSpectre

    Yep. So you can accept biology is semiotic dissipative structure, but you think intelligence or even consciousness is something else - like really complex information processing. The biological or hardware side of the story can be set aside. Computers just deal with the informational realm of symbol manipulation. Syntax can do it.

    But my argument is that all biology is regulated by information. There is "mind" operating even when it is just genetic information and not yet neural information. The genes are an anticipatory model of the organism. The neurons then put that model of "the organismic self" in a larger model of "the world".

    And we can see how that world modelling depends on instability at its very interface between self and world. Sensory receptors wouldn't be sensitive unless they were as unstable as possible. They have to be set up as switches that only respond to change in the world. And which stop responding as soon as the change stops. We don't hear the humming fridge because our neurons have already got bored with it. It is only if the fridge stops - data disappears - that they wake up again.
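A minimal sketch of such an adapting receptor - one that signals only change, so a steady stimulus (the humming fridge) fades to silence. The differencing model is my simplification, not a claim about real receptor physiology:

```python
# Illustrative change-detector: a receptor that reports differences,
# not levels. A constant stimulus produces no response.

def receptor_response(stimulus):
    """Emit the difference between successive stimulus values."""
    responses = []
    previous = 0.0
    for s in stimulus:
        responses.append(s - previous)  # responds to change, not level
        previous = s
    return responses

fridge = [0.0] + [1.0] * 5 + [0.0] * 3  # hum starts, persists, then stops
print(receptor_response(fridge))
# The onset gives a spike, the steady hum gives zeros, and the hum
# *stopping* gives another spike - data disappearing is itself an event.
```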

    So minds don't need the world to be unstable in the same way. Perception isn't metabolism. Although the way we think is focused on the affordances of the environment. We are evolved to look for the causal levers by which we can move the world with the least effort. So it all comes back to an economy of control. It is great that the world is also materially stable - we don't have to worry about controlling its existence. We can build a house on solid foundations - or a computer with sound engineering - and then just get on with living and dealing with the surprising. Mental instability is reserved for creative problem solving - not being so fixed in our habits that we can't try new smart ways to regulate the world with the least effort.

    And then to be able to have this kind of sensitivity, that has to be built in from the ground up - from the level of individual sensory receptors.

    So it would only really be from the next level up - the sociocultural one - that we get that biological story of informational stabilisation in search of material instability to regulate. A society depends on a bunch of people who might go off in any direction, yet the lightest touch can keep them all bound in some common direction. Just wave a flag - the simplest signal - and the group will follow.

    Again, this has implications for machine intelligence. If DeepMind is not good at having friends, being inspired by leaders, a natural at working in a team - all because it also has all the opposite potential of being moody, going off message, generally getting chaotic - then how is it ever going to simulate any actual human? A machine by definition is engineered for stability. Instability is the last thing we would engineer into DeepMind - or at least the kind of relationship instability that is critical for humans who are social creatures.

    And all our science fiction gets that. Machines are inhuman - the misfit dynamics of teamwork is the last thing they get. They are never in on the jokes, just tagging along with the human gang in bewilderment. Where there is no risk of individual instability, there can be no reward of collectivised stability. Humans by contrast live on a constant knife edge of fractiousness vs compassion. The smallest social thing can tilt them. Which is ... why humans are so fantastically controllable. Just wave a flag, say thank-you, hoist a finger, or offer any other gesture of minimal effort. The results will be hugely predictable. Behaviour is simple to co-ordinate when there is semiosis to regulate the instability and tilt it in the right general direction.
  • What is life?
    You often ask why nature is so mathematical. And the reason would be that maths (especially symmetry maths) can be considered to be the order that emerges once one has abstractly - metaphysically - summed over all possibilities. Maths starts with everything in an abstract way and winds up with what can't be subtracted away. So you arrive at triangles as you try to remove as many corners from a polygon as you can. Or, in the other direction, at circles as you smooth away every face.

    The argument then is nature arises the same way. To the extent it is the constraint or erasure of "every possible action over all possible dimensionality", it would find its way to the same mathematical outcomes. Simplicity will out.

    So the Standard Model is said to have "problems", yet it in fact gives a completely mathematical reason why there are quarks and electrons, for example. One is the result of the "eight-fold way" of breaking SU(3) chiral symmetry (the strong force). The other is the result of breaking the SU(2) symmetry of the weak force - the Higgs mechanism explaining how the four-fold way of SU(2) becomes completely broken down to the ultimate simplicity of U(1), the simplest possible kind of particle spin that is the electron with its electromagnetic field (or neutrino, without).

    So the standard model is a stellar success. But having understood the lowest energy modes, we still need to discover the original more complicated initial symmetry that the whole of the Universe might have cracked with its 3D Big Bang. The "problem" is that there are a lot of candidates, such as SU(5), SO(10) and E(8). And to test the different ways of crumbling this "supersymmetry" into the simpler bits that make our observable world - SU(3) and SU(2) and U(1) - we would have to be able to detect the various other particles that the different candidate Big Bang symmetries predict.
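The "crumbling into simpler bits" can be made concrete by counting group generators (the standard dimension formulas: dim SU(n) = n² − 1, dim SO(n) = n(n − 1)/2). The arithmetic below is just that textbook counting exercise, offered as an illustration of how much richer the candidate unification groups are than what survives:

```python
# Counting generators (force carriers) of the candidate symmetry groups.
# dim SU(n) = n^2 - 1; dim SO(n) = n(n-1)/2 - standard Lie group facts.

def dim_su(n):
    return n * n - 1

def dim_so(n):
    return n * (n - 1) // 2

# The observed Standard Model gauge group: SU(3) x SU(2) x U(1)
sm = dim_su(3) + dim_su(2) + 1   # 8 gluons + 3 weak bosons + 1 photon-like
print(sm, dim_su(5), dim_so(10))  # 12 vs the 24 of SU(5), 45 of SO(10)
```

The gap between 12 and 24 (or 45) is exactly the "zoo of high energy particles" that a hotter, less broken symmetry would support.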

    So we could test for SO(10) say. It would have its own characteristic zoo of high energy particles (or excitation modes that exist because the "cosmic plasma" can still ring in a really complex higher dimensional way, and not just the much cooler and simpler way of a quark or electron).

    Thus the Standard Model accounts for the observed world with mathematical simplicity. It already proves that nature shakes itself down to be as simple as organisationally possible. It arrives at the simplest shapes - just like the Platonic solids.

    But the difficulty is to be able to make observations that then limit the earliest symmetry breaking - the configuration which was at the start of it when all forces (including maybe gravity) were "facets" of some still quite hot and multi-directional "vibration", and so still liable to spew out all sorts of weird higher-symmetry particles along with the much simpler ones that eventually came to dominate in a cold/expanded world.

    As such, the Standard Model is hardly a failure or in crisis. It stands above everything in science to show we have got creation's basic shtick right. Given the practical impossibility of doing experiments at Big Bang energies, we might hope to use pure maths to discover the foundational symmetry. Like string theory tried, we might just be able to figure it out by mathematical reasoning. This is still promising, but of course string theory resulted in an almost infinite number of initial symmetry conditions. And it doesn't yet have any definite mathematical reason to pick out just one. And experiment may never come to the rescue as we are essentially asking about what happened "just prior" to the Big Bang itself.

    So in just 500 years, science has managed to explain the stuff out of which everything observable has been made in terms of Platonically-necessary and maximally-simple mathematical principles. Pretty remarkable.

    And yes, there is still the issue of the physical constants. But at worst, that just means there are as many universes as there are different values for those constants (the majority of which would then be unstable and rapidly inexistent anyway). So the formal framework would still be the same - there can only be some "simplest symmetry-breaking" when it comes to the maths. But every survivable arrangement of constants to scale the coupling strength of forces, and masses of particles, would survive to create a larger multiverse zoo of outcomes.

    On the other hand, the constants of our Universe might turn out to be as mathematically necessary as everything else. And why not? Is there some good apriori argument against it?

    But either way, you can see how maths might describe the Universe if both are the product of "sums over possibility". In each case, we can start with an everythingness that is every possibility. Then because much of that everythingness is then going to be parts contradicting some other part (like positive annihilating negative), pretty much everything falls away until we are only left with the simplest possible forms of organisation - the symmetries and symmetry-breakings which maths describes and the Universe physically embodies.
  • What is life?
    But that has lead many people to assume that science somehow can explain those very same regularities, when really why there are such regularities is beyond physics - i.e. meta-physical.Wayfarer

    But that natural order is explicable in terms of an accumulation of history if we understand the mechanism of the critical phase transitions.

    So it is like our Universe being now in its water phase where before it was gaseous. Being watery imposes all sorts of material constraints that we can describe as "the laws of nature". But we can also understand why that is the case if we know about the gaseous phase from which water condensed. We can see how the world was once "a lot less lawful" and so how constraints got added.

    This is why modern physics and cosmology is so focused on symmetry and symmetry-breaking. That is a mathematical strength metaphysics of phase transitions. It describes in a generic fashion what must have been the case before to get what is observably the case after. Or indeed, what we could hope to observe again if we got matter in an accelerator and heated it up enough to reverse the breakings.
  • What is life?
    But Sam L. responded with the claim that matter follows the laws of gravity. That's why I pointed out the category error. The position being argued by VagabondSpectre, and apokrisis as well for that matter, is completely supported by this category error. Simply stated, the error is that existent material can interpret some fundamental laws, to structure itself in a self-organizing way. it is only through this error, that supporters of this position can avoid positing an active principle of "life", and vitalism.Metaphysician Undercover

    My position on the laws of physics is that - to avoid any mystery - laws are "material history". Laws are simply the constraints that accumulate as a system (even a whole Universe) develops its organisation.

    So that is how something global can be felt locally. The Universe has crystalised as some general material state. And that constrains all local actions in radical fashion from then on.

    This again is a big advantage of turning the usual notion of material existence on its head.

    The usual notion is that existence is the result of causal construction. First there was nothing, and then things got added. So that implies someone must have chosen the laws of nature. There was a law-giver who had some free choice and now somehow every object knows to obey the rules.

    But a Peircean semiotic metaphysics - one where existence develops as a habit - says instead everything is possible and then actuality arises by most of that possibility getting suppressed. So the universal laws are universal states of constraint - the historical removal of a whole bunch of possibility. The objects left at the end of the process are heavily restricted in their actions - and by the same token, they then enjoy the equally definite freedoms that thus remain.

    That is what Newtonianism was about. The motion of massive bodies is universally restricted so that it is only free, or inertial, if it is constant motion in a straight line or spinning on a spot (translational and rotational symmetry is preserved). So it is extreme restriction which underpins extreme freedom - the inertia that means a mass has some "actual physical properties", like a quantifiable position and momentum.

    So laws are a mystery in a "something from nothing" metaphysics. There seems no reason for the rules, and no connection between these abstractions and the concrete objects they determine.

    But a constraints-based holistic metaphysics says instead that laws are simply historically embedded material conditions. History fixes the world in general ways that then everywhere impinge as constraints on what can happen. But in doing that, those same constraints also underpin the freedoms that local objects can then call their own.
  • What is life?
    That's all fine, so is there a "unity", a "singularity" in The Big Bang EventPunshhh

    There would be a unity or symmetry. That is implied by the fact something could separate or break to become the "mutually exclusive and jointly exhaustive" two.

    But the further wrinkle is that the initial singular state is not really any kind of concrete state but instead a vagueness - an absence of any substantial thing in both the material and formal sense.

    This radical state of indeterminism is difficult to imagine. But so are many mathematical abstractions. And it is a retroductive metaphysical argument as we are working back from what we can currently observe - a divided world - to say something about what must have been the undivided origins.

    So note how our universe is limited to just three spatial directions. Going on the "everythingness" argument, there seems no reason for any limit on directions before the Big Bang symmetry-breaking moment, when a vague everythingness got constrained. So the pre-Bang would be infinitely dimensional. Anything happening bled into an unlimited number of directions. And so nothing could really happen.

    There are good arguments for why the only stable arrangement of dimensions is three. Forces like gravity and EM dilute with the square of the distance. In a universe of less dimensions, force would remain too strong. In more dimensions, it evaporates too fast. So we can argue that there is something Goldilocks about three dimensionality as having the best balance if you have to build a spacetime that is a dissipative structure, expanding and cooling by a steady thermal spread of its radiation.
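The dilution argument generalises simply: a conserved flux spreading over a (d−1)-sphere gives a field strength falling as 1/r^(d−1). The numbers below are mine, chosen only to make the scaling visible:

```python
# Hedged illustration: how field strength dilutes with distance in
# different numbers of spatial dimensions.

def field_strength(r, dims):
    """Relative field strength at radius r in `dims` spatial dimensions."""
    return 1.0 / r ** (dims - 1)

for dims in (2, 3, 4):
    print(dims, [round(field_strength(r, dims), 4) for r in (1, 2, 4, 8)])
# d=2: force barely dilutes (1/r) - too strong at range.
# d=3: the familiar inverse square law - the Goldilocks balance.
# d=4: 1/r^3 - force evaporates too fast to bind stable structure.
```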

    So from that, you can imagine the pre-bang state being simply radiative fluctuations that instantly thermalise. Every attempt at action gets swallowed up instantly as it is draining in infinite directions and not taking its time spreading out and thinning inside three dimensions.

    The Big Bang is thus more of a big collapse from infinite or unbounded directionality to the least number of dimensions that could become an eternal unwinding down towards a heat death.

    The details of this argument could be wrong of course. But it illustrates a way of thinking about origins that by-passes the usual causal problem of getting something out of nothing. If you start with vague everythingness (as what prevents everything being possible?) then you only need good arguments why constraints would emerge to limit this unbounded potential to some concrete thermalising arrangement - like our Big Bang/Heat Death universe.
  • What is life?
    Yep. I think apo is working on a theory of life that involves an unconscious signaler and an unconscious receiver. But maybe he didn't mean that, because that type of thing is pervasive in electronics.Mongrel

    Just keep making random shit up.
  • What is life?
    Electrical discharge along axons precedes the release of acetylcholine. I'm not sure why you're denying that. It's a science fact, dude.Mongrel

    You can dude all you like. But action potentials are not electron discharges.

    Ion flows regulated by voltage-gated channels are electrical in that a change in membrane potential at a point does cause a change in protein conformation, causing a pore to open. So a changed potential is a signal which the pore mechanically reads to continue a chain reaction of depolarisations.

    But sodium channel blockers don't stop electrons flowing across or along membranes, do they? They block the ability of pores to respond to the signal of a potential difference.
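The distinction can be sketched in code: a toy chain of membrane segments where each voltage-gated "pore" opens only when its neighbour's potential crosses threshold, and a channel blocker is modelled as a pore that cannot read that signal. All parameters are invented for illustration - this is the logic of the argument, not a physiological model:

```python
# Toy depolarisation wave: the signal propagates by pores *reading* a
# potential change, not by a current flowing. Invented parameters.

REST, DEPOLARISED, THRESHOLD = -70.0, 30.0, -55.0  # millivolts, illustrative

def propagate(n_segments, blocked=None):
    """Return segment potentials after a wave is triggered at segment 0."""
    blocked = blocked or set()
    v = [REST] * n_segments
    v[0] = DEPOLARISED  # stimulus at one end
    for i in range(1, n_segments):
        neighbour_signal = v[i - 1] > THRESHOLD  # pore reads the potential
        if neighbour_signal and i not in blocked:
            v[i] = DEPOLARISED  # pore opens, Na+ floods in
    return v

print(propagate(5))               # wave travels the whole chain
print(propagate(5, blocked={2}))  # "blocker" at segment 2: wave dies there
```

Note what the blocker does in the sketch: nothing is stopped from flowing; a reader is simply prevented from interpreting the signal, and the whole chain reaction collapses downstream.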

    And in describing the machinery of neural signalling, the striking fact is not the electrical gradients (why would it be?) but the intricate semiotics of messaging involved.

    Eh.. I was an electronic engineer for 10 years. I've been a nurse for 10 years.Mongrel

    And I've written books on neuroscience.

    I believe you're suggesting that only a particular kind of material can be organized as a living thing. And this is somehow related to your understanding that life involves signs in a way that non-life does not.Mongrel

    It's not just my understanding.
  • What is life?
    It's often referred to as the neuro-endocrine system because the two function pretty thoroughly as a team in governing the body.Mongrel

    I think you will find that is BS. Triggering a gland is different from triggering a muscle. Even if "electrical discharge" is involved in neither.

    So just like botox and muscles, there is a reason why endocrine disruptors are chemicals like dioxins or plasticisers that mimic biological messages. It is not stray EM fields you have to worry about - even if the folk with tin-foil hats might tell you otherwise.
  • What is life?
    So botox works because it blocks tiny amounts of electricity and not large amounts of acetylcholine discharge?

    Cool. I never understood that before.
  • What is life?
    Neurons communicate with muscles, for instance, by electric discharge. Look into it. It's fascinating stuff.Mongrel

    You mean acetylcholine discharge? The muscle fibres know to contract because they get given a molecular message?

    And even if you are getting into the controversy of direct "electric synapses", it is still not about the conduction of an electrical current but a wave of membrane depolarisation - Na+ ions being allowed to flood in through the molecular machinery of membrane pores before being pumped out again to maintain a working gradient.

    So everywhere you look, you see semiotics at work - messages being acted upon as the way the hardware does things - not some simple current flow which has been modulated to carry a "signal" as a physical pattern.

    Think about it. A radio broadcast is a modulated frequency. It encodes music and voices in a physical fashion that is simply a sign without interpretance. That pattern then drives some further set of amplifying circuitry and loudspeakers at the receiver end. So no matter how complicated or syntactic the physical pattern, zero semantics is happening as it flows. There is no "communicating".

    Biology is the opposite. The physics and the message are an interplay happening right where it all starts. The circuits are alive because the flow is a process of communication. The two sides - the electron transfers that drive the production of waste, and the proton gradients that do the meaningful work - are strictly separated so they can also crisply interact.

    So when you talk about "electrical discharge", that again sounds like you being vague so as to avoid getting into the complex semiotics that is actually taking place.

    Computers have electrical circuits. Humans have electrical circuits. So hey. Life is just chemistry and mind is just information processing. [Pats small child on the head and walks away.] :)

    Science fiction writers have long imagined silicon-based life forms, silicon and carbon being similar.Mongrel

    Fiction writers can take poetic licence with science. Science will point out the critical differences between silicon and carbon.

    Like the weakness in silicon bonds that means you couldn't make large complex molecules. Or the unsuitability of silicon for redox metabolism, as its oxidised end product is not a gas like CO2 but instead solid silicon oxides.

    And imagine having to excrete sand rather than CO2, which just leaks out of a cell.

    So your objection is all based on silicon+electricity being the wrong stuff in the sense of being the wrong electrochemical stuff. Do you not get that the "wrong stuff" is about it being the wrong stuff in lacking a potential for semiotic mechanism?

    Even if silicon life was limited by molecular complexity and also energetically constrained by the need to excrete solid waste, it could still exist - if it could implement actual nanoscale communication across an epistemic divide. Or be a semiotic "stuff" in other words.
  • What is life?
    Electricity is extensively utilized by living things.Mongrel

    That's a vague claim. Modern biophysics would agree that electron transport chains are vitally important as "entropic mechanism". But even more definitional would be proton gradients across membranes. It is those which are the more surprising fact at least.

    So it is the ability to separate the energy capture from the energy spending - the flow of entropy vs the flow of work - which is the meaningful basis of life.

    We can talk of a machine being driven by energy - because we are there to turn it off and on. But life has to build in that semiotic difference at the foundational level, down at the nanoscale, where a separation between entropy production and negentropic work has to be maintained via a physical or chemical difference.

    So again, silicon/electrons is just not that kind of stuff.
  • What is life?
    Is it really a fourth option, or just essentially an elaboration of the second option I listed?John

    It is different in that it explicitly embraces the holism of a dichotomy. It says reality is the result of a separation towards two definite and complementary poles of being - chance and necessity, material fluctuation and formal constraint, or what Peirce called tychism and synechism, that is, spontaneity and continuity.

    So you can't merely elaborate chance or fluctuation to build a world. Instead, the world emerges by the dialectic which separates chance fluctuations (like a particle decay) from the global constraints (like the experimental conditions that specify the observational context for said decay).

    So that is how quantum theory works. On the one side (inside the deterministic wavefunction description of a quantum system) you have all the indeterminism. A purity of spontaneity or uncertainty. Then on the other, you have the determining context - the observer's world - that serves to fix the wavefunction and thus give the quantum probability its very certain measurement basis.

    Thus if we are talking ontically - taking quantum theory as our cue - then the particle decays because its probability space was shaped a certain way. And by the same token, that wavefunction defines some scope of pure and unreachable uncertainty. True spontaneity is being manufactured - by virtue of the dialectic or symmetry-breaking which is the other side of things, the determining of an observational context.

    How could we talk about particle decays in the dense heat of the first instant of the big bang? In a thermal chaos without clear divisions, there is nowhere to definitely stand so as to be able to see something else definitely happen. The hot fog has to dissipate for events to become either classed as deterministic or spontaneous. You need a dark, cold void for it to become a thing that a particle has not decayed and so to have a statistical history that says something about the degree of spontaneity exhibited by the fact of its decay.

    So it is not just an elaboration of the claim that nature is fundamentally indeterministic. When considered in full, the argument is really that both spontaneity and its other emerge as crisply definite via a process of dialectical development or symmetry breaking. So quantum weirdness is a thing - only because local classicality is also a thing. And they both become more of a thing together as the cosmos expands and cools.
  • What is life?
    A computer which can work somewhat objectively in translating languages or a camera which takes a picture and records light data are not aware of the meaning contained within the data they manipulate and store, but they somewhat objectively work with that data none the less in a way that retains meaning.VagabondSpectre

    So this is syntax and not semantics.

    A computer can mechanically map a set of constraints specified in one language into the same set of constraints specified in another. A faithful translation like this is semantics-preserving. The constraints would still serve to reduce a mind's uncertainty in the same fashion. "Cat" and "chat" can mean the same thing in different languages because they are both verbal signs meant to limit their users to some common viewpoint, some common state of anticipation, of the feline variety.

    So - in Chinese Room fashion - machines can be constructed that "make the same interpretations" as we would, without having the faintest possibility of being minds that actually understand anything. The ability syntactically to manipulate signs in a "proper" fashion isn't actually functioning as a constraint on informational uncertainty in the machine. The machine has no such information entropy to be minimised. And it is that kind of information which is the semantic "data" that matters.
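The Chinese Room point can be sketched in a few lines of toy code (the tiny vocabulary and the `translate` helper are purely illustrative, not anyone's actual translation system): the mapping preserves the constraining role of the signs for a human reader, while the machine itself does nothing but mechanical rule-following.

```python
# A purely syntactic sign-for-sign mapping. Nothing here "understands" -
# the table just preserves whatever constraints the signs carry for a reader.
LEXICON = {"cat": "chat", "dog": "chien", "black": "noir"}

def translate(words):
    """Map each English sign to its French counterpart, word by word."""
    return [LEXICON[w] for w in words]

print(translate(["black", "cat"]))  # mechanical lookup, zero interpretance
```

The output signs would still constrain a French reader's expectations just as the inputs constrain an English reader's, which is exactly why syntactic manipulation can look semantic from the outside.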

    Again, you are thinking that computers are doing something that is mind-like. And so it is only a matter of time before that gets sufficiently scaled up that it approaches a real mind. But syntax can't generate semantics from syntactical data. Syntax has to be actually acting to constrain interpretive uncertainty. It has to be functioning as the sign by which a mind with a purpose is measuring something about the world.

    So syntax operates only as the interface between mind and world. It is the sign that mediates this living triadic relation.

    If I hear, or read, or think the word "cat", I understand it as a constraint on what I expect to experience, or imagine, or anticipate. I am suddenly feeling radically less uncertain or vague in my state of mind (it is now concretely infused with cat expectations). And so it can become a meaningful surprise that the critter I've just seen raiding the chicken house turns out to be a quoll. What I took to be the sign of a cat can return the truth value of "false" ... sort of, as the quoll is a little cat-like in its essential purpose, etc.

    A computer could be designed to simulate this kind of triadic relation. That is what neural networks do. But they are very clunky or grainy. And getting more biologically realistic is not about the number of circuits to be thrown at the modelling of the world - dealing with the graininess of the syntactic-level representation - but about the lightness of touch or sensitivity of the model's interaction with the world. And so again, it is about a relation founded on extreme material instability.

    The more delicately poised between entropy and negentropy - falling apart and becoming organised - these interactions are, the more semantic information they contain. It is no surprise if a mechanical switch is still in the same position half an hour later, or a week later. That stability is engineered in. But if that switch is an organic molecule in constant thermal jitter, then the persistence of a state has to be deliberate and purposeful - maintained by an interpretive state of affairs that is holistically larger than itself.

    So any AI or AL argument based on "more circuits" is only talking about adding syntactic capacity. To add semantic capacity, it is this triadic or holistic semiotic relationship that matters. And it is "more criticality" that would be key to that. Which is not something to be added in fact. It has to become foundational to the very notion of a circuit or switch. The machine-like stability is something that has to be removed from the very stuff from which you are trying to construct your AI or AL.

    Again, this is not an easy argument to track as neural network approaches do try to simulate critical behaviour. That is why they are good at some tasks like pattern matching. But there is a big difference from faking criticality with software that runs on completely mechanical hardware, and actually doing what life/mind does, which is to exist in an entropically open relation with the world. Semantic information has to be organising the state of the hardware from the ground up. It has to run native, no emulators.

    And biophysics has arguments that only a certain kind of organic chemistry is the "right stuff" when it comes to creating this kind of living and mindful "machinery". AI/AL would have to be the same protoplasmic gunk from the ground up. Silicon and electricity are simply the wrong stuff for biophysical reasons.
  • What is life?
    What is required for deductive logic is that the use on the left be the same as the use on the right.Banno

    As if syntax were semantics.
  • What is life?
    But we have good telescopes. We can see the heat death already. The Universe is only a couple of degrees off absolute voidness. The average energy density is a handful of atoms per cubic metre. Nihilism is hardly speculation.
  • What is life?
    Still struggling with how this is not simply nihilism,Wayfarer

    Why does it have to be not nihilism? My argument is that the goal of the Cosmos is entropification. Then life and mind arise to accelerate that goal where it happens to have got locally retarded. So life and mind are the short-term cost of the Cosmos reaching its long-term goal.

    That's not just nihilism - the idea that our existence is cosmically meaningless. I am asserting we exist to positively pick up the pace of cosmic annihilation. So super-nihilism. :)
  • What is life?
    Given that life is an open system, and that the dissipative structures to which you allude depend on an influx of energy (in order to resist the second law of thermodynamics), where does hard indeterminism actually benefit the model?VagabondSpectre

    I've already said that these are two different issues - that the Cosmos itself might be indeterministic or vague "at base", and that life requires material indeterminism as the condition for being able to control material flows.

    I think both are true, but I am arguing for them separately.

    The electron transport train is what keeps life warm so to speak, but the self-organizing property of life's data goes beyond that to provide innovative direction well beyond mere random variance.VagabondSpectre

    Now you are conflating material states and information states. We might model material states as "data", but that doesn't mean that entropy is just information.

    Instead, the big deal in modern science is we can translate between matter and information using a common unit now. We can count both in terms of degrees of freedom. But that doesn't make them the same thing. Instead, they are opposite kinds of things (atoms of matter vs atoms of form). So there is a subtle duality that we shouldn't ignore by a conflation of terms.

    If we boil this down, life is self-organizing information (and consumes energy to do it, and so requires abundance of fuel).VagabondSpectre

    Again this lumps levels that I want to keep apart. Dissipative structure occurs in non-living systems - like the atmospheric convection cells that are the weather. So we have to be able to distinguish the informational extra that life brings to harness dissipative structure towards private ends. The weather serves no higher person than the second law. Life is still ultimately entrained to the second law but also does form its own local purposes. And that is information of some new level of order. Which is in turn a significant enough disjunction to need its own terminological distinction.

    Learning digital information networks are also physical structures which give rise to physical complexity that can rival the complexity found in nano-scale biological machinery. Even though it all exists materially as stored charges (what we abstract as bits), the connections and relationships between these parts can grow in complexity by more efficiently utilizing and ordering it's bits rather than by acquiring more of them (although more bits doesn't hurt).VagabondSpectre

    But again you are ignoring the evidence that life is fundamentally different in seeking hardware instability of a kind that permits its informational control to exist. Digital hardware is just basically different in that it depends on instability being engineered out. Computers don't create their own steady-state environments. They have to be given them. But life does create its own steady-state environments. It makes them. So apples and oranges in the end.

    We don't have an AI yet capable of taking control over it's own existence (in the way that biological life does as a means of perpetuation), but I think that chasm is shrinking faster than most people realize.VagabondSpectre

    Again, my argument is that the chasm is not shrinking at all. There is no trend towards hardware designs with inherently unstable switches rather than inherently stable ones. Computing remains defined by its progress towards a lack of entropic limits on computation, not its steady progress towards computation that is entropically limited.

    So to sum up, I don't have a problem with the idea that computation can add another level to human semiotics. We can express our desires to build these kinds of "thinking machines" because for us it is meaningful.

    But it is another thing to think we are moving towards artificial mind or artificial life. And I just raise that new point about hardware instability as another definitional reason for how far we are from what we tend to claim about what we are doing in our computer science laboratories right now.
  • What is life?
    if some hidden, more fundamental, thing efficiently causes a particle to decay, then would that not beg the question as to what determines the hidden cause?John

    Yes, any tale of efficient/material causes suffers from infinite regress. Hence the need to posit an "unmoved mover" of some kind to ground being.

    One way to do that is to argue the unmoved mover exists in some foundational sense - like a creating God. But that begs a whole bunch of questions - like who made Him.

    So my own Peircean preference is to put the unmoved mover at the end of things - as the limit on being that development asymptotically approaches. That is, formal/final cause is Platonically what "drives" existence - except it is not a drive but the crystallisation of some "always necessary" state of global constraint.

    So formal/final cause is immanent and emergent - the regularity that results when everything tries to happen, but almost everything then is going to be self-contradicting and thus self-cancelling. If you can go left, you could have gone right. If you could be positively charged, you could be negative. And so as existence tries to express every possibility, it quickly reduces itself to some tiny organised arrangement of that which survives self-negation. A standard quantum path integral or sum over histories ontology in other words.

    That then puts at the beginning - as the initiating conditions, or the material/efficient cause - a state of pure potential or indeterminacy. A Peircean vagueness, firstness or tychism. A sea of unbounded spontaneous fluctuation - sort of like a hot big bang.

    So quantumly, as you approach the Planck scale that defines the Big Bang state, you do find that measurement loses its purchase on events and you are just left with "infinite fluctuation" as the answer to your questions about "what exists". The initiating conditions are not some unmoved mover, but the opposite - the unboundedly moving. The radically unlimited. And thus the purest stuff - a vague everythingness - that is exactly what logic requires as a precursor "state" for any immanent emergence of self-negating limits.

    I just mention all this as there is a fourth metaphysical option which gets beyond the problems presented by the others you mention. And it checks out scientifically - or at least that is what all the quantum evidence, dissipative structure theory, and condensed matter physics should by now suggest.

    So why does the particle decay spontaneously? If you look at it from this constraints based view, the particle is not some stable thing that needs a nudge to fall off some shelf. Instead it is already a bagged-up mess of fluctuations - a locally confined state of excitation. It seethes with necessary nudges. And it persists undecayed due to some wider environmental constraint that imposes a threshold on it just popping off right now. So there is a constant limitation (from a stable classical environment) on its decay that keeps it in existence - with a constant probability that that threshold gets breached by some "lucky" fluctuation among an uncounted number of such fluctuations that characterise the "inside" of the particle.
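The constant-threshold picture above has a simple statistical signature, which a toy calculation can check (the per-step breach probability used here is an arbitrary illustrative number, not a physical constant): if every instant carries the same small chance of a "lucky" fluctuation breaching the constraint, then survival falls off exponentially and the particle is memoryless - it carries no record of how long it has waited to decay.

```python
# Toy check: a constant per-step breach probability p gives exponential
# survival, (1 - p)**n after n steps - the statistics of "spontaneous"
# decay under a fixed environmental constraint.
p = 0.01  # illustrative chance, per step, that a fluctuation breaches the threshold

def survival(n, p=p):
    """Probability the particle is still undecayed after n steps."""
    return (1 - p) ** n

# Memorylessness: having survived 100 steps tells you nothing new.
# The chance of lasting 50 more equals the unconditional 50-step survival.
conditional = survival(150) / survival(100)
assert abs(conditional - survival(50)) < 1e-12
```

That memorylessness is just what the constraints view predicts: the decay rate is fixed by the standing environmental limits, not by any internal clock the particle keeps.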

    Thus when we talk about the essence of a fundamental particle, it is really the environmental limits being imposed on a wild or vague state of material "everythingness" that define it. Its formal/final causes. And at the abstract level, that environment is mathematically described in purely formal terms - the self-limiting ways that a symmetry can be broken. Symmetry modelling speaks to the simplest possible options that would give matter some dichotomously definite identity - like spin left vs spin right, or break positive vs break negative.

    So in this view, the Cosmos as a whole would be a general symmetry breaking in which a vague everythingness became organised into some more limited state of definiteness by becoming crisply divided against itself - exactly as Anaximander outlined it at the dawn of recorded metaphysics.

    The unmoved mover is the simplicity of form that lies at the end of the trail (the Heat Death that is entropy's self-made sink). And the initiating conditions are the very possibility of a material fluctuation (without yet a direction or relative value). All that had to happen was a formless everythingness that negated itself to leave an irreducible residue of somethingness - which in the case of the Heat Death is a spacetime dimensional void filled with the least possible energy, just a blackbody thermal sizzle of quantum fluctuations now with a temperature of (asymptotically) zero degrees.
  • What is life?
    What you call a constraint on a definition I would describe as an additional term, changing the application.Banno

    I prefer my precise terminology. It makes it clear that adding constraints is the subtraction of possibilities. We are talking about the intersection of sets, not the union of sets - if one must resort to set theoretic talk.

    Your way of putting things is ambiguous. The change could be either logical-or or logical-and.
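The logical-and versus logical-or distinction can be made concrete with a toy set-theoretic sketch (the example species pools are just illustrative): adding a constraint is an intersection that subtracts possibilities, relaxing one is a union at higher generality, and a badly chosen conjunction of constraints can empty the extension entirely.

```python
# Constraints as intersections: each added predicate shrinks the extension.
felids     = {"cat", "puma", "leopard"}
marsupials = {"quoll", "kangaroo"}
cat_like   = {"cat", "puma", "leopard", "quoll"}  # relaxed, higher-generality class

# Adding "...and is a marsupial" to "cat" is logical-and: an intersection.
assert felids & marsupials == set()  # marsupial cats: the null set

# Extending "cat" to cover quolls is logical-or: a union at higher generality.
assert felids | {"quoll"} == cat_like
```

The ambiguity complained about above is exactly the difference between `&` and `|` here: "an additional term, changing the application" could name either operation.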

    So quolls are referred to as tiger cats. They are marsupials. We had one a year ago that would come once a month and have takeaway chicken, courtesy of my coop. When the Girl said things like "That cat took another chook last night", the meaning was clear.

    But one might add to the definition of cat "...and is not a marsupial", thus ruling out the use of "cat" to refer to quolls.

    Sure that "apophatic constraint" works for certain purposes, but it rules out a useful way to use the word "cat"; it would be improper to say that one use was "the correct use of cat".

    There is no essence of cat here; only different uses.
    Banno

    Cute story but full of holes. Just look how fast you slid from "tiger cat" - a common colonial term - to "cat". So quoll might equal tiger cat as a valid translation between a mispronounced Aboriginal word and a settler coinage. Both would point at the same animal. But to call a quoll or tiger cat a cat is another whole can of worms.

    The quoll is "sort of like a cat, but not really". We would have to be appealing to some more general notion of the essence of catness to create a union of two sets of observations. So rather than getting more precise - adding constraints to produce an intersection - we would be relaxing constraints to produce a union at a higher level of generality. It is the more abstracted essence of catness that we must have in mind to justify this turn of speech.

    So sure, the correct use of "cat" is flexible. We can step back to higher generality in a way that allows union operations - hey, quolls are rather cat-like in look and habit (or more like cats than rabbits, goats, chickens, and other animals we know from our homeland). Or we can add constraints to do the opposite. We can talk about all the cats that are also marsupials - and find the intersection is in fact the null set.

    Language is great because it doesn't get too caught up in levels of generality and particularity. Although it does of course employ pronouns and qualifiers (like -like and -ness and -icity) to add this logical distinction as necessary.

    But still, the Peircean approach does see the metaphysical essence of things as speaking to their formal and final cause. What unifies particulars is their purpose and rational organisation. So quolls would be like cats because the same body form is good for the same purpose, the same ecological niche. There actually is something in common that we might want to capture as a general X-ness. The needs of a small nocturnal carnivore are a constraint that acts on the genetics of both.

    So "apophatic constraint" doesn't in fact rule out the creative use of language. Instead it underpins it. And this is how I know you don't actually get it. It is only this kind of language use that remains open-ended even when constraints are combined. Constraints merely limit proper interpretation.

    If we are talking about black cats, we might still be speaking of Miles Davis. "Black" and "cat" can have a whole host of associated meanings according to the communicative context. This essential open-endedness of a sign is not a problem unless you are wedded to a clunky set theoretic view of meaning where words must refer to some definite collection of things. Constraints can only reduce uncertainty, they don't ever eliminate it. That is why Peircean logic employs vagueness as a modality. It explains the inherent flexibility with which even the strictest syntax determines meaning. Semantics is irreducibly open-ended - yet also perfectly amenable to being apophatically bounded.

    I gather from the parenthetic comment that you are yourself not too happy with this terminology.Banno

    There was a spelling mistake there. I meant communicative intent and not communicative content.

    So again this relates to the Peircean view that essence is final cause or the purpose that shapes things. And the parenthetical point was the positive assertion that even speakers may be vaguer than their rather definite sounding speech acts imply.

    Speech is a creative act and syntax imposes apophatic constraint. We simply have to eliminate a lot of possible qualifications and hesitations we might have in mind to actually say something out loud in a communally acceptable fashion. And in contrary fashion, stating something aloud gives a proposition a crispness that may suddenly make us feel we are thinking with wonderful clarity now. We got our meaning exactly right. We were vague, but now we are not. Our intent is clear to us too because of the way grammar eliminates imprecision ... apparently.
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    Although I am generally skeptical of theories of transcendence or becoming, it seems to me that the two concepts have become infused in a way that actualization in its modern definition has become a dialectic of the two, celebrating both egocentricity and the liberation from it.Erik Faerber

    This is an important point. Another way of looking at peak experience is psychological flow - https://en.wikipedia.org/wiki/Flow_(psychology)

    But the irony is that - neuroscientifically speaking - flow is not transcending self-conscious levels of actualisation. It is more like letting go and running on learnt skill - automatic habit.

    So it is part of the dialectical or dichotomous design of the brain to balance habit against attention.

    And likewise - if we are discussing the human condition - again there is a dichotomy that is not a problem but instead an essential balancing act.

    So social structure is a balance of local competition and global cooperation. The individual (starting even with the parts of a person's own life on up to families, communities, nations) has to have a competitive energy. But also, from the nation down, there must also be a generalised cooperative structure.

    So a dialectic of differentiation and integration. What is natural is to be consciously self-actualising (looking out for yourself) within a social context that fosters generalised cooperation - the "automatism" of habits, laws, customs and other shared meaning.

    A self-actualisation that would seek to transcend its own social conditions is unnatural and so a reason people find it disappointing. The nihilist superman lacks flow.
  • What is life?
    So there is nothing here to stop out common use of "life" being extended - indeed, I have several times explicitly said that definitions can be extended.Banno

    Yeah. Except rather than extended, they need to be differentiated. And so they can no longer be shared - being a new choice.

    This goes back to what seems your fundamental misunderstanding about language use. A word does not have a definitional essence in an ostensive sense. It instead functions as an apophatic constraint on uncertainty.

    So a word like "life" or "cat" is already extended in that it covers anything even vaguely living or catlike. The word, as a sign, points not at some definite collection of particular instances and nothing beyond that. It instead constrains our understanding in some generalised way that could be cashed out in any number of restricted senses. Many of which will be differences that don't make a difference.

    So if I am talking about some cat, it could be large or small, black or tabby, male or female, etc, unless I feel the need to specify otherwise, adding more words and thus more constraints on your state of interpretation.
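This "constraint on uncertainty" idea has a standard information-theoretic reading, sketched here with a made-up pool of candidate interpretations (the pool and the uniform prior are illustrative assumptions): each added word filters the pool, and the Shannon entropy of a uniform guess over what remains can only drop.

```python
import math

# Illustrative candidate interpretations of "cat", with a uniform prior.
cats = [
    {"colour": "black", "sex": "male"},
    {"colour": "black", "sex": "female"},
    {"colour": "tabby", "sex": "male"},
    {"colour": "tabby", "sex": "female"},
]

def entropy(pool):
    """Shannon entropy (in bits) of a uniform distribution over the pool."""
    return math.log2(len(pool))

h0 = entropy(cats)                                   # 2.0 bits of uncertainty
black = [c for c in cats if c["colour"] == "black"]  # hear the word "black"
h1 = entropy(black)                                  # 1.0 bit remains
assert h1 < h0  # each added constraint reduces, but need not eliminate, uncertainty
```

Note that the residual bit never goes away by itself: words can keep shrinking the pool, but some open-endedness of interpretation always remains unless a further constraint is added.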

    Thus there is some essence in play - the purpose that is my communicative content (as much as that is ever completely clear and not vague to oneself, even in some propositional statement). But the word can't carry some exact cargo of meaning from me to you. All we share is some history of learning to have our uncertain interpretations constrained to be near enough similar while still remaining creatively open-ended.

    The advantage of my semiotic or constraints approach is that it accounts for how meaning can be formed and conveyed without something specific, particular or actual existing by way of a referent. I might actually have in mind a black, male moggy. You might have in mind a tabby female. But it doesn't make a difference until it makes a difference.

    And that view has important consequences for truth theories, among other things. It also should explain why definitions matter as the way to bring out putative differences. We can't actually be agreeing in some positive fashion - as opposed to some accidental and undisclosed fashion - unless we have discovered and articulated a possible point of disagreement.

    This cat we are talking about - what's its colour, age and gender? Let's see if we still have the same referent in mind. And if it doesn't matter, then it is not essential. The essence remains at the greater level of generality which is simply what we mean by "cat".

    So the existence of essence is demonstrated by applicability of generality. Reference can be open-ended or "already extended" because - dichotomously - it is also anchored by an apophatic generality. We know that cats aren't dogs or fungi or rocket-launchers as those other general alternatives are ruled out by some abstracted cat-essence.

    And while common usage does seem to get catness by perceptual abstraction (some acceptable combination of traits), science can pin that down with greater ontological rigour. It can say that evolution actually does create genetic lineages - actual constraints encoded in genetic information. So we can start to measure cat-ness in a way that can be quantified as some distance separating cats and pumas, and then more generally, leopards and panthers (although confusingly - hu! - leopards are phylogenetically Panthera rather than Leopardus).

    It is actually very important that - from Aristotle on - we seek to name the forms of the natural world in this essentialist fashion. The subsumptive hierarchy always seemed completely logical. And so it was discovered to be. Evolution reads like a forking tree of differences that made a difference because something must divide one species from another at the level of actual historical information. We don't just socially construct the meanings of words. We can hope to asymptotically approach the world's essential divisions by seeking out the constraints that got refined by differences that made a difference.
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    Get over yourself. My point was that antinatalist debates on PF pretend to speak for common human experience yet are rather unrepresentative of the variety of both human culture and gender.

    The fact you didn't even acknowledge the cultural specificity of your response shows you didn't get the point.

    I know that birth rates rise and fall with respect to other social factors.Bitter Crank

    Of course. This is well studied. People have lots of children where that seems like a rational socioeconomic investment. Then stop having lots of kids when investing in an education and career makes more socioeconomic sense.

    So both choices would be "self-actualising" on the same grounds, even if the outcomes they lead to are dramatically different.

    If that is the argument you want to make, the papers are out there.
  • What is life?
    Hu?Banno
    English seems now to have been completely deducted from the statement as it first appeared. Curious. Perhaps English wasn't the language of logic after all?

    But now we have to figure out what "hu" means in some private language. Guesses anyone? Could it be...

    Hu (ḥw), in ancient Egypt, was the deification of the first word, the word of creation, that Atum was said to have exclaimed upon ejaculating or, alternatively, his self-castration, in his masturbatory act of creating the Ennead.

    A masturbatory exclamation? Well, quite possibly. So not hu? but hu! ;)
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    I guess only US women represent "real women" then? The USA is 4.4% of the world population, but hey, you guys and gals get to speak for humanity. And if you need to balance your demographics, you will import the children of other countries as economically required.

    Sounds legit.
  • What is life?
    I simply cannot get away from the idea that the material instability you describe (providing a mechanism for information to express through) is actually deterministic causation expressing itself in a complex way which only gives the appearance of indeterminacy.VagabondSpectre

    Well there are two levels to the issue here.

    What I was highlighting was the surprising logic (for those used to expecting a biological requirement for hardware stability) that says in fact life requires its hardware to be fundamentally bistable - poised at the critical edge of coming together and falling apart. That way, semiotics - some message - can become the determining difference. Information can become the cause of thermal chaos being channelled into an efficiently organised dissipative flow.

    So regardless of whether existence itself is "deterministic", biology may thrive because it can find physics poised in a state of radical instability, giving its stored information a way to be the actual determining cause for some process with an organised structure and persistent identity.

    Then there is the question of whether existence itself is deterministic - or instead, perhaps, also a version of the same story. Maybe existence is pan-semiotic - dependent on the information that can organise its material criticality so that we have the Universe as a dissipative structure that flows down an entropic gradient with a persistent identity, running down the hill from a Big Bang to a Heat Death.

    I realise that metaphysical determinism is an article of faith for many. It is part of the package of "good ideas" that underpins a modern reductionist view of physical existence. Determinism goes with locality, atomism, monadism, mechanicism, logicism, the principle of sufficient reason. Every effect must have its cause - its efficient and material cause. So spontaneity, randomness, creativity, accident, chaos, organicism, etc, are all going to turn out to be disguised cases of micro-determinism. We are simply describing a history of events that is too complicated to follow in its detail using some macro-statistical level method of description.

    So we all know the ontic story. At the micro-scale, everything is a succession of deterministic causes. The desired closure for causality is achieved simply by the efficient and material sources of change - the bottom-up forces of atomistic construction.

    Now this is a great way of modelling the world - especially if you mostly want to use your knowledge of physics to build machines. But even physics shows how it runs into difficulties at the actual micro-scale - down there among the quantum nitty-gritty. What causes the atom to decay? Is it really some determining nudge or do we believe the strongly supported maths that says the "collapse of the wavefunction" is actually spontaneous or probabilistic?

    So it is simply an empirical finding - that makes sense once you think about it - that life depends on the ability of information to colonise locations of material instability. Dissipative structure can be harnessed by encoded purpose, giving us the more complex phenomenon we call life (and mind).

    And then determinism as an ontic-level supposition is also pretty challenged by the facts of quantum theory. That doesn't stop folk trying to shore up a belief in micro-determinism despite the patent interpretive problems. But there are better ontologies on the table - like Peircean pragmatism.

    In brief, you can get a pretty deterministic looking world by understanding material being to be the hylomorphic conjunction of global (informational) constraints and local (material) freedoms.

    So when some event looks mechanically determined, it could actually be just so highly constrained that its degrees of freedom or uncertainty are almost eliminated.

    Think of a combustion engine. We confine a gas vapour explosion within some system of cylinders, valves, pistons, cranks, etc. Or a clock where a wound coiled spring is regulated by the tick-tock of a swivelling escapement. A machine can always just spontaneously go wrong. The clock could fall off the wall and smash. Your laptop might get some critical transistor fried by a cosmic ray. But if we are any good as designers - the people supplying the formal and final causes here - we can engineer the situation so that almost all sources of uncertainty are constrained to the point of practical elimination. A world that is 99% constrained, or whatever the margin required, is as good as ontically determined.

    So that would be the argument for life. Molecular chemistry and thermodynamics don't have to be actually deterministic. They just have to be sufficiently constrained. The two things would still look much the same.

    But there is an advantage in a constraints-based view of ontology - it still leaves room for actual spontaneity or accident or creative indeterminism. You don't have to pretend the world is so buttoned-down that the unexpected is impossible. You can have atoms that quantumly decay "for no particular reason" other than that this is a constant and unsuppressed possibility. You can have an ontology that better matches the world as we actually observe it - and makes better logical sense once you think about it.

    Although the pseudo-randomness of these unreliable switches can be incorporated into the functions of the data directly, (innovating new data through trial and error for instance (a happy failure of a set of switches)) at some level these switches must have some degree of reliability, else their suitability as a causal mechanism would be nonexistent.VagabondSpectre

    See how hard you have to strain? Any randomness at the ground level has to be "pseudo". And then even that pseudo instability must be ironed out by levels and levels of determining mechanism.

    But then why would life gravitate towards material instability or sources of flux? It fails logic to say life is there to supply the stabilising information if the instability is merely a bug and not the feature. If hardware stability is so important, life would have quickly evolved to colonise that instead.

    My ontology is much simpler. Life's trick is that it can construct the informational constraints to exploit actual material instability. There is a reason why life happens. It can semiotically add mechanical constraints to organise entropic flows. It can regulate because there is a fundamental chaos or indeterminism in want of regulation.

    Computers already do account for some degree of unreliability or wobbliness in their switches. They mainly use redundancy in data as a means to check and reconstruct bits that get corrupted. In machine learning individual "simulated neuronal units" may exhibit apparent wobbliness owing to the complexity of it's interconnected triggers or owing to a psudeo-random property of the neuron itself which can be used to produce variation.VagabondSpectre

    Yep. Computers are machines. We have to engineer them to remove all natural sources of instability. We don't want our laptop circuitry getting playful on us as it would quickly corrupt our data, crash our programs.
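    The redundancy point can be made concrete with a minimal sketch. This is a purely illustrative triple-modular-redundancy scheme (not how any particular machine implements error correction): store three copies of each bit and let a per-position majority vote repair isolated flips.

    ```python
    def encode(bits):
        """Triple modular redundancy: keep three independent copies of each bit."""
        return [bits[:], bits[:], bits[:]]

    def decode(copies):
        """Majority vote per position: an isolated flip is simply out-voted."""
        return [1 if a + b + c >= 2 else 0 for a, b, c in zip(*copies)]

    message = [1, 0, 1, 1, 0, 0, 1, 0]
    copies = encode(message)

    copies[0][2] ^= 1  # a "cosmic ray" flips a bit in the first copy
    copies[1][5] ^= 1  # another flip, in a different copy and position

    assert decode(copies) == message  # both corruptions are voted away
    ```

    Note the scheme only works while flips stay rare enough that no two copies are ever corrupted at the same position - which is exactly the engineered suppression of instability being described.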

    But biology is different. It lives in the real world and rides its fluxes. It takes the random and channels it for its own reasons.

    You then get the irony of neural network architectures where you have fake instability being mastered by the constraint of repeatedly applied learning algorithms. The human designer seeds the network nodes with "random weights" and trains the system on some chosen data set. So yes, that is artificial life or artificial mind - based on pretend hardware indeterminism and so different in an ontologically fundamental way from a biology that lives by regulating real material/entropic indeterminism.
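    That irony can be shown in a toy sketch (pure Python, no real framework, all names illustrative): a single weight is seeded from a pseudo-random generator, then a deterministic learning rule is applied over and over until the seeded "wobble" is entirely mastered by the constraint of the algorithm.

    ```python
    import random

    rng = random.Random(0)        # pseudo-random: fully determined by the seed
    w = rng.uniform(-1.0, 1.0)    # the designer's "random" initial weight

    data = [(x, 2.0 * x) for x in range(1, 6)]  # toy target relation: y = 2x

    # Repeatedly applied deterministic learning rule (gradient descent):
    # each pass contracts the weight toward the target, whatever the seed was.
    for epoch in range(200):
        for x, y in data:
            error = w * x - y
            w -= 0.01 * error * x

    assert abs(w - 2.0) < 1e-3    # the fake instability has been ironed out
    ```

    Whatever value the seed produced, the same constraint applied enough times drives the system to the same answer - pretend indeterminism mastered by design, rather than real material indeterminism harnessed from within.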

    ...which then gives way to intracellular mechanisms, then to the mechanisms of DNA and RNA, and then to the molecular and atomic world.VagabondSpectre

    But you went sideways to talk about DNA - the information - and skipped over the actual machinery of cells. And as I say, this is the big recent revolution - realising the metabolic half of the cellular equation is not some kind of chemical stewpot but instead a highly structured arrangement of machinery. And this machinery rides the nanoscale quasi-classical limit. It sits exactly at the scale that it can dip its toe in and out of quantum scale indeterminacy.

    This is why I suggest Hoffmann's Life's Ratchet as a read. It gives a graphic understanding of how the quasi-classical nanoscale is a zone of critical instability. You get something emergently new at this scale which is "wobbling" between the purely quantum and the purely classical.

    So again, getting back to our standard ontological prejudices, we think that there are just two choices - either reality is classical (in the good old familiar deterministic Newtonian sense) or it is weirdly quantum (and who now knows how the fuck to interpret that?). But there is this third intermediate realm - now understood through thermodynamics and condensed matter modelling - that is the quasi-classical realm of being. And it has the precise quality of bistability - the switching between determinism and indeterminism, order and chaos - that life (and mind) only has to be able to harness and amplify.

    It is a Goldilocks story. Between too stable and too unstable there is a physical zone where you can wobble back and forth in a way that you - as information, as an organism - can fruitfully choose.

    So metaphysics has a third option now - which was sort of pointed to by chaos maths and condensed matter physics, but which is all too recent a scientific discovery to have reached the popular imagination as yet. (Well, tipping points and fat tails have reached social science, but not what this means for biology or neuroscience.)

    Consider the hierarchy of mechanisms found in biological life: DNA is it's base unit and all it's other structures and processes are built upon it using DNA as a primary dynamic element (above it in scale).VagabondSpectre

    This just sounds terribly antiquated. Read some current abiogenetic theorising and the focus has gone back to membrane structures organising entropic gradients as the basis of life. It is a molecular-machinery-first approach now. Although DNA or some other coding mechanism is pretty quickly needed to stabilise the existence of these precarious entropy-transacting membrane structures.

    I suppose my main difficulty is assenting to indeterminism as a property of living systems for semantic/etymological/dogmatic reasons, but I also cannot escape the conclusion that a powerful enough AI built from code (code analogous to DNA, and to the structure of connections in the human brain) would be capable of doing everything that "life" can do, including growing, reproducing, and evolving.VagabondSpectre

    I do accept that we could construct an artificial world of some kind based on a foundation of mechanically-imposed determinism.

    But my point is that this is not the same as being a semiotic organism riding the entropic gradients of the world to its own advantage.

    So what you are imagining is a lifeform that exists inside the informational realm itself, not a lifeform that bridges a division where it is both the information that regulates, and the persistent entropic flux that materially eventuates.

    My semiotic argument is life = information plus flux. And so life can't be just information isolated from flux (as is the case with a computer that doesn't have to worry about its power supply because its humans take care of sorting that out).

    Now you can still construct this kind of life in an artificial, purely informational, world. But it fails in what does seem a critical part of the proper biological definition. There is some kind of analogy going on, but also a critical gap in terms of ontology. Which is why all the artificial-life/artificial-mind sci-fi hype sounds so overblown. It is unconvincing when AI folk can't themselves spot the gaping chasm between the circuitry they hope non-entropically to scale up and the need in fact to entropically scale down to literally harness the nanoscale organicism of the world.

    Computers don't need more parts to become more like natural organisms. They need to be able to tap the quasi-classical realm of being which is completely infected by the kind of radical instability they normally do their very best to avoid.

    But why would we bother just re-building life? Technology is useful because it is technology - something different at a new semiotic level we can use as humans for our own purposes. So smart systems may be just smart machines ontically speaking, not alive or conscious, but that is not a reason in itself not to construct these machines that might exploit some biological analogies in a useful way, like DeepMind would claim to do.

    Specifically the self-organizing property of data is what most interests me. Natural selection from chaos is the easy answer, the hard answer has to do with the complex shape and nature of connections, relationships, and interactions between data expressing mechanisms which give rise to anticipatory systems of hierarchical networks.VagabondSpectre

    As I say, biological design can serve as an engineering inspiration for better computer architectures. But that does not mean technology is moving towards biological life. And if that was not certain before, it is now that we understand the basis of life in terms of biophysics and dissipative structure theory.
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    If you think that this sounds about right, do you have your own critiques of the idea of purpose being self-actualzation (or further, that it is good to bring more people in the world so they can become self-actualized)? If you think self-actualization is the summum bonum, why do you think so?schopenhauer1

    It would be interesting to hear from more women on the question. You might expect the sense of self-actualising purpose to be greater, no?

    Also, the notion of "selfhood" is socially-constructed as well as biologically-constrained. So there are notions of self that are about families and lineages, or even villages, peoples and nations. To self-actualise could mean having kids to inherit the estate, continue the name, fulfil ambitions the parents couldn't.

    So being pregnant, giving birth, breast-feeding - at least half the population might count that as a natural completion of the self in terms of actualising a potential. Any antinatal argument ought to represent the realities for both sexes.

    And then self-actualisation doesn't have to mean being socially self-centred. People can feel there is a larger self in a family or community. So it is identity at that level that is worth perpetuating. Again, philosophy can't simply dismiss this natural seeming state as somehow an arbitrary impost. Humans clearly have the potential for a social level of identity. And thus it could be a purpose wanting its actualisation.
  • What is life?
    Perhaps we have three views: Meta advocating essence as a real thing that we can set out in terms of the necessary and exclusive attributes; you, with some notion of an asymptotic essence that we can approach but never quite reach; and I, with the view that essences best ignored in favour of the examination of language use.Banno

    I take a broadly Peircean or systems view of essence. So it is real enough as the formal and final cause of being - the constraints or habits that shape material being. What's the great difficulty exactly?

    Perhaps your misunderstanding is the reductionist one of thinking the essence of things is some mysterious substantial property hidden within - like a spirit stuff. Have you studied metaphysics much?
  • What is life?
    Tell me, Apo, how do you get on with Meta? I can't say I've paid much attention to discussions between you two. Are you in agreement as to the nature of essences?Banno

    What does he think about essences? I can't say I've paid any attention to your discussions with him.
  • What is life?
    So you say. Naive realist I'm fine with; but what is a transcendent solipsist?Banno

    You talk as if you can know the world without making measurements. One only has to look and one sees (ignoring the fact that seeing the world is the forming of a phenomenological state that is our interpreted sign of the noumenal - we can't in fact sneak a peek directly).
  • What is life?
    Yep, it's a simple point. So why all the fuss?Banno

    You seem to forget what you were originally trying to argue....

    And that's the point I want to make; that when someone provides us with a definition we go through a process of verifying it; but what is it that we are verifying it against? We presume to be able to say if the definition is right or wrong; against what are be comparing it? Not against some other earlier definition, but against our common usage.

    And if we already accept that this common usage is the test of our definition, why bother witht eh definition at all?
    Banno

    Clearly you now accept this was confused, as we do seek definitions that introduce new measurables.

    Our earlier "common usage" definitions come into question particularly when we come up against borderline classifications - like: "is a virus alive?". The vagueness or uncertainty we feel when answering is a sign that we now need to sharpen our definition by suggesting some new symmetry-breaking or dichotomous fork in the road by which we can measure what is what. An infected cell goes down this path to join the living, the virion fragment goes down the other path to join the class of the not alive, or whatever.

    So we want to know, as usual when facing indecision, what counts as the essential difference? What is the difference that makes a difference? And so, what were all the in fact inessential differences that might have been clouding our earlier "common usage" conception?
  • What is life?
    Where do you look, in order to determine that metabolism and replication are necessary and sufficient for life?

    Presumably, at things that are alive.

    It follows that you already know which things are alive before you set out this posited essence.
    Banno

    Uh, yeah. Just like folk once knew the mountains and rivers and stars were alive.

    That is the entirety of my objection to the framing of the question "What is life?" in terms of essences.Banno

    Yup. And even merely as an epistemological point, that is trite.

    So as humans we always find ourselves in the middle of some pragmatically-justified linguistic usage. Words work to structure our ontological expectations. Whatever follows is merely a more telling refinement of our language. That's obvious.

    But the issue at stake is the goal of inquiry - and whether it has some direction that ultimately targets ontological reality in an essentialist fashion.

    If you believe knowledge is merely socially constructed belief, then whatever stories we make up are whatever stories we make up. Refining our terms is not going to lead to any ultimate truths about existence.

    But science does ask after the abstract essence of things because historically it does appear to get us closer to the facts of the matter. This may be indirect realism - as science also understands it is modelling. But at least it's a realism that can hold its head up by neither being naive, nor collapsing into solipsism.

    As usual, your approach appears to leave you being simultaneously naive realist and transcendent solipsist. Not a good look.
  • What is life?
    What difference is there between claiming that a virus is alive, and claiming otherwise? What will we measure?Banno

    With a tighter biophysical definition of life, we would measure for evidence of an entropic flux being regulated by replicating information, and not merely the presence of information produced by replication.

    So rightfully, the virion is not alive by this definition. And this definition captures the metaphysical essence of what it takes to be "alive" - metabolism+replication.

    I'm baffled by what you seem to find so baffling about this. You seem to have embarked on some anti-essentialist rant without thinking the issues through.

    Is there some reason a sharper definition of living doesn't make a difference when it comes to viruses? You are implying that is your position. So in what way do you mean?

    The common folk may indeed think a viral infection is an evil humour or malignant spirit as a conventionalised alternative. But would you still want to insist it is linguistic usage all the way down or would you instead want to suggest there might be some actual fact of the matter?
  • What is life?
    What difference does it make?Banno

    The same old pragmatic one. We can measure the truth of what we claim to believe.
  • What is life?
    What's the issue with viruses? Why would one not consider a virus to be a form of life?Metaphysician Undercover

    Again, the issue that I raised was Banno's claim we can determine such questions without a definition which captures the essence of what makes the actual difference.

    Clearly common usage finds viruses a confusing border-line case. And a tighter definition in terms of infected cell vs inert particle then focuses the debate in useful fashion. It offers the sharper ontic boundary we seek when we can contrast virus and virion.
  • What is life?
    To answer the actual question about viruses, this is the official take - https://rybicki.wordpress.com/2015/09/29/so-viruses-living-or-dead/

    Just define virus as the infected cell - the whole thing of the living hijacked organism turned into a viral factory. Then the inert DNA particle we traditionally identify as an individual virus is the virion - the transmitted genetic package much like a bacterial plasmid or eukaryote sperm.
  • What is life?
    But that wasn't the point. The point was that you would need a definition that could decide such a question. Banno is arguing that standard usage of language is good enough.

    He said...

    We simply do not need to be able to present a definition of life in order to do biology.Banno

    But any biologist would tell him that is ridiculous. :)
  • What is life?
    Is a virus alive then?
  • What is life?
    And if we already accept that this common usage is the test of our definition, why bother witht eh definition at all?Banno

    Because obviously we call for a definition because we want to narrow that common usage in some useful fashion. We want to eliminate some sense of vagueness that is in play by taking a more formally constrained view. And that has to start with a reduction of information. We have to put our finger on what is most essential. Then we have some positive proposition that we can seek to verify (or at least fail to falsify via acts of measurement).

    If we accept common usage, then yes, no problem. The usage already works well enough. But common usage is always in question - at least for a scientist or philosopher who believes knowledge is not some static state of affairs but the limit of a shared community of inquiry.