I simply cannot get away from the idea that the material instability you describe (providing a mechanism for information to express through) is actually deterministic causation expressing itself in a complex way which only gives the appearance of indeterminacy. — VagabondSpectre
Well there are two levels to the issue here.
What I was highlighting was the surprising logic (for those used to expecting a biological requirement for hardware stability) that says in fact life requires its hardware to be fundamentally bistable - poised at the critical edge of coming together and falling apart. That way, semiotics - some message - can become the determining difference. Information can become the cause of thermal chaos being channelled into an efficiently organised dissipative flow.
So regardless of whether existence itself is "deterministic", biology may thrive because it can find physics poised in a state of radical instability, giving its stored information a way to be the actual determining cause for some process with an organised structure and persistent identity.
Then there is the question of whether existence itself is deterministic - or instead, perhaps, also a version of the same story. Maybe existence is pan-semiotic - dependent on the information that can organise its material criticality so that we have the Universe as a dissipative structure that flows down an entropic gradient with a persistent identity, running down the hill from a Big Bang to a Heat Death.
I realise that metaphysical determinism is an article of faith for many. It is part of the package of "good ideas" that underpins a modern reductionist view of physical existence. Determinism goes with locality, atomism, monadism, mechanicism, logicism, the principle of sufficient reason. Every effect must have its cause - its efficient and material cause. So spontaneity, randomness, creativity, accident, chaos, organicism, etc, are all going to turn out to be disguised cases of micro-determinism. We are simply describing a history of events that is too complicated to follow in its detail using some macro-statistical level of description.
So we all know the ontic story. At the micro-scale, everything is a succession of deterministic causes. The desired closure for causality is achieved simply by the efficient and material sources of change - the bottom-up forces of atomistic construction.
Now this is a great way of modelling the world - especially if you mostly want to use your knowledge of physics to build machines. But even physics shows how it runs into difficulties at the actual micro-scale - down there among the quantum nitty-gritty. What causes the atom to decay? Is it really some determining nudge or do we believe the strongly supported maths that says the "collapse of the wavefunction" is actually spontaneous or probabilistic?
So it is simply an empirical finding - that makes sense once you think about it - that life depends on the ability of information to colonise locations of material instability. Dissipative structure can be harnessed by encoded purpose, giving us the more complex phenomenon we call life (and mind).
And then determinism as an ontic-level supposition is also pretty challenged by the facts of quantum theory. That doesn't stop folk trying to shore up a belief in micro-determinism despite the patent interpretive problems. But there are better ontologies on the table - like Peircean pragmatism.
In brief, you can get a pretty deterministic looking world by understanding material being to be the hylomorphic conjunction of global (informational) constraints and local (material) freedoms.
So when some event looks mechanically determined, it could actually be just so highly constrained that its degrees of freedom or uncertainty are almost eliminated.
Think of a combustion engine. We confine a gas vapour explosion within some system of cylinders, valves, pistons, cranks, etc. Or a clock where a wound coiled spring is regulated by the tick-tock of a swivelling escapement. A machine can always just spontaneously go wrong. The clock could fall off the wall and smash. Your laptop might get some critical transistor fried by a cosmic ray. But if we are any good as designers - the people supplying the formal and final causes here - we can engineer the situation so that almost all sources of uncertainty are constrained to the point of practical elimination. A world that is 99% constrained, or whatever the margin required, is as good as ontically determined.
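The point that strong constraint can make a genuinely noisy process look deterministic can be sketched numerically. Here is a minimal toy simulation (the function name and parameters are my own illustrative choices, not anything from the discussion): a variable takes a random kick each step (local material freedom), then a `constraint` parameter pulls it back toward zero (global informational constraint). At 99% constraint the noise is still really there, but the trajectory is practically indistinguishable from a determined one.

```python
import random

def run_process(constraint, steps=1000, seed=1):
    """Evolve a noisy variable under a restoring constraint.

    Each step adds unit noise, then the constraint pulls the state
    back toward zero by the fraction `constraint` (0 = free, 1 = total).
    Returns the largest excursion the state ever made.
    """
    random.seed(seed)
    x = 0.0
    peak = 0.0
    for _ in range(steps):
        x += random.uniform(-1.0, 1.0)   # local material freedom (noise)
        x *= (1.0 - constraint)          # global informational constraint
        peak = max(peak, abs(x))
    return peak

for c in (0.0, 0.5, 0.99):
    print(c, run_process(c))
```

Unconstrained, the state wanders freely; at 0.99 it never strays past a few hundredths. The randomness has not been removed, only constrained to practical irrelevance - which is the engineering story above.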
So that would be the argument for life. Molecular chemistry and thermodynamics don't have to be actually deterministic. They just have to be sufficiently constrained. The two things would still look much the same.
But there is an advantage in a constraints-based view of ontology - it still leaves room for actual spontaneity or accident or creative indeterminism. You don't have to pretend the world is so buttoned-down that the unexpected is impossible. You can have atoms that quantumly decay "for no particular reason" other than that this is a constant and unsuppressed possibility. You can have an ontology that better matches the world as we actually observe it - and makes better logical sense once you think about it.
Although the pseudo-randomness of these unreliable switches can be incorporated into the functions of the data directly, (innovating new data through trial and error for instance (a happy failure of a set of switches)) at some level these switches must have some degree of reliability, else their suitability as a causal mechanism would be nonexistent. — VagabondSpectre
See how hard you have to strain? Any randomness at the ground level has to be "pseudo". And then even that pseudo instability must be ironed out by levels and levels of determining mechanism.
But then why would life gravitate towards material instability or sources of flux? It defies logic to say life is there to supply the stabilising information if the instability is merely a bug and not the feature. If hardware stability were so important, life would quickly have evolved to colonise that instead.
My ontology is much simpler. Life's trick is that it can construct the informational constraints to exploit actual material instability. There is a reason why life happens. It can semiotically add mechanical constraints to organise entropic flows. It can regulate because there is a fundamental chaos or indeterminism in want of regulation.
Computers already do account for some degree of unreliability or wobbliness in their switches. They mainly use redundancy in data as a means to check and reconstruct bits that get corrupted. In machine learning individual "simulated neuronal units" may exhibit apparent wobbliness owing to the complexity of it's interconnected triggers or owing to a psudeo-random property of the neuron itself which can be used to produce variation. — VagabondSpectre
Yep. Computers are machines. We have to engineer them to remove all natural sources of instability. We don't want our laptop circuitry getting playful on us as it would quickly corrupt our data, crash our programs.
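The redundancy trick the quote mentions can be made concrete. One standard scheme (my illustrative choice here - the quote speaks of redundancy only generally) is majority voting over multiple copies of a bit, as in triple modular redundancy. A quick sketch, with assumed names and a made-up flip probability:

```python
import random

def majority(bits):
    """Vote across redundant copies of one stored bit."""
    return 1 if sum(bits) > len(bits) / 2 else 0

def store_with_redundancy(bit, copies=3, flip_prob=0.1, trials=10000, seed=0):
    """Estimate how often a bit survives when each copy can flip independently."""
    random.seed(seed)
    survived = 0
    for _ in range(trials):
        # Each stored copy independently flips with probability flip_prob.
        stored = [bit ^ (random.random() < flip_prob) for _ in range(copies)]
        if majority(stored) == bit:
            survived += 1
    return survived / trials

print(store_with_redundancy(1, copies=1))   # a lone unreliable switch
print(store_with_redundancy(1, copies=3))   # majority vote rescues it
```

With a 10% flip rate, a lone copy survives about 90% of the time; three copies with a majority vote push that to roughly 97%, and more copies push it higher still. The instability is never absent - it is engineered into practical invisibility, which is exactly the machine strategy being contrasted with biology here.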
But biology is different. It lives in the real world and rides its fluxes. It takes the random and channels it for its own reasons.
You then get the irony of neural network architectures where you have fake instability being mastered by the constraint of repeatedly applied learning algorithms. The human designer seeds the network nodes with "random weights" and trains the system on some chosen data set. So yes, that is artificial life or artificial mind - based on pretend hardware indeterminism and so different in an ontologically fundamental way from a biology that lives by regulating real material/entropic indeterminism.
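The "fake instability mastered by repeatedly applied learning" story can be shown in miniature. Below is a deliberately tiny sketch (one weight, one target function, all names my own) of the designer's recipe: seed a random weight, then let gradient descent grind that randomness away until every seed lands on the same answer.

```python
import random

def train(seed, steps=200, lr=0.1):
    """Fit y = 2x with a single weight, starting from a random 'seeded' weight."""
    random.seed(seed)
    w = random.uniform(-5.0, 5.0)   # designer-seeded random weight
    data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

print([round(train(s), 3) for s in range(3)])   # every seed converges near 2.0
```

Whatever random weight the designer plants, the repeatedly applied algorithm constrains it to the same value. The "indeterminism" is only an initial condition, fully domesticated by training - which is the ontological contrast with biology being drawn above.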
...which then gives way to intracellular mechanisms, then to the mechanisms of DNA and RNA, and then to the molecular and atomic world. — VagabondSpectre
But you went sideways to talk about DNA - the information - and skipped over the actual machinery of cells. And as I say, this is the big recent revolution - realising the metabolic half of the cellular equation is not some kind of chemical stewpot but instead a highly structured arrangement of machinery. And this machinery rides the nanoscale quasi-classical limit. It sits exactly at the scale that it can dip its toe in and out of quantum scale indeterminacy.
This is why I suggest Hoffmann's Life's Ratchet as a read. It gives a graphic understanding of how the quasi-classical nanoscale is a zone of critical instability. You get something emergently new at this scale which is "wobbling" between the purely quantum and the purely classical.
So again, getting back to our standard ontological prejudices, we think that there are just two choices - either reality is classical (in the good old familiar deterministic Newtonian sense) or it is weirdly quantum (and who now knows how the fuck to interpret that?). But there is this third intermediate realm - now understood through thermodynamics and condensed matter modelling - that is the quasi-classical realm of being. And it has the precise quality of bistability - the switching between determinism and indeterminism, order and chaos - that life (and mind) only has to be able to harness and amplify.
It is a Goldilocks story. Between too stable and too unstable there is a physical zone where you can wobble back and forth in a way that you - as information, as an organism - can fruitfully choose.
So metaphysics has a third option now - one sort of pointed to by chaos maths and condensed matter physics, but too recent a scientific discovery to have reached the popular imagination as yet. (Well, tipping points and fat tails have in social science, but not what this means for biology or neuroscience.)
Consider the hierarchy of mechanisms found in biological life: DNA is it's base unit and all it's other structures and processes are built upon it using DNA as a primary dynamic element (above it in scale). — VagabondSpectre
This just sounds terribly antiquated. Read some current abiogenetic theorising and the focus has gone back to membrane structures organising entropic gradients as the basis of life. It is a molecular-machinery-first approach now. Although DNA or some other coding mechanism is pretty quickly needed to stabilise the existence of these precarious entropy-transacting membrane structures.
I suppose my main difficulty is assenting to indeterminism as a property of living systems for semantic/etymological/dogmatic reasons, but I also cannot escape the conclusion that a powerful enough AI built from code (code analogous to DNA, and to the structure of connections in the human brain) would be capable of doing everything that "life" can do, including growing, reproducing, and evolving. — VagabondSpectre
I do accept that we could construct an artificial world of some kind based on a foundation of mechanically-imposed determinism.
But my point is that this is not the same as being a semiotic organism riding the entropic gradients of the world to its own advantage.
So what you are imagining is a lifeform that exists inside the informational realm itself, not a lifeform that bridges a division where it is both the information that regulates, and the persistent entropic flux that materially eventuates.
My semiotic argument is life = information plus flux. And so life can't be just information isolated from flux (as is the case with a computer that doesn't have to worry about its power supply because its humans take care of sorting that out).
Now you can still construct this kind of life in an artificial, purely informational, world. But it fails in what does seem a critical part of the proper biological definition. There is some kind of analogy going on, but also a critical gap in terms of ontology. Which is why all the artificial-life/artificial-mind sci-fi hype sounds so over-blown. It is unconvincing when AI folk can't themselves spot the gaping chasm between the circuitry they hope to scale up non-entropically and the need in fact to scale down entropically to literally harness the nanoscale organicism of the world.
Computers don't need more parts to become more like natural organisms. They need to be able to tap the quasi-classical realm of being which is completely infected by the kind of radical instability they normally do their very best to avoid.
But why would we bother just re-building life? Technology is useful because it is technology - something different at a new semiotic level we can use as humans for our own purposes. So smart systems may be just smart machines ontically speaking, not alive or conscious, but that is not a reason in itself not to construct these machines that might exploit some biological analogies in a useful way, like DeepMind would claim to do.
Specifically the self-organizing property of data is what most interests me. Natural selection from chaos is the easy answer, the hard answer has to do with the complex shape and nature of connections, relationships, and interactions between data expressing mechanisms which give rise to anticipatory systems of hierarchical networks. — VagabondSpectre
As I say, biological design can serve as an engineering inspiration for better computer architectures. But that does not mean technology is moving towards biological life. And if that was not certain before, it is now that we understand the basis of life in terms of biophysics and dissipative structure theory.