• What is "self-actualization"- most non-religious (indirect) answer for purpose?
    Although I am generally skeptical of theories of transcendence or becoming, it seems to me that the two concepts have become infused in a way that actualization in its modern definition has become a dialectic of the two, celebrating both egocentricity and the liberation from it.Erik Faerber

    This is an important point. Another way of looking at peak experience is psychological flow - https://en.wikipedia.org/wiki/Flow_(psychology)

    But the irony is that - neuroscientifically speaking - flow is not a transcendence of self-conscious levels of actualisation. It is more like letting go and running on learnt skill - automatic habit.

    So it is part of the dialectical or dichotomous design of the brain to balance habit against attention.

    And likewise - if we are discussing the human condition - again there is a dichotomy that is not a problem but instead an essential balancing act.

    So social structure is a balance of local competition and global cooperation. The individual (starting even with the parts of a person's own life, on up to families, communities, nations) has to have a competitive energy. But, from the nation down, there must also be a generalised cooperative structure.

    So a dialectic of differentiation and integration. What is natural is to be consciously self-actualising (looking out for yourself) within a social context that fosters generalised cooperation - the "automatism" of habits, laws, customs and other shared meaning.

    A self-actualisation that would seek to transcend its own social conditions is unnatural - which is one reason people find it disappointing. The nihilist superman lacks flow.
  • What is life?
    So there is nothing here to stop our common use of "life" being extended - indeed, I have several times explicitly said that definitions can be extended.Banno

    Yeah. Except rather than extended, they need to be differentiated. And so they can no longer be shared - each being a new choice.

    This goes back to what seems your fundamental misunderstanding about language use. A word does not have a definitional essence in an ostensive sense. It instead functions as an apophatic constraint on uncertainty.

    So a word like "life" or "cat" is already extended in that it covers anything even vaguely living or catlike. The word, as a sign, does not point at some definite collection of particular instances and nothing beyond that. It instead constrains our understanding in some generalised way that could be cashed out in any number of restricted senses. Many of which will be differences that don't make a difference.

    So if I am talking about some cat, it could be large or small, black or tabby, male or female, etc, unless I feel the need to specify otherwise, adding more words and thus more constraints on your state of interpretation.
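
    A toy sketch of that constraint view (the categories here are all hypothetical): treat each added word as a filter that rules interpretations out, rather than a pointer that picks one referent.

    ```python
    # Toy model of words as apophatic constraints (all names hypothetical).
    # Each word rules interpretations OUT; adding words narrows, but need
    # never fully determine, the referent.
    from itertools import product

    # The space of possible cats a hearer might be imagining.
    cats = [{"size": s, "coat": c, "sex": x}
            for s, c, x in product(["large", "small"],
                                   ["black", "tabby"],
                                   ["male", "female"])]

    def constrain(candidates, **specified):
        """Keep only interpretations not ruled out by the added words."""
        return [c for c in candidates
                if all(c[k] == v for k, v in specified.items())]

    print(len(cats))                                         # 8 - "cat" alone leaves all open
    print(len(constrain(cats, coat="tabby")))                # 4 - "tabby cat" halves it
    print(len(constrain(cats, coat="tabby", sex="female")))  # 2 - still open-ended
    ```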

    Thus there is some essence in play - the purpose that is my communicative content (as much as that is ever completely clear and not vague to oneself, even in some propositional statement). But the word can't carry some exact cargo of meaning from me to you. All we share is some history of learning to have our uncertain interpretations constrained to be near enough similar while still remaining creatively open-ended.

    The advantage of my semiotic or constraints approach is that it accounts for how meaning can be formed and conveyed without something specific, particular or actual existing by way of a referent. I might actually have in mind a black, male moggy. You might have in mind a tabby female. But it doesn't make a difference until it makes a difference.

    And that view has important consequences for truth theories, among other things. It also should explain why definitions matter as the way to bring out putative differences. We can't actually be agreeing in some positive fashion - as opposed to some accidental and undisclosed fashion - unless we have discovered and articulated a possible point of disagreement.

    This cat we are talking about - what's its colour, age and gender? Let's see if we still have the same referent in mind. And if it doesn't matter, then it is not essential. The essence remains at the greater level of generality which is simply what we mean by "cat".

    So the existence of essence is demonstrated by applicability of generality. Reference can be open-ended or "already extended" because - dichotomously - it is also anchored by an apophatic generality. We know that cats aren't dogs or fungi or rocket-launchers as those other general alternatives are ruled out by some abstracted cat-essence.

    And while common usage does seem to get catness by perceptual abstraction (some acceptable combination of traits), science can pin that down with greater ontological rigour. It can say that evolution actually does create genetic lineages - actual constraints encoded in genetic information. So we can start to measure cat-ness in a way that can be quantified as some distance separating cats and pumas, and then more generally, leopards and panthers (although confusingly - hu! - leopards are phylogenetically Panthera rather than Leopardus).
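
    To make the quantification idea concrete, a minimal sketch with made-up sequences (not real genetic data): lineage distance at its crudest is just a count of differences between aligned strings.

    ```python
    # Minimal sketch of quantifying lineage distance (toy, invented sequences).
    # Real phylogenetics aligns whole genomes; here we just count mismatches
    # between equal-length strings (Hamming distance).
    def hamming(a: str, b: str) -> int:
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    # Hypothetical gene fragments - NOT real data.
    cat     = "ACGTACGTAC"
    puma    = "ACGTACGAAC"   # one difference from cat
    leopard = "ACCTATGAAC"   # more differences

    print(hamming(cat, puma))     # 1
    print(hamming(cat, leopard))  # 3
    ```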

    It is actually very important that - from Aristotle on - we seek to name the forms of the natural world in this essentialist fashion. The subsumptive hierarchy always seemed completely logical. And so it was discovered to be. Evolution reads like a forking tree of differences that made a difference because something must divide one species from another at the level of actual historical information. We don't just socially construct the meanings of words. We can hope to asymptotically approach the world's essential divisions by seeking out the constraints that got refined by differences that made a difference.
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    Get over yourself. My point was that antinatalist debates on PF pretend to speak for common human experience yet are rather unrepresentative of the variety of both human culture and gender.

    The fact you didn't even acknowledge the cultural specificity of your response shows you didn't get the point.

    I know that birth rates rise and fall with respect to other social factors.Bitter Crank

    Of course. This is well studied. People have lots of children where that seems like a rational socioeconomic investment. Then stop having lots of kids when investing in an education and career makes more socioeconomic sense.

    So both choices would be "self-actualising" on the same grounds, even if the choices themselves end up dramatically different.

    If that is the argument you want to make, the papers are out there.
  • What is life?
    Hu?Banno
    English seems to have been now completely subtracted from the statement as it first appeared. Curious. Perhaps English wasn't the language of logic after all?

    But now we have to figure out what "hu" means in some private language. Guesses anyone? Could it be...

    Hu (ḥw), in ancient Egypt, was the deification of the first word, the word of creation, that Atum was said to have exclaimed upon ejaculating or, alternatively, his self-castration, in his masturbatory act of creating the Ennead.

    A masturbatory exclamation? Well, quite possibly. So not hu? but hu! ;)
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    I guess only US women represent "real women" then? The USA is 4.4% of the world population, but hey, you guys and gals get to speak for humanity. And if you need to balance your demographics, you will import the children of other countries as economically required.

    Sounds legit.
  • What is life?
    I simply cannot get away from the idea that the material instability you describe (providing a mechanism for information to express through) is actually deterministic causation expressing itself in a complex way which only gives the appearance of indeterminacy.VagabondSpectre

    Well there are two levels to the issue here.

    What I was highlighting was the surprising logic (for those used to expecting a biological requirement for hardware stability) that says in fact life requires its hardware to be fundamentally bistable - poised at the critical edge of coming together and falling apart. That way, semiotics - some message - can become the determining difference. Information can become the cause of thermal chaos becoming more efficiently organised as a dissipative flow.

    So regardless of whether existence itself is "deterministic", biology may thrive because it can find physics poised in a state of radical instability, giving its stored information a way to be the actual determining cause for some process with an organised structure and persistent identity.

    Then there is the question of whether existence itself is deterministic - or instead, perhaps, also a version of the same story. Maybe existence is pan-semiotic - dependent on the information that can organise its material criticality so that we have the Universe as a dissipative structure that flows down an entropic gradient with a persistent identity, running down the hill from a Big Bang to a Heat Death.

    I realise that metaphysical determinism is an article of faith for many. It is part of the package of "good ideas" that underpins a modern reductionist view of physical existence. Determinism goes with locality, atomism, monadism, mechanicalism, logicism, the principle of sufficient reason. Every effect must have its cause - its efficient and material cause. So spontaneity, randomness, creativity, accident, chaos, organicism, etc, are all going to turn out to be disguised cases of micro-determinism. We are simply describing a history of events that is too complicated to follow in its detail using some macro-statistical level method of description.

    So we all know the ontic story. At the micro-scale, everything is a succession of deterministic causes. The desired closure for causality is achieved simply by the efficient and material sources of change - the bottom-up forces of atomistic construction.

    Now this is a great way of modelling the world - especially if you mostly want to use your knowledge of physics to build machines. But even physics shows how it runs into difficulties at the actual micro-scale - down there among the quantum nitty-gritty. What causes the atom to decay? Is it really some determining nudge or do we believe the strongly supported maths that says the "collapse of the wavefunction" is actually spontaneous or probabilistic?

    So it is simply an empirical finding - that makes sense once you think about it - that life depends on the ability of information to colonise locations of material instability. Dissipative structure can be harnessed by encoded purpose, giving us the more complex phenomenon we call life (and mind).

    And then determinism as an ontic-level supposition is also pretty challenged by the facts of quantum theory. That doesn't stop folk trying to shore up a belief in micro-determinism despite the patent interpretive problems. But there are better ontologies on the table - like Peircean pragmatism.

    In brief, you can get a pretty deterministic looking world by understanding material being to be the hylomorphic conjunction of global (informational) constraints and local (material) freedoms.

    So when some event looks mechanically determined, it could actually be just so highly constrained that its degrees of freedom or uncertainty are almost eliminated.

    Think of a combustion engine. We confine a gas vapour explosion within some system of cylinders, valves, pistons, cranks, etc. Or a clock where a wound coiled spring is regulated by the tick-tock of a swivelling escapement. A machine can always just spontaneously go wrong. The clock could fall off the wall and smash. Your laptop might get some critical transistor fried by a cosmic ray. But if we are any good as designers - the people supplying the formal and final causes here - we can engineer the situation so that almost all sources of uncertainty are constrained to the point of practical elimination. A world that is 99% constrained, or whatever the margin required, is as good as ontically determined.

    So that would be the argument for life. Molecular chemistry and thermodynamics don't have to be actually deterministic. They just have to be sufficiently constrained. The two things would still look much the same.
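
    A toy simulation of the point (my own sketch, assuming nothing beyond the argument itself): the same noisy dynamics, with 99% of its fluctuation constrained away, traces a path practically indistinguishable from a determined one.

    ```python
    # Toy illustration: a noisy process whose fluctuations are nearly
    # eliminated by constraint looks, for all practical purposes, determined.
    import random

    def run(constraint: float, steps: int = 1000) -> float:
        """Drift of +1 per step, plus noise scaled down by the constraint."""
        x = 0.0
        for _ in range(steps):
            x += 1.0 + (1.0 - constraint) * random.gauss(0.0, 1.0)
        return x

    random.seed(0)
    print(abs(run(0.0) - 1000.0))    # order of tens: a visibly noisy trajectory
    print(abs(run(0.99) - 1000.0))   # order of 0.1: "as good as determined"
    ```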

    But there is an advantage in a constraints-based view of ontology - it still leaves room for actual spontaneity or accident or creative indeterminism. You don't have to pretend the world is so buttoned-down that the unexpected is impossible. You can have atoms that quantumly decay "for no particular reason" other than that this is a constant and unsuppressed possibility. You can have an ontology that better matches the world as we actually observe it - and makes better logical sense once you think about it.

    Although the pseudo-randomness of these unreliable switches can be incorporated into the functions of the data directly, (innovating new data through trial and error for instance (a happy failure of a set of switches)) at some level these switches must have some degree of reliability, else their suitability as a causal mechanism would be nonexistent.VagabondSpectre

    See how hard you have to strain? Any randomness at the ground level has to be "pseudo". And then even that pseudo instability must be ironed out by levels and levels of determining mechanism.

    But then why would life gravitate towards material instability or sources of flux? It defies logic to say life is there to supply the stabilising information if the instability is merely a bug and not the feature. If hardware stability is so important, life would have quickly evolved to colonise that instead.

    My ontology is much simpler. Life's trick is that it can construct the informational constraints to exploit actual material instability. There is a reason why life happens. It can semiotically add mechanical constraints to organise entropic flows. It can regulate because there is a fundamental chaos or indeterminism in want of regulation.

    Computers already do account for some degree of unreliability or wobbliness in their switches. They mainly use redundancy in data as a means to check and reconstruct bits that get corrupted. In machine learning individual "simulated neuronal units" may exhibit apparent wobbliness owing to the complexity of its interconnected triggers or owing to a pseudo-random property of the neuron itself which can be used to produce variation.VagabondSpectre

    Yep. Computers are machines. We have to engineer them to remove all natural sources of instability. We don't want our laptop circuitry getting playful on us as it would quickly corrupt our data, crash our programs.
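
    For concreteness, a minimal sketch of the redundancy trick mentioned in the quote - the simplest case being a majority-vote repetition code:

    ```python
    # Minimal sketch of redundancy-based error correction: the 3x repetition
    # code. Store each bit three times; a majority vote reconstructs the bit
    # even if one stored copy gets flipped by a "wobbly switch".
    def encode(bits):
        return [b for b in bits for _ in range(3)]

    def decode(stored):
        out = []
        for i in range(0, len(stored), 3):
            triple = stored[i:i + 3]
            out.append(1 if sum(triple) >= 2 else 0)
        return out

    data = [1, 0, 1, 1]
    stored = encode(data)
    stored[4] ^= 1                  # a cosmic ray flips one copy of the second bit
    assert decode(stored) == data   # the corruption is ironed out
    ```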

    But biology is different. It lives in the real world and rides its fluxes. It takes the random and channels it for its own reasons.

    You then get the irony of neural network architectures where you have fake instability being mastered by the constraint of repeatedly applied learning algorithms. The human designer seeds the network nodes with "random weights" and trains the system on some chosen data set. So yes, that is artificial life or artificial mind - based on pretend hardware indeterminism and so different in an ontologically fundamental way from a biology that lives by regulating real material/entropic indeterminism.
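
    A minimal sketch of that designer's move (toy numbers throughout): seed random weights, then let a repeatedly applied learning rule constrain them until the behaviour is fully determinate.

    ```python
    # Toy sketch: start from "fake instability" - random weights - and let a
    # repeatedly applied learning rule constrain them until behaviour is fixed.
    import random

    random.seed(1)
    w = [random.uniform(-1, 1) for _ in range(2)]   # designer-seeded randomness
    b = random.uniform(-1, 1)

    # Train a perceptron on AND - the chosen data set.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(50):
        for (x1, x2), target in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - y
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err

    print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
           for (x1, x2), _ in data])   # [0, 0, 0, 1] - the randomness trained away
    ```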

    ...which then gives way to intracellular mechanisms, then to the mechanisms of DNA and RNA, and then to the molecular and atomic world.VagabondSpectre

    But you went sideways to talk about DNA - the information - and skipped over the actual machinery of cells. And as I say, this is the big recent revolution - realising the metabolic half of the cellular equation is not some kind of chemical stewpot but instead a highly structured arrangement of machinery. And this machinery rides the nanoscale quasi-classical limit. It sits exactly at the scale where it can dip its toe in and out of quantum scale indeterminacy.

    This is why I suggest Hoffmann's Life's Ratchet as a read. It gives a graphic understanding of how the quasi-classical nanoscale is a zone of critical instability. You get something emergently new at this scale which is "wobbling" between the purely quantum and the purely classical.

    So again, getting back to our standard ontological prejudices, we think that there are just two choices - either reality is classical (in the good old familiar deterministic Newtonian sense) or it is weirdly quantum (and who now knows how the fuck to interpret that?). But there is this third intermediate realm - now understood through thermodynamics and condensed matter modelling - that is the quasi-classical realm of being. And it has the precise quality of bistability - the switching between determinism and indeterminism, order and chaos - that life (and mind) need only harness and amplify.

    It is a Goldilocks story. Between too stable and too unstable there is a physical zone where you can wobble back and forth in a way that you - as information, as an organism - can fruitfully choose.

    So metaphysics has a third option now - which was sort of pointed to by chaos maths and condensed matter physics, but which is all too recent a scientific discovery to have reached the popular imagination as yet. (Well tipping points and fat-tails have in social science, but not what this means for biology or neuroscience.)

    Consider the hierarchy of mechanisms found in biological life: DNA is its base unit and all its other structures and processes are built upon it using DNA as a primary dynamic element (above it in scale).VagabondSpectre

    This just sounds terribly antiquated. Read some current abiogenetic theorising and the focus has gone back to membrane structures organising entropic gradients as the basis of life. It is a molecular machinery first approach now. Although DNA or some other coding mechanism is pretty quickly needed to stabilise the existence of these precarious entropy-transacting membrane structures.

    I suppose my main difficulty is assenting to indeterminism as a property of living systems for semantic/etymological/dogmatic reasons, but I also cannot escape the conclusion that a powerful enough AI built from code (code analogous to DNA, and to the structure of connections in the human brain) would be capable of doing everything that "life" can do, including growing, reproducing, and evolving.VagabondSpectre

    I do accept that we could construct an artificial world of some kind based on a foundation of mechanically-imposed determinism.

    But my point is that this is not the same as being a semiotic organism riding the entropic gradients of the world to its own advantage.

    So what you are imagining is a lifeform that exists inside the informational realm itself, not a lifeform that bridges a division where it is both the information that regulates, and the persistent entropic flux that materially eventuates.

    My semiotic argument is life = information plus flux. And so life can't be just information isolated from flux (as is the case with a computer that doesn't have to worry about its power supply because its humans take care of sorting that out).

    Now you can still construct this kind of life in an artificial, purely informational, world. But it fails in what does seem a critical part of the proper biological definition. There is some kind of analogy going on, but also a critical gap in terms of ontology. Which is why all the artificial-life/artificial-mind sci-fi hype sounds so over-blown. It is unconvincing when AI folk can't themselves spot the gaping chasm between the circuitry they hope non-entropically to scale up and the need in fact to entropically scale down to literally harness the nanoscale organicism of the world.

    Computers don't need more parts to become more like natural organisms. They need to be able to tap the quasi-classical realm of being which is completely infected by the kind of radical instability they normally do their very best to avoid.

    But why would we bother just re-building life? Technology is useful because it is technology - something different at a new semiotic level we can use as humans for our own purposes. So smart systems may be just smart machines ontically speaking, not alive or conscious, but that is not a reason in itself not to construct these machines that might exploit some biological analogies in a useful way, like DeepMind would claim to do.

    Specifically the self-organizing property of data is what most interests me. Natural selection from chaos is the easy answer, the hard answer has to do with the complex shape and nature of connections, relationships, and interactions between data expressing mechanisms which give rise to anticipatory systems of hierarchical networks.VagabondSpectre

    As I say, biological design can serve as an engineering inspiration for better computer architectures. But that does not mean technology is moving towards biological life. And if that was not certain before, it is now that we understand the basis of life in terms of biophysics and dissipative structure theory.
  • What is "self-actualization"- most non-religious (indirect) answer for purpose?
    If you think that this sounds about right, do you have your own critiques of the idea of purpose being self-actualization (or further, that it is good to bring more people in the world so they can become self-actualized)? If you think self-actualization is the summum bonum, why do you think so?schopenhauer1

    It would be interesting to hear from more women on the question. You might expect their sense of self-actualising purpose to be greater, no?

    Also, the notion of "selfhood" is socially-constructed as well as biologically-constrained. So there are notions of self that are about families, and lineages, or even villages, peoples and nations. To self-actualise could mean having kids to inherit the estate, continue the name, fulfill ambitions the parents couldn't.

    So being pregnant, giving birth, breast-feeding - at least half the population might count that as a natural completion of the self in terms of actualising a potential. Any antinatal argument ought to represent the realities for both sexes.

    And then self-actualisation doesn't have to mean being socially self-centred. People can feel there is a larger self in a family or community. So it is identity at that level that is worth perpetuating. Again, philosophy can't simply dismiss this natural seeming state as somehow an arbitrary impost. Humans clearly have the potential for a social level of identity. And thus it could be a purpose wanting its actualisation.
  • What is life?
    Perhaps we have three views: Meta advocating essence as a real thing that we can set out in terms of the necessary and exclusive attributes; you, with some notion of an asymptotic essence that we can approach but never quite reach; and I, with the view that essences are best ignored in favour of the examination of language use.Banno

    I take a broadly Peircean or systems view of essence. So it is real enough as the formal and final cause of being - the constraints or habits that shape material being. What's the great difficulty exactly?

    Perhaps your misunderstanding is the reductionist one of thinking the essence of things is some mysterious substantial property hidden within - like a spirit stuff. Have you studied metaphysics much?
  • What is life?
    Tell me, Apo, how do you get on with Meta? I can't say I've paid much attention to discussions between you two. Are you in agreement as to the nature of essences?Banno

    What does he think about essences? I can't say I've paid any attention to your discussions with him.
  • What is life?
    So you say. Naive realist I'm fine with; but what is a transcendent solipsist?Banno

    You talk as if you can know the world without making measurements. One only has to look and one sees (ignoring the fact that seeing the world is the forming of a phenomenological state that is our interpreted sign of the noumenal - we can't in fact sneak a peek directly).
  • What is life?
    Yep, it's a simple point. So why all the fuss?Banno

    You seem to forget what you were originally trying to argue....

    And that's the point I want to make; that when someone provides us with a definition we go through a process of verifying it; but what is it that we are verifying it against? We presume to be able to say if the definition is right or wrong; against what are we comparing it? Not against some other earlier definition, but against our common usage.

    And if we already accept that this common usage is the test of our definition, why bother with the definition at all?
    Banno

    Clearly you now accept this was confused as we do seek definitions that introduce new measurables.

    Our earlier "common usage" definitions come into question particularly when we come up against borderline classifications - like: "is a virus alive?". The vagueness or uncertainty we feel when answering is a sign that we now need to sharpen our definition by suggesting some new symmetry-breaking or dichotomous fork in the road by which we can measure what is what. An infected cell goes down this path to join the living, the virion fragment goes down the other path to join the class of the not alive, or whatever.

    So we want to know, as usual when facing indecision, what counts as the essential difference? What is the difference that makes a difference? And so, what were all the in fact inessential differences that might have been clouding our earlier "common usage" conception?
  • What is life?
    Where do you look, in order to determine that metabolism and replication are necessary and sufficient for life?

    Presumably, at things that are alive.

    It follows that you already know which things are alive before you set out this posited essence.
    Banno

    Uh, yeah. Just like folk once knew the mountains and rivers and stars were alive.

    That is the entirety of my objection to the framing of the question "What is life?" in terms of essences.Banno

    Yup. And even merely as an epistemological point, that is trite.

    So as humans we always find ourselves in the middle of some pragmatically-justified linguistic usage. Words work to structure our ontological expectations. Whatever follows is merely a more telling refinement of our language. That's obvious.

    But the issue at stake is the goal of inquiry - and whether it has some direction that ultimately targets ontological reality in an essentialist fashion.

    If you believe knowledge is merely socially constructed belief, then whatever stories we make up are whatever stories we make up. Refining our terms is not going to lead to any ultimate truths about existence.

    But science does ask after the abstract essence of things because historically it does appear to get us closer to the facts of the matter. This may be indirect realism - as science also understands it is modelling. But at least it's a realism that can hold its head up by neither being naive, nor collapsing into solipsism.

    As usual, your approach appears to leave you being simultaneously naive realist and transcendent solipsist. Not a good look.
  • What is life?
    What difference is there between claiming that a virus is alive, and claiming otherwise? What will we measure?Banno

    With a tighter biophysical definition of life, we would measure for evidence of an entropic flux being regulated by replicating information, and not merely the presence of information produced by replication.

    So rightfully, the virion is not alive by this definition. And this definition captures the metaphysical essence of what it takes to be "alive" - metabolism+replication.

    I'm baffled by what you seem to find so baffling about this. You seem to have embarked on some anti-essentialist rant without thinking the issues through.

    Is there some reason a sharper definition of living doesn't make a difference when it comes to viruses? You are implying that is your position. So in what way do you mean?

    The common folk may indeed think a viral infection is an evil humour or malignant spirit as a conventionalised alternative. But would you still want to insist it is linguistic usage all the way down or would you instead want to suggest there might be some actual fact of the matter?
  • What is life?
    What difference does it make?Banno

    The same old pragmatic one. We can measure the truth of what we claim to believe.
  • What is life?
    What's the issue with viruses? Why would one not consider a virus to be a form of life?Metaphysician Undercover

    Again, the issue that I raised was Banno's claim we can determine such questions without a definition which captures the essence of what makes the actual difference.

    Clearly common usage finds viruses a confusing border-line case. And a tighter definition in terms of infected cell vs inert particle then focuses the debate in useful fashion. It offers the sharper ontic boundary we seek when we can contrast virus and virion.
  • What is life?
    To answer the actual question about viruses, this is the official take - https://rybicki.wordpress.com/2015/09/29/so-viruses-living-or-dead/

    Just define virus as the infected cell - the whole thing of the living hijacked organism turned into a viral factory. Then the inert genetic particle we traditionally identify as an individual virus is the virion - the transmitted genetic package much like a bacterial plasmid or eukaryote sperm.
  • What is life?
    But that wasn't the point. The point was that you would need a definition that could decide such a question. Banno is arguing that standard usage of language is good enough.

    He said...

    We simply do not need to be able to present a definition of life in order to do biology.Banno

    But any biologist would tell him that is ridiculous. :)
  • What is life?
    Is a virus alive then?
  • What is life?
    And if we already accept that this common usage is the test of our definition, why bother with the definition at all?Banno

    Because obviously we call for a definition because we want to narrow that common usage in some useful fashion. We want to eliminate some sense of vagueness that is in play by taking a more formally constrained view. And that has to start by a reduction of information. We have to put our finger on what is most essential. Then we have some positive proposition that we can seek to verify (or at least fail to falsify via acts of measurement).

    If we accept common usage, then yes, no problem. The usage already works well enough. But common usage is always in question - at least for a scientist or philosopher who believes knowledge is not some static state of affairs but the limit of a shared community of inquiry.
  • What is life?
    The kind of computation to which I refer isn't just basic computation; "deep learning" is an example of the type of computation that I would compare to life because the organizational structure of its data points (a structure which emerges as the machine learns on its own) is well beyond the complexity threshold of appearing to operate non-deterministically.VagabondSpectre

    So my argument is that what is essential to a semiotic definition of life is that life is information which seeks out material instability. It needs chemical structure poised on a knife edge as that is what then allows the information to act as the determining influence. That is the trick. Information can be the immaterial part of an organism that gives the hardware just enough of a material nudge to tip it in the desired directions.

    So yes, neural computer architectures try to simulate that. They apply some universal learning algorithm to a data set. With reinforcement, they can erase response variety and arrive at the shortest path to achieve some goal - like win a computer game. There is something life-like there.
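
    As a toy illustration of that variety-erasing dynamic (my own sketch, not DeepMind's actual method): tabular Q-learning on a five-cell corridor starts with varied exploration and ends with only the shortest path surviving.

    ```python
    # Toy reinforcement learning: Q-learning on a 5-cell corridor. Early
    # behaviour is varied exploration; learning erases that variety until
    # only the shortest path to the goal survives.
    import random

    random.seed(0)
    N, GOAL = 5, 4
    Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

    for episode in range(200):
        s = 0
        while s != GOAL:
            # Epsilon-greedy: response variety early, habit later.
            if random.random() < 0.2:
                a = random.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 1.0 if s2 == GOAL else -0.01   # small cost per wasted step
            Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
            s = s2

    # The learned greedy policy: always step right - the shortest path.
    print([max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(N - 1)])  # [1, 1, 1, 1]
    ```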

    But note that you then think that to become more life-like would involve a scaling up - adding more information processing to head in the direction of becoming actually conscious or intelligent.

    I instead would be looking to scale down.

    Your DeepMind is still a simulation running on stable hardware and thus merely software insulated from the real world of entropic material processes. Sure, we can imagine the simulation being coupled to the world by some system of actuators or mechanical linkages. The program could output a signal - like "fire the missile". That could flick a switch that triggers the action. But it is essential that the hardware doing this job is utterly deterministic and not at all unstable. Who wants nukes hooked up to wobbly switches?

    So while DeepMind might build a simulation of a learning system that feeds off the elimination of variety - and thus has to deal with its own artificial instability, the catastrophic forgetting problem - it still depends on deterministic devices outside its control to interface with the world. A different bunch of engineers is responsible for fabricating the reliable actuators that can take an output and turn it into the utterly reliable trip of the switch. I mean it makes no difference to the DeepMind computation whether anything actually happens after it has output its signal. A physical malfunction of the switch is not its responsibility as some bunch of humans built that part of the total system. DeepMind hasn't got the wits to fix hardware level faults.

    But for life/mind, the organism is sensitive to its grounding materiality all the way down to the quasi-classical nanoscale. At the level of synapses and dendrites, it is organic. The equilibrium balance between structural breaking down vs structural re-constructing is a dynamic being influenced by the global state of the system. If I pay attention to a dancing dot on a screen, molecular-level stuff is getting tipped in one direction or another. The switch itself is alive and constantly having to be remade, and thus constantly also in a state of anticipatory learning. The shape some membrane or cytoskeletal organisation was in a moment ago is either going to continue to be pretty much still right or competitively selected for a change.

    So my argument is that you are looking in the wrong direction in seeking a convergence of the artificial with the real. Yes, more computational resources would be necessary to start to match the informational complexity of brains. But that isn't what convergence looks like. Instead, the technology has to be pushed in the other direction - down to the level where any reliance on outside help for hardware stability has been squeezed out of the picture and replaced by an organismic self-reliance in directing the transient material flows on which life - as dissipative structure - depends.

    Life and mind must be able to live in the world as information regulating material instability for some remembered purpose. It has to be able to stand on its own two feet entirely to qualify as life (as I said about a virus).

    But that is not to say that DeepMind and neural network architectures aren't a significant advance as technology. Simulated minds could be very useful as devices we insert into tasks we want to automate. And perhaps you could argue that future AI will be a new form of life - one that starts at some higher level of semiosis where the entropic and material conditions are quite different in being engineered to be stable, rather than being foundationally unstable.

    So yes, there may be "life" beyond life if humans create the right hardware conditions by their arbitrary choice. But here I am concerned to make clear exactly what is involved in such a step.

    I do understand the non-linearity of development in complex and chaotic systems. Events may still be pre-determined but they may not be predicted in advance because each sequential material state in the system contains irreducible complexity, so it must be played out or simulated to actually see what happens. (like solving an overly-large equation piece by piece because it cannot be simplified).VagabondSpectre

    It still needs to be remembered that mathematical chaos is a model. So we shouldn't base metaphysical conclusions on a model without taking account of how the model radically simplifies the world - by removing, for instance, its instabilities or indeterminacies.

    So a reductionist takes a model that can construct "chaos" deterministically at face value. It does appear to capture much about how the world works ... so long as the view is grainy or placed at a sufficient distance in terms of dynamical scale. If you average, you can pretend that spontaneous fluctuations have been turned into some steady-state blur of action. So while analytic techniques fail (the real world is still a mess of chance or indeterminism), numeric techniques just take the assumed average and get on with the computation.

    So chaos modelling is about eliminating actual complexity - of the semiotic kind - and replacing it with mere complication. The system in question is granted known boundary conditions and some set of "typical" initial conditions are assumed. With the simulated world thus sealed at both ends, it becomes safe for calculation. All you need is enough hardware to run the simulation in the desired level of detail.
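
    The logistic map is the stock example of what such a sealed model looks like: a fully deterministic update rule that generates "chaos", along with the sensitivity that makes any real-world uncertainty fatal to long-range prediction.

    ```python
    # The classic deterministic-chaos toy: the logistic map at r = 4.
    # The rule is fully deterministic, yet two initial conditions that
    # differ by 1e-10 diverge until the difference is no longer small.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.2, 0.2 + 1e-10   # indistinguishable by any real measurement
    for step in range(60):
        x, y = logistic(x), logistic(y)

    print(abs(x - y))   # the 1e-10 difference has blown up to macroscopic size
    ```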

    Machines which we build using mostly two-state parts with well defined effects are extraordinarily simple compared to those which seem to emerge on their own (using dynamic parts such as inter-connected memory cells with many states or strings of pairs of molecules which exhibit many different behaviors depending on their order). Even while I recognize the limits on comprehending such machines using a reductionist approach, I cannot help but assume these limitations are primarily owing to the strength of the human mind.VagabondSpectre

    This is in fact the big surprise from modern biophysics - at the ground level, life is far more a bunch of machinery than we ever expected. Fifty years ago, cells seemed like bags of chemical soup into which genes threw enzymes to make reactions go in desired directions. Now it is being discovered that there are troops of transport molecules that drag stuff about by walking them along cytoskeletal threads. Membranes are full of mechanical pumps. ATP - the universal energy source - is charged up by being cranked through a rotating mill.

    So in that sense, life is mechanism all the way down. It is far less some soup of chemistry than we expected. Every chemical reaction is informationally regulated.

    But the flip side of that is that this then means life is seeking out material instability at its foundational scale - as only the unstable could be thus regulated by informational mechanism.

    If you are at all interested, Peter Hoffmann's Life's Ratchet is a brilliant read on the subject. Nick Lane has done a number of good books too.

    So there are two things here. You are talking about the modelling of informational-level complexity - the kind of intricate patterns that can be woven by some network of switches regulated by some set of rules. And there is a ton of fun mathematics that derives from that, from cellular automata and Ising models, to all the self-organising synchrony and neural network stuff. However that all depends on switches that are already behaving like switches - ie: they are deterministic and they don't add to the total complexity by "having a mind of their own".
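
    An elementary cellular automaton is about the smallest example of that fun mathematics - every switch perfectly deterministic, yet intricate pattern emerging from the update rule alone (Rule 110 here):

    ```python
    # Elementary cellular automaton (Rule 110): deterministic switches,
    # emergent structure. Each cell's next state is read off one bit of
    # the rule number, indexed by its (left, self, right) neighbourhood.
    RULE = 110
    cells = [0] * 31 + [1] + [0] * 31          # a single live cell

    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (RULE >> (4 * cells[(i - 1) % len(cells)]
                      + 2 * cells[i]
                      + cells[(i + 1) % len(cells)])) & 1
            for i in range(len(cells))
        ]
    ```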

    But I am talking about life and mind as a semiotic process where the hardware isn't deterministic. In fact, it mustn't be deterministic if that determinism is what the information processing side of the equation is hoping to supply.

    And where are our pretty software models to simulate that kind of world? Or rather, where are our actual "machines" that implement that semiotic notion as some actual device? In technological terms, we can do a fantastic amount of things at the software simulation level. But can we do anything life-like or mind-like at the self-assembling hardware actuality level?

    Hell no. It's only been about 10 years that biology has even begun to grasp that this is such an issue.
  • What is life?
    I think that the biophysical discoveries of the past 15 years - the new and very unexpected detail we have about the molecular machinery of cells - really explains how life and computation are deeply different.

    To sum that up, the reductionist view you just expressed hinges on the belief that the physics or hardware of the system is a collection of stable parts. Even if we are talking about circuits that can be switched, they stay in whatever state they were last left in. You can build up a hierarchy of complexity - such as the layers of microcode and instruction sets - because the hardware operates deterministically. It is fixed, which allows the software to flex. The hardware can support any programme without being the slightest bit bothered by anything the software is doing.

    But biology is different in that life depends on physical instability. Counter-intuitively, life seeks out physical processes that are critical, or what used to be called at the edge of chaos. So if you take any protein or cellular component (apart from DNA with its unusual inertness), as a molecule it will always be on the edge of falling apart ... and then reforming. It will disassociate and get back together. The chemical milieu is adjusted so that the structural components are poised on that unstable edge.

    And the big trick is that the cell can then use its genetic information to give the unstable chemistry just enough of a nudge so the parts rebuild themselves slightly more than they fall apart. This is the semiotic bit. Life is information that sends the signal to hang together. And it is the resulting flux of energy through the system - the dissipative flux - that keeps the componentry rebuilding.

    So computers have stable hardware that the software can forget about and just crunch away. If you are equating the program with intelligent action, it is all happening in an entirely different world. That is why it needs biological creatures - us - to write the programmes and understand what they might be saying about the world. To the programmes, the world is immaterial. They never have to give a moment's thought to stopping the system of switches falling apart because they are not being fed by a flux of entropy.

    Life is then information in control of radical physical instability. That is what it thrives on - physics that needs to be pointed in a direction by a sign, the molecules that function as messages. It has to be that way as cellular components that were stable would not respond to the tiny nudges that signals can deliver.
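
    A toy version of that nudge (my sketch of the general idea, not Hoffmann's actual model): a structure poised 50/50 between assembling and disassembling just dithers, while a few percent of informational bias reliably tips it into steady construction.

    ```python
    # Toy "nudge" model: a component poised between adding and shedding
    # subunits. Unbiased, it dithers around zero; a tiny bias - the
    # informational signal - reliably tips the direction of growth.
    import random

    def grow(bias: float, steps: int = 10000) -> int:
        """Each step the structure either gains or loses one subunit."""
        length = 0
        for _ in range(steps):
            length += 1 if random.random() < 0.5 + bias else -1
        return length

    random.seed(0)
    print(grow(bias=0.0))    # near zero: falling apart as fast as it forms
    print(grow(bias=0.02))   # a 2% nudge yields steady net construction (~400)
    ```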

    This leads into the other counter-intuitive aspect of life and mind - the desire for a general reduction in actual information in a living system.

    Again, with computation, more data, more detail, seems like a good thing. As you say, to model a physical process, the level of detail we need seems overwhelming. We feel handicapped because to get it right, we have to represent every atom, every event, every possibility. In principle, universal computation could do that, given infinite resources. So that is a comfort. But in practice, we worry that our representations are pretty sparse. So we can make machines that are somewhat alive, or somewhat intelligent. However to complete the job, we would have to keep adding who knows how many bits.

    The point is that computation creates the expectation that more is better. However when it comes to cellular control over falling apart componentry, semiotics means that the need is to reduce and simplify. The organism wants to be organised by the simplest system of signals possible. So information needs to be erased. Learning is all about forgetting - reducing what needs to be known to get things done to the simplest habits or automatic routines.

    This then connects to the third way biology is not like computation - and that is the way life and mind are forward modelling systems. Anticipatory in their processes. So a computer is input to output. Data arrives, gets crunched, and produces an output. But brains guess their input so as to be able to ignore what happens when it happens. That way anything surprising or novel is what will automatically pop out. In the same way, the genes are a memory that anticipates the world the organism will find itself in. Of course the genes only get it 99% right. Selection then acts to erase those individuals with faulty information. The variety is reduced so the gene pool gets better at anticipation.
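
    A toy anticipatory loop (my illustration of the idea) makes the contrast with input-to-output crunching visible: predict the input, pass on only the prediction error, and the routine cancels to nothing while the surprise pops out.

    ```python
    # Toy anticipatory system: predict the input, report only prediction
    # error, and let routine signals cancel so novelty "pops out".
    stream = [5.0] * 20 + [9.0] + [5.0] * 10   # steady input with one surprise

    prediction = 0.0
    for t, x in enumerate(stream):
        error = x - prediction        # what the forward model got wrong
        prediction += 0.5 * error     # update the guess: habit formation
        if abs(error) > 1.0:          # only surprises reach "attention"
            print(f"t={t}: surprise, error={error:+.2f}")
    # Prints a few lines while the habit forms, then only the blip at t=20.
    ```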

    So life is unlike the reductionist notion of machinery in seeking out unstable componentry (as that gives a system of signals something to control). And at the "software" or informational level, the goal is to create the simplest possible control routines. Information needs to be erased so that signal can be distinguished from noise. It is just the same as when we draw maps. The simpler the better. Just a few lines and critical landmarks to stand for the complexity of the world.
  • What is life?
    But what molecular machinery does a virus have? It has no ribosomes or mitochondria or any of the other gear to construct an organismic economy. It doesn't even have the genetic information to code for that machinery.

    So I am not too fussed about whether to define a virus as alive. It is OK that it is on the margins of the definition in being a genetic fragment that can hijack proper organismic complexity. Problems only arise in thinking that the simplicity of a virus might make it a stepping stone precursor that marks the evolutionary path from the abiotic to the biotic. I mean you wouldn't treat cancer as a simpler lifeform, or an amputated leg.

    Then you are self sustaining in the "closed for causality" fashion I specified. You have your own respiratory machinery for burning or oxidising electron sources. You don't hijack the respiratory machinery of plants. You take that intricate living machinery and metabolically burn it. It's usually pretty dead by the time it gets into your stomach. A virus needs a living host. You just need chemical bonds you can crack for their energy.

    A computer virus is an analogy for a real virus, but computers - of the regular Turing Machine kind - are nothing like life. As I said, they lack the qualities that define an organism. And thinking in terms of organisms does usefully sharpen up what we - or biologists - mean by life.

    Life (like mind) still has echoes of a vitalistic ontology - the presence of some generic spirit that infects the flesh to make it reactive. Talking about organisms ensures that structural criteria - like being closed for causality in terms of embodying a purpose with efficient means - are top of mind. We are paying attention to the process of how it is done rather than treating life as some vague reactive matter.
  • What is life?
    I'm not asking you to define life, I'm asking you to give me an example of anything which could plausibly be agreed to as life which also happens to be uncomplicated. I don't have a good answer as to why life needs to be complex, it just is. Maybe because simple things never do anything intelligent. I don't know, the answer is complex.VagabondSpectre

    A biologist would define life semiotically. That is, a line is crossed when something physical, like a molecule, can function as something informational, like a message.

    At the simplest level, that is a bit of mechanism like a switch. A recipe read off a strand of DNA gets made into a molecular message that then changes the conformation of a protein complex and leads to some chemical reaction taking place.

    Of course, we then want to think of life in terms of individual organisms - systems that are closed circuits of signals. There has to be some little sealed world of messages being exchanged that thus gives us a sense of there being a properly located state of purpose. An organism is a collection of semiotic machinery talking to itself in a way that makes for a definite inside and outside.

    So what that says is even the simplest semiotics already has to make that leap to a holistic complexity. It becomes definitional of an organism that it has its particular own purpose, and thus it individuates itself in meaningful fashion from the wider world. A network of switches is the minimal qualification. And that is why a virus seems troubling. We can't really talk about it as an "it" because it is not self-sustaining in that minimal fashion. It is a bare message that hijacks other machinery.

    Computers then fail the definition for life to the degree that they are not organismic. Do they have their own purpose for being - one individuated from their makers? Do they regulate their own physics through their messages? (Clearly not with a normal computer which is designed so the software lives in a different world to its hardware.)

    So the semiotic or organismic view defines life and mind fairly clearly by that boundary - the moment a molecule becomes a message. But for that to happen, a messaging network with some closed local purpose must already be in place. To be a sign implies there is some existent habit of interpretation. So already there is irreducible complexity.

    This can seem a troubling chicken and egg situation. On the other hand, it does allow one to treat life as mind-like and intelligent or anticipatory down to the most primitive level. When biologists talk about information processing, they really do mean something that is purposeful and meaningful to an organism itself.
  • Aphantasia and p-zombies
    We seem at cross purposes. I wasn't talking about the ability to imagine the sounds, sights or feelings of language as such. But sure, braille is another possibility. You could have a feeling of bumps under your fingertips as the equivalent of an inner voice.
  • Aphantasia and p-zombies
    I'm sure Helen Keller was able to conceive of things without utilizing visual or auditory signs.Marchesk

    In her case, it would have helped that she was hearing and sighted until she was two. And before she was taught a finger-spelling system by Annie Sullivan, she was using her own made-up system of signs, like a shiver for ice-cream and miming putting on glasses for her father. So there was a neural basis established for both language and those conceptual modalities.
  • Aphantasia and p-zombies
    Um yeah. I think I will file that under "more fellating".
  • Aphantasia and p-zombies
    But the proper response wouldn't be to confirm or deny the possibility of visualizing an abstract triangle, but to say that dealing with abstract triangles involves a capacity different than 'visualization'. As in: you can't apply the sense-impression model here.csalisbury

    As usual, the conventional thing is to try to deal with a dichotomy by reducing it to one or the other of two options. So in labelling the mind, either we are dealing with one common faculty or two very different ones.

    Yet my point is that a dichotomy - a symmetry breaking that leads to hierarchical organisation - is in fact the proper natural option. The brain is organised by this logic. And so that is what our language would most fruitfully capture. Rather than fighting the usual lumper vs splitter battles, we should be amazed if any "mental faculty" wasn't divided in this mutually complementary fashion. That is the only way anything could exist in the first place. The idea of one hand clapping makes no sense.

    So yes, I have to use conventional language to communicate here. I have to talk about abstract vs concrete, or conceptual vs perceptual - the terms of art of philosophy. But I don't actually think about brain architecture in the literally divided fashion this implies. I would prefer systems jargon like talk of global constraints and local freedoms. But I know also where using that outsider jargon gets me. :)

    Anyway, the whole sense-impression deal gets you into a computational/representational model of mind that my ecological and anticipatory modelling approach rejects. Which makes my understanding of conception very different too. And it is a fact both phenomenological and theoretical to me that abstraction in thought involves the relaxation of constraints.

    So a proper mathematical conception of triangleness does shed specific details and yet still leaves behind a "Cheshire Cat's grin" as its "true gist" that I can then manipulate in ways that - under a brain scanner - will show up as concrete activity in expected places. Or in fact - as it ceases to be an effort with practice - the activation involved shrinks to the point where those parts of the brain hardly seem to be doing anything much.

    Think of expert chess players who can see all the dangers and opportunities with a glance at the board. They don't have to work stepwise through a succession of future moves - like a computer. The general patterns are immediately obvious. They can focus in on some narrower gameplay and visualise that in the level of detail required.

    And all mental activity is like that. A rough gist is good enough to get started - throw down the preliminary detail-light sketch. Then flesh that out with detail as required. Add in the information that is further constraint on various uncertainties. It's standard engineering - hierarchical decomposition from broad intention to exact model.

    If we start to build computers that think the same way, then we might need to get worried.
  • Aphantasia and p-zombies
    I bow to your expertise. God knows why I ever had my own neuroscience column in a major journal.
  • Does it all come down to faith in one's Metaphysical Position?
    None of Plato's dialogues ended in confusion? Not even Theaetetus? What of Aporia?anonymous66

    Well, the Theaetetus is simply wrong in treating rationality as the memory of eternal ideas. Although it could then be regarded as essentially right if you understand Plato's argument less literally as making an early ontic structural argument. But either way, I don't think it is confused. It made a thesis in concrete enough fashion to become a long-lived metaphysical notion. You can understand it well enough to dispute it.

    And likewise, demonstrations of aporia are a positively instructive fact of epistemology. They show how ill-founded many common beliefs are - because they are essentially vague ideas and so fall into the class of "not even wrong". The confusion lies not in Plato's dialectics but in the weak arguments that have to be got past.
  • Aphantasia and p-zombies
    Not according to what I've read about him, or his brain. Why disagree?Wosret

    OK, granted there is the suggestion his Broca area was "funny" in a way that would explain a developmental delay in expressive speech. But what I mean - and why I disagree - is that Einstein was pretty articulate as an adult. And it would be just as plausible that part of being very smart when young is that it can inhibit attempts to speak because the ideas are bigger than the capacity to put them into words.

    So see for example this summary of his alleged language difficulties, which both still allows for some possible neuro-structural reason for delayed expressiveness, and yet gives evidence for top-of-the-class level performance - http://www.albert-einstein.org/article_handicap.html
  • Aphantasia and p-zombies
    As I mentioned with Einstein, he had a big visual cortex, and wasn't as great with language.Wosret

    He was great with language. Just slow to start speaking. And the suggestion is that he had a "well developed" inferior parietal lobe - which chimes well with the idea that this is a high-level area for spatial imagery. So the mathematical ability to manipulate rather abstracted geometric directions in your head.

    The opposite would be a poor ability to spatially manipulate. And that seems borne out by the fairly recent recognition of dyscalculia as an academic handicap. People can't learn to tell the time or master maths easily, and there is some brain scan evidence to link that to the same part of the brain.

    So talking of visual imagination, there would be two broad divisions with their own variation to start with. You have the parietal "where" pathway for all aspects of imagining spatial relations. And then the temporal "what" pathway where object identification takes place and so also the generation of concrete imagery of things. A weakness in one could be associated with a strength in the other. My daughter has dyscalculia and yet has photographic level ability as an artist.

    Wasn't the point about visualizing abstract triangles that you'd have to visualize a particular triangle (with certain angles etc)?csalisbury

    This is another relevant dichotomy of brain design. Abstraction is about being able to forget the concrete details to extract the essence. So if you listen to people like Einstein describe their creative process, they do stress that it does feel like an imageless mental manipulation of pure possibility - a kind of juggling of shapes and relations which aren't specific.

    So in one sense, this is a powerful imaginative faculty - to be able to think in a concrete fashion about the juggling of generalities. Instead of picturing a triangle, you have to be able to picture "triangleness". And you have to suppress or shed the literal detail to get there. You have to be able to vividly ignore as much as vividly imagine, you could say.

    Think also of tip-of-the-tongue experiences, or the moment when you know you have cracked some puzzle before you actually spit out the full answer. The brain is divided between its high-level abstract conceptions and its low-level fleshed-out concrete impressions. So with just that "first inkling" where it all clicks into place in an abductive way - where we already know it is bound to work out as the solution - most of the intellectual work is already done.

    This is why thought can often seem wordless and imageless - we already know where that snap of connections was going to lead, and can move on before it gets said in the inner voice or pictured in the visual cortex. Why dwell on experiencing in an impressionistic way what we feel conceptually secure about?

    Though of course, actually slowing down to flesh out a thought and experience it that way - as when typing it out as a post and wondering if it still makes as much sense - is pretty important for our thinking to be more than a rapid habitual shoot-from-the-hip response.

    Thought and consciousness are wonderfully various activities. And again, that complexity maps to known neuroscience and even rational (dialectical) principles.

    An unimportant aside, but that's how I first got into dichotomies and hierarchies. It just ended up screaming at you from the neuroanatomy. It is the logic that shapes the architecture of the brain from the first neuron and its receptive field design.
  • Aphantasia and p-zombies
    This hypnagogic imagery has the usual pedestrian neurological explanation. Falling asleep involves a gating of outside sensation at the level of the brainstem. Suddenly starved of a flow of stimulation, the brain tries to imagine the world that has just gone missing. Hence the common experience of a sudden sense of falling - noticing the absence of the expected sensations of being a body weighed down by gravity in the usual way.

    Also part of falling asleep is the internal disconnection that does away with an integrated state of attention feeding a digested view of the world into short-term, then long-term memory. So a shut down of higher executive functions. That is why the dream imagery bubbling up is a series of fragmentary and loosely associative impressions coalescing.

    In actual REM dreams, we are physiologically aroused enough in terms of our habits of executive function to try and chase a meaning. There is an inner-voice attempt to narratise and give the usual discursive shape to our flow of experience.

    But in hypnagogia - that specific instant of falling asleep - there is just the bare dream imagery, as the narrative function has to let go of the day. So - once you are primed for its existence and have had some practice at reawakening enough to catch it and fix it retrospectively - it has an even greater naked intensity than partially narratised REM dreams.

    So as for waking powers of visualisation, this always has to compete with a flow of incoming sensation. It is thus more the other way round. It is a conscious attentional effort to conjure up such imagery - the kind of narrative daydreams we might entertain ourselves with. But then you can do that even when driving your car in busy traffic - so long as the world is predictable enough to hand that off to your subconscious or habit-level brain to deal with (the basal ganglia-level motor control) while you dwell in private fantasies and mental imagery of "elsewhere".

    There is plenty of other neuroscience to explain the phenomenology. It takes about half a second to generate a full strength mental image - that is how long it takes to turn a high-level inkling into a low level fully fleshed out perceptual image. But then the image fades equally fast because all the neurons involved habituate. They "tire" - because it is unnatural in the ecological setting to hold one image fixed in mind if it is not actually functioning as a perceptual expectation about something just about to happen.

    Imagery is for predicting the immediate future - what the world is going to be like in the next split second. That is why it feels like an impossible attentional effort to hold the one picture in your mind for much longer without a refresh of some kind, or the switch to a different but related view.
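
    (For anyone who wants the gist as arithmetic, here is a toy simulation - the time constants are simply invented for illustration - of an image building towards full strength over about half a second and then fading as the responding neurons habituate:)

    ```python
    # Toy dynamics of an imagined percept (constants invented, not measured):
    # activation builds towards full strength, habituation tracks sustained
    # activity, and felt vividness is the activation gated by that fatigue.

    dt = 0.01          # time step, seconds
    tau_rise = 0.15    # assumed build-up time constant
    tau_hab = 0.80     # assumed habituation time constant

    a = h = 0.0
    for i in range(201):                 # simulate two seconds
        a += dt * (1.0 - a) / tau_rise   # activation climbs towards maximum
        h += dt * (a - h) / tau_hab      # the neurons "tire" with use
        if i % 25 == 0:
            print(f"t={i * dt:.2f}s  vividness={a * (1.0 - h):.2f}")
    ```

    Nothing hangs on the numbers. The point is just that a fast build-up plus a slower fatigue term gives you a percept that peaks and then slips away unless refreshed by a change of view.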

    Again, eidetics show that there is considerable neuro-variation. But everything here can be explained in terms of neuro-typical functionality - which is why philosophy of mind can be criticised for getting so easily carried away by the whole p-zombie and explanatory gap debate.

    The idea that there is a metaphysical dualistic divide - mind vs matter - can only flourish in a positive ignorance of the neuroscience. That is not to say we have some fully worked out scientific theory of the mind - I'm of course forever pointing out that current mainstream physicalism really needs to understand semiotics to be able to claim any level of completeness. But the ghost in the machine becomes pretty residual the more you understand the complexity of the "machine" (that is, how the mind/body is not a machine at all).
  • Does it all come down to faith in one's Metaphysical Position?
    But, as I understand it, they didn't so much challenge or try convince anyone else about anything, as much as just believe themselves that there were as many reasons to accept any position, as there were reasons to doubt it. So, they found comfort in not making any claims about any positions.anonymous66

    So as I say, proper scepticism would be about being able to recognise when the issue under discussion is vague or undecidable. The difference then is between whether that is so due to the question itself being "not even wrong", or due to a lack of sufficient facts.

    What do you think this idea of there being "as many reasons for as reasons against" actually spells out as a situation? It could be a sign of a third ontological position - a state of equilibrium balance. As I say, metaphysical arguments do logically throw up dichotomous or dialectical alternatives like "chance vs necessity". So inquiry into existence does often result in "both sides appearing right" for a good reason. The polar choices are the extremes or the limits of possibility. And then actuality is what exists in-between as their mixing. We ask which nature is - chance or necessity - and wind up with equal reason to believe both are in play. Nature is some equilibrium balance. And that is in fact what our metaphysics predicts (if we understand polar opposites as the complementary extremes of the possible, leaving the actual as what must fall on the spectrum in-between).

    So scepticism can find itself undecided when faced with a well-posed metaphysical choice. There is as much reason to believe in the one option as the other. But that then is evidence that both are true and fundamental - just as reasoning says they should be if they are the complementary bounds of what is even the possible.

    And actually, I do like Plato, especially the way that he portrays Socrates. Socrates doesn't seem to have a conclusion in mind, or have any agenda at all when he gets into conversations. Both parties may learn something, or the conversation just might end in confusion... but, it's an interesting journey nonetheless.anonymous66

    Lots of stuff is interesting in life. No harm in that. But we are only still talking about Plato/Socrates because their dialectical mode of reasoning proved foundational to modern thought. The dialogues didn't result in confusion but in sharply delineated alternatives that have been productive ever since.

    Well of course there is plenty of confusion - like not recognising that scepticism only works as the servant of hypothesis generation, or that metaphysical dichotomies are hitting the pay-dirt of finding the complementary divisions that then encompass all that is even "the possible".

    But metaphysics itself aims at rational clarity and has no point if it can't achieve that. Argument which has the goal of sowing confusion is called sophistry. Plato/Socrates certainly had something to say about those bastards. :)
  • Aphantasia and p-zombies
    Philosophers talk about whether p-zombies are metaphysically possible, but what a priori grounds do we have for ruling out the possibility that they're actual?The Great Whatever

    It's a good line of thought. But isn't it also the case that to be able to realise there is a gap in experience - like a lack of visual imagery - there must also be the counterfactual contrast ... which is having that image without having to make an effort at imagining?

    So the guy with aphantasia both knows he sees his girlfriend vividly when she stands in front of him, and then that contrasts with his efforts to visualise her. What he says is in fact quite detailed and counterfactually phrased:

    "When I think about my fiancee there is no image, but I am definitely thinking about her, I know today she has her hair up at the back, she's brunette. But I'm not describing an image I am looking at, I'm remembering features about her, that's the strangest thing and maybe that is a source of some regret."

    http://www.bbc.com/news/health-34039054

    A p-zombie would have to lack all such psychic contrast. So there would be nothing for its reasoning to latch on to.

    Aphantasia is therefore no more evidence for p-zombies than other irregularities of neurology, like blindness, dyslexia or other lacks which we don't treat as philosophically puzzling. We accept biological variation as causal of neuro-atypicality because there are no grounds to question that.

    And in fact that is the strong argument against p-zombies. Can we really imagine a neuro-typical body that is doing all that "information processing" and it not feeling like something? What actually warrants that belief, apart from an ability to ignore facts like aphantasia - indeed another of the many demonstrations of the exact correlation between biological structure and experiential reports?

    Aphantasia is actually fine-grain evidence for the non-existence of p-zombies as it takes the causal connection between neurology and phenomenology to another level - at least it will once we can check theories that it is all to do with the functional top-down connections needed to drive the primary visual cortex to highly vivid states of perceptual impression, or whatever the case turns out to be.

    So p-zombie theory has to posit that a neurotypical person could completely lack neurotypical phenomenology. Anything less than that is a cop-out. And a physicalist theory only has to admit that it doesn't have a complete account of phenomenology. It already stands on the ground of having a partial physicalist account in that no-one sees a problem in attributing blindness to a lack of the relevant equipment, or aphantasia now being due to some similar plausible and demonstrable neural lack. Aphantasia becomes simply, at worst, a promise of physicalist explanation still to be cashed out.

    Of course, a physicalist can and should also admit that physicalism has its limits. It will remain radically incomplete - there is an epistemic hard problem - once it gets to the point of being unable to raise theoretical counterfactuals. We can't know the unknown unknowns - even if we can suspect they lurk. So what would it be like to experience grue, etc, etc. Explanation generally loses its purchase when we start trying to tackle differences that don't make a difference. And that is true of physicalism also as an explanatory enterprise.

    However that is also not a big issue in practice. It certainly isn't any kind of argument for a positive belief in p-zombies. Just as aphantasia is precisely the kind of further fine-grain counterfactuality that argues in favour of physicalism rather than against it.

    (But I'm entertained by the point that Dennett might simply be neuroatypical and that might biologically explain the vigour of some of his beliefs. And neuroscience would say we are all atypical anyway - much more phenomenologically unalike than we realise. That in itself ought to be a fact that informs philosophy of mind - likely a very good paper someone ought to write, if it hasn't been already.)
  • Does it all come down to faith in one's Metaphysical Position?
    Metaphysics done right would seek out the most rational or justified beliefs. And then these would be hypotheses or axioms to put to the test. We would judge their validity by how well they function as beliefs in pursuit of some lived purpose or other.

    So it would be the usual pragmatic justification of any belief. We can figure out what logically seems to make sense as a theory - or at least what stands as a clear enough alternative that it makes a difference whether we believe it or not. Then we can pay attention to the consequences of acting on said belief.

    This means that a large class of traditional metaphysical dilemmas may indeed be classed as theories that are "not even wrong". They don't actually frame alternatives that make a difference. So is there a god, is there freewill, is there a meaning to life? Really, unless you are advancing a position which would make some actual difference if you believed it, it only sounds like you are being philosophical to talk about the "what if". A belief with zero consequences is not really "a belief" in any strong sense.

    This is why skepticism is in the end unsatisfying. You can likewise claim to disbelieve anything. There is always "some grounds" for denying any positive claim. But such disbelief has to have consequences too to be a difference that makes a difference. And also, pointing out that something can be legitimately disbelieved is only to confirm that the original belief was one of consequence. It was already doing the right thing in being framed in a way that could be found wrong. So the skeptic's position - if it has any actual value - is already incorporated (at least implicitly) in what it seeks to challenge.

    The moon could be made of green cheese. But that skeptical possibility is already subsumed in the belief it is made of rock. Likewise views on god, freewill or life's meaning are properly cashed out in metaphysics only to the degree there is some positive claim that then admits to the skeptics test.

    So yes, you have a problem if your beliefs about freewill or whatever are only framed in a vague and untestable fashion. Faith then seems the only option. But in fact what should be questioned is whether you really "have a belief" rather than simply some meaningless formula of words - an idea that is, in philosophical reality, not even wrong.

    Metaphysics did start out in this rigorous fashion. It posed concrete alternatives, asking questions like whether existence was ruled by chance or necessity, flux or stasis, matter or form, etc, etc. And the whole of science arose from this rational clarification of the options. It was a really powerful exercise in pure thought.

    But the big stuff was sorted in a few hundred years in Ancient Greece. So what is left now is mostly the need to learn this way of thinking more than to attempt to solve a lot of unsolved metaphysical riddles. And worst of all is to try to treat ideas without concrete consequences as real philosophical inquiries. Metaphysics didn't become central to modern thought by worrying about beliefs with no effects.
  • What's the difference between opposite and negative?
    Intuitively, I'd say negation is generally predicated of the 'same' thing, as in 'negative charge' and 'positive charge', or 'positive spin' and 'negative spin', while two things which are opposite are so with respect to some third term, as in 'green and red are opposites on the color wheel'.StreetlightX

    I'd agree pretty much. Negation speaks to opposites that are reflexive or easily reversible because they are of the same scale. So spin or charge are symmetries that can be broken in two directions. And just as easily unbroken by another reversal. Or equivalently, you can imagine creating a hole via a taking-away, which can then function as the negation - as in the electron hole of Dirac's sea.

    But then the other sense of opposite is an asymmetry - where the relationship is inverse, reciprocal, dichotomistic. A breaking of a symmetry across scale. So this is where you get the metaphysically general opposites, like discrete vs continuous or chance vs necessity, where the two poles are as unalike as it is possible to be. There is a thirdness involved - in that the two poles are mutually exclusive, but also jointly exhaustive. That is, together they negate all other possibility, so taking the negating to a whole other universal level.

    So opposition suggests a dyadic relation. But simple or particular states of opposition are so easily reversed that the stable or substantial thing seems to be the third thing of the symmetry they break. And then metaphysical oppositions seem irreducible and undeniable because they also speak to the third thing of the symmetry they break, but now in terms of excluding it as an unstable ground of possibility.
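
    (If it helps, the two senses can be compressed into symbols - my own restatement, not any standard notation:)

    ```latex
    % Same-scale negation: an involution, trivially reversed by reapplying it.
    \neg(\neg x) = x

    % Dichotomous opposition: poles A and B that are mutually exclusive
    % yet jointly exhaustive of the space of possibility U.
    A \cap B = \varnothing \qquad A \cup B = U
    ```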
  • How did living organisms come to be?
    I accept real non-spatial existence, so my claim is that there are real things, demonstrated by physics to have real existence, which cannot be represented as having a spatial form. These things are non-spatial, non-dimensional.Metaphysician Undercover

    What things exactly? And what is their relevance to this discussion about modelling particles as located objects with no internal structure?
  • How did living organisms come to be?
    The point I was making, which started this discussion is that we have no way to establish correspondence between the model and the reality, because the things are modeled as non-dimensional, and we have no way of conceiving of non-dimensional existence. If your argument is that the model doesn't necessarily represent the reality, then you are arguing that we should accept fiction.Metaphysician Undercover

    Read the wiki page. What physics means is that you can treat an elementary particle as a mathematical point because that is a model of located material being without internal structure.

    You can treat the Earth as a mathematical point too - a centre of gravity. And it works so long as you are far enough away not to be bothered by the Earth's material variations - the effect that mountain ranges would have for instance (coincidentally, Peirce's specialist area in science).
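
    (To make the point-mass idealisation concrete - a minimal sketch using the textbook values for G and the Earth's mass and radius:)

    ```python
    # The Earth treated as a structureless point: all its mass lumped at
    # the centre of gravity. Good far away; it starts to miss the local
    # structure - mountain ranges, density variations - as you get close.

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24   # mass of the Earth, kg
    R_EARTH = 6.371e6    # mean radius of the Earth, m

    def g_point(r):
        """Gravitational acceleration from the point-mass model."""
        return G * M_EARTH / r**2

    print(f"{g_point(R_EARTH):.2f} m/s^2")          # ~9.82 at the surface
    print(f"{g_point(R_EARTH + 4.0e5):.2f} m/s^2")  # ~8.69 at ISS altitude
    ```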

    Likewise the standard model can call an electron a point. But then string theory or braid theory might discover an internal structure that shows the pointiness to be merely an effective theory of the real deal.

    So as usual, you are trying to insist on your lay interpretation of what is being said and not taking in the subtleties of the way science employs its metaphysics.
  • How did living organisms come to be?
    OK. You still don't get how it works. Cool.
  • Bringing reductionism home
    Sadly, there are an infinite number of wavefunction world branches in which that is guaranteed to be the case. The equation has no other solution but to say yes to the reality of every outcome, no matter how improbable. ;)