Comments

  • What is life?
    The kind of computation to which I refer isn't just basic computation; "deep learning" is an example of the type of computation that I would compare to life because the organizational structure of it's data points (a structure which emerges as the machine learns on it's own) is well beyond the complexity threshold of appearing to operate non-deterministically.VagabondSpectre

    So my argument is that what is essential to a semiotic definition of life is that it is information which seeks out material instability. It needs chemical structure poised on a knife edge, as that is what then allows the information to act as the determining influence. That is the trick. Information can be the immaterial part of an organism that gives the hardware just enough of a material nudge to tip it in the desired directions.

    So yes, neural computer architectures try to simulate that. They apply some universal learning algorithm to a data set. With reinforcement, they can erase response variety and arrive at the shortest path to achieve some goal - like winning a computer game. There is something life-like there.
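
    As a minimal sketch of what that erasure of response variety looks like in practice - a toy value-learning loop on a hypothetical five-cell corridor, where the environment, reward and learning parameters are all made-up choices for illustration, not anything DeepMind actually runs:

```python
# Toy Q-learning on a five-cell corridor: start at cell 0, "win" at cell 4.
# Environment and parameters are assumptions made purely for this sketch.
import random

N_STATES = 5          # corridor cells 0..4; reaching cell 4 is the goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Pick the currently best-valued action, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(s, a):
    nxt = min(max(s + a, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # Early on behaviour is varied; reinforcement gradually erases that
        # variety, leaving only the shortest habit that reaches the goal.
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, r = step(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, b)] for b in ACTIONS) - Q[(s, a)])
        s = nxt

# The learned policy collapses to "always step right" - variety eliminated.
print({s: greedy(s) for s in range(N_STATES - 1)})
```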

    But note that you then think that becoming more life-like would involve a scaling up - adding more information processing to head in the direction of becoming actually conscious or intelligent.

    I instead would be looking to scale down.

    Your DeepMind is still a simulation running on stable hardware and thus merely software insulated from the real world of entropic material processes. Sure, we can imagine the simulation being coupled to the world by some system of actuators or mechanical linkages. The program could output a signal - like "fire the missile". That could flick a switch that triggers the action. But it is essential that the hardware doing this job is utterly deterministic and not at all unstable. Who wants nukes hooked up to wobbly switches?

    So while DeepMind might build a simulation of a learning system that feeds off the elimination of variety - and thus has to deal with its own artificial instability, the catastrophic forgetting problem - it still depends on deterministic devices outside its control to interface with the world. A different bunch of engineers is responsible for fabricating the reliable actuators that can take an output and turn it into the utterly reliable trip of the switch. I mean it makes no difference to the DeepMind computation whether anything actually happens after it has output its signal. A physical malfunction of the switch is not its responsibility as some bunch of humans built that part of the total system. DeepMind hasn't got the wits to fix hardware-level faults.

    But for life/mind, the organism is sensitive to its grounding materiality all the way down to the quasi-classical nanoscale. At the level of synapses and dendrites, it is organic. The equilibrium balance between structural breaking down vs structural re-constructing is a dynamic influenced by the global state of the system. If I pay attention to a dancing dot on a screen, molecular-level stuff is getting tipped in one direction or another. The switch itself is alive and constantly having to be remade, and thus constantly also in a state of anticipatory learning. The shape some membrane or cytoskeletal organisation was in a moment ago is either going to continue to be pretty much still right or be competitively selected for a change.

    So my argument is that you are looking in the wrong direction for seeking a convergence of the artificial with the real. Yes, more computational resources would be necessary to start to match the informational complexity of brains. But that isn't what convergence looks like. Instead, the technology has to be pushed in the other direction - down to the level where any reliance on outside help for hardware stability has been squeezed out of the picture and replaced by an organismic self-reliance in directing the transient material flows on which life - as dissipative structure - depends.

    Life and mind must be able to live in the world as information regulating material instability for some remembered purpose. It has to be able to stand on its own two feet entirely to qualify as life (as I said about a virus).

    But that is not to say that DeepMind and neural network architectures aren't a significant advance as technology. Simulated minds could be very useful as devices we insert into tasks we want to automate. And perhaps you could argue that future AI will be a new form of life - one that starts at some higher level of semiosis where the entropic and material conditions are quite different in being engineered to be stable, rather than being foundationally unstable.

    So yes, there may be "life" beyond life if humans create the right hardware conditions by their arbitrary choice. But here I am concerned to make clear exactly what is involved in such a step.

    I do understand the non-linearity of development in complex and chaotic systems. Events may still be pre-determined but they may not predicted in advance because each sequential material state in the system contains irreducible complexity, so it must be played out or simulated to actually see what happens. (like solving an overly-large equation piece by piece because it cannot be simplified).VagabondSpectre

    It still needs to be remembered that mathematical chaos is a model. So we shouldn't base metaphysical conclusions on a model without taking account of how the model radically simplifies the world - by removing, for instance, its instabilities or indeterminacies.

    So a reductionist takes a model that can construct "chaos" deterministically at face value. It does appear to capture much about how the world works ... so long as the view is grainy or placed at a sufficient distance in terms of dynamical scale. If you average, you can pretend that spontaneous fluctuations have been turned into some steady-state blur of action. So while analytic techniques fail (the real world is still a mess of chance or indeterminism), numeric techniques just take the assumed average and get on with the computation.

    So chaos modelling is about eliminating actual complexity - of the semiotic kind - and replacing it with mere complicatedness. The system in question is granted known boundary conditions and some set of "typical" initial conditions is assumed. With the simulated world thus sealed at both ends, it becomes safe for calculation. All you need is enough hardware to run the simulation at the desired level of detail.
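
    To make that concrete, here is a minimal sketch of such a sealed-at-both-ends simulation. The logistic map is fully deterministic, yet given an assumed "typical" initial condition, two runs that start a hair apart soon disagree completely. The parameter value is just the standard textbook choice, nothing tied to any real system:

```python
# Deterministic chaos in the logistic map x -> r*x*(1-x).
# The boundary condition (the map, with r = 4.0) and the "typical" initial
# conditions are simply assumed - that is what seals the model for calculation.
r = 4.0
x1, x2 = 0.400000, 0.400001   # two starting points differing by one part in a million

for n in range(60):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if n % 10 == 9:
        print(f"step {n + 1:2d}: x1={x1:.6f}  x2={x2:.6f}  gap={abs(x1 - x2):.6f}")

# Within a few dozen steps the two trajectories are uncorrelated, even though
# every step was computed by the same fixed deterministic rule.
```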

    Machines which we build using mostly two-state parts with well defined effects are extraordinarily simple compared to those which seem to emerge on their own (using dynamic parts such as inter-connected memory cells with many states or strings of pairs of molecules which exhibit many different behaviors depending on their order). Even while I recognize the limits on comprehending such machines using a reductionist approach, I cannot help but assume these limitations are primarily owing to the strength of the human mind.VagabondSpectre

    This is in fact the big surprise from modern biophysics - at the ground level, life is far more a bunch of machinery than we ever expected. Fifty years ago, cells seemed like bags of chemical soup into which genes threw enzymes to make reactions go in desired directions. Now it is being discovered that there are troops of transport molecules that drag stuff about by walking it along cytoskeletal threads. Membranes are full of mechanical pumps. ATP - the universal energy source - is charged up by being cranked through a rotating mill.

    So in that sense, life is mechanism all the way down. It is far less some soup of chemistry than we expected. Every chemical reaction is informationally regulated.

    But the flip side of that is that this then means life is seeking out material instability at its foundational scale - as only the unstable could be thus regulated by informational mechanism.

    If you are at all interested, Peter Hoffmann's Life's Ratchet is a brilliant read on the subject. Nick Lane has written a number of good books too.

    So there are two things here. You are talking about the modelling of informational-level complexity - the kind of intricate patterns that can be woven by some network of switches regulated by some set of rules. And there is a ton of fun mathematics that derives from that, from cellular automata and Ising models, to all the self-organising synchrony and neural network stuff. However that all depends on switches that are already behaving like switches - ie: they are deterministic and they don't add to the total complexity by "having a mind of their own".
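
    For what it is worth, here is how little it takes to weave that kind of intricate pattern from a network of obedient switches - an elementary cellular automaton in a few lines (the rule number, width and run length are arbitrary choices for the sketch). None of the complexity comes from the hardware "having a mind of its own":

```python
# Elementary cellular automaton (Wolfram rule 110): deterministic switches
# plus a fixed update rule, yet an intricate pattern emerges. The width and
# step count are arbitrary choices for this sketch.
RULE = 110
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1   # a single "on" switch to seed the pattern

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```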

    But I am talking about life and mind as a semiotic process where the hardware isn't deterministic. In fact, it mustn't be deterministic if that determinism is what the information processing side of the equation is hoping to supply.

    And where are our pretty software models to simulate that kind of world? Or rather, where are our actual "machines" that implement that semiotic notion as some actual device? In technological terms, we can do a fantastic amount of things at the software simulation level. But can we do anything life-like or mind-like at the self-assembling hardware actuality level?

    Hell no. It's only been about 10 years that biology has even begun to grasp that this is such an issue.
  • What is life?
    I think that the biophysical discoveries of the past 15 years - the new and very unexpected detail we have about the molecular machinery of cells - really explains how life and computation are deeply different.

    To sum that up, the reductionist view you just expressed hinges on the belief that the physics or hardware of the system is a collection of stable parts. Even if we are talking about circuits that can be switched, they stay in whatever state they were last left in. You can build up a hierarchy of complexity - such as the layers of microcode and instruction sets - because the hardware operates deterministically. It is fixed, which allows the software to flex. The hardware can support any programme without being the slightest bit bothered by anything the software is doing.

    But biology is different in that life depends on physical instability. Counter-intuitively, life seeks out physical processes that are critical, or what used to be called at the edge of chaos. So if you take any protein or cellular component (apart from DNA with its unusual inertness), as a molecule it will be always on the edge of falling apart ... and then reforming. It will disassociate and get back together. The chemical milieu is adjusted so that the structural components are poised on that unstable edge.

    And the big trick is that the cell can then use its genetic information to give the unstable chemistry just enough of a nudge so the parts rebuild themselves slightly more than they fall apart. This is the semiotic bit. Life is information that sends the signal to hang together. And it is the resulting flux of energy through the system - the dissipative flux - that keeps the componentry rebuilding.

    So computers have stable hardware that the software can forget about and just crunch away. If you are equating the program with intelligent action, it is all happening in an entirely different world. That is why it needs biological creatures - us - to write the programmes and understand what they might be saying about the world. To the programmes, the world is immaterial. They never have to give a moment's thought to stopping the system of switches falling apart because they are not being fed by a flux of entropy.

    Life is then information in control of radical physical instability. That is what it thrives on - physics that needs to be pointed in a direction by a sign, the molecules that function as messages. It has to be that way as cellular components that were stable would not respond to the tiny nudges that signals can deliver.

    This leads into the other counter-intuitive aspect of life and mind - the desire for a general reduction in actual information in a living system.

    Again, with computation, more data, more detail, seems like a good thing. As you say, to model a physical process, the level of detail we need seems overwhelming. We feel handicapped because to get it right, we have to represent every atom, every event, every possibility. In principle, universal computation could do that, given infinite resources. So that is a comfort. But in practice, we worry that our representations are pretty sparse. So we can make machines that are somewhat alive, or somewhat intelligent. However to complete the job, we would have to keep adding who knows how many bits.

    The point is that computation creates the expectation that more is better. However when it comes to cellular control over falling apart componentry, semiotics means that the need is to reduce and simplify. The organism wants to be organised by the simplest system of signals possible. So information needs to be erased. Learning is all about forgetting - reducing what needs to be known to get things done to the simplest habits or automatic routines.

    This then connects to the third way biology is not like computation - and that is the way life and mind are forward modelling systems. Anticipatory in their processes. So a computer is input to output. Data arrives, gets crunched, and produces an output. But brains guess their input so as to be able to ignore what happens when it happens. That way anything surprising or novel is what will automatically pop out. In the same way, the genes are a memory that anticipates the world the organism will find itself in. Of course the genes only get it 99% right. Selection then acts to erase those individuals with faulty information. The variety is reduced so the gene pool gets better at anticipation.
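
    As a toy illustration of that "guess the input so only surprise pops out" logic - just a sketch, with a made-up signal and the crudest possible running-average predictor:

```python
# Forward modelling in miniature: predict the next input and report only the
# prediction errors. The signal and the predictor are made up for illustration.
signal = [10.0] * 20 + [10.2, 9.9, 10.1] * 5 + [25.0] + [10.0] * 10   # one novel spike

prediction = signal[0]
for t, observed in enumerate(signal[1:], start=1):
    error = observed - prediction
    if abs(error) > 2.0:                 # only the surprising pops out
        print(f"t={t}: surprise! expected {prediction:.1f}, got {observed:.1f}")
    prediction += 0.1 * error            # otherwise quietly update the anticipation
# Everything anticipated is ignored; the single novel event is what gets flagged.
```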

    So life is unlike the reductionist notion of machinery in seeking out unstable componentry (as that gives a system of signals something to control). And at the "software" or informational level, the goal is to create the simplest possible control routines. Information needs to be erased so that signal can be distinguished from noise. It is just the same as when we draw maps. The simpler the better. Just a few lines and critical landmarks to stand for the complexity of the world.
  • What is life?
    But what molecular machinery does a virus have? It has no ribosomes or mitochondria or any of the other gear to construct an organismic economy. It doesn't even have the genetic information to code for that machinery.

    So I am not too fussed about whether to define a virus as alive. It is OK that it is on the margins of the definition in being a genetic fragment that can hijack proper organismic complexity. Problems only arise in thinking that the simplicity of a virus might make it a stepping stone precursor that marks the evolutionary path from the abiotic to the biotic. I mean you wouldn't treat cancer - or an amputated leg - as a simpler lifeform.

    Then you are self-sustaining in the "closed for causality" fashion I specified. You have your own respiratory machinery for burning or oxidising electron sources. You don't hijack the respiratory machinery of plants. You take that intricate living machinery and metabolically burn it. It's usually pretty dead by the time it gets into your stomach. A virus needs a living host. You just need chemical bonds you can crack for their energy.

    A computer virus is an analogy for a real virus, but computers - of the regular Turing Machine kind - are nothing like life. As I said, they lack the qualities that define an organism. And thinking in terms of organisms does usefully sharpen up what we - or biologists - mean by life.

    Life (like mind) still has echoes of a vitalistic ontology - the presence of some generic spirit that infects the flesh to make it reactive. Talking about organisms ensures that structural criteria - like being closed for causality in terms of embodying a purpose with efficient means - are top of mind. We are paying attention to the process of how it is done rather than treating life as some vague reactive matter.
  • What is life?
    I'm not asking you to define life, I'm asking you to give me an example of anything which could plausibly be agreed to as life which also happens to be uncomplicated. I don't have a good answer as to why life needs to be complex, it just is. Maybe because simple things never do anything intelligent. I don't know, the answer is complex.VagabondSpectre

    A biologist would define life semiotically. That is, a line is crossed when something physical, like a molecule, can function as something informational, like a message.

    At the simplest level, that is a bit of mechanism like a switch. A recipe read off a strand of DNA gets made into a molecular message that then changes the conformation of a protein complex and leads to some chemical reaction taking place.

    Of course, we then want to think of life in terms of individual organisms - systems that are closed circuits of signals. There has to be some little sealed world of messages being exchanged that thus gives us a sense of there being a properly located state of purpose. An organism is a collection of semiotic machinery talking to itself in a way that makes for a definite inside and outside.

    So what that says is even the simplest semiotics already has to make that leap to a holistic complexity. It becomes definitional of an organism that it has its particular own purpose, and thus it individuates itself in meaningful fashion from the wider world. A network of switches is the minimal qualification. And that is why a virus seems troubling. We can't really talk about it as an "it" because it is not self-sustaining in that minimal fashion. It is a bare message that hijacks other machinery.

    Computers then fail the definition for life to the degree that they are not organismic. Do they have their own purpose for being - one individuated from their makers? Do they regulate their own physics through their messages? (Clearly not with a normal computer which is designed so the software lives in a different world to its hardware.)

    So the semiotic or organismic view defines life and mind fairly clearly by that boundary - the moment a molecule becomes a message. But for that to happen, a messaging network with some closed local purpose must already be in place. To be a sign implies there is some existent habit of interpretation. So already there is irreducible complexity.

    This can seem a troubling chicken and egg situation. On the other hand, it does allow one to treat life as mind-like and intelligent or anticipatory down to the most primitive level. When biologists talk about information processing, they really do mean something that is purposeful and meaningful to an organism itself.
  • Aphantasia and p-zombies
    We seem at cross purposes. I wasn't talking about the ability to imagine the sounds, sights or feelings of language as such. But sure, braille is another possibility. You could have a feeling of bumps under your fingertips as the equivalent of an inner voice.
  • Aphantasia and p-zombies
    I'm sure Hellen Keller was able to conceive of things without utilizing visual or auditory signs.Marchesk

    In her case, it would have helped that she was hearing and sighted until she was two. And before she was taught a finger-spelling system by Annie Sullivan, she was using her own made-up system of signs, like a shiver for ice-cream and miming putting on glasses for her father. So there was a neural basis established for both language and those conceptual modalities.
  • Aphantasia and p-zombies
    Um yeah. I think I will file that under "more fellating".
  • Aphantasia and p-zombies
    But the proper response wouldn't be to confirm or deny the possibility of visualizing an abstract triangle, but to say that dealing with abstract triangles involves a capacity different than 'visualization'. As in: you can't apply the sense-impression model here.csalisbury

    As usual, the conventional thing is to try to deal with a dichotomy by reducing it to one or the other of two options. So in labelling the mind, either we are dealing with one common faculty or two very different ones.

    Yet my point is that a dichotomy - a symmetry breaking that leads to hierarchical organisation - is in fact the proper natural option. The brain is organised by this logic. And so that is what our language would most fruitfully capture. Rather than fighting the usual lumper vs splitter battles, we should be amazed if any "mental faculty" wasn't divided in this mutually complementary fashion. That is the only way anything could exist in the first place. The idea of one hand clapping makes no sense.

    So yes, I have to use conventional language to communicate here. I have to talk about abstract vs concrete, or conceptual vs perceptual - the terms of art of philosophy. But I don't actually think about brain architecture in the literally divided fashion this implies. I would prefer systems jargon like talk of global constraints and local freedoms. But I know also where using that outsider jargon gets me. :)

    Anyway, the whole sense-impression deal gets you into a computational/representational model of mind that my ecological and anticipatory modelling approach rejects. Which makes my understanding of conception very different too. And it is a fact both phenomenological and theoretical to me that abstraction in thought involves the relaxation of constraints.

    So a proper mathematical conception of triangleness does shed specific details and yet still leaves behind a "Cheshire Cat's grin" as its "true gist" that I can then manipulate in ways that - under a brain scanner - will show up as concrete activity in expected places. Or in fact - as it ceases to be an effort with practice - the activation involved shrinks to the point where it seems those parts of the brain aren't even doing anything much.

    Think of expert chess players who can see all the dangers and opportunities with a glance at the board. They don't have to work stepwise through a succession of future moves - like a computer. The general patterns are immediately obvious. They can focus in on some narrower gameplay and visualise that in the level of detail required.
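
    For contrast, here is the stepwise machine approach in miniature - a bare minimax search over a made-up game tree (the tree and its payoffs are arbitrary, purely for the sketch). The program has to grind through every future branch; there is no "glance" that takes in the general pattern:

```python
# Bare minimax over a toy game tree. Leaves hold payoffs for the maximising
# player; the tree and its values are arbitrary, chosen only to show the
# stepwise look-ahead a chess engine relies on.
def minimax(node, maximising=True):
    if isinstance(node, (int, float)):    # a leaf: just return its payoff
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

game_tree = [          # maximiser picks a branch, then the minimiser replies
    [3, [5, 1]],       # branch A: best the minimiser allows is 3
    [[6, 2], 4],       # branch B: best the minimiser allows is 4
]
print(minimax(game_tree))   # -> 4, found only by grinding through every line
```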

    And all mental activity is like that expert glance. A rough gist is good enough to get started - throw down the preliminary detail-light sketch. Then flesh that out with detail as required. Add in the information that acts as a further constraint on the various uncertainties. It's standard engineering - hierarchical decomposition from broad intention to exact model.

    If we start to build computers that think the same way, then we might need to get worried.
  • Aphantasia and p-zombies
    I bow to your expertise. God knows why I ever had my own neuroscience column in a major journal.
  • Does it all come down to faith in one's Metaphysical Position?
    None of Plato's dialogues ended in confusion? Not even Theaetetus? What of Aporia?anonymous66

    Well, the Theaetetus is simply wrong in treating rationality as the memory of eternal ideas. Although it could then be regarded as essentially right if you understand Plato's argument less literally as making an early ontic structural argument. But either way, I don't think it is confused. It made a thesis in concrete enough fashion to become a long-lived metaphysical notion. You can understand it well enough to dispute it.

    And likewise, demonstrations of aporia are a positively instructive fact of epistemology. They show how ill-founded many common beliefs are - because they are essentially vague ideas and so fall into the class of "not even wrong". The confusion lies not in Plato's dialectics but in the weak arguments that have to be got past.
  • Aphantasia and p-zombies
    Not according to what I've read about him, or his brain. Why disagree?Wosret

    OK, granted there is the suggestion his Broca's area was "funny" in a way that would explain a developmental delay in expressive speech. But what I mean - and why I disagree - is that Einstein was pretty articulate as an adult. And it would be just as plausible that part of being very smart when young is that it can inhibit attempts to speak because the ideas are bigger than the capacity to put them into words.

    So see for example this summary of his alleged language difficulties, which both still allows for some possible neuro-structural reason for delayed expressiveness, and yet gives evidence for top-of-the-class level performance - http://www.albert-einstein.org/article_handicap.html
  • Aphantasia and p-zombies
    As I mentioned with Einstein, he had a big visual cortex, and wasn't as great with language.Wosret

    He was great with language. Just slow to start speaking. And the suggestion is that he had a "well developed" inferior parietal lobe - which chimes well with the idea that this is a high-level area for spatial imagery. So the mathematical ability to manipulate rather abstracted geometric directions in your head.

    The opposite would be a poor ability to spatially manipulate. And that seems borne out by the fairly recent recognition of dyscalculia as an academic handicap. People can't learn to tell the time or master maths easily, and there is some brain scan evidence to link that to the same part of the brain.

    So talking of visual imagination, there would be two broad divisions with their own variation to start with. You have the parietal "where" pathway for all aspects of imagining spatial relations. And then the temporal "what" pathway where object identification takes place and so also the generation of concrete imagery of things. A weakness in one could be associated with a strength in the other. My daughter has dyscalculia and yet has photographic level ability as an artist.

    Wasn't the point about visualizing abstract triangles that you'd have to visualize a particular triangle (with certan angles etc) ?csalisbury

    This is another relevant dichotomy of brain design. Abstraction is about being able to forget the concrete details to extract the essence. So if you listen to people like Einstein describe their creative process, they do stress that it does feel like an imageless mental manipulation of pure possibility - a kind of juggling of shapes and relations which aren't specific.

    So in one sense, this is a powerful imaginative faculty - to be able to think in a concrete fashion about the juggling of generalities. Instead of picturing a triangle, you have to be able to picture "triangleness". And you have to suppress or shed the literal detail to get there. You have to be able to vividly ignore as much as vividly imagine, you could say.

    Think also of tip-of-the-tongue experiences, or the moment that you know you have cracked some puzzle before you actually spit out the full answer. The brain is divided between its high level abstract conceptions and its low level fleshed out concrete impressions. So with just that "first inkling" where it all clicks into place in an abductive way - one we already know is bound to work out as the solution - most of the intellectual work is already done.

    This is why thought can often seem wordless and imageless - we already know where that snap of connections was going to lead and can move on before it gets said in the inner voice, or pictured in the visual cortex. Why dwell on experiencing in an impressionistic way what we feel conceptually secure about?

    Though of course, actually slowing down to flesh out thought and experience it that way - as when typing it out as a post and wondering if it still makes as much sense - is pretty important for our thinking to be more than rapid habitual shoot-from-the-hip response.

    Thought and consciousness are wonderfully various activities. And again, that complexity maps to known neuroscience and even rational (dialectical) principles.

    An unimportant aside but that's how I first got into dichotomies and hierarchies. It just ended up screaming at you from neuroanatomy. It is the logic that shapes the architecture of the brain from the first neuron and its receptive field design.
  • Aphantasia and p-zombies
    This hypnagogic imagery has the usual pedestrian neurological explanation. Falling asleep involves a gating of outside sensation at the level of the brainstem. Suddenly starved of a flow of stimulation, the brain tries to imagine the world that has just gone missing. Hence the common experience of a sudden sense of falling - noticing the absence of the expected sensations of being a body weighed down by gravity in the usual way.

    Also part of falling asleep is the internal disconnection that does away with an integrated state of attention feeding a digested view of the world into short-term, then long-term memory. So a shut down of higher executive functions. That is why the dream imagery bubbling up is a series of fragmentary and loosely associative impressions coalescing.

    In actual REM dreams, we are physiologically aroused enough in terms of our habits of executive function to try and chase a meaning. There is an inner-voice attempt to narratise and give the usual discursive shape to our flow of experience.

    But in hypnagogia - that specific instant of falling asleep - there is just the bare dream imagery as the narrative function has to let go of the day. So - once you are primed for its existence and have had some practice at reawakening enough to catch it and fix it retrospectively - it has an even greater naked intensity than partially narratised REM dreams.

    So as for waking powers of visualisation, this always has to compete with a flow of incoming sensation. It is thus more the other way round. It is a conscious attentional effort to conjure up such imagery - the kind of narrative daydreams we might entertain ourselves with. But then you can do that even when driving your car in busy traffic - so long as the world is predictable enough to hand that off to your subconscious or habit-level brain to deal with (the basal ganglia-level motor control) while you dwell in private fantasies and mental imagery of "elsewhere".

    There is plenty of other neuroscience to explain the phenomenology. It takes about half a second to generate a full strength mental image - that is how long it takes to turn a high-level inkling into a low level fully fleshed out perceptual image. But then the image fades equally fast because all the neurons involved habituate. They "tire" - because it is unnatural in the ecological setting to hold one image fixed in mind if it is not actually functioning as a perceptual expectation about something just about to happen.

    Imagery is for predicting the immediate future - what the world is going to be like in the next split second. That is why it feels like an impossible attentional effort to hold the one picture in your mind for much longer without a refresh of some kind, or the switch to a different but related view.

    Again, eidetics show that there is considerable neuro-variation. But everything here can be explained in terms of neuro-typical functionality - which is why philosophy of mind can be criticised for getting so easily carried away by the whole p-zombie and explanatory gap debate.

    The idea that there is a metaphysical dualistic divide - mind vs matter - can only flourish in a positive ignorance of the neuroscience. That is not to say we have some fully worked out scientific theory of the mind - I'm of course forever pointing out that current mainstream physicalism really needs to understand semiotics to be able to claim any level of completeness. But the ghost in the machine becomes pretty residual the more you understand the complexity of the "machine" (that is, how the mind/body is not a machine at all).
  • Does it all come down to faith in one's Metaphysical Position?
    But, as I understand it, they didn't so much challenge or try convince anyone else about anything, as much as just believe themselves that there were as many reasons to accept any position, as there were reasons to doubt it. So, they found comfort in not making any claims about any positions.anonymous66

    So as I say, proper scepticism would be about being able to recognise when the issue under discussion is vague or undecidable. The difference then is between whether that is so due to the question itself being "not even wrong", or due to a lack of sufficient facts.

    What do you think this idea of there being "as many reasons for as reasons against" actually spells out as a situation? It could be a sign of a third ontological position - a state of equilibrium balance. As I say, metaphysical arguments do logically throw up dichotomous or dialectical alternatives like "chance vs necessity". So inquiry into existence does often result in "both sides appearing right" for a good reason. The polar choices are the extremes or the limits of possibility. And then actuality is what exists in-between as their mixing. We ask which nature is - chance or necessity - and wind up with equal reason to believe both are in play. Nature is some equilibrium balance. And that is in fact what our metaphysics predicts (if we understand polar opposites as the complementary extremes of the possible, leaving the actual as what must fall on the spectrum in-between).

    So scepticism can find itself undecided when faced with a well-posed metaphysical choice. There is as much reason to believe in the one option as the other. But that then is evidence that both are true and fundamental - just as reasoning says they should be if they are the complementary bounds of what is even the possible.

    And actually, I do like Plato, especially the way that he portrays Socrates. Socrates doesn't seem to have a conclusion in mind, or have any agenda at all when he gets into conversations. Both parties may learn something, or the conversation just might end in confusion... but, it's an interesting journey nonetheless.anonymous66

    Lots of stuff is interesting in life. No harm in that. But we are only still talking about Plato/Socrates because their dialectical mode of reasoning proved foundational of modern thought. The dialogues didn't result in confusion but sharply delineated alternatives that have been productive ever since.

    Well of course there is plenty of confusion - like not recognising scepticism only works as the servant of hypothesis generation, or that metaphysical dichotomies are hitting the pay-dirt of finding the complementary divisions that then encompass all that is even "the possible".

    But metaphysics itself aims at rational clarity and has no point if it can't achieve that. Argument which has the goal of sowing confusion is called sophistry. Plato/Socrates certainly had something to say about those bastards. :)
  • Aphantasia and p-zombies
    Philosophers talk about whether p-zombies are metaphysically possible, but what a priori grounds do we have for ruling out the possibility that they're actual?The Great Whatever

    It's a good line of thought. But isn't it also the case that to be able to realise there is a gap in experience - like a lack of visual imagery - there must also be the counterfactual contrast ... which is having that image without having to make an effort at imagining?

    So the guy with aphantasia both knows he sees his girlfriend vividly when she stands in front of him, and then that contrasts with his efforts to visualise her. What he says is in fact quite detailed and counterfactually phrased:

    "When I think about my fiancee there is no image, but I am definitely thinking about her, I know today she has her hair up at the back, she's brunette. But I'm not describing an image I am looking at, I'm remembering features about her, that's the strangest thing and maybe that is a source of some regret."

    http://www.bbc.com/news/health-34039054

    A p-zombie would have to lack all such psychic contrast. So there would be nothing for its reasoning to latch on to.

    Aphantasia is therefore no more evidence for p-zombies than other irregularities of neurology, like blindness, dyslexia or other lacks which we don't treat as philosophically puzzling. We accept biological variation as causal of neuro-atypicality as there are no grounds to question that.

    And in fact that is the strong argument against p-zombies. Can we really imagine a neuro-typical body that is doing all that "information processing" and it not feeling like something? What actually warrants that belief, apart from an ability to ignore facts like aphantasia - indeed another of the many demonstrations of the exact correlation between biological structure and experiential reports?

    Aphantasia is actually fine-grain evidence for the non-existence of p-zombies as it takes the causal connection between neurology and phenomenology to another level - at least it will once we can check theories that it is all to do with the functional top-down connections needed to drive the primary visual cortex to highly vivid states of perceptual impression, or whatever the case turns out to be.

    So p-zombie theory has to posit that a neurotypical person could completely lack neurotypical phenomenology. Anything less than that is a cop-out. And a physicalist theory only has to admit that it doesn't have a complete account of phenomenology. It already stands on the ground of having a partial physicalist account in that no-one sees a problem in attributing blindness to a lack of the relevant equipment, or aphantasia now being due to some similar plausible and demonstrable neural lack. Aphantasia becomes simply, at worst, a promise of physicalist explanation still to be cashed out.

    Of course, a physicalist can and should also admit that physicalism has its limits. It will remain radically incomplete - there is an epistemic hard problem - once it gets to the point of being unable to raise theoretical counterfactuals. We can't know the unknown unknowns - even if we can suspect they lurk. So what would it be like to experience grue, etc, etc. Explanation generally loses its purchase when we start trying to tackle differences that don't make a difference. And that is true of physicalism also as an explanatory enterprise.

    However that is also not a big issue in practice. It certainly isn't any kind of argument for a positive belief in p-zombies. Just as aphantasia is precisely the kind of further fine-grain counterfactuality that argues in favour of physicalism rather than against it.

    (But I'm entertained by the point that Dennett might simply be neuroatypical and that might biologically explain the vigour of some of his beliefs. And neuroscience would say we are all atypical anyway - much more phenomenologically unalike than we realise. That in itself ought to be a fact that informs philosophy of mind - likely a very good paper someone ought to write, if it hasn't been already.)
  • Does it all come down to faith in one's Metaphysical Position?
    Metaphysics done right would seek out the most rational or justified beliefs. And then these would be hypotheses or axioms to put to the test. We would judge their validity by how well they function as beliefs in pursuit of some lived purpose or other.

    So it would be the usual pragmatic justification of any belief. We can figure out what logically seems to make sense as a theory - or at least stands as a clear enough alternative to make a difference whether we believe it vs not believing in it. Then we can pay attention to the consequences of acting on said belief.

    This means that a large class of traditional metaphysical dilemmas may indeed be classed as theories that are "not even wrong". They don't actually frame alternatives that make a difference. So is there a god, is there freewill, is there a meaning to life? Really, unless you are advancing a position which would make some actual difference if you believed it, it only sounds like you are being philosophical to talk about the "what if". A belief with zero consequences is not really "a belief" in any strong sense.

    This is why skepticism is in the end unsatisfying. You can likewise claim to disbelieve anything. There is always "some grounds" for denying any positive claim. But such disbelief has to have consequences too to be a difference that makes a difference. And also, pointing out that something can be legitimately disbelieved is only to confirm that the original belief was one of consequence. It was already doing the right thing in being framed in a way that could be found wrong. So the skeptic's position - if it has any actual value - is already incorporated (at least implicitly) in what it seeks to challenge.

    The moon could be made of green cheese. But that skeptical possibility is already subsumed in the belief it is made of rock. Likewise views on god, freewill or life's meaning are properly cashed out in metaphysics only to the degree there is some positive claim that then admits to the skeptics test.

    So yes, you have a problem if your beliefs about freewill or whatever are only framed in a vague and untestable fashion. Faith then seems the only option. But in fact what should be questioned is whether you really "have a belief" rather than simply some meaningless formula of words - an idea that is in philosophical reality, not even wrong.

    Metaphysics did start out in this rigorous fashion. It posed concrete alternatives, asking questions like whether existence was ruled by chance or necessity, flux or stasis, matter or form, etc, etc. And the whole of science arose from this rational clarification of the options. It was a really powerful exercise in pure thought.

    But the big stuff was sorted in a few hundred years in Ancient Greece. So what is left now is mostly the need to learn this way of thinking more than to attempt to solve a lot of unsolved metaphysical riddles. And worst of all is to try to treat ideas without concrete consequences as real philosophical inquiries. Metaphysics didn't become central to modern thought by worrying about beliefs with no effects.
  • What's the difference between opposite and negative?
    Intuitively, I'd say negation is generally predicated of the 'same' thing, as in 'negative charge' and 'positive charge', or 'positive spin' and 'negative spin', while two things which are opposite are so with respect to some third term, as in 'green and red are opposites on the color wheel'.StreetlightX

    I'd agree pretty much. Negation speaks to opposites that are reflexive or easily reversible because they are of the same scale. So spin or charge are symmetries that can be broken in two directions. And just as easily unbroken by another reversal. Or equivalently, you can imagine creating a hole via a taking away, that can then function as the negation - as in the electron hole of Dirac's sea.

    But then the other sense of opposite is an asymmetry - where the relationship is inverse, reciprocal, dichotomistic. A breaking of a symmetry across scale. So this is where you get the metaphysically general opposites, like discrete vs continuous or chance vs necessity, where the two poles are as unalike as it is possible to be. There is a thirdness involved - in that the two poles are mutually exclusive, but also jointly exhaustive. That is, together they negate all other possibility, so taking the negating to a whole other universal level.

    So opposition suggests a dyadic relation. But simple or particular states of opposition are so easily reversed that the stable or substantial thing seems to be the third thing of the symmetry they break. And then metaphysical oppositions seem irreducible and undeniable because they also speak to the third thing of the symmetry they break, but now in terms of excluding it as an unstable ground of possibility.
  • How did living organisms come to be?
    I accept real non-spatial existence, so my claim is that there are real things, demonstrated by physics to have real existence, which cannot be represented as having a spatial form. These things are non-spatial, non-dimensional.Metaphysician Undercover

    What things exactly? And what is their relevance to this discussion about modelling particles as located objects with no internal structure?
  • How did living organisms come to be?
    The point I was making, which started this discussion is that we have no way to establish correspondence between the model and the reality, because the things are modeled as non-dimensional, and we have no way of conceiving of non-dimensional existence. If your argument is that the model doesn't necessarily represent the reality, then you are arguing that we should accept fiction.Metaphysician Undercover

    Read the wiki page. What physics means is that you can treat an elementary particle as a mathematical point as that is a model of located material being without internal structure.

    You can treat the Earth as a mathematical point too - a centre of gravity. And it works so long as you are far enough away not to be bothered by the Earth's material variations - the effect that mountain ranges would have for instance (coincidentally, Peirce's specialist area in science).
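
    As a back-of-envelope sketch of that point-mass idealisation (standard constants, nothing exotic): treat all of the Earth's mass as sitting at its centre of gravity and Newton's formula already gives the familiar numbers, so long as you stay far enough out for the lumpiness not to matter.

```python
# Newtonian gravity with the Earth idealised as a point mass at its centre.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24    # mass of the Earth, kg
R_SURFACE = 6.371e6   # mean radius of the Earth, m
R_MOON = 3.844e8      # mean Earth-Moon distance, m

def accel(r):
    """Acceleration toward a point mass M_EARTH at distance r."""
    return G * M_EARTH / r**2

print(f"at the surface: {accel(R_SURFACE):.2f} m/s^2")   # ~9.8 m/s^2
print(f"at the Moon:    {accel(R_MOON):.4f} m/s^2")      # ~0.0027 m/s^2
# Up close, mountain ranges and the equatorial bulge add measurable corrections;
# the point-mass model is an idealisation that works at a distance.
```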

    Likewise the standard model can call an electron a point. But then string theory or braid theory might discover an internal structure that shows the pointiness to be merely an effective theory of the real deal.

    So as usual, you are trying to insist on your lay interpretation of what is being said and not taking in the subtleties of the way science employs its metaphysics.
  • How did living organisms come to be?
    OK. You still don't get how it works. Cool.
  • Bringing reductionism home
    Sadly, there are an infinite number of wavefunction world branches in which that is guaranteed to be the case. The equation has no other solution but to say yes to the reality of every outcome, no matter how improbable... ;)
  • How did living organisms come to be?
    Are we talking about the defining feature of the model or the reality? (You understand the difference by now, right?)
  • Bringing reductionism home
    Let me hold you by the hand and give you a childish example: An equation may have a solution, which you may prove must exist, but that does not mean you possess the solution. Is that a bit complicated?tom

    But you were saying something about reality itself being comprehensible. We might certainly be inclined towards such an ontological belief given a supporting epistemic framework of theory and measurement. However you seem fixated on a naive Platonism when it comes to this issue. For you, the deductive truths of mathematics appear to bypass any need to demonstrate that the world is as the models say, rather than those models merely placing strong notional constraints on our speculations.

    Even mathematics has had to accept that it starts its modelling with the "good guess" of an axiom. The point of Gödel was to show that axioms are modelling hypotheses, not self-formalising truths. They become secured over time due to the fact they deliver - in terms of our also rather human purposes.

    But all this is Epistemology 101. Curious.
  • How did living organisms come to be?
    Except that quarks, leptons, and bosons are point particles.tom

    Why do you keep avoiding the "modelled as" point particles? I mean it is normal in physics to understand that it is a specific kind of useful idealisation.

    A point particle (ideal particle[1] or point-like particle, often spelled pointlike particle) is an idealization of particles heavily used in physics. Its defining feature is that it lacks spatial extension: being zero-dimensional, it does not take up space.[2] A point particle is an appropriate representation of any object whose size, shape, and structure is irrelevant in a given context. For example, from far enough away, an object of any shape will look and behave as a point-like object.

    https://en.m.wikipedia.org/wiki/Point_particle

    The key idea is that a point particle lacks internal structure. So even with quantum HUP fuzziness, you can still exactly locate it as a quantum superposition of itself. There are no observables resulting from further internal structure to blur the picture.

    But equally - if we are talking about ontological reality - you run into problems trying to get close to a massive point particle. There is the Compton wavelength at which your point will start spewing other points. In trying to get close observationally, we produce more particles due to the energy density. And which is now the one we claim to have been there?
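
    To put a rough number on that (a back-of-envelope sketch using standard constants): the electron's Compton wavelength, lambda = h/mc, marks the scale below which trying to localise the "point" concentrates enough energy to start creating further particles.

```python
# Compton wavelength of the electron: lambda = h / (m * c).
h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
c = 2.998e8        # speed of light, m/s

lam = h / (m_e * c)
print(f"electron Compton wavelength ~ {lam:.3e} m")   # ~2.43e-12 m
# Probing for structure below this scale concentrates energy of order m*c^2
# in the region, and pair production starts "spewing other points".
```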

    So is there some reason you take a monstrously simple approach to your ontological claims about particles? Or do you just believe the pictures of things are those things due to some kind of naive realism (kind of like your happy literal acceptance of branching multiverses)?
  • How did living organisms come to be?
    Which means, according to you, the Standard Model is wrong. Please explain.tom

    So if particles are modelled as strings rather than points, the SM is "wrong"?

    Sounds legit.
  • The Philosophy of the Individual in the Christian West
    Some people wonder why Americans are so religious. (They are compared to Europe, especially). I would say it is (at least to some extent) BECAUSE there has been so much splintering. Every time a group divides, it is re-energized.Bitter Crank

    Hah. That's a very good point. The Anglican church in the UK is pretty relaxed about actual belief in God these days. The social service aspect is what counts. So mysticism ending in politics as you say. And the Anglican traditionalists are in Africa now, wondering what happened.
  • Relative Time... again
    No. Totally legit. Complete sense. [Backs away nervously, feeling for the door...]
  • Relative Time... again
    So God is moving the whole of time as a block. Is He pulling off that feat by shifting it to some other point of ... time?

    Sounds legit.
  • Relative Time... again
    What I was originally pointing out to you was the big difference between reacting to Newtonian vs modern understandings of space/time.

    So for instance you were imagining Leibniz running a relativistic argument against Newtonian time by God turning back the universal clock by four hours and seeing no difference.

    Well great. Newtonian mechanics is reversible like that. It does have that time symmetry. That is why time seems like a dimension one can freely time travel in.

    But once you admit energy into the picture - global time as entropically-directional change - then going back in time is actually going to break a symmetry. If you go back four hours, the whole universe is now four hours hotter ... as well as four hours smaller. There are actual consequences that a thermometer would reveal.

    So Einsteinian relativity tries to recover some of the good old Newtonian scale indifference. It gives you a formula for handling "energetic Lorentzian boosts" - the symmetry-breaking effects of going at some other speed less than c.

    So on the one hand, we still seem to have backdrop time - and quantum mechanics says thank goodness for that.

    But then next up at the plate is quantum gravity theory, and now we really have to rethink our notion of time so that it does align with a thermal view - uni-directional emergent energetic change.

    Talk about time is tricky because really it is about relative rates of change - change overall in a cooling/expanding universe versus change locally due to relative energy scale. And each is the backdrop against which we read off the other - that is the lightspeed view (of a thermalising bath of cosmic background radiation) vs the restmass view (of these lumpy, sluggish particles of "mass" that can "move through time" just by, relatively speaking, not moving at all).

    Again, I have no idea whether you have a concrete thesis or any actual interest in the science involved. But a thermometer would tell you if you have wound the Universe's clock backwards (just hold it up in deep space and measure the temperature of the CMB).
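
    The thermometer point can be made quantitative with the standard scaling of the background radiation (the redshift values below are illustrative choices only): the CMB temperature goes as the inverse of the cosmic scale factor, so a smaller, earlier universe is measurably hotter.

```python
# CMB temperature scales inversely with the cosmic scale factor:
# T(z) = T0 * (1 + z). The redshifts chosen are illustrative only.
T0 = 2.725   # present-day CMB temperature, K

def cmb_temperature(z):
    return T0 * (1 + z)

for z in (0, 1, 1100):   # today, scale factor halved, the recombination era
    print(f"z = {z:>4}: T = {cmb_temperature(z):7.1f} K")
# Wind the universal clock back far enough and a thermometer held up to the
# sky registers the difference.
```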
  • Relative Time... again
    I'm still mystified by what you mean. But I guess I shouldn't hold my breath hoping you will explain. Opacity is your weapon of choice again it appears.
  • Relative Time... again
    Do you mean spacetime? I don't follow. Why do you think it doesn't relate to the dynamical view I described?
  • Relative Time... again
    As you say, that is what he argued directly against. So he may have been reacting with a religious/idealist point of view. But the context was still a particular model of spacetime that lacked direction or growth and was considered to be statically eternal.

    I mean do you think his arguments work against the relativistic view and its particular features, like the Einstein hole argument?

    They can sound similar, but then they may be quite opposite in fact...

    Thus in Leibniz-style thought experiments, worlds that at first sight [appear] physically different, turn out to be mathematically identical, [but] in the [Einstein] hole argument, apparently mathematically different worlds reveal themselves as physically identical.

    http://philsci-archive.pitt.edu/9676/1/Giovanelli_-_Leibniz_Equivalence.pdf
  • Relative Time... again
    Leibniz and Kant may not be much help then as they were still operating in a Newtonian reference frame in which the best that could be imagined was Galilean relativity.

    Modern physics changes things by bringing change directly into the picture as action or energy. Nothing has substantial being - neither massive objects nor the massless space they occupy. It is all just shapes given to energy transactions. So it becomes natural - if not formally expressed in that fashion - to measure time in terms of energy units.

    The greater the momentum, the slower the local clock runs. The speeding particle gains "more time" because it takes longer to decay. Or conversely, you can say it loses the potential for change and becomes more the changeless object. Like the photon that goes so fast it never really exists so far as it is concerned.
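
    The "takes longer to decay" point is just the standard time-dilation factor. A quick sketch, with the muon as the textbook example and an arbitrarily chosen speed:

```python
# Special-relativistic time dilation: t_lab = gamma * t_rest,
# with gamma = 1 / sqrt(1 - v^2/c^2). The speed chosen here is arbitrary.
import math

TAU_MUON = 2.2e-6   # muon rest-frame lifetime, seconds

def gamma(beta):
    """Lorentz factor for a clock moving at v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.99   # travelling at 99% of lightspeed
print(f"gamma = {gamma(beta):.2f}")                            # ~7.1
print(f"lab-frame lifetime ~ {gamma(beta) * TAU_MUON:.2e} s")  # ~1.6e-5 s
# The greater the momentum, the slower the moving clock runs by our reckoning -
# the speeding particle "gains time" before it decays.
```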

    By contrast, a static mass is in a least energetic state and so experiences the actuality of global temporal dimensionality the most fully. It can actually fail to change, knowing itself to remain in the same location while everything else has moved. And then it can know that it is itself moving when mostly everything else is now staying the same.

    So the picture of time is completely changed by including energy or action as part of the co-ordinate system. It may still sound a spatialised description - as when we count the revolutions of a clock hand making its exactly repeating round trips (a way to watch something move, but not let it run away out of sight). But the clock has to be wound up. So the spatialised trickery is still the measurement of some energy potential. Which we soon discover when accelerating the clock in a rocket.
  • Relative Time... again
    Is time an aspect of an object, even?Moliere

    Time is either going to be a global dimension through which things move or it is instead a measure of local change. And as usual when faced with a compelling dichotomy, the answer is going to be you need to combine both to get a full answer.

    So time is clearly about a local potential for change - what could happen in the future of some object or substance in terms of its degrees of freedom. How could what is currently the same become something different.

    And time is just as clearly about a global constraint on all that. Time has a universal direction in which the past represents an accumulation of all the ways the present has become historically limited. And that then leaves the localised degrees of freedom - the possible future of all those existing objects or substances.

    So time measures change against some notion of stasis. It measures the differences that make a difference. When we say an object moves through time, we mean that it doesn't change while the history all around it is changing - eliminating degrees of freedom in many other locations. And then there comes the moment where the object does itself change - becomes further historically marked in some way we consider different enough to make a difference. Now it is the changing object being seen against a static backdrop.

    So it is all about flux vs stasis. And we can read that off the world either as local flux seen against generalised stasis, or local stasis against generalised (cooling and expanding, thermal arrow of time) flux. It's all relative, as relativity says.
  • The Philosophy of the Individual in the Christian West
    Whether or not people actually believe in materialism is kind of null vs. the fact that a materialistic ethos controls how society functions, in relation to technology.Noble Dust

    I agree. But then that is the target - the way we have wired in a machine-like approach to life in our general social institutions.

    You can blame the scientists to an extent. They have jobs because society values the economic return on technological control over the world that their modelling efforts provide. And then some may endorse this at a metaphysical level as the only way to be - a giant resource-consuming machine.

    But the Dawkins and Dennetts then become symptoms of the disease, not its causes. And to attack what is going on in woolly spiritual terms just ain't going to work. Marx tried it. The hippies tried it. The new agers tried it. Wishful thinking just doesn't scale.

    It may also be a struggle for ecological or systems thinking to make a difference. But at least that has a hope if it is a correct basis of analysis. So the only cure for scientism is better science.
  • The Philosophy of the Individual in the Christian West
    On the lowest level is the material/physical world, which depends for its existence on the higher levels. On the very highest/deepest level is the Infinite or AbsoluteWayfarer

    My problem with this is that it is so vague that it can be interpreted as being true either way.

    So a systems scientist at least would say that the world is both matter and form - or energetic actions and organisational constraints. So the infinite/absolute is understood rather Platonically as mathematical necessity. There are forms of organisation that simply have to be (because they are the most symmetric states, the ones that have the least action).

    And although modern physics doesn't proclaim that it thinks this way, in fact it does. The materiality of atomism has long been replaced by the pursuit of the global mathematical symmetries that are the possible forms of localised excitations. Actual matter has been reduced to nothing but some measured constant to be plugged into the equations. Whereas the forms feel really concrete, the material bit has become as ethereal as can be imagined.

    I think it is important to respect this actual shift in scientific thought. In quantum ontology, a particle has become a sum over all its possibilities. So hammering scientists for being dull materialists has become completely wrong.
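
    (For concreteness - my gloss in Feynman's path-integral language, not a claim about what any particular physicist says: the "sum over all its possibilities" is an amplitude

        A = \int \mathcal{D}[x] \; e^{\, i S[x]/\hbar}

    taken over every conceivable path x, weighted by the classical action S. The classical, "material" trajectory is just the stationary-action contribution that survives the interference - which is the sense in which the least-action forms mentioned above simply have to be.)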

    So yes, science (as an institution) does still reject transcendent or spiritual causation. But if you are making a comparative religion point, science has shifted away from a material substance reductionism towards the other end of the spectrum - seeing mathematical form as the eternal organising force.

    And in doing that, it returns towards ancient immanent metaphysics where chaos, or apeiron, or pure potential are the "material grounds" upon which rational necessity imposes its organisational desires.

    So science is pre-Christian in going back to first philosophy notions - that you find also in Taoism, Buddhism, Judaism.

    That is the irony. Scientism and Christianity would have more in common in framing the world as matter vs mind or spirit. They accept those two apparently conflicting choices as what they either fight for or against.

    You want to crusade against materialists? Actual scientists stopped being that - in terms of operative metaphysics - about a century ago.

    So in broad terms, what I think has happened to Western culture is that it has been hijacked by a hostile force, almost a parasitic entity, namely scientific materialism.Wayfarer

    Certainly you can name your hate figures - Dawkins, Dennett, Krauss ... er, I guess there are a few more who like the limelight and book sales that come with being the Church's loyal opposition.

    And certainly, science in general (as an institution) thinks of itself as doing naturalism. So it would reject any transcendent explanations at a gut level, because its successful working presumption is that the world is causally closed - immanent in its material organisation.

    But rather than a hijack, you have the Enlightenment creating its very useful machine model of reality. It was a mode of thought that was great at turning us into technological beings. Then you have the variety of responses that turn of events provoked.

    I would say the illegitimate response was Romanticism - or at least that aspect of Romanticism that tried to retrieve a transcendent metaphysics.

    The legitimate response - in the sense of being metaphysically correct in its analysis - would be the organicism or systems thinking that persisted in the corners of the larger scientific enterprise, and understood its deeper connections to the ancient metaphysical paradigms.

    So this is where we are at. Science did take the view that the world is a machine. Culture did respond by saying that "materialism" is fine as far as it goes, but misses the larger metaphysical picture. But that larger picture is either going to include spirit or some other notion of causal rupture - which at worst becomes "a big daddy in the sky" - or it is instead going to presume that the world is an organic self-organisation out of pure possibility, and build some useful scientific model of that.

    As I say, we are a century into that new way of thinking about the world. Yet news of that is being drowned out by all the physics-bashing (and I admit, also by the fact that the computer scientists and neuroscientists - reflecting medicine's belief that the body is another machine - do continue to promote the technologist's metaphysical creed).

    However, dig into the ontology of modern physics and it seems as immaterial as it gets. You are dealing with mathematical forms imposed on pure possibility - constraints on actions. But the fact that the metaphysics is now mathematical abstraction makes it rather inaccessible to most. So that is another ingredient here - why the culture war takes the shape it does rather than engaging with the real philosophical issues.
  • The Philosophy of the Individual in the Christian West
    But I think the historical evidence for the role of Christianity in the formation of the modern liberal state, and the principles I mentioned in the quotation at the top of this thread, are unarguableWayfarer

    Of course it plays a role. But don't we find the origins of the notion of individuality in ancient Greece - overtly in Athenian democracy/Socratic philosophy, and then pragmatically in the earlier Milesian trading cities where being a cross-roads for travellers was already broadening the mind?

    So I take the cynical view that a new kind of religious meme was born that involved telling folk they had a soul and a personal relation with God. This disconnected them from their attachment to a traditional communal setting - the local sacred places, customs and spirit figures - and tied them instead to the abstracted institutional notion of a holy church. Even kings were just other humans. Only you and God mattered in the end. Pass the hat around the congregation and funnel the proceeds to Rome.

    So sure, Christianity was a good way of organising humans, as it was all part of detaching people from their very local social institutions and creating the kind of organisational scale that an abstracted religious institution can sustain. Just as the Romans turned Greek city states into organised empires by institutionalising abstract laws of behaviour and governance.

    The structural commonality is thus the creation of the abstracted individual to match the abstracted social institution. We teach people they are "unique selves", and that is powerful because it means people start acting in accordance with culturally-evolving abstractions - philosophical or rational ideas like moral codes and rules of law.

    The active philosophical question then is, if we understand that to be the game, how should it be played now that we realise it? Christian behaviour does sound like a good way to run a society. It has pragmatic merit. So what exactly would it be in tension with, when viewed from, say, an atheist/enlightenment libertarian camp? Why would we give it special credit except in terms of its practical results (which could be a sense of wellbeing and purpose, as opposed to the nihilism that appears to be grounded in some versions of enlightenment scientism - the line I'm guessing you would take)?
  • The Problem with Counterfactuals
    know Schrodinger's point was that it was ridiculous to think the cat would be in a superposed state of alive and dead before we look, but a lot of people have taken it to mean the opposite.Marchesk

    Thermal decoherence adds extra constraints on those probabilities now, keeping the weirdness at a suitably quantum scale. The observer/collapse issue is not solved as such, but there is a commonsense workaround where the statistics of the decaying particle (which causes the rather classical death by a shattered vial of poison) give you a good argument for how soon the death is likely to happen.
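
    (To spell out that bookkeeping - a standard exponential-decay sketch, not a resolution of the measurement problem: the probability that the trigger atom has not yet decayed after time t is

        P_\text{survive}(t) = e^{-\lambda t} = (1/2)^{\, t/t_{1/2}}

    so after one half-life the cat's odds are even, and after ten half-lives the "alive" branch is down to roughly 0.1%. Decoherence then keeps those alternatives from interfering at the macroscopic scale, which is why the weirdness stays safely quantum-sized.)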
  • The Problem with Counterfactuals
    No, all I'm saying is that aletheist's solution to the problem of counterfactuals doesn't work. He said that "if X then Y" is true if the laws of nature determine that if X happens then Y will happen. But when it comes to quantum events, the laws of nature don't determine that if X happens then Y will happen; they only determine that if X happens then Y might happen – even if "if X then Y" is true.Michael

    You do realise that you keep trying to build in a classical notion of causality in which the past constrains the future in some general fashion? Since QM - as in the delayed-choice quantum eraser experiments - that has become a questionable presumption. Instead - retrocausally - the future can constrain the past.

    So the laws of nature can be real generals, or actual constraints. But they are not as anchored in the general thermodynamic arrow of time or causality as classical metaphysics would presume. The quantum scale of action sits outside of this flow - doing its non-classical sum over all counterfactual possibilities so as to take even its unhappened future into account as part of its wavefunction.

    Gravity is pulling on the stone in Peirce's hands. So that sets up a reasonable expectation in our minds. It would fall, but he is stopping that. However, the stone has some remote possibility of quantum tunnelling through Peirce's mitts. That too is part of the natural law here. You would just treat it as a remote possibility, as you are unlikely to think it reasonable to spend the rest of eternity waiting for that to happen.
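
    (How remote? A back-of-envelope WKB estimate, with numbers I am inventing purely for scale: the transmission probability through a barrier falls off as

        T \sim e^{-2 \kappa L}, \qquad \kappa = \frac{\sqrt{2m(V - E)}}{\hbar}

    and for a kilogram-scale stone, a joule-scale barrier and a centimetre of hand, the exponent is of order 10^{32}. The probability is strictly non-zero - which is all the counterfactual needs - but waiting for it really would take the rest of eternity and then some.)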

    So quantum natural law simply defies your preconceptions about counterfactuality in both time and space. Counterfactually, the stone could be on the other side of Peirce's hands. Hence tunnelling really happens.
  • The Problem with Counterfactuals
    Honestly, you are only demanding to be given a "better law of nature" here - one that conforms to your bent for counterfactual definiteness at all times and places.

    So for you, if QM's indeterminism is a falsification of your preference for metaphysical determinism, then you reject QM as an adequate account of nature. The world has to adjust itself so that it conforms to your notion of how to be truth-apt.

    You started off backwards on this whole issue, and now you are aiming to be as backwards as it could possibly get.