• Isaac
    10.3k
    we cannot determine whether the neurons act representatively, through reference to the model, because the model represents how the thing behaves, not the reason (why) for that behaviour.Metaphysician Undercover

    As ever, I have no idea what you're talking about. Why is there even a reason for the behaviour of neurons? They just fire according to physical laws, they don't have a reason.
  • Banno
    23.5k
    Ahhh. Ok, the penny drops for me. I think I need more on the basic statistical framework for the Markov boundary. It's at the edge of my understanding.
  • Isaac
    10.3k


    This is the best paper on the maths.

    This one puts it all into context.

    Put simply, a Markov boundary is the set of states which separate any system we're interested in studying from the parts we're not. So an individual neuron has a Markov boundary at the ion channels and the vesicles of the pre-synaptic membrane. A single-celled organism has a Markov boundary at its cell membrane. A brain (nervous system) has a Markov boundary at its sensorimotor neuron dendrites.

    Statistically, it just marks that point in a probabilistic system at which the internal and external states become conditionally independent of one another, given the boundary states, in terms of Bayesian conditional probabilities. So every internal state is associated (theoretically) with a most probable external state, and the nodes which form the posterior distribution for the nodes maximising that equation are the Markov boundary. In the case of perception, these are the sensorimotor neurons. Anything outside of these is therefore a 'hidden state' only in that it doesn't directly provide posterior distributions for the internal Bayesian conditional probabilities by which the internal state represents the external one. Which is another way of saying that we have to infer the causes of the data from the sensorimotor system, because that system is the last point at the edge of the part doing the inferring (sensorimotor cells cannot, themselves, infer).
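
    To make that concrete, here's a minimal toy sketch in Python (the three-node chain and all the probability numbers are invented purely for illustration, not taken from any of the papers): an external node E, a boundary (sensory) node S, and an internal node I, where conditioning on S screens I off from E.

        import numpy as np

        # Toy chain: external E -> boundary (sensory) S -> internal I.
        # All probability tables are made up for illustration.
        p_E = np.array([0.6, 0.4])                  # P(E)
        p_S_given_E = np.array([[0.9, 0.1],         # P(S | E=0)
                                [0.2, 0.8]])        # P(S | E=1)
        p_I_given_S = np.array([[0.7, 0.3],         # P(I | S=0)
                                [0.1, 0.9]])        # P(I | S=1)

        # Joint distribution P(E, S, I) = P(E) * P(S|E) * P(I|S)
        joint = (p_E[:, None, None]
                 * p_S_given_E[:, :, None]
                 * p_I_given_S[None, :, :])

        # P(I | S, E) computed from the joint...
        p_I_given_SE = joint / joint.sum(axis=2, keepdims=True)

        # ...matches P(I | S) for every value of E: the boundary node S
        # screens the internal node off from the external one.
        for e in (0, 1):
            for s in (0, 1):
                assert np.allclose(p_I_given_SE[e, s], p_I_given_S[s])
        print("internal state is conditionally independent of external, given S")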

    Does that help? Or make everything worse, perhaps...?
  • Janus
    15.8k
    ...is exactly what I'm arguing for. There is nothing whatsoever about these 'hidden states' which prevents us from naming them. In fact, I think that's exactly what we do. The 'hidden state' I'm sitting on right now is called a chair. It's hidden from my neural network because the final nodes of its Markov boundary are my sensorimotor systems. It's not hidden from me, I'm sat right on it.Isaac

    If you are the body, is it not, along with the chair, a hidden state (or, as I would prefer to say, a hidden process)? Of course we can name them, but it seems we are doing so from within the familiarity which constitutes our common, and also individual, experience.
  • Banno
    23.5k
    I might have to wait for the children's version...

    Part of the philosophically interesting stuff has been the extent to which cognition involves the stuff outside our neurones, if I can put it so coarsely. I have an image of cognition occurring somewhere between one's body and those things that the body manipulates - embedded or extended cognition. To this we now add enactive cognition, that it is in our manipulation of things that cognition occurs. I'm puzzling over the extent to which the mathematics here assists in that choice, and I'm supposing for the moment that it is neutral.

    Am I wrong?

    @Michael?
  • hwyl
    87
    Yes, there seems to be. We can't tell with absolute certainty, and never will. Next question?

    When I argue with scientists I just get so angry about their know-nothingness about philosophy - like, can it mend a fuse? Who cares; that's what engineers and scientists are for. And I reply that the questions, the requirements, are really essential, really central. And then there are these endless 17th century questions which really are rather meaningless. Should we maybe just vacate the 17th century from modern philosophy? God and Aristotle and Descartes and Spinoza and Leibniz, subjects and objects shuffling this way and that in a static universe and, last but certainly not least, the most absurd and complicated metaphysical constructions - should we just let them all go?
  • Isaac
    10.3k
    I have an image of cognition occurring somewhere between one's body and those things that the body manipulates - embedded or extended cognition. To this we now add enactive cognition, that it is in our manipulation of things that cognition occurs. I'm puzzling over the extent to which the mathematics here assists in that choice, and I'm supposing for the moment that it is neutral.Banno

    Yes, that's right to an extent. If we look at, say, the ecosystem, then that will have its own Markov boundary, and all the organisms within it (and the non-living components) will be part of a network which could (theoretically) infer stuff about the nodes outside the Markov boundary of the ecosystem*.

    The caveat is that there has to be networked data transfer for there to be inference, and that's a small problem with the enactivist account. Without suppressive feedback updating posterior distributions, it's hard to see how nodes could infer anything about the distribution of their neighbouring data points. So I struggle to see how one could create a Markov bounded system which includes the objects of our environment (but not, say, the entire ecosystem), because there are so very clearly these two non-inferring data nodes at our senses and our motor functions.
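
    To sketch what I mean by feedback updating a posterior, here's a toy predictive-coding-style loop in Python (my own illustration; the linear generative mapping, the data value and the step size are all invented): an internal estimate of a hidden cause is nudged by the prediction error it fails to suppress at the sensory node.

        # Toy gradient-descent-on-prediction-error loop; numbers are illustrative.
        def g(mu):
            return 2.0 * mu      # assumed generative mapping: cause -> predicted data

        data = 3.0               # observed sensory sample
        mu = 0.0                 # initial estimate of the hidden cause
        lr = 0.05                # step size

        for _ in range(200):
            eps = data - g(mu)    # prediction error at the sensory node
            mu += lr * 2.0 * eps  # feedback update: gradient of -0.5 * eps**2 w.r.t. mu

        print(mu)                # converges to 1.5, where the prediction matches the data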

    Having said that, some very smart people still hold to a full enactivist account, so I'm not in a position to gainsay them. The systems dynamics just don't seem to add up, to me.


    * In fact I distinctly remember reading a paper on that very subject, but I don't seem to have it in my biblio database. I might do a Google trawl for it later.
  • Agent Smith
    9.5k
    The problem is that sensory neurons (past the sensing apparatus at the tip/end) all talk the same language (action potentials), i.e. though nerves will only activate to specific stimuli (pressure/temperature/etc.), the action potentials that carry the information to the brain are identical, which means we can extract a brain and electrically manipulate it (mid-axon) to experience an "external world"! The brain in a vat thought experiment! This rather macabre experiment is feasible in principle, though not with current biotech!

    Feels like a roundabout way of broaching the simulation hypothesis. Wishing right now that I knew advanced math! :sad:
  • Metaphysician Undercover
    12.6k
    Why is there even a reason for the behaviour of neurons? They just fire according to physical laws, they don't have a reason.Isaac

    The point was, that you haven't the premises required to logically conclude that there is no reason for the behaviour of neurons.

    We're back to the point where Banno started this, by claiming that the behaviour of neurons is not representative. I said you cannot conclude that without knowing the reason for the behaviour. To simply assert "they just fire according to physical laws" does not give that reason. You appear to assume that there is no reason. This is obviously an unsupported assumption, as the following example demonstrates.

    All tools created by human beings operate according to physical laws, and this does not necessitate the conclusion that there is no reason for them. You are just proving my point: knowing how a thing operates (according to physical laws) does not provide you with the knowledge required to make any conclusion about why the thing operates that way. Denying that there is a reason why is simply an uninformed, unjustified, and unwarranted assumption. So I've just gone around a circle, with you taking up where Banno left off, and proceeding back to Banno's starting point.

    Put simply, a Markov boundary is the set of states which separate any system we're interested in studying from the parts we're not.Isaac

    Systems theory is extremely flimsy. Boundaries can be imposed for various reasons, with various degrees of arbitrariness and varying degrees of openness and closedness. Then, on top of all these levels of arbitrariness, when things don't behave according to what is dictated by the imposed boundaries, we can just rationalize the misbehaviour through reference to mysterious things like "hidden states".
  • Metaphysician Undercover
    12.6k
    They just fire according to physical laws, they don't have a reason.Isaac

    What about those "hidden states"? Those unknown aspects disqualify this conclusion. All you can say is that they act according to physical laws to an extent which does not include those hidden aspects.
  • Isaac
    10.3k
    The point was, that you haven't the premises required to logically conclude that there is no reason for the behaviour of neurons.Metaphysician Undercover

    I don't need premises. I don't consider ants have bank accounts. I don't consider atoms have feelings. I can't for the life of me think why anyone would consider neurons having reasons for long enough to even consider the premises required.

    If it floats your boat, be my guest, but I've as little interest in checking whether neurons have reasons as I have in checking whether rocks have holiday plans.

    Systems theory is extremely flimsy.Metaphysician Undercover

    Ha! But the notion that neurons have reasons is practically watertight?

    What about those "hidden states"? Those unknown aspects disqualify this conclusion.Metaphysician Undercover

    Who said anything about unknown? We can know a hidden state. If we have a successful model of it, we know it. What more is there to knowing something?
  • Agent Smith
    9.5k
    vacatehwyl

    Evict?
  • Isaac
    10.3k
    * In fact I distinctly remember reading a paper on that very subject, but I don't seem to have it in my biblio database. I might do a Google trawl for it later.Isaac

    @Banno. Found it.

    https://discovery.ucl.ac.uk/id/eprint/10066600/1/Friston_Variational%20ecology%20and%20the%20physics%20of%20sentient%20systems_Proof.pdf

    Here it talks about something similar to the wider system approach you mentioned...

    What are the internal states of the niche? And what are the causal regularities that they model? We suggest that internal states of the niche are a subset of the physical states of the material environment. Namely, the internal states of the niche are the physical states of the environment, which have been modified by the dense histories of different organisms interacting in their shared niche (i.e., histories of active inference).
  • Tate
    1.4k
    Who said anything about unknown? We can know a hidden state. If we have a successful model of it, we know it. What more is there to knowing something?Isaac

    Strictly speaking, you know what you inferred. Inference is not extra-sensory perception.
  • jorndoe
    3.4k
    10.1177_1059712319862774-fig1.gifIsaac

    Thanks for posting those papers, however crazy technical they are. :up: :)

    Hierarchical Models in the Brain (Nov 2008), Active Inference: A Process Theory (Aug 2016), Variational ecology and the physics of sentient systems (Dec 2019)

    Didn't see the image you posted; is that from a different paper?

    I guess they don't address the Levine / Chalmers thing directly, yet the models give their own insights.
  • hwyl
    87
    Evict?Agent Smith

    Banish?
  • Agent Smith
    9.5k
    Banish?hwyl

    Rusticate! :snicker:
  • Isaac
    10.3k
    Strictly speaking, you know what you inferred. Inference is not extra-sensory perception.Tate

    Depends on the context the words are used in. I don't hold with 'strictly speaking' when it comes to definitions. Words mean whatever they're successfully used for. If I see a map showing where the pub is and it's a good map, then I know where the pub is. I don't see a need to complicate the matter by saying that I 'really' know where the mark for a pub is on the map.

    Like 'see', inferring is part of knowing, not the object of it. What we know is the external state (or the proposition, as I believe the philosophers have it). How we know is by inference.
  • Tate
    1.4k

    You know what happened when you tested the model. That gives some degree of confidence in your subsequent inferences.

    No need to overstate things.
  • Joshs
    5.3k


    Here’s an interesting analysis of the issue from an enactivist perspective:

    Active inference, enactivism and the hermeneutics of social cognition: Shaun Gallagher and Micah Allen

    Abstract:

    We distinguish between three philosophical views on the neuroscience of predictive models: predictive coding (associated with internal Bayesian models and prediction error minimization), predictive processing (associated with radical connectionism and ‘simple’ embodiment) and predictive engagement (associated with enactivist approaches to cognition). We examine the concept of active inference under each model and then ask how this concept informs discussions of social cognition. In this context we consider Frith and Friston’s proposal for a neural hermeneutics, and we explore the alternative model of enactivist hermeneutics.

    Snippet:

    Conceiving of the differences or continuities among the positions of PC, PP, and PE depends on how one views the boundaries of the Markov blanket, not just where the boundaries are drawn, but the nature of the boundaries—whether they keep the world ‘off limits’, as Clark suggests, or enable coupling. For PC and PP, active inference is part of a process that produces sensory experiences that confirm or test my expectations; e.g., active ballistic saccades do not merely passively orient to features but actively sample the bits of the world that fit my expectations or resolve uncertainty (Friston et al 2012)—‘sampling the world in ways designed to test our hypotheses and to yield better information for the control of action itself’ (Clark 2016, p. 7; see Hohwy 2013, p. 79). On the enactivist view, however, the dynamical adjustment/attunement process that encompasses the whole of the system is not a mere testing or sampling that serves better neural prediction; active inference is more action than inference; it’s a doing, an enactive adjustment, a worldly engagement—with anticipatory and corrective aspects already included.

    Enactivists suggest that the brain is not located at the center, conducting tests along the radiuses; it’s on the circumference, one station amongst other stations involved in the loop that also navigates through the body and environment and forms the whole. Neural accommodation occurs via constant reciprocal interaction between the brain and body, and notions of adjustment and attunement can be cashed out in terms of physical dynamical processes that involve brain and body, including autonomic and peripheral nervous systems. We can see how this enactivist interpretation can work by exploring a more basic conception operating in these predictive models, namely, the free energy principle (FEP).

    https://www.researchgate.net/journal/Synthese-1573-0964/publication/311166903_Active_inference_enactivism_and_the_hermeneutics_of_social_cognition/links/5f75e89e92851c14bca49c36/Active-inference-enactivism-and-the-hermeneutics-of-social-cognition.pdf
  • Isaac
    10.3k
    Thanks for posting those papersjorndoe

    Glad you liked them. It's rich ground for study.

    Didn't see the image you posted; is that from a different paper?jorndoe

    It's from a stock of image links I have. It'll be from a paper, but I don't know which, I'm afraid.

    I guess they don't address the Levine / Chalmers thing directly, yet the models give their own insights.jorndoe

    Try this.
  • Isaac
    10.3k
    You know what happened when you tested the model.Tate

    Well, if we're not 'overstating', you only know what you currently remember about what happened when you tested the model.

    All thought is post hoc by at least a few milliseconds.
  • Isaac
    10.3k


    That's really interesting, thanks. Did you ever read the article here...

    https://journals.sagepub.com/doi/full/10.1177/1059712319862774

    ...where Friston responds to some of the enactivist critique? I'd be interested to hear how well you think it answers the criticisms.
  • Tate
    1.4k
    Well, if we're not 'overstating', you only know what you currently remember about what happened when you tested the model.

    All thought is post hoc by at least a few milliseconds.
    Isaac

    So we're not trying to be serious here?
  • Isaac
    10.3k
    So we're not trying to be serious here?Tate

    Nothing non-serious about it. If you want to say we don't actually 'know' a hidden state because all we 'really' have access to is our inference about it from experiment, then it is no less true to say that we don't 'really' know that either because all we 'really' have access to is our memory of what the inference was when we made it.

    It's the problem with putting 'really's everywhere.

    We know the hidden state. We sometimes make errors. I know the colour of the dress. Sometimes I'm wrong.
  • Tate
    1.4k
    We know the hidden state.Isaac

    Nah. There's an 80 percent chance of rain. I don't know it's going to rain.
  • Isaac
    10.3k
    There's an 80 percent chance of rain. I don't know it's going to rain.Tate

    The hidden state is not a future state, it's a current one.
  • Tate
    1.4k

    "To sample the future, what you do is first sample the last state, given its distribution. Then sample the next hidden state, using the transition matrix and repeat ad nauseum. Since you have no actual observations after the last point in the sequence, you are sampling from a markov chain. This will get you samples of the future, given everything you know of the partial sequence."
  • NOS4A2
    8.5k


    Good explanation.

    Of course this all depends on your theory of selfhood (what is 'me'?) but that's probably a whole 'nother can of worms we don't want to open here.

    That’s an important point.

    If one expands “the network doing the inference” to include the sensorimotor systems, what happens to the hidden state?

    It troubles me because every single “network doing the inference” appears to be the organism itself. By their own admission, and our own, organisms infer.

    Maybe this is partly a problem of systems theory in biology, the idea that this or that group of organs can be considered its own system, while other parts and other systems remain outside of it, different nodes so to speak. While this may be a decent abstract model of biological function, empirically this isn’t the case, because whenever such a system is isolated, or otherwise taken out of the larger system, it no longer performs the functions it is supposed to and is known for. A brain sitting on a chair, for example, could not be said to be thinking. Its only function as a system at this point is to rot.

    So can an activity that only organisms can be shown to perform—experiencing, thinking, inferring, believing, seeing—be isolated to a single part of it?
  • Metaphysician Undercover
    12.6k
    I don't need premises. I don't consider ants have bank accounts. I don't consider atoms have feelings. I can't for the life of me think why anyone would consider neurons having reasons for long enough to even consider the premises required.Isaac

    Well, if you consider that each and every internal organ has its own purpose, its own function, relative to the existence of the whole, which is a living being, then you would understand that each of these organs has a reason for its existence. If it has a function, it has a reason; that's plain and simple. It serves a purpose relative to the overall whole, which is the living being, therefore it has a reason for being there: to serve that purpose. Obviously, neurons serve a purpose relative to the existence of the being, therefore there is a reason for their existence.

    I really do not understand how anyone could even consider denying this obvious fact. Those who do, seem to suffer from some form of denial which appears to be an illness.

    Ha! But the notion that neurons have reasons is practically watertight?Isaac

    Yes, that neurons have reasons for being, based on their purpose, is a very sound principle. That neurons comprise a system is a very flimsy principle because they are actually a small part of a much larger "system", better known as a living being. And neurons serve a purpose (they have a function) relative to that being, but they do not make up an independent "system" in themselves.