• frank
    14.6k
    Multiple realizability is a feature of a non-reductionist theory of mind. I want to explain why its adherents like it and how its opponents reject it.

    MR is a response to the flaw in brain-state reductionism: it doesn't appear to be possible to correlate a particular brain state with a psychological state (like pain). This flaw is particularly noticeable when we think about the broad range of creatures who can feel pain: their varying anatomy makes it seem impossible to make this correlation.

    MR says we don't need to identify a particular brain state because pain can be realized by many different physical states.

    If you're a proponent of MR, what would you say the basis for the claim is?

    Next post: challenges to MR.
  • god must be atheist
    5.1k
    We simply don't know how the brain works, and our measuring techniques are not much help either.

    MR is a theory based on unknowability. I reject that. I think the functioning of the brain is knowable, but we just haven't got there yet.

    MR may have practical applications, up to the point when it becomes obsolete due to advances in knowing how the brain works. If it works in the first place.
  • Pfhorrest
    4.6k
    MR is not itself a theory of mind, it’s just a feature of functionalism. Functionalism says that mental states correspond to functional states of (particular kinds of) state machines, which in general are multiply realizable: you can run the same program on a computer made of transistors, vacuum tubes, or pipes and valves, in principle, as it’s not about the hardware per se but about the functionality that it can implement. Functionalism is not itself even inherently reductionist: in principle the function of the mind could be realized in some kind of immaterial substance, if such things can even exist. Functionalism just doesn’t require that such a thing exist.
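    A toy sketch of that multiple realizability in Python (the toggle machine is an invented example, not anyone's actual proposal): two structurally different implementations of the same abstract state machine, functionally indistinguishable from the outside.

        # Two realizations of the same abstract state machine: a toggle
        # that flips between 'off' and 'on' whenever it receives 'press'.

        def toggle_table(state, event):
            # Realization 1: a lookup table ("transistors")
            table = {('off', 'press'): 'on', ('on', 'press'): 'off'}
            return table.get((state, event), state)

        def toggle_branch(state, event):
            # Realization 2: explicit branching ("vacuum tubes")
            if event == 'press':
                return 'on' if state == 'off' else 'off'
            return state

        # Same inputs, same state trajectory, regardless of realization.
        for f in (toggle_table, toggle_branch):
            s = 'off'
            for e in ['press', 'press', 'wait', 'press']:
                s = f(s, e)
            assert s == 'on'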
  • frank
    14.6k
    you can run the same program on a computer made of transistors, vacuum tubes, or pipes and valves, in principle, as it’s not about the hardware per se but about the functionality that it can implement. — Pfhorrest

    This is why I don't like that analogy: if you're running the same program on computers with different hardware, it would still be simple (with a diagnostic device called a logic analyzer) to identify correlating states. They're all doing the same thing, just with different voltage levels and technological platforms.

    If we change it to devices with different brands of microprocessors so the machine language is different, we could still discover the correlation diagnostically. IOW, I wouldn't have to identify an external state and trace it back to the state of the logic gates.
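    A rough Python sketch of that diagnostic point (the counters and the decoding table are my own invention): two machines with different internal representations, where a simple decoding still correlates their states step for step, the way a logic analyzer would.

        # Two realizations of a counter with different internal "hardware":
        # one holds an integer, the other a list of tick marks.

        class IntCounter:
            def __init__(self): self.n = 0
            def tick(self): self.n += 1
            def state(self): return self.n        # raw internal state

        class TallyCounter:
            def __init__(self): self.marks = []
            def tick(self): self.marks.append('|')
            def state(self): return self.marks    # raw internal state

        # The "logic analyzer": decodings that map each machine's raw
        # state onto a common abstract state, exposing the correlation.
        decode = {IntCounter: lambda s: s, TallyCounter: len}

        a, b = IntCounter(), TallyCounter()
        for _ in range(3):
            a.tick(); b.tick()
            assert decode[type(a)](a.state()) == decode[type(b)](b.state())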

    I think MR is a stronger thesis than 'same software/different hardware'. It's unrelated software and hardware, related only by attachment to the same evolutionary tree.

    Or is that wrong? Has a "software" format been discovered that allows us to correlate humans and octopi?
  • frank
    14.6k
    MR is a theory based on unknowability. I reject that. I think the functioning of the brain is knowable, but we just haven't got there yet. — god must be atheist

    I would like to know the extent to which MR is a shot in the dark vs based on evidence.
  • Wayfarer
    20.7k
    MR says we don't need to identify a particular brain state because pain can be realized by many different physical states. — frank

    I've always felt that there's a much stronger argument for MR than just pain, in that neuroscience can't find any objective correlation between 'brain states' and all manner of mental phenomena, including language. I mean, in individuals who suffer brain damage, the brain is able to re-route its functionality so that areas not typically associated with language abilities are re-purposed. Not only that (and I have some references for this), research has been entirely unable to correlate particular patterns of neural activity with even the simplest learning tasks.

    (I'm interested in this topic but have to go to work; I'll follow the thread.)
  • Pfhorrest
    4.6k
    I think functionalism is more about implementing a protocol or format, or even more generally a... well, a function. AIM on Windows and Mac are different realizations of the same program; AIM for Mac on x86 or PPC likewise, even though the processors are different. AIM on Windows and iChat still both communicated over the same protocol, and all of those are still chat programs, just like ICQ, even though ICQ isn't directly compatible with them. Human pain and octopus pain could be comparable to iChat on Mac PPC and ICQ on a vacuum-tube emulation of Windows x86: very different tech stacks at every level, but both still doing the same thing.
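    A minimal sketch of the protocol point in Python (the wire format and client classes are invented for illustration): two clients with unrelated internals that interoperate because they implement the same message format.

        # A toy chat "protocol": messages on the wire are 'sender:text'.

        def encode(sender, text):
            return f"{sender}:{text}"

        class ListClient:
            # stores history as a list of tuples
            def __init__(self): self.log = []
            def receive(self, wire):
                sender, text = wire.split(':', 1)
                self.log.append((sender, text))

        class StringClient:
            # stores history as one formatted string
            def __init__(self): self.log = ""
            def receive(self, wire):
                sender, text = wire.split(':', 1)
                self.log += f"<{sender}> {text}\n"

        # Different internals, same protocol: each can read the other's
        # messages, which is all interoperability requires.
        a, b = ListClient(), StringClient()
        b.receive(encode("frank", "hello"))
        a.receive(encode("Pfhorrest", "hi"))
        assert a.log == [("Pfhorrest", "hi")]
        assert b.log == "<frank> hello\n"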
  • Wayfarer
    20.7k
    Functionalism says that mental states correspond to functional states of (particular kinds of) state machines, which in general are multiply realizable: you can run the same program on a computer made of transistors, vacuum tubes, or pipes and valves, in principle, as it’s not about the hardware per se but about the functionality that it can implement. — Pfhorrest

    When you’re performing a function or carrying out a calculation or in reference to a machine, then this makes sense. But how would a machine realise pain? At all? You could surely program a computer to respond in a particular way to a range of inputs that would output a symbol for 'pain', but the machine by definition is not an organism and cannot literally feel pain.
  • mcdoodle
    1.1k
    (Hi Wayfarer, hope all is well.) I don't see why a machine couldn't be developed that would know how to simulate the expression of pain, and would also know that 'pain' is usually, but not always, a sign that something is wrong, so would either simulate the wrongness or only express pain when something is wrong.

    I don't mean that I agree with the machine-metaphor behind reductionism, but I think it needs a subtler critique than this, now that we can envisage quasi-organisms.
  • Wayfarer
    20.7k
    I don't see why a machine couldn't be developed that would know how to simulate the expression of pain — mcdoodle

    (Very well, thanks.) As I said, you could simulate pain or a 'pain-type-reaction'. But one of the key points of pain is that it is felt.

    Which provides me the opportunity to post one of my all-time favourite stock quotes, from Rene Descartes, in 1637 (!):

    if there were machines that resembled our bodies and if they imitated our actions as much as is morally possible, we would always have two very certain means for recognizing that, none the less, they are not genuinely human. The first is that they would never be able to use speech, or other signs composed by themselves, as we do to express our thoughts to others. For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs - for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. The second means is that, even if they did many things as well as or, possibly, better than anyone of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action. — Rene Descartes

    And, can we 'envisage quasi-organisms'? I maintain that all such 'envisaging' is in fact 'projection', a consequence of our immersion in image-producing and computing technology, such that we lose sight of the fact that computers are neither organisms nor beings, but devices. Yet from experience on this forum, people will fight that distinction tooth and nail.
  • frank
    14.6k
    AIM on Windows and Mac are different realizations of the same program; AIM for Mac on x86 or PPC likewise, even though the processors are different — Pfhorrest

    Right. So I write a program and compile it for an Intel microprocessor, then compile it for some other processor. What in the biological world compares to that "same program"?
  • frank
    14.6k
    I'm interested in this topic but have to go to work; I'll follow the thread. — Wayfarer

    I hear you. I'm just exploring different aspects of the concept of emergence.
  • Pfhorrest
    4.6k
    That is the question of the hard problem of phenomenal consciousness, and you already know my answer to that: everything has some phenomenal experience or another, and the specifics of that experience vary with the function of the thing, so anything that realizes the same function as a human brain has the same experience as a human brain.
  • Wayfarer
    20.7k
    That is the question of the hard problem of phenomenal consciousness, and you already know my answer to that: everything has some phenomenal experience or another, and the specifics of that experience vary with the function of the thing, so anything that realizes the same function as a human brain has the same experience as a human brain. — Pfhorrest

    What I don't see is how the symbolic representation of pain, like the word PAIN, is actually painful. Nor how it is possible to argue that computers are subjects of experience.
  • bert1
    1.8k
    180, are you approving of Pfhorrest's panpsychism or of his functionalism regarding the content of consciousness? Or both?
  • mcdoodle
    1.1k
    Great quote! To debate it thoroughly would take us off-topic. My feeling is that social robotics - not Siri and Alexa, but the robots that provide care and comfort - have progressed to the point where they defy Descartes' first point. If it feels like a carer, if it acts like a carer, then it's a carer. But (Descartes' second point) it won't, indeed, go off-piste as humans would and tell you how moved it was by its grandfather's wartime experiences.
  • ovdtogt
    667
    Something does not have to be aware (think) to exist (for us).
  • Wayfarer
    20.7k
    If it feels like a carer, if it acts like a carer, then it's a carer. — mcdoodle

    Could you hurt it? Cause it to feel physical or emotional pain?
  • ovdtogt
    667
    Could you hurt it? Cause it to feel physical or emotional pain? — Wayfarer

    For something to care for you, you need not be able to hurt it. You need only be able to care for it.
  • frank
    14.6k
    Per Fodor there are two degrees of MR: a weaker MR allows the same psychological state to arise from distinct structures, say electronic technology vs biological.

    The stronger version allows the same pain, for example, to arise from different token physical states of the same system.

    Horgan 1993:
    "Multiple realizability might well begin at home. For all we now know (and I emphasize that we really do not now know), the intentional mental states we attribute to one another might turn out to be radically multiply realizable at the neurobiological level of description, even in humans; indeed, even in individual humans; indeed, even in an individual human given the structure of his central nervous system at a single moment of his life." (p. 308; author's emphases)

    This stronger thesis is empirically supported by evidence of neural plasticity in trauma victims.
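    A hedged Python sketch of that stronger thesis (my construction, not Fodor's or Horgan's): one system whose single functional state is realized at different times by entirely different token configurations, the way plasticity re-routes a function across different neural populations.

        # One system, one functional state ("alarm on"), realized at
        # different times by disjoint token configurations of units.

        def alarm_on(units):
            # the functional state: at least two units active
            return sum(units) >= 2

        t1 = [1, 1, 0, 0]   # token realization at time 1
        t2 = [0, 0, 1, 1]   # disjoint token realization at time 2

        assert alarm_on(t1) and alarm_on(t2)   # same functional state
        assert t1 != t2                        # different physical tokens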
  • frank
    14.6k
    This stronger version of MR is the prevailing view in philosophy of mind at this point. 'MR in token systems over time' is non-reductive physicalism. Horgan's comment makes clear why its prevalence is a matter of fashion and taste rather than firmer empirical foundation, which is really the question I was wondering about when I started this thread, but I'll continue by laying out the family of perspectives surrounding this issue.

    Next: non-reductive physicalism vs functionalism.
  • god must be atheist
    5.1k
    it won't, indeed, go off-piste as humans would and tell you how moved it was by its grandfather's wartime experiences. — mcdoodle

    Well, maybe it would if only it had its own grandfather. :-) Which served in the war. :-) And had experience-ready capabilities. :-)
  • 180 Proof
    14.1k
    I agree with Pfhorrest's take on MR functionalism. His 'panpsychism', however, I don't accept; as far as I'm concerned, the notion posits an ad hoc appeal to ignorance (i.e. WOO-of-the-gaps) from which is 'derived' what amounts to nothing more than, in effect, a compositional fallacy (if some part has 'phenomenal experience', then the whole has (varying degrees of discrete(?)) 'phenomenal experience' :roll: ), which of course doesn't, even in principle, explain what it purports to explain.
  • armonie
    82
    Whatever gave rise to the uniqueness of these emerging states is linked to the overall evolution of the structure.
  • mcdoodle
    1.1k
    Could you hurt it? Cause it to feel physical or emotional pain? — Wayfarer

    I am only proposing that you can give a social robot enough of the appearance of a carer for humans to feel comfortable interacting with it. It seems to me that AI is now sophisticated enough to give a machine, for example, parameters that would represent our two broad theories of other minds, i.e. simulation theory or theory theory. And the social robot would have a head start with its human, because it would indeed appear to be reading the human's mind, as that would be its primary purpose: to provide for the care needed, including anticipating future needs. For example, if a doddery person falls over when they stand on more than one occasion, a machine could perfectly well begin to anticipate and help out with that. Clever dogs are already trained to do that to a limited degree.
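    A back-of-the-envelope sketch of the kind of rule I have in mind, in Python (the event names and threshold are invented): count falls-on-standing and switch to anticipatory assistance once they recur.

        # Toy anticipation rule: after repeated falls-on-standing, the
        # robot starts assisting *before* the person stands.

        FALL_THRESHOLD = 2   # invented cutoff, for illustration

        def carer_policy(history, event):
            falls = history.count('fell_on_standing')
            if event == 'about_to_stand' and falls >= FALL_THRESHOLD:
                return 'offer_support'
            return 'observe'

        history = ['fell_on_standing', 'fell_on_standing']
        assert carer_policy(history, 'about_to_stand') == 'offer_support'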
  • frank
    14.6k
    Whatever gave rise to the uniqueness of these emerging states is linked to the overall evolution of the structure. — armonie

    I'm not sure I'm understanding you, but spider eyes evolved separately from human eyes. Could arachnids continue to evolve into creatures with rich inner worlds with some commonality of visual experience with humans? If not, why not?
  • Wayfarer
    20.7k
    I am only proposing that you can give a social robot enough of the appearance of a carer for humans to feel comfortable interacting with it. — mcdoodle

    True enough, but it still doesn’t amount to being able to feel pain. So Putnam’s idea of ‘multiple realisability’ doesn’t extend to the domain of robots or AI.

    I reiterate that I’m dubious about the effectiveness of referring to ‘pain’ as a ‘psychological state’. It seems to me a physiological state. I think a much more philosophically sophisticated argument could be constructed around the claim that the same ideas can be realised in multiple ways: different languages, and even different media. So, the argument would go, if the same proposition can be represented in a vast diversity of symbolic forms, how could the meaning of the proposition be reduced to anything physical? In other words, if the information stays constant while the physical form changes, how can you argue that the information being conveyed by the symbols is physical? To do so is to mistake ‘brain states’ for symbolic forms, which they’re surely not.
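    To make the invariance point concrete, a small Python sketch (the proposition and the encodings are arbitrary examples): the same content carried by three physically unrelated realizations, each recoverable without loss.

        # One proposition, three physically unrelated realizations.
        proposition = "snow is white"

        as_utf8 = proposition.encode('utf-8')            # byte voltages
        as_hex  = as_utf8.hex()                          # printed digits
        as_bits = ''.join(f'{b:08b}' for b in as_utf8)   # on/off lamps

        # Each physical form decodes back to the identical content.
        assert bytes.fromhex(as_hex).decode('utf-8') == proposition
        assert bytes(int(as_bits[i:i+8], 2)
                     for i in range(0, len(as_bits), 8)).decode('utf-8') == proposition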
  • Galuchat
    809

    Pain interoception (nociception) is a type of corporeal state perception (the mental effect of a sensation). So, pain is a psychological state caused by a physiological state (sensation).

    In other words: the physical information of nociception becomes the semantic information of pain.
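    On that picture, a deliberately crude Python sketch (the firing rate and threshold are invented): a raw physical magnitude re-described as a semantic state.

        # Physical information (a nociceptor firing rate) mapped to
        # semantic information (a pain report).

        PAIN_THRESHOLD = 0.7   # invented threshold, for illustration

        def interpret(nociceptor_rate):
            return 'pain' if nociceptor_rate >= PAIN_THRESHOLD else 'no pain'

        assert interpret(0.9) == 'pain'
        assert interpret(0.2) == 'no pain'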
  • Wayfarer
    20.7k
    :up: Thanks. That makes it a little clearer. I still think it's a pretty lame argument, so I suppose I ought to butt out.