I am inclined to think that the idea of qualia is useful to some extent, but with some limitation, in the way in which it can become a bit of a knotty tangent at times. — Jack Cummins
The question is, how do qualia improve the analysis in a way that is not just as clear from a discussion of colour scales and pitch and tone and time scales etc... And this is the part that never receives a clear answer. The usual approach is to allude to their being useful without actually saying how - as here with a couple of citations and some passive aggression. There was a lot of interesting analysis of art and music based on qualia as colour scales and pitch and tone and time scales etc. Prall, Goodman, Boretz. But there, there was no philosophical bias, no claim of epistemological priority. It was just a matter of starting the analysis with those elements. — bongo fury
The question is, how do qualia improve the analysis in a way that is not just as clear from a discussion of colour scales and pitch and tone and time scales etc... — Banno
but the question was why do hallucinated spiders look like real spiders. How do you explain the behavior of someone hallucinating without "silly" qualia? How is it that something that isn't real looks like something that is unless they both take the same form (qualia)? — Harry Hindu
Denying realism isn't denying what is real. It just changes the reference to what is real. — Harry Hindu
No, not at all, but it's what I was getting at with my clumsy introduction of stochastic resonance. What's inside or outside any Markov blanket is not necessarily the same as inside or outside a skull. That's true of our sensory receptors (for which their first 'inside' node is actually outside the body) and it's true for our internal models (which may have nodes outside their Markov boundary - my stochastic resonance example - but inside the brain). — Isaac
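The point that a Markov blanket need not align with a physical boundary can be made concrete. In a directed graphical model, a node's Markov blanket is its parents, its children, and its children's other parents. The sketch below uses a hypothetical four-node graph (the names `world`, `sensor`, `model`, `action` are illustrative, not from the quoted discussion): the blanket of the "internal" `model` node turns out to include `world`, an external node.

```python
# Minimal sketch: the Markov blanket of a node in a directed graph is
# parents + children + children's other parents (co-parents).
# The graph is a toy, chosen to show a blanket crossing the
# internal/external divide.

def markov_blanket(node, parents):
    """parents maps each node to the set of its parents."""
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = set()
    for child in children:
        co_parents |= parents[child] - {node}
    return parents[node] | children | co_parents

parents = {
    "world": set(),
    "sensor": {"world"},            # receptors interface with the world
    "model": {"sensor"},            # internal model reads the sensor
    "action": {"model", "world"},   # action outcomes depend on both
}

print(sorted(markov_blanket("model", parents)))  # ['action', 'sensor', 'world']
```

Because `action` has `world` as a co-parent, the internal `model` node's blanket reaches outside the "skull" of this toy system, which is the shape of Isaac's point.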
Vehicle externalism, more commonly known as the thesis of the extended mind, is externalism about the vehicles of mental content. According to the thesis of extended mind, the vehicles of mental content—roughly, the physical or computational bearers of this content—are not always determined or exhausted by things occurring inside the biological boundaries of the individual. — SEP
The distinction between states and acts is, in the context of this form of externalism, a significant one, and the general idea of extended mind can be developed in two quite different ways depending on whether we think of the vehicles of content as states or as acts. Thinking of the vehicles of content as states leads to a state-oriented version of extended mind. Thinking of these vehicles as acts leads to a process-oriented alternative. — SEP
Mental content is not free-floating. Wherever there is mental content there is something that has it—a vehicle of content. Mental states (belief, desires, hopes, and fears, etc.) are natural candidates for vehicles of content. So too are mental acts (believing, desiring, hoping, fearing, etc.). As a rough, initial approximation, extended mind is the view that not all mental states or acts are exclusively located inside the person who believes, desires, hopes, fears, and so on. Rather, some mental states or acts are, in part, constituted by factors (e.g., structures, processes) that are located outside the biological boundaries of the individuals that have them. Thus, extended mind differs from content externalism not merely in being about mental vehicles rather than mental contents, but also in being committed to a claim of external location rather than simply external individuation. If extended mind is true, some vehicles of content are not, entirely, located inside the biological boundaries of individuals that have them. Rather, they are, partly, constituted by, or are composed of, factors that lie outside those boundaries. — SEP
Content externalism (henceforth externalism) is the position that our contents depend in a constitutive manner on items in the external world, that they can be individuated by our causal interaction with the natural and social world.
Content internalism (henceforth internalism) is the position that our contents depend only on properties of our bodies, such as our brains. Internalists typically hold that our contents are narrow, insofar as they locally supervene on the properties of our bodies or brains. — SEP
“Erotic perception is not a cogitatio which aims at a cogitatum; through one body it aims at another body, and takes place in the world, not in a consciousness. A sight has a sexual significance for me, not when I consider, even confusedly, its possible relationship to the sexual organs or to pleasurable states, but when it exists for my body, for that power always available for bringing together into an erotic situation the stimuli applied, and adapting sexual conduct to it. There is an erotic ‘comprehension’ not of the order of understanding, since understanding subsumes an experience, once perceived, under some idea, while desire comprehends blindly by linking body to body. Even in the case of sexuality, which has nevertheless long been regarded as pre-eminently the type of bodily function, we are concerned, not with a peripheral involuntary action, but with an intentionality which follows the general flow of existence and yields to its movements.” — MP, Phenomenology of Perception, 437
(though I'm sure Banno would tell you that intentional states are directed towards statements) — fdrake
I think that's a gross deflation of all the work that neuroscience has done on this. Most of the neuroscientists I've spoken to or listened to consider themselves to be investigating the matter of what perception is as a scientific investigation, not one in philology. — Isaac
The discoveries [of neuroscience] are no doubt splendid and fascinating - the conclusions sometimes drawn from them [eg that what we perceive are pictures in the brain (Crick), or `virtual reality' constructed by the brain (Smythies), or `movies in the brain' (Antonio Damasio)] are one form or other of latent nonsense (concealed transgressions of the bounds of sense) as we demonstrated in PFN. — Letter to the Editor: Reply to critical review by Professor John Smythies, Perception, 2011, volume 40
That is, this red flower here is the intended object of my perception. — Andrew M
I agree with this. It's the 'realism' bit. The object we're all trying (with our modelling processes and our social interaction) to react to is the red flower, out in the world. I don't see how it being the object of our intention somehow removes the 'veil' between us and it. — Isaac
You're saying that any time we're mistaken about the properties of the object we've instead perceived nothing? If I perceive a flower, but in my mind it had red petals (I only briefly glanced at it), I return to it for a closer look and find I had merely assumed the petals were red - expectation bias - they were, quite clearly, pink. Now I have to admit that I perceived nothing at all? — Isaac
someone is an internalist about X if they believe X only is determined by/depends upon the body or mind of the individual which bears X. — fdrake
Rather, they are, partly, constituted by, or are composed of, factors that lie outside those boundaries. — SEP
Maybe it could be construed that the ball isn't a 'physical bearer' or 'partly constituting' the process of perception - if you focus on what's 'logged to consciousness' as a meaning for 'what's perceived', it might be possible to argue that 'what's perceived' doesn't have an immediate dependence upon the external state values because the sensory states interface with the world and the internal states which are logged to consciousness don't. There's probably some wrangling regarding where you draw the line. If the 'dependence' is 'any sort of dependence' rather than 'proximate cause in terms of states in the model's graph', it looks to be vehicle externalist in the process sense, if it's the latter maybe it's still possible to be a vehicle internalist. — fdrake
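fdrake's two readings of 'dependence' can be put in graph terms: the external state can be a *proximate cause* of what's perceived (a direct parent of that node) or merely upstream of it (an ancestor, reached only through the sensory interface). A toy sketch, with hypothetical node names, showing the percept depending on the ball in the second sense but not the first:

```python
# Toy graph: external state -> sensory interface -> what's 'logged to
# consciousness'. Distinguishes parent-dependence from ancestor-dependence.

parents = {
    "ball": set(),                   # external state
    "retina": {"ball"},              # sensory states interface with the world
    "prior": set(),                  # internal expectation
    "percept": {"retina", "prior"},  # what's 'logged to consciousness'
}

def ancestors(node):
    """All nodes with some directed path into `node`."""
    seen, stack = set(), list(parents[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(parents[n])
    return seen

# Proximate-cause reading: the ball is not a parent of the percept...
print("ball" in parents["percept"])   # False
# ...but under 'any sort of dependence' it is still an ancestor.
print("ball" in ancestors("percept")) # True
```

On the first reading a vehicle internalist can hold their line; on the second, the percept inherits an external dependence through the sensory states, which is the wrangling fdrake describes.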
So there's a puzzle regarding bridging the 'content of a state in a neural network' with the content of an intentional act.
The content of a state in a neural network doesn't seem to be a good match for the use of the word 'cup', since using the word to refer to a cup involves a perception which consists in lots of states synergising together in a body-environment interaction — fdrake
Let's say I want to take a drink from my mug. I have an intentional state toward my mug, desiring to drink something out of it. I'm sure there are more than two ways of spelling out their content relevant to this discussion, but I'm going to write down two. — fdrake
Another distinction between the kind of directedness state relations have in a perceptual neural network and the kind of directedness intentional states have is the directedness of an intentional state might be an emergent(I mean weakly emergent, but I'd guess there are strong emergentist takes too) property of the whole perceptual process. — fdrake
However when we see a red flower, do we see it in the brain, or in the mind, or in the garden? I'm not suggesting this applies to you but without clarification of the terms involved, this is the kind of confusion that can arise. — Andrew M
If you agree that there can be a red flower there that I can perceive, then I'm not clear why you're invoking a "veil". What exactly is being veiled here? — Andrew M
The example I had in mind was a hallucination, which isn't perception. Yes of course there can be conditions where we see a flower that looks red (or assume is red), but isn't. — Andrew M
What does it mean for hallucinations to look like the real thing? How can something that isn't real look like something that is? All of this can be put simply as "Spider hallucinations look like spiders" - no use of "qualia"!
What's relevant about an hallucination of a spider is that there is no spider. Hence, as you point out, characterising some event as an hallucination presumes realism. — Banno
But "real" in what sense? You seemed to agree earlier with the statement, "we are our minds". Are you saying that "we" and our "minds" are not real? To be sure, realism is the view that there is stuff in the world that is independent of the mind, so the claim that what is real is stuff in the mind would not count as realism. — Banno
Yes, it's the primary difficulty here. If I (as a scientist) am to explain what your 'seeing the rose in the garden' consists in, I can't very well give the answer "you're not seeing the rose in the garden". That wouldn't really answer the question. But equally, I'd be remiss if I didn't provide an explanation of how you can see the red rose out of the corner of your eye despite dendritic trees from the ganglia there being too complex to interpret colour from. You filled in the colour you expected the rose to be, nothing to do with any physical activity in the actual garden. — Isaac
Because the 'red flower' I'm trying to model and the current 'snapshot' state of my model are not necessarily the same, and some of the reason they're not the same is expectation biasing the interpretation of (and occasionally outright suppressing) the sensory data. It's only the sensory data which is directly connected to the 'red flower', the thing I'm trying to model. The 'veil' is everything else which plays a part in the modelling process not caused directly (or even indirectly) by the 'red flower'. — Isaac
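Isaac's filling-in example has a standard formal sketch in predictive-processing terms: the percept is a precision-weighted blend of the prior expectation and the sensory sample, so when sensory precision is low (a glance, peripheral vision) the expectation dominates. The numbers and function below are illustrative only, not a claim about actual neural parameters.

```python
# Precision-weighted combination of a prior expectation and a sensory
# sample (precisions are inverse variances). Low sensory precision lets
# the prior 'fill in' the percept.

def posterior(prior_mean, prior_precision, sense_mean, sense_precision):
    total = prior_precision + sense_precision
    return (prior_precision * prior_mean + sense_precision * sense_mean) / total

expected_red = 1.0   # prior: 'roses here are red'
sampled_pink = 0.4   # weak colour signal from the corner of the eye

# Foveal view: precise sensory data, the percept tracks the flower.
print(round(posterior(expected_red, 1.0, sampled_pink, 20.0), 2))  # 0.43
# Peripheral glance: imprecise data, the percept tracks the expectation.
print(round(posterior(expected_red, 1.0, sampled_pink, 0.05), 2))  # 0.97
```

The second case is the 'veil': the reported colour is driven almost entirely by model-internal factors rather than by anything the flower caused.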
Then how is that not a 'veil'? If we can see a flower as red, but it isn't red, then what got in the way? Whatever got in the way - that's what I'm referring to as a 'veil'. — Isaac
Do they have to believe in non-determinism of some sort? After all, our bodies have not been around forever (though mine sometimes feels like it has!) — Isaac
A narrow content of a particular state is a content of that state that is completely determined by the individual's intrinsic properties. An intrinsic property of an individual is a property that does not depend at all on the individual's environment. For example, having a certain shape is, arguably, an intrinsic property of a particular penny; being in my pocket is not an intrinsic property of the penny. This is because the penny's shape depends only on internal properties of the penny, whereas the fact that it is in my pocket depends on where it happens to be, which is an extrinsic property. The shape of the penny could not be different unless the penny itself were different in some way, but the penny could be exactly the way it is even if it were not in my pocket. Again, there could not be an exact duplicate of the penny that did not share its shape, but there could be an exact duplicate that was not in my pocket. Similarly, a narrow content of a belief or other mental state is a content that could not be different unless the subject who has the state were different in some intrinsic respect: no matter how different the individual's environment were, the belief would have the same content it actually does. Again, a narrow content of an individual's belief is a content that must be shared by any exact duplicate of the individual. (If some form of dualism is true, then the intrinsic properties of an individual may include properties that are not completely determined by the individual's physical properties. In that case an “exact duplicate” must be understood to be an individual who shares all intrinsic nonphysical properties as well as physical ones.) — SEP on Narrow Content
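The SEP penny example can be rendered as a toy duplication test: an 'exact duplicate' must share all intrinsic properties (shape) but may differ in extrinsic ones (location), and narrow content is whatever is fixed by the intrinsic properties alone. The class and field names below are hypothetical illustrations, not anything from the SEP entry.

```python
# Toy rendering of the intrinsic/extrinsic distinction from the penny
# example: duplicates are individuated by intrinsic properties only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Penny:
    shape: str     # intrinsic: could not differ without the penny differing
    location: str  # extrinsic: the very same penny could be elsewhere

def exact_duplicates(a, b):
    """Exact duplicates must agree on intrinsic properties only."""
    return a.shape == b.shape

mine = Penny(shape="round", location="my pocket")
twin = Penny(shape="round", location="a drawer")

print(exact_duplicates(mine, twin))      # True: duplicates despite location
print(mine.location == twin.location)    # False: extrinsic properties differ
```

On the narrow-content view, anything that varies between `mine` and `twin` (here, location) cannot figure in narrow content, just as the environment cannot figure in the narrow content of a belief.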
David Chalmers builds on this conceptual role account of narrow content but defines content in terms of our understanding of epistemic possibilities. When we consider certain hypotheses, he says, it requires us to accept some things a priori and to reject others. Given enough information about the world, agents will be “…in a position to make rational judgments about what their expressions refer to” (Chalmers 2006, 591). Chalmers defines scenarios as “maximally specific sets of epistemic possibilities,” such that the details are set (Chalmers 2002a, 610). By dividing up various epistemic possibilities in scenarios, he says, we assume a “centered world” with ourselves at the center. When we do this, we consider our world as actual and describe our “epistemic intensions” for that place, such that these intensions amount to “…functions from scenarios to truth values” (Chalmers 2002a, 613). Epistemic intensions, though, have epistemic contents that mirror our understanding of certain qualitative terms, or those terms that refer to “certain superficial characteristics of objects…in any world” and reflect our ideal reasoning (for example, our best reflective judgments) about them (Chalmers 2002a, 609; Chalmers 2002b, 147; Chalmers 2006, 586). Given this, our “water” contents are such that “If the world turns out one way, it will turn out that water is H20; if the world turns out another way, it will turn out that water is XYZ” (Chalmers 2002b, 159).
This is intriguing, do we have some examples? If I've understood it right, could my theories about the role of social narratives fit here (always looking for interesting new ways to frame this stuff)? — Isaac
I think essentially we'd be remiss if we didn't include our intentions toward an object in the act of perception, but again, if we're not to prevent ourselves from being able to say anything at all, we have to be able to draw a line somewhere. I may be oversimplifying, but is there any reason why we shouldn't draw the line at the decision to act? If we're asking the question "Why did you hit your brother?" we might well include intentionality in the perception "he was about to hit me"; did our aggressive intention have some role in the perception of the shoulder going back, the fist clenching - probably. But at the point of the message being sent to the arm to strike - that's the point we're interested in - not because it's got some ontological significance, but because that's what we asked the question about. At that point, there was an object (a brother threatening violence) which was the result of some perception process (plus a ton of social conditioning) and the object of an intention (to punch).

I don't think it matters that the intention (to act aggressively) might have influenced the perception (a person about to hit me). We can have our cake and eat it here. We can talk about the way in which the intention influences the perception of the object before the question we want to ask of it, and still have the final version* be the object of the intention we're asking the question about. (*final version here referring to the object on which the move to strike was based.) After the action in question, the whole process will continue seamlessly; the perception might change a bit as a result of our interaction with the object, our intention might change and so affect the perception... but we marked a point in that continuous process, simply to ask a question (why did you hit your brother), and to answer that question we need to 'freeze-frame' the movie to see what the object of perception was at the time the intentional decision was made. — Isaac
Like saccades, perhaps? Yes, I think there must be cases where this is true, but again, probably just some, not all. We'd be missing something if we wanted to model perception and action this way, but we'd be kidding ourselves if we didn't have such a model to explain things like saccades. — Isaac
I recall having a long argument with Banno about whether the intentionality in saccades counts as a form of belief that wasn't propositional (I argued that it was), so that might be another point of tension with someone who's quite strict about the relationship of mental content to statements and truth conditions. — fdrake
Would the intentionality in saccades best be called 'belief' or 'expectation'? — Janus
although not present to consciousness in propositional form, could be rendered as such? — Janus
So the phrase "veil of perception" has a historical connection with certain 17th century metaphysical views that deny any direct world-involvement. — Andrew M
our model of (some part of) the world and the world we are modeling sometimes match up. I think we essentially agree. — Andrew M
A move which gets taken is to massage the notion of dependence and the type of content. You could 'bite the bullet' of whatever externalist argument you like which was dedicated to mental content of type X and say "Yes, type X as a whole has some external dependence, but type X1 which is a subset of X does not", I think that type of mental content gets called 'narrow'. — fdrake
that system of predictions will completely rule out some things from occurring (as we think anyway, we might be wrong) and largely endorse some things, it will split up 'epistemic space' into what's plausible, irrelevant, implausible etc. But what splits up the predictions is arguably solely determined by non-external properties, since you just fixed them.
Why I did not spend more time with the Chalmers papers exegetically
I imagine you maybe have some sympathy with a view like that? — fdrake
if social processes act as some kind of distributed mental process - cf. Lakatos' term for reasoning with people, 'thinking loudly' - then social processes are vehicles in that regard. The latter seems like an extended mind thesis towards the social milieu. — fdrake
Yes I see it as implausible that intentionality isn't right down in the motor functions considering the directedness of visual foraging, that it's not conscious, and that it's salience+causal relevance+information density based. I recall having a long argument with Banno about whether the intentionality in saccades counts as a form of belief that wasn't propositional (I argued that it was not propositional), so that might be another point of tension with someone who's quite strict about the relationship of mental content to statements and truth conditions. — fdrake
Is it appropriate to think of a worm's consciousness as intention driven? Are we going to end up equivocating about "intention" if we do? — frank