Though I should say, I have (from reading the papers you cited) some grave concerns about the route Chalmers takes to get here (if here is indeed where I think he is; I suspect my ability to understand what he's on about is substantially less than yours). I'm not sure that modality is actually a viable approach if he's trying to get at the way we actually think. There's too little scope in a kind of 'this else that' model, where I think it's more 'this until further notice', but I may have misunderstood. — Isaac
So, there are these strong connections between early areas of sub-conscious cortices and the hippocampus whose function neuroscientists (to my knowledge) have yet to fully work out; an example is the V2 region of the visual cortex. Usually a connection to the hippocampus is involved in consolidation of some memory, so it seems odd that such early regions would be strongly tied to it. One idea is that there's some higher-level modelling suppression going on even in these early centres, something like 'is that likely to be an edge? Let me just check'. I think (though I can't lay my hands on any papers right now) there's one of these connections into the cerebellum too. — Isaac
You can tell what Terry Pratchett referred to as 'lies to children' about the content of saccades in terms of propositions, though. E.g., someone might 'look at a chin to provide more information about the orientation of a face', but there's no conscious event of belief or statement associated with what the saccade's doing at the time; the speech act which associates the propositional content with the saccade is retrospective. — fdrake
What do you see as the difference between the two? — Isaac
our model of (some part of) the world and the world we are modeling sometimes match up. I think we essentially agree. — Andrew M
Yeah. We have a vested interest in them matching up, not just with the world, but (and this is the really important part, for me) with each other's models. In fact I'd be tempted to go as far as to say that it's more important that our models match each other's than that they match the state they're trying to model. I'm pretty sure this is the main function of many language games, the main function of social narratives, the main function of rational thought rules: to get our models to match each other's. — Isaac
Hold up. What do you mean by "conscious" here? What is a worm missing that it would need in order to be conscious? — frank
at least part of this discomfort regarding modality is rooted in the idea that models of neural networks don't seem to compute possibilities, they tend to compute probabilities. — fdrake
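To make fdrake's contrast concrete, here is a minimal sketch (hypothetical; the four-way classifier and its numbers are invented for illustration): a network's output layer, passed through a softmax, yields a probability distribution over outcomes. Every outcome gets some probability mass; nothing in the representation marks an outcome as possible or impossible, only as more or less likely.

```python
import numpy as np

def softmax(logits):
    """Convert raw network outputs into a probability distribution."""
    z = logits - logits.max()  # subtract the max for numerical stability
    exp = np.exp(z)
    return exp / exp.sum()

# Hypothetical output for a 4-way classifier (labels and values invented).
logits = np.array([2.1, 0.3, -1.0, 0.5])
probs = softmax(logits)

print(probs)        # roughly [0.71 0.12 0.03 0.14]
print(probs.sum())  # 1.0
```

The representation is graded through and through; a modal reading ('this else that') has to be imposed on it from outside, which is one way of putting the discomfort above.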
Were there other reasons you thought that employing modality as Chalmers does might not be okay? — fdrake
I'm not really convinced by that reasoning, but there you go. — fdrake
the microsaccades having directional biases towards required coloured stimuli. Assuming that the content of the attentional template of a microsaccade is passed about the brain in the way you mentioned, anyway. — fdrake
I think of beliefs as being more obstinate than expectations. For example, say I always put my keys in a particular place; then I 'automatically' expect them to be there, even though I know that sometimes I fail to put them there. On the other hand, if asked whether I believe they are there, I might say 'no', because I acknowledge I might have put them somewhere else, someone might have moved them, and so on. — Janus
it's worth noting that we have language not just for agreement and disagreement (i.e., whether our model matches up with other people's models), but also for being correct and mistaken (i.e., whether our models match up with the world we are modeling).
Consider a Robinson Crusoe on a deserted island who doesn't communicate with anyone. Mistakes in his world modeling can be costly (nope, no precipice there...) — Andrew M
I meant 'conscious of...' — Isaac
Not that you couldn't ask the same question there too, but I'm really just trying to get at whatever distinction you're applying to 'intent' such that you think a sub-conscious process couldn't satisfy the definition. You want to reserve the word for some types of directed behaviour but not others, right? — Isaac
Well you said "conscious species." You can't be a functionalist and use that kind of language. — frank
"Directedness" just sounds teleological. At the chemical level we just need chemicals and no purposeful events. — frank
I define 'conscious' (earlier in this thread, even, I think) as a process of logging certain mental states to memory — Isaac
So I think I can be functionalist and still use that language — Isaac
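A toy rendering of that functional definition may help (my own hypothetical sketch, not Isaac's code; all names are illustrative): a system that processes many internal states but, on this definition, counts as 'conscious of' only the ones it logs to memory.

```python
# Hypothetical sketch of 'conscious = logging certain mental states to memory'.
memory_log = []

def process(state: str, salient: bool) -> None:
    """Process a state; only salient states get logged to memory."""
    if salient:
        memory_log.append(state)  # the 'conscious' part, on this definition

def conscious_of(state: str) -> bool:
    """On the logging definition, the system is conscious of what it logged."""
    return state in memory_log

process("edge detected in V2", salient=False)   # processed, never logged
process("keys are on the table", salient=True)  # processed and logged

print(conscious_of("edge detected in V2"))    # False
print(conscious_of("keys are on the table"))  # True
```

Nothing here requires anything beyond function, which is the sense in which the definition stays compatible with functionalism.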
Yeah, maybe. But all those chemicals are instructed by a mind which is itself several models which have a function. I don't have any problem in saying that the purpose of the printer cable is to carry information from the computer to the printer, or that the purpose of some sub-routine in the program is to translate the key inputs into binary code. None of these has purpose as an isolated system, but it has purpose as part of the larger machine. — Isaac
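As an aside, a toy version of that sub-routine (hypothetical; not anyone's actual code) makes the point visible: in isolation the function just maps a character to a bit string, and calling that its 'purpose' only makes sense relative to a larger program that uses the output to drive something.

```python
def encode_key(key: str) -> str:
    """Translate a single key press into its 8-bit binary code."""
    return format(ord(key), "08b")

print(encode_key("A"))  # 01000001
```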
As a rather selfish request, can you please provide more words and citations for these positions? — fdrake
By the sounds of it you're writing largely from Chalmers' perspective on things? To my knowledge it's rather contentious that consciousness goes 'all the way down' with functional properties if one is a functionalist. — fdrake
It's also ambiguous whether you're using 'all the way down' to refer to a panpsychism or whether bodily functions are conscious 'all the way down'. — fdrake
I baulk at equating each neural network with some attitude towards a proposition. — Banno
Why is that? — Isaac
But if a functionalist says consciousness is identical to function, but excludes worms, they'd need to explain why. — frank
If you exclude worms from the function of consciousness to see the world... — AgentTangarine
Maybe replace "see the world" with "light sensitivity.". — frank
I think this argument shows that there is a difference in kind between propositional beliefs and neural nets that militates against our being able to equate the two directly, while admitting that there is nothing much more to a propositional attitude than certain neural activity. That is, it is anomalous, yet a monism. — Banno
Sorry, I don't know what you're saying — frank
If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open? — Banno
↪fdrake Well, it might best be described as Banno's understanding of Davidson. Then if I have it wrong it's not his fault, and if I have it right he can take the credit. — Banno
More formally, there's Davidson's question concerning the scientific validity of any such equation. Suppose that we identify a specific neural network, found in a thousand folk, as being active when the door is open, and hence conclude that the network is roughly equivalent to the propositional attitude "I believe that the door is open". If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open? — Banno
If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network (possibility 1), or do we conclude that they do not really believe the door is open (possibility 2)?
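Purely to display the structure of that dilemma, here is a toy sketch (all data invented; 'has_pattern' stands in for 'the specific neural network is active'):

```python
# One thousand subjects in whom pattern and reported belief co-occur.
subjects = [{"reports_belief": True, "has_pattern": True} for _ in range(1000)]
subject_1001 = {"reports_belief": True, "has_pattern": False}

# The identification so far: the pattern was present whenever belief was reported.
assert all(s["has_pattern"] == s["reports_belief"] for s in subjects)

if subject_1001["reports_belief"] and not subject_1001["has_pattern"]:
    # Possibility 1: reject the identification (the wrong network was picked out).
    # Possibility 2: reject the report (1001 does not really hold the belief).
    # Nothing in the data decides between the two; that underdetermination
    # is the point of the objection.
    print("Counterexample: revise the identification, or deny the belief?")
```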
A worm demonstrates functions of consciousness. — frank
I know a tad about computer architecture. There are no purposes in there. — frank
Mostly because such propositional attitudes are so mercurial. — Banno
If we examine person 1001, who claims to believe that the door is open, and do not find in them that specific neural network, do we conclude that we have not identified the correct specific network, or do we conclude that they do not really believe the door is open? — Banno
My apologies for this not being clearly expressed. — Banno
Maybe. I'm not trying to draw a line here, I'm trying to establish the line you want to draw. The type of thing/process 'intention' is reserved for. — Isaac
Directedness" just sounds teleological. At the chemical level we just need chemicals and no purposeful events.
— frank
Yeah, maybe. But all those chemicals are instructed by a mind which is itself several models which have a function. — Isaac