• Emotions are a sense like sight and hearing
    Yeah, there is a lot of nihilism, existentialism and anti-natalism expressed around these parts. It is a natural reaction to being asked to jump so high as a "self-actualising being" in the modern individualistic world.

    So while you point the finger at the rational side of the equation - Scientism - I see that as merely the other half of the same essential duo. It is the romantic belief in human transcendence that was also fixed by Plato right at the birth of the Socratic Western ideal of the self-made person answering to a call from beyond his or her actual (biologically and socially constrained) world.

    That would be why the values of other cultures - like Buddhism - might appeal to you. They speak to a traditional, non-technological, lifestyle that perforce is more communal and ecological, less directed at growth and advance.

    But what happens when Westerners start picking and choosing the bits they think they best like from other cultures? You start to get the new agers and transhumanists. You get a romanticised version of the Eastern wisdom where again it becomes all about personal ascendancy - tapping into spiritual power so as to become super-human.
  • Emotions are a sense like sight and hearing
    In classical philosophy, emotions (or 'the passions') were something to be overcome.Wayfarer

    Plato's chariot allegory - a tripartite division of reason, higher moral feeling and base animal emotion - would be the influential basis of the Western view.

    Plato paints the picture of a Charioteer driving a chariot pulled by two winged horses...

    The Charioteer represents intellect, reason, or the part of the soul that must guide the soul to truth; one horse represents rational or moral impulse or the positive part of passionate nature (e.g., righteous indignation); while the other represents the soul's irrational passions, appetites, or concupiscent nature. The Charioteer directs the entire chariot/soul, trying to stop the horses from going different ways, and to proceed towards enlightenment.

    https://en.wikipedia.org/wiki/Chariot_Allegory

    I suppose this attitude conveys the impression of the proverbial Mr Spock - the cool rationalist, for whom emotion is a peculiar human trait, but who on that account misses something vital about being human.Wayfarer

    This is what I mean about the power of cultural imagery to teach appropriate attitudes. Mr Spock speaks for the cool reason of the Enlightenment. Capt Kirk speaks for the Romantic riposte that humanity is ultimately defined by its heart.

    Meanwhile the whole show teaches impressionable youngsters that the American Dream of individual freedom and free enterprise should be imposed on all alien cultures encountered. Even bug-eyed monsters should be given the supreme gift of human independently-minded feeling.

    Oh the irony of the fact that it is social constraints that give shape to human freedoms, pointing us in a clear direction and leaving us to do our best to meet those goals.

    Emotionality is a clay to be sculpted. And the goals are made so lofty, so abstract, they become unrealistic to achieve.
  • Good Partners
    A good partner brings out the best in one's self.
  • Emotions are a sense like sight and hearing
    I was particularly struck by work done by Roddy Cowie et al for the HUMAINE projectmcdoodle

    I read one of his literature reviews and thought it presented a very confused picture. For me, nothing about emotion makes sense until you can clearly distinguish between a neurobiological level of evaluation - what all animal brains are set up to do - and the socially-constructed emotionality of humans, which is a cultural framing of experience.

    The best source on social emotion was the group led by Oxford philosopher Rom Harré in the 1980s. It took the anthropological route of showing how different such constructs are across different cultures. And it tied in with the rediscovery of Vygotsky and the development of discursive psychology at the same time. Harré did two good collections of essays.

    So the two levels of emotionality have to be understood in separate ways.

    The biological emotions are basic affective responses. The brain needs to be able to sense the body's physiological state - we are hungry, tired, etc - and also interpret the world in terms of its dangers, its rewards. It constantly needs to orientate in ways that match our physiology to the demands that are imminently expected. If we see a tiger, we need to start feeling the adrenaline that primes us for whatever decision we are going to take. That level of emotionality is simply what it feels like to be changing gear metabolically in a way that fits the particular challenge or opportunity of the moment.

    Then the social emotions are not about our own metabolic/physiological needs but about socially appropriate behaviour.

    There is also a biological basis as we are creatures highly evolved for social living. We naturally feel empathy or dominance or whatever. We can point to specific neurotransmitters and hormones, like oxytocin and testosterone, which subserve specific brain pathways.

    But language means that feelings and ideas can be woven together as social scripts. We know how we are supposed to behave when we are being "in love", or "brave", or "ashamed". These "higher emotions", or Platonic passions, stand as cultural ideals we are meant to do our best to live up to. And how we act rather than how we truly feel is what really matters.

    As I say, once you check the cross-cultural anthropological evidence, this becomes very obvious. But Western culture - with its particular stress on the rationality vs emotionality dichotomy - actually cuts across people's ability to believe it as a fact. The Western script - reaching its height of development through the dialectic of the Enlightenment and Romanticism - means that human emotionality can't be understood in simple pragmatic fashion as the learning of appropriate social habits. Reason and feeling must be dualistically divided, each somehow at war with the other for ownership of the individual psyche.

    That is the irony. Much of the energy of even science or philosophy goes into perpetuating a Western cultural mythology. And that is why emotionality seems such a confused and self-contradictory subject. People think they know the answer - it's reason vs feeling, rationality vs irrationality, stupid - and so bend all their arguments to steer to that outcome.

    But also, it is a very successful social script, which is why it persists. By creating an exalted image of the individual human - always in a battle to conquer his/her base self by applying either higher reason, or higher feeling - then society is able to exert the maximum constraint on individual behaviour. We all become controlled by these learnt abstractions that are at the bottom of the West's creative, driving, growth-obsessed, mindset.
  • Utilitarian AI
    I'd refer you to the writings of Robert Rosen and other theoretical biologists like Howard Pattee. The whole idea of simulation falls apart when you consider biological reality.

    The very point of a machine is that it is materially and energetically disconnected from the real world of dissipative relations. A computer just mindlessly shuffles strings of symbols. It becomes Turing universal once that physical disconnection is made explicit by giving the machine an infinite tape and infinite time. The only connection now is via the mind of some human who thinks some programme is a useful way of rearranging a bunch of signs and is willing to act as the interpreter. If the output of the machine is X, then I - the human - am going to want to do physical thing Y.
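
    As a toy illustration of that point - purely a hypothetical sketch in Python, with the rule table and names invented here - a machine that blindly rewrites symbols according to a fixed table, where any connection to the physical world only comes in through whoever reads the output:

        # A minimal Turing-style machine: it just rewrites tape symbols
        # according to a fixed rule table. Nothing in here "knows" what the
        # symbols are for; any physical meaning is supplied by the human reader.

        # Rules: (state, symbol read) -> (symbol to write, head move, next state).
        # This invented table appends a '1' to a block of 1s (unary increment).
        RULES = {
            ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
            ("scan", "_"): ("1", +1, "halt"),   # write one more 1, then stop
        }

        def run(tape, state="scan", pos=0):
            tape = list(tape)
            while state != "halt":
                symbol = tape[pos] if pos < len(tape) else "_"
                if pos >= len(tape):
                    tape.append("_")            # pretend the tape is unbounded
                write, move, state = RULES[(state, symbol)]
                tape[pos] = write
                pos += move
            return "".join(tape)

        print(run("111_"))   # '1111' - but it only means "four" to an interpreter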

    So one can imagine setting up a correspondence relation where every physical degree of freedom in the Universe is matched by some binary information bit stored on an infinite tape which can shuffle back and forth in infinite time. But clearly that is physically unrealistic. And also it misses the point that life and mind are all about there being a tight dynamical interaction between informational symbols and material actions.

    There may be a divide between information and entropy. Yet there has to be also that actual connection where the information is regulating the entropy flow (and in complementary fashion, that flow is optimising the information which regulates it).

    So until you are talking about this two-way street - this semiotic feedback loop - at the most fundamental level, then you are simply not capturing what is actually going on.

    Reality is not a simulation and simulation cannot be reality. CTD - the Church-Turing-Deutsch principle - makes empty claims in that regard. Formal cause can shape material reality, but it can't be that material reality.
  • Utilitarian AI
    I'm not asking you to prove something cannot happen. I'm asking you to demonstrate that what you claim has started to happen.

    So - as is one of the defining differences between minds and machines - the argument is inductive rather than deductive. The degree of belief is predicated on a hypothesis seeming reasonable in that it is capable of being falsified. Has your claimed counterfactual - AI is simulating the essence of mindful action - come into sight yet?
  • Utilitarian AI
    I don't think this is necessarily something that can be understood in terms of the 'entropification principle'. I prefer a teleological attitude - that we're something the Universe enjoys doing.Wayfarer

    Sure. But if there is something like a 140 orders of magnitude difference between the amount of "dumb entropification" and the amount of "smart entropification" achieved by humans, then the Universe either is horribly bad at achieving its ends or it enjoys something else more.

    Just a little bit of quantitative fact checking there.

    Of course, the Singularitarians claim AI will spread intelligence across the Universe in machine form. It comes from the same place as interstellar panspermia.

    But again it is not hard to do the entropic sums on that. There are no perpetual motion machines. And indeed, it is not possible even to get close to that level of thermodynamic efficiency, no matter how clever the intelligent design.
  • Utilitarian AI
    Because I don't see what you say or have referenced as being proof that stuff like simulating the human brain as being impossible.Posty McPostface

    The burden is really on you - as the AI proponent - to show that your machine architecture is actually beginning to simulate anything the human brain is doing.

    So what is it that "conscious brains" actually do in core terms? That is the model you have to be able to present and defend to demonstrate that your alleged technical progress is indeed properly connected to this particular claimed end.
  • Utilitarian AI
    Except for one point: when intelligence evolves (which is surely does) how come it discovers 'the law of the excluded middle'. That is not 'something that evolved'.Wayfarer

    Funnily enough, that is the very first thing nature must discover. Existence itself - speaking as an organicist - arises via dichotomous symmetry-breaking. That is how dissipative structure is understood - as the emergence of the dichotomy of "dumb" local entropy and the "smart" global organisation that can waste it.

    So the laws of thought recapitulate that basic world-creating mechanism. The LEM is the final part of the intellectual apparatus that dissipates our uncertainty concerning possibility. We get fully organised logically when we boil things down to being definitely either/or (and hence, ultimately, both).

    We can't just have made up the ways of thinking that have proved so unreasonably effective. The laws of thought are not arbitrary whims but an expression of the logic of existence itself.

    That is what Peircean pansemiotic metaphysics is all about, after all. The universe arises via a generalised growth of reasonableness. That sounds mystical until you see it is just talking about the logic of symmetry-breaking upon which our best physical theories are now founded.
  • Utilitarian AI
    There are two AI scenarios. AI will either replace humans or augment humans. And given the "technology" is fundamentally different - machines can never be alive - a symbiotic relationship is the sensible prediction.

    Human consciousness is already a socially-augmented reality. We are creatures of a cultural super-organism. Language became stories, books, mathematics - a social machinery for constructing "enlightened individuals".

    Technology simply takes that story to another level. Look what happened when exponential tech resulted in a smartphone that had a gazillion times more processing power than a 1970s mainframe. Our lives got taken over by this new mad thing of social media.

    Back in the 1970s, scientists could only imagine that such processing power would be used to solve the problems of humanity, not obsess about the Kardashians.

    So sure as shit "AI" will transform things. But if you want to predict the future, you have to focus on the actual human story. We have to understand what we are about first. And that isn't just a story of "relentless intelligence and rationality".

    [Spoiler: Here I would go off down the usual path of explaining how intelligence arises in nature as dissipative entropic organisation - an expression of the second law of thermodynamics. :)]
  • The Unconscious
    I'm curious, what are your thoughts on the global workspace theory stuff?JupiterJess

    I don't think it is wrong so much as just clunky. It is still stuck in essentially a representational/computational paradigm with its flaws.

    But on the other hand - in the mid-1990s - I thought it was also clearly the leader in terms of that approach. It got the neural basics right, like the two-stage habit~attention distinction, and the contextual, or constraints-based, approach to processing.

    I knew Baars, so we discussed this quite a bit. At the time, the debate for me was about how to reconceive brain function as self-organising dynamics instead of data-processing computation. Both paradigms appeared to have a lot of correctness, yet how could they be married? That was when I got into the emerging bio-semiotic approach in theoretical biology. Semiotics does marry the dynamical and computational views in the one idea of the sign relation.

    So in summary, 20 years ago, the global workspace was a competent summary of the neurocognitive evidence. But the philosophy of mind issue of "what paradigm" was equally clearly not solved by that. It still awaits its semiotic revolution. :)
  • Emotions are a sense like sight and hearing
    Emotionality is nearly always a complex of more than one 'emotion': in grief I am angry, in despair I often keep hopeful, and so on. To place them on some binary scale feels trivial.mcdoodle

    There is a good reason why binaries make sense. To understand the world in the most computationally efficient fashion, you want to break it into sharp-edged black and white. So a dichotomy - like approach and avoidance, or relaxed and aroused - is a way to see the world in its complementary extremes. It gives two precisely opposed points of view. And that then provides the clarity within which a spectrum of graded response can occur. Once the bounds of possibility have been anchored crisply by black vs white, then in-between you can have with equal definiteness every possible shade of grey.

    So a dichotomy is a general processing principle. It fixes a decisive direction on the world. Then having broken the world towards two contrasting extremes, the third thing of a spectrum of intermediate reactions becomes possible. A glass can't be half full or half empty until there is a glass that is either completely full or completely empty.

    The question then for the study of emotionality would be: what is the smallest number of such dichotomies you could get away with in modelling the brain's architecture?

    Again, a most basic one would seem to be the sympathetic vs parasympathetic response - wind down or crank up.

    But then there is a whole hierarchy of further more specific breaks. If we are negatively aroused, this may manifest as demanding a sharp decision of whether to fight or run. And even the flight response is dichotomised in the neural wiring of animals. A further escape stratagem is a choice of whether to run or freeze. If a tiger appears before you in the jungle glade, there are two "best" instinctive options that evolution has built into the brain.

    So yes, there is plenty of evolved complexity when it comes to our emotional responses. But also there is a single logic to all brain processing. The first job in making sense of the world is to break it apart as thesis and antithesis - frame it clearly as a black and white choice. Impose a clear directionality that makes a choice actually meaningful. Doing one thing becomes definitely not doing its opposite. And in that way, the whole of what it would be possible to do becomes contained within the spectrum of positions thus created.

    Complexity can then arise because having made a first most general black and white decision, a whole lot of more particular black and white decisions can be piled on top. Once the brain can decide to relax or crank up, it can decide whether to crank up in terms of approach or avoidance. And if the decision is avoid, that could be flight or freeze as a more particular black and white choice.
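
    As a throwaway sketch of that stacking - every name and threshold here is invented purely for illustration - each crisp either/or only opens up once the more general either/or above it has been decided:

        # Toy hierarchy of dichotomous "decisions", illustrative only.
        def emotional_orientation(threat, opportunity, escape_route):
            # Most general cut: relax vs crank up (parasympathetic vs sympathetic).
            if threat < 0.2 and opportunity < 0.2:
                return "relax / conserve energy"
            # Next cut, only now meaningful: approach vs avoid.
            if opportunity >= threat:
                return "crank up: approach"
            # Still more particular cut within avoidance: flight vs freeze.
            return "crank up: flee" if escape_route else "crank up: freeze"

        print(emotional_orientation(threat=0.9, opportunity=0.1, escape_route=False))
        # -> 'crank up: freeze'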

    As you say, when you get to the level of really complex (and culturally informed) emotions - like despair - the dichotomy is contained within the very concept. If despair is defined as a lack of hope, then despair is always going to make you think of hope - its antithesis. You can't actually have one without the thought of the other. Whiteness is really the sublimated idea that blackness happens to be maximally absent.
  • Emotions are a sense like sight and hearing
    Emotions are actually a sense like sight. They allow us to see the values that things and situations hold in our lives.TranscendedRealms

    I can agree with this as a starting point but then is emotion really also an action? Sure, having a feeling of positivity or negativity is a state we can experience. We can call it a sensation. But at a deeper level, it is an orientation response - a call to action. It speaks to the broad assessment of whether to approach or avoid. Positivity draws us towards, negativity repels.

    So yes, the emotional brain is fully part of every moment of perception. We can't see anything without a basic feeling of evaluation - even if the feeling is a disinterested "meh".

    But to define emotion as a sensory modality - one to be tallied alongside sight, hearing, taste, touch and smell - seems a miscategorisation, as it is instead a generalised approach~avoidance kind of decision-making that applies across all these particular sensory modes.

    Then as we dig deeper into this "emotional faculty", we can see that it has more complex structure. As well as strong feelings of approach vs avoidance, it has a still more general decision to make of relaxed vs aroused. Does the self need to crank up for big action - like approaching or avoiding? Or can the self relax and conserve energy because there is nothing to react to - a feeling which itself can be either positive or negative, depending on whether that lack of stimulus is a relief or a matter of boredom?

    Anyway, this basic action decision - crank up vs wind down - is a dichotomy wired into the body's nervous system as the contrasting sympathetic vs parasympathetic pathways. It is very real as a distinction built into the nervous system's design.

    So of course the emotions exist to evaluate the world. But they have to get news of that world through the senses, or perceptual paths. A tiger has to be seen or heard before an evaluation - positive or negative - can happen. So emotion comes in at a general level, making sense of a sensation - pointing us quickly to the right kind of action. It really has a foot in both the traditional camps.
  • The evolution of sexual reproduction
    If you check out evolutionary biologists like Nick Lane, there are much more sensible stories than this "unwanted over-powering" scenario of yours.

    For instance, sex had to develop for life to become complex and multicellular. It permitted the systematic recombination of genes that meant each gene was exposed to selective pressure individually. Selection could see and tune individual traits rather than having to judge a complex organism on its whole genome. The good didn't have to be thrown away with the bad.

    This was such an advantage it easily paid for the disadvantage of half the population not being breeders. And it was likely even an essential step to weed out actual parasites - introns - which would otherwise have infested DNA strands to the point of replicational extinction.

    The asymmetry of egg vs sperm - the reason for two actual sexes - is an extension of this logic. It separated stemline variance (the egg with its essential metabolic kit) from germline variance (the sperm with its inactive DNA package). So the egg could preserve the essentials of the successful system of living while the sperm could become the freely disposable experiment.

    The commonly taught idea is that sexual reproduction developed because it gave a greater amount of diversity to a gene pool, which in turn helps keep the species healthy by preventing unfit genes from replicating. This is probably at least part of the story...darthbarracuda

    Or the whole of the story, generally speaking...

    But if you are talking about Homo sap specifically, what might appeal to your anti-natalism is the incredible violence foisted on the human female body by having to give birth to a monstrously brained infant through an inadequately designed bipedal birth canal.

    Babies. There's your real parasites, eh?
  • The Observer's Bias Paradox (Is this really a paradox?)
    You can control for the biases you believe to be there. And if you can control for the particular biases of some specific domain, then you can also control for bias in a general fashion when investigating the question of scientific bias itself.

    So as is the general case with these kinds of self-referential paradoxes, you can break out of the apparent circularity by referencing hierarchy. Different levels of analysis - the general vs the particular - break apart the deductive loop to allow conclusions to be arrived at via inductive reasoning.

    That means of course that you can't transcend the conditions for knowledge. You can't get outside the world you are trying to organise and so prove absolutely some claim - like that you have correctly removed all possible observer biases in your attempt to demonstrate that observer bias is indeed a real thing.

    But you can then quite straightforwardly demonstrate that you have minimised your uncertainty about this being true. If you frame a general theory - the hypothesis that observer bias exists in science - you can then check the degree to which that prediction measures as true. A theory that is general enough should even predict the biases you will bring to the table when exploring this hypothesis.

    So while you can't break out of the circle of explanation, you can show that the general and the particular - the theory and its tests - are becoming ever more definite. The more possibility the general rule about observer bias absorbs, the more unlikely it is that any of the particular forms of bias will escape notice.
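
    A toy numerical sketch of that inductive tightening - the prior and the run of "audit results" are invented purely for illustration - where each check of the general hypothesis against particular observations shrinks the remaining uncertainty without ever delivering certainty:

        # Beta-Binomial toy: belief that "observer bias occurs in this kind of
        # study", updated against a run of hypothetical audit outcomes.
        alpha, beta = 1.0, 1.0                 # flat prior: maximal uncertainty
        audits = [1, 1, 0, 1, 1, 1, 0, 1]      # 1 = bias detected, 0 = none found

        for found_bias in audits:
            alpha += found_bias
            beta += 1 - found_bias
            mean = alpha / (alpha + beta)
            spread = (alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))) ** 0.5
            print(f"P(bias) ~ {mean:.2f} +/- {spread:.2f}")

        # The estimate never becomes a certainty, but the spread keeps shrinking:
        # the general rule absorbs more and more of the particular possibilities.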

    Induction means accepting probabilities rather than demanding certainties. But in the end, that is how knowledge works. And science has developed a vast array of practical tools for dealing with observer bias - even if it is well known that it gets pretty relaxed about actually applying them much of the time.
  • The Unconscious
    Again I am gob-smacked that you simply repeat my own arguments back to me.

    The only difference is that I emphasise the complementary logic involved. The brain has an obvious interest in doing as much at an automatic learnt level as possible, because by doing that, attentional level deliberation is by default reserved to deal with whatever else turns out to be unexpected, novel, or otherwise most significant about some passing moment.

    And I've long been championing functional models - like Grossberg's ART neural nets, or Friston's Bayesian brain - which best make that point.

    To the extent that the brain can make its environment predictable, it doesn't really have to pay attention to it. It already knows what is going on before it happens. The other side of that coin is then that when events start not meeting expectations, the brain knows to flip to the complementary form of analysis - the one we call attentional and deliberative. Rather than the smooth and skilled stereotype response, the higher brain can enter a creative and exploratory mode of thinking, remembering and learning. With the frontal planning areas and working memory engaged, the world can be kept in mind long enough to try stuff until some new understanding appears to have a good fit.
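
    A minimal sketch of that Bayesian-brain style of logic - every number and threshold here is invented for illustration - run on cheap predictions while they keep working, and only escalate to the slow deliberative mode when the prediction error gets surprising:

        # Toy predictive-processing loop: habit = act on the standing prediction;
        # attention = stop and revise the model when expectations fail.
        SURPRISE_THRESHOLD = 0.3    # invented: how much error triggers attention

        def process(stream, prediction=0.0, learning_rate=0.5):
            for obs in stream:
                error = abs(obs - prediction)
                if error < SURPRISE_THRESHOLD:
                    # Habit mode: the standing prediction is good enough, just act.
                    print(f"saw {obs:.1f}, expected {prediction:.1f} -> habit")
                else:
                    # Attention mode: surprise, so slow down and revise the model.
                    print(f"saw {obs:.1f}, expected {prediction:.1f} -> attention")
                    prediction += learning_rate * (obs - prediction)

        # A predictable world, then a surprise, then predictability regained:
        process([0.0, 0.1, 0.0, 0.9, 0.8, 0.9])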

    So it is an approach that accounts for the phenomenology pretty easily. It explains stuff like how we can hit tennis balls or drive cars when attentional processing is much too slow and much too tentative to account for such real world skill.

    I don't know why you would start banging on about pre-conscious habit being "primitive". It should be clear enough that habit equates with accumulated wisdom. Attention and habit are two ends of the one dynamical process of coming to understand the world in useful fashion. So there is no evolutionary sequence here. Both had to arise together because each is formally the other's opposite as a style of processing.

    But again, that is a point that is difficult to understand unless you are a Peircean or systems thinker.

    Reductionists think that complexity builds hierarchically from the ground up. You have primitive unthinking creatures that are a bundle of reflexes. Then evolution keeps adding more intricate processing and suddenly out pops a self aware mind. It is the same computational paradigm that leads people to expect awareness to pop out as part of an information processing sequence that culminates in some final data display.

    But natural philosophy understands hierarchical organisation to be triadic. Everything starts with a symmetry breaking that then progresses in two complementary directions. The two orthogonal poles of organisation that result - the local and the global - can then interact. You have a holistic system which self-organises.

    So when it comes to brains, or simpler nervous systems, you can't talk about which came first - attention or habit. They have to arise together as a way of mutually breaking some vaguer state of uncertainty or indeterminacy.

    A jumping spider has a brain the size of a poppy seed and yet it still has this same contrast between attentional and habitual processing. It can creep around the back of dangerous prey after it has paused long enough to assess the situation.

    Now of course a jumping spider is not conscious like a cat let alone a child. But - if we define consciousness vs unconscious in terms of a functional contrast of processing strategies - then we can say it is also a conscious creature as we can stick electrodes in its head and observe the same fundamental attention vs habit distinction.

    Our intuition that the consciousness of the jumping spider is hardly as good as ours is then also accounted for by the fact it has to pause and consider. It has to pounce rather than smoothly pursue. It is more robotic in that its levels of neural performance are not so integrated in "real time", nor are they integrated in a general fashion over scales of minutes, hours and days.

    So definitely we can see a clear difference in scale - without having to claim an essential difference in kind.

    But if we get down to a worm or a jellyfish, neither attentional-processing nor habit-forming exists except in the most neurally reduced form. You can demonstrate the habituation of reflexes. So there is a bit of "in the moment" learning to go with a bit of genetically-inherited instinct. There is some kind of contrast in adaptive behavioural response - the precursors to attention and habit. Yet also it is getting about as attenuated as we can imagine.

    So I am speaking to a different model of a system - the model of an organism rather than a machine. A system that processes signs rather than information. And that is just a different paradigm with its own developmental and self organising logic.
  • The Unconscious
    I'm puzzled that you think "an NCC approach" or "an integration via recurrent networks" is somehow different to what I said. I'm also puzzled if you don't think I was specific about human introspective self-awareness being an added, culturally-evolved, linguistic skill.

    If you want to side with Lamme and replace the dichotomy of habit~attention with feed-forward vs feedback - as the best way of getting away from having to talk about unconscious vs conscious - then that is not really different from what I said. I said habits are emitted fast and direct while attention is about the slower top-down evolution of novel states of global focus or constraint.

    So where does an important difference lie with the cites you provide? I would say Lamme is an example of representationalism - the idea that consciousness involves some bottom-up data crunching that has to rise to some level and produce "a display" ... with all the homuncularity that then ensues in having pushed the "witnessing self" out of the picture again.

    Byoung-Kyong Min is then an example of trying to locate consciousness in a particular brain structure rather than just focusing on the dynamics of integrative (and differentiating) neural processes. Again, representationalism hovers in the background. The talk is of neural states that are to be imagined as some sort of display (to whom?). Consciousness becomes a thing, a substance. And representationalism - in begging the question of how the extra quality of awareness arises - is basically dualistic: it always leaves us with the two things of the neural display and the unanswered issue of how this extra feature of reportable witnessing could arise from it.

    As I argue, the habit vs attention distinction is the routine way into understanding the functional anatomy of the brain. Anyone taking a general information processing route to explaining the brain will find this is a core structural dichotomy.

    And then I distinguished my own position within this general standard approach. I said I was taking the ecological, systems thinking, sign processing, etc, etc, angle. So that marks a big shift in paradigm from data-processing and representationalism. It puts semiosis or a modelling relation approach centre stage.

    When you hear me talk about attention, you immediately think about that as the creation of some kind of state of display. But I am thinking about attention in terms of constraint and the globally coherent suppression of possible neural activity. Attention brings things into focus by creating fleeting useful states of limitation. It is repression for a purpose, if we want to put it simply.

    I object to your insistence on a single approach limited to attention and habit and on briefly reviewing some literature find many other approaches in the field.prothero

    Well you are misunderstanding what I said. Attention vs habit is a general distinction used to organise our scientific understanding of how brains "process the world". It was what got experimental psychology going in the 1800s. It kind of got lost with the heavily computational, data-processing, representationalism of 1970s cogsci, but has come back again as a foreground distinction with 1990s neuroscience.

    Then within cognitive neuroscience - the study of the brain's functional anatomy - I stand with a number of counter-positions. So as I say, I am with the dynamicists, ecologists, the anti-representationalists, the Bayesians, and most particularly, the semiologists.

    If you go in that direction far enough, you are then talking about brain function in terms of sign processing rather than information processing. The implicit dualism of representationalism has been left behind and now it is about a triadic modelling relation in which self and world co-emerge as a concrete causal state of affairs. The "I" is not a mysterious homuncular witness but instead the very action of imposing a state of constraint on material possibility.

    Yes, this doesn't seem to explain "consciousness" - as a dualist/representationalist will always still believe it needs to be explained. It just doesn't speak to the issue of "the psychic cause of an aware display". But tough. That is why consciousness is such a bad term when it comes to doing science. It carries with it all its dualist/representationalist overtones. It is a verbal trap. Shifting the conversation to attention vs habit is the first step to breaking with this culturally and religiously entrenched metaphysical paradigm.
  • The Unconscious
    I think it is a difference in scale but then I am a panpsychist (panexperientialist) of sorts.prothero

    OK, so it is a difference in scale as the neuroscience suggests. But then you want to make some kind of claim about a difference in kind?

    This is where we might discover if anything useful can be said about what you feel to be missing from my pragmatic account based on naturalistic or ecological information processing principles.
  • The Unconscious
    The assertion here is that "attention" is a primitive neurological function, seen in say frogs and fruitflies. Do we wish to say they are "aware" and "conscious" in the same manner as humans?prothero

    Is it a difference in kind or difference in scale? Is mind something only humans have or does the degree correlate with neural organisational complexity?

    Both are reasonable hypotheses. And what we do know is that the degree of organisational complexity actually does correlate with how most people would rank sentience.

    As to the rest, I don't think you could have read my earlier posts.
  • The Unconscious
    As soon as you can define awareness or consciousness in a way that can be neuroscientifically investigated, then we can have a sensible debate about what exactly is extra or different.

    You claim that even a cursory review of the literature supports you. I hope you don't just mean stuff like blindsight where those folk still had intact superior colliculi and so a preattentive path for guiding their visual search. Of course they could report having had an instinct to look somewhere as well as report they had no consequent visual image due to their particular brain damage.
  • The Unconscious
    Again, the argument would be that attention (and habit) are neuroscientific terms. They speak to information processes that can be mapped to brain architecture. And so real questions can be asked.

    Talk about consciousness is talk about phenomenology. Unless it is rephrased as some kind of information processing claim, there is no way of investigating it as a modelled construct.

    So unless you ground the term conscious (or unconscious) in some kind of information processing paradigm, you can't even ask the question scientifically. And then to the extent that you tie your notion of "being conscious" to neuroscience, you find that it overlaps more and more with reportable attentional states.
  • I thought science does not answer "Why?"
    You're still thinking 'fundamental particles',Wayfarer

    Or more like fundamental resonance modes - the simplest possible permutation symmetries. Particles are excitations of a quantum field rather than scraps of matter. So their "why" is nature's "desire" for lowest-mode simplicity.
  • I thought science does not answer "Why?"
    Like I said, science is ultimately not concerned with why. Above it is implied that "Why?" does not matter.WISDOMfromPO-MO

    So if someone asks you why 1 + 1 = 2, then you would reply that it is necessarily so. It has mathematical inescapability.
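
    For what it is worth, that inescapability can be made completely literal. In a proof assistant like Lean 4 - just a minimal sketch of the point - the equation closes by sheer definitional computation, with no alternative left open:

        -- 1 + 1 reduces to 2 just by unfolding the definition of addition on the
        -- natural numbers; `rfl` (reflexivity) is the entire proof.
        example : 1 + 1 = 2 := rfl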

    What then when fundamental physics discovers the same lack of alternatives? Particles like quarks and leptons simply have to be as they represent the simplest possible symmetry states. Nature can't be broken down any further. Like cubes and tetrahedrons, ultimate simplicity has mathematical inevitability. And that is then the why. It is just a formal constraint that something has to be what is left after everything has got broken down to the least complex possible basics.

    This isn't the ordinary notion of a telic goal or purpose. But it is a scientific one. And it places a limit on infinite regress. There actually is a simplest state in the end. You wind up with quarks and leptons as they are as simple as it gets.
  • I thought science does not answer "Why?"
    I thought that science, therefore, just focuses on what is and ignores or dodges "Why?".WISDOMfromPO-MO

    It's a matter of emphasis. In the end, science can't avoid teleology in some form. Analysis must break causality into two general parts - the what part which covers material and efficient cause, and the why part which covers formal and final cause.

    But scientific explanation as a social activity brings society the most concrete rewards when it focuses on "what"-style - mechanistic rather than organismic - models of causality.

    Forget about the reasons for things, or the design of things. We humans can supply those parts of the equation when applying scientific knowledge to creating a technological world. Just give us the "what"-type analysis that we can use to make machinery - or closed systems of material and efficient causes.

    So big science would be all four causes. Techno-science fetishises "what" questions as it operates with a less ambitious, but more everyday useful, purpose.
  • The First Words... The Origin of Human Language
    And to communicate.Bitter Crank

    Sure. But in recreating a likely evolutionary sequence, we can be sure that tool-use started a million years ahead of symbolic thought, and hence symbolic communication. Art and decoration only started about 100,000 years ago. But hominids were handy with spears 400,000 years ago, and possibly making fire a million years ago.

    So a selective pressure that led to a lateralisation and tighter organisation of the hominid brain would have had a long time working on motor skills - generalised planning and fine motor control. The evolution of the opposable thumb following the evolution of bipedalism, etc.

    Then speech itself - as further brain specialisation - would be the johnny come lately, piggybacking on that rise in pre-motor specialisation for sequential/serial motor organisation. The main actual changes would be a redesign of the hominid palate, tongue and vocal tract. Our jaws got pulled in, the tongue hunched to fill the palate, the hyoid dropped in ways that created a new choking hazard.

    So in terms of the probable evolutionary trajectory, sign language would have come after vocal language, just like writing and typing did. Signing has disadvantages as the first departure point because - unlike speaking - signing isn't as serially restricted. It has too many degrees of freedom. Speech eliminates those and forces a grammatical structure as a result.

    At least that is my summary of the paleo evidence and the many theories going around.
  • Wittgenstein, Dummett, and anti-realism
    You have the word, the thing, and the 'referring' relation between the two. It's triadic.Janus

    That's not it at all because it doesn't do sufficient justice to the "mind" with its goals and meanings. You are only talking about two physically real things - a physical mark and a physical thing - and then throwing in the "third thing" of some vague "referring relation". And everything that is troublesome is then swept under that rug.

    In any case, how else could we make sense of our talk about things, other than to accept that our talk is indeed about things?Janus

    And you did it again. Who is this "our" or "we" that suddenly pops up? You just ticked off the three things that are just two physical things in interaction - a mark and a thing - and now it is back to dualism where it is a mind that hovers over the proceedings in some vague fashion.
  • Wittgenstein, Dummett, and anti-realism
    Yeah. I already said it is hard to see that. Dualism is that deeply rooted in the folk view.

    So reference and representationalism is just taken for granted. You literally "can't see it" due to a background of presuppositions that is also not being acknowledged.

    Yes, nouns name things. And within a certain metaphysics - a metaphysics of thingness - that is a perfectly self-consistent stance. But once "thingness" is brought into question, then we can start to wonder at the things we so happily give a name.

    Isn't that Wittgenstein 101? Many of philosophy's traditional central puzzles are simply a misunderstanding of language use.

    But then the reason Wittgenstein is inadequate - despite Ramsey whispering Peircean semiotics in his ear - is that rather than making metaphysics bunk, this is why better metaphysics is demanded. The proper focus of philosophy has to shift to semiotics - a theory of meaning - in general.

    (And not PoMo with its dyadic Saussurean semiotics, but proper triadic or structuralist semiotics. :) )
  • The First Words... The Origin of Human Language
    On a point of neuroscience, swear words are more emotionally expressive vocalisations - said by the cingulate cortex, as it were - rather than prefrontally orchestrated speech acts.

    That is why they feel like involuntary explosions that take some conscious effort to suppress. Or that socially they communicate a state of feeling rather than some cogent meaning.

    This all goes to the evolutionary argument that vocalisation started off at a lower brain level - emotional vocalisation akin to expressive grunts and coos. Then connecting the new higher level brain organisation - developed for "articulate" tool use and tool making - back to that, was the crucial pre-adaptation for grammatical speech acts.

    Broca's area is really just another part of the pre-motor frontal planning hierarchy. So we evolved careful voluntary control over the use of our hands to chip flints and throw spears.

    Rather than neurons burrowing upwards, more important was higher level neurons burrowing down to begin to regulate lower level execution in imaginatively deliberate fashion. The cingulate then was no longer top dog as the rather automatic producer of expressive social noises. The higher brain became the more generalised planner and controller. But still, swear words expose the existence of the old system.
  • Wittgenstein, Dummett, and anti-realism
    However, what's not being taken into consideration, is how meaning is first attributed...

    No world... no meaning.
    creativesoul

    Again, that just restates the metaphysics that leads to the blind alley of dualism. Sure, in simple-minded fashion, we can insist the world actually exists - just as we experience it. And just as words socially construct that experiencing.

    It has the status of unquestioned pragmatic utility as a belief. Kick a stone, and it should hurt.

    But philosophy is kind of supposed to rise above that as an inquiry. The issue is not really whether there "actually is a world". Instead it is what "meaning" really is in "the world". And neither realists, nor idealists, have a good approach to that.
  • Wittgenstein, Dummett, and anti-realism
    But if you tell me there are invisible yellow unicorns, what am I to do with that? That's not how we use color words.Srap Tasmaner

    It is hard to give up the commonsense-seeming notion that words refer to things. So the way you talk about this philosophically looks to presume the two realms of the mental and the physical. And then we can point to objects in both realms - real qualia and real things. And that reality then allows true correspondence relations. There is the experience of yellow in my head. There is the yellowness or wavelength energy out there in the world. Words can then safely refer - ostensively point to - some thing that is a fact of the matter. Whether out there or in my head.

    So commonsense defends a dualistic paradigm of realms with real objects - both mental and physical. Then words are simply labels or tokens. All they do is add a tag for talking about the real.

    But another approach - pragmatic or semiotic - would be to give words a properly causal role in reality. So now rather than merely pointing - an uninvolved position that changes no real facts - words are a habit of constraint. Speaking is part of the shaping of reality - mental or physical (to the degree that divide still exists).

    So to speak of an invisible yellow unicorn is to create constraints on possibility (physical or mental). It restricts interpretation in a way that is meaningful. We are saying this unicorn, if it were visible, would be yellow (calling anything yellow itself being a general interpretive constraint on experience).

    Of course unicorns are fictions. Invisible colour a contradiction. So the actual set of invisible yellow unicorns would be very empty indeed.
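
    A throwaway sketch of that constraint way of thinking - the little "world" and lexicon here are invented purely for illustration - where each word knocks out possibilities rather than pointing at a thing, and "invisible yellow unicorn" just happens to constrain the space all the way down to nothing:

        # Toy sketch: each word is a constraint (a filter) on a space of
        # candidates, not a label pointing at one of them.
        world = [
            {"kind": "horse",   "colour": "yellow", "visible": True},
            {"kind": "unicorn", "colour": "white",  "visible": True},
            {"kind": "banana",  "colour": "yellow", "visible": True},
        ]

        constraints = {
            "yellow":    lambda x: x["colour"] == "yellow",
            "invisible": lambda x: not x["visible"],
            "unicorn":   lambda x: x["kind"] == "unicorn",
        }

        def interpret(phrase, candidates):
            # Applying a word = discarding everything incompatible with it.
            for word in phrase.split():
                candidates = [x for x in candidates if constraints[word](x)]
            return candidates

        print(interpret("yellow", world))                    # two survivors
        print(interpret("invisible yellow unicorn", world))  # [] - constrained to empty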

    But that is not the point. It is how words actually function. And their role - as signs - is not to point from a mental idea to a material instance, as is "commonsense". Their role is to place pragmatic limits on existence. Or rather, restrict the interpretive relation we have with "the world" in some useful or meaningful goal achieving fashion.

    So with yellow, it doesn't matter what we each have privately in our heads. What matters is that there is some reliable social habit of communication where "yellow" is a sign that acts to constrain all our mental activity in a fashion where we are most likely to respond to the world in sufficiently similar ways.

    The big change here is from demanding the need for absolute truth or certainty - the pointer that points correctly - to a more relaxed view of word use where word meaning is only as constraining as useful. The fact that there is irreducible uncertainty - like do we all have the same qualia when we agree we are seeing "yellow" - becomes thankfully a non-issue. This kind of fundamental unknowability is accepted because we can always tighten shared definitions if wanted. But more importantly, not having to sweat such detail is a huge semantic saving of effort - and the basic source of language creativity. You want slippery words, otherwise you would be as uninventive as a machine or computer.

    So it is a paradigm shift. Words work by restricting states of interpretation or experience. They don't have to point from an idea to a world, or connect every physical object with its mental equivalent object in "true referential" fashion.

    But words do have to be effective as encoding habits of thought. They have to produce the kinds of relational states of which they attempt to speak. An "invisible yellow unicorn" is an example of the kind of word combination that is perfectly possible, but which could have no meaning as either something we could physically encounter or properly imagine.
  • The First Words... The Origin of Human Language
    The first words were born in this state of longing, fairly desperate, for absent, 'missing' things.Mark Aman

    Or before that, the first "words" would have pointed at socially present ideas. So they would have highlighted real possibilities present to both parties at that moment. Or at least present to one mind, and so an attempt to attract the attention of another mind to that sharable focus on doing something social (and thus fairly abstract).

    We get this from observing social communication in chimps especially. There is a lot said in eye gaze, hand gestures and expressive vocalisation. Holding out a hand can mean please share.

    So the crucible of language development would be this basic need - sophisticated co-ordination of behaviour within a co-operative troop structure. And the first thing to refer to would be "things we could be doing" - with at least one mind already thinking about the presence of said possibility.

    Pointing to that which is absent - physically or socially - is then a still more sophisticated level of thought or rationality. And that would require proper articulate speech - words and rules.

    So if grammatical speech is what you are talking about, then the reference to counterfactual possibilities - absences - does look to arise at that point.

    But again, I would argue that the ability to point to some particular social action I have in mind was the fertile ground that got language started.
  • The Unconscious
    The puzzle for me is that in talking in these terms you seem to be adopting the information-processing approach you criticised earlier when I mentioned students of Pylyshyn proposing to dissociate attention from consciousness.mcdoodle

    Well I said I would reject that old fashioned cogsci symbol-processing paradigm and instead of information processing, I speak of sign processing.

    So instead of the 1970s conviction that disembodied, multirealisable, algorithms could "do consciousness", I am saying that actually we have to understand "processing" in a Peircean semiotic fashion as a pragmatic sign relation which seeks to control its world for some natural purpose. And this has become the reasonably widespread understanding within the field, with the decisive shift to neural network and Bayesian prediction architectures in neuroscience, and enactive or ecological approaches in psychology and philosophy of mind.

    So all the laboratory experiments carried out in the name of the search for the attentional and automatic processes in the brain still stand. What has changed - for some of us - is the paradigm within which such data is interpreted.

    Attention and habit are characteristics we seem to share with many other animals; consciousness is something we don't seem to share with all that many of them (as I would say), if any (as some say).mcdoodle

    What makes human mentality distinctive is that it has an extra level of social semiosis because Homo sapiens evolved articulate, grammatical, speech. Language encodes a new possibility of cultural engagement with the material world. And that is of course revolutionary. It gives us the habit of self-aware introspection and self-regulation. It gives us the "powers" of autobiographically structured recollection and generalised creative imagination.

    But apart from that, we are exactly as other animals. We share the same bio-semiotic level of awareness that comes from having a nervous system that can encode information neurally.

    So this - as I said - is another reason why "consciousness" is such a bad folk psychology word. It conflates biological semiosis and social semiosis in ways that really leave people confused. It makes self-consciousness seem like a biological level evolutionary development.

    I was thinking about placebo studies, which I read a lot about earlier in the year. However cunning our studies of placebos, we can't scientifically get beyond something irreducible about 'belief' and 'expectation'. The I-viewpoint is not, as yet at any rate, susceptible to an 'information-processing argument'.mcdoodle

    But it does make sense as a sign-processing argument. Straight away we can see that we don't have to search for the secret of those kinds of beliefs in bio-semiosis. They are instead the product of a linguistic cultural construct - social-semiosis.

    And that is all right. It is the same naturalistic process - sign-processing - happening in a new medium on a higher scale.
  • The Unconscious
    Thought and feeling can be pretty passive. That was my point: not that consciousness isn't involved in habit, intent, and action, but that those things aren't necessary.Mongrel

    So thought, feeling and consciousness generally can also be "pretty active"?

    In other words, you are making an irrelevant distinction given that one of my key points is that consciousness, or attention level processing, wants to be as little involved in the messy detail of responding to the world as possible.

    The brain's architecture is set up with this sharp division of labour that I describe - attention vs habit. It makes sense to learn to deal with the world in as much a rote, automatic, learnt, skilled fashion as possible. That in itself becomes a selective filter so that only anything which by definition is new, difficult, significant, surprising, gets escalated to undergo the exact opposite style of processing. One that is creative, holistic, tentative, exploratory, deliberative.

    Note how talk of consciousness always comes back to the "thingness" of experience. It is classic Cartesian substance metaphysics. Consciousness is a something, a mental stuff, a mental realm. The unconscious is then another kind of stuff, another kind of realm. No surprise nothing feels explained by that kind of rhetoric.

    But my approach zeroes in on the very machinery of reasoning and understanding. We can see how a particular division of labour - a symmetry breaking - is rational. The question becomes what else could evolve as an optimal way to set up a modelling relation between a self and a world?
  • The Unconscious
    If the constraint is general rather than particular, we are right back at the level of general intentionality, which has the capacity to produce many different particular states of attentional focus.Metaphysician Undercover

    You are just muddling with words to prolong an argument. As is usual.

    Another way of putting it is that vague intentionality becomes crisp intentionality through attentional focusing.

    There you go. Another statement which you can muddle away at forever. :)
  • The Unconscious
    I can only repeat what I've already said.

    The brain is already an "intentional device". It is full of potential intentions at all times.

    Then what we call being conscious is centrally about focusing this general state of intentionality so that some concrete goal emerges to dominate the immediate future. This requires all contradictory intentions to be suppressed. Some particular attentional focus and state of intentionality emerges.

    Then this in turn becomes the general constraint that places limits on habit-level performance. Attention can't control rapid, smooth, highly learnt behaviour with latencies of milliseconds. And nor would that even make sense - as attention is there to be slow and deliberate, to break things apart rather than stick them together in unthinking complexes, to do the learning that masters novelty rather than the performing in which novelty is minimised.

    So the dichotomy of attention and habit is no accident. It is what logic demands as it dichotomises our response to the world in exactly the way that has to happen. It is an obviously reasonable division of labour.

    Let me run you through it again.

    General brain-level intentionality is the ground for attentionally-focused particular states of intention.

    Attentionally-focused intentionality is the generalised constraint on the freedom of learnt habits and automatisms that arise to fill in the many particular sub-goals necessary for achieving that greater general goal.

    I don't have to notice what my feet and hands do when turning a corner in the car. If it's routine, the mid-brain/cerebellum fills in those blanks unthinkingly. I form no reportable working memory in the prefrontal cortex. What I experience phenomenally is what folk label "flow". Or smooth action with an "out of the body" sense of not having to be intentionally in charge.

    You can obsess about trying to make my right words wrong. But haven't you got better things to do?
  • The Unconscious
    So the word "cat" may be used to refer to a particular cat, or it may be used to refer to cats in general, but to confuse these two is category error, or equivocation.Metaphysician Undercover

    That's really great, MU. But you are the one barking about there being only the one possible use of "intent" here. I'm happy not to confuse them the way you keep doing.
  • The Unconscious
    And thought and feeling and planning and imagining aren't actions? Muscular action isn't both voluntary and involuntary?
  • The Unconscious
    A person's totally paralyzed by a neuromuscular blockade and they're conscious.Mongrel

    A person can be conscious without having any particular intentions.Mongrel

    Maybe you just don't realise how disjointed your thinking is? Two different points and you ask don't I agree as if you were still talking about the one thing - which still remains unexplained.

    Why should either present a difficulty in terms of the attention~habit conceptual framework of a neuroscientific account?

    Of course if there is a block between the central nervous system and the skeletal muscle system, then "conscious wishes" are thwarted. A runner with no legs can't run. Big deal.

    Likewise if attention doesn't focus your state of mind, it is unfocused. If you have no need to act, then you rest. And if you want to talk about intentionality as something very general, then rest and other forms of inaction are how organisms save energy and avoid risks.

    We could go on to talk about vigilance, creativity, the right brain's mode of attending. It's all standard fare within an attention~habit neuroscientific framework.

    But as I say, you don't seem to be realising that your replies don't even stick to the point you were making an instant ago.
  • The Unconscious
    Wake me up if you want to engage in the substance of my posts, which have been about how the conceptual dichotomy of attention~habit makes neuroscientific sense of what folk talk about when they're feeling baffled by conscious and unconscious thought and action.
  • The Unconscious
    You were saying medicine is no folk craft. So that is why medicine would try to understand what goes on exactly in the mechanistic information-processing fashion that I originally said was the better way to even enter a conversation about the unconscious.

    If you want me to agree to my own point, well sheesh, just take it as read, dude.

    If you thought you were challenging anything I said, have a go at tidying up your posts.

    If you just want to express your usual hostility, big deal.