• The Unconscious
    As soon as you can define awareness or consciousness in a way that can be neuroscientifically investigated, then we can have a sensible debate about what exactly is extra or different.

    You claim that even a cursory review of the literature supports you. I hope you don't just mean stuff like blindsight, where those folk still had intact superior colliculi and so retained a preattentive path for guiding their visual search. Of course they could report having had an instinct to look somewhere, as well as report that they had no consequent visual image due to their particular brain damage.
  • The Unconscious
    Again, the argument would be that attention (and habit) are neuroscientific terms. They speak to information processes that can be mapped to brain architecture. And so real questions can be asked.

    Talk about consciousness is talk about phenomenology. Unless it is rephrased as some kind of information processing claim, there is no way of investigating it as a modelled construct.

    So unless you ground the term conscious (or unconscious) in some kind of information processing paradigm, you can't even ask the question scientifically. And then, to the extent that you tie your notion of "being conscious" to neuroscience, you find that it overlaps more and more with reportable attentional states.
  • I thought science does not answer "Why?"
    You're still thinking 'fundamental particles',Wayfarer

    Or more like fundamental resonance modes, these being the simplest possible permutation symmetries. Particles are excitations of a quantum field rather than scraps of matter. So their "why" is nature's "desire" for lowest-mode simplicity.
  • I thought science does not answer "Why?"
    Like I said, science is ultimately not concerned with why. Above it is implied that "Why?" does not matter.WISDOMfromPO-MO

    So if someone asks you why 1 + 1 = 2, then you would reply that it is necessarily so. It has mathematical inescapability.

    What then when fundamental physics discovers the same lack of alternatives? Particles like quarks and leptons simply have to be as they represent the simplest possible symmetry states. Nature can't be broken down any further. Like cubes and tetrahedrons, ultimate simplicity has mathematical inevitability. And that is then the why. It is just a formal constraint that something has to be what is left after everything has been broken down to the least complex possible basics.

    This isn't the ordinary notion of a telic goal or purpose. But it is a scientific one. And it places a limit on infinite regress. There actually is a simplest state in the end. You wind up with quarks and leptons as they are as simple as it gets.
  • I thought science does not answer "Why?"
    I thought that science, therefore, just focuses on what is and ignores or dodges "Why?".WISDOMfromPO-MO

    It's a matter of emphasis. In the end, science can't avoid teleology in some form. Analysis must break causality into two general parts - the what part which covers material and efficient cause, and the why part which covers formal and final cause.

    But scientific explanation as a social activity brings society the most concrete rewards when it focuses on what-style models of causality - mechanistic rather than organismic.

    Forget about the reasons for things, or the design of things. We humans can supply those parts of the equation when applying scientific knowledge to creating a technological world. Just give us the what-type analysis that we can use to make machinery - or closed systems of material and efficient causes.

    So big science would be all four causes. Techno-science fetishises what-questions as it operates with a less ambitious, but more everyday-useful, purpose.
  • The First Words... The Origin of Human Language
    And to communicate.Bitter Crank

    Sure. But in recreating a likely evolutionary sequence, we can be sure that tool-use started a million years ahead of symbolic thought, and hence symbolic communication. Art and decoration only started about 100,000 years ago. But hominids were handy with spears 400,000 years ago, and possibly making fire a million years ago.

    So a selective pressure that led to a lateralisation and tighter organisation of the hominid brain would have had a long time working on motor skills - generalised planning and fine motor control. The evolution of the opposable thumb following the evolution of bipedalism, etc.

    Then speech itself - as further brain specialisation - would be the johnny come lately, piggybacking on that rise in pre-motor specialisation for sequential/serial motor organisation. The main actual changes would be a redesign of the hominid palate, tongue and vocal tract. Our jaws got pulled in, the tongue hunched to fill the palate, the hyoid dropped in ways that created a new choking hazard.

    So in terms of the probable evolutionary trajectory, sign language would have come after vocal language, just like writing and typing did. Signing has disadvantages as the first departure point because - unlike speaking - signing isn't as serially restricted. It has too many degrees of freedom. Speech eliminates those and forces a grammatical structure as a result.

    At least that is my summary of the paleo evidence and the many theories going around.
  • Wittgenstein, Dummett, and anti-realism
    You have the word, the thing, and the 'referring' relation between the two. It's triadic.Janus

    That's not it at all because it doesn't do sufficient justice to the "mind" with its goals and meanings. You are only talking about two physically real things - a physical mark and a physical thing - and then throwing in the "third thing" of some vague "referring relation". And everything that is troublesome is then swept under that rug.

    In any case, how else could we make sense of our talk about things, other than to accept that our talk is indeed about things?Janus

    And you did it again. Who is this "our" or "we" that suddenly pops up? You just ticked off the three things that are just two physical things in interaction - a mark and a thing - and now it is back to dualism where it is a mind that hovers over the proceedings in some vague fashion.
  • Wittgenstein, Dummett, and anti-realism
    Yeah. I already said it is hard to see that. Dualism is that deeply rooted in the folk view.

    So reference and representationalism is just taken for granted. You literally "can't see it" due to a background of presuppositions that is also not being acknowledged.

    Yes, nouns name things. And within a certain metaphysics - a metaphysics of thingness - that is a perfectly self-consistent stance. But once "thingness" is brought into question, then we can start to wonder at the things we so happily give a name.

    Isn't that Wittgenstein 101? Many of philosophy's traditional central puzzles are simply a misunderstanding of language use.

    But then the reason Wittgenstein is inadequate - despite Ramsey whispering Peircean semiotics in his ear - is that rather than this making metaphysics bunk, it is why better metaphysics is demanded. The proper focus of philosophy has to shift to semiotics - a theory of meaning - in general.

    (And not PoMo with its dyadic Saussurean semiotics, but proper triadic or structuralist semiotics. :) )
  • The First Words... The Origin of Human Language
    On a point of neuroscience, swear words are emotionally expressive vocalisations - said by the cingulate cortex, as it were - rather than prefrontally orchestrated speech acts.

    That is why they feel like involuntary explosions that take some conscious effort to suppress. Or that socially they communicate a state of feeling rather than some cogent meaning.

    This all goes to the evolutionary argument that vocalisation started off at a lower brain level - emotional vocalisation akin to expressive grunts and coos. Then connecting the new higher-level brain organisation - developed for "articulate" tool use and tool making - back to that was the crucial pre-adaptation for grammatical speech acts.

    Broca's area is really just another part of the pre-motor frontal planning hierarchy. So we evolved careful voluntary control over the use of our hands to chip flints and throw spears.

    Rather than neurons burrowing upwards, what mattered more was higher-level neurons burrowing down to begin to regulate lower-level execution in imaginatively deliberate fashion. The cingulate then was no longer top dog as the rather automatic producer of expressive social noises. The higher brain became the more generalised planner and controller. But still, swear words expose the existence of the old system.
  • Wittgenstein, Dummett, and anti-realism
    However, what's not being taken into consideration, is how meaning is first attributed...

    No world... no meaning.
    creativesoul

    Again, that just restates the metaphysics that leads to the blind alley of dualism. Sure, in simple-minded fashion, we can insist the world actually exists - just as we experience it. And just as words socially construct that experiencing.

    It has the status of unquestioned pragmatic utility as a belief. Kick a stone, and it should hurt.

    But philosophy is kind of supposed to rise above that as an inquiry. The issue is not really whether there "actually is a world". Instead it is what "meaning" really is in "the world". And neither realists, nor idealists, have a good approach to that.
  • Wittgenstein, Dummett, and anti-realism
    But if you tell me there are invisible yellow unicorns, what am I to do with that? That's not how we use color words.Srap Tasmaner

    It is hard to give up the commonsense-seeming notion that words refer to things. So the way you talk about this philosophically looks to presume the two realms of the mental and the physical. And then we can point to objects in both realms - real qualia and real things. And that reality then allows true correspondence relations. There is the experience of yellow in my head. There is the yellowness or wavelength energy out there in the world. Words can then safely refer - ostensively point to - some thing that is a fact of the matter. Whether out there or in my head.

    So commonsense defends a dualistic paradigm of realms with real objects - both mental and physical. Then words are simply labels or tokens. All they do is add a tag for talking about the real.

    But another approach - pragmatic or semiotic - would be to give words a properly causal role in reality. So now rather than merely pointing - an uninvolved position that changes no real facts - words are a habit of constraint. Speaking is part of the shaping of reality - mental or physical (to the degree that divide still exists).

    So to speak of an invisible yellow unicorn is to create constraints on possibility (physical or mental). It restricts interpretation in a way that is meaningful. We are saying this unicorn, if it were visible, would be yellow (calling anything yellow itself being a general interpretive constraint on experience).

    Of course unicorns are fictions. Invisible colour a contradiction. So the actual set of invisible yellow unicorns would be very empty indeed.

    But that is not the point. It is how words actually function. And their role - as signs - is not to point from a mental idea to a material instance, as is "commonsense". Their role is to place pragmatic limits on existence. Or rather, restrict the interpretive relation we have with "the world" in some useful or meaningful goal achieving fashion.

    So with yellow, it doesn't matter what we each have privately in our heads. What matters is that there is some reliable social habit of communication where "yellow" is a sign that acts to constrain all our mental activity in a fashion where we are most likely to respond to the world in sufficiently similar ways.

    The big change here is from demanding the need for absolute truth or certainty - the pointer that points correctly - to a more relaxed view of word use where word meaning is only as constraining as useful. The fact that there is irreducible uncertainty - like whether we all have the same qualia when we agree we are seeing "yellow" - becomes thankfully a non-issue. This kind of fundamental unknowability is accepted because we can always tighten shared definitions if wanted. But more importantly, not having to sweat such detail is a huge semantic saving of effort - and the basic source of language creativity. You want slippery words, otherwise you would be as uninventive as a machine or computer.

    So it is a paradigm shift. Words work by restricting states of interpretation or experience. They don't have to point from an idea to a world, or connect every physical object with its mental equivalent object in "true referential" fashion.

    But words do have to be effective as encoding habits of thought. They have to produce the kinds of relational states of which they attempt to speak. An "invisible yellow unicorn" is an example of the kind of word combination that is perfectly possible, but which could have no meaning as either something we could physically encounter or properly imagine.
  • The First Words... The Origin of Human Language
    The first words were born in this state of longing, fairly desperate, for absent, 'missing' things.Mark Aman

    Or before that, the first "words" would have pointed at socially present ideas. So they would have highlighted real possibilities present to both parties at that moment. Or at least present to one mind, and so an attempt to attract the attention of another mind to that sharable focus on doing something social (and thus fairly abstract).

    We get this from observing social communication in chimps especially. There is a lot said in eye gaze, hand gestures and expressive vocalisation. Holding out a hand can mean please share.

    So the crucible of language development would be this basic need - sophisticated co-ordination of behaviour within a co-operative troop structure. And the first thing to refer to would be "things we could be doing" - with at least one mind already thinking about the presence of said possibility.

    Pointing to that which is absent - physically or socially - is then a still more sophisticated level of thought or rationality. And that would require proper articulate speech - words and rules.

    So if grammatical speech is what you are talking about, then the reference to counterfactual possibilities - absences - does look to arise at that point.

    But again, I would argue that the ability to point to some particular social action I have in mind was the fertile ground that got language started.
  • The Unconscious
    The puzzle for me is that in talking in these terms you seem to be adopting the information-processing approach you criticised earlier when I mentioned students of Pylyshyn proposing to dissociate attention from consciousness.mcdoodle

    Well, I said I would reject that old-fashioned cogsci symbol-processing paradigm, and instead of information processing, I speak of sign processing.

    So instead of the 1970s conviction that disembodied, multirealisable, algorithms could "do consciousness", I am saying that actually we have to understand "processing" in a Peircean semiotic fashion as a pragmatic sign relation which seeks to control its world for some natural purpose. And this has become the reasonably widespread understanding within the field, with the decisive shift to neural network and Bayesian prediction architectures in neuroscience, and enactive or ecological approaches in psychology and philosophy of mind.

    So all the laboratory experiments carried out in the name of the search for attentional and automatic processes in the brain still stand. What has changed - for some of us - is the paradigm within which such data is interpreted.

    Attention and habit are characteristics we seem to share with many other animals; consciousness is something we don't seem to share with all that many of them (as I would say), if any (as some say).mcdoodle

    What makes human mentality distinctive is that it has an extra level of social semiosis because Homo sapiens evolved articulate, grammatical speech. Language encodes a new possibility of cultural engagement with the material world. And that is of course revolutionary. It gives us the habit of self-aware introspection and self-regulation. It gives us the "powers" of autobiographically structured recollection and generalised creative imagination.

    But apart from that, we are exactly as other animals. We share the same bio-semiotic level of awareness that comes from having a nervous system that can encode information neurally.

    So this - as I said - is another reason why "consciousness" is such a bad folk psychology word. It conflates biological semiosis and social semiosis in ways that really leave people confused. It makes self-consciousness seem like a biological level evolutionary development.

    I was thinking about placebo studies, which I read a lot about earlier in the year. However cunning our studies of placebos, we can't scientifically get beyond something irreducible about 'belief' and 'expectation'. The I-viewpoint is not, as yet at any rate, susceptible to an 'information-processing argument'.mcdoodle

    But it does make sense as a sign-processing argument. Straight away we can see that we don't have to search for the secret of those kinds of beliefs in bio-semiosis. They are instead the product of a linguistic cultural construct - social-semiosis.

    And that is all right. It is the same naturalistic process - sign-processing - happening in a new medium on a higher scale.
  • The Unconscious
    Thought and feeling can be pretty passive. That was my point: not that consciousness isn't involved in habit, intent, and action, but that those things aren't necessary.Mongrel

    So thought, feeling and consciousness generally can also be "pretty active"?

    In other words, you are making an irrelevant distinction given that one of my key points is that consciousness, or attention level processing, wants to be as little involved in the messy detail of responding to the world as possible.

    The brain's architecture is set up with this sharp division of labour that I describe - attention vs habit. It makes sense to learn to deal with the world in as much a rote, automatic, learnt, skilled fashion as possible. That in itself becomes a selective filter so that only whatever is by definition new, difficult, significant or surprising gets escalated to undergo the exact opposite style of processing. One that is creative, holistic, tentative, exploratory, deliberative.

    Note how talk of consciousness always comes back to the "thingness" of experience. It is classic Cartesian substance metaphysics. Consciousness is a something, a mental stuff, a mental realm. The unconscious is then another kind of stuff, another kind of realm. No surprise nothing feels explained by that kind of rhetoric.

    But my approach zeroes in on the very machinery of reasoning and understanding. We can see how a particular division of labour - a symmetry breaking - is rational. The question becomes what else could evolve as an optimal way to set up a modelling relation between a self and a world?
  • The Unconscious
    If the constraint is general rather than particular, we are right back at the level of general intentionality, which has the capacity to produce many different particular states of attentional focus.Metaphysician Undercover

    You are just muddling with words to prolong an argument. As is usual.

    Another way of putting it is that vague intentionality becomes crisp intentionality through attentional focusing.

    There you go. Another statement which you can muddle away at forever. :)
  • The Unconscious
    I can only repeat what I've already said.

    The brain is already an "intentional device". It is full of potential intentions at all times.

    Then what we call being conscious is centrally about focusing this general state of intentionality so that some concrete goal emerges to dominate the immediate future. This requires all contradictory intentions to be suppressed. Some particular attentional focus and state of intentionality emerges.

    Then this in turn becomes the general constraint that places limits on habit-level performance. Attention can't control rapid, smooth, highly learnt behaviour with latencies of milliseconds. And nor would that even make sense - as attention is there to be slow and deliberate, to break things apart rather than stick them together in unthinking complexes, to do the learning that masters novelty rather than the performing in which novelty is minimised.

    So the dichotomy of attention and habit is no accident. It is what logic demands as it dichotomises our response to the world in exactly the way that has to happen. It is an obviously reasonable division of labour.

    Let me run you through it again.

    General brain-level intentionality is the ground for attentionally-focused particular states of intention.

    Attentionally-focused intentionality is the generalised constraint on the freedom of learnt habits and automaticisms that arise to fill in the many particular sub-goals necessary for achieving that greater general goal.

    I don't have to notice what my feet and hands do when turning a corner in the car. If it's routine, the mid-brain/cerebellum fills in those blanks unthinkingly. I form no reportable working memory in the prefrontal cortex. What I experience phenomenally is what folk label "flow". Or smooth action with an "out of the body" sense of not having to be intentionally in charge.

    You can obsess about trying to make my right words wrong. But haven't you got better things to do?
  • The Unconscious
    So the word "cat" may be used to refer to a particular cat, or it may be used to refer to cats in general, but to confuse these two is category error, or equivocation.Metaphysician Undercover

    That's really great, MU. But you are the one barking about there being only the one possible use of "intent" here. I'm happy not to confuse them the way you keep doing.
  • The Unconscious
    And thought and feeling and planning and imagining aren't actions? Muscular action isn't both voluntary and involuntary?
  • The Unconscious
    A person's totally paralyzed by a neuromuscular blockade and they're conscious.Mongrel

    A person can be conscious without having any particular intentions.Mongrel

    Maybe you just don't realise how disjointed your thinking is? Two different points, and you ask whether I don't agree, as if you were still talking about the one thing - which still remains unexplained.

    Why should either present a difficulty in terms of the attention~habit conceptual framework of a neuroscientific account?

    Of course if there is a block between the central nervous system and the skeletal muscle system, then "conscious wishes" are thwarted. A runner with no legs can't run. Big deal.

    Likewise if attention doesn't focus your state of mind, it is unfocused. If you have no need to act, then you rest. And if you want to talk about intentionality as something very general, then rest and other forms of inaction are how organisms save energy and avoid risks.

    We could go on to talk about vigilance, creativity, the right brain's mode of attending. It's all standard fare within an attention~habit neuroscientific framework.

    But as I say, you don't seem to realise that your replies don't even stick to the point you were making an instant ago.
  • The Unconscious
    Wake me up if you want to engage in the substance of my posts, which have been about how the conceptual dichotomy of attention~habit makes neuroscientific sense of what folk talk about when they're feeling baffled by conscious and unconscious thought and action.
  • The Unconscious
    You were saying medicine is no folk craft. So that is why medicine would try to understand what goes on exactly in the mechanistic information-processing fashion that I originally said was the better way to even enter a conversation about the unconscious.

    If you want me to agree to my own point, well sheesh, just take it as read, dude.

    If you thought you were challenging anything I said, have a go at tidying up your posts.

    If you just want to express your usual hostility, big deal.
  • The Unconscious
    We can obviously resist what a culture wants us to believe.praxis

    Roll that rock, Sisyphus. :)
  • The Unconscious
    I asked how it was relevant to any position I've advanced. You can't explain. Oddly that is tiresome.
  • The Unconscious
    Where's the problem with one thing's general being another's particular?

    Put these various items in hierarchical order - cat, Fluffy, animal, Persian, mammal. It's not hard, is it?
  • The Unconscious
    Telling the truth now, eh? Get over yourself dude.
  • The Unconscious
    There's no hope because the general beliefs about the mind that get socially constructed are socially useful. You can't fight what culture wants you to believe as part of its own self-preserving mythology.

    Talk about consciousness is a way to fix individual humans within some social state of conception. If we think of ourselves as freely choosing souls or rational beings, separate from our gross animal physicality (or Freudian unconscious), then that is exactly the myth by which we will learn - get into the habit of - acting. If you think about the nature of consciousness in the conventional fashion, then society is assured you will behave within the scope of that conventional construct.
  • The Unconscious
    Or your conception of consciousness is demonstrably impotent.
  • The Unconscious
    But so what? If you think this somehow impacts on any position I've expressed, please explain why.
  • The Unconscious
    And does medicine treat that as spooky woo or does it search for the mechanistic explanation? Wouldn't you like the docs to be sure whether you happen to suffer curare muscular poisoning or a brain stem lesion?
  • The Unconscious
    Wow. This is news! Next you will be telling me you can think of things, but not do them.
  • The Unconscious
    Well my point was that consciousness is a confused folk psychology term. And that is why neuroscience tries to sharpen things by tying what we sort of mean in the standard socially constructed folk view to constructs, like attention and habit, which are defensible as the objects of laboratory research. When we talk about attention, there is an information processing argument to explain what that is and to identify it with actual brain architecture.

    That is why it is better, in my opinion.

    As to habit being often conscious, that just confirms the haziness of consciousness as an explanatory construct. There is a good reason why the word is barely used in neuroscience research. You might as well be talking about souls or res cogitans.

    Of course what gets done by habit can also be the subject of our attention and become reportable - fixed in working memory and contemplated as something that just happened. Yet that demonstrates the essential dissociation. We act fast and automatically as that is efficient when we know what we are doing. And then "consciousness" or attentional level reportability comes that split second after the fact. We can introspect and form a memory of that automatic action we just performed.

    And as I have also explained, the actually important relation between attention and habit is that attention produces some general state of intentionality ahead of every moment of action. So it creates some general state of mental constraint - I want to get around this next corner in my car to reach my destination - and then all my well learnt driving habits can slot into place in automatic fashion.

    I don't need to have reportable awareness of exactly what to do to coordinate my hand on the gear shift, my foot on the clutch. Fuck the detail, let it take care of itself. Achieving the goal, getting the next step to where I'm going, is what I need to focus on.

    So the folk psychology term of consciousness has huge problems once you try to apply it in science. It confounds biology and sociology in believing things like introspection to be a biological function rather than a linguistically structured skill. It makes the big mistake of thinking awareness to be a running realtime representation of reality rather than having this complex internal temporal structure. It makes a big mistake in creating this homuncular self that is then witnessing the representation.

    So consciousness - and all its crew: unconscious, non-conscious, subconscious, preconscious, semi-conscious - is a very familiar social construct that just ought to be junked so we can start over again on a better metaphysical and scientific basis.

    But no hope of that of course.
  • van Inwagen's expanded free will defense, also more generally, The Problem of Evil
    For crying out loud, I was addressing a specific point - "van Inwagen suggests that an explanation for why human-oriented horrors exist is because there is no "cut-off" line to be drawn that isn't arbitrary."

    My reply was that your intuitions might make more sense if you applied the right statistical model.

    We are used to thinking of statistical systems that are in fact bounded to create cut-off lines. We can adjust the parameters - control the degrees of freedom - so the system arrives at some mean equilibrium state. Fluctuations exist, but they are confined in Gaussian fashion to an actual averageness.

    However this would conflict with a God that has a reconciliation plan. God basically wants to set the system up with a bunch of humans who enjoy complete and unbounded freewill. Good luck with that. But anyway, it then becomes inconsistent to start poking your fingers into this creation to cancel out the extreme horrors that will occur ... just by ordinary statistical variation.

    If God wanted only a Gaussian level of nastiness, he should have added a governor device to the boundless freedom of the human imagination. But that would be a different story. Far more contrived, far less grand and universal.
  • van Inwagen's expanded free will defense, also more generally, The Problem of Evil
    As you say, van Inwagen put those outside the bounds of his argument. So I'm not sure why you want to change the goal-posts.

    However the same argument does apply to nature as a whole if it is meant to be a free system. If you regard meteorites as a horror, you should expect them to come raining down over all scales. Planet-crushers may be rare, but - barring particular Godly intervention - their size lacks an upper bound.
  • van Inwagen's expanded free will defense, also more generally, The Problem of Evil
    But van Inwagen is already accepting that God wants there to be freewill at that point. That must be some ultimate good. And so the price you pay for that is having humans making bad or mad choices.

    Van Inwagen says:

    If God simply “canceled” all the horrors of this world by an endless series of miracles, he would thereby frustrate his own plan of reconciliation.

    So once you accept this general plan of reconciliation, then the question of statistical means and expectable degrees of variation comes into play.

    The issue is what buttons God leaves himself to fiddle with to put some upper bound on horrors. Well, either he defeats himself and does away with freewill and its unbounded growth, or he has to live with the fact that such a system will deliver horrors over all scales of human possibility.
  • van Inwagen's expanded free will defense, also more generally, The Problem of Evil
    I'm not entirely sure why van Inwagen thinks such a minimum line does not exist.darthbarracuda

    I would say van Inwagen could be justified like this.

    Our initial intuition is that a line could be drawn accurately because there is some average degree of horror that would be consistent with God's grand plan of eventual reconciliation. Like a school room, you can tolerate a certain average degree of naughtiness, but then a clear line can be drawn that will have only the usual bell curve, or Gaussian, statistical error. There will be a line with a bit of noise, a bit of fuzz, yet it is constrained narrowly enough so there is as little variation as possible.

    That is how we normally think about the statistics of systems ruled by some global constraint - i.e. God as an expression of the central limit theorem in regard to the horrors of existence.

    But also there is a separate and more fundamental pattern of nature - the powerlaw distribution. In a system that is characterised by free growth in contrasting directions, you get instead an outcome that has no actual mean. When accidents or errors happen, they happen evenly over all possible scales.

    This is familiar from anything fractal or scalefree. There is no biggest or smallest fluctuation any more. Instead the only thing constant is the amount of power being expressed at every possible scale of being. So with a wave, instead of some comfortable average height, you get a ton more very small waves than you might expect, and also there seems no limit to how giant the occasional freak wave becomes. Or in human economic terms, there are billions of people living on $2 a day, yet also a few billionaires like Bill Gates whose income beats small nations. It is just a different statistical pattern for reasons that are easy enough to understand.

    So applied to van Inwagen, if we look at humanity as this kind of self-organising growth story - two rival tendencies in interaction - then we can still get some kind of system minimum average in the weight of horror being created at every scale. But the scale itself has no top or bottom. It must be the case that you get a whole lot of surprisingly trivial stuff - all those paper cuts and net flames - and then no upper limit on the completely off the chart rape-mutilations. The upper bound horrors still are constrained - there is only enough system energy for the occasional horror to be delivered at that top end scale. Yet still, such horrors are to be expected - without that being a problem to the general claim that God's will is in effect.

    So if humanity is imagined as a static situation - no growth - then it ought to conform to a Gaussian minimum of horror. But to the degree that humanity is an open system, freely growing, then it ought to conform to powerlaw statistics in all things.

    And of course, saintliness should show the same scalefree behaviour too. It would have its Bill Gates equivalents simply by the vagaries of chance.
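
    To make the contrast concrete, here is a minimal numerical sketch of the two statistical pictures I am describing - bounded Gaussian noise versus a scale-free, fat-tailed system. The distributions and every parameter are purely illustrative assumptions on my part, not anything van Inwagen himself commits to.

    ```python
    import numpy as np

    # Illustrative sketch only: contrast a bounded Gaussian "degree of horror"
    # with a scale-free, power-law one. All parameters are arbitrary choices.
    rng = np.random.default_rng(0)
    n = 100_000

    # Gaussian world: fluctuations stay confined around the mean.
    gaussian = rng.normal(loc=10.0, scale=2.0, size=n)

    # Power-law world: a classic Pareto tail built from numpy's Lomax sampler
    # (shape a = 1.5 gives a finite mean but infinite variance).
    powerlaw = (rng.pareto(a=1.5, size=n) + 1.0) * 10.0

    for name, sample in (("Gaussian", gaussian), ("Power law", powerlaw)):
        print(f"{name:10s} median={np.median(sample):8.1f}  "
              f"99.9th pct={np.percentile(sample, 99.9):10.1f}  "
              f"max={sample.max():12.1f}")

    # Typical output: the Gaussian maximum sits a few standard deviations above
    # its median, while the power-law maximum is orders of magnitude larger.
    # Extremes arrive over all scales, with no comfortable upper bound.
    ```

    Run it with different seeds and the Gaussian column barely moves, while the power-law maximum jumps around wildly - which is just the statistical signature of a freely growing system delivering its extremes over all scales.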
  • Confidence, evidence, and heaps
    There's still something odd about that zone in the middle. Any thoughts?Srap Tasmaner

    Did you mention that exactly halfway is where the rate of increase peaks, so it is also exactly where that rate first starts to fall?

    The middle section of a logistic function only looks odd in the sense that we can't really see much going on in terms of big change, but a big causal-level change is happening. Naked growth is giving way to constrained growth.

    And while models of growth within limits might make you think that one more, or one less, can't be a big deal, a more accurate modelling of a sand pile would probably be one that includes system correlations (the drag of global limits) from the start. That is the kind of non-linearity that explains phase changes, like where water turns to ice once some threshold between molecular thermal jitter and inter-molecular electrostatic forces is breached.

    With the Sorites sand pile, we are basically asking when the emergent global property of "acting like a pile" appears. And that requires some dynamical definition. A pile should be some heap that has some characteristic global cohesion. Shaking it about or adding more stuff shouldn't change its basic mathematical form. It would still look the same in whatever critical way you think defines a pile.

    So adding or subtracting a grain of sand to a box of sand doesn't really make any difference until the box is actually full. And that is what your intuition tells you when viewing a logistic growth function because you are not thinking about any critical shift in the rate you are able to add or subtract those grains.

    But if you are instead thinking about a sand pile as having globally cohesive behaviour that emerges once there is a sufficient weight of inter-grain correlations to outweigh a matching weight of individual sand grain freedom, then single grains do start to make a clear difference at the critical threshold of any such phase change. There comes a point where the whole first started to be greater than its parts.
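
    For anyone who wants the midpoint claim spelt out numerically, here is a minimal sketch of a logistic curve and its growth rate. The carrying capacity K and rate r are just illustrative values I have assumed.

    ```python
    import numpy as np

    # Logistic growth: x(t) = K / (1 + exp(-r * t)), with the midpoint at t = 0.
    K, r = 1.0, 1.0              # illustrative carrying capacity and growth rate
    t = np.linspace(-8.0, 8.0, 1601)
    x = K / (1.0 + np.exp(-r * t))
    dx = r * x * (1.0 - x / K)   # growth rate dx/dt of the logistic equation

    i = int(np.argmax(dx))
    print(f"growth rate peaks at t = {t[i]:.2f}, where x = {x[i]:.2f} (= K/2)")
    # Beyond this point the curve keeps rising, but ever more slowly:
    # naked growth giving way to constrained growth.
    ```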
  • The Unconscious
    Actually, you said that intentionality is formed by "attentional" focus.Metaphysician Undercover

    Actually I said a state of intentionality.

    And when I was talking about a generalised intent, I was explicit about that meaning a general constraint in regard to the particular actions to be supplied by rapid habit level machinery. The point being that intentionality of course cashes out as finality, and hence the causality of constraints.

    You lost me with your claim that attention and consciousness, habit and non-conscious, cannot be related. Just too contrary.
  • Cosmological Arg.: Infinite Causal Chain Impossible
    The 'inside vs outside' or 'immanent vs transcendent' division doesn't have to be understood in terms of a sky-father, indeed in Platonist philosophy it generally was not understood in those terms at all.Wayfarer

    Yes, Platonism, or better yet Aristotelianism, is more sophisticated. Instead of just being anthropomorphic, the larger cause of being is assigned to top-down formal and final cause. Or what in the systems approach we would call constraints. Forms and purposes place limits on the scope of accidents.

    But still the transcendent vs immanent distinction remains central. Platonism wanted to place the forms in a transcendent realm of ideas. Aristotle was striving after a more immanent naturalism. Modern holism would talk about form and telos - as the globalised constraints - being what evolve and so emerge to regulate their worlds in determinate fashion. Law grows as its shape is already logically necessary.

    So Greek metaphysics was largely organic and immanent in spirit. The early dudes spoke about logos and flux, peras and apeiron - or regulating constraints and chaotic degrees of freedom. Being became determinate by potentiality becoming self-restricting or shaped by a common trajectory.

    But then Plato stood apart in asserting that the forms of nature did not emerge in time, rather they stood apart as eternal. And somehow from there - Platonia - they managed to shape the Chora, the materiality that was somehow the receptacle or whatever could take such an impression.

    That doesn't make sense. Although it does start to make more sense once you start talking about emergent structure mathematically - as symmetry-breaking maths does. So that does cash out Platonic form in a self-organising way. Once you have constrained dimensionality to just three spatial dimensions, there are only a limited set of completely regular polyhedra or Platonic solids. It is just a timeless inevitability that cubes, tetrahedrons, etc, will emerge given a temporal process which limits the dimensionality of chaotic being to flat 3D space.
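
    To spell out that "limited set" claim, here is the standard vertex-angle argument, sketched in my own notation: for a convex regular polyhedron with p-sided faces and q faces meeting at each vertex, the face angles around a vertex must sum to less than a full turn.

    ```latex
    % p = sides per face, q = faces meeting at a vertex, both at least 3
    q \cdot \frac{(p-2)\,180^{\circ}}{p} < 360^{\circ}
    \quad\Longleftrightarrow\quad
    (p-2)(q-2) < 4
    ```

    The only integer solutions with p and q both at least 3 are (3,3), (4,3), (3,4), (5,3) and (3,5) - the tetrahedron, cube, octahedron, dodecahedron and icosahedron. Flat 3D space plus complete regularity leaves exactly those five forms.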

    So we can work our way back to Plato. But only by showing how forms are emergent and therefore immanent rather than transcendent. They come after the fact as an actuality, even if they were already present latently at the beginning as an as yet unexpressed potentiality.
  • The Unconscious
    So it should be clear that it was you making the category error, not myself. You talked about how "intentionality", and "a generalised intent" forms from attention, but when I took exception to this, you insisted you were talking about particular intentions.Metaphysician Undercover

    You are still not getting it. I said the process of attending leads to a particular state of intention. So it brings intentionality - our general long-run state of orientation to the world - into some particular focused state. And then in doing that, the particular attentional/intentional state should be understood not as something already fleshed out and action specific, but instead as a fixing of limits, a production of a state of generalised constraint on action.

    From that generalised constraint on action, a habit level of performance can take over. Strict bounds have been set that allow the lower brain to fire off automatically and unthinkingly. Permission to fire has been granted the frontline troops. Attention then gets reserved for monitoring performance in terms of being there to pick up errors, problems, significance, or whatever else might prompt the need for a re-focusing of the prevailing state of intent.

    I appreciate this is a dynamical and complex tale. But that is how it is. The general and the particular are always going together as this is a hierarchical systems view of causality.

    Attentional level thought and intention forming is there to deal with time horizons of seconds to minutes. Habit level emitted responses are there to deal with action by the split second. So attention creates the mindset. Habit takes that as its context and does its rapid-fire thing. Attention then kicks back in to refocus as much as seems necessary when habit generates an alert that it is either faced by the unexpected or has come upon something already flagged as important.

    All of that flying along and making sense of the world from a self-centred point of view is what we would call intentionality. It is not a function of the brain but a characteristic of life and mind.

    However we can talk also of intentions - some focused mindset that exists at some point of time. That would be intentionality particularised.

    No I don't see any problem here. It is quite clear that intention develops from the more general toward the more particular. I'm hungry, I intend to eat. I look in the fridge and see some ground beef, so I intend to eat hamburger. I decide to turn on the BBQ and intend to eat grilled hamburgers. Intention is always there, whether it's in the more general, or more particular form.Metaphysician Undercover

    But you yourself said you had to notice that you were hungry. So attending to a feeling was a first step. And from there flowed an action plan, an intention to actually do some particular thing. Choices can only form following attention. Although faced with the same situation often enough, those choices do become habits. I know it's confusing.

    I take it we are in agreement then. It is incorrect to say that intentionality, or generalized intent is formed from attention. It is correct to say that things like attention and habit are formed with intention. So when I find you speaking in this incorrect way in the future, you should not object when I correct you.Metaphysician Undercover

    No, I'm not yet getting that you understand a word I say.
  • Cosmological Arg.: Infinite Causal Chain Impossible
    No the anti-thesis is that causes are necessarily contingent, only probabilities, contingent events that could have always been otherwise, that's all that is available to us.Cavacava

    Yes. Naturalism would oppose supernaturalism as immanence vs transcendence. So the first cause or prime mover would have to be understood as a self-organising tendency arising "within", instead of some externally imposed agency.

    Thus it makes more sense to talk of a first accident rather than a first cause. Or in modern technical parlance, a first fluctuation that spontaneously broke a symmetry. An accident clicked and turned out to be the first step in a chain of events - like whatever random thing happened to tip a first domino and send the rest rattling flat.

    This still leaves the question of creation rather unsolved. But it is a better place to start. Instead of needing the overkill of an all-powerful supernatural agency - a big daddy in the sky - it says the first cause was the very least of all things, a random fluctuation. Zero agency, zero identity. Any slight push of any kind could have done the trick because ... "things were poised".

    Maybe the cat's tail brushed the waiting dominos. Maybe it was a puff of breeze or the rumble of traffic. The point is that it never mattered and is antithetical in being non-agential - merely the kind of accident that was inevitably going to happen.